About the Venue


Don’t miss: Prado Museum, Reina Sofía National Art Centre, Thyssen-Bornemisza Museum, Royal Palace, Plaza Mayor Square, Alcalá Gate, City Hall and Cisneros House, Retiro Park, Santa Ana Square.
Madrid, the capital of Spain, is a cosmopolitan city that combines modern infrastructure and its status as an economic, financial, administrative and service centre with a rich cultural and artistic heritage, the legacy of centuries of exciting history.

Strategically located in the geographic centre of the Iberian Peninsula, Madrid has one of the most important historic centres of all the great European cities. This heritage merges seamlessly with the city’s modern and convenient infrastructure, a wide-ranging offer of accommodation and services, and the latest state-of-the-art audiovisual and communications technologies. These conditions, together with the drive of a dynamic and open society that is also high-spirited and friendly, have made this metropolis one of the great capitals of the Western world.

The historic centre, also known as the “Madrid of Los Austrias”, and the spectacular Plaza Mayor square are living examples of the city’s nascent splendour in the 16th and 17th centuries.

Near the Plaza Mayor is the area known as the “aristocratic centre”, whose jewel in the crown is the Royal Palace, an imposing building dating from the 17th century that mixes Baroque and classicist styles. Beside it are the Plaza de Oriente square, the Teatro Real opera house, and the modern cathedral of La Almudena, consecrated in 1993 by Pope John Paul II. The Puerta del Sol square, surrounded by a varied and select area of shops and businesses, and the “Paseo del Arte” art route, whose name derives from its world-class museums, palaces and gardens, are further elements in an array of monuments that notably includes the Bank of Spain building, the Palace of Telecommunications, and the fountains of Cibeles and Neptune.

The importance of its international airport, which every week receives over 1,000 flights from all over the world, its two Conference Centres, the modern trade fair ground in the Campo de las Naciones, and over 80,000 places in other meeting centres make Madrid one of Europe’s most attractive business hubs.

Art and Culture
Art and culture play a key role in the life of the capital, which has over 60 museums covering every field of human knowledge. Highlights include the Prado Museum, the Thyssen-Bornemisza Museum and the Reina Sofía National Art Centre.

Madrid’s lively nightlife
But if there’s one thing that sets Madrid apart, it must be its deep and infectious passion for life that finds its outlet in the friendly and open character of its inhabitants.

Concerts, exhibitions, ballets, a select theatrical offering, the latest film releases, the opportunity to enjoy a wide range of the best Spanish and international gastronomy, to savour the charms of its bars and taverns… all these are just a few of the leisure options on offer in Madrid.

There is also a tempting array of shops and businesses featuring both traditional establishments and leading stores offering top brands and international labels. Madrid’s lively nightlife is another key attraction of Spain’s capital.

The Madrid Municipal Conference Centre’s design affords it great versatility, together with the technology and services needed to meet the market’s demands, particularly for the organisation of conferences, meetings and product presentations. It offers over 30,000 m² of usable space, divided into large exhibition areas, two auditoriums seating 1,814 and 913 people respectively, a 2,200 m² multipurpose hall, and twenty-eight rooms of various sizes with capacities ranging from 26 to 360 people. There is also a 600-space public car park.

PLENARY SESSIONS


Monday, September 5th
Opening Session: Welcome from the General Chair

Keynote speaker: Graeme M. Clark, Bionic Ear Institute, Melbourne, Australia (ISCA Medalist)
Title: The Multiple-Channel Cochlear Implant: Interfacing Electronic Technology to Human Consciousness

Fundamental research on electrical stimulation of the auditory pathways resulted in the multiple-channel cochlear implant, a device which provides understanding of speech to severely-to-profoundly deaf people. The device, a miniaturized receiver-stimulator with multiple electrodes, fed with power and speech data through two separate aerials, was first implanted in a patient in 1978 as a prototype and has been produced commercially by Cochlear Limited, Australia, since 1982. Speech processing is based on the discovery that the sensation at each electrode is vowel-like. Initially, the second formant was coded as a place of stimulation, the sound pressure as a current level, and the voicing frequency as a pulse rate. Further research showed progressively better open-set word and sentence scores for the extraction of the first formant in addition to the second formant (the F0/F1/F2 processor), the addition of high fixed filter outputs (MULTIPEAK), and finally 6 to 8 maximal filter outputs at low rates (SPEAK) and high rates (ACE). All the frequencies were coded on a place basis. World trials completed for the US FDA on late-deafened adults in 1985, and on children from two to 17 years in 1990, proved that a 22-channel cochlear implant was safe and effective in enabling them to understand speech both with and without lip-reading.
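As a rough illustration of the place-coding idea described above (second formant to electrode position, sound pressure to current level, voicing frequency to pulse rate), the sketch below maps one analysis frame to a stimulus command for a hypothetical 22-electrode array. The constants, ranges and function names are illustrative assumptions, not Cochlear Limited’s actual processing scheme.

    # Illustrative sketch (not the actual Cochlear Ltd algorithm): place-code the
    # second formant onto one of 22 electrodes, loudness onto current level, and
    # voicing frequency onto pulse rate. All ranges are hypothetical round numbers.
    import math

    NUM_ELECTRODES = 22                      # 22-channel implant, as in the abstract
    F2_MIN_HZ, F2_MAX_HZ = 800.0, 3200.0     # assumed second-formant range
    MIN_CURRENT, MAX_CURRENT = 0.0, 1.0      # normalised stimulation current

    def f2_to_electrode(f2_hz):
        """Map a second-formant frequency to an electrode index (place coding).

        A log scale mimics the cochlea's roughly logarithmic frequency-to-place map.
        """
        f2 = min(max(f2_hz, F2_MIN_HZ), F2_MAX_HZ)
        frac = math.log(f2 / F2_MIN_HZ) / math.log(F2_MAX_HZ / F2_MIN_HZ)
        return int(round(frac * (NUM_ELECTRODES - 1)))

    def level_to_current(sound_level):
        """Map a 0..1 sound-pressure estimate to a stimulation current level."""
        return MIN_CURRENT + min(max(sound_level, 0.0), 1.0) * (MAX_CURRENT - MIN_CURRENT)

    def frame_to_pulse(f2_hz, sound_level, f0_hz):
        """One analysis frame -> (electrode, current, pulse rate) stimulus command."""
        return {
            "electrode": f2_to_electrode(f2_hz),
            "current": level_to_current(sound_level),
            "pulse_rate_hz": f0_hz,          # voicing frequency coded as pulse rate
        }

    if __name__ == "__main__":
        # An /i/-like frame: high F2, moderate level, 120 Hz voicing
        print(frame_to_pulse(f2_hz=2200.0, sound_level=0.6, f0_hz=120.0))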

Graeme M. Clark was born and raised in Australia. He received the Bachelor of Medicine (MB) and Bachelor of Surgery (BS) degrees in 1957, the Master of Surgery (MS) in 1968 and the Doctor of Philosophy (PhD) in 1969, all from the University of Sydney, Australia. In 1970 he became the foundation Professor of Otolaryngology at the University of Melbourne, and he retired in 2004 to become full-time Director of the Bionic Ear Institute, which he established in 1984. After commencing research on electrical stimulation of the auditory pathways in 1967, Graeme Clark systematically initiated and led the fundamental research resulting in the multiple-channel cochlear implant. It is the first major advance in restoring speech perception to tens of thousands of severely-to-profoundly deaf people worldwide and has given spoken language to children born deaf or deafened early in life. Thus, it is the first clinically effective and safe interface between electronic technology and human consciousness. In addition, Clark has played a key role in the development of the Automatic Brainwave Audiometer, the first method for objective, accurate measurement of hearing thresholds at low and high frequencies in infants and young children, and the Tickle Talker, a device enabling deaf children to understand speech through electro-tactile stimulation of the nerves of the fingers. Professor Clark holds honorary doctorates of Medicine (Hon. MD), Law (Hon. LLD), Engineering (Hon. DEng), and Science (Hon. DSc) from Australian and international universities. He has also been made a Fellow of the Australian Academy of Science, a Fellow of the Royal Society of London, and an Honorary Fellow of the Royal Society of Medicine, the Royal College of Surgeons of England, and the Australian Acoustic Society. In 2004 he received the Australian Prime Minister’s Prize for Science, Australia’s pre-eminent award in science and technology, and was made a Companion of the Order of Australia, the country’s highest civil honour. In 2005 he received the Award of Excellence in Surgery from the Royal Australasian College of Surgeons, the A. Charles Holland Foundation International Prize in Audiology and Otology, and the Royal College of Surgeons of Edinburgh Medal at the College Quincentenary celebrations.

Tuesday, September 6th
Keynote speaker: Fernando Pereira, University of Pennsylvania, USA
Title: Linear Models for Structure Prediction

Over the last few years, several groups have been developing models and algorithms for learning to predict the structure of complex data, sequences in particular, that extend well-known linear classification models and algorithms, such as logistic regression, the perceptron algorithm, and support vector machines. These methods combine the advantages of discriminative learning with those of probabilistic generative models like HMMs and probabilistic context-free grammars. I will introduce linear models for structure prediction and their simplest learning algorithms, and exemplify their benefits with applications to text and speech processing, including information extraction, parsing, and language modeling.
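As a concrete, if simplified, illustration of the kind of model the talk describes, the sketch below trains a first-order linear sequence model with the structured perceptron and decodes with Viterbi search. The tag set, feature templates and toy data are assumptions made up for the example, not the speaker’s own formulation.

    # Minimal sketch: a linear model for sequence labeling, trained with the
    # structured (Collins-style) perceptron and decoded with Viterbi search.
    from collections import defaultdict
    from itertools import product

    LABELS = ["O", "NAME"]   # hypothetical tag set for a toy extraction task

    def features(words, i, prev_label, label):
        """Local feature function phi(x, i, y_{i-1}, y_i), returned as string keys."""
        return [
            f"emit:{words[i].lower()}|{label}",
            f"trans:{prev_label}->{label}",
            f"shape:{'Xx' if words[i][0].isupper() else 'x'}|{label}",
        ]

    def viterbi(words, weights):
        """Best label sequence under the current linear model: argmax_y w . phi(x, y)."""
        prev_scores = {"<s>": 0.0}
        back = []
        for i in range(len(words)):
            cur_scores, cur_back = {}, {}
            for label, prev in product(LABELS, prev_scores):
                s = prev_scores[prev] + sum(weights[f] for f in features(words, i, prev, label))
                if label not in cur_scores or s > cur_scores[label]:
                    cur_scores[label], cur_back[label] = s, prev
            back.append(cur_back)
            prev_scores = cur_scores
        best = max(prev_scores, key=prev_scores.get)   # best final label
        path = [best]
        for i in range(len(words) - 1, 0, -1):         # follow back-pointers
            path.append(back[i][path[-1]])
        return list(reversed(path))

    def train(data, epochs=5):
        """Structured perceptron: add gold-path features, subtract predicted-path features."""
        weights = defaultdict(float)
        for _ in range(epochs):
            for words, gold in data:
                guess = viterbi(words, weights)
                if guess != gold:
                    prev_g = prev_p = "<s>"
                    for i in range(len(words)):
                        for f in features(words, i, prev_g, gold[i]):
                            weights[f] += 1.0
                        for f in features(words, i, prev_p, guess[i]):
                            weights[f] -= 1.0
                        prev_g, prev_p = gold[i], guess[i]
        return weights

    if __name__ == "__main__":
        data = [(["the", "Prado", "museum"], ["O", "NAME", "O"]),
                (["visit", "Lisbon", "today"], ["O", "NAME", "O"])]
        w = train(data)
        print(viterbi(["see", "Madrid"], w))   # expected: ['O', 'NAME']

The same linear scoring framework extends to other structures mentioned in the abstract (parses, lattices) by swapping the decoder, which is the main appeal of these methods.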

Fernando C. N. Pereira is the Andrew and Debra Rachleff Professor and chair of the Department of Computer and Information Science, University of Pennsylvania. He received a Ph.D. in Artificial Intelligence from the University of Edinburgh in 1982. Before joining Penn, he held industrial research and management positions at SRI International, at AT&T Labs, where he led the machine learning and information retrieval research department from September 1995 to April 2000, and at WhizBang Labs, a Web information extraction company. His main research interests are in machine-learnable models of language and other natural sequential data such as biological sequences. His contributions to finite-state models for speech and text processing are in everyday industrial use. He has 80 research publications on computational linguistics, speech recognition, machine learning, bioinformatics, and logic programming, and several issued and pending patents on speech recognition, language processing, and human-computer interfaces. He was elected Fellow of the American Association for Artificial Intelligence in 1991 for his contributions to computational linguistics and logic programming, and he is a past president of the Association for Computational Linguistics.

Wednesday, September 7th
Keynote Speaker: Elizabeth Shriberg, SRI and ICSI, USA
Title: Spontaneous Speech: How People Really Talk, and Why Engineers Should Care

Most of the speech we produce and comprehend each day is spontaneous. This “speech in the wild” requires no special training, is remarkably efficient, imposes minimal cognitive load, and carries a wealth of information at multiple levels. Spontaneous speech differs, however, from the types of speech for which spoken language technology is often developed. This talk will illustrate some interesting, important and even amusing characteristics of spontaneous speech, including disfluencies, dialog phenomena, turn-taking patterns, emotion, and speaker differences. The talk will overview research in these areas, outline current challenges, and hopefully convince the more technology-minded members of the audience that modeling these aspects of our everyday speech has much to offer for spoken language applications.

Elizabeth Shriberg is a Senior Researcher in the speech groups at both SRI International and the International Computer Science Institute. She received a Ph.D. in Cognitive Psychology from U.C. Berkeley (1994) and was an NSF-NATO postdoc at IPO (the Netherlands, 1995). Her main interest is spontaneous speech. Her work aims to combine linguistic knowledge with corpora and techniques from speech and speaker recognition, to advance both scientific understanding and recognition technology. Over the last decade she has led projects on modeling disfluencies, punctuation, dialog, emotion, and speakers, using lexical and prosodic features. She has published over 100 journal and conference papers in speech science, speech technology, and related fields. She serves as an Associate Editor of Language and Speech, on the boards of Speech Communication and other journals, on the ISCA Advisory Council, and on the ICSLP Permanent Council.

Thursday, September 8th
Panel: Ubiquitous Speech Processing
Chair: Roger K. Moore, University of Sheffield, UK

Panel Members:
Alex Acero, Jordan Cohen, Paul Dalsgaard, and Sadaoki Furui

Recent years have seen significant advances in the capabilities of practical speech technology systems. A growing number of ordinary people have either used dictation software to create documents on their own PC, spoken to Interactive Voice Response (IVR) systems or used voice-dialling on their mobile phone. Speech technology applications are certainly becoming more commonplace, but there is arguably some way to go before the technology could be called ubiquitous. For example, speech-based interaction is not commonly used in the home, at work, at school or on holiday.
In his book The Age of Spiritual Machines (Phoenix Press), Ray Kurzweil predicts that language user interfaces will be ubiquitous by 2009. He also says that the majority of text will be created using continuous speech recognition, there will be listening machines for the deaf and that translating telephones will commonly be used for many language pairs.
A panel of experts drawn from academia and industry will discuss these issues and will address the core question: “Will speech technology become truly ubiquitous and, if so, what applications will there be and when will it happen?”

Thursday, September 8th: Closing Session
FADO Explained and Performed
In this show, Teresa Machado and Daniel Gouveia share with us singing and instrumental examples of Fado, the urban song typical of Lisbon, Portugal: its different styles and features, ways of singing, poetic themes and historical evolution.
Informally and in dialogue with the audience, they sing lyrics that have been previously translated. The historical evolution of the Fado is briefly presented, the differences between Traditional and Modern Fado are explained, the importance of improvisation is pointed out, the original Fados (Menor, Mouraria, Corrido) are sung, among many others. They stress the importance of the Portuguese Guitar by showing the instrument and demonstrating its technique through playing an instrumental virtuoso piece.

GUIDELINES FOR PAPER SUBMISSION


Submission Guidelines
A paper for INTERSPEECH 2005 must be submitted as a full and final paper. Consult the Paper Author’s Kit for detailed information on the format of the papers. The deadline for paper submission is April 14, 2005. Please follow the Paper Submission link to perform the actual paper submission. Please note that all papers for INTERSPEECH 2005 must be submitted electronically through this link.

Those who have problems using the fully Web-based submission procedure should contact the organizers as soon as possible. Submission of a paper implies that at least one of the authors agrees to register and present the paper at the conference. Only papers by authors registered by June 30, 2005, will be included in the conference. A participant may present more than one paper. No paper, however, can be presented by a person who is not one of the authors of the paper.

The official language of INTERSPEECH 2005 is English.

Paper Status

After submission, each paper will be given a unique Paper ID. This will be shown on the confirmation page right after the submission of the paper, and a confirmation email including the Paper ID will be sent to each co-author as well. The submitted paper information (names, affiliations, etc.) can be checked and, if necessary, corrected at the Paper Status page, using the Paper ID and email address as ‘login’ and ‘password’. An updated paper file can also be uploaded, for example if errors were found or new results became available. Corrections and uploads can be made until the submission deadline (April 14, 2005).

Paper Acceptance/Rejection Notification

Each corresponding author will be notified by e-mail of the acceptance/rejection of the paper. Reviewer feedback will also be available for each paper.

TECHNICAL PROGRAM
The conference schedule includes the full list of regular and special sessions, plenary sessions and tutorials. By selecting “Conference Schedule” you can browse all sessions. By clicking on a session, you will see the complete list of papers included in that session. When browsing session pages, you can check all paper details, including session location, time and abstract, by clicking on the paper title. Clicking on an author’s name lists all papers by that author.

You may also use the Paper Search and My Schedule options in the left menu to look for papers by title and/or author names and to develop a custom schedule for selected papers.
TUTORIALS
Tutorials for Interspeech’2005 will be held on September 4th. These tutorials are organized by internationally recognized experts in their fields. The idea behind the tutorials is to provide an overview of each topic and to highlight recent developments in spoken language processing for students, engineers and scientists alike. It is possible to register for tutorials without registering for the conference.

SPECIAL SESSIONS
Submission to all special sessions was done using the same format and deadlines as for regular papers. For further information concerning special sessions, please contact the organizers. The Organizing Committee received 27 proposals for special sessions. Of these, 9 were selected as oral sessions. Due to the large number of accepted papers in some of these sessions, some papers will be presented as posters. One proposal (no. 10) was selected as a poster session followed by a panel. The last two proposals were selected as panels.

Emotional Speech Analysis and Synthesis: Towards a Multimodal Approach
Organizers: Ellen Douglas-Cowie and Roddy Cowie

E-inclusion and Spoken Language Processing
Organizers: Paul Dalsgaard and Roger Moore

The Blizzard Challenge 2005: Evaluating Corpus-based Speech Synthesis on Common Databases
Organizers: Alan W. Black and Keiichi Tokuda

Gender and Age Issues in Speech and Language Research
Organizers: Els den Os, Lori Lamel, and Martin Russell

Early Language Acquisition: Infant Studies, Animal Models and Theories
Organizer: Francisco Lacerda

Rapid Development of Spoken Dialogue System: Data Driven, Knowledge-based and Hybrid Methods
Organizers: Giuseppe Di Fabbrizio, Junlan Feng, Juan M. Huerta, Roberto Pieraccini, Manny Rayner, and Ye-Yi Wang

Human factors, User Experience and Natural Language Application Design
Organizers: Liz Alba, M. Gabriela Alvarez-Ryan, Fang Chen, and Kristiina Jokinen

Speech Recognition in Ubiquitous Networking and Context-Aware Computing
Organizers: Zheng-Hua Tan, Paul Dalsgaard and Børge Lindberg

Speech Inversion
Organizer: Victor Sorokin

Bridging the Gap Between Human and Automatic Speech Processing
Organizers: Katrin Kirchoff and Louis ten Bosch

Panel: History of Speech Technology
Moderators: Janet M. Baker and Patri J. Pugliese

Panel: Towards a SIG on Iberian Languages
Moderator: Nestor Becerra Yoma

PRESENTATION GUIDELINES


For the presentation of your paper at Interspeech’2005, please note the following instructions. Thank you!

Oral presentations

The time slot for an oral presentation is 20 minutes, including discussion. Your talk should therefore not exceed 15 minutes; otherwise your session chair will alert you.

Each room for oral presentations will be equipped with a PC running Windows XP. The PCs will have a DVD drive, and Microsoft PowerPoint 2003 and Acrobat Reader 7.0 will be available. Note that you will not be allowed to use your own PC! You have to bring your presentation on a CD (CD-R, ISO 9660) or on a USB pen drive (MS-DOS formatted). If you have doubts about whether your presentation will display properly on the available equipment, please check it as soon as possible in the preparation room. In any case, contact your session chairperson 20 minutes before the start of your oral session in the corresponding room.

In case your presentation needs any special fonts, e.g. for phonetic symbols or for languages such as Arabic, Chinese, Japanese, Korean, etc., we recommend generating a PDF document. Note that this may discard some facilities of PowerPoint documents. If you really need to use such facilities, you can generate a self-contained version of your presentation using the “Pack and Go Wizard” in the PowerPoint File menu. Make sure to activate the options “Include linked files” and “Embed TrueType fonts”. You do not need to include the “Viewer for Windows”, however. The two resulting files are “pngsetup.exe” and “pres0.ppz”. Double-clicking the first one extracts the self-contained version of your PowerPoint document. Copy this document onto a USB pen drive or a CD and, if possible, test it on another computer before the conference.

Poster presentations

Please be aware that a poster is not intended to be a reproduction of your paper in the proceedings. The size of your poster should not be smaller than A0 format (1.19 x 0.84 meters) and must not exceed the size of the poster board, which is 1.41 meters wide and 1.70 meters high.

Poster sessions are mostly two hours long, as indicated in the session tables. This is the time interval during which you should be present at your poster to answer questions. Note that all posters of the morning sessions have to be installed before 8:30 and should be removed at 12:30. The posters of the early afternoon sessions have to be ready at 13:00 and the boards should be cleared during the coffee break. For the late afternoon sessions, the posters should be installed during the coffee break and cleared before 18:30. Please make sure to place your poster at the right board! A poster presentation labeled “3BP3.14” in the program has to use board “P3.14”, which is located in room “P3”. In case you have any questions, please contact your session chairperson, who will be on site 20 minutes before the session starts.
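For clarity, the label convention can be read mechanically: the tiny helper below (a hypothetical illustration, not an official conference tool) extracts the board and room from a program label such as “3BP3.14”.

    # Hypothetical helper restating the labeling rule above: the board is the part
    # of the program label starting at "P", and the room is the prefix before the dot.
    def poster_location(program_label):
        board = program_label[program_label.index("P"):]   # e.g. "P3.14"
        room = board.split(".")[0]                          # e.g. "P3"
        return board, room

    assert poster_location("3BP3.14") == ("P3.14", "P3")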

Recommendations for Posters

These recommendations are primarily meant for authors who are less familiar with poster sessions at conferences.

Poster sessions are a valuable method for authors to present papers and meet with interested attendees for in-depth technical discussions. Therefore, it is important that you display your results clearly to attract people who have an interest in your work and your paper. In order to make your poster presentation a success you should:

… before the Conference:

  • Your poster should cover the key points of your work. The ideal poster is designed to (1) attract attention; (2) provide a brief overview of your work; and (3) initiate discussion and questions.
  • The title of your poster should appear at the top in capital letters about 25mm (1 inch) high. Below the title, put the author(s) name(s) and affiliation(s).
  • Carefully prepare your poster well in advance of the conference. There will be no time or materials available for last minute preparations at the conference. If you think you may need certain materials to repair the poster after travelling, bring them with you.
  • Use color for highlighting and to make your poster more attractive. Think about what attracts you to posters and other visual displays. Use pictures, diagrams, cartoons, figures, etc., rather than only text wherever possible.
  • The smallest text on your poster should be at least 9mm (3/8 inch) high, and the important points should be in a larger size.
  • Make your poster as self-explanatory as possible. This will save you time to use for discussions and questions.

… at the Conference:

  • Please make sure to locate your poster board and attach your poster according to the instructions for presentations at Interspeech’2005.
  • Prepare a short presentation (several minutes) that you can periodically give to those assembled around your poster. Be ready to give it several times as people move through the area.
  • If there is more than one author attending the conference, all should attend the poster presentation to aid in the presentation and discussion and to provide the main presenter with a chance to rest and to answer questions.
  • There will be no audio-visual equipment for poster presentations. You have to print your poster on paper before the conference, bring it along and put it up on the poster board assigned to you. For demonstrations you have to use your own equipment. Power strips supplying standard mains power (230 V / 50 Hz) will be available.

INTERSPEECH ‘2005 – EUROSPEECH is the sixth conference in the annual series of INTERSPEECH events and the ninth biennial conference of the International Speech Communication Association (ISCA). It will be held September 4-8 in Lisbon, Portugal, following previous INTERSPEECH events in Jeju (2004), Geneva (2003), Denver (2002), Aalborg (2001) and Beijing (2000).

INTERSPEECH’2005 will be held at Centro Cultural de Belém, located in Lisbon’s most renowned historic area, next to the Jerónimos Monastery and facing the river Tagus.

Although this interdisciplinary conference will cover all aspects of speech science and technology, INTERSPEECH’2005 will have a special focus on the theme of “Ubiquitous Speech Processing”. The conference will include plenary talks by world-class experts, tutorials, exhibits and parallel oral and poster sessions on the following topics:

Phonetics and Phonology
Discourse and Dialogue
Prosody
Paralinguistic and Nonlinguistic Information
Speech Production
Speech Perception
Physiology and Pathology
Spoken Language Acquisition, Development and Learning
Signal Analysis, Processing and Feature Estimation
Single- and Multi-channel Speech Enhancement
Speech Coding and Transmission
Spoken Language Generation and Synthesis
Speech Recognition
Acoustic processing for ASR, language and pronunciation modeling, adaptation and general robustness issues, engineering issues in ASR (e.g. searches, large vocabulary), etc.
Spoken Language Understanding
Speaker Characterization and Recognition
Language/Dialect Identification
Multi-modal/Multi-media Processing
Spoken Language Resources and Annotation
Spoken/Multi-modal Dialogue Systems
Spoken Language Extraction/Retrieval
Spoken Language Translation
Spoken Language Technology for the Aged and Disabled (e-inclusion)
Spoken Language Technology for Education (e-learning)
New Applications
Evaluation and Standardization
Ubiquitous Speech Processing
Others
PAPER SUBMISSION
The deadline for full paper submission (4 pages) has been extended to April 14, 2005. Paper submission is done exclusively via the conference website, using the submission guidelines. No previously published papers should be submitted. Each corresponding author will be notified by e-mail of the acceptance of the paper by June 10, 2005. Minor updates will be allowed during June 10 – June 16.
PROPOSALS FOR TUTORIALS AND SPECIAL SESSIONS
We encourage proposals for half-day pre-conference tutorials to be held on September 4, 2005. Those interested in organizing a tutorial should send a one-page description to the organizers by January 14, 2005.

INTERSPEECH’2005 also welcomes proposals for special sessions. Each special session normally consists of 6 invited papers, although other formats may be considered. The topics of the special sessions should be important, new, emerging areas of interest to the speech processing community, yet have little overlap with the regular sessions. They could also be interdisciplinary topics that encourage cross-fertilization of fields, or topics investigated by members of other societies that are becoming of keen interest to the speech and language community. Special session papers follow the same submission format as regular papers. Proposals for special sessions should be sent to the organizers by January 14, 2005.

IMPORTANT DATES:
Proposals for Tutorials and Special Sessions due by: January 14, 2005
Full paper submission deadline: April 14, 2005
Notification of paper acceptance/rejection: June 10, 2005
Early registration deadline: June 30, 2005
INFORMATION
If you want to be updated as more information becomes available, please send an e-mail to the following address