AVSP 2005: Vancouver Island, British Columbia, Canada
- Eric Vatikiotis-Bateson:
Auditory-Visual Speech Processing 2005, Vancouver Island, British Columbia, Canada, July 24-27, 2005. ISCA 2005
Invited Lectures
- Janet Beavin Bavelas:
Appreciating face-to-face dialogue. 1
Human Perception and Processing of Auditory-Visual Speech
- Hansjörg Mixdorff, Patavee Charnvivit, Denis K. Burnham:
Auditory-visual perception of syllabic tones in Thai. 3-8
- Dominic W. Massaro, Miguel Hidalgo-Barnes:
Read my lips: an animated face helps communicate musical lyrics. 9-10
- Azra Nahid Ali, Ashraf Hassan-Haj, Michael Ingleby, Ali Idrissi:
McGurk fusion effects in Arabic words. 11-16
- Jeesun Kim, Chris Davis, Guillaume Vignali, Harold Hill:
A visual concomitant of the Lombard reflex. 17-22
- Nicole Lees, Denis K. Burnham:
Facilitating speech detection in style!: the effect of visual speaking style on the detection of speech in noise. 23-28
- Marc Swerts, Emiel Krahmer:
Cognitive processing of audiovisual cues to prominence. 29-30
- Cheryl M. Capek, Ruth Campbell, Mairéad MacSweeney, Marc L. Seal, Dafydd Waters, Bencie Woll, Tony David, Philip K. McGuire, Mick Brammer:
Reading speech and emotion from still faces: fMRI findings. 31-34
- Alexandra Jesse, Dominic W. Massaro:
Towards a lexical fuzzy logical model of perception: the time-course of audiovisual speech processing in word identification. 35-36
- Jacques C. Koreman, Georg Meyer:
The integration of coarticulated segments in visual speech. 37-38
- Jintao Jiang, Lynne E. Bernstein, Edward T. Auer Jr.:
Perception of congruent and incongruent audiovisual speech stimuli. 39-44
- Slim Ouni, Michael M. Cohen, Hope Ishak, Dominic W. Massaro:
Visual contribution to speech perception: measuring the intelligibility of talking heads. 45-46
- Michael Walsh, Stephen Wilson:
An agent-based framework for auditory-visual speech investigation. 47-52
- Daniel E. Callan:
Internal models differentially implicated in audiovisual perception of non-native vowel contrasts. 53-54
- Victor Chung, Nicole Mirante, Jolien Otten, Eric Vatikiotis-Bateson:
Audiovisual processing of Lombard speech. 55-56
- V. Dogu Erdener, Denis K. Burnham:
Development of auditory-visual speech perception in English-speaking children: the role of language-specific factors. 57-62
- Harold Hill, Eric Vatikiotis-Bateson:
Using graphics to study the perception of speech-in-noise, and vice versa. 63-64
- Vincent Robert, Brigitte Wrobel-Dautcourt, Yves Laprie, Anne Bonneau:
Inter speaker variability of labial coarticulation with the view of developing a formal coarticulation model for French. 65-70
Invited Lectures
- Yohan Payan:
How to model face and tongue biomechanics for the study of speech production? 71-72
Machine-based Recognition and Processing of Auditory-Visual Speech
- Patrick Lucey, David Dean, Sridha Sridharan:
Problems associated with current area-based visual speech feature extraction techniques. 73-78
- Gerasimos Potamianos, Patricia Scanlon:
Exploiting lower face symmetry in appearance-based automatic speechreading. 79-84
- Simon Lucey, Patrick Lucey:
Improved speech reading through a free-parts representation. 85-86
- Edson Bárcenas, Mauricio Díaz, Rafael Carrillo, Ricardo Solano, Carolina Soto, Luis Valderrama, Javier Villegas, Pedro R. Vizcaya:
A coding method for visual telephony sequences. 87-92
- Petr Císar, Milos Zelezný, Zdenek Krnoul, Jakub Kanis, Jan Zelinka, Ludek Müller:
Design and recording of Czech speech corpus for audio-visual continuous speech recognition. 93-96
- David Dean, Patrick Lucey, Sridha Sridharan:
Audio-visual speaker identification using the CUAVE database. 97-102
- Jianxia Xue, Jintao Jiang, Abeer Alwan, Lynne E. Bernstein:
Consonant confusion structure based on machine classification of visual features in continuous speech. 103-108
The Production of Auditory-Visual Speech
- Roland Goecke:
3d lip tracking and co-inertia analysis for improved robustness of audio-video automatic speech recognition. 109-114
- Marion Dohen, Hélène Loevenbruck, Harold Hill:
A multi-measurement approach to the identification of the audiovisual facial correlates of contrastive focus in French. 115-116
- Philip Rubin, Gordon Ramsay, Mark Tiede:
The history of articulatory synthesis at Haskins laboratories. 117-118
- Sidney S. Fels, Florian Vogt, Kees van den Doel, John E. Lloyd, Oliver Guenther:
Artisynth: an extensible, cross-platform 3d articulatory speech synthesizer. 119-124
- Frédéric Elisei, Gérard Bailly, Guillaume Gibert, Rémi Brun:
Capturing data and realistic 3d models for cued speech analysis and audiovisual synthesis. 125-130
- Takaaki Kuratate:
Statistical analysis and synthesis of 3d faces for auditory-visual speech animation. 131-136
- Sonia Sangari, Mustapha Skhiri, Bertil Lyberg:
Computational model of some communication head movements in a speech act. 137-142
- Florian Vogt:
Finite element modeling of the tongue. 143-144
- Brigitte Wrobel-Dautcourt, Marie-Odile Berger, Blaise Potard, Yves Laprie, Slim Ouni:
A low-cost stereovision based system for acquisition of visible articulatory data. 145-150
Invited Lectures
- Alan G. Hannam:
Structure and function in the human jaw. 151