AVSP 1999: Santa Cruz, CA, USA
- Dominic W. Massaro:
Auditory-Visual Speech Processing, AVSP '99, Santa Cruz, CA, USA, August 7-10, 1999. ISCA 1999, ISBN 978-0-9674047-0-7
Plenary Papers
- Clifford Nass, Li Gong: Maximized Modality or constrained consistency? 1
- David J. Lewkowicz: Infants' perception of the audible, visible and bimodal attributes of talking and singing faces. 2
- Barry E. Stein, Mark T. Wallace, Wan Jiang, Huai Jian, J. William Vaughn: Cross-Modal Integration: Bringing Coherence to the Sensory World. 3
Auditory/Visual Speech Perception I (Dedicated to Kerry P. Green)
- Linda W. Norrix, Kerry P. Green: Visual context effects on the perception of /r/ and /l/: Varying F1 and F2 acoustic characteristics. 4
- Ethan Cox, Linda W. Norrix, Kerry P. Green: The contribution of visual information to on-line sentence processing: Evidence from phoneme monitoring. 5
- M. de Haan, Ruth Campbell: Lateralized event-related cortical potentials in discriminating images of facial speech. 6
- Ruth Campbell, Gemma A. Calvert, Michael J. Brammer, Mairéad MacSweeney, Simon A. Surguladze, Philip K. McGuire, Bencie Woll, Steve Williams, Edson Amaro Jr., Anthony S. David: Activation in auditory cortex by speechreading in hearing people: FMRI studies. 7
- Rika Kanzaki, Ruth Campbell: Effect of facial brightness reversal on visual and audiovisual speech perception. 8
Auditory/Visual Speech Perception II
- Jean-Pierre Gagné, Monique Charest, Anne-Josée Rochette: An analysis of the effects of clear speech on the visual-speech intelligibility of consonants. 9
- Philip Franz Seitz, Ken W. Grant: Modality, perceptual encoding speed, and time-course of phonetic information. 10
- Lawrence Brancazio: Lexical influences on the McGurk effect. 11
- Chris Davis, Jeesun Kim: Perception of clearly presented foreign language sounds: The effects of visible speech. 12
- Denis Burnham, Susanna Lau: The integration of auditory and visual speech information with foreign speakers: The role of expectancy. 13
Speech Analysis and Recognition by Machine
- James F. Baldwin, Trevor P. Martin, Mehreen Saeed: Automatic computer lip-reading using fuzzy set theory. 14
- Javier R. Movellan, Paul Mineiro: A diffusion network approach to visual speech recognition. 15
- Partha Niyogi, Eric Petajan, Jialin Zhong: Feature based representation for audio-visual speech recognition. 16
- Barbara Helga Talle, Andreas Wichert: Audio-visual sensor fusion with neural architectures. 17
- Andrew W. Senior, Chalapathy Neti, Benoît Maison: On the use of visual information for improving audio-based speaker recognition. 18
Correspondences between Auditory and Visual Speech and Auditory Speech to Visual Speech (AStVS) Synthesis
- Jon P. Barker, Frédéric Berthommier: Estimation of speech acoustics from visual speech features: A comparison of linear and non-linear models. 19
- Eric Vatikiotis-Bateson, Takaaki Kuratate, Miyuki Kamachi, Hani Yehia: Facial deformation parameters for audiovisual synthesis. 20
- Eva Agelfors, Jonas Beskow, Björn Granström, Magnus Lundeberg, Giampiero Salvi, Karl-Erik Spens, Tobias Öhman: Synthetic visual speech driven from auditory speech. 21
- Fabio Vignoli, Carlo Braccini: A text-speech synchronization technique with applications to talking heads. 22
- Dominic W. Massaro, Jonas Beskow, Michael M. Cohen, Christopher L. Fry, Tony Rodriguez: Picture my voice: Audio to visual speech synthesis using artificial neural networks. 23
Communicating Characters
- Kazuya Imaizumi, Shizuo Hiki, Yumiko Fukuda: A symbolic system for multi-purpose description of the mouth shapes. 24
- Georg Fries, Aldo Paradiso, Frank Nack, Karlheinz Schuhmacher: A tool for designing MPEG-4 compliant expressions and animations on VRML cartoon-faces. 25
- Magnus Lundeberg, Jonas Beskow: Developing a 3D-agent for the august dialogue system. 26
- Jean-Luc Olives, Riikka Möttönen, Janne Kulju, Mikko Sams: Audio-visual speech synthesis for finnish. 27
- Max Ritter, Uwe Meier, Jie Yang, Alex Waibel: Face translation: A multimodal translation agent. 28