


AVSP 2015: Vienna, Austria
- Auditory-Visual Speech Processing, AVSP 2015, Vienna, Austria, September 11-13, 2015. ISCA 2015
Keynotes
- Volker Helzle:
An artistic and tool-driven approach for believable digital characters.
- Verónica Costa Orvalho:
How to create a look-a-like avatar pipeline using low-cost equipment.
- Jean-Luc Schwartz:
Audiovisual binding in speech perception.
- Frank K. Soong, Lijuan Wang:
From text-to-speech (TTS) to talking head - a machine learning approach to A/V speech modeling and rendering.
Life Span
- Mandy Visser, Emiel Krahmer, Marc Swerts:
Children's spontaneous emotional expressions while receiving (un)wanted prizes in the presence of peers. 1-6
- Mathilde Fort, Anira Escrichs, Alba Ayneto-Gimeno, Núria Sebastián-Gallés:
You can raise your eyebrows, I don't mind: are monolingual and bilingual infants equally good at learning from the eyes region of a talking face? 7-11
- Paula Dornhofer Paro Costa, Daniella Batista, Mayara Toffoli, Keila A. Baraldi Knobel, Cíntia Alves Salgado, José Mario De Martino:
Comparison of visual speech perception of sampled-based talking heads: adults and children with and without developmental dyslexia. 12-16
- Simone Simonetti, Jeesun Kim, Chris Davis:
Cross-modality matching of linguistic prosody in older and younger adults. 17-21
- Aurélie Huyse, Frédéric Berthommier, Jacqueline Leybaert:
"I do not see what you are saying": reduced visual influence on multimodal speech integration in children with SLI. 22-27
Emotion, Personality, and Dialogue
- Catherine T. Best, Christian Kroos, Karen E. Mulak, Shaun Halovic, Mathilde Fort, Christine Kitamura:
Message vs. messenger effects on cross-modal matching for spoken phrases. 28-33
- Adela Barbulescu, Gérard Bailly, Rémi Ronfard, Maël Pouget:
Audiovisual generation of social attitudes from neutral stimuli. 34-39
- Elizabeth Stelle, Caroline L. Smith, Eric Vatikiotis-Bateson:
Delayed auditory feedback with static and dynamic visual feedback. 40-45
- Chee Seng Chong, Jeesun Kim, Chris Davis:
Visual vs. auditory emotion information: how language and culture affect our bias towards the different modalities. 46-51
Poster Sessions
- Hansjörg Mixdorff, Angelika Hönemann, Jeesun Kim, Chris Davis:
Anticipation of turn-switching in auditory-visual dialogs. 52-56
Culture and Language
- Sachiko Takagi, Shiho Miyazawa, Elisabeth Huis in 't Veld, Béatrice de Gelder, Akihiro Tanaka:
Comparison of multisensory display rules in expressing complex emotions between cultures. 57-62
- Akihiro Tanaka, Sachiko Takagi, Saori Hiramatsu, Elisabeth Huis in 't Veld, Béatrice de Gelder:
Towards the development of facial and vocal expression database in east Asian and Western cultures. 63-66
- Sarah Fenwick, Chris Davis, Catherine T. Best, Michael D. Tyler:
The effect of modality and speaking style on the discrimination of non-native phonological and phonetic contrasts in noise. 67-72
- Rui Wang, Biao Zeng, Simon Thompson:
Audio-visual perception of Mandarin lexical tones in AX same-different judgment task. 73-77
Visual Speech Synthesis
- Yu Ding, Catherine Pelachaud:
Lip animation synthesis: a unified framework for speaking and laughing virtual agent. 78-83
- Dietmar Schabus, Michael Pucher:
Comparison of dialect models and phone mappings in HSMM-based visual dialect speech synthesis. 84-87
- Ausdang Thangthai, Barry-John Theobald:
HMM-based visual speech synthesis using dynamic visemes. 88-92
- Najwa Alghamdi, Steve Maddock, Guy J. Brown, Jon Barker:
Investigating the impact of artificial enhancement of lip visibility on the intelligibility of spectrally-distorted speech. 93-98
- Chris Davis, Jeesun Kim, Vincent Aubanel, Gregory Zelic, Yatin Mahajan:
The stability of mouth movements for multiple talkers over multiple sessions. 99-102
Audio-Visual Speech Recognition
- Thomas Le Cornu, Ben Milner:
Voicing classification of visual speech using convolutional neural networks. 103-108
- Stavros Petridis, Varun Rajgarhia, Maja Pantic:
Comparison of single-model and multiple-model prediction-based audiovisual fusion. 109-114
- Helen L. Bear, Richard W. Harvey, Yuxuan Lan:
Finding phonemes: improving machine lip-reading. 115-120
- Stephen Cox:
Discovering patterns in visual speech. 121-126
- Kwanchiva Thangthai, Richard W. Harvey, Stephen J. Cox, Barry-John Theobald:
Improving lip-reading performance for robust audiovisual speech recognition using DNNs. 127-131
Visual Speech Perception
- Vincent Aubanel, Chris Davis, Jeesun Kim:
Explaining the visual and masked-visual advantage in speech perception in noise: the role of visual phonetic cues. 132-136
- Danny Websdale, Ben Milner:
Analysing the importance of different visual feature coefficients. 137-142
- Lucie Scarbel, Denis Beautemps, Jean-Luc Schwartz, Marc Sato:
Auditory and audiovisual close-shadowing in normal and cochlear-implanted hearing impaired subjects. 143-146
Poster Sessions
- Elena Tsankova, Eva Krumhuber, Andrew J. Aubrey, Arvid Kappas, Guido Möllering, A. David Marshall, Paul L. Rosin:
The multi-modal nature of trustworthiness perception. 147-152
- Hrishikesh Rao, Zhefan Ye, Yin Li, Mark A. Clements, Agata Rozga, James M. Rehg:
Combining acoustic and visual features to detect laughter in adults' speech. 153-156
- Jason Vandeventer, Andrew J. Aubrey, Paul L. Rosin, A. David Marshall:
4D Cardiff Conversation Database (4D CCDb): a 4D database of natural, dyadic conversations. 157-162
Visual Speech Perception
- Clémence Bayard, Cécile Colin, Jacqueline Leybaert:
Integration of auditory, labial and manual signals in cued speech perception by deaf adults: an adaptation of the McGurk paradigm. 163-168
Poster Sessions
- Christiaan Rademan, Thomas Niesler:
Improved visual speech synthesis using dynamic viseme k-means clustering and decision trees. 169-174
- Etienne Marcheret, Gerasimos Potamianos, Josef Vopicka, Vaibhava Goel:
Scattering vs. discrete cosine transform features in visual speech processing. 175-180
- Kazuto Ukai, Satoshi Tamura, Satoru Hayamizu:
Stream weight estimation using higher order statistics in multi-modal speech recognition. 181-184
- Maiko Takahashi, Akihiro Tanaka:
Optimal timing of audio-visual text presentation: the role of attention. 185-189
- Helen L. Bear, Stephen J. Cox, Richard W. Harvey:
Speaker-independent machine lip-reading with speaker-dependent viseme classifiers. 190-195
- Vasudev Bethamcherla, Will Paul, Cecilia Ovesdotter Alm, Reynold J. Bailey, Joe Geigel, Linwei Wang:
Face-speech sensor fusion for non-invasive stress detection. 196-201
Emotion, Personality, and Dialogue
- Angelika Hönemann, Hansjörg Mixdorff, Albert Rilliard:
Classification of auditory-visual attitudes in German. 202-207
Poster Sessions
- Julia Irwin, Lawrence Brancazio:
The development of patterns of gaze to a speaking face. 208-212
- João Paulo Cabral, Yuyun Huang, Christy Elias, Ketong Su, Nick Campbell:
Interface for monitoring of engagement from audio-visual cues.
- Pasquale Dente, Dennis Küster, Eva Krumhuber:
Boxing the face: a comparison of dynamic facial databases used in facial analysis and animation.
- Michael Pucher, Dietmar Schabus:
Visio-articulatory to acoustic conversion of speech.
- Darren Cosker, Eva Krumhuber, Adrian Hilton:
Perceived emotionality of linear and non-linear AUs synthesised using a 3D dynamic morphable facial model.
- Nicholas Smith, Timothy Vallier, Bob McMurray, Christine Hammans, Julia Garrick:
Environmental, linguistic, and developmental influences on mothers' speech to children: an examination of audible and visible properties.
- Ganesh Attigodu Chandrashekara, Frédéric Berthommier, Jean-Luc Schwartz:
Dynamics of audiovisual binding in elderly population.
- Alexandre Hennequin, Amélie Rochet-Capellan, Marion Dohen:
Auditory-visual perception of VCVs produced by people with Down syndrome: a preliminary study.
- Bo Holm-Rasmussen, Tobias Andersen:
The perceived sequence of consonants in McGurk combination illusions depends on syllabic stress.
- Tobias Andersen:
An answer to a naïve question to the McGurk effect: why does audio b give more d percepts with visual g than with visual d?
- Irene De la Cruz-Pavía, Michael McAuliffe, Janet F. Werker, Judit Gervain, Eric Vatikiotis-Bateson:
Visual cues to phrase segmentation and the acquisition of word order.
- Gilbert Ambrazaitis, Malin Svensson Lundmark, David House:
Head movements, eyebrows, and phonological prosodic prominence levels in Stockholm Swedish news broadcasts.
- Antje Strauß, Christophe Savariaux, Sonia Kandel, Jean-Luc Schwartz:
Visual lip information supports auditory word segmentation.
