AVSP 2011: Volterra, Italy
- Giampiero Salvi, Jonas Beskow, Olov Engwall, Samer Al Moubayed:
Auditory-Visual Speech Processing, AVSP 2011, Volterra, Italy, September 1-2, 2011. ISCA 2011
Keynote Papers
- Sverre Sjölander:
Acoustical and visual processing in the animal kingdom.
- Colm Massey:
From actor to avatar: real world challenges in capturing the human face.
Perception
- Tim Paris, Jeesun Kim, Chris Davis:
Visual speech influences speeded auditory identification. 5-8
- Catherine T. Best, Christian Kroos, Julia Irwin:
Do infants detect a-v articulator congruency for non-native click consonants? 9-14
- Erin Cvejic, Jeesun Kim, Chris Davis:
Perceiving visual prosody from point-light displays. 15-20
- Olha Nahorna, Frédéric Berthommier, Jean-Luc Schwartz:
Binding and unbinding the McGurk effect in audiovisual speech fusion: follow-up experiments on a new paradigm. 21-24
- Mandy Visser, Emiel Krahmer, Marc Swerts:
Children's expression of uncertainty in collaborative and competitive contexts. 25-30
- Michael Fitzpatrick, Jeesun Kim, Chris Davis:
The effect of seeing the interlocutor on auditory and visual speech production in noise. 31-35
- Denis Burnham, Virginie Attina, Benjawan Kasisopa:
Auditory-visual discrimination and identification of lexical tone within and across tone languages. 37-42
- Joan Borràs-Comes, Cecilia Pugliesi, Pilar Prieto:
Audiovisual perception of counter-expectational questions. 43-47
Synthesis
- Utpala Musti, Vincent Colotte, Asterios Toutios, Slim Ouni:
Introducing visual target cost within an acoustic-visual unit-selection speech synthesizer. 49-55
- Wesley Mattheyses, Lukas Latacz, Werner Verhelst:
Auditory and photo-realistic audiovisual speech synthesis for Dutch. 55-60
- Peng Wu, Dongmei Jiang, He Zhang, Hichem Sahli:
Photo-realistic visual speech synthesis based on AAM features and an articulatory DBN model with constrained asynchrony. 61-66
Demo Session
- Sascha Fagel:
Talking heads for elderly and Alzheimer patients (THEA): project report and demonstration. 67
- László Czap, János Mátyás:
Improving naturalness of visual speech synthesis. 69
- Samer Al Moubayed, Simon Alexandersson, Jonas Beskow, Björn Granström:
A robotic head using projected animated faces. 71
Perception and Modeling
- Jeesun Kim, Chris Davis:
Audiovisual speech processing in visual speech noise. 73-76
- Frédéric Berthommier, Jean-Luc Schwartz:
Audiovisual streaming in voicing perception: new evidence for a low-level interaction between audio and visual modalities. 77-80
- Tobias S. Andersen:
An ordinal model of the McGurk illusion. 81-86
- Bart Joosten, Marije van Amelsvoort, Emiel Krahmer, Eric O. Postma:
Thin slices of head movements during problem solving reveal level of difficulty. 87-92
- Yoshiko Arimoto, Kazuo Okanoya:
Dimensional mapping of multimodal integration on audiovisual emotion perception. 93-98
- Samer Al Moubayed, Gabriel Skantze:
Turn-taking control using gaze in multiparty human-computer dialogue: effects of 2d and 3d displays. 99-102
Corpora and Applications
- Georgios Galatas, Gerasimos Potamianos, Dimitrios I. Kosmopoulos, Christopher McMurrough, Fillia Makedon:
Bilingual corpus for AVASR using multiple sensors and depth information. 103-106
- Jonas Beskow, Simon Alexandersson, Samer Al Moubayed, Jens Edlund, David House:
Kinetic data for large-scale analysis and modeling of face-to-face conversation. 107-110
- Takaaki Kuratate, Brennand Pierce, Gordon Cheng:
"mask-bot" - a life-size talking head animated robot for AV speech and human-robot communication research. 111-116
- Takeshi Saitoh:
Development of communication support system using lip reading. 117-122
- Giuseppe Riccardo Leone, Piero Cosi:
LUCIA-webGL: a web based Italian MPEG-4 talking head. 123-126
Analysis and Recognition
- Qiang Huang, Stephen J. Cox, Fei Yan, Teofilo de Campos, David Windridge, Josef Kittler, William J. Christmas:
Improved detection of ball hit events in a tennis game using multimodal information. 127-130
- Carlos Toshinori Ishi, Chaoran Liu, Hiroshi Ishiguro, Norihiro Hagita:
Speech-driven lip motion generation for tele-operated humanoid robots. 131-135
- László Czap:
On the audiovisual asynchrony of speech. 137-140