Search dblp
Full-text search
- case-insensitive prefix search (default): e.g., sig matches "SIGIR" as well as "signal"
- exact word search: append a dollar sign ($) to the word; e.g., graph$ matches "graph", but not "graphics"
- boolean and: separate words by a space; e.g., codd model
- boolean or: connect words by the pipe symbol (|); e.g., graph|network
Update May 7, 2017: Please note that we had to disable the phrase search operator (.) and the boolean not operator (-) due to technical problems. For the time being, phrase search queries will yield regular prefix search results, and search terms preceded by a minus will be interpreted as regular (positive) search terms.
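The operators above compose into a single query string. As a minimal sketch, the helpers below build queries from the documented syntax (prefix matching by default, "$" for an exact word, space for AND, "|" for OR); the helper names are my own, not part of dblp.

```python
# Sketch: compose dblp full-text queries from the operators documented above.
# Only the operator syntax comes from this page; the function names are
# illustrative assumptions.

def exact(word: str) -> str:
    """Match the whole word only, e.g. exact("graph") -> "graph$"."""
    return word + "$"

def any_of(*terms: str) -> str:
    """Boolean or: connect alternatives with the pipe symbol."""
    return "|".join(terms)

def all_of(*terms: str) -> str:
    """Boolean and: separate terms by spaces."""
    return " ".join(terms)

# "codd" AND ("graph" as an exact word, OR "network" as a prefix):
query = all_of("codd", any_of(exact("graph"), "network"))
print(query)  # codd graph$|network
```

The resulting string can be pasted into the search box as-is; terms not marked with "$" still match as case-insensitive prefixes.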
Publication search results
found 49 matches
- 2008
- Magnus Alm, Dawn M. Behne: Age-related experience in audio-visual speech perception. AVSP 2008: 205-208
- Gérard Bailly, Antoine Bégault, Frédéric Elisei, Pierre Badin: Speaking with smile or disgust: data and models. AVSP 2008: 111-114
- Gérard Bailly, Yu Fang, Frédéric Elisei, Denis Beautemps: Retargeting cued speech hand gestures for different talking heads and speakers. AVSP 2008: 153-158
- Adriano Vilela Barbosa, Hani C. Yehia, Eric Vatikiotis-Bateson: Algorithm for computing spatiotemporal coordination. AVSP 2008: 131-136
- Adriano Vilela Barbosa, Hani C. Yehia, Eric Vatikiotis-Bateson: Linguistically valid movement behavior measured non-invasively. AVSP 2008: 173-177
- Dawn M. Behne, Yue Wang, Stein-Ove Belsby, Solveig Kaasa, Lisa Simonsen, Kirsti Back: Visual field advantage in the perception of audiovisual speech segments. AVSP 2008: 47-50
- Douglas Brungart, Nandini Iyer, Brian D. Simpson, Virginie van Wassenhove: The effects of temporal asynchrony on the intelligibility of accelerated speech. AVSP 2008: 19-24
- Denis Burnham, Arman Abrahamyan, Lawrence Cavedon, Chris Davis, Andrew Hodgins, Jeesun Kim, Christian Kroos, Takaaki Kuratate, Trent W. Lewis, Martin H. Luerssen, Garth Paine, David M. W. Powers, Marcia Riley, Stelarc, Kate Stevens: From talking to thinking heads: report 2008. AVSP 2008: 127-130
- Josef Chaloupka, Jan Nouza, Jindrich Zdánský: Audio-visual voice command recognition in noisy conditions. AVSP 2008: 25-30
- Girija Chetty, Michael Wagner: A multilevel fusion approach for audiovisual emotion recognition. AVSP 2008: 115-120
- Jeffrey F. Cohn: Facial dynamics reveals person identity and communicative intent, regulates person perception and social interaction. AVSP 2008: 3
- Piero Cosi, Graziano Tisato: Describing "INTERFACE" a matlab© tool for building talking heads. AVSP 2008: 143-146
- Stephen J. Cox, Richard W. Harvey, Yuxuan Lan, Jacob L. Newman, Barry-John Theobald: The challenge of multispeaker lip-reading. AVSP 2008: 179-184
- David Dean, Sridha Sridharan: Fused HMM adaptation of synchronous HMMs for audio-visual speaker verification. AVSP 2008: 137-141
- Marion Dohen, Chun-Huei Wu, Harold Hill: Auditory-visual perception of prosodic information: inter-linguistic analysis - contrastive focus in French and Japanese. AVSP 2008: 89-94
- James D. Edge, Adrian Hilton, Philip J. B. Jackson: Parameterisation of 3d speech lip movements. AVSP 2008: 229-234
- Sascha Fagel, Gérard Bailly: German text-to-audiovisual-speech by 3-d speaker cloning. AVSP 2008: 43-46
- Sascha Fagel, Christine Kühnel, Benjamin Weiss, Ina Wechsung, Sebastian Möller: A comparison of German talking heads in a smart home environment. AVSP 2008: 75-78
- Sascha Fagel, Katja Madany: Guided non-linear model estimation (gnoME). AVSP 2008: 59-62
- Sidney S. Fels, Robert Pritchard, Eric Vatikiotis-Bateson: Building a portable gesture-to-audio/visual speech system. AVSP 2008: 13-18
- Maeva Garnier: May speech modifications in noise contribute to enhance audio-visible cues to segment perception? AVSP 2008: 95-100
- Gianluca Giorgolo, Frans A. J. Verstraten: Perception of 'speech-and-gesture' integration. AVSP 2008: 31-36
- Roland Göcke, Akshay Asthana: A comparative study of 2d and 3d lip tracking methods for AV ASR. AVSP 2008: 235-240
- Sanaul Haq, Philip J. B. Jackson, James D. Edge: Audio-visual feature selection and reduction for emotion classification. AVSP 2008: 185-190
- Þórir Harðarson, Hans-Heinrich Bothe: A model for the dynamics of articulatory lip movements. AVSP 2008: 209-214
- Carlos Toshinori Ishi, Hiroshi Ishiguro, Norihiro Hagita: Analysis of inter- and intra-speaker variability of head motions during spoken dialogue. AVSP 2008: 37-42
- Alexandra Jesse, Elizabeth K. Johnson: Audiovisual alignment in child-directed speech facilitates word learning. AVSP 2008: 101-106
- Jeesun Kim, Christian Kroos, Chris Davis: Hearing a talking face: an auditory influence on a visual detection task. AVSP 2008: 107-110
- Zdenek Krnoul, Patrik Rostík, Milos Zelezný: Evaluation of synthesized sign and visual speech by deaf. AVSP 2008: 215-218
- Bernd J. Kröger, Jim Kannampuzha: A neurofunctional model of speech production including aspects of auditory and audio-visual speech perception. AVSP 2008: 83-88
(19 more matches not shown)
retrieved on 2024-11-17 05:29 CET from data curated by the dblp team
all metadata released as open data under CC0 1.0 license