Thurid Vogt
Books and Theses
- 2010
  - [b1] Thurid Vogt: Real-time automatic emotion recognition from speech. Bielefeld University, 2010
Journal Articles
- 2011
  - [j3] Anton Batliner, Stefan Steidl, Björn W. Schuller, Dino Seppi, Thurid Vogt, Johannes Wagner, Laurence Devillers, Laurence Vidrascu, Vered Aharonson, Loïc Kessous, Noam Amir: Whodunnit - Searching for the most important feature types signalling emotion-related user states in speech. Comput. Speech Lang. 25(1): 4-28 (2011)
  - [j2] Thurid Vogt, Elisabeth André: An Evaluation of Emotion Units and Feature Types for Real-Time Speech Emotion Recognition. Künstliche Intell. 25(3): 213-223 (2011)
- 2009
  - [j1] Jonghwa Kim, Johannes Wagner, Thurid Vogt, Elisabeth André, Frank Jung, Matthias Rehm: Emotional Sensitivity in Human-Computer Interaction (Emotionale Sensitivität in der Mensch-Maschine Interaktion). it Inf. Technol. 51(6): 325-328 (2009)
Conference and Workshop Papers
- 2022
  - [c27] Anna-Maria Meck, Christoph Draxler, Thurid Vogt: A Question of Fidelity: Comparing Different User Testing Methods for Evaluating In-Car Prompts. CUI 2022: 31:1-31:5
- 2010
  - [c26] Johannes Wagner, Frank Jung, Jonghwa Kim, Thurid Vogt, Elisabeth André: The Smart Sensor Integration Framework and its Application in EU Projects. B-Interface 2010: 13-21
  - [c25] Nikolaus Bee, Johannes Wagner, Elisabeth André, Thurid Vogt, Fred Charles, David Pizzi, Marc Cavazza: Discovering eye gaze behavior during human-agent conversation in an interactive storytelling application. ICMI-MLMI 2010: 9:1-9:8
  - [c24] Florian Lingenfelser, Johannes Wagner, Thurid Vogt, Jonghwa Kim, Elisabeth André: Age and gender classification from speech using decision level fusion and ensemble based techniques. INTERSPEECH 2010: 2798-2801
  - [c23] Jean-Luc Lugrin, Marc Cavazza, David Pizzi, Thurid Vogt, Elisabeth André: Exploring the usability of immersive interactive storytelling. VRST 2010: 103-110
- 2009
  - [c22] Stephen W. Gilroy, Marc Cavazza, Markus Niiranen, Elisabeth André, Thurid Vogt, Jérôme Urbain, Maurice Benayoun, Hartmut Seichter, Mark Billinghurst: PAD-based multimodal affective fusion. ACII 2009: 1-8
  - [c21] Jonghwa Kim, Elisabeth André, Thurid Vogt: Towards user-independent classification of multimodal emotional signals. ACII 2009: 1-7
  - [c20] Alexander Osherenko, Elisabeth André, Thurid Vogt: Affect sensing in speech: Studying fusion of linguistic and acoustic features. ACII 2009: 1-6
  - [c19] Thurid Vogt, Elisabeth André, Johannes Wagner, Stephen W. Gilroy, Fred Charles, Marc Cavazza: Real-time vocal emotion recognition in artistic installations and interactive storytelling: Experiences and lessons learnt from CALLAS and IRIS. ACII 2009: 1-8
  - [c18] Marc Cavazza, David Pizzi, Fred Charles, Thurid Vogt, Elisabeth André: Emotional input for character-based interactive storytelling. AAMAS (1) 2009: 313-320
  - [c17] Fred Charles, David Pizzi, Marc Cavazza, Thurid Vogt, Elisabeth André: EmoEmma: emotional speech input for interactive storytelling. AAMAS (2) 2009: 1381-1382
  - [c16] Thurid Vogt, Elisabeth André: Exploring the benefits of discretization of acoustic features for speech emotion recognition. INTERSPEECH 2009: 328-331
- 2008
  - [c15] Stephen W. Gilroy, Marc Cavazza, Rémi Chaignon, Satu-Marja Mäkelä, Markus Niiranen, Elisabeth André, Thurid Vogt, Jérôme Urbain, Hartmut Seichter, Mark Billinghurst, Maurice Benayoun: An affective model of user experience for interactive art. Advances in Computer Entertainment Technology 2008: 107-110
  - [c14] Matthias Rehm, Thurid Vogt, Michael Wissner, Nikolaus Bee: Dancing the night away: controlling a virtual karaoke dancer by multimodal expressive cues. AAMAS (3) 2008: 1249-1252
  - [c13] Dino Seppi, Anton Batliner, Björn W. Schuller, Stefan Steidl, Thurid Vogt, Johannes Wagner, Laurence Devillers, Laurence Vidrascu, Noam Amir, Vered Aharonson: Patterns, prototypes, performance: classifying emotional user states. INTERSPEECH 2008: 601-604
  - [c12] Stephen W. Gilroy, Marc Cavazza, Rémi Chaignon, Satu-Marja Mäkelä, Markus Niiranen, Elisabeth André, Thurid Vogt, Jérôme Urbain, Mark Billinghurst, Hartmut Seichter, Maurice Benayoun: E-tree: emotionally driven augmented reality art. ACM Multimedia 2008: 945-948
  - [c11] Thurid Vogt, Elisabeth André, Nikolaus Bee: EmoVoice - A Framework for Online Recognition of Emotions from Voice. PIT 2008: 188-199
- 2007
  - [c10] Johannes Wagner, Thurid Vogt, Elisabeth André: A Systematic Comparison of Different HMM Designs for Emotion Recognition from Acted and Spontaneous Speech. ACII 2007: 114-125
  - [c9] João Dias, Wan Ching Ho, Thurid Vogt, Nathalie Beeckman, Ana Paiva, Elisabeth André: I Know What I Did Last Summer: Autobiographic Memory in Synthetic Characters. ACII 2007: 606-617
  - [c8] Karin Leichtenstern, Elisabeth André, Thurid Vogt: Role Assignment Via Physical Mobile Interaction Techniques in Mobile Multi-user Applications for Children. AmI 2007: 38-54
  - [c7] Björn W. Schuller, Anton Batliner, Dino Seppi, Stefan Steidl, Thurid Vogt, Johannes Wagner, Laurence Devillers, Laurence Vidrascu, Noam Amir, Loïc Kessous, Vered Aharonson: The relevance of feature type for the automatic classification of emotional user states: low level descriptors and functionals. INTERSPEECH 2007: 2253-2256
  - [c6] Christian Weiss, Luís C. Oliveira, Sérgio Paulo, Carlos Mendes, Luís Figueira, Marco Vala, Pedro Sequeira, Ana Paiva, Thurid Vogt, Elisabeth André: eCIRCUS: building voices for autonomous speaking agents. SSW 2007: 300-303
  - [c5] Fred Charles, Samuel Lemercier, Thurid Vogt, Nikolaus Bee, Maurizio Mancini, Jérôme Urbain, Marc Price, Elisabeth André, Catherine Pelachaud, Marc Cavazza: Affective Interactive Narrative in the CALLAS Project. International Conference on Virtual Storytelling 2007: 210-213
- 2006
  - [c4] Frank Hegel, Thorsten Spexard, Britta Wrede, Gernot Horstmann, Thurid Vogt: Playing a different imitation game: Interaction with an Empathic Android Robot. Humanoids 2006: 56-61
  - [c3] Thurid Vogt, Elisabeth André: Improving Automatic Emotion Recognition from Speech via Gender Differentiation. LREC 2006: 1123-1126
- 2005
  - [c2] Thurid Vogt, Elisabeth André: Comparing Feature Sets for Acted and Spontaneous Speech in View of Automatic Emotion Recognition. ICME 2005: 474-477
  - [c1] Jonghwa Kim, Elisabeth André, Matthias Rehm, Thurid Vogt, Johannes Wagner: Integrating information from speech and physiological signals to achieve emotional sensitivity. INTERSPEECH 2005: 809-812
Parts in Books or Collections
- 2008
  - [p1] Thurid Vogt, Elisabeth André, Johannes Wagner: Automatic Recognition of Emotions from Speech: A Review of the Literature and Recommendations for Practical Realisation. Affect and Emotion in Human-Computer Interaction 2008: 75-91
last updated on 2024-10-07 22:21 CEST by the dblp team
all metadata released as open data under CC0 1.0 license