Michael Voit
2020 – today

2024
- [c35] David Lerch, Zeyun Zhong, Manuel Martin, Michael Voit, Jürgen Beyerer: Unsupervised 3D Skeleton-Based Action Recognition using Cross-Attention with Conditioned Generation Capabilities. WACV (Workshops) 2024: 202-211

2023
- [c34] Christian Lengenfelder, Jutta Hild, Michael Voit, Elisabeth Peinsipp-Byma: Pilot Study on Gaze-Based Mental Fatigue Detection During Interactive Image Exploitation. HCI (7) 2023: 109-119
- [c33] Jutta Hild, Lars Sommer, Gerrit Holzbach, Michael Voit, Elisabeth Peinsipp-Byma: Proposing Gaze-Based Interaction and Automated Screening Results for Visual Aerial Image Analysis. HCI (7) 2023: 200-214
- [c32] Jutta Hild, Wolfgang Krüger, Gerrit Holzbach, Michael Voit, Elisabeth Peinsipp-Byma: Pilot Study on Interaction with Wide Area Motion Imagery Comparing Gaze Input and Mouse Input. HCI (5) 2023: 352-369
- [c31] Manuel Martin, David Lerch, Michael Voit: Viewpoint Invariant 3D Driver Body Pose-Based Activity Recognition. IV 2023: 1-6
- [c30] Zeyun Zhong, David Schneider, Michael Voit, Rainer Stiefelhagen, Jürgen Beyerer: Anticipative Feature Fusion Transformer for Multi-Modal Action Anticipation. WACV 2023: 6057-6066
- [i3] Zeyun Zhong, Manuel Martin, Michael Voit, Juergen Gall, Jürgen Beyerer: A Survey on Deep Learning Techniques for Action Anticipation. CoRR abs/2309.17257 (2023)
- [i2] Zeyun Zhong, Chengzhi Wu, Manuel Martin, Michael Voit, Juergen Gall, Jürgen Beyerer: DiffAnt: Diffusion Models for Action Anticipation. CoRR abs/2311.15991 (2023)

2022
- [j8] Sahar Deppe, Marc Brünninghaus, Michael Voit, Carsten Röcker: Anwendungsszenarien für AR in der Produktion: Use Cases und Technologielösungen [Application scenarios for AR in production: use cases and technology solutions]. HMD Prax. Wirtsch. 59(1): 351-366 (2022)
- [c29] Jutta Hild, Gerrit Holzbach, Sebastian Maier, Florian van de Camp, Michael Voit, Elisabeth Peinsipp-Byma: Gaze-Enhanced User Interface for Real-Time Video Surveillance. HCI (48) 2022: 46-53
- [c28] Frederik Diederichs, Christoph Wannemacher, Fabian Faller, Martin Mikolajewski, Manuel Martin, Michael Voit, Harald Widlroither, Eike Schmidt, Doreen Engelhardt, Lena Rittger, Vahid Hashemi, Manya Sahakyan, Massimo Romanelli, Bernd Kiefer, Victor Fäßler, Tobias Rößler, Marc Großerüschkamp, Andreas Kurbos, Miriam Bottesch, Pia Immoor, Arnd Engeln, Marlis Fleischmann, Miriam Schweiker, Anne Pagenkopf, Lesley-Ann Mathis, Daniela Piechnik: Artificial Intelligence for Adaptive, Responsive, and Level-Compliant Interaction in the Vehicle of the Future (KARLI). HCI (40) 2022: 164-171
- [i1] Zeyun Zhong, David Schneider, Michael Voit, Rainer Stiefelhagen, Jürgen Beyerer: Anticipative Feature Fusion Transformer for Multi-Modal Action Anticipation. CoRR abs/2210.12649 (2022)

2021
- [c27] Manuel Martin, Michael Voit, Rainer Stiefelhagen: An Evaluation of Different Methods for 3D-Driver-Body-Pose Estimation. ITSC 2021: 1578-1584

2020
- [c26] Jutta Hild, Michael Voit, Elisabeth Peinsipp-Byma: Estimating Immersed User States from Eye Movements: A Survey. HCI (38) 2020: 337-342
- [c25] Christian Lengenfelder, Gerrit Holzbach, Michael Voit, Jürgen Beyerer: Defect Annotation on Objects Using a Laser Remote Control. HCI (38) 2020: 535-542
- [c24] Manuel Martin, Michael Voit, Rainer Stiefelhagen: Dynamic Interaction Graphs for Driver Activity Recognition. ITSC 2020: 1-7
2010 – 2019

2019
- [j7] Michael Voit: Central limit theorems for multivariate Bessel processes in the freezing regime. J. Approx. Theory 239: 210-231 (2019)
- [j6] Sergio Andraus, Michael Voit: Central limit theorems for multivariate Bessel processes in the freezing regime II: The covariance matrices. J. Approx. Theory 246: 65-84 (2019)
- [c23] Jutta Hild, Elisabeth Peinsipp-Byma, Michael Voit, Jürgen Beyerer: Suggesting Gaze-based Selection for Surveillance Applications. AVSS 2019: 1-8
- [c22] Marco Pattke, Manuel Martin, Michael Voit: Towards a Mixed Reality Assistance System for the Inspection After Final Car Assembly. HCI (10) 2019: 536-546
- [c21] Manuel Martin, Alina Roitberg, Monica Haurilet, Matthias Horne, Simon Reiß, Michael Voit, Rainer Stiefelhagen: Drive&Act: A Multi-Modal Dataset for Fine-Grained Driver Behavior Recognition in Autonomous Vehicles. ICCV 2019: 2801-2810

2018
- [j5] Julian Ludwig, Manuel Martin, Matthias Horne, Michael Flad, Michael Voit, Rainer Stiefelhagen, Sören Hohmann: Driver observation and shared vehicle control: supporting the driver on the way back into the control loop. Autom. 66(2): 146-159 (2018)
- [c20] Jutta Hild, Michael Voit, Christian Kühnle, Jürgen Beyerer: Predicting observer's task from eye movement patterns during motion image analysis. ETRA 2018: 58:1-58:5
- [c19] Jutta Hild, Günter Saur, Patrick Petersen, Michael Voit, Elisabeth Peinsipp-Byma, Jürgen Beyerer: Evaluating User Interfaces Supporting Change Detection in Aerial Images and Aerial Image Sequences. HCI (5) 2018: 383-402
- [c18] Jutta Hild, Edmund Klaus, Jan Hendrik Hammer, Manuel Martin, Michael Voit, Elisabeth Peinsipp-Byma, Jürgen Beyerer: A Pilot Study on Gaze-Based Control of a Virtual Camera Using 360°-Video Data. HCI (6) 2018: 419-428
- [c17] Manuel Martin, Johannes Popp, Mathias Anneken, Michael Voit, Rainer Stiefelhagen: Body Pose and Context Information for Driver Secondary Task Detection. Intelligent Vehicles Symposium 2018: 2015-2021

2017
- [c16] Manuel Martin, Stephan Stuehmer, Michael Voit, Rainer Stiefelhagen: Real time driver body pose estimation for novel assistance systems. ITSC 2017: 1-7

2016
- [c15] Jan Hendrik Hammer, Michael Voit, Jürgen Beyerer: Motion segmentation and appearance change detection based 2D hand tracking. FUSION 2016: 1743-1750

2014
- [c14] Jahanzaib Imtiaz, Nils Koch, Holger Flatt, Jürgen Jasperneite, Michael Voit, Florian van de Camp: A flexible context-aware assistance system for industrial applications using camera based localization. ETFA 2014: 1-4

2013
- [j4] Michael Voit, Florian van de Camp, Joris IJsselmuiden, Alexander Schick, Rainer Stiefelhagen: Visuelle Perzeption für die multimodale Mensch-Maschine-Interaktion in und mit aufmerksamen Räumen [Visual perception for multimodal human-machine interaction in and with attentive rooms]. Autom. 61(11): 784-792 (2013)

2011
- [b1] Michael Voit: Multimodale Bestimmung des visuellen Aufmerksamkeitsfokus von Personen am Beispiel aufmerksamer Umgebungen [Multimodal determination of persons' visual focus of attention, exemplified by attentive environments]. Karlsruhe Institute of Technology, 2011

2010
- [c13] Michael Voit, Rainer Stiefelhagen: 3D user-perspective, voxel-based estimation of visual focus of attention in dynamic meeting scenarios. ICMI-MLMI 2010: 51:1-51:8
- [c12] Lukas Rybok, Michael Voit, Hazim Kemal Ekenel, Rainer Stiefelhagen: Multi-view Based Estimation of Human Upper-Body Orientation. ICPR 2010: 1558-1561
- [c11] Florian van de Camp, Michael Voit, Rainer Stiefelhagen: Efficient person identification using active cameras in a smartroom. MPVA@MM 2010: 17-22
2000 – 2009

2009
- [c10] Michael Voit, Rainer Stiefelhagen: A System for Probabilistic Joint 3D Head Tracking and Pose Estimation in Low-Resolution, Multi-view Environments. ICVS 2009: 415-424
- [p2] Michael Voit, Nicolas Gourier, Cristian Canton-Ferrer, Oswald Lanz, Rainer Stiefelhagen, Roberto Brunelli: Estimation of Head Pose. Computers in the Human Interaction Loop 2009: 33-42
- [p1] Oswald Lanz, Roberto Brunelli, Paul Chippendale, Michael Voit, Rainer Stiefelhagen: Extracting Interaction Cues: Focus of Attention, Body Pose, and Gestures. Computers in the Human Interaction Loop 2009: 87-93

2008
- [c9] Rainer Stiefelhagen, Keni Bernardin, Hazim Kemal Ekenel, Michael Voit: Tracking identities and attention in smart environments - contributions and progress in the CHIL project. FG 2008: 1-8
- [c8] Michael Voit, Rainer Stiefelhagen: Deducing the visual focus of attention from head pose estimation in dynamic multi-view meeting scenarios. ICMI 2008: 173-180
- [c7] Michael Voit, Rainer Stiefelhagen: Visual Focus of Attention in Dynamic Meeting Scenarios. MLMI 2008: 1-13

2007
- [j3] Rainer Stiefelhagen, Hazim Kemal Ekenel, Christian Fügen, Petra Gieselmann, Hartwig Holzapfel, Florian Kraft, Kai Nickel, Michael Voit, Alex Waibel: Enabling Multimodal Human-Robot Interaction for the Karlsruhe Humanoid Robot. IEEE Trans. Robotics 23(5): 840-851 (2007)
- [c6] Michael Voit, Kai Nickel, Rainer Stiefelhagen: Head Pose Estimation in Single- and Multi-view Environments - Results on the CLEAR'07 Benchmarks. CLEAR 2007: 307-316

2006
- [j2] Rainer Stiefelhagen, Keni Bernardin, Hazim Kemal Ekenel, John W. McDonough, Kai Nickel, Michael Voit, Matthias Wölfel: Audio-visual perception of a lecturer in a smart seminar room. Signal Process. 86(12): 3518-3533 (2006)
- [c5] Michael Voit, Kai Nickel, Rainer Stiefelhagen: Neural Network-Based Head Pose Estimation and Multi-view Fusion. CLEAR 2006: 291-298
- [c4] Michael Voit, Rainer Stiefelhagen: Tracking head pose and focus of attention with multiple far-field cameras. ICMI 2006: 281-286
- [c3] Michael Voit, Kai Nickel, Rainer Stiefelhagen: A Bayesian Approach for Multi-view Head Pose Estimation. MFI 2006: 31-34

2005
- [c2] Michael Voit, Kai Nickel, Rainer Stiefelhagen: Multi-View Head Pose Estimation using Neural Networks. CRV 2005: 347-352
- [c1] Michael Voit, Kai Nickel, Rainer Stiefelhagen: Estimating the Lecturer's Head Pose in Seminar Scenarios - A Multi-view Approach. MLMI 2005: 230-240

2003
- [j1] Michael Voit: A product formula for orthogonal polynomials associated with infinite distance-transitive graphs. J. Approx. Theory 120(2): 337-354 (2003)
last updated on 2025-01-21 00:14 CET by the dblp team
all metadata released as open data under CC0 1.0 license