Quan Huynh-Thu
Books and Theses
- 2009
  - [b1] Quan Huynh-Thu: Perceptual quality assessment of communications-grade video with temporal artefacts. University of Essex, Colchester, UK, 2009
Journal Articles
- 2012
  - [j9] Margaret H. Pinson, Lucjan Janowski, Romuald Pépion, Quan Huynh-Thu, Christian Schmidmer, Phillip Corriveau, Audrey Younkin, Patrick Le Callet, Marcus Barkowsky, William Ingram: The Influence of Subjects and Environment on Audiovisual Subjective Tests: An International Study. IEEE J. Sel. Top. Signal Process. 6(6): 640-651 (2012)
  - [j8] Julien Fleureau, Philippe Guillotel, Quan Huynh-Thu: Physiological-Based Affect Event Detector for Entertainment Video Applications. IEEE Trans. Affect. Comput. 3(3): 379-385 (2012)
  - [j7] Quan Huynh-Thu, Mohammed Ghanbari: The accuracy of PSNR in predicting video quality for different video scenes and frame rates. Telecommun. Syst. 49(1): 35-48 (2012)
- 2011
  - [j6] Fatih Porikli, Al Bovik, Chris Plack, Ghassan Alregib, Joyce E. Farrell, Patrick Le Callet, Quan Huynh-Thu, Sebastian Möller, Stefan Winkler: Multimedia Quality Assessment [DSP Forum]. IEEE Signal Process. Mag. 28(6): 164-177 (2011)
  - [j5] Quan Huynh-Thu, Marie-Neige Garcia, Filippo Speranza, Philip J. Corriveau, Alexander Raake: Study of Rating Scales for Subjective Quality Assessment of High-Definition Video. IEEE Trans. Broadcast. 57(1): 1-14 (2011)
  - [j4] Quan Huynh-Thu, Marcus Barkowsky, Patrick Le Callet: The Importance of Visual Attention in Improving the 3D-TV Viewing Experience: Overview and New Perspectives. IEEE Trans. Broadcast. 57(2): 421-431 (2011)
- 2010
  - [j3] Quan Huynh-Thu, Mohammed Ghanbari: Modelling of spatio-temporal interaction for video quality assessment. Signal Process. Image Commun. 25(7): 535-546 (2010)
- 2008
  - [j2] Quan Huynh-Thu, Mohammed Ghanbari: Temporal Aspect of Perceived Quality in Mobile Video Broadcasting. IEEE Trans. Broadcast. 54(3): 641-651 (2008)
- 2006
  - [j1] Matthew D. Brotherton, Quan Huynh-Thu, David S. Hands, Kjell Brunnström: Subjective Multimedia Quality Assessment. IEICE Trans. Fundam. Electron. Commun. Comput. Sci. 89-A(11): 2920-2932 (2006)
Conference and Workshop Papers
- 2013
  - [c16] Quan Huynh-Thu, Cyril Vienne, Laurent Blondé: Visual storytelling in 2D and stereoscopic 3D video: effect of blur on visual attention. Human Vision and Electronic Imaging 2013: 865112
  - [c15] Margaret H. Pinson, Christian Schmidmer, Lucjan Janowski, Romuald Pépion, Quan Huynh-Thu, Phillip Corriveau, Audrey Younkin, Patrick Le Callet, Marcus Barkowsky, William Ingram: Subjective and objective evaluation of an audiovisual subjective dataset for research and development. QoMEX 2013: 30-31
- 2012
  - [c14] Dar'ya Khaustova, Laurent Blondé, Quan Huynh-Thu, Cyril Vienne, Didier Doyen: Method and simulation to study 3D crosstalk perception. SD&A 2012: 82880X
  - [c13] Lasith Yasakethu, Laurent Blondé, Didier Doyen, Quan Huynh-Thu: 3D cinema to 3DTV content adaptation. SD&A 2012: 82880G
  - [c12] Laurent Blondé, Didier Doyen, Cédric Thébault, Quan Huynh-Thu, D. Stoenescu, E. Daniel, Jean-Louis de Bougrenet de la Tocnaye, S. Bentahar: Towards adapting current 3DTV for an improved 3D experience. SD&A 2012: 82881Q
- 2011
  - [c11] Paul Kerbiriou, Guillaume Boisson, Korian Sidibé, Quan Huynh-Thu: Depth-based representations: Which coding format for 3D video broadcast applications? SD&A 2011: 78630D
  - [c10] Quan Huynh-Thu, Luca Schiatti: Examination of 3D visual attention in stereoscopic video content. Human Vision and Electronic Imaging 2011: 78650J
  - [c9] Lasith Yasakethu, Laurent Blondé, Didier Doyen, Quan Huynh-Thu: Transforming 3D cinema content for an enhanced 3DTV experience. IC3D 2011: 1-8
- 2010
  - [c8] Dominique Thoreau, Aurélie Martin, Edouard François, Jérôme Viéron, Quan Huynh-Thu: Sparse shift-DCT spatial prediction. ICIP 2010: 3385-3388
  - [c7] Quan Huynh-Thu, Patrick Le Callet, Marcus Barkowsky: Video quality assessment: From 2D to 3D - Challenges and future trends. ICIP 2010: 4025-4028
- 2009
  - [c6] Quan Huynh-Thu, Mohammed Ghanbari: No-reference temporal quality metric for video impaired by frame freezing artefacts. ICIP 2009: 2221-2224
- 2008
  - [c5] Quan Huynh-Thu, Mohammed Ghanbari: Asymmetrical temporal masking near video scene change. ICIP 2008: 2568-2571
- 2006
  - [c4] Quan Huynh-Thu, Mohammed Ghanbari, David S. Hands, Matthew D. Brotherton: Subjective video quality evaluation for multimedia applications. Human Vision and Electronic Imaging 2006: 60571D
- 2005
  - [c3] Quan Huynh-Thu, Mohammed Ghanbari: A comparison of subjective video quality assessment methods for low-bit rate and low-resolution video. SIP 2005: 69-76
- 2002
  - [c2] Quan Huynh-Thu, Mitsuhiko Meguro, Masahide Kaneko: Skin-Color-Based Image Segmentation and Its Application in Face Detection. MVA 2002: 48-51
  - [c1] Quan Huynh-Thu, Mitsuhiko Meguro, Masahide Kaneko: Skin-Color Extraction in Images with Complex Background and Varying Illumination. WACV 2002: 280-