Chi-Chun Lee
2020 – today
- 2024
- [j28] Shreya G. Upadhyay, Woan-Shiuan Chien, Bo-Hao Su, Chi-Chun Lee: Learning With Rater-Expanded Label Space to Improve Speech Emotion Recognition. IEEE Trans. Affect. Comput. 15(3): 1539-1552 (2024)
- [j27] Chin-Po Chen, Ho-Hsien Pan, Susan Shur-Fen Gau, Chi-Chun Lee: Using Measures of Vowel Space for Autistic Traits Characterization. IEEE ACM Trans. Audio Speech Lang. Process. 32: 591-607 (2024)
- [c133] An-Yan Chang, Jing-Tong Tzeng, Huan-Yu Chen, Chih-Wei Sung, Chun-Hsiang Huang, Edward Pei-Chuan Huang, Chi-Chun Lee: GaP-Aug: Gamma Patch-Wise Correction Augmentation Method for Respiratory Sound Classification. ICASSP 2024: 551-555
- [c132] Po-Chen Lin, Jeng-Lin Li, Woan-Shiuan Chien, Chi-Chun Lee: In-The-Wild Physiological-Based Stress Detection Using Federated Strategy. ICASSP 2024: 1681-1685
- [c131] Woan-Shiuan Chien, Shreya G. Upadhyay, Chi-Chun Lee: Balancing Speaker-Rater Fairness for Gender-Neutral Speech Emotion Recognition. ICASSP 2024: 11861-11865
- [c130] Wei-Tung Hsu, Chin-Po Chen, Chi-Chun Lee: Concealing Medical Condition by Node Toggling in ASR for Dementia Patients. ICASSP 2024: 12496-12500
- [i6] Haibin Wu, Huang-Cheng Chou, Kai-Wei Chang, Lucas Goncalves, Jiawei Du, Jyh-Shing Roger Jang, Chi-Chun Lee, Hung-Yi Lee: EMO-SUPERB: An In-depth Look at Speech Emotion Recognition. CoRR abs/2402.13018 (2024)
- [i5] Shreya G. Upadhyay, Carlos Busso, Chi-Chun Lee: A Layer-Anchoring Strategy for Enhancing Cross-Lingual Speech Emotion Recognition. CoRR abs/2407.04966 (2024)
- [i4] Wenze Ren, Yi-Cheng Lin, Huang-Cheng Chou, Haibin Wu, Yi-Chiao Wu, Chi-Chun Lee, Hung-yi Lee, Yu Tsao: EMO-Codec: An In-Depth Look at Emotion Preservation capacity of Legacy and Neural Codec Models With Subjective and Objective Evaluations. CoRR abs/2407.15458 (2024)
- [i3] Huang-Cheng Chou, Haibin Wu, Chi-Chun Lee: Stimulus Modality Matters: Impact of Perceptual Evaluations from Different Modalities on Speech Emotion Recognition System Performance. CoRR abs/2409.10762 (2024)
- 2023
- [j26] Chi-Chun Lee, Theodora Chaspari, Emily Mower Provost, Shrikanth S. Narayanan: An Engineering View on Emotions and Speech: From Analysis and Predictive Models to Responsible Human-Centered Applications. Proc. IEEE 111(10): 1142-1158 (2023)
- [j25] Hao-Chun Yang, Chi-Chun Lee: A Media-Guided Attentive Graphical Network for Personality Recognition Using Physiology. IEEE Trans. Affect. Comput. 14(2): 931-943 (2023)
- [j24] Chun-Min Chang, Gao-Yi Chao, Chi-Chun Lee: Enforcing Semantic Consistency for Cross Corpus Emotion Prediction Using Adversarial Discrepancy Learning in Emotion. IEEE Trans. Affect. Comput. 14(2): 1098-1109 (2023)
- [j23] Chun-Min Chang, Chi-Chun Lee: Learning Enhanced Acoustic Latent Representation for Small Scale Affective Corpus with Adversarial Cross Corpora Integration. IEEE Trans. Affect. Comput. 14(2): 1308-1321 (2023)
- [j22] Bo-Hao Su, Chi-Chun Lee: Unsupervised Cross-Corpus Speech Emotion Recognition Using a Multi-Source Cycle-GAN. IEEE Trans. Affect. Comput. 14(3): 1991-2004 (2023)
- [j21] Jeng-Lin Li, Chi-Chun Lee: An Enroll-to-Verify Approach for Cross-Task Unseen Emotion Class Recognition. IEEE Trans. Affect. Comput. 14(4): 3066-3077 (2023)
- [j20] Huan-Yu Chen, Hui-Min Wang, Ching-Heng Lin, Rob Yang, Chi-Chun Lee: Lung Cancer Prediction Using Electronic Claims Records: A Transformer-Based Approach. IEEE J. Biomed. Health Informatics 27(12): 6062-6073 (2023)
- [j19] Yun-Shao Lin, Yi-Ching Liu, Chi-Chun Lee: An Interaction-process-guided Framework for Small-group Performance Prediction. ACM Trans. Multim. Comput. Commun. Appl. 19(2): 58:1-58:25 (2023)
- [c129] Luz Martinez-Lucas, Ali N. Salman, Seong-Gyun Leem, Shreya G. Upadhyay, Chi-Chun Lee, Carlos Busso: Analyzing the Effect of Affective Priming on Emotional Annotations. ACII 2023: 1-8
- [c128] Shreya G. Upadhyay, Woan-Shiuan Chien, Bo-Hao Su, Lucas Goncalves, Ya-Tse Wu, Ali N. Salman, Carlos Busso, Chi-Chun Lee: An Intelligent Infrastructure Toward Large Scale Naturalistic Affective Speech Corpora Collection. ACII 2023: 1-8
- [c127] Woan-Shiuan Chien, Chi-Chun Lee: Achieving Fair Speech Emotion Recognition via Perceptual Fairness. ICASSP 2023: 1-5
- [c126] Shreya G. Upadhyay, Luz Martinez-Lucas, Bo-Hao Su, Wei-Cheng Lin, Woan-Shiuan Chien, Ya-Tse Wu, William Katz, Carlos Busso, Chi-Chun Lee: Phonetic Anchor-Based Transfer Learning to Facilitate Unsupervised Cross-Lingual Speech Emotion Recognition. ICASSP 2023: 1-5
- [c125] Huang-Cheng Chou, Lucas Goncalves, Seong-Gyun Leem, Chi-Chun Lee, Carlos Busso: The Importance of Calibration: Rethinking Confidence and Performance of Speech Multi-label Emotion Classifiers. INTERSPEECH 2023: 641-645
- [c124] Shao-Hao Lu, Yun-Shao Lin, Chi-Chun Lee: Speaking State Decoder with Transition Detection for Next Speaker Prediction. INTERSPEECH 2023: 1868-1872
- [c123] Ya-Tse Wu, Yuan-Ting Chang, Shao-Hao Lu, Jing-Yi Chuang, Chi-Chun Lee: A Context-Constrained Sentence Modeling for Deception Detection in Real Interrogation. INTERSPEECH 2023: 3582-3586
- [c122] Ya-Tse Wu, Chi-Chun Lee: MetricAug: A Distortion Metric-Lead Augmentation Strategy for Training Noise-Robust Speech Emotion Recognizer. INTERSPEECH 2023: 3587-3591
- [c121] Yin-Tse Lin, Bo-Hao Su, Chi-Han Lin, Shih-Chan Kuo, Jyh-Shing Roger Jang, Chi-Chun Lee: Noise-Robust Bandwidth Expansion for 8K Speech Recordings. INTERSPEECH 2023: 5107-5111
- 2022
- [j18] Jeng-Lin Li, Yun-Chun Lin, Yu-Fen Wang, Sara A. Monaghan, Bor-Sheng Ko, Chi-Chun Lee: A Chunking-for-Pooling Strategy for Cytometric Representation Learning for Automatic Hematologic Malignancy Classification. IEEE J. Biomed. Health Informatics 26(9): 4773-4784 (2022)
- [j17] Fu-Sheng Tsai, Wei-Wen Chang, Chi-Chun Lee: A Social Condition-Enhanced Network for Recognizing Power Distance Using Expressive Prosody and Intrinsic Brain Connectivity. IEEE Trans. Multim. 24: 2046-2057 (2022)
- [c120] Woan-Shiuan Chien, Shreya G. Upadhyay, Wei-Cheng Lin, Ya-Tse Wu, Bo-Hao Su, Carlos Busso, Chi-Chun Lee: Monologue versus Conversation: Differences in Emotion Perception and Acoustic Expressivity. ACII 2022: 1-7
- [c119] Po-Chien Hsu, Jeng-Lin Li, Chi-Chun Lee: Romantic and Family Movie Database: Towards Understanding Human Emotion and Relationship via Genre-Dependent Movies. ACII 2022: 1-8
- [c118] Chun-Chia Chiu, Jeng-Lin Li, Yu-Fen Wang, Bor-Sheng Ko, Chi-Chun Lee: A Coarse-to-Fine Pathology Patch Selection for Improving Gene Mutation Prediction in Acute Myeloid Leukemia. EMBC 2022: 3207-3210
- [c117] Meng-Han Lin, Jeng-Lin Li, Chi-Chun Lee: Improving Multimodal Movie Scene Segmentation Using Mixture of Acoustic Experts. EUSIPCO 2022: 6-10
- [c116] Shreya G. Upadhyay, Bo-Hao Su, Chi-Chun Lee: Improving Induced Valence Recognition by Integrating Acoustic Sound Semantics in Movies. EUSIPCO 2022: 16-20
- [c115] Ya-Tse Wu, Jeng-Lin Li, Chi-Chun Lee: An Audio-Saliency Masking Transformer for Audio Emotion Classification in Movies. ICASSP 2022: 4813-4817
- [c114] Huang-Cheng Chou, Wei-Cheng Lin, Chi-Chun Lee, Carlos Busso: Exploiting Annotators' Typed Description of Emotion Perception to Maximize Utilization of Ratings for Speech Emotion Recognition. ICASSP 2022: 7717-7721
- [c113] Chi-Yu Chen, Po-Chien Hsu, Tang-Chen Chang, Huan Ho, Min-Chun Hu, Chi-Chun Lee, Hui-Ju Chen, Mary Hsin-Ju Ko, Chia-Fan Lee, Pei-Yi Wang: Computer Vision Based Cognition Assessment for Developmental-Behavioral Screening. ICDH 2022: 151-156
- [c112] Huang-Cheng Chou, Chi-Chun Lee, Carlos Busso: Exploiting Co-occurrence Frequency of Emotions in Perceptual Evaluations To Train A Speech Emotion Classifier. INTERSPEECH 2022: 161-165
- [c111] Chun-Yu Chen, Yun-Shao Lin, Chi-Chun Lee: Emotion-Shift Aware CRF for Decoding Emotion Sequence in Conversation. INTERSPEECH 2022: 1148-1152
- [c110] Bo-Hao Su, Chi-Chun Lee: Vaccinating SER to Neutralize Adversarial Attacks with Self-Supervised Augmentation Strategy. INTERSPEECH 2022: 1153-1157
- [c109] Yu-Lin Huang, Bo-Hao Su, Y.-W. Peter Hong, Chi-Chun Lee: An Attention-Based Method for Guiding Attribute-Aligned Speech Representation Learning. INTERSPEECH 2022: 5030-5034
- [i2] Wan-Ting Hsieh, Jeremy Lefort-Besnard, Hao-Chun Yang, Li-Wei Kuo, Chi-Chun Lee: Behavior Score-Embedded Brain Encoder Network for Improved Classification of Alzheimer Disease Using Resting State fMRI. CoRR abs/2211.09735 (2022)
- 2021
- [j16] Chi-Chun Lee, Kusha Sridhar, Jeng-Lin Li, Wei-Cheng Lin, Bo-Hao Su, Carlos Busso: Deep Representation Learning for Affective Speech Signal Analysis and Processing: Preventing unwanted signal disparities. IEEE Signal Process. Mag. 38(6): 22-38 (2021)
- [c108] Huang-Cheng Chou, Woan-Shiuan Chien, Da-Cheng Juan, Chi-Chun Lee: "Does it Matter When I Think You Are Lying?" Improving Deception Detection by Integrating Interlocutor's Judgements in Conversations. ACL/IJCNLP (Findings) 2021: 1846-1860
- [c107] Ya-Lin Huang, Hao-Chun Yang, Chi-Chun Lee: Federated Learning via Conditional Mutual Learning for Alzheimer's Disease Classification on T1w MRI. EMBC 2021: 2427-2432
- [c106] Woan-Shiuan Chien, Huang-Cheng Chou, Chi-Chun Lee: Belongingness and Satisfaction Recognition from Physiological Synchrony with A Group-Modulated Attentive BLSTM under Small-group Conversation. ICMI Companion 2021: 220-229
- [c105] Woan-Shiuan Chien, Huang-Cheng Chou, Chi-Chun Lee: Self-assessed Emotion Classification from Acoustic and Physiological Features within Small-group Conversation. ICMI Companion 2021: 230-239
- [c104] Yu-Lin Huang, Bo-Hao Su, Y.-W. Peter Hong, Chi-Chun Lee: An Attribute-Aligned Strategy for Learning Speech Representation. Interspeech 2021: 1179-1183
- [c103] Bo-Hao Su, Chi-Chun Lee: A Conditional Cycle Emotion GAN for Cross Corpus Speech Emotion Recognition. SLT 2021: 351-357
- [c102] Huan-Yu Chen, Yun-Shao Lin, Chi-Chun Lee: Through the Words of Viewers: Using Comment-Content Entangled Network for Humor Impression Recognition. SLT 2021: 1058-1064
- [i1] Yu-Lin Huang, Bo-Hao Su, Y.-W. Peter Hong, Chi-Chun Lee: An Attribute-Aligned Strategy for Learning Speech Representation. CoRR abs/2106.02810 (2021)
- 2020
- [j15] Jeng-Lin Li, Tzu-Yun Huang, Chun-Min Chang, Chi-Chun Lee: A Waveform-Feature Dual Branch Acoustic Embedding Network for Emotion Recognition. Frontiers Comput. Sci. 2: 13 (2020)
- [j14] Yun-Shao Lin, Susan Shur-Fen Gau, Chi-Chun Lee: A Multimodal Interlocutor-Modulated Attentional BLSTM for Classifying Autism Subgroups During Clinical Interviews. IEEE J. Sel. Top. Signal Process. 14(2): 299-311 (2020)
- [j13] Wei-Cheng Lin, Chi-Chun Lee: Computational Analyses of Thin-Sliced Behavior Segments in Session-Level Affect Perception. IEEE Trans. Affect. Comput. 11(4): 560-573 (2020)
- [c101] Chun-Min Chang, Huan-Yu Chen, Hsiang-Chun Chen, Chi-Chun Lee: Sensing with Contexts: Crying Reason Classification for Infant Care Center with Environmental Fusion. APSIPA 2020: 314-318
- [c100] Huang-Cheng Chou, Chi-Chun Lee: "Your Behavior Makes Me Think It Is a Lie": Recognizing Perceived Deception using Multimodal Data in Dialog Games. APSIPA 2020: 393-402
- [c99] Hao-Chun Yang, Chi-Chun Lee: From Intended to Subjective: A Conditional Tensor Fusion Network for Recognizing Self-Reported Emotion Using Physiology. APSIPA 2020: 900-904
- [c98] Ming-Shan Gao, Fu-Sheng Tsai, Chi-Chun Lee: Learning a Phenotypic-Attribute Attentional Brain Connectivity Embedding for ADHD Classification using rs-fMRI. EMBC 2020: 5472-5475
- [c97] Jeng-Lin Li, Ting-Yu Chang, Yu-Fen Wang, Bor-Sheng Ko, Jih-Luh Tang, Chi-Chun Lee: A Knowledge-Reserved Distillation with Complementary Transfer for Automated FC-based Classification Across Hematological Malignancies. EMBC 2020: 5482-5485
- [c96] Wan-Ting Hsieh, Jeremy Lefort-Besnard, Hao-Chun Yang, Li-Wei Kuo, Chi-Chun Lee: Behavior Score-Embedded Brain Encoder Network for Improved Classification of Alzheimer Disease Using Resting State fMRI. EMBC 2020: 5486-5489
- [c95] Chen-Ying Hung, Huan-Yu Chen, Lawrence J. K. Wee, Ching-Heng Lin, Chi-Chun Lee: Deriving A Novel Health Index Using A Large-Scale Population Based Electronic Health Record With Deep Networks. EMBC 2020: 5872-5875
- [c94] Ya-Lin Huang, Wan-Ting Hsieh, Hao-Chun Yang, Chi-Chun Lee: Conditional Domain Adversarial Transfer for Robust Cross-Site ADHD Classification Using Functional MRI. ICASSP 2020: 1190-1194
- [c93] Hao-Chun Yang, Chi-Chun Lee: A Siamese Content-Attentive Graph Convolutional Network for Personality Recognition Using Physiology. ICASSP 2020: 4362-4366
- [c92] Sung-Lin Yeh, Yun-Shao Lin, Chi-Chun Lee: A Dialogical Emotion Decoder for Speech Emotion Recognition in Spoken Dialog. ICASSP 2020: 6479-6483
- [c91] Yun-Shao Lin, Chi-Chun Lee: Predicting Performance Outcome with a Conversational Graph Convolutional Network for Small Group Interactions. ICASSP 2020: 8044-8048
- [c90] Chin-Po Chen, Susan Shur-Fen Gau, Chi-Chun Lee: Learning Converse-Level Multimodal Embedding to Assess Social Deficit Severity for Autism Spectrum Disorder. ICME 2020: 1-6
- [c89] Jeng-Lin Li, Chi-Chun Lee: Using Speaker-Aligned Graph Memory Block in Multimodally Attentive Emotion Recognition Network. INTERSPEECH 2020: 389-393
- [c88] Bo-Hao Su, Chun-Min Chang, Yun-Shao Lin, Chi-Chun Lee: Improving Speech Emotion Recognition Using Graph Attentive Bi-Directional Gated Recurrent Unit Network. INTERSPEECH 2020: 506-510
- [c87] Sung-Lin Yeh, Yun-Shao Lin, Chi-Chun Lee: Speech Representation Learning for Emotion Recognition Using End-to-End ASR with Factorized Adaptation. INTERSPEECH 2020: 536-540
- [c86] Shreya G. Upadhyay, Bo-Hao Su, Chi-Chun Lee: Attentive Convolutional Recurrent Neural Network Using Phoneme-Level Acoustic Representation for Rare Sound Event Detection. INTERSPEECH 2020: 3102-3106
- [c85] Shun-Chang Zhong, Bo-Hao Su, Wei Huang, Yi-Ching Liu, Chi-Chun Lee: Predicting Collaborative Task Performance Using Graph Interlocutor Acoustic Network in Small Group Interaction. INTERSPEECH 2020: 3122-3126
- [c84] Huang-Cheng Chou, Chi-Chun Lee: Learning to Recognize Per-Rater's Emotion Perception Using Co-Rater Training Strategy with Soft and Hard Labels. INTERSPEECH 2020: 4108-4112
- [c83] Woan-Shiuan Chien, Hao-Chun Yang, Chi-Chun Lee: Cross Corpus Physiological-based Emotion Recognition Using a Learnable Visual Semantic Graph Convolutional Network. ACM Multimedia 2020: 2999-3006
2010 – 2019
- 2019
- [j12] Chin-Po Chen, Susan Shur-Fen Gau, Chi-Chun Lee: Toward differential diagnosis of autism spectrum disorder using multimodal behavior descriptors and executive functions. Comput. Speech Lang. 56: 17-35 (2019)
- [j11] Shan-Wen Hsiao, Hung-Ching Sun, Ming-Chuan Hsieh, Ming-Hsueh Tsai, Yu Tsao, Chi-Chun Lee: Toward Automating Oral Presentation Scoring During Principal Certification Program Using Audio-Video Low-Level Behavior Profiles. IEEE Trans. Affect. Comput. 10(4): 552-567 (2019)
- [c82] Hao-Chun Yang, Chi-Chun Lee: Annotation Matters: A Comprehensive Study on Recognizing Intended, Self-reported, and Observed Emotion Labels using Physiology. ACII 2019: 1-7
- [c81] Tzu-Yun Huang, Jeng-Lin Li, Chun-Min Chang, Chi-Chun Lee: A Dual-Complementary Acoustic Embedding Network Learned from Raw Waveform for Speech Emotion Recognition. ACII 2019: 83-88
- [c80] Jeng-Lin Li, Chi-Chun Lee: Attention Learning with Retrievable Acoustic Embedding of Personality for Emotion Recognition. ACII 2019: 171-177
- [c79] Hui-Ting Hong, Jeng-Lin Li, Chun-Min Chang, Chi-Chun Lee: Improving Automatic Pain Level Recognition using Pain Site as an Auxiliary Task. ACII Workshops 2019: 284-289
- [c78] Huan-Yu Chen, Yun-Shao Lin, Chi-Chun Lee: Through the Eyes of Viewers: A Comment-Enhanced Media Content Representation for TED Talks Impression Recognition. APSIPA 2019: 414-418
- [c77] Fu-Sheng Tsai, Yi-Ming Weng, Chip-Jin Ng, Chi-Chun Lee: Pain versus Affect? An Investigation in the Relationship between Observed Emotional States and Self-Reported Pain. APSIPA 2019: 508-512
- [c76] Huang-Cheng Chou, Yi-Wen Liu, Chi-Chun Lee: Joint Learning of Conversational Temporal Dynamics and Acoustic Features for Speech Deception Detection in Dialog Games. APSIPA 2019: 1044-1050
- [c75] Jeng-Lin Li, Yu-Fen Wang, Bor-Sheng Ko, Chi-Cheng Li, Jih-Luh Tang, Chi-Chun Lee: Learning a Cytometric Deep Phenotype Embedding for Automatic Hematological Malignancies Classification. EMBC 2019: 1733-1736
- [c74] Chen-Ying Hung, Ching-Heng Lin, Chi-Sen Chang, Jeng-Lin Li, Chi-Chun Lee: Predicting Gastrointestinal Bleeding Events from Multimodal In-Hospital Electronic Health Records Using Deep Fusion Networks. EMBC 2019: 2447-2450
- [c73] Chih-Chuan Lu, Jeng-Lin Li, Yu-Fen Wang, Bor-Sheng Ko, Jih-Luh Tang, Chi-Chun Lee: A BLSTM with Attention Network for Predicting Acute Myeloid Leukemia Patient's Prognosis using Comprehensive Clinical Parameters. EMBC 2019: 2455-2458
- [c72] Chun-Min Chang, Yu-Lin Huang, Jui-Cheng Chen, Chi-Chun Lee: Improving Automatic Tremor and Movement Motor Disorder Severity Assessment for Parkinson's Disease with Deep Joint Training. EMBC 2019: 3408-3411
- [c71] Hao-Chun Yang, Chi-Chun Lee: An Attribute-invariant Variational Learning for Emotion Recognition Using Physiology. ICASSP 2019: 1184-1188
- [c70] Wan-Ting Hsieh, Hao-Chun Yang, Fu-Sheng Tsai, Chon-Wen Shyi, Chi-Chun Lee: An Event-contrastive Connectome Network for Automatic Assessment of Individual Face Processing and Memory Ability. ICASSP 2019: 1358-1362
- [c69] Wei-Hao Chang, Jeng-Lin Li, Chi-Chun Lee: Learning Semantic-preserving Space Using User Profile and Multimodal Media Content from Political Social Network. ICASSP 2019: 3990-3994
- [c68] Huang-Cheng Chou, Chi-Chun Lee: Every Rating Matters: Joint Learning of Subjective Labels and Individual Annotators for Speech Emotion Classification. ICASSP 2019: 5886-5890
- [c67] Sung-Lin Yeh, Yun-Shao Lin, Chi-Chun Lee: An Interaction-aware Attention Network for Speech Emotion Recognition in Spoken Dialogs. ICASSP 2019: 6685-6689
- [c66] Chun-Min Chang, Chi-Chun Lee: Adversarially-enriched Acoustic Code Vector Learned from Out-of-context Affective Corpus for Robust Emotion Recognition. ICASSP 2019: 7395-7399
- [c65] Ming-Ya Ko, Jeng-Lin Li, Chi-Chun Lee: Learning Minimal Intra-Genre Multimodal Embedding from Trailer Content and Reactor Expressions for Box Office Prediction. ICME 2019: 1804-1809
- [c64] Jeng-Lin Li, Chi-Chun Lee: Attentive to Individual: A Multimodal Emotion Recognition Network with Personalized Attention Profile. INTERSPEECH 2019: 211-215
- [c63] Shun-Chang Zhong, Yun-Shao Lin, Chun-Min Chang, Yi-Ching Liu, Chi-Chun Lee: Predicting Group Performances Using a Personality Composite-Network Architecture During Collaborative Task. INTERSPEECH 2019: 1676-1680
- [c62] Gao-Yi Chao, Yun-Shao Lin, Chun-Min Chang, Chi-Chun Lee: Enforcing Semantic Consistency for Cross Corpus Valence Regression from Speech Using Adversarial Discrepancy Learning. INTERSPEECH 2019: 1681-1685
- [c61] Chih-Hsiang Huang, Huang-Cheng Chou, Yi-Tong Wu, Chi-Chun Lee, Yi-Wen Liu: Acoustic Indicators of Deception in Mandarin Daily Conversations Recorded from an Interactive Game. INTERSPEECH 2019: 1731-1735
- [c60] Sung-Lin Yeh, Gao-Yi Chao, Bo-Hao Su, Yu-Lin Huang, Meng-Han Lin, Yin-Chun Tsai, Yu-Wen Tai, Zheng-Chi Lu, Chieh-Yu Chen, Tsung-Ming Tai, Chiu-Wang Tseng, Cheng-Kuang Lee, Chi-Chun Lee: Using Attention Networks and Adversarial Augmentation for Styrian Dialect Continuous Sleepiness and Baby Sound Recognition. INTERSPEECH 2019: 2398-2402
- [c59] Hui-Ting Hong, Jeng-Lin Li, Yi-Ming Weng, Chip-Jin Ng, Chi-Chun Lee: Investigating the Variability of Voice Quality and Pain Levels as a Function of Multiple Clinical Parameters. INTERSPEECH 2019: 3058-3062
- 2018
- [c58] Chen-Ying Hung, Ching-Heng Lin, Chi-Chun Lee: Improving Young Stroke Prediction by Learning with Active Data Augmenter in a Large-Scale Electronic Medical Claims Database. EMBC 2018: 5362-5365
- [c57] Wan-Ting Hsieh, Hao-Chun Yang, Ya-Tse Wu, Fu-Sheng Tsai, Li-Wei Kuo, Chi-Chun Lee: Integrating Perceivers Neural-Perceptual Responses Using a Deep Voting Fusion Network for Automatic Vocal Emotion Decoding. ICASSP 2018: 1015-1019
- [c56] Hao-Chun Yang, Fu-Sheng Tsai, Yi-Ming Weng, Chip-Jin Ng, Chi-Chun Lee: A Triplet-Loss Embedded Deep Regressor Network for Estimating Blood Pressure Changes Using Prosodic Features. ICASSP 2018: 6019-6023
- [c55] Yu-Shuo Liu, Chin-Po Chen, Susan Shur-Fen Gau, Chi-Chun Lee: Learning Lexical Coherence Representation Using LSTM Forget Gate for Children with Autism Spectrum Disorder During Story-Telling. ICASSP 2018: 6029-6033
- [c54] Wei-Hao Chang, Jeng-Lin Li, Yun-Shao Lin, Chi-Chun Lee: A Genre-Affect Relationship Network with Task-Specific Uncertainty Weighting for Recognizing Induced Emotion in Music. ICME 2018: 1-6
- [c53] Gao-Yi Chao, Chun-Min Chang, Jeng-Lin Li, Ya-Tse Wu, Chi-Chun Lee: Generating fMRI-Enriched Acoustic Vectors using a Cross-Modality Adversarial Network for Emotion Recognition. ICMI 2018: 55-62
- [c52] Yun-Shao Lin, Chi-Chun Lee: Using Interlocutor-Modulated Attention BLSTM to Predict Personality Traits in Small Group Interaction. ICMI 2018: 163-169
- [c51] Fu-Sheng Tsai, Hao-Chun Yang, Wei-Wen Chang, Chi-Chun Lee: Automatic Assessment of Individual Culture Attribute of Power Distance Using a Social Context-Enhanced Prosodic Network Representation. INTERSPEECH 2018: 436-440
- [c50] Bo-Hao Su, Sung-Lin Yeh, Ming-Ya Ko, Huan-Yu Chen, Shun-Chang Zhong, Jeng-Lin Li, Chi-Chun Lee: Self-Assessed Affect Recognition Using Fusion of Attentional BLSTM and Static Acoustic Features. INTERSPEECH 2018: 536-540
- [c49] Yun-Shao Lin, Susan Shur-Fen Gau, Chi-Chun Lee: An Interlocutor-Modulated Attentional LSTM for Differentiating between Subgroups of Autism Spectrum Disorder. INTERSPEECH 2018: 2329-2333
- [c48] Jeng-Lin Li, Chi-Chun Lee: Encoding Individual Acoustic Features Using Dyad-Augmented Deep Variational Representations for Dialog-level Emotion Recognition. INTERSPEECH 2018: 3102-3106
- [c47] Jeng-Lin Li, Yi-Ming Weng, Chip-Jin Ng, Chi-Chun Lee: Learning Conditional Acoustic Latent Representation with Gender and Age Attributes for Automatic Pain Level Recognition. INTERSPEECH 2018: 3438-3442
- [c46] Yi-Ying Kao, Hsiang-Ping Hsu, Chien-Feng Liao, Yu Tsao, Hao-Chun Yang, Jeng-Lin Li, Chi-Chun Lee, Hung-Shin Lee, Hsin-Min Wang: Automatic Detection of Speech Under Cold Using Discriminative Autoencoders and Strength Modeling with Multiple Sub-Dictionary Generation. IWAENC 2018: 416-420
- [c45] Chi-Chun Lee: Interpersonal Behavior Modeling for Personality, Affect, and Mental States Recognition and Analysis. AVEC@MM 2018: 1-2
- [c44] Chih-Chuan Lu, Jeng-Lin Li, Chi-Chun Lee: Learning an Arousal-Valence Speech Front-End Network using Media Data In-the-Wild for Emotion Recognition. AVEC@MM 2018: 99-105
- [e1] Chi-Chun Jeremy Lee, Cheng-Zen Yang, Jen-Tzung Chien: Proceedings of the 30th Conference on Computational Linguistics and Speech Processing, ROCLING 2018, Hsinchu, Taiwan, October 4-5, 2018. The Association for Computational Linguistics and Chinese Language Processing (ACLCLP) 2018, ISBN 978-986-95769-1-8
- 2017
- [j10] Tassadaq Hussain, Sabato Marco Siniscalchi, Chi-Chun Lee, Syu-Siang Wang, Yu Tsao, Wen-Hung Liao: Experimental Study on Extreme Learning Machine Applications for Speech Enhancement. IEEE Access 5: 25542-25554 (2017)
- [j9] Chun-Min Chang, Wei-Cheng Lin, Chi-Chun Lee: A Novel Trajectory-based Spatial-Temporal Spectral Features for Speech Emotion Recognition. Int. J. Comput. Linguistics Chin. Lang. Process. 22(2) (2017)
- [j8] Daniel Bone, Chi-Chun Lee, Theodora Chaspari, James Gibson, Shrikanth S. Narayanan: Signal Processing and Machine Learning for Mental Health Research and Clinical Applications [Perspectives]. IEEE Signal Process. Mag. 34(5): 196-195 (2017)
- [c43] Huang-Cheng Chou, Wei-Cheng Lin, Lien-Chiang Chang, Chyi-Chang Li, Hsi-Pin Ma, Chi-Chun Lee: NNIME: The NTHU-NTUA Chinese interactive multimodal emotion corpus. ACII 2017: 292-298
- [c42] Fu-Sheng Tsai, Yi-Ming Weng, Chip-Jin Ng, Chi-Chun Lee: Embedding stacked bottleneck vocal features in a LSTM architecture for automatic pain level classification during emergency triage. ACII 2017: 313-318
- [c41] Chun-Min Chang, Bo-Hao Su, Shih-Chen Lin, Jeng-Lin Li, Chi-Chun Lee: A bootstrapped multi-view weighted Kernel fusion framework for cross-corpus integration of multimodal emotion recognition. ACII 2017: 377-382
- [c40] Chen-Ying Hung, Wei-Chen Chen, Po-Tsun Lai, Ching-Heng Lin, Chi-Chun Lee: Comparing deep neural network and other machine learning algorithms for stroke prediction in a large-scale population-based electronic medical claims database. EMBC 2017: 3110-3113
- [c39] Chun-Min Chang, Chi-Chun Lee: Fusion of multiple emotion perspectives: Improving affect recognition through integrating cross-lingual emotion information. ICASSP 2017: 5820-5824
- [c38] Chin-Po Chen, Xian-Hong Tseng, Susan Shur-Fen Gau, Chi-Chun Lee: Computing Multimodal Dyadic Behaviors During Spontaneous Diagnosis Interviews Toward Automatic Categorization of Autism Spectrum Disorder. INTERSPEECH 2017: 2361-2365
- [c37] Yun-Shao Lin, Chi-Chun Lee: Deriving Dyad-Level Interaction Representation Using Interlocutors Structural and Expressive Multimodal Behavior Features. INTERSPEECH 2017: 2366-2370
- [c36] Ya-Tse Wu, Hsuan-Yu Chen, Yu-Hsien Liao, Li-Wei Kuo, Chi-Chun Lee: Modeling Perceivers Neural-Responses Using Lobe-Dependent Convolutional Neural Network to Improve Speech Emotion Recognition. INTERSPEECH 2017: 3261-3265
- [c35] Chun-Min Chang, Wei-Cheng Lin, Chi-Chun Lee: A Novel Trajectory-based Spatial-Temporal Spectral Features for Speech Emotion Recognition. ROCLING 2017: 52
- [c34] Huang-Cheng Chou, Chun-Min Chang, Yu-Shuo Liu, Shiuan-Kai Kao, Chi-Chun Lee: Amplifying a Sense of Emotion toward Drama-Long Short-Term Memory Recurrent Neural Network for dynamic emotion recognition. ROCLING 2017: 136-147
- 2016
- [j7] Angeliki Metallinou, Zhaojun Yang, Chi-Chun Lee, Carlos Busso, Sharon Carnicke, Shrikanth S. Narayanan: The USC CreativeIT database of multimodal dyadic interactions: from speech and full body motion capture to continuous emotional annotations. Lang. Resour. Evaluation 50(3): 497-521 (2016)
- [c33] Hsuan-Yu Chen, Yu-Hsien Liao, Heng-Tai Jan, Li-Wei Kuo, Chi-Chun Lee: A Gaussian mixture regression approach toward modeling the affective dynamics between acoustically-derived vocal arousal score (VC-AS) and internal brain fMRI bold signal response. ICASSP 2016: 5775-5779
- [c32] Wei-Cheng Lin, Chi-Chun Lee: A thin-slice perception of emotion? An information theoretic-based framework to identify locally emotion-rich behavior segments for global affect recognition. ICASSP 2016: 5790-5794
- [c31] Fu-Sheng Tsai, Ya-Ling Hsu, Wei-Chen Chen, Yi-Ming Weng, Chip-Jin Ng, Chi-Chun Lee: Toward Development and Evaluation of Pain Level-Rating Scale for Emergency Triage based on Vocal Characteristics and Facial Expressions. INTERSPEECH 2016: 92-96
- [c30] Wen-Yu Huang, Shan-Wen Hsiao, Hung-Ching Sun, Ming-Chuan Hsieh, Ming-Hsueh Tsai, Chi-Chun Lee: Enhancement of Automatic Oral Presentation Assessment System Using Latent N-Grams Word Representation and Part-of-Speech Information. INTERSPEECH 2016: 1432-1436
- [c29] Hung-Shin Lee, Yu Tsao, Chi-Chun Lee, Hsin-Min Wang, Wei-Cheng Lin, Wei-Chen Chen, Shan-Wen Hsiao, Shyh-Kang Jeng: Minimization of Regression and Ranking Losses with Shallow Neural Networks on Automatic Sincerity Evaluation. INTERSPEECH 2016: 2031-2035
- [c28] Hung-Ching Sun, Chi-Chun Lee: A Multimodal Active Learning Approach toward Identifying Samples to Label during the Development of Automatic Oral Presentation Assessment System for Pre-service Principals Certification Program [In Chinese]. ROCLING 2016
- 2015
- [j6] Po-Hsuan Chen, Chi-Chun Lee: Automating Behavior Coding for Distressed Couples Interactions Based on Stacked Sparse Autoencoder Framework using Speech-acoustic Features. Int. J. Comput. Linguistics Chin. Lang. Process. 20(2) (2015)
- [c27] Wei-Chen Chen, Po-Tsun Lai, Yu Tsao, Chi-Chun Lee: Multimodal arousal rating using unsupervised fusion technique. ICASSP 2015: 5296-5300
- [c26] Chi-Chun Lee, Daniel Bone, Shrikanth S. Narayanan: An analysis of the relationship between signal-derived vocal arousal score and human emotion production and perception. INTERSPEECH 2015: 1304-1308
- [c25] Shan-Wen Hsiao, Hung-Ching Sun, Ming-Chuan Hsieh, Ming-Hsueh Tsai, Hsin-Chih Lin, Chi-Chun Lee: A multimodal approach for automatic assessment of school principals' oral presentation during pre-service training program. INTERSPEECH 2015: 2529-2533
- [c24] Po-Hsuan Chen, Chi-Chun Lee: Automating Behavior Coding for Distressed Couples Interactions Based on Stacked Sparse Autoencoder Framework using Speech-acoustic Features [In Chinese]. ROCLING 2015
- 2014
- [j5] Chi-Chun Lee, Athanasios Katsamanis, Matthew P. Black, Brian R. Baucom, Andrew Christensen, Panayiotis G. Georgiou, Shrikanth S. Narayanan: Computing vocal entrainment: A signal-derived PCA-based quantification scheme with application to affect analysis in married couple interactions. Comput. Speech Lang. 28(2): 518-539 (2014)
- [j4] Daniel Bone, Chi-Chun Lee, Shrikanth S. Narayanan: Robust Unsupervised Arousal Rating: A Rule-Based Framework with Knowledge-Inspired Vocal Features. IEEE Trans. Affect. Comput. 5(2): 201-213 (2014)
- [c23] Daniel Bone, Chi-Chun Lee, Alexandros Potamianos, Shrikanth S. Narayanan: An investigation of vocal arousal dynamics in child-psychologist interactions using synchrony measures and a conversation-based model. INTERSPEECH 2014: 218-222
- [c22] How Jing, Ting-Yao Hu, Hung-Shin Lee, Wei-Chen Chen, Chi-Chun Lee, Yu Tsao, Hsin-Min Wang: Ensemble of machine learning algorithms for cognitive and physical speaker load detection. INTERSPEECH 2014: 447-451
- 2013
- [j3] Matthew P. Black, Athanasios Katsamanis, Brian R. Baucom, Chi-Chun Lee, Adam C. Lammert, Andrew Christensen, Panayiotis G. Georgiou, Shrikanth S. Narayanan: Toward automating a human behavioral coding system for married couples' interactions using speech acoustic features. Speech Commun. 55(1): 1-21 (2013)
- [c21] Theodora Chaspari, Daniel Bone, James Gibson, Chi-Chun Lee, Shrikanth S. Narayanan: Using physiology and language cues for modeling verbal response latencies of children with ASD. ICASSP 2013: 3702-3706
- [c20] Bo Xiao, Panayiotis G. Georgiou, Chi-Chun Lee, Brian R. Baucom, Shrikanth S. Narayanan: Head motion synchrony and its correlation to affectivity in dyadic interactions. ICME 2013: 1-6
- [c19] Daniel Bone, Chi-Chun Lee, Theodora Chaspari, Matthew P. Black, Marian E. Williams, Sungbok Lee, Pat Levitt, Shrikanth S. Narayanan: Acoustic-prosodic, turn-taking, and language cues in child-psychologist interactions for varying social demand. INTERSPEECH 2013: 2400-2404
- [c18] Daniel Bone, Chi-Chun Lee, Vikram Ramanarayanan, Shrikanth S. Narayanan, Renske S. Hoedemaker, Peter C. Gordon: Analyzing eye-voice coordination in rapid automatized naming. INTERSPEECH 2013: 2425-2429
- 2012
- [c17] Chi-Chun Lee, Athanasios Katsamanis, Brian R. Baucom, Panayiotis G. Georgiou, Shrikanth S. Narayanan: Using measures of vocal entrainment to inform outcome-related behaviors in marital conflicts. APSIPA 2012: 1-5
- [c16] Rahul Gupta, Chi-Chun Lee, Shrikanth S. Narayanan: Classification of emotional content of sighs in dyadic human interactions. ICASSP 2012: 2265-2268
- [c15] Chi-Chun Lee, Athanasios Katsamanis, Panayiotis G. Georgiou, Shrikanth S. Narayanan: Based on Isolated Saliency or Causal Integration? Toward a Better Understanding of Human Annotation Process using Multiple Instance Learning and Sequential Probability Ratio Test. INTERSPEECH 2012: 619-622
- [c14] Daniel Bone, Matthew P. Black, Chi-Chun Lee, Marian E. Williams, Pat Levitt, Sungbok Lee, Shrikanth S. Narayanan: Spontaneous-Speech Acoustic-Prosodic Features of Children with Autism and the Interacting Psychologist. INTERSPEECH 2012: 1043-1046
- [c13] Daniel Bone, Chi-Chun Lee, Shrikanth S. Narayanan: A Robust Unsupervised Arousal Rating Framework using Prosody with Cross-Corpora Evaluation. INTERSPEECH 2012: 1175-1178
- [c12] Theodora Chaspari, Chi-Chun Lee, Shrikanth S. Narayanan: Interplay between verbal response latency and physiology of children with autism during ECA interactions. INTERSPEECH 2012: 1319-1322
- [c11] Rahul Gupta, Chi-Chun Lee, Daniel Bone, Agata Rozga, Sungbok Lee, Shrikanth S. Narayanan: Acoustical analysis of engagement behavior in children. WOCCI 2012: 25-31
- 2011
- [j2]Chi-Chun Lee, Emily Mower, Carlos Busso, Sungbok Lee, Shrikanth S. Narayanan:
Emotion recognition using a hierarchical binary decision tree approach. Speech Commun. 53(9-10): 1162-1171 (2011) - [c10]Chi-Chun Lee, Athanasios Katsamanis, Matthew P. Black, Brian R. Baucom, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Affective State Recognition in Married Couples' Interactions Using PCA-Based Vocal Entrainment Measures with Multiple Instance Learning. ACII (2) 2011: 31-41 - [c9]Emily Mower, Chi-Chun Lee, James Gibson, Theodora Chaspari, Marian E. Williams, Shrikanth S. Narayanan:
Analyzing the Nature of ECA Interactions in Children with Autism. INTERSPEECH 2011: 2989-2993 - [c8]Chi-Chun Lee, Athanasios Katsamanis, Matthew P. Black, Brian R. Baucom, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
An Analysis of PCA-Based Vocal Entrainment Measures in Married Couples' Affective Spoken Interactions. INTERSPEECH 2011: 3101-3104 - 2010
- [c7]Chi-Chun Lee, Shrikanth S. Narayanan:
Predicting interruptions in dyadic spoken interactions. ICASSP 2010: 5250-5253 - [c6]Chi-Chun Lee, Matthew Black, Athanasios Katsamanis, Adam C. Lammert, Brian R. Baucom, Andrew Christensen, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Quantification of prosodic entrainment in affective spontaneous spoken interactions of married couples. INTERSPEECH 2010: 793-796 - [c5]Matthew Black, Athanasios Katsamanis, Chi-Chun Lee, Adam C. Lammert, Brian R. Baucom, Andrew Christensen, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Automatic classification of married couples' behavior using audio features. INTERSPEECH 2010: 2030-2033
2000 – 2009
- 2009
- [c4]Emily Mower, Angeliki Metallinou, Chi-Chun Lee, Abe Kazemzadeh, Carlos Busso, Sungbok Lee, Shrikanth S. Narayanan:
Interpreting ambiguous emotional expressions. ACII 2009: 1-8 - [c3]Chi-Chun Lee, Emily Mower, Carlos Busso, Sungbok Lee, Shrikanth S. Narayanan:
Emotion recognition using a hierarchical binary decision tree approach. INTERSPEECH 2009: 320-323 - [c2]Chi-Chun Lee, Carlos Busso, Sungbok Lee, Shrikanth S. Narayanan:
Modeling mutual influence of interlocutor emotion states in dyadic spoken interactions. INTERSPEECH 2009: 1983-1986 - 2008
- [j1]Carlos Busso, Murtaza Bulut, Chi-Chun Lee, Abe Kazemzadeh, Emily Mower, Samuel Kim, Jeannette N. Chang, Sungbok Lee, Shrikanth S. Narayanan:
IEMOCAP: interactive emotional dyadic motion capture database. Lang. Resour. Evaluation 42(4): 335-359 (2008) - [c1]Chi-Chun Lee, Sungbok Lee, Shrikanth S. Narayanan:
An analysis of multimodal cues of interruption in dyadic spoken interactions. INTERSPEECH 2008: 1678-1681
last updated on 2024-10-22 21:16 CEST by the dblp team
all metadata released as open data under CC0 1.0 license