Runnan Li
2020 – today
- 2023
  - [j1] Jun Ling, Xu Tan, Liyang Chen, Runnan Li, Yuchao Zhang, Sheng Zhao, Li Song: StableFace: Analyzing and Improving Motion Stability for Talking Face Generation. IEEE J. Sel. Top. Signal Process. 17(6): 1232-1247 (2023)
  - [c25] Zenghao Chai, Tianke Zhang, Tianyu He, Xu Tan, Tadas Baltrusaitis, HsiangTao Wu, Runnan Li, Sheng Zhao, Chun Yuan, Jiang Bian: HiFace: High-Fidelity 3D Face Reconstruction by Learning Static and Dynamic Details. ICCV 2023: 9053-9064
  - [c24] Liyang Chen, Zhiyong Wu, Runnan Li, Weihong Bao, Jun Ling, Xu Tan, Sheng Zhao: VAST: Vivify Your Talking Avatar via Zero-Shot Expressive Facial Style Transfer. ICCV (Workshops) 2023: 2969-2979
  - [i6] Shengmeng Li, Luping Liu, Zenghao Chai, Runnan Li, Xu Tan: ERA-Solver: Error-Robust Adams Solver for Fast Sampling of Diffusion Probabilistic Models. CoRR abs/2301.12935 (2023)
  - [i5] Zenghao Chai, Tianke Zhang, Tianyu He, Xu Tan, Tadas Baltrusaitis, HsiangTao Wu, Runnan Li, Sheng Zhao, Chun Yuan, Jiang Bian: HiFace: High-Fidelity 3D Face Reconstruction by Learning Static and Dynamic Details. CoRR abs/2303.11225 (2023)
  - [i4] Liyang Chen, Zhiyong Wu, Runnan Li, Weihong Bao, Jun Ling, Xu Tan, Sheng Zhao: VAST: Vivify Your Talking Avatar via Zero-Shot Expressive Facial Style Transfer. CoRR abs/2308.04830 (2023)
- 2022
  - [c23] Liyang Chen, Zhiyong Wu, Jun Ling, Runnan Li, Xu Tan, Sheng Zhao: Transformer-S2A: Robust and Efficient Speech-to-Animation. ICASSP 2022: 7247-7251
  - [i3] Jun Ling, Xu Tan, Liyang Chen, Runnan Li, Yuchao Zhang, Sheng Zhao, Li Song: StableFace: Analyzing and Improving Motion Stability for Talking Face Generation. CoRR abs/2208.13717 (2022)
  - [i2] Anni Tang, Tianyu He, Xu Tan, Jun Ling, Runnan Li, Sheng Zhao, Li Song, Jiang Bian: Memories are One-to-Many Mapping Alleviators in Talking Face Generation. CoRR abs/2212.05005 (2022)
- 2021
  - [i1] Liyang Chen, Zhiyong Wu, Jun Ling, Runnan Li, Xu Tan, Sheng Zhao: Transformer-S2A: Robust and Efficient Speech-to-Animation. CoRR abs/2111.09771 (2021)
- 2020
  - [c22] Xiangyu Liang, Zhiyong Wu, Runnan Li, Yanqing Liu, Sheng Zhao, Helen Meng: Enhancing Monotonicity for Robust Autoregressive Transformer TTS. INTERSPEECH 2020: 3181-3185
2010 – 2019
- 2019
  - [c21] Liangqi Liu, Zhiyong Wu, Runnan Li, Jia Jia, Helen Meng: Learning Contextual Representation with Convolution Bank and Multi-head Self-attention for Speech Emphasis Detection. APSIPA 2019: 922-926
  - [c20] Runnan Li, Zhiyong Wu, Jia Jia, Sheng Zhao, Helen Meng: Dilated Residual Network with Multi-head Self-attention for Speech Emotion Recognition. ICASSP 2019: 6675-6679
  - [c19] Hui Lu, Zhiyong Wu, Runnan Li, Shiyin Kang, Jia Jia, Helen Meng: A Compact Framework for Voice Conversion Using Wavenet Conditioned on Phonetic Posteriorgrams. ICASSP 2019: 6810-6814
  - [c18] Dongyang Dai, Zhiyong Wu, Runnan Li, Xixin Wu, Jia Jia, Helen Meng: Learning Discriminative Features from Spectrograms Using Center Loss for Speech Emotion Recognition. ICASSP 2019: 7405-7409
  - [c17] Runnan Li, Zhiyong Wu, Jia Jia, Yaohua Bu, Sheng Zhao, Helen Meng: Towards Discriminative Representation Learning for Speech Emotion Recognition. IJCAI 2019: 5060-5066
  - [c16] Hui Lu, Zhiyong Wu, Dongyang Dai, Runnan Li, Shiyin Kang, Jia Jia, Helen Meng: One-Shot Voice Conversion with Global Speaker Embeddings. INTERSPEECH 2019: 669-673
  - [c15] Jingbei Li, Zhiyong Wu, Runnan Li, Pengpeng Zhi, Song Yang, Helen Meng: Knowledge-Based Linguistic Encoding for End-to-End Mandarin Text-to-Speech Synthesis. INTERSPEECH 2019: 4494-4498
- 2018
  - [c14] Jingbei Li, Zhiyong Wu, Runnan Li, Mingxing Xu, Kehua Lei, Lianhong Cai: Multi-modal Multi-scale Speech Expression Evaluation in Computer-Assisted Language Learning. AIMS 2018: 16-28
  - [c13] Ziwei Zhu, Zhiyong Wu, Runnan Li, Yishuang Ning, Helen Meng: Learning Frame-Level Recurrent Neural Networks Representations for Query-by-Example Spoken Term Detection on Mobile Devices. AIMS 2018: 55-66
  - [c12] Runnan Li, Zhiyong Wu, Yuchen Huang, Jia Jia, Helen Meng, Lianhong Cai: Emphatic Speech Generation with Conditioned Input Layer and Bidirectional LSTMS for Expressive Speech Synthesis. ICASSP 2018: 5129-5133
  - [c11] Shaoguang Mao, Zhiyong Wu, Runnan Li, Xu Li, Helen Meng, Lianhong Cai: Applying Multitask Learning to Acoustic-Phonemic Model for Mispronunciation Detection and Diagnosis in L2 English Speech. ICASSP 2018: 6254-6258
  - [c10] Shaoguang Mao, Zhiyong Wu, Xu Li, Runnan Li, Xixin Wu, Helen Meng: Integrating Articulatory Features into Acoustic-Phonemic Model for Mispronunciation Detection and Diagnosis in L2 English Speech. ICME 2018: 1-6
  - [c9] Ziwei Zhu, Zhiyong Wu, Runnan Li, Helen Meng, Lianhong Cai: Siamese Recurrent Auto-Encoder Representation for Query-by-Example Spoken Term Detection. INTERSPEECH 2018: 102-106
  - [c8] Long Zhang, Jia Jia, Fanbo Meng, Suping Zhou, Wei Chen, Cunjun Zhang, Runnan Li: Emphasis Detection for Voice Dialogue Applications Using Multi-channel Convolutional Bidirectional Long Short-Term Memory Network. ISCSLP 2018: 210-214
  - [c7] Runnan Li, Zhiyong Wu, Jia Jia, Jingbei Li, Wei Chen, Helen Meng: Inferring User Emotive State Changes in Realistic Human-Computer Conversational Dialogs. ACM Multimedia 2018: 136-144
- 2017
  - [c6] Yishuang Ning, Jia Jia, Zhiyong Wu, Runnan Li, Yongsheng An, Yanfeng Wang, Helen M. Meng: Multi-Task Deep Learning for User Intention Understanding in Speech Interaction Systems. AAAI 2017: 161-167
  - [c5] Runnan Li, Zhiyong Wu, Xunying Liu, Helen M. Meng, Lianhong Cai: Multi-task learning of structured output layer bidirectional LSTMS for speech synthesis. ICASSP 2017: 5510-5514
  - [c4] Yishuang Ning, Zhiyong Wu, Runnan Li, Jia Jia, Mingxing Xu, Helen M. Meng, Lianhong Cai: Learning cross-lingual knowledge with multilingual BLSTM for emphasis detection with limited training data. ICASSP 2017: 5615-5619
  - [c3] Yuchen Huang, Zhiyong Wu, Runnan Li, Helen Meng, Lianhong Cai: Multi-Task Learning for Prosodic Structure Generation Using BLSTM RNN with Structured Output Layer. INTERSPEECH 2017: 779-783
  - [c2] Runnan Li, Zhiyong Wu, Yishuang Ning, Lifa Sun, Helen Meng, Lianhong Cai: Spectro-Temporal Modelling with Time-Frequency LSTM and Structured Output Layer for Voice Conversion. INTERSPEECH 2017: 3409-3413
- 2016
  - [c1] Runnan Li, Zhiyong Wu, Helen M. Meng, Lianhong Cai: DBLSTM-based multi-task learning for pitch transformation in voice conversion. ISCSLP 2016: 1-5