Liang Ding 0006
Person information
- affiliation: JD Explore Academy, JD.com Inc., NLP Group
- affiliation: The University of Sydney, Sydney, Australia
Other persons with the same name
- Liang Ding — disambiguation page
- Liang Ding 0001 — Harbin Institute of Technology, State Key Laboratory of Robotics and System, Harbin, China
- Liang Ding 0002 — Jiangxi University of Finance and Economics, Nanchang, China
- Liang Ding 0003 — The Hong Kong University of Science and Technology, Hong Kong, China
- Liang Ding 0004 — Northeast Forestry University, Department of Mathematics, Harbin, China
- Liang Ding 0005 — Luliang University, Luliang, China
- Liang Ding 0007 — St. Jude Children's Research Hospital, Memphis, TN, USA (and 1 more)
2020 – today
- 2025
- [j15] Li Shen, Yan Sun, Zhiyuan Yu, Liang Ding, Xinmei Tian, Dacheng Tao: On Efficient Training of Large-Scale Deep Learning Models. ACM Comput. Surv. 57(3): 57:1-57:36 (2025)
- 2024
- [j14] Hao Sun, Li Shen, Qihuang Zhong, Liang Ding, Shixiang Chen, Jingwei Sun, Jing Li, Guangzhong Sun, Dacheng Tao: AdaSAM: Boosting sharpness-aware minimization with adaptive learning rate and momentum for training deep neural networks. Neural Networks 169: 506-519 (2024)
- [j13] Chuang Liu, Yibing Zhan, Xueqi Ma, Liang Ding, Dapeng Tao, Jia Wu, Wenbin Hu, Bo Du: Exploring sparsity in graph transformers. Neural Networks 174: 106265 (2024)
- [j12] Haoran Wang, Qinghua Cheng, Baosheng Yu, Yibing Zhan, Dapeng Tao, Liang Ding, Haibin Ling: Free-Form Composition Networks for Egocentric Action Recognition. IEEE Trans. Circuits Syst. Video Technol. 34(10): 9967-9978 (2024)
- [j11] Qihuang Zhong, Liang Ding, Juhua Liu, Bo Du, Dacheng Tao: PanDa: Prompt Transfer Meets Knowledge Distillation for Efficient Model Adaptation. IEEE Trans. Knowl. Data Eng. 36(9): 4835-4848 (2024)
- [j10] Qihuang Zhong, Liang Ding, Juhua Liu, Bo Du, Dacheng Tao: E2S2: Encoding-Enhanced Sequence-to-Sequence Pretraining for Language Understanding and Generation. IEEE Trans. Knowl. Data Eng. 36(12): 8037-8050 (2024)
- [j9] Jun Rao, Xv Meng, Liang Ding, Shuhan Qi, Xuebo Liu, Min Zhang, Dacheng Tao: Parameter-Efficient and Student-Friendly Knowledge Distillation. IEEE Trans. Multim. 26: 4230-4241 (2024)
- [j8] Chuang Liu, Xueqi Ma, Yibing Zhan, Liang Ding, Dapeng Tao, Bo Du, Wenbin Hu, Danilo P. Mandic: Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural Networks. IEEE Trans. Neural Networks Learn. Syst. 35(10): 14903-14917 (2024)
- [c62] Zhiyao Ren, Yibing Zhan, Liang Ding, Gaoang Wang, Chaoyue Wang, Zhongyi Fan, Dacheng Tao: Multi-Step Denoising Scheduled Sampling: Towards Alleviating Exposure Bias for Diffusion Models. AAAI 2024: 4667-4675
- [c61] Tengfei Yu, Xuebo Liu, Liang Ding, Kehai Chen, Dacheng Tao, Min Zhang: Speech Sense Disambiguation: Tackling Homophone Ambiguity in End-to-End Speech Translation. ACL (1) 2024: 8020-8035
- [c60] Hong Chen, Chengtao Lv, Liang Ding, Haotong Qin, Xiabin Zhou, Yifu Ding, Xuebo Liu, Min Zhang, Jinyang Guo, Xianglong Liu, Dacheng Tao: DB-LLM: Accurate Dual-Binarization for Efficient LLMs. ACL (Findings) 2024: 8719-8730
- [c59] Qingyu Lu, Baopu Qiu, Liang Ding, Kanjian Zhang, Tom Kocmi, Dacheng Tao: Error Analysis Prompting Enables Human-Like Translation Evaluation in Large Language Models. ACL (Findings) 2024: 8801-8816
- [c58] Keqin Peng, Liang Ding, Yancheng Yuan, Xuebo Liu, Min Zhang, Yuanxin Ouyang, Dacheng Tao: Revisiting Demonstration Selection Strategies in In-Context Learning. ACL (1) 2024: 9090-9101
- [c57] Shilong Pan, Zhiliang Tian, Liang Ding, Haoqi Zheng, Zhen Huang, Zhihua Wen, Dongsheng Li: POMP: Probability-driven Meta-graph Prompter for LLMs in Low-resource Unsupervised Neural Machine Translation. ACL (1) 2024: 9976-9992
- [c56] Qihuang Zhong, Liang Ding, Li Shen, Juhua Liu, Bo Du, Dacheng Tao: Revisiting Knowledge Distillation for Autoregressive Language Models. ACL (1) 2024: 10900-10913
- [c55] Yikun Wang, Rui Zheng, Liang Ding, Qi Zhang, Dahua Lin, Dacheng Tao: Uncertainty Aware Learning for Language Model Alignment. ACL (1) 2024: 11087-11099
- [c54] Shuai Wang, Liang Ding, Li Shen, Yong Luo, Bo Du, Dacheng Tao: OOP: Object-Oriented Programming Evaluation Benchmark for Large Language Models. ACL (Findings) 2024: 13619-13639
- [c53] Qihuang Zhong, Liang Ding, Juhua Liu, Bo Du, Dacheng Tao: ROSE Doesn't Do That: Boosting the Safety of Instruction-Tuned Large Language Models with Reverse Prompt Contrastive Decoding. ACL (Findings) 2024: 13721-13736
- [c52] Xinyu Ma, Xuebo Liu, Derek F. Wong, Jun Rao, Bei Li, Liang Ding, Lidia S. Chao, Dacheng Tao, Min Zhang: 3AM: An Ambiguity-Aware Multi-Modal Machine Translation Dataset. LREC/COLING 2024: 1-13
- [c51] Ziyang Xu, Keqin Peng, Liang Ding, Dacheng Tao, Xiliang Lu: Take Care of Your Prompt Bias! Investigating and Mitigating Prompt Bias in Factual Knowledge Extraction. LREC/COLING 2024: 15552-15565
- [c50] Zhiyuan Yu, Li Shen, Liang Ding, Xinmei Tian, Yixin Chen, Dacheng Tao: Sheared Backpropagation for Fine-Tuning Foundation Models. CVPR 2024: 5883-5892
- [c49] Boan Liu, Liang Ding, Li Shen, Keqin Peng, Yu Cao, Dazhao Cheng, Dacheng Tao: Diversifying the Mixture-of-Experts Representation for Language Models with Orthogonal Optimizer. ECAI 2024: 2966-2973
- [c48] Hongyu Li, Liang Ding, Meng Fang, Dacheng Tao: Revisiting Catastrophic Forgetting in Large Language Model Tuning. EMNLP (Findings) 2024: 4297-4308
- [c47] Qihuang Zhong, Kunfeng Chen, Liang Ding, Juhua Liu, Bo Du, Dacheng Tao: Learning from Imperfect Data: Towards Efficient Knowledge Distillation of Autoregressive Language Models for Text-to-SQL. EMNLP (Findings) 2024: 6874-6885
- [c46] Tengfei Yu, Xuebo Liu, Zhiyi Hou, Liang Ding, Dacheng Tao, Min Zhang: Self-Powered LLM Modality Expansion for Large Speech-Text Models. EMNLP 2024: 12401-12417
- [c45] Yuxuan Guo, Zhiliang Tian, Yiping Song, Tianlun Liu, Liang Ding, Dongsheng Li: Context-aware Watermark with Semantic Balanced Green-red Lists for Large Language Models. EMNLP 2024: 22633-22646
- [c44] Wenbin Wang, Liang Ding, Li Shen, Yong Luo, Han Hu, Dacheng Tao: WisdoM: Improving Multimodal Sentiment Analysis by Fusing Contextual World Knowledge. ACM Multimedia 2024: 2282-2291
- [i93] Shilong Pan, Zhiliang Tian, Liang Ding, Zhen Huang, Zhihua Wen, Dongsheng Li: POMP: Probability-driven Meta-graph Prompter for LLMs in Low-resource Unsupervised Neural Machine Translation. CoRR abs/2401.05596 (2024)
- [i92] Yuqi Zhang, Liang Ding, Lefei Zhang, Dacheng Tao: Intention Analysis Prompting Makes Large Language Models A Good Jailbreak Defender. CoRR abs/2401.06561 (2024)
- [i91] Shuai Wang, Liang Ding, Li Shen, Yong Luo, Bo Du, Dacheng Tao: OOP: Object-Oriented Programming Evaluation Benchmark for Large Language Models. CoRR abs/2401.06628 (2024)
- [i90] Wenbin Wang, Liang Ding, Li Shen, Yong Luo, Han Hu, Dacheng Tao: WisdoM: Improving Multimodal Sentiment Analysis by Fusing Contextual World Knowledge. CoRR abs/2401.06659 (2024)
- [i89] Keqin Peng, Liang Ding, Yancheng Yuan, Xuebo Liu, Min Zhang, Yuanxin Ouyang, Dacheng Tao: Revisiting Demonstration Selection Strategies in In-Context Learning. CoRR abs/2401.12087 (2024)
- [i88] Yuchun Miao, Sen Zhang, Liang Ding, Rong Bao, Lefei Zhang, Dacheng Tao: Mitigating Reward Hacking via Information-Theoretic Reward Modeling. CoRR abs/2402.09345 (2024)
- [i87] Qihuang Zhong, Liang Ding, Juhua Liu, Bo Du, Dacheng Tao: ROSE Doesn't Do That: Boosting the Safety of Instruction-Tuned Large Language Models with Reverse Prompt Contrastive Decoding. CoRR abs/2402.11889 (2024)
- [i86] Qihuang Zhong, Liang Ding, Li Shen, Juhua Liu, Bo Du, Dacheng Tao: Revisiting Knowledge Distillation for Autoregressive Language Models. CoRR abs/2402.11890 (2024)
- [i85] Hong Chen, Chengtao Lv, Liang Ding, Haotong Qin, Xiabin Zhou, Yifu Ding, Xuebo Liu, Min Zhang, Jinyang Guo, Xianglong Liu, Dacheng Tao: DB-LLM: Accurate Dual-Binarization for Efficient LLMs. CoRR abs/2402.11960 (2024)
- [i84] Zhiyao Ren, Yibing Zhan, Baosheng Yu, Liang Ding, Dacheng Tao: Healthcare Copilot: Eliciting the Power of General LLMs for Medical Consultation. CoRR abs/2402.13408 (2024)
- [i83] Hanyao Wang, Yibing Zhan, Liu Liu, Liang Ding, Jun Yu: Balanced Similarity with Auxiliary Prompts: Towards Alleviating Text-to-Image Retrieval Bias for CLIP in Zero-shot Learning. CoRR abs/2402.18400 (2024)
- [i82] Zhonghai Wang, Jie Jiang, Yibing Zhan, Bohao Zhou, Yanhong Li, Chong Zhang, Liang Ding, Hua Jin, Jun Peng, Xu Lin, Weifeng Liu: Towards Training A Chinese Large Language Model for Anesthesiology. CoRR abs/2403.02742 (2024)
- [i81] Ziyang Xu, Keqin Peng, Liang Ding, Dacheng Tao, Xiliang Lu: Take Care of Your Prompt Bias! Investigating and Mitigating Prompt Bias in Factual Knowledge Extraction. CoRR abs/2403.09963 (2024)
- [i80] Changtong Zan, Liang Ding, Li Shen, Yibing Zhen, Weifeng Liu, Dacheng Tao: Building Accurate Translation-Tailored LLMs with Language Aware Instruction Tuning. CoRR abs/2403.14399 (2024)
- [i79] Qihuang Zhong, Kang Wang, Ziyang Xu, Juhua Liu, Liang Ding, Bo Du, Dacheng Tao: Achieving >97% on GSM8K: Deeply Understanding the Problems Makes LLMs Better Reasoners. CoRR abs/2404.14963 (2024)
- [i78] Xinyu Ma, Xuebo Liu, Derek F. Wong, Jun Rao, Bei Li, Liang Ding, Lidia S. Chao, Dacheng Tao, Min Zhang: 3AM: An Ambiguity-Aware Multi-Modal Machine Translation Dataset. CoRR abs/2404.18413 (2024)
- [i77] Tianle Xia, Liang Ding, Guojia Wan, Yibing Zhan, Bo Du, Dacheng Tao: Improving Complex Reasoning over Knowledge Graph with Logic-Aware Curriculum Tuning. CoRR abs/2405.01649 (2024)
- [i76] Shwai He, Daize Dong, Liang Ding, Ang Li: Demystifying the Compression of Mixture-of-Experts Through a Unified Framework. CoRR abs/2406.02500 (2024)
- [i75] Hongyu Li, Liang Ding, Meng Fang, Dacheng Tao: Revisiting Catastrophic Forgetting in Large Language Model Tuning. CoRR abs/2406.04836 (2024)
- [i74] Yikun Wang, Rui Zheng, Liang Ding, Qi Zhang, Dahua Lin, Dacheng Tao: Uncertainty Aware Learning for Language Model Alignment. CoRR abs/2406.04854 (2024)
- [i73] Rong Bao, Rui Zheng, Shihan Dou, Xiao Wang, Enyu Zhou, Bo Wang, Qi Zhang, Liang Ding, Dacheng Tao: Aligning Large Language Models from Self-Reference AI Feedback with one General Principle. CoRR abs/2406.11190 (2024)
- [i72] Wenbin Wang, Liang Ding, Minyan Zeng, Xiabin Zhou, Li Shen, Yong Luo, Dacheng Tao: Divide, Conquer and Combine: A Training-Free Framework for High-Resolution Image Perception in Multimodal Large Language Models. CoRR abs/2408.15556 (2024)
- [i71] Shuai Wang, Liang Ding, Li Shen, Yong Luo, Zheng He, Wei Yu, Dacheng Tao: USCD: Improving Code Generation of LLMs by Uncertainty-Aware Selective Contrastive Decoding. CoRR abs/2409.05923 (2024)
- [i70] Jun Rao, Xuebo Liu, Zepeng Lin, Liang Ding, Jing Li, Dacheng Tao, Min Zhang: Exploring and Enhancing the Transfer of Distribution in Knowledge Distillation for Autoregressive Language Models. CoRR abs/2409.12512 (2024)
- [i69] Qingyu Lu, Liang Ding, Kanjian Zhang, Jinxia Zhang, Dacheng Tao: MQM-APE: Toward High-Quality Error Annotation Predictors with Automatic Post-Editing in LLM Translation Evaluators. CoRR abs/2409.14335 (2024)
- [i68] Tengfei Yu, Xuebo Liu, Zhiyi Hou, Liang Ding, Dacheng Tao, Min Zhang: Self-Powered LLM Modality Expansion for Large Speech-Text Models. CoRR abs/2410.03798 (2024)
- [i67] Fei Wang, Li Shen, Liang Ding, Chao Xue, Ye Liu, Changxing Ding: Simultaneous Computation and Memory Efficient Zeroth-Order Optimizer for Fine-Tuning Large Language Models. CoRR abs/2410.09823 (2024)
- [i66] Qihuang Zhong, Kunfeng Chen, Liang Ding, Juhua Liu, Bo Du, Dacheng Tao: Learning from Imperfect Data: Towards Efficient Knowledge Distillation of Autoregressive Language Models for Text-to-SQL. CoRR abs/2410.11371 (2024)
- 2023
- [j7] Xinyao Li, Yibing Zhan, Yanhua Zhao, Yiqiang Wu, Liang Ding, Yuanyuan Li, Dapeng Tao, Hua Jin: A perioperative risk assessment dataset with multi-view data based on online accelerated pairwise comparison. Inf. Fusion 99: 101838 (2023)
- [j6] Dong Zhao, Guojia Wan, Yibing Zhan, Zengmao Wang, Liang Ding, Zhigao Zheng, Bo Du: KE-X: Towards subgraph explanations of knowledge graph embedding based on knowledge information gain. Knowl. Based Syst. 278: 110772 (2023)
- [j5] Liang Ding, Longyue Wang, Siyou Liu: Recurrent graph encoder for syntax-aware neural machine translation. Int. J. Mach. Learn. Cybern. 14(4): 1053-1062 (2023)
- [j4] Yan Sun, Li Shen, Hao Sun, Liang Ding, Dacheng Tao: Efficient Federated Learning Via Local Adaptive Amended Optimizer With Linear Speedup. IEEE Trans. Pattern Anal. Mach. Intell. 45(12): 14453-14464 (2023)
- [j3] Juhua Liu, Qihuang Zhong, Liang Ding, Hua Jin, Bo Du, Dacheng Tao: Unified Instance and Knowledge Alignment Pretraining for Aspect-Based Sentiment Analysis. IEEE ACM Trans. Audio Speech Lang. Process. 31: 2629-2642 (2023)
- [j2] Qihuang Zhong, Liang Ding, Juhua Liu, Bo Du, Hua Jin, Dacheng Tao: Knowledge Graph Augmented Network Towards Multiview Representation Learning for Aspect-Based Sentiment Analysis. IEEE Trans. Knowl. Data Eng. 35(10): 10098-10111 (2023)
- [j1] Jun Rao, Liang Ding, Shuhan Qi, Meng Fang, Yang Liu, Li Shen, Dacheng Tao: Dynamic Contrastive Distillation for Image-Text Retrieval. IEEE Trans. Multim. 25: 8383-8395 (2023)
- [c43] Hexuan Deng, Liang Ding, Xuebo Liu, Meishan Zhang, Dacheng Tao, Min Zhang: Improving Simultaneous Machine Translation with Monolingual Data. AAAI 2023: 12728-12736
- [c42] Rong Bao, Rui Zheng, Liang Ding, Qi Zhang, Dacheng Tao: CASN: Class-Aware Score Network for Textual Adversarial Detection. ACL (1) 2023: 671-687
- [c41] Keqin Peng, Liang Ding, Qihuang Zhong, Yuanxin Ouyang, Wenge Rong, Zhang Xiong, Dacheng Tao: Token-Level Self-Evolution Training for Sequence-to-Sequence Learning. ACL (2) 2023: 841-850
- [c40] Qingyue Wang, Liang Ding, Yanan Cao, Yibing Zhan, Zheng Lin, Shi Wang, Dacheng Tao, Li Guo: Divide, Conquer, and Combine: Mixture of Semantic-Independent Experts for Zero-Shot Dialogue State Tracking. ACL (1) 2023: 2048-2061
- [c39] Tao Fang, Xuebo Liu, Derek F. Wong, Runzhe Zhan, Liang Ding, Lidia S. Chao, Dacheng Tao, Min Zhang: TransGEC: Improving Grammatical Error Correction with Translationese. ACL (Findings) 2023: 3614-3633
- [c38] Qihuang Zhong, Liang Ding, Juhua Liu, Bo Du, Dacheng Tao: Self-Evolution Learning for Discriminative Language Model Pretraining. ACL (Findings) 2023: 4130-4145
- [c37] Qingyu Lu, Liang Ding, Liping Xie, Kanjian Zhang, Derek F. Wong, Dacheng Tao: Toward Human-Like Evaluation for Natural Language Generation with Error Analysis. ACL (1) 2023: 5892-5907
- [c36] Qihuang Zhong, Liang Ding, Juhua Liu, Xuebo Liu, Min Zhang, Bo Du, Dacheng Tao: Revisiting Token Dropping Strategy in Efficient BERT Pretraining. ACL (1) 2023: 10391-10405
- [c35] Yibin Lei, Liang Ding, Yu Cao, Changtong Zan, Andrew Yates, Dacheng Tao: Unsupervised Dense Retrieval with Relevance-Aware Contrastive Pre-Training. ACL (Findings) 2023: 10932-10940
- [c34] Shwai He, Liang Ding, Daize Dong, Boan Liu, Fuqiang Yu, Dacheng Tao: PAD-Net: An Efficient Framework for Dynamic Networks. ACL (1) 2023: 14354-14366
- [c33] Keqin Peng, Liang Ding, Qihuang Zhong, Li Shen, Xuebo Liu, Min Zhang, Yuanxin Ouyang, Dacheng Tao: Towards Making the Most of ChatGPT for Machine Translation. EMNLP (Findings) 2023: 5622-5633
- [c32] Haoqi Zheng, Qihuang Zhong, Liang Ding, Zhiliang Tian, Xin Niu, Changjian Wang, Dongsheng Li, Dacheng Tao: Self-Evolution Learning for Mixup: Enhance Data Augmentation on Few-Shot Text Classification Tasks. EMNLP 2023: 8964-8974
- [c31] Tengfei Yu, Liang Ding, Xuebo Liu, Kehai Chen, Meishan Zhang, Dacheng Tao, Min Zhang: PromptST: Abstract Prompt Learning for End-to-End Speech Translation. EMNLP 2023: 10140-10154
- [c30] Miaoxi Zhu, Qihuang Zhong, Li Shen, Liang Ding, Juhua Liu, Bo Du, Dacheng Tao: Zero-shot Sharpness-Aware Quantization for Pre-trained Language Models. EMNLP 2023: 11305-11327
- [c29] Shwai He, Run-Ze Fan, Liang Ding, Li Shen, Tianyi Zhou, Dacheng Tao: Merging Experts into One: Improving Computational Efficiency of Mixture of Experts. EMNLP 2023: 14685-14691
- [c28] Yan Sun, Li Shen, Tiansheng Huang, Liang Ding, Dacheng Tao: FedSpeed: Larger Local Interval, Less Communication Round, and Higher Generalization Accuracy. ICLR 2023
- [c27] Yan Sun, Li Shen, Shixiang Chen, Liang Ding, Dacheng Tao: Dynamic Regularized Sharpness Aware Minimization in Federated Learning: Approaching Global Consistency and Smooth Landscape. ICML 2023: 32991-33013
- [c26] Chuang Liu, Yibing Zhan, Xueqi Ma, Liang Ding, Dapeng Tao, Jia Wu, Wenbin Hu: Gapformer: Graph Transformer with Graph Pooling for Node Classification. IJCAI 2023: 2196-2205
- [c25] Chiaming Hsu, Changtong Zan, Liang Ding, Longyue Wang, Xiaoting Wang, Weifeng Liu, Fu Lin, Wenbin Hu: Prompt-Learning for Cross-Lingual Relation Extraction. IJCNN 2023: 1-9
- [c24] Zheng Zhang, Donglin Yang, Yaqi Xia, Liang Ding, Dacheng Tao, Xiaobo Zhou, Dazhao Cheng: MPipeMoE: Memory Efficient MoE for Pre-trained Models with Adaptive Pipeline Parallelism. IPDPS 2023: 167-177
- [c23] Shwai He, Chenbo Jiang, Daize Dong, Liang Ding: SD-Conv: Towards the Parameter-Efficiency of Dynamic Convolution. WACV 2023: 6443-6452
- [i65] Qihuang Zhong, Liang Ding, Keqin Peng, Juhua Liu, Bo Du, Li Shen, Yibing Zhan, Dacheng Tao: Bag of Tricks for Effective Language Model Pretraining and Downstream Adaptation: A Case Study on GLUE. CoRR abs/2302.09268 (2023)
- [i64] Qihuang Zhong, Liang Ding, Juhua Liu, Bo Du, Dacheng Tao: Can ChatGPT Understand Too? A Comparative Study on ChatGPT and Fine-tuned BERT. CoRR abs/2302.10198 (2023)
- [i63] Yan Sun, Li Shen, Tiansheng Huang, Liang Ding, Dacheng Tao: FedSpeed: Larger Local Interval, Less Communication Round, and Higher Generalization Accuracy. CoRR abs/2302.10429 (2023)
- [i62] Chao Xue, Wei Liu, Shuai Xie, Zhenfang Wang, Jiaxing Li, Xuyang Peng, Liang Ding, Shanshan Zhao, Qiong Cao, Yibo Yang, Fengxiang He, Bohua Cai, Rongcheng Bian, Yiyan Zhao, Heliang Zheng, Xiangyang Liu, Dongkai Liu, Daqing Liu, Li Shen, Chang Li, Shijin Zhang, Yukang Zhang, Guanpu Chen, Shixiang Chen, Yibing Zhan, Jing Zhang, Chaoyue Wang, Dacheng Tao: OmniForce: On Human-Centered, Large Model Empowered and Cloud-Edge Collaborative AutoML System. CoRR abs/2303.00501 (2023)
- [i61] Hao Sun, Li Shen, Qihuang Zhong, Liang Ding, Shixiang Chen, Jingwei Sun, Jing Li, Guangzhong Sun, Dacheng Tao: AdaSAM: Boosting Sharpness-Aware Minimization with Adaptive Learning Rate and Momentum for Training Deep Neural Networks. CoRR abs/2303.00565 (2023)
- [i60] Keqin Peng, Liang Ding, Qihuang Zhong, Li Shen, Xuebo Liu, Min Zhang, Yuanxin Ouyang, Dacheng Tao: Towards Making the Most of ChatGPT for Machine Translation. CoRR abs/2303.13780 (2023)
- [i59] Qingyu Lu, Baopu Qiu, Liang Ding, Liping Xie, Dacheng Tao: Error Analysis Prompting Enables Human-Like Translation Evaluation in Large Language Models: A Case Study on ChatGPT. CoRR abs/2303.13809 (2023)
- [i58] Li Shen, Yan Sun, Zhiyuan Yu, Liang Ding, Xinmei Tian, Dacheng Tao: On Efficient Training of Large-Scale Deep Learning Models: A Literature Review. CoRR abs/2304.03589 (2023)
- [i57] Chiaming Hsu, Changtong Zan, Liang Ding, Longyue Wang, Xiaoting Wang, Weifeng Liu, Fu Lin, Wenbin Hu: Prompt-Learning for Cross-Lingual Relation Extraction. CoRR abs/2304.10354 (2023)
- [i56] Yan Sun, Li Shen, Shixiang Chen, Liang Ding, Dacheng Tao: Dynamic Regularized Sharpness Aware Minimization in Federated Learning: Approaching Global Consistency and Smooth Landscape. CoRR abs/2305.11584 (2023)
- [i55] Haoqi Zheng, Qihuang Zhong, Liang Ding, Zhiliang Tian, Xin Niu, Dongsheng Li, Dacheng Tao: Self-Evolution Learning for Mixup: Enhance Data Augmentation on Few-Shot Text Classification Tasks. CoRR abs/2305.13547 (2023)
- [i54] Qihuang Zhong, Liang Ding, Juhua Liu, Xuebo Liu, Min Zhang, Bo Du, Dacheng Tao: Revisiting Token Dropping Strategy in Efficient BERT Pretraining. CoRR abs/2305.15273 (2023)
- [i53] Qihuang Zhong, Liang Ding, Juhua Liu, Bo Du, Dacheng Tao: Self-Evolution Learning for Discriminative Language Model Pretraining. CoRR abs/2305.15275 (2023)
- [i52] Qingyue Wang, Liang Ding, Yanan Cao, Yibing Zhan, Zheng Lin, Shi Wang, Dacheng Tao, Li Guo: Divide, Conquer, and Combine: Mixture of Semantic-Independent Experts for Zero-Shot Dialogue State Tracking. CoRR abs/2306.00434 (2023)
- [i51] Yibin Lei, Liang Ding, Yu Cao, Changtong Zan, Andrew Yates, Dacheng Tao: Unsupervised Dense Retrieval with Relevance-Aware Contrastive Pre-Training. CoRR abs/2306.03166 (2023)
- [i50] Haoran Wang, Qinghua Cheng, Baosheng Yu, Yibing Zhan, Dapeng Tao, Liang Ding, Haibin Ling: Free-Form Composition Networks for Egocentric Action Recognition. CoRR abs/2307.06527 (2023)
- [i49] Yan Sun, Li Shen, Hao Sun, Liang Ding, Dacheng Tao: Efficient Federated Learning via Local Adaptive Amended Optimizer with Linear Speedup. CoRR abs/2308.00522 (2023)
- [i48] Fei Wang, Liang Ding, Jun Rao, Ye Liu, Li Shen, Changxing Ding: Can Linguistic Knowledge Improve Multimodal Alignment in Vision-Language Pretraining? CoRR abs/2308.12898 (2023)
- [i47] Qingyue Wang, Liang Ding, Yanan Cao, Zhiliang Tian, Shi Wang, Dacheng Tao, Li Guo: Recursively Summarizing Enables Long-Term Dialogue Memory in Large Language Models. CoRR abs/2308.15022 (2023)
- [i46] Shwai He, Run-Ze Fan, Liang Ding, Li Shen, Tianyi Zhou, Dacheng Tao: MerA: Merging Pretrained Adapters For Few-Shot Learning. CoRR abs/2308.15982 (2023)
- [i45] Weishi Li, Yong Peng, Miao Zhang, Liang Ding, Han Hu, Li Shen: Deep Model Fusion: A Survey. CoRR abs/2309.15698 (2023)
- [i44] Changtong Zan, Liang Ding, Li Shen, Yibin Lei, Yibing Zhan, Weifeng Liu, Dacheng Tao: Unlikelihood Tuning on Negative Samples Amazingly Improves Zero-Shot Translation. CoRR abs/2309.16599 (2023)
- [i43] Boan Liu, Liang Ding, Li Shen, Keqin Peng, Yu Cao, Dazhao Cheng, Dacheng Tao: Diversifying the Mixture-of-Experts Representation for Language Models with Orthogonal Optimizer. CoRR abs/2310.09762 (2023)
- [i42] Shwai He, Run-Ze Fan, Liang Ding, Li Shen, Tianyi Zhou, Dacheng Tao: Merging Experts into One: Improving Computational Efficiency of Mixture of Experts. CoRR abs/2310.09832 (2023)
- [i41] Miaoxi Zhu, Qihuang Zhong, Li Shen, Liang Ding, Juhua Liu, Bo Du, Dacheng Tao: Zero-Shot Sharpness-Aware Quantization for Pre-trained Language Models. CoRR abs/2310.13315 (2023)
- [i40] Lei Wang, Yibing Zhan, Leilei Ma, Dapeng Tao, Liang Ding, Chen Gong: SpliceMix: A Cross-scale and Semantic Blending Augmentation Strategy for Multi-label Image Classification. CoRR abs/2311.15200 (2023)
- [i39] Chuang Liu, Yibing Zhan, Xueqi Ma, Liang Ding, Dapeng Tao, Jia Wu, Wenbin Hu, Bo Du: Exploring Sparsity in Graph Transformers. CoRR abs/2312.05479 (2023)
- [i38] Anke Tang, Li Shen, Yong Luo, Liang Ding, Han Hu, Bo Du, Dacheng Tao: Concrete Subspace Learning based Interference Elimination for Multi-task Model Fusion. CoRR abs/2312.06173 (2023)
- 2022
- [c22] Liang Ding, Longyue Wang, Shuming Shi, Dacheng Tao, Zhaopeng Tu: Redistributing Low-Frequency Words: Making the Most of Monolingual Data in Non-Autoregressive Translation. ACL (1) 2022: 2417-2426
- [c21] Changtong Zan, Liang Ding, Li Shen, Yu Cao, Weifeng Liu, Dacheng Tao: On the Complementarity between Pre-Training and Random-Initialization for Resource-Rich Machine Translation. COLING 2022: 5029-5034
- [c20] Bing Wang, Liang Ding, Qihuang Zhong, Ximing Li, Dacheng Tao: A Contrastive Cross-Channel Data Augmentation Framework for Aspect-Based Sentiment Analysis. COLING 2022: 6691-6704
- [c19] Lin Zhang, Li Shen, Liang Ding, Dacheng Tao, Ling-Yu Duan: Fine-tuning Global Model via Data-Free Knowledge Distillation for Non-IID Federated Learning. CVPR 2022: 10164-10173
- [c18] Shwai He, Liang Ding, Daize Dong, Jeremy Zhang, Dacheng Tao: SparseAdapter: An Easy Approach for Improving the Parameter-Efficiency of Adapters. EMNLP (Findings) 2022: 2184-2190
- [c17] Qihuang Zhong, Liang Ding, Li Shen, Peng Mi, Juhua Liu, Bo Du, Dacheng Tao: Improving Sharpness-Aware Minimization with Fisher Mask for Better Generalization on Language Models. EMNLP (Findings) 2022: 4064-4085
- [c16] Di Wu, Liang Ding, Shuo Yang, Mingyang Li: MirrorAlign: A Super Lightweight Unsupervised Word Alignment Model via Cross-Lingual Contrastive Learning. IWSLT@ACL 2022: 83-91
- [c15] Jun Rao, Fei Wang, Liang Ding, Shuhan Qi, Yibing Zhan, Weifeng Liu, Dacheng Tao: Where Does the Performance Improvement Come From? - A Reproducibility Concern about Image-Text Retrieval. SIGIR 2022: 2727-2737
- [c14] Changtong Zan, Keqin Peng, Liang Ding, Baopu Qiu, Boan Liu, Shwai He, Qingyu Lu, Zheng Zhang, Chuang Liu, Weifeng Liu, Yibing Zhan, Dacheng Tao: Vega-MT: The JD Explore Academy Machine Translation System for WMT22. WMT 2022: 411-422
- [i37] Qihuang Zhong, Liang Ding, Juhua Liu, Bo Du, Hua Jin, Dacheng Tao: Knowledge Graph Augmented Network Towards Multiview Representation Learning for Aspect-based Sentiment Analysis. CoRR abs/2201.04831 (2022)
- [i36] Liang Ding, Keqin Peng, Dacheng Tao: Improving Neural Machine Translation by Denoising Training. CoRR abs/2201.07365 (2022)
- [i35] Jun Rao, Fei Wang, Liang Ding, Shuhan Qi, Yibing Zhan, Weifeng Liu, Dacheng Tao: Where Does the Performance Improvement Come From? - A Reproducibility Concern about Image-Text Retrieval. CoRR abs/2203.03853 (2022)
- [i34] Lin Zhang, Li Shen, Liang Ding, Dacheng Tao, Ling-Yu Duan: Fine-tuning Global Model via Data-Free Knowledge Distillation for Non-IID Federated Learning. CoRR abs/2203.09249 (2022)
- [i33] Bing Wang, Liang Ding, Qihuang Zhong, Ximing Li, Dacheng Tao: A Contrastive Cross-Channel Data Augmentation Framework for Aspect-based Sentiment Analysis. CoRR abs/2204.07832 (2022)
- [i32] Changtong Zan, Liang Ding, Li Shen, Yu Cao, Weifeng Liu, Dacheng Tao: Bridging Cross-Lingual Gaps During Leveraging the Multilingual Sequence-to-Sequence Pretraining for Text Generation. CoRR abs/2204.07834 (2022)
- [i31] Zheng Zhang, Liang Ding, Dazhao Cheng, Xuebo Liu, Min Zhang, Dacheng Tao: BLISS: Robust Sequence-to-Sequence Learning via Self-Supervised Input Representation. CoRR abs/2204.07837 (2022)
- [i30] Qihuang Zhong, Liang Ding, Juhua Liu, Bo Du, Dacheng Tao: E2S2: Encoding-Enhanced Sequence-to-Sequence Pretraining for Language Understanding and Generation. CoRR abs/2205.14912 (2022)
- [i29] Jun Rao, Xv Meng, Liang Ding, Shuhan Qi, Dacheng Tao: Parameter-Efficient and Student-Friendly Knowledge Distillation. CoRR abs/2205.15308 (2022)
- [i28] Jun Rao, Liang Ding, Shuhan Qi, Meng Fang, Yang Liu, Li Shen, Dacheng Tao: Dynamic Contrastive Distillation for Image-Text Retrieval. CoRR abs/2207.01426 (2022)
- [i27] Chuang Liu, Xueqi Ma, Yibing Zhan, Liang Ding, Dapeng Tao, Bo Du, Wenbin Hu, Danilo P. Mandic: Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural Networks. CoRR abs/2207.08629 (2022)
- [i26] Qihuang Zhong, Liang Ding, Juhua Liu, Bo Du, Dacheng Tao: PANDA: Prompt Transfer Meets Knowledge Distillation for Efficient Model Adaptation. CoRR abs/2208.10160 (2022)
- [i25] Changtong Zan, Liang Ding, Li Shen, Yu Cao, Weifeng Liu, Dacheng Tao: On the Complementarity between Pre-Training and Random-Initialization for Resource-Rich Machine Translation. CoRR abs/2209.03316 (2022)
- [i24] Changtong Zan, Keqin Peng, Liang Ding, Baopu Qiu, Boan Liu, Shwai He, Qingyu Lu, Zheng Zhang, Chuang Liu, Weifeng Liu, Yibing Zhan, Dacheng Tao: Vega-MT: The JD Explore Academy Translation System for WMT22. CoRR abs/2209.09444 (2022)
- [i23] Shwai He, Liang Ding, Daize Dong, Miao Zhang, Dacheng Tao: SparseAdapter: An Easy Approach for Improving the Parameter-Efficiency of Adapters. CoRR abs/2210.04284 (2022)
- [i22] Qihuang Zhong, Liang Ding, Li Shen, Peng Mi, Juhua Liu, Bo Du, Dacheng Tao: Improving Sharpness-Aware Minimization with Fisher Mask for Better Generalization on Language Models. CoRR abs/2210.05497 (2022)
- [i21] Shwai He, Liang Ding, Daize Dong, Boan Liu, Fuqiang Yu, Dacheng Tao: Cherry Hypothesis: Identifying the Cherry on the Cake for Dynamic Networks. CoRR abs/2211.05528 (2022)
- [i20] Hexuan Deng, Liang Ding, Xuebo Liu, Meishan Zhang, Dacheng Tao, Min Zhang: Improving Simultaneous Machine Translation with Monolingual Data. CoRR abs/2212.01188 (2022)
- [i19] Qihuang Zhong, Liang Ding, Yibing Zhan, Yu Qiao, Yonggang Wen, Li Shen, Juhua Liu, Baosheng Yu, Bo Du, Yixin Chen, Xinbo Gao, Chunyan Miao, Xiaoou Tang, Dacheng Tao: Toward Efficient Language Model Pretraining and Downstream Adaptation via Self-Evolution: A Case Study on SuperGLUE. CoRR abs/2212.01853 (2022)
- [i18] Qingyu Lu, Liang Ding, Liping Xie, Kanjian Zhang, Derek F. Wong, Dacheng Tao: Toward Human-Like Evaluation for Natural Language Generation with Error Analysis. CoRR abs/2212.10179 (2022)
- [i17] Baopu Qiu, Liang Ding, Di Wu, Lin Shang, Yibing Zhan, Dacheng Tao: Original or Translated? On the Use of Parallel Data for Translation Quality Estimation. CoRR abs/2212.10257 (2022)
- 2021
- [c13] Liang Ding, Longyue Wang, Xuebo Liu, Derek F. Wong, Dacheng Tao, Zhaopeng Tu: Progressive Multi-Granularity Training for Non-Autoregressive Translation. ACL/IJCNLP (Findings) 2021: 2797-2803
- [c12] Liang Ding, Longyue Wang, Xuebo Liu, Derek F. Wong, Dacheng Tao, Zhaopeng Tu: Rejuvenating Low-Frequency Words: Making the Most of Parallel Data in Non-Autoregressive Translation. ACL/IJCNLP (1) 2021: 3431-3441
- [c11] Xuebo Liu, Longyue Wang, Derek F. Wong, Liang Ding, Lidia S. Chao, Shuming Shi, Zhaopeng Tu: On the Copying Behaviors of Pre-Training for Neural Machine Translation. ACL/IJCNLP (Findings) 2021: 4265-4275
- [c10] Xuebo Liu, Longyue Wang, Derek F. Wong, Liang Ding, Lidia S. Chao, Shuming Shi, Zhaopeng Tu: On the Complementarity between Pre-Training and Back-Translation for Neural Machine Translation. EMNLP (Findings) 2021: 2900-2907
- [c9] Liang Ding, Di Wu, Dacheng Tao: Improving Neural Machine Translation by Bidirectional Training. EMNLP (1) 2021: 3278-3284
- [c8] Yu Cao, Liang Ding, Zhiliang Tian, Meng Fang: Towards Efficiently Diversifying Dialogue Generation Via Embedding Augmentation. ICASSP 2021: 7443-7447
- [c7] Xuebo Liu, Longyue Wang, Derek F. Wong, Liang Ding, Lidia S. Chao, Zhaopeng Tu: Understanding and Improving Encoder Layer Fusion in Sequence-to-Sequence Learning. ICLR 2021
- [c6] Liang Ding, Longyue Wang, Xuebo Liu, Derek F. Wong, Dacheng Tao, Zhaopeng Tu: Understanding and Improving Lexical Choice in Non-Autoregressive Translation. ICLR 2021
- [c5] Liang Ding, Dacheng Tao: The USYD-JD Speech Translation System for IWSLT2021. IWSLT 2021: 182-191
- [i16] Di Wu, Liang Ding, Shuo Yang, Dacheng Tao: SLUA: A Super Lightweight Unsupervised Word Alignment Model via Cross-Lingual Contrastive Learning. CoRR abs/2102.04009 (2021)
- [i15] Yu Cao, Liang Ding, Zhiliang Tian, Meng Fang: Towards Efficiently Diversifying Dialogue Generation via Embedding Augmentation. CoRR abs/2103.01534 (2021)
- [i14] Di Wu, Yiren Chen, Liang Ding, Dacheng Tao: Bridging the Gap Between Clean Data Training and Real-World Inference for Spoken Language Understanding. CoRR abs/2104.06393 (2021)
- [i13] Liang Ding, Longyue Wang, Xuebo Liu, Derek F. Wong, Dacheng Tao, Zhaopeng Tu: Rejuvenating Low-Frequency Words: Making the Most of Parallel Data in Non-Autoregressive Translation. CoRR abs/2106.00903 (2021)
- [i12] Liang Ding, Longyue Wang, Xuebo Liu, Derek F. Wong, Dacheng Tao, Zhaopeng Tu: Progressive Multi-Granularity Training for Non-Autoregressive Translation. CoRR abs/2106.05546 (2021)
- [i11] Xuebo Liu, Longyue Wang, Derek F. Wong, Liang Ding, Lidia S. Chao, Shuming Shi, Zhaopeng Tu: On the Copying Behaviors of Pre-Training for Neural Machine Translation. CoRR abs/2107.08212 (2021)
- [i10] Liang Ding, Di Wu, Dacheng Tao: The USYD-JD Speech Translation System for IWSLT 2021. CoRR abs/2107.11572 (2021)
- [i9] Liang Ding, Di Wu, Dacheng Tao: Improving Neural Machine Translation by Bidirectional Training. CoRR abs/2109.07780 (2021)
- [i8] Xuebo Liu, Longyue Wang, Derek F. Wong, Liang Ding, Lidia S. Chao, Shuming Shi, Zhaopeng Tu: On the Complementarity between Pre-Training and Back-Translation for Neural Machine Translation. CoRR abs/2110.01811 (2021)
- [i7] Juhua Liu, Qihuang Zhong, Liang Ding, Hua Jin, Bo Du, Dacheng Tao: Unified Instance and Knowledge Alignment Pretraining for Aspect-based Sentiment Analysis. CoRR abs/2110.13398 (2021)
- 2020
- [c4] Liang Ding, Longyue Wang, Dacheng Tao: Self-Attention with Cross-Lingual Position Representation. ACL 2020: 1679-1685
- [c3] Liang Ding, Longyue Wang, Di Wu, Dacheng Tao, Zhaopeng Tu: Context-Aware Cross-Attention for Non-Autoregressive Translation. COLING 2020: 4396-4402
- [c2] Longyue Wang, Zhaopeng Tu, Xing Wang, Li Ding, Liang Ding, Shuming Shi: Tencent AI Lab Machine Translation Systems for WMT20 Chat Translation Task. WMT@EMNLP 2020: 483-491
- [i6] Liang Ding, Longyue Wang, Dacheng Tao: Self-Attention with Cross-Lingual Position Representation. CoRR abs/2004.13310 (2020)
- [i5] Liang Ding, Longyue Wang, Di Wu, Dacheng Tao, Zhaopeng Tu: Context-Aware Cross-Attention for Non-Autoregressive Translation. CoRR abs/2011.00770 (2020)
- [i4] Liang Ding, Longyue Wang, Xuebo Liu, Derek F. Wong, Dacheng Tao, Zhaopeng Tu: Understanding and Improving Lexical Choice in Non-Autoregressive Translation. CoRR abs/2012.14583 (2020)
- [i3] Xuebo Liu, Longyue Wang, Derek F. Wong, Liang Ding, Lidia S. Chao, Zhaopeng Tu: Understanding and Improving Encoder Layer Fusion in Sequence-to-Sequence Learning. CoRR abs/2012.14768 (2020)
2010 – 2019
- 2019
- [c1] Liang Ding, Dacheng Tao: The University of Sydney's Machine Translation System for WMT19. WMT (2) 2019: 175-182
- [i2] Liang Ding, Dacheng Tao: The University of Sydney's Machine Translation System for WMT19. CoRR abs/1907.00494 (2019)
- [i1] Liang Ding, Dacheng Tao: Recurrent Graph Syntax Encoder for Neural Machine Translation. CoRR abs/1908.06559 (2019)