Mehdi Rezagholizadeh
2020 – today
- 2024
- [c69] Suyuchen Wang, Ivan Kobyzev, Peng Lu, Mehdi Rezagholizadeh, Bang Liu: Resonance RoPE: Improving Context Length Generalization of Large Language Models. ACL (Findings) 2024: 586-598
- [c68] Abbas Ghaddar, David Alfonso-Hermelo, Philippe Langlais, Mehdi Rezagholizadeh, Boxing Chen, Prasanna Parthasarathi: CHARP: Conversation History AwaReness Probing for Knowledge-grounded Dialogue Systems. ACL (Findings) 2024: 1534-1551
- [c67] Chenyang Huang, Abbas Ghaddar, Ivan Kobyzev, Mehdi Rezagholizadeh, Osmar Zaïane, Boxing Chen: OTTAWA: Optimal TransporT Adaptive Word Aligner for Hallucination and Omission Translation Errors Detection. ACL (Findings) 2024: 6322-6334
- [c66] Mohammad Dehghan, Mohammad Ali Alomrani, Sunyam Bagga, David Alfonso-Hermelo, Khalil Bibi, Abbas Ghaddar, Yingxue Zhang, Xiaoguang Li, Jianye Hao, Qun Liu, Jimmy Lin, Boxing Chen, Prasanna Parthasarathi, Mahdi Biparva, Mehdi Rezagholizadeh: EWEK-QA: Enhanced Web and Efficient Knowledge Graph Retrieval for Citation-based Question Answering Systems. ACL (1) 2024: 14169-14187
- [c65] Parsa Kavehzadeh, Mojtaba Valipour, Marzieh S. Tahaei, Ali Ghodsi, Boxing Chen, Mehdi Rezagholizadeh: Sorted LLaMA: Unlocking the Potential of Intermediate Layers of Large Language Models for Dynamic Inference. EACL (Findings) 2024: 2129-2145
- [c64] Hossein Rajabzadeh, Mojtaba Valipour, Tianshu Zhu, Marzieh S. Tahaei, Hyock Ju Kwon, Ali Ghodsi, Boxing Chen, Mehdi Rezagholizadeh: QDyLoRA: Quantized Dynamic Low-Rank Adaptation for Efficient Large Language Model Tuning. EMNLP (Industry Track) 2024: 712-718
- [c63] Fengran Mo, Abbas Ghaddar, Kelong Mao, Mehdi Rezagholizadeh, Boxing Chen, Qun Liu, Jian-Yun Nie: CHIQ: Contextual History Enhancement for Improving Query Rewriting in Conversational Search. EMNLP 2024: 2253-2268
- [c62] Michael R. Metel, Peng Lu, Boxing Chen, Mehdi Rezagholizadeh, Ivan Kobyzev: Draft on the Fly: Adaptive Self-Speculative Decoding using Cosine Similarity. EMNLP (Findings) 2024: 2267-2272
- [c61] Jerry Huang, Prasanna Parthasarathi, Mehdi Rezagholizadeh, Sarath Chandar: Context-Aware Assistant Selection for Improved Inference Acceleration with Large Language Models. EMNLP 2024: 5817-5830
- [c60] Nandan Thakur, Luiz Bonifacio, Crystina Zhang, Odunayo Ogundepo, Ehsan Kamalloo, David Alfonso-Hermelo, Xiaoguang Li, Qun Liu, Boxing Chen, Mehdi Rezagholizadeh, Jimmy Lin: "Knowing When You Don't Know": A Multilingual Relevance Assessment Dataset for Robust Retrieval-Augmented Generation. EMNLP (Findings) 2024: 12508-12526
- [c59] Xindi Wang, Mahsa Salmani, Parsa Omidi, Xiangyu Ren, Mehdi Rezagholizadeh, Armaghan Eshaghi: Beyond the Limits: A Survey of Techniques to Extend the Context Length in Large Language Models. IJCAI 2024: 8299-8307
- [c58] Marzieh S. Tahaei, Aref Jafari, Ahmad Rashid, David Alfonso-Hermelo, Khalil Bibi, Yimeng Wu, Ali Ghodsi, Boxing Chen, Mehdi Rezagholizadeh: Efficient Citer: Tuning Large Language Models for Enhanced Answer Quality and Verification. NAACL-HLT (Findings) 2024: 4443-4450
- [c57] Mofetoluwa Adeyemi, Akintunde Oladipo, Xinyu Zhang, David Alfonso-Hermelo, Mehdi Rezagholizadeh, Boxing Chen, Abdul-Hakeem Omotayo, Idris Abdulmumin, Naome A. Etori, Toyib Babatunde Musa, Samuel Fanijo, Oluwabusayo Olufunke Awoyomi, Saheed Abdullahi Salahudeen, Labaran Adamu Mohammed, Daud Olamide Abolade, Falalu Ibrahim Lawan, Maryam Sabo Abubakar, Ruqayya Nasir Iro, Amina Abubakar Imam, Shafie Abdi Mohamed, Hanad Mohamud Mohamed, Tunde Oluwaseyi Ajayi, Jimmy Lin: CIRAL: A Test Collection for CLIR Evaluations in African Languages. SIGIR 2024: 293-302
- [i73] Abbas Ghaddar, Philippe Langlais, Mehdi Rezagholizadeh, Boxing Chen: On the importance of Data Scale in Pretraining Arabic Language Models. CoRR abs/2401.07760 (2024)
- [i72] Xindi Wang, Mahsa Salmani, Parsa Omidi, Xiangyu Ren, Mehdi Rezagholizadeh, Armaghan Eshaghi: Beyond the Limits: A Survey of Techniques to Extend the Context Length in Large Language Models. CoRR abs/2402.02244 (2024)
- [i71] Hossein Rajabzadeh, Mojtaba Valipour, Tianshu Zhu, Marzieh S. Tahaei, Hyock Ju Kwon, Ali Ghodsi, Boxing Chen, Mehdi Rezagholizadeh: QDyLoRA: Quantized Dynamic Low-Rank Adaptation for Efficient Large Language Model Tuning. CoRR abs/2402.10462 (2024)
- [i70] Suyuchen Wang, Ivan Kobyzev, Peng Lu, Mehdi Rezagholizadeh, Bang Liu: Resonance RoPE: Improving Context Length Generalization of Large Language Models. CoRR abs/2403.00071 (2024)
- [i69] Heitor R. Guimarães, Arthur Pimentel, Anderson R. Avila, Mehdi Rezagholizadeh, Boxing Chen, Tiago H. Falk: An Efficient End-to-End Approach to Noise Invariant Speech Features via Multi-Task Learning. CoRR abs/2403.08654 (2024)
- [i68] Jerry Huang, Prasanna Parthasarathi, Mehdi Rezagholizadeh, Sarath Chandar: Towards Practical Tool Usage for Continually Learning LLMs. CoRR abs/2404.09339 (2024)
- [i67] Abbas Ghaddar, David Alfonso-Hermelo, Philippe Langlais, Mehdi Rezagholizadeh, Boxing Chen, Prasanna Parthasarathi: CHARP: Conversation History AwaReness Probing for Knowledge-grounded Dialogue Systems. CoRR abs/2405.15110 (2024)
- [i66] Chenyang Huang, Abbas Ghaddar, Ivan Kobyzev, Mehdi Rezagholizadeh, Osmar R. Zaïane, Boxing Chen: OTTAWA: Optimal TransporT Adaptive Word Aligner for Hallucination and Omission Translation Errors Detection. CoRR abs/2406.01919 (2024)
- [i65] Fengran Mo, Abbas Ghaddar, Kelong Mao, Mehdi Rezagholizadeh, Boxing Chen, Qun Liu, Jian-Yun Nie: CHIQ: Contextual History Enhancement for Improving Query Rewriting in Conversational Search. CoRR abs/2406.05013 (2024)
- [i64] Mohammad Dehghan, Mohammad Ali Alomrani, Sunyam Bagga, David Alfonso-Hermelo, Khalil Bibi, Abbas Ghaddar, Yingxue Zhang, Xiaoguang Li, Jianye Hao, Qun Liu, Jimmy Lin, Boxing Chen, Prasanna Parthasarathi, Mahdi Biparva, Mehdi Rezagholizadeh: EWEK-QA: Enhanced Web and Efficient Knowledge Graph Retrieval for Citation-based Question Answering Systems. CoRR abs/2406.10393 (2024)
- [i63] Parsa Kavehzadeh, Mohammadreza Pourreza, Mojtaba Valipour, Tinashu Zhu, Haoli Bai, Ali Ghodsi, Boxing Chen, Mehdi Rezagholizadeh: S2D: Sorted Speculative Decoding For More Efficient Deployment of Nested Large Language Models. CoRR abs/2407.01955 (2024)
- [i62] Jerry Huang, Prasanna Parthasarathi, Mehdi Rezagholizadeh, Sarath Chandar: Context-Aware Assistant Selection for Improved Inference Acceleration with Large Language Models. CoRR abs/2408.08470 (2024)
- [i61] Hossein Rajabzadeh, Aref Jafari, Aman Sharma, Benyamin Jami, Hyock Ju Kwon, Ali Ghodsi, Boxing Chen, Mehdi Rezagholizadeh: EchoAtt: Attend, Copy, then Adjust for More Efficient Large Language Models. CoRR abs/2409.14595 (2024)
- [i60] Michael R. Metel, Peng Lu, Boxing Chen, Mehdi Rezagholizadeh, Ivan Kobyzev: Draft on the Fly: Adaptive Self-Speculative Decoding using Cosine Similarity. CoRR abs/2410.01028 (2024)
- [i59] Jerry Huang, Prasanna Parthasarathi, Mehdi Rezagholizadeh, Boxing Chen, Sarath Chandar: Do Robot Snakes Dream like Electric Sheep? Investigating the Effects of Architectural Inductive Biases on Hallucination. CoRR abs/2410.17477 (2024)
- 2023
- [j2] Xinyu Zhang, Nandan Thakur, Odunayo Ogundepo, Ehsan Kamalloo, David Alfonso-Hermelo, Xiaoguang Li, Qun Liu, Mehdi Rezagholizadeh, Jimmy Lin: MIRACL: A Multilingual Retrieval Dataset Covering 18 Diverse Languages. Trans. Assoc. Comput. Linguistics 11: 1114-1131 (2023)
- [c56] Ehsan Kamalloo, Xinyu Zhang, Odunayo Ogundepo, Nandan Thakur, David Alfonso-Hermelo, Mehdi Rezagholizadeh, Jimmy Lin: Evaluating Embedding APIs for Information Retrieval. ACL (industry) 2023: 518-526
- [c55] Runcheng Liu, Ahmad Rashid, Ivan Kobyzev, Mehdi Rezagholizadeh, Pascal Poupart: Attribute Controlled Dialogue Prompting. ACL (Findings) 2023: 2380-2389
- [c54] Asaad Alghamdi, Xinyu Duan, Wei Jiang, Zhenhai Wang, Yimeng Wu, Qingrong Xia, Zhefeng Wang, Yi Zheng, Mehdi Rezagholizadeh, Baoxing Huai, Peilun Cheng, Abbas Ghaddar: AraMUS: Pushing the Limits of Data and Model Scale for Arabic Natural Language Processing. ACL (Findings) 2023: 2883-2894
- [c53] Peng Lu, Ahmad Rashid, Ivan Kobyzev, Mehdi Rezagholizadeh, Philippe Langlais: LABO: Towards Learning Optimal Label Regularization via Bi-level Optimization. ACL (Findings) 2023: 5759-5774
- [c52] Yassir El Mesbahi, Atif Mahmud, Abbas Ghaddar, Mehdi Rezagholizadeh, Philippe Langlais, Prasanna Parthasarathi: On the utility of enhancing BERT syntactic bias with Token Reordering Pretraining. CoNLL 2023: 165-182
- [c51] Ivan Kobyzev, Aref Jafari, Mehdi Rezagholizadeh, Tianda Li, Alan Do-Omri, Peng Lu, Pascal Poupart, Ali Ghodsi: Do we need Label Regularization to Fine-tune Pre-trained Language Models? EACL 2023: 166-177
- [c50] Ankur Agarwal, Mehdi Rezagholizadeh, Prasanna Parthasarathi: Practical Takes on Federated Learning with Pretrained Language Models. EACL (Findings) 2023: 454-471
- [c49] Mohammadreza Tayaranian, Alireza Ghaffari, Marzieh S. Tahaei, Mehdi Rezagholizadeh, Masoud Asgharian, Vahid Partovi Nia: Towards Fine-tuning Pre-trained Language Models with Integer Forward and Backward Propagation. EACL (Findings) 2023: 1867-1876
- [c48] Mojtaba Valipour, Mehdi Rezagholizadeh, Ivan Kobyzev, Ali Ghodsi: DyLoRA: Parameter-Efficient Tuning of Pre-trained Models using Dynamic Search-Free Low-Rank Adaptation. EACL 2023: 3266-3279
- [c47] Amirhossein Kazemnejad, Mehdi Rezagholizadeh, Prasanna Parthasarathi, Sarath Chandar: Measuring the Knowledge Acquisition-Utilization Gap in Pretrained Language Models. EMNLP (Findings) 2023: 4305-4319
- [c46] Peng Lu, Suyuchen Wang, Mehdi Rezagholizadeh, Bang Liu, Ivan Kobyzev: Efficient Classification of Long Documents via State-Space Models. EMNLP 2023: 6559-6565
- [c45] Mofetoluwa Adeyemi, Akintunde Oladipo, Xinyu Zhang, David Alfonso-Hermelo, Mehdi Rezagholizadeh, Boxing Chen, Jimmy Lin: CIRAL at FIRE 2023: Cross-Lingual Information Retrieval for African Languages. FIRE 2023: 4-6
- [c44] Mofetoluwa Adeyemi, Akintunde Oladipo, Xinyu Crystina Zhang, David Alfonso-Hermelo, Mehdi Rezagholizadeh, Boxing Chen, Jimmy Lin: Overview of the CIRAL Track at FIRE 2023: Cross-lingual Information Retrieval for African Languages. FIRE (Working Notes) 2023: 118-136
- [c43] Heitor R. Guimarães, Arthur Pimentel, Anderson R. Avila, Mehdi Rezagholizadeh, Boxing Chen, Tiago H. Falk: Robustdistiller: Compressing Universal Speech Representations for Enhanced Environment Robustness. ICASSP 2023: 1-5
- [i58] Aref Jafari, Mehdi Rezagholizadeh, Ali Ghodsi: Improved knowledge distillation by utilizing backward pass knowledge in neural networks. CoRR abs/2301.12006 (2023)
- [i57] Heitor R. Guimarães, Arthur Pimentel, Anderson R. Avila, Mehdi Rezagholizadeh, Boxing Chen, Tiago H. Falk: RobustDistiller: Compressing Universal Speech Representations for Enhanced Environment Robustness. CoRR abs/2302.09437 (2023)
- [i56] Jimmy Lin, David Alfonso-Hermelo, Vitor Jeronymo, Ehsan Kamalloo, Carlos Lassance, Rodrigo Frassetto Nogueira, Odunayo Ogundepo, Mehdi Rezagholizadeh, Nandan Thakur, Jheng-Hong Yang, Xinyu Zhang: Simple Yet Effective Neural Ranking and Reranking Baselines for Cross-Lingual Information Retrieval. CoRR abs/2304.01019 (2023)
- [i55] Peng Lu, Ahmad Rashid, Ivan Kobyzev, Mehdi Rezagholizadeh, Philippe Langlais: LABO: Towards Learning Optimal Label Regularization via Bi-level Optimization. CoRR abs/2305.04971 (2023)
- [i54] Heitor R. Guimarães, Arthur Pimentel, Anderson R. Avila, Mehdi Rezagholizadeh, Tiago H. Falk: An Exploration into the Performance of Unsupervised Cross-Task Speech Representations for "In the Wild" Edge Applications. CoRR abs/2305.05443 (2023)
- [i53] Ehsan Kamalloo, Xinyu Zhang, Odunayo Ogundepo, Nandan Thakur, David Alfonso-Hermelo, Mehdi Rezagholizadeh, Jimmy Lin: Evaluating Embedding APIs for Information Retrieval. CoRR abs/2305.06300 (2023)
- [i52] Vamsikrishna Chemudupati, Marzieh S. Tahaei, Heitor R. Guimarães, Arthur Pimentel, Anderson R. Avila, Mehdi Rezagholizadeh, Boxing Chen, Tiago H. Falk: On the Transferability of Whisper-based Representations for "In-the-Wild" Cross-Task Downstream Speech Applications. CoRR abs/2305.14546 (2023)
- [i51] Amirhossein Kazemnejad, Mehdi Rezagholizadeh, Prasanna Parthasarathi, Sarath Chandar: Measuring the Knowledge Acquisition-Utilization Gap in Pretrained Language Models. CoRR abs/2305.14775 (2023)
- [i50] Asaad Alghamdi, Xinyu Duan, Wei Jiang, Zhenhai Wang, Yimeng Wu, Qingrong Xia, Zhefeng Wang, Yi Zheng, Mehdi Rezagholizadeh, Baoxing Huai, Peilun Cheng, Abbas Ghaddar: AraMUS: Pushing the Limits of Data and Model Scale for Arabic Natural Language Processing. CoRR abs/2306.06800 (2023)
- [i49] Anderson R. Avila, Mehdi Rezagholizadeh, Chao Xing: Multimodal Audio-textual Architecture for Robust Spoken Language Understanding. CoRR abs/2306.06819 (2023)
- [i48] Runcheng Liu, Ahmad Rashid, Ivan Kobyzev, Mehdi Rezagholizadeh, Pascal Poupart: Attribute Controlled Dialogue Prompting. CoRR abs/2307.05228 (2023)
- [i47] Mojtaba Valipour, Mehdi Rezagholizadeh, Hossein Rajabzadeh, Marzieh S. Tahaei, Boxing Chen, Ali Ghodsi: SortedNet, a Place for Every Network and Every Network in its Place: Towards a Generalized Solution for Training Many-in-One Neural Networks. CoRR abs/2309.00255 (2023)
- [i46] Parsa Kavehzadeh, Mojtaba Valipour, Marzieh S. Tahaei, Ali Ghodsi, Boxing Chen, Mehdi Rezagholizadeh: Sorted LLaMA: Unlocking the Potential of Intermediate Layers of Large Language Models for Dynamic Inference Using Sorted Fine-Tuning (SoFT). CoRR abs/2309.08968 (2023)
- [i45] Arthur Pimentel, Heitor R. Guimarães, Anderson R. Avila, Mehdi Rezagholizadeh, Tiago H. Falk: On the Impact of Quantization and Pruning of Self-Supervised Speech Models for Downstream Speech Recognition Tasks "In-the-Wild". CoRR abs/2309.14462 (2023)
- [i44] Nandan Thakur, Luiz Bonifacio, Xinyu Zhang, Odunayo Ogundepo, Ehsan Kamalloo, David Alfonso-Hermelo, Xiaoguang Li, Qun Liu, Boxing Chen, Mehdi Rezagholizadeh, Jimmy Lin: NoMIRACL: Knowing When You Don't Know for Robust Multilingual Retrieval-Augmented Generation. CoRR abs/2312.11361 (2023)
- 2022
- [c42] Krtin Kumar, Peyman Passban, Mehdi Rezagholizadeh, Yiu Sing Lau, Qun Liu: From Fully Trained to Fully Random Embeddings: Improving Neural Machine Translation with Compact Word Embedding Tables. AAAI 2022: 10930-10937
- [c41] Ali Edalati, Marzieh S. Tahaei, Ahmad Rashid, Vahid Partovi Nia, James J. Clark, Mehdi Rezagholizadeh: Kronecker Decomposition for GPT Compression. ACL (2) 2022: 219-226
- [c40] Ehsan Kamalloo, Mehdi Rezagholizadeh, Ali Ghodsi: When Chosen Wisely, More Data Is What You Need: A Universal Sample-Efficient Strategy For Data Augmentation. ACL (Findings) 2022: 1048-1062
- [c39] Md. Akmal Haidar, Mehdi Rezagholizadeh, Abbas Ghaddar, Khalil Bibi, Philippe Langlais, Pascal Poupart: CILDA: Contrastive Data Augmentation Using Intermediate Layer Knowledge Distillation. COLING 2022: 4707-4713
- [c38] Mehdi Rezagholizadeh, Aref Jafari, Puneeth S. M. Saladi, Pranav Sharma, Ali Saheb Pasand, Ali Ghodsi: Pro-KD: Progressive Distillation by Following the Footsteps of the Teacher. COLING 2022: 4714-4727
- [c37] Joyce Zheng, Mehdi Rezagholizadeh, Peyman Passban: Dynamic Position Encoding for Transformers. COLING 2022: 5076-5084
- [c36] Abbas Ghaddar, Yimeng Wu, Sunyam Bagga, Ahmad Rashid, Khalil Bibi, Mehdi Rezagholizadeh, Chao Xing, Yasheng Wang, Xinyu Duan, Zhefeng Wang, Baoxing Huai, Xin Jiang, Qun Liu, Philippe Langlais: Revisiting Pre-trained Language Models and their Evaluation for Arabic Natural Language Processing. EMNLP 2022: 3135-3151
- [c35] Peng Lu, Ivan Kobyzev, Mehdi Rezagholizadeh, Ahmad Rashid, Ali Ghodsi, Philippe Langlais: Improving Generalization of Pre-trained Language Models via Stochastic Weight Averaging. EMNLP (Findings) 2022: 4948-4954
- [c34] Aref Jafari, Ivan Kobyzev, Mehdi Rezagholizadeh, Pascal Poupart, Ali Ghodsi: Continuation KD: Improved Knowledge Distillation through the Lens of Continuation Optimization. EMNLP (Findings) 2022: 5260-5269
- [c33] Md. Akmal Haidar, Nithin Anchuri, Mehdi Rezagholizadeh, Abbas Ghaddar, Philippe Langlais, Pascal Poupart: RAIL-KD: RAndom Intermediate Layer Mapping for Knowledge Distillation. NAACL-HLT (Findings) 2022: 1389-1400
- [c32] Marzieh S. Tahaei, Ella Charlaix, Vahid Partovi Nia, Ali Ghodsi, Mehdi Rezagholizadeh: KroneckerBERT: Significant Compression of Pre-trained Language Models Through Kronecker Decomposition and Knowledge Distillation. NAACL-HLT 2022: 2116-2127
- [c31] Ehsan Kamalloo, David Alfonso-Hermelo, Mehdi Rezagholizadeh: Huawei Noah's Ark Lab at TREC NeuCLIR 2022. TREC 2022
- [c30] Jimmy Lin, David Alfonso-Hermelo, Vitor Jeronymo, Ehsan Kamalloo, Carlos Lassance, Rodrigo Frassetto Nogueira, Odunayo Ogundepo, Mehdi Rezagholizadeh, Nandan Thakur, Jheng-Hong Yang, Xinyu Zhang: Simple Yet Effective Neural Ranking and Reranking Baselines for Cross-Lingual Information Retrieval. TREC 2022
- [c29] Kira A. Selby, Ahmad Rashid, Ivan Kobyzev, Mehdi Rezagholizadeh, Pascal Poupart: Learning functions on multiple sets using multi-set transformers. UAI 2022: 1760-1770
- [i43] Ehsan Kamalloo, Mehdi Rezagholizadeh, Ali Ghodsi: When Chosen Wisely, More Data Is What You Need: A Universal Sample-Efficient Strategy For Data Augmentation. CoRR abs/2203.09391 (2022)
- [i42] Md. Akmal Haidar, Mehdi Rezagholizadeh, Abbas Ghaddar, Khalil Bibi, Philippe Langlais, Pascal Poupart: CILDA: Contrastive Data Augmentation using Intermediate Layer Knowledge Distillation. CoRR abs/2204.07674 (2022)
- [i41] Joyce Zheng, Mehdi Rezagholizadeh, Peyman Passban: Dynamic Position Encoding for Transformers. CoRR abs/2204.08142 (2022)
- [i40] Abbas Ghaddar, Yimeng Wu, Sunyam Bagga, Ahmad Rashid, Khalil Bibi, Mehdi Rezagholizadeh, Chao Xing, Yasheng Wang, Duan Xinyu, Zhefeng Wang, Baoxing Huai, Xin Jiang, Qun Liu, Philippe Langlais: Revisiting Pre-trained Language Models and their Evaluation for Arabic Natural Language Understanding. CoRR abs/2205.10687 (2022)
- [i39] Ivan Kobyzev, Aref Jafari, Mehdi Rezagholizadeh, Tianda Li, Alan Do-Omri, Peng Lu, Ali Ghodsi, Pascal Poupart: Towards Understanding Label Regularization for Fine-tuning Pre-trained Language Models. CoRR abs/2205.12428 (2022)
- [i38] Kira A. Selby, Ahmad Rashid, Ivan Kobyzev, Mehdi Rezagholizadeh, Pascal Poupart: Learning Functions on Multiple Sets using Multi-Set Transformers. CoRR abs/2206.15444 (2022)
- [i37] Mohammadreza Tayaranian, Alireza Ghaffari, Marzieh S. Tahaei, Mehdi Rezagholizadeh, Masoud Asgharian, Vahid Partovi Nia: Integer Fine-tuning of Transformer-based Models. CoRR abs/2209.09815 (2022)
- [i36] Mojtaba Valipour, Mehdi Rezagholizadeh, Ivan Kobyzev, Ali Ghodsi: DyLoRA: Parameter Efficient Tuning of Pre-trained Models using Dynamic Search-Free Low-Rank Adaptation. CoRR abs/2210.07558 (2022)
- [i35] Xinyu Zhang, Nandan Thakur, Odunayo Ogundepo, Ehsan Kamalloo, David Alfonso-Hermelo, Xiaoguang Li, Qun Liu, Mehdi Rezagholizadeh, Jimmy Lin: Making a MIRACL: Multilingual Information Retrieval Across a Continuum of Languages. CoRR abs/2210.09984 (2022)
- [i34] Heitor R. Guimarães, Arthur Pimentel, Anderson R. Avila, Mehdi Rezagholizadeh, Tiago H. Falk: Improving the Robustness of DistilHuBERT to Unseen Noisy Conditions via Data Augmentation, Curriculum Learning, and Multi-Task Enhancement. CoRR abs/2211.06562 (2022)
- [i33] Peng Lu, Ivan Kobyzev, Mehdi Rezagholizadeh, Ahmad Rashid, Ali Ghodsi, Philippe Langlais: Improving Generalization of Pre-trained Language Models via Stochastic Weight Averaging. CoRR abs/2212.05956 (2022)
- [i32] Aref Jafari, Ivan Kobyzev, Mehdi Rezagholizadeh, Pascal Poupart, Ali Ghodsi: Continuation KD: Improved Knowledge Distillation through the Lens of Continuation Optimization. CoRR abs/2212.05998 (2022)
- [i31] Ali Edalati, Marzieh S. Tahaei, Ivan Kobyzev, Vahid Partovi Nia, James J. Clark, Mehdi Rezagholizadeh: KronA: Parameter Efficient Tuning with Kronecker Adapter. CoRR abs/2212.10650 (2022)
- 2021
- [j1] Abbas Ghaddar, Philippe Langlais, Ahmad Rashid, Mehdi Rezagholizadeh: Context-aware Adversarial Training for Name Regularity Bias in Named Entity Recognition. Trans. Assoc. Comput. Linguistics 9: 586-604 (2021)
- [c28] Peyman Passban, Yimeng Wu, Mehdi Rezagholizadeh, Qun Liu: ALP-KD: Attention-Based Layer Projection for Knowledge Distillation. AAAI 2021: 13657-13665
- [c27] Ahmad Rashid, Vasileios Lioutas, Mehdi Rezagholizadeh: MATE-KD: Masked Adversarial TExt, a Companion to Knowledge Distillation. ACL/IJCNLP (1) 2021: 1062-1071
- [c26] Abbas Ghaddar, Philippe Langlais, Mehdi Rezagholizadeh, Ahmad Rashid: End-to-End Self-Debiasing Framework for Robust NLU Training. ACL/IJCNLP (Findings) 2021: 1923-1929
- [c25] Ehsan Kamalloo, Mehdi Rezagholizadeh, Peyman Passban, Ali Ghodsi: Not Far Away, Not So Close: Sample Efficient Nearest Neighbour Data Augmentation via MiniMax. ACL/IJCNLP (Findings) 2021: 3522-3533
- [c24] Shivendra Bhardwaj, Abbas Ghaddar, Ahmad Rashid, Khalil Bibi, Chengyang Li, Ali Ghodsi, Philippe Langlais, Mehdi Rezagholizadeh: Knowledge Distillation with Noisy Labels for Natural Language Understanding. W-NUT 2021: 297-303
- [c23] Aref Jafari, Mehdi Rezagholizadeh, Pranav Sharma, Ali Ghodsi: Annealing Knowledge Distillation. EACL 2021: 2493-2504
- [c22] Tianda Li, Ahmad Rashid, Aref Jafari, Pranav Sharma, Ali Ghodsi, Mehdi Rezagholizadeh: How to Select One Among All? An Empirical Study Towards the Robustness of Knowledge Distillation in Natural Language Understanding. EMNLP (Findings) 2021: 750-762
- [c21] Peng Lu, Abbas Ghaddar, Ahmad Rashid, Mehdi Rezagholizadeh, Ali Ghodsi, Philippe Langlais: RW-KD: Sample-wise Loss Terms Re-Weighting for Knowledge Distillation. EMNLP (Findings) 2021: 3145-3152
- [c20] Ahmad Rashid, Vasileios Lioutas, Abbas Ghaddar, Mehdi Rezagholizadeh: Towards Zero-Shot Knowledge Distillation for Natural Language Processing. EMNLP (1) 2021: 6551-6561
- [c19] Yimeng Wu, Mehdi Rezagholizadeh, Abbas Ghaddar, Md. Akmal Haidar, Ali Ghodsi: Universal-KD: Attention-based Output-Grounded Intermediate Layer Knowledge Distillation. EMNLP (1) 2021: 7649-7661
- [c18] Md. Akmal Haidar, Mehdi Rezagholizadeh: Fine-Tuning of Pre-Trained End-to-End Speech Recognition with Generative Adversarial Networks. ICASSP 2021: 6204-6208
- [c17] Md. Akmal Haidar, Chao Xing, Mehdi Rezagholizadeh: Transformer-Based ASR Incorporating Time-Reduction Layer and Fine-Tuning with Self-Knowledge Distillation. Interspeech 2021: 2102-2106
- [c16] David Alfonso-Hermelo, Ahmad Rashid, Abbas Ghaddar, Philippe Langlais, Mehdi Rezagholizadeh: NATURE: Natural Auxiliary Text Utterances for Realistic Spoken Language Evaluation. NeurIPS Datasets and Benchmarks 2021
- [i30] Md. Akmal Haidar, Chao Xing, Mehdi Rezagholizadeh: Transformer-based ASR Incorporating Time-reduction Layer and Fine-tuning with Self-Knowledge Distillation. CoRR abs/2103.09903 (2021)
- [i29] Md. Akmal Haidar, Mehdi Rezagholizadeh: Fine-tuning of Pre-trained End-to-end Speech Recognition with Generative Adversarial Networks. CoRR abs/2103.13329 (2021)
- [i28] Aref Jafari, Mehdi Rezagholizadeh, Pranav Sharma, Ali Ghodsi: Annealing Knowledge Distillation. CoRR abs/2104.07163 (2021)
- [i27] Kira A. Selby, Yinong Wang, Ruizhe Wang, Peyman Passban, Ahmad Rashid, Mehdi Rezagholizadeh, Pascal Poupart: Robust Embeddings Via Distributions. CoRR abs/2104.08420 (2021)
- [i26] Krtin Kumar, Mehdi Rezagholizadeh, Yiu Sing Lau, Qun Liu: Improving Neural Machine Translation with Compact Word Embedding Tables. CoRR abs/2104.08677 (2021)
- [i25] Ahmad Rashid, Vasileios Lioutas, Mehdi Rezagholizadeh: MATE-KD: Masked Adversarial TExt, a Companion to Knowledge Distillation. CoRR abs/2105.05912 (2021)
- [i24] Ehsan Kamalloo, Mehdi Rezagholizadeh, Peyman Passban, Ali Ghodsi: Not Far Away, Not So Close: Sample Efficient Nearest Neighbour Data Augmentation via MiniMax. CoRR abs/2105.13608 (2021)
- [i23] Abbas Ghaddar, Philippe Langlais, Ahmad Rashid, Mehdi Rezagholizadeh: Context-aware Adversarial Training for Name Regularity Bias in Named Entity Recognition. CoRR abs/2107.11610 (2021)
- [i22] Abbas Ghaddar, Philippe Langlais, Mehdi Rezagholizadeh, Ahmad Rashid: End-to-End Self-Debiasing Framework for Robust NLU Training. CoRR abs/2109.02071 (2021)
- [i21] Tianda Li, Ahmad Rashid, Aref Jafari, Pranav Sharma, Ali Ghodsi, Mehdi Rezagholizadeh: How to Select One Among All? An Extensive Empirical Study Towards the Robustness of Knowledge Distillation in Natural Language Understanding. CoRR abs/2109.05696 (2021)
- [i20] Marzieh S. Tahaei, Ella Charlaix, Vahid Partovi Nia, Ali Ghodsi, Mehdi Rezagholizadeh: KroneckerBERT: Learning Kronecker Decomposition for Pre-trained Language Models via Knowledge Distillation. CoRR abs/2109.06243 (2021)
- [i19] Shivendra Bhardwaj, Abbas Ghaddar, Ahmad Rashid, Khalil Bibi, Chengyang Li, Ali Ghodsi, Philippe Langlais, Mehdi Rezagholizadeh: Knowledge Distillation with Noisy Labels for Natural Language Understanding. CoRR abs/2109.10147 (2021)
- [i18] Md. Akmal Haidar, Nithin Anchuri, Mehdi Rezagholizadeh, Abbas Ghaddar, Philippe Langlais, Pascal Poupart: RAIL-KD: RAndom Intermediate Layer Mapping for Knowledge Distillation. CoRR abs/2109.10164 (2021)
- [i17] Ali Edalati, Marzieh S. Tahaei, Ahmad Rashid, Vahid Partovi Nia, James J. Clark, Mehdi Rezagholizadeh: Kronecker Decomposition for GPT Compression. CoRR abs/2110.08152 (2021)
- [i16] Tianda Li, Yassir El Mesbahi, Ivan Kobyzev, Ahmad Rashid, Atif Mahmud, Nithin Anchuri, Habib Hajimolahoseini, Yang Liu, Mehdi Rezagholizadeh: A Short Study on Compressing Decoder-Based Language Models. CoRR abs/2110.08460 (2021)
- [i15] Mehdi Rezagholizadeh, Aref Jafari, Puneeth Salad, Pranav Sharma, Ali Saheb Pasand, Ali Ghodsi: Pro-KD: Progressive Distillation by Following the Footsteps of the Teacher. CoRR abs/2110.08532 (2021)
- [i14] David Alfonso-Hermelo, Ahmad Rashid, Abbas Ghaddar, Philippe Langlais, Mehdi Rezagholizadeh: NATURE: Natural Auxiliary Text Utterances for Realistic Spoken Language Evaluation. CoRR abs/2111.05196 (2021)
- [i13] Abbas Ghaddar, Yimeng Wu, Ahmad Rashid, Khalil Bibi, Mehdi Rezagholizadeh, Chao Xing, Yasheng Wang, Duan Xinyu, Zhefeng Wang, Baoxing Huai, Xin Jiang, Qun Liu, Philippe Langlais: JABER: Junior Arabic BERt. CoRR abs/2112.04329 (2021)
- 2020
- [c15] Gabriele Prato, Ella Charlaix, Mehdi Rezagholizadeh: Fully Quantized Transformer for Machine Translation. EMNLP (Findings) 2020: 1-14
- [c14] Yimeng Wu, Peyman Passban, Mehdi Rezagholizadeh, Qun Liu: Why Skip If You Can Combine: A Simple Knowledge Distillation Technique for Intermediate Layers. EMNLP (1) 2020: 1016-1021
- [c13] Vasileios Lioutas, Ahmad Rashid, Krtin Kumar, Md. Akmal Haidar, Mehdi Rezagholizadeh: Improving Word Embedding Factorization for Compression using Distilled Nonlinear Neural Decomposition. EMNLP (Findings) 2020: 2774-2784
- [c12] Ahmad Rashid, Alan Do-Omri, Md. Akmal Haidar, Qun Liu, Mehdi Rezagholizadeh: From Unsupervised Machine Translation to Adversarial Text Generation. ICASSP 2020: 8194-8198
- [i12] Yimeng Wu, Peyman Passban, Mehdi Rezagholizadeh, Qun Liu: Why Skip If You Can Combine: A Simple Knowledge Distillation Technique for Intermediate Layers. CoRR abs/2010.03034 (2020)
- [i11] Ahmad Rashid, Alan Do-Omri, Md. Akmal Haidar, Qun Liu, Mehdi Rezagholizadeh: From Unsupervised Machine Translation To Adversarial Text Generation. CoRR abs/2011.05449 (2020)
- [i10] Peyman Passban, Yimeng Wu, Mehdi Rezagholizadeh, Qun Liu: ALP-KD: Attention-Based Layer Projection for Knowledge Distillation. CoRR abs/2012.14022 (2020)
- [i9] Ahmad Rashid, Vasileios Lioutas, Abbas Ghaddar, Mehdi Rezagholizadeh: Towards Zero-Shot Knowledge Distillation for Natural Language Processing. CoRR abs/2012.15495 (2020)
2010 – 2019
- 2019
- [c11] Yue Dong, Zichao Li, Mehdi Rezagholizadeh, Jackie Chi Kit Cheung: EditNTS: An Neural Programmer-Interpreter Model for Sentence Simplification through Explicit Editing. ACL (1) 2019: 3393-3402
- [c10] Md. Akmal Haidar, Mehdi Rezagholizadeh: TextKD-GAN: Text Generation Using Knowledge Distillation and Generative Adversarial Networks. Canadian AI 2019: 107-118
- [c9] Jules Gagnon-Marchand, Hamed Sadeghi, Md. Akmal Haidar, Mehdi Rezagholizadeh: SALSA-TEXT: Self Attentive Latent Space Based Adversarial Text Generation. Canadian AI 2019: 119-131
- [c8] Md. Akmal Haidar, Mehdi Rezagholizadeh, Alan Do-Omri, Ahmad Rashid: Latent Code and Text-based Generative Adversarial Networks for Soft-text Generation. NAACL-HLT (1) 2019: 2248-2258
- [i8] Ahmad Rashid, Alan Do-Omri, Md. Akmal Haidar, Qun Liu, Mehdi Rezagholizadeh: Bilingual-GAN: A Step Towards Parallel Text Generation. CoRR abs/1904.04742 (2019)
- [i7] Md. Akmal Haidar, Mehdi Rezagholizadeh, Alan Do-Omri, Ahmad Rashid: Latent Code and Text-based Generative Adversarial Networks for Soft-text Generation. CoRR abs/1904.07293 (2019)
- [i6] Md. Akmal Haidar, Mehdi Rezagholizadeh: TextKD-GAN: Text Generation using KnowledgeDistillation and Generative Adversarial Networks. CoRR abs/1905.01976 (2019)
- [i5] Yue Dong, Zichao Li, Mehdi Rezagholizadeh, Jackie Chi Kit Cheung: EditNTS: An Neural Programmer-Interpreter Model for Sentence Simplification through Explicit Editing. CoRR abs/1906.08104 (2019)
- [i4] Vasileios Lioutas, Ahmad Rashid, Krtin Kumar, Md. Akmal Haidar, Mehdi Rezagholizadeh: Distilled embedding: non-linear embedding factorization using knowledge distillation. CoRR abs/1910.06720 (2019)
- [i3] Gabriele Prato, Ella Charlaix, Mehdi Rezagholizadeh: Fully Quantized Transformer for Improved Translation. CoRR abs/1910.10485 (2019)
- [i2] Alex Bie, Bharat Venkitesh, Joao Monteiro, Md. Akmal Haidar, Mehdi Rezagholizadeh: Fully Quantizing a Simplified Transformer for End-to-end Speech Recognition. CoRR abs/1911.03604 (2019)
- 2018
- [c7] Mehdi Rezagholizadeh, Md. Akmal Haidar: Reg-Gan: Semi-Supervised Learning Based on Generative Adversarial Networks for Regression. ICASSP 2018: 2806-2810
- [i1] Jules Gagnon-Marchand, Hamed Sadeghi, Md. Akmal Haidar, Mehdi Rezagholizadeh: SALSA-TEXT: self attentive latent space based adversarial text generation. CoRR abs/1809.11155 (2018)
- 2016
- [c6] Mehdi Rezagholizadeh, Tara Akhavan, Afsoon Soudi, Hannes Kaufmann, James J. Clark: A Retargeting Approach for Mesopic Vision: Simulation and Compensation. Color Imaging: Displaying, Processing, Hardcopy, and Applications 2016: 1-12
- 2015
- [c5] Mehdi Rezagholizadeh, James J. Clark: Image Sensor Modeling: Noise and Linear Transformation Impacts on the Color Gamut. CRV 2015: 169-175
- 2014
- [c4] Mehdi Rezagholizadeh, James J. Clark: Photon Detection and Color Perception at Low Light Levels. CRV 2014: 283-290
- [c3] Mehdi Rezagholizadeh, James J. Clark: Image Sensor Modeling: Color Measurement at Low Light Levels. CIC 2014: 265-275
- 2013
- [c2] Mehdi Rezagholizadeh, James J. Clark: Edge-Based and Efficient Chromaticity Spatio-spectral Models for Color Constancy. CRV 2013: 188-195
- [c1] Mehdi Rezagholizadeh, James J. Clark: Maximum Entropy Spectral Modeling Approach to Mesopic Tone Mapping. CIC 2013: 154-159
last updated on 2024-11-28 21:23 CET by the dblp team
all metadata released as open data under CC0 1.0 license