NAACL 2024: Mexico City, Mexico - Student Research Workshop
- Yang (Trista) Cao, Isabel Papadimitriou, Anaelia Ovalle, Marcos Zampieri, Francis Ferraro, Swabha Swayamdipta:
  Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop, NAACL 2024, Mexico City, Mexico, June 18, 2024. Association for Computational Linguistics 2024, ISBN 979-8-89176-117-9
- Shih-Cheng Huang, Shih-Heng Wang, Min-Han Shih, Saurav Sahay, Hung-yi Lee:
  Systematic Analysis for Pretrained Language Model Priming for Parameter-Efficient Fine-tuning. 1-7
- Haoran Yang, Hongyuan Lu, Wai Lam:
  Rephrasing Invokes Better Generations for Large Language Models. 8-15
- Haoran Yang, Hongyuan Lu, Wai Lam, Deng Cai:
  Exploring Compositional Generalization of Large Language Models. 16-24
- Dahyun Jung, Sugyeong Eo, Chanjun Park, Heuiseok Lim:
  Explainable CED: A Dataset for Explainable Critical Error Detection in Machine Translation. 25-35
- Jean-Thomas Baillargeon, Luc Lamontagne:
  SMARTR: A Framework for Early Detection using Survival Analysis of Longitudinal Texts. 36-41
- Richard Zhu:
  Fast Exact Retrieval for Nearest-neighbor Lookup (FERN). 42-47
- Yunfei Luo, Yuyang Liu, Rukai Cai, Tauhidur Rahman:
  Start Simple: Progressive Difficulty Multitask Learning. 48-55
- Joe Stacey, Jianpeng Cheng, John Torr, Tristan Guigue, Joris Driesen, Alexandru Coca, Mark Gaynor, Anders Johannsen:
  LUCID: LLM-Generated Utterances for Complex and Interesting Dialogues. 56-74
- Sankalp Bahad, Pruthwik Mishra, Parameswari Krishnamurthy, Dipti Misra Sharma:
  Fine-tuning Pre-trained Named Entity Recognition Models For Indian Languages. 75-82
- Selene Baez Santamaría:
  Knowledge-centered conversational agents with a drive to learn. 83-92
- Seungyoon Lee, Dong Kim, Dahyun Jung, Chanjun Park, Heuiseok Lim:
  Exploring Inherent Biases in LLMs within Korean Social Context: A Comparative Analysis of ChatGPT and GPT-4. 93-104
- Alina Leippert, Tatiana Anikina, Bernd Kiefer, Josef van Genabith:
  To Clarify or not to Clarify: A Comparative Analysis of Clarification Classification with Fine-Tuning, Prompt Tuning, and Prompt Engineering. 105-115
- Ryohei Kamei, Daiki Shiono, Reina Akama, Jun Suzuki:
  Detecting Response Generation Not Requiring Factual Judgment. 116-123
- Wondimagegnhue Tufa, Ilia Markov, Piek Vossen:
  Unknown Script: Impact of Script on Cross-Lingual Transfer. 124-129
- Mizuki Kondo, Daisuke Kawahara, Toshiyuki Kurabayashi:
  Improving Repository-level Code Search with Text Conversion. 130-137
- Minsu Park, Seyeon Choi, Chanyeol Choi, Jun-Seong Kim, Jy-yong Sohn:
  Improving Multi-lingual Alignment Through Soft Contrastive Learning. 138-145
- Aboubacar Tuo, Romaric Besançon, Olivier Ferret, Julien Tourille:
  Few-Shot Event Argument Extraction Based on a Meta-Learning Approach. 146-153
- Rintaro Enomoto, Arseny Tolmachev, Takuro Niitsuma, Shuhei Kurita, Daisuke Kawahara:
  Investigating Web Corpus Filtering Methods for Language Model Development in Japanese. 154-160
- Jaap Kruijt:
  Referring Expressions in Human-Robot Common Ground: A Thesis Proposal. 161-167
- Mohammed Ataaur Rahaman, Julia Ive:
  Source Code is a Graph, Not a Sequence: A Cross-Lingual Perspective on Code Clone Detection. 168-199
- Chiyu Zhang, Honglong Cai, Yuezhang Li, Yuexin Wu, Le Hou, Muhammad Abdul-Mageed:
  Distilling Text Style Transfer With Self-Explanation From LLMs. 200-211
- Hao Wang, Tetsuro Morimura, Ukyo Honda, Daisuke Kawahara:
  Reinforcement Learning for Edit-Based Non-Autoregressive Neural Machine Translation. 212-218
- Koki Horiguchi, Tomoyuki Kajiwara, Yuki Arase, Takashi Ninomiya:
  Evaluation Dataset for Japanese Medical Text Simplification. 219-225
- Reon Kajikawa, Keiichiro Yamada, Tomoyuki Kajiwara, Takashi Ninomiya:
  Multi-Source Text Classification for Multilingual Sentence Encoder with Machine Translation. 226-232
- Hasti Toossi, Guo Qing Huai, Jinyu Liu, Eric Khiu, A. Seza Dogruöz, En-shiun Lee:
  A Reproducibility Study on Quantifying Language Similarity: The Impact of Missing Values in the URIEL Knowledge Base. 233-241
- Yuki Zenimoto, Ryo Hasegawa, Takehito Utsuro, Masaharu Yoshioka, Noriko Kando:
  Coding Open-Ended Responses using Pseudo Response Generation by Large Language Models. 242-254
- Qinyuan Ye:
  Cross-Task Generalization Abilities of Large Language Models. 255-262
- Zihan Wang, Naoki Yoshinaga:
  Commentary Generation from Data Records of Multiplayer Strategy Esports Game. 263-271
- Michiel van der Meer:
  Facilitating Opinion Diversity through Hybrid NLP Approaches. 272-284
- Gokul Srinivasagan, Simon Ostermann:
  HybridBERT - Making BERT Pretraining More Efficient Through Hybrid Mixture of Attention Mechanisms. 285-291