20th BioNLP@NAACL-HLT 2021: Online
- Dina Demner-Fushman, Kevin Bretonnel Cohen, Sophia Ananiadou, Junichi Tsujii (eds.): Proceedings of the 20th Workshop on Biomedical Language Processing, BioNLP@NAACL-HLT 2021, Online, June 11, 2021. Association for Computational Linguistics 2021, ISBN 978-1-954085-40-4
- Peng Su, Yifan Peng, K. Vijay-Shanker: Improving BERT Model Using Contrastive Learning for Biomedical Relation Extraction. 1-10
- Dongfang Xu, Steven Bethard: Triplet-Trained Vector Space and Sieve-Based Search Improve Biomedical Concept Normalization. 11-22
- Pieter Fivez, Simon Suster, Walter Daelemans: Scalable Few-Shot Learning of Robust Biomedical Name Representations. 23-29
- Gjorgjina Cenikj, Tome Eftimov, Barbara Korousic-Seljak: SAFFRON: tranSfer leArning For Food-disease RelatiOn extractioN. 30-40
- Madhumita Sushil, Simon Suster, Walter Daelemans: Are we there yet? Exploring clinical domain knowledge of BERT models. 41-53
- Damian Pascual, Sandro Luck, Roger Wattenhofer: Towards BERT-based Automatic ICD Coding: Limitations and Opportunities. 54-63
- Preethi Raghavan, Jennifer J. Liang, Diwakar Mahajan, Rachita Chandra, Peter Szolovits: emrKBQA: A Clinical Knowledge-Base Question Answering Dataset. 64-73
- Asma Ben Abacha, Yassine Mrabet, Yuhao Zhang, Chaitanya Shivade, Curtis P. Langlotz, Dina Demner-Fushman: Overview of the MEDIQA 2021 Shared Task on Summarization in the Medical Domain. 74-85
- Mario Sänger, Leon Weber, Ulf Leser: WBI at MEDIQA 2021: Summarizing Consumer Health Questions with Generative Transformers. 86-95
- Wei Zhu, Yilong He, Ling Chai, Yunxiao Fan, Yuan Ni, Guotong Xie, Xiaoling Wang: paht_nlp @ MEDIQA 2021: Multi-grained Query Focused Multi-Answer Summarization. 96-102
- Songtai Dai, Quan Wang, Yajuan Lyu, Yong Zhu: BDKG at MEDIQA 2021: System Report for the Radiology Report Summarization Task. 103-111
- Yifan He, Mosha Chen, Songfang Huang: damo_nlp at MEDIQA 2021: Knowledge-based Preprocessing and Coverage-oriented Reranking for Medical Question Summarization. 112-118
- Vladimir Araujo, Andrés Carvallo, Carlos Aspillaga, Camilo Thorne, Denis Parra: Stress Test Evaluation of Biomedical Word Embeddings. 119-125
- William P. Hogan, Yoshiki Vazquez-Baeza, Yannis Katsis, Tyler Baldwin, Ho-Cheol Kim, Chun-Nan Hsu: BLAR: Biomedical Local Acronym Resolver. 126-130
- Amelie Wührl, Roman Klinger: Claim Detection in Biomedical Twitter Posts. 131-142
- Kamal Raj Kanakarajan, Bhuvana Kundumani, Malaikannan Sankarasubbu: BioELECTRA: Pretrained Biomedical text Encoder using Discriminators. 143-154
- Zelalem Gero, Joyce C. Ho: Word centrality constrained representation for keyphrase extraction. 155-161
- Shogo Ujiie, Hayate Iso, Shuntaro Yada, Shoko Wakamiya, Eiji Aramaki: End-to-end Biomedical Entity Linking with Span-based Dictionary Matching. 162-167
- Mark-Christoph Müller, Sucheta Ghosh, Ulrike Wittig, Maja Rey: Word-Level Alignment of Paper Documents with their Electronic Full-Text Counterparts. 168-179
- Zheng Yuan, Yijia Liu, Chuanqi Tan, Songfang Huang, Fei Huang: Improving Biomedical Pretrained Language Models with Knowledge. 180-190
- Chen Lin, Timothy A. Miller, Dmitriy Dligach, Steven Bethard, Guergana Savova: EntityBERT: Entity-centric Masking Strategy for Model Pretraining for the Clinical Domain. 191-201
- Madhumita Sushil, Simon Suster, Walter Daelemans: Contextual explanation rules for neural clinical classifiers. 202-212
- Yang Liu, Yuanhe Tian, Tsung-Hui Chang, Song Wu, Xiang Wan, Yan Song: Exploring Word Segmentation and Medical Concept Recognition for Chinese Medical Texts. 213-220
- Sultan Alrowili, Vijay Shanker: BioM-Transformers: Building Large Biomedical Language Models with BERT, ALBERT and ELECTRA. 221-227
- Minghao Zhu, Keyuan Jiang: Semi-Supervised Language Models for Identification of Personal Health Experiential from Twitter Data: A Case for Medication Effects. 228-237
- Emilee Holtzapple, Brent Cochran, Natasa Miskov-Zivanov: Context-aware query design combines knowledge and data for efficient reading and reasoning. 238-246
- Lana Yeganova, Won Kim, Donald C. Comeau, W. John Wilbur, Zhiyong Lu: Measuring the relative importance of full text sections for information retrieval from scientific literature. 247-256
- Khalil Mrini, Franck Dernoncourt, Seunghyun Yoon, Trung Bui, Walter Chang, Emilia Farcas, Ndapa Nakashole: UCSD-Adobe at MEDIQA 2021: Transfer Learning and Answer Sentence Selection for Medical Summarization. 257-262
- Liwen Xu, Yan Zhang, Lei Hong, Yi Cai, Szui Sung: ChicHealth @ MEDIQA 2021: Exploring the limits of pre-trained seq2seq models for medical summarization. 263-267
- Lung-Hao Lee, Po-Han Chen, Yu-Xiang Zeng, Po-Lei Lee, Kuo-Kai Shyu: NCUEE-NLP at MEDIQA 2021: Health Question Summarization Using PEGASUS Transformers. 268-272
- Spandana Balumuri, Sony Bachina, Sowmya Kamath S.: SB_NITK at MEDIQA 2021: Leveraging Transfer Learning for Question Summarization in Medical Domain. 273-279
- Ravi Kondadadi, Sahil Manchanda, Jason Ngo, Ronan McCormack: Optum at MEDIQA 2021: Abstractive Summarization of Radiology Reports using simple BART Finetuning. 280-284
- Jean-Benoit Delbrouck, Cassie Zhang, Daniel L. Rubin: QIAI at MEDIQA 2021: Multimodal Radiology Report Summarization. 285-290
- Shweta Yadav, Mourad Sarrouti, Deepak Gupta: NLM at MEDIQA 2021: Transfer Learning-based Approaches for Consumer Question and Multi-Answer Summarization. 291-301
- Diwakar Mahajan, Ching-Huei Tsou, Jennifer J. Liang: IBMResearch at MEDIQA 2021: Toward Improving Factual Correctness of Radiology Report Abstractive Summarization. 302-310
- Duy-Cat Can, Vo Nguyen Quoc Bao, Quoc-Hung Duong, Minh-Quang Nguyen, Huy-Son Nguyen, Linh Nguyen Tran Ngoc, Quang-Thuy Ha, Mai-Vu Tran: UETrice at MEDIQA 2021: A Prosper-thy-neighbour Extractive Multi-document Summarization Model. 311-319
- Jooyeon Lee, Huong Dang, Özlem Uzuner, Sam Henry: MNLP at MEDIQA 2021: Fine-Tuning PEGASUS for Consumer Health Question Summarization. 320-327
- Hoang-Quynh Le, Quoc-An Nguyen, Quoc-Hung Duong, Minh-Quang Nguyen, Huy-Son Nguyen, Tam Doan Thanh, Hai-Yen Thi Vuong, Trang M. Nguyen: UETfishes at MEDIQA 2021: Standing-on-the-Shoulders-of-Giants Model for Abstractive Multi-answer Summarization. 328-335