Liunian Harold Li
2020 – today
- 2024
  - [c17] Sahana Ramnath, Brihi Joshi, Skyler Hallinan, Ximing Lu, Liunian Harold Li, Aaron Chan, Jack Hessel, Yejin Choi, Xiang Ren: Tailoring Self-Rationalizers with Multi-Reward Distillation. ICLR 2024
  - [i19] Wenbo Hu, Zi-Yi Dou, Liunian Harold Li, Amita Kamath, Nanyun Peng, Kai-Wei Chang: Matryoshka Query Transformer for Large Vision-Language Models. CoRR abs/2405.19315 (2024)
- 2023
  - [c16] Masoud Monajatipoor, Liunian Harold Li, Mozhdeh Rouhsedaghat, Lin Yang, Kai-Wei Chang: MetaVL: Transferring In-Context Learning Ability From Language Models to Vision-Language Models. ACL (2) 2023: 495-508
  - [c15] Liunian Harold Li, Jack Hessel, Youngjae Yu, Xiang Ren, Kai-Wei Chang, Yejin Choi: Symbolic Chain-of-Thought Distillation: Small Models Can Also "Think" Step-by-Step. ACL (1) 2023: 2665-2679
  - [c14] Honghua Zhang, Liunian Harold Li, Tao Meng, Kai-Wei Chang, Guy Van den Broeck: On the Paradox of Learning to Reason from Data. IJCAI 2023: 3365-3373
  - [c13] Liunian Harold Li, Zi-Yi Dou, Nanyun Peng, Kai-Wei Chang: DesCo: Learning Object Recognition with Rich Language Descriptions. NeurIPS 2023
  - [i18] Masoud Monajatipoor, Liunian Harold Li, Mozhdeh Rouhsedaghat, Lin F. Yang, Kai-Wei Chang: MetaVL: Transferring In-Context Learning Ability From Language Models to Vision-Language Models. CoRR abs/2306.01311 (2023)
  - [i17] Liunian Harold Li, Jack Hessel, Youngjae Yu, Xiang Ren, Kai-Wei Chang, Yejin Choi: Symbolic Chain-of-Thought Distillation: Small Models Can Also "Think" Step-by-Step. CoRR abs/2306.14050 (2023)
  - [i16] Liunian Harold Li, Zi-Yi Dou, Nanyun Peng, Kai-Wei Chang: DesCo: Learning Object Recognition with Rich Language Descriptions. CoRR abs/2306.14060 (2023)
  - [i15] Sahana Ramnath, Brihi Joshi, Skyler Hallinan, Ximing Lu, Liunian Harold Li, Aaron Chan, Jack Hessel, Yejin Choi, Xiang Ren: Tailoring Self-Rationalizers with Multi-Reward Distillation. CoRR abs/2311.02805 (2023)
- 2022
  - [c12] Zhecan Wang, Haoxuan You, Liunian Harold Li, Alireza Zareian, Suji Park, Yiqing Liang, Kai-Wei Chang, Shih-Fu Chang: SGEITL: Scene Graph Enhanced Image-Text Learning for Visual Commonsense Reasoning. AAAI 2022: 5914-5922
  - [c11] Liunian Harold Li, Pengchuan Zhang, Haotian Zhang, Jianwei Yang, Chunyuan Li, Yiwu Zhong, Lijuan Wang, Lu Yuan, Lei Zhang, Jenq-Neng Hwang, Kai-Wei Chang, Jianfeng Gao: Grounded Language-Image Pre-training. CVPR 2022: 10955-10965
  - [c10] Yiwu Zhong, Jianwei Yang, Pengchuan Zhang, Chunyuan Li, Noel Codella, Liunian Harold Li, Luowei Zhou, Xiyang Dai, Lu Yuan, Yin Li, Jianfeng Gao: RegionCLIP: Region-based Language-Image Pretraining. CVPR 2022: 16772-16782
  - [c9] Da Yin, Hritik Bansal, Masoud Monajatipoor, Liunian Harold Li, Kai-Wei Chang: GeoMLAMA: Geo-Diverse Commonsense Probing on Multilingual Pre-Trained Language Models. EMNLP 2022: 2039-2055
  - [c8] Sheng Shen, Liunian Harold Li, Hao Tan, Mohit Bansal, Anna Rohrbach, Kai-Wei Chang, Zhewei Yao, Kurt Keutzer: How Much Can CLIP Benefit Vision-and-Language Tasks? ICLR 2022
  - [c7] Masoud Monajatipoor, Mozhdeh Rouhsedaghat, Liunian Harold Li, C.-C. Jay Kuo, Aichi Chien, Kai-Wei Chang: BERTHop: An Effective Vision-and-Language Model for Chest X-ray Disease Diagnosis. MICCAI (5) 2022: 725-734
  - [c6] Chunyuan Li, Haotian Liu, Liunian Harold Li, Pengchuan Zhang, Jyoti Aneja, Jianwei Yang, Ping Jin, Houdong Hu, Zicheng Liu, Yong Jae Lee, Jianfeng Gao: ELEVATER: A Benchmark and Toolkit for Evaluating Language-Augmented Visual Models. NeurIPS 2022
  - [c5] Haotian Zhang, Pengchuan Zhang, Xiaowei Hu, Yen-Chun Chen, Liunian Harold Li, Xiyang Dai, Lijuan Wang, Lu Yuan, Jenq-Neng Hwang, Jianfeng Gao: GLIPv2: Unifying Localization and Vision-Language Understanding. NeurIPS 2022
  - [e1] Daphne Ippolito, Liunian Harold Li, Maria Leonor Pacheco, Danqi Chen, Nianwen Xue: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop, NAACL-HLT 2022, Hybrid Event / Seattle, WA, USA, July 10-15, 2022. Association for Computational Linguistics 2022, ISBN 978-1-955917-73-5
  - [i14] Chunyuan Li, Haotian Liu, Liunian Harold Li, Pengchuan Zhang, Jyoti Aneja, Jianwei Yang, Ping Jin, Yong Jae Lee, Houdong Hu, Zicheng Liu, Jianfeng Gao: ELEVATER: A Benchmark and Toolkit for Evaluating Language-Augmented Visual Models. CoRR abs/2204.08790 (2022)
  - [i13] Honghua Zhang, Liunian Harold Li, Tao Meng, Kai-Wei Chang, Guy Van den Broeck: On the Paradox of Learning to Reason from Data. CoRR abs/2205.11502 (2022)
  - [i12] Da Yin, Hritik Bansal, Masoud Monajatipoor, Liunian Harold Li, Kai-Wei Chang: GeoMLAMA: Geo-Diverse Commonsense Probing on Multilingual Pre-Trained Language Models. CoRR abs/2205.12247 (2022)
  - [i11] Jingnong Qu, Liunian Harold Li, Jieyu Zhao, Sunipa Dev, Kai-Wei Chang: DisinfoMeme: A Multimodal Dataset for Detecting Meme Intentionally Spreading Out Disinformation. CoRR abs/2205.12617 (2022)
  - [i10] Haotian Zhang, Pengchuan Zhang, Xiaowei Hu, Yen-Chun Chen, Liunian Harold Li, Xiyang Dai, Lijuan Wang, Lu Yuan, Jenq-Neng Hwang, Jianfeng Gao: GLIPv2: Unifying Localization and Vision-Language Understanding. CoRR abs/2206.05836 (2022)
- 2021
  - [c4] Da Yin, Liunian Harold Li, Ziniu Hu, Nanyun Peng, Kai-Wei Chang: Broaden the Vision: Geo-Diverse Visual Commonsense Reasoning. EMNLP (1) 2021: 2115-2129
  - [c3] Masoud Monajatipoor, Mozhdeh Rouhsedaghat, Liunian Harold Li, Aichi Chien, C.-C. Jay Kuo, Fabien Scalzo, Kai-Wei Chang: BERTHop: An Effective Vision-and-Language Model for Chest X-ray Disease Diagnosis. ICCVW 2021: 3327-3336
  - [c2] Liunian Harold Li, Haoxuan You, Zhecan Wang, Alireza Zareian, Shih-Fu Chang, Kai-Wei Chang: Unsupervised Vision-and-Language Pre-training Without Parallel Images and Captions. NAACL-HLT 2021: 5339-5350
  - [i9] Sheng Shen, Liunian Harold Li, Hao Tan, Mohit Bansal, Anna Rohrbach, Kai-Wei Chang, Zhewei Yao, Kurt Keutzer: How Much Can CLIP Benefit Vision-and-Language Tasks? CoRR abs/2107.06383 (2021)
  - [i8] Masoud Monajatipoor, Mozhdeh Rouhsedaghat, Liunian Harold Li, Aichi Chien, C.-C. Jay Kuo, Fabien Scalzo, Kai-Wei Chang: BERTHop: An Effective Vision-and-Language Model for Chest X-ray Disease Diagnosis. CoRR abs/2108.04938 (2021)
  - [i7] Da Yin, Liunian Harold Li, Ziniu Hu, Nanyun Peng, Kai-Wei Chang: Broaden the Vision: Geo-Diverse Visual Commonsense Reasoning. CoRR abs/2109.06860 (2021)
  - [i6] Liunian Harold Li, Pengchuan Zhang, Haotian Zhang, Jianwei Yang, Chunyuan Li, Yiwu Zhong, Lijuan Wang, Lu Yuan, Lei Zhang, Jenq-Neng Hwang, Kai-Wei Chang, Jianfeng Gao: Grounded Language-Image Pre-training. CoRR abs/2112.03857 (2021)
  - [i5] Zhecan Wang, Haoxuan You, Liunian Harold Li, Alireza Zareian, Suji Park, Yiqing Liang, Kai-Wei Chang, Shih-Fu Chang: SGEITL: Scene Graph Enhanced Image-Text Learning for Visual Commonsense Reasoning. CoRR abs/2112.08587 (2021)
  - [i4] Yiwu Zhong, Jianwei Yang, Pengchuan Zhang, Chunyuan Li, Noel Codella, Liunian Harold Li, Luowei Zhou, Xiyang Dai, Lu Yuan, Yin Li, Jianfeng Gao: RegionCLIP: Region-based Language-Image Pretraining. CoRR abs/2112.09106 (2021)
- 2020
  - [c1] Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang: What Does BERT with Vision Look At? ACL 2020: 5265-5275
  - [i3] Liunian Harold Li, Haoxuan You, Zhecan Wang, Alireza Zareian, Shih-Fu Chang, Kai-Wei Chang: Weakly-supervised VisualBERT: Pre-training without Parallel Images and Captions. CoRR abs/2010.12831 (2020)
2010 – 2019
- 2019
  - [j1] Liunian Harold Li, Patrick H. Chen, Cho-Jui Hsieh, Kai-Wei Chang: Efficient Contextual Representation Learning With Continuous Outputs. Trans. Assoc. Comput. Linguistics 7: 611-624 (2019)
  - [i2] Liunian Harold Li, Patrick H. Chen, Cho-Jui Hsieh, Kai-Wei Chang: Efficient Contextual Representation Learning Without Softmax Layer. CoRR abs/1902.11269 (2019)
  - [i1] Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang: VisualBERT: A Simple and Performant Baseline for Vision and Language. CoRR abs/1908.03557 (2019)