Seonghyeon Ye
2020 – today

2024

- [j1] Sohee Yang, Jonghyeon Kim, Joel Jang, Seonghyeon Ye, Hyunji Lee, Minjoon Seo: Improving Probability-based Prompt Selection Through Unified Evaluation and Analysis. Trans. Assoc. Comput. Linguistics 12: 664-680 (2024)
- [c14] Seonghyeon Ye, Hyeonbin Hwang, Sohee Yang, Hyeongu Yun, Yireun Kim, Minjoon Seo: Investigating the Effectiveness of Task-Agnostic Prefix Prompt for Instruction Following. AAAI 2024: 19386-19394
- [c13] Hyeonbin Hwang, Doyoung Kim, Seungone Kim, Seonghyeon Ye, Minjoon Seo: Self-Explore: Enhancing Mathematical Reasoning in Language Models with Fine-grained Rewards. EMNLP (Findings) 2024: 1444-1466
- [c12] Changho Lee, Janghoon Han, Seonghyeon Ye, Stanley Jungkyu Choi, Honglak Lee, Kyunghoon Bae: Instruction Matters: A Simple yet Effective Task Selection for Optimized Instruction Tuning of Specific Tasks. EMNLP 2024: 18620-18642
- [c11] Seonghyeon Ye, Doyoung Kim, Sungdong Kim, Hyeonbin Hwang, Seungone Kim, Yongrae Jo, James Thorne, Juho Kim, Minjoon Seo: FLASK: Fine-grained Language Model Evaluation based on Alignment Skill Sets. ICLR 2024
- [c10] Yujin Kim, Jaehong Yoon, Seonghyeon Ye, Sangmin Bae, Namgyu Ho, Sung Ju Hwang, Se-Young Yun: Carpe diem: On the Evaluation of World Knowledge in Lifelong Language Models. NAACL-HLT 2024: 5401-5415
- [i20] Hanseok Oh, Hyunji Lee, Seonghyeon Ye, Haebin Shin, Hansol Jang, Changwook Jun, Minjoon Seo: INSTRUCTIR: A Benchmark for Instruction Following of Information Retrieval Models. CoRR abs/2402.14334 (2024)
- [i19] Hyeonbin Hwang, Doyoung Kim, Seungone Kim, Seonghyeon Ye, Minjoon Seo: Self-Explore to Avoid the Pit: Improving the Reasoning Capabilities of Language Models with Fine-grained Rewards. CoRR abs/2404.10346 (2024)
- [i18] Changho Lee, Janghoon Han, Seonghyeon Ye, Stanley Jungkyu Choi, Honglak Lee, Kyunghoon Bae: Instruction Matters, a Simple yet Effective Task Selection Approach in Instruction Tuning for Specific Tasks. CoRR abs/2404.16418 (2024)
- [i17] Seungone Kim, Juyoung Suk, Ji Yong Cho, Shayne Longpre, Chaeeun Kim, Dongkeun Yoon, Guijin Son, Yejin Choi, Sheikh Shafayat, Jinheon Baek, Sue Hyun Park, Hyeonbin Hwang, Jinkyung Jo, Hyowon Cho, Haebin Shin, Seongyun Lee, Hanseok Oh, Noah Lee, Namgyu Ho, Se June Joo, Miyoung Ko, Yoonjoo Lee, Hyungjoo Chae, Jamin Shin, Joel Jang, Seonghyeon Ye, Bill Yuchen Lin, Sean Welleck, Graham Neubig, Moontae Lee, Kyungjae Lee, Minjoon Seo: The BiGGen Bench: A Principled Benchmark for Fine-grained Evaluation of Language Models with Language Models. CoRR abs/2406.05761 (2024)
- [i16] Hoyeon Chang, Jinho Park, Seonghyeon Ye, Sohee Yang, Youngkyung Seo, Du-Seong Chang, Minjoon Seo: How Do Large Language Models Acquire Factual Knowledge During Pretraining? CoRR abs/2406.11813 (2024)
- [i15] Shayne Longpre, Robert Mahari, Ariel Lee, Campbell Lund, Hamidah Oderinwale, William Brannon, Nayan Saxena, Naana Obeng-Marnu, Tobin South, Cole Hunter, Kevin Klyman, Christopher Klamm, Hailey Schoelkopf, Nikhil Singh, Manuel Cherep, Ahmad Anis, An Dinh, Caroline Chitongo, Da Yin, Damien Sileo, Deividas Mataciunas, Diganta Misra, Emad A. Alghamdi, Enrico Shippole, Jianguo Zhang, Joanna Materzynska, Kun Qian, Kush Tiwary, Lester James V. Miranda, Manan Dey, Minnie Liang, Mohammed Hamdy, Niklas Muennighoff, Seonghyeon Ye, Seungone Kim, Shrestha Mohanty, Vipul Gupta, Vivek Sharma, Vu Minh Chien, Xuhui Zhou, Yizhi Li, Caiming Xiong, Luis Villa, Stella Biderman, Hanlin Li, Daphne Ippolito, Sara Hooker, Jad Kabbara, Sandy Pentland: Consent in Crisis: The Rapid Decline of the AI Data Commons. CoRR abs/2407.14933 (2024)
- [i14] Seonghyeon Ye, Joel Jang, Byeongguk Jeon, Se June Joo, Jianwei Yang, Baolin Peng, Ajay Mandlekar, Reuben Tan, Yu-Wei Chao, Bill Yuchen Lin, Lars Liden, Kimin Lee, Jianfeng Gao, Luke Zettlemoyer, Dieter Fox, Minjoon Seo: Latent Action Pretraining from Videos. CoRR abs/2410.11758 (2024)
- [i13] Shayne Longpre, Nikhil Singh, Manuel Cherep, Kushagra Tiwary, Joanna Materzynska, William Brannon, Robert Mahari, Manan Dey, Mohammed Hamdy, Nayan Saxena, Ahmad Mustafa Anis, Emad A. Alghamdi, Vu Minh Chien, Naana Obeng-Marnu, Da Yin, Kun Qian, Yizhi Li, Minnie Liang, An Dinh, Shrestha Mohanty, Deividas Mataciunas, Tobin South, Jianguo Zhang, Ariel Lee, Campbell Lund, Christopher Klamm, Damien Sileo, Diganta Misra, Enrico Shippole, Kevin Klyman, Lester JV Miranda, Niklas Muennighoff, Seonghyeon Ye, Seungone Kim, Vipul Gupta, Vivek Sharma, Xuhui Zhou, Caiming Xiong, Luis Villa, Stella Biderman, Alex Pentland, Sara Hooker, Jad Kabbara: Bridging the Data Provenance Gap Across Text, Speech and Video. CoRR abs/2412.17847 (2024)

2023

- [c9] Seonghyeon Ye, Joel Jang, Doyoung Kim, Yongrae Jo, Minjoon Seo: Efficiently Enhancing Zero-Shot Performance of Instruction Following Model via Retrieval of Soft Prompt. EMNLP (Findings) 2023: 12288-12309
- [c8] Seungone Kim, Se June Joo, Doyoung Kim, Joel Jang, Seonghyeon Ye, Jamin Shin, Minjoon Seo: The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning. EMNLP 2023: 12685-12708
- [c7] Seonghyeon Ye, Doyoung Kim, Joel Jang, Joongbo Shin, Minjoon Seo: Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners. ICLR 2023
- [c6] Joel Jang, Seungone Kim, Seonghyeon Ye, Doyoung Kim, Lajanugen Logeswaran, Moontae Lee, Kyungjae Lee, Minjoon Seo: Exploring the Benefits of Training Expert Language Models over Instruction Tuning. ICML 2023: 14702-14729
- [i12] Joel Jang, Seungone Kim, Seonghyeon Ye, Doyoung Kim, Lajanugen Logeswaran, Moontae Lee, Kyungjae Lee, Minjoon Seo: Exploring the Benefits of Training Expert Language Models over Instruction Tuning. CoRR abs/2302.03202 (2023)
- [i11] Seonghyeon Ye, Hyeonbin Hwang, Sohee Yang, Hyeongu Yun, Yireun Kim, Minjoon Seo: In-Context Instruction Learning. CoRR abs/2302.14691 (2023)
- [i10] Seungone Kim, Se June Joo, Doyoung Kim, Joel Jang, Seonghyeon Ye, Jamin Shin, Minjoon Seo: The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning. CoRR abs/2305.14045 (2023)
- [i9] Sohee Yang, Jonghyeon Kim, Joel Jang, Seonghyeon Ye, Hyunji Lee, Minjoon Seo: Improving Probability-based Prompt Selection Through Unified Evaluation and Analysis. CoRR abs/2305.14877 (2023)
- [i8] Seonghyeon Ye, Doyoung Kim, Sungdong Kim, Hyeonbin Hwang, Seungone Kim, Yongrae Jo, James Thorne, Juho Kim, Minjoon Seo: FLASK: Fine-grained Language Model Evaluation based on Alignment Skill Sets. CoRR abs/2307.10928 (2023)
- [i7] Yujin Kim, Jaehong Yoon, Seonghyeon Ye, Sung Ju Hwang, Se-young Yun: Carpe Diem: On the Evaluation of World Knowledge in Lifelong Language Models. CoRR abs/2311.08106 (2023)

2022

- [c5] Joel Jang, Seonghyeon Ye, Changho Lee, Sohee Yang, Joongbo Shin, Janghoon Han, Gyeonghun Kim, Minjoon Seo: TemporalWiki: A Lifelong Benchmark for Training and Evaluating Ever-Evolving Language Models. EMNLP 2022: 6237-6250
- [c4] Joel Jang, Seonghyeon Ye, Sohee Yang, Joongbo Shin, Janghoon Han, Gyeonghun Kim, Stanley Jungkyu Choi, Minjoon Seo: Towards Continual Knowledge Learning of Language Models. ICLR 2022
- [c3] Joel Jang, Seonghyeon Ye, Minjoon Seo: Can Large Language Models Truly Understand Prompts? A Case Study with Negated Prompts. TL4NLP 2022: 52-62
- [i6] Joel Jang, Seonghyeon Ye, Changho Lee, Sohee Yang, Joongbo Shin, Janghoon Han, Gyeonghun Kim, Minjoon Seo: TemporalWiki: A Lifelong Benchmark for Training and Evaluating Ever-Evolving Language Models. CoRR abs/2204.14211 (2022)
- [i5] Joel Jang, Seonghyeon Ye, Minjoon Seo: Can Large Language Models Truly Understand Prompts? A Case Study with Negated Prompts. CoRR abs/2209.12711 (2022)
- [i4] Seonghyeon Ye, Doyoung Kim, Joel Jang, Joongbo Shin, Minjoon Seo: Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners. CoRR abs/2210.02969 (2022)
- [i3] Seonghyeon Ye, Joel Jang, Doyoung Kim, Yongrae Jo, Minjoon Seo: Retrieval of Soft Prompt Enhances Zero-Shot Task Generalization. CoRR abs/2210.03029 (2022)

2021

- [c2] Seonghyeon Ye, Jiseon Kim, Alice Oh: Efficient Contrastive Learning via Novel Data Augmentation and Curriculum Learning. EMNLP (1) 2021: 1832-1838
- [c1] Sungjoon Park, Jiseon Kim, Seonghyeon Ye, Jaeyeol Jeon, Heeyoung Park, Alice Oh: Dimensional Emotion Detection from Categorical Emotion. EMNLP (1) 2021: 4367-4380
- [i2] Seonghyeon Ye, Jiseon Kim, Alice Oh: Efficient Contrastive Learning via Novel Data Augmentation and Curriculum Learning. CoRR abs/2109.05941 (2021)
- [i1] Joel Jang, Seonghyeon Ye, Sohee Yang, Joongbo Shin, Janghoon Han, Gyeonghun Kim, Stanley Jungkyu Choi, Minjoon Seo: Towards Continual Knowledge Learning of Language Models. CoRR abs/2110.03215 (2021)
last updated on 2025-01-29 22:15 CET by the dblp team
all metadata released as open data under CC0 1.0 license