"Unfamiliar Finetuning Examples Control How Language Models Hallucinate."

Katie Kang et al. (2024)


DOI: 10.48550/ARXIV.2403.05612

access: open

type: Informal or Other Publication

metadata version: 2024-04-04