"An LLM can Fool Itself: A Prompt-Based Adversarial Attack."

Xilie Xu et al. (2024)

access: open

type: Conference or Workshop Paper

metadata version: 2024-08-13