- Xilie Xu, Keyi Kong, Ning Liu, Lizhen Cui, Di Wang, Jingfeng Zhang, Mohan S. Kankanhalli: An LLM can Fool Itself: A Prompt-Based Adversarial Attack. ICLR 2024