"Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do ..."
Xiangyu Qi et al. (2023)
- Xiangyu Qi, Yi Zeng, Tinghao Xie, Pin-Yu Chen, Ruoxi Jia, Prateek Mittal, Peter Henderson: Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To! CoRR abs/2310.03693 (2023)