"Prefixing Attention Sinks can Mitigate Activation Outliers for Large ..."
Seungwoo Son et al. (2024)
- Seungwoo Son, Wonpyo Park, Woohyun Han, Kyuyeun Kim, Jaeho Lee:
Prefixing Attention Sinks can Mitigate Activation Outliers for Large Language Model Quantization. CoRR abs/2406.12016 (2024)