"Layer-wise Pruning of Transformer Attention Heads for Efficient Language ..."
Kyuhong Shim et al. (2021)
- Kyuhong Shim, Iksoo Choi, Wonyong Sung, Jungwook Choi:
Layer-wise Pruning of Transformer Attention Heads for Efficient Language Modeling. ISOCC 2021: 357-358