Will We Ever Really Replace the N-gram Model? — WLM@NAACL-HLT 2012: Montréal, Canada
- Bhuvana Ramabhadran, Sanjeev Khudanpur, Ebru Arisoy (eds.):
  Proceedings of the Workshop: Will We Ever Really Replace the N-gram Model? On the Future of Language Modeling for HLT, WLM@NAACL-HLT 2012, Montréal, Canada, June 8, 2012. Association for Computational Linguistics 2012, ISBN 978-1-937284-20-6
- Hai Son Le, Alexandre Allauzen, François Yvon:
  Measuring the Influence of Long Range Dependencies with Neural Network Language Models. 1-10
- Holger Schwenk, Anthony Rousseau, Mohammed Attik:
  Large, Pruned or Continuous Space Language Models on a GPU for Statistical Machine Translation. 11-19
- Ebru Arisoy, Tara N. Sainath, Brian Kingsbury, Bhuvana Ramabhadran:
  Deep Neural Network Language Models. 20-28
- Geoffrey Zweig, Christopher J. C. Burges:
  A Challenge Set for Advancing Language Modeling. 29-36
- André Mansikkaniemi, Mikko Kurimo:
  Unsupervised Vocabulary Adaptation for Morph-based Language Models. 37-40
- Preethi Jyothi, Leif Johnson, Ciprian Chelba, Brian Strope:
  Large-scale Discriminative Language Model Reranking for Voice-search. 41-49
- Ariya Rastrow, Sanjeev Khudanpur, Mark Dredze:
  Revisiting the Case for Explicit Syntactic Information in Language Models. 50-58