1st DeepLo@ACL 2018: Melbourne, Australia
- Reza Haffari, Colin Cherry, George F. Foster, Shahram Khadivi, Bahar Salehi:
  Proceedings of the Workshop on Deep Learning Approaches for Low-Resource NLP, DeepLo@ACL 2018, Melbourne, Australia, July 19, 2018. Association for Computational Linguistics 2018, ISBN 978-1-948087-47-6
- Katharina Kann, Johannes Bjerva, Isabelle Augenstein, Barbara Plank, Anders Søgaard:
  Character-level Supervision for Low-resource POS Tagging. 1-11
- Michael A. Hedderich, Dietrich Klakow:
  Training a Neural Network in a Low-Resource Setting on Automatically Annotated Noisy Data. 12-18
- Marcel Bollmann, Anders Søgaard, Joachim Bingel:
  Multi-task learning for historical text normalization: Size matters. 19-24
- Shiran Dudy, Steven Bedrick:
  Compositional Language Modeling for Icon-Based Augmentative and Alternative Communication. 25-32
- Koel Dutta Chowdhury, Mohammed Hasanuzzaman, Qun Liu:
  Multimodal Neural Machine Translation for Low-resource Language Pairs using Synthetic Data. 33-42
- Fariz Ikhwantri, Samuel Louvan, Kemal Kurniawan, Bagas Abisena, Valdi Rachman, Alfan Farizki Wicaksono, Rahmad Mahendra:
  Multi-Task Active Learning for Neural Semantic Role Labeling on Low Resource Conversational Corpus. 43-50
- Prathusha Kameswara Sarma, Yingyu Liang, Bill Sethares:
  Domain Adapted Word Embeddings for Improved Sentiment Classification. 51-59
- Kanako Komiya, Hiroyuki Shinnou:
  Investigating Effective Parameters for Fine-tuning of Word Embeddings Using Only a Small Corpus. 60-67
- Mingkuan Liu, Musen Wen, Selçuk Köprü, Xianjing Liu, Alan Lu:
  Semi-Supervised Learning with Auxiliary Evaluation Component for Large Scale e-Commerce Text Classification. 68-76
- Antonio Valerio Miceli Barone:
  Low-rank passthrough neural networks. 77-86