Motoi Omachi
Journal Articles
- 2017
- [j1]Motoi Omachi, Tetsuji Ogawa, Tetsunori Kobayashi:
Associative Memory Model-Based Linear Filtering and Its Application to Tandem Connectionist Blind Source Separation. IEEE ACM Trans. Audio Speech Lang. Process. 25(3): 637-650 (2017)
Conference and Workshop Papers
- 2023
- [c11]Motoi Omachi, Brian Yan, Siddharth Dalmia, Yuya Fujita, Shinji Watanabe:
Align, Write, Re-Order: Explainable End-to-End Speech Translation via Operation Sequence Generation. ICASSP 2023: 1-5
- 2022
- [c10]Motoi Omachi, Yuya Fujita, Shinji Watanabe, Tianzi Wang:
Non-Autoregressive End-To-End Automatic Speech Recognition Incorporating Downstream Natural Language Processing. ICASSP 2022: 6772-6776
- 2021
- [c9]Yuya Fujita, Tianzi Wang, Shinji Watanabe, Motoi Omachi:
Toward Streaming ASR with Non-Autoregressive Insertion-Based Model. Interspeech 2021: 3740-3744
- [c8]Motoi Omachi, Yuya Fujita, Shinji Watanabe, Matthew Wiesner:
End-to-end ASR to jointly predict transcriptions and linguistic annotations. NAACL-HLT 2021: 1861-1871
- 2020
- [c7]Yuya Fujita, Aswin Shanmugam Subramanian, Motoi Omachi, Shinji Watanabe:
Attention-Based ASR with Lightweight and Dynamic Convolutions. ICASSP 2020: 7034-7038
- [c6]Xuankai Chang, Aswin Shanmugam Subramanian, Pengcheng Guo, Shinji Watanabe, Yuya Fujita, Motoi Omachi:
End-to-End ASR with Adaptive Span Self-Attention. INTERSPEECH 2020: 3595-3599
- [c5]Yuya Fujita, Shinji Watanabe, Motoi Omachi, Xuankai Chang:
Insertion-Based Modeling for End-to-End Automatic Speech Recognition. INTERSPEECH 2020: 3660-3664
- 2018
- [c4]Dung T. Tran, Ken-ichi Iso, Motoi Omachi, Yuya Fujita:
Multi Scale Feedback Connection for Noise Robust Acoustic Modeling. ICASSP 2018: 4834-4838
- [c3]Yusuke Kida, Dung T. Tran, Motoi Omachi, Toru Taniguchi, Yuya Fujita:
Speaker Selective Beamformer with Keyword Mask Estimation. SLT 2018: 528-534
- 2015
- [c2]Motoi Omachi, Tetsuji Ogawa, Tetsunori Kobayashi, Masaru Fujieda, Kazuhiro Katagiri:
Separation matrix optimization using associative memory model for blind source separation. EUSIPCO 2015: 1098-1102
- 2014
- [c1]Yuichi Kubota, Motoi Omachi, Tetsuji Ogawa, Tetsunori Kobayashi, Tsuneo Nitta:
Effect of frequency weighting on MLP-based speaker canonicalization. INTERSPEECH 2014: 2987-2991
Informal and Other Publications
- 2022
- [i3]Motoi Omachi, Brian Yan, Siddharth Dalmia, Yuya Fujita, Shinji Watanabe:
Align, Write, Re-order: Explainable End-to-End Speech Translation via Operation Sequence Generation. CoRR abs/2211.05967 (2022)
- 2020
- [i2]Yuya Fujita, Shinji Watanabe, Motoi Omachi, Xuankai Chang:
Insertion-Based Modeling for End-to-End Automatic Speech Recognition. CoRR abs/2005.13211 (2020)
- 2018
- [i1]Yusuke Kida, Dung T. Tran, Motoi Omachi, Toru Taniguchi, Yuya Fujita:
Speaker Selective Beamformer with Keyword Mask Estimation. CoRR abs/1810.10727 (2018)
last updated on 2024-04-24 22:57 CEST by the dblp team
all metadata released as open data under CC0 1.0 license