Yasuo Horiuchi
2020 – today
- 2024
- [c41] Satoshi Naito, Masafumi Nishimura, Masafumi Nishida, Yasuo Horiuchi, Shingo Kuroiwa: Food Recognition Using Large-scale Pre-trained Speech Models. GCCE 2024: 119-120
- [c40] Takumi Uehara, Shingo Kuroiwa, Yasuo Horiuchi, Masafumi Nishida, Satoru Tsuge: Template-Based Speech Recognition Using Pre-trained Large Speech Models for Voice-Activated Shower Control. GCCE 2024: 141-143
- [c39] Kentaro Kameda, Satoru Tsuge, Shingo Kuroiwa, Yasuo Horiuchi, Masafumi Nishida: Text-Dependent Speaker Verification Using SSI-DNN Trained on Short Utterance. GCCE 2024: 808-810
- 2023
- [c38] Yuya Soma, Yasuo Horiuchi, Shingo Kuroiwa: Multiple Words to Single Word Associations Using Masked Language Models. KST 2023: 1-6
- 2022
- [c37] Aoi Sugita, Masafumi Nishida, Masafumi Nishimura, Yasuo Horiuchi, Shingo Kuroiwa: Identification of vocal tract state before and after swallowing using acoustic features. GCCE 2022: 752-753
- [c36] Manaka Takamizawa, Satoru Tsuge, Yasuo Horiuchi, Shingo Kuroiwa: Same Speaker Identification with Deep Learning and Application to Text-Dependent Speaker Verification. KES-HCIS 2022: 149-158
- 2020
- [c35] Yuji Nagashima, Keiko Watanabe, Daisuke Hara, Yasuo Horiuchi, Shinji Sako, Akira Ichikawa: Constructing a Highly Accurate Japanese Sign Language Motion Database Including Dialogue. HCI (40) 2020: 76-81
- [c34] Toshiyuki Ugawa, Satoru Tsuge, Yasuo Horiuchi, Shingo Kuroiwa: Text-Dependent Closed-Set Two-Speaker Recognition of a Key Phrase Uttered Synchronously by Two Persons. KES-HCIS 2020: 405-413
2010 – 2019
- 2019
- [c33] Keiko Watanabe, Yuji Nagashima, Daisuke Hara, Yasuo Horiuchi, Shinji Sako, Akira Ichikawa: Construction of a Japanese Sign Language Database with Various Data Types. HCI (33) 2019: 317-322
- 2016
- [j6] Fuming Fang, Takahiro Shinozaki, Yasuo Horiuchi, Shingo Kuroiwa, Sadaoki Furui, Toshimitsu Musha: Improving Eye Motion Sequence Recognition Using Electrooculography Based on Context-Dependent HMM. Comput. Intell. Neurosci. 2016: 6898031:1-6898031:9 (2016)
- 2015
- [j5] Haoze Lu, Wenbin Zhang, Yasuo Horiuchi, Shingo Kuroiwa: Phoneme dependent inter-session variability reduction for speaker verification. Int. J. Biom. 7(2): 83-96 (2015)
- 2014
- [c32] Shizuka Wada, Yasuo Horiuchi, Shingo Kuroiwa: Tempo Prediction Model for Accompaniment System. ICMC 2014
- 2013
- [j4] Yutaka Fukuoka, Kenji Miyazawa, Hiroki Mori, Manabi Miyagi, Masafumi Nishida, Yasuo Horiuchi, Akira Ichikawa, Hiroshi Hoshino, Makoto Noshiro, Akinori Ueno: Development of a Compact Wireless Laplacian Electrode Module for Electromyograms and Its Human Interface Applications. Sensors 13(2): 2368-2383 (2013)
- [c31] Takaaki Ishii, Hiroki Komiyama, Takahiro Shinozaki, Yasuo Horiuchi, Shingo Kuroiwa: Reverberant speech recognition based on denoising autoencoder. INTERSPEECH 2013: 3512-3516
- 2012
- [j3] Amira Abdelwahab, Hiroo Sekiya, Ikuo Matsuba, Yasuo Horiuchi, Shingo Kuroiwa: Alleviating the Sparsity Problem of Collaborative Filtering Using an Efficient Iterative Clustered Prediction Technique. Int. J. Inf. Technol. Decis. Mak. 11(1): 33-53 (2012)
- [c30] Yutaka Ono, Misuzu Otake, Takahiro Shinozaki, Ryuichi Nisimura, Takeshi Yamada, Kenkichi Ishizuka, Yasuo Horiuchi, Shingo Kuroiwa, Shingo Imai: Open answer scoring for S-CAT automated speaking test system using support vector regression. APSIPA 2012: 1-4
- [c29] Takahiro Shinozaki, Sadaoki Furui, Yasuo Horiuchi, Shingo Kuroiwa: Pipeline decomposition of speech decoders and their implementation based on delayed evaluation. APSIPA 2012: 1-4
- [c28] Takahiro Shinozaki, Yasuo Horiuchi, Shingo Kuroiwa: Unsupervised CV language model adaptation based on direct likelihood maximization sentence selection. ICASSP 2012: 5029-5032
- [c27] Fuming Fang, Takahiro Shinozaki, Yasuo Horiuchi, Shingo Kuroiwa, Sadaoki Furui, Toshimitsu Musha: HMM Based Continuous EOG Recognition for Eye-input Speech Interface. INTERSPEECH 2012: 735-738
- 2011
- [c26] Shiori Takenaka, Takahiro Shinozaki, Yasuo Horiuchi, Shingo Kuroiwa: Pseudo speaker models for text-independent speaker verification using rank threshold. NLPKE 2011: 265-268
- 2010
- [j2] Haoze Lu, Masafumi Nishida, Yasuo Horiuchi, Shingo Kuroiwa: Text-independent speaker identification in phoneme-independent subspace using PCA transformation. Int. J. Biom. 2(4): 379-390 (2010)
- [c25] Masafumi Nishida, Yasuo Horiuchi, Shingo Kuroiwa, Akira Ichikawa: Automatic Speech Recognition Based on Multiple Level Units in Spoken Dialogue System for In-Vehicle Appliances. TSD 2010: 539-546
2000 – 2009
- 2009
- [c24] Junichi Iizuka, Akira Okamoto, Yasuo Horiuchi, Akira Ichikawa: Considerations of Efficiency and Mental Stress of Search Tasks on Websites by Blind Persons. HCI (7) 2009: 693-700
- [c23] Amira Abdelwahab, Hiroo Sekiya, Ikuo Matsuba, Yasuo Horiuchi, Shingo Kuroiwa: Collaborative filtering based on an iterative prediction method to alleviate the sparsity problem. iiWAS 2009: 375-379
- [c22] Haruka Okamoto, Satoru Tsuge, Amira Abdelwahab, Masafumi Nishida, Yasuo Horiuchi, Shingo Kuroiwa: Text-independent speaker verification using rank threshold in large number of speaker models. INTERSPEECH 2009: 2367-2370
- [c21] Yuta Yasugahira, Yasuo Horiuchi, Shingo Kuroiwa: Analysis of hand movement variation related to speed in Japanese sign language. IUCS 2009: 331-334
- [c20] Amira Abdelwahab, Hiroo Sekiya, Ikuo Matsuba, Yasuo Horiuchi, Shingo Kuroiwa, Masafumi Nishida: An efficient collaborative filtering algorithm using SVD-free latent semantic indexing and particle swarm optimization. NLPKE 2009: 1-4
- 2008
- [j1] Saori Tanaka, Kaoru Nakazono, Masafumi Nishida, Yasuo Horiuchi, Akira Ichikawa: Evaluating Interpreter's Skill by Measurement of Prosody Recognition. Inf. Media Technol. 3(2): 375-384 (2008)
- [c19] Shota Sato, Taro Kimura, Yasuo Horiuchi, Masafumi Nishida, Shingo Kuroiwa, Akira Ichikawa: A method for automatically estimating F0 model parameters and a speech re-synthesis tool using F0 model and STRAIGHT. INTERSPEECH 2008: 545-548
- [c18] Masaru Maebatake, Iori Suzuki, Masafumi Nishida, Yasuo Horiuchi, Shingo Kuroiwa: Sign Language Recognition Based on Position and Movement Using Multi-Stream HMM. ISUC 2008: 478-481
- 2007
- [c17] Masafumi Nishida, Yasuo Horiuchi, Akira Ichikawa: Unsupervised training of adaptation rate using Q-learning in large vocabulary continuous speech recognition. INTERSPEECH 2007: 278-281
- 2006
- [c16] Manabi Miyagi, Masafumi Nishida, Yasuo Horiuchi, Akira Ichikawa: Analysis of Prosody in Finger Braille Using Electromyography. EMBC 2006: 4901-4904
- [c15] Manabi Miyagi, Masafumi Nishida, Yasuo Horiuchi, Akira Ichikawa: Investigation on Effect of Prosody in Finger Braille. ICCHP 2006: 863-869
- 2005
- [c14] Tomoko Ohsuga, Masafumi Nishida, Yasuo Horiuchi, Akira Ichikawa: Investigation of the relationship between turn-taking and prosodic features in spontaneous dialogue. INTERSPEECH 2005: 33-36
- [c13] Masafumi Nishida, Yasuo Horiuchi, Akira Ichikawa: Automatic speech recognition based on adaptation and clustering using temporal-difference learning. INTERSPEECH 2005: 285-288
- [c12] Saori Tanaka, Masafumi Nishida, Yasuo Horiuchi, Akira Ichikawa: Production of prominence in Japanese sign language. INTERSPEECH 2005: 2421-2424
- 2004
- [c11] Masafumi Nishida, Yoshitaka Mamiya, Yasuo Horiuchi, Akira Ichikawa: On-line incremental adaptation based on reinforcement learning for robust speech recognition. INTERSPEECH 2004: 1985-1988
- [c10] Tomoko Ohsuga, Masafumi Nishida, Yasuo Horiuchi, Akira Ichikawa: Estimating syntactic structure from prosodic features in Japanese speech. INTERSPEECH 2004: 3041-3044
- 2003
- [c9] Toshie Hatano, Yasuo Horiuchi, Akira Ichikawa: How does human segment the speech by prosody? INTERSPEECH 2003: 149-152
- 2002
- [c8] Yasuo Horiuchi, Tomoko Ohsuga, Akira Ichikawa: Estimating syntactic structure from F0 contour and pause duration in Japanese speech. INTERSPEECH 2002: 1177-1180
- 2001
- [c7] Yasuo Horiuchi, Akira Ichikawa: Prosody in finger braille and teletext receiver for finger braille. INTERSPEECH 2001: 2697-2702
- 2000
- [c6] Manabi Miyagi, Yuji Fujimori, Yasuo Horiuchi, Akira Ichikawa: Prosody Rule for Time Structure of Finger Braille. RIAO 2000: 862-869
1990 – 1999
- 1999
- [c5] Yasuo Horiuchi, Fujiwara Atsushi, Akira Ichikawa: New WWW browser for visually impaired people using interactive voice technology. EUROSPEECH 1999
- [c4] Akira Ichikawa, Tomoyuki Shimizu, Yasuo Horiuchi: Reinforcement learning for phoneme recognition. EUROSPEECH 1999
- 1998
- [c3] Yasuo Horiuchi, Akira Ichikawa: Prosodic structure in Japanese spontaneous speech. ICSLP 1998
- [c2] Akira Ichikawa, Masahiro Araki, Masato Ishizaki, Shuichi Itabashi, Toshihiko Itoh, Hideki Kashioka, Keiji Kato, Hideaki Kikuchi, Tomoko Kumagai, Akira Kurematsu, Hanae Koiso, Masafumi Tamoto, Syun Tutiya, Shu Nakazato, Yasuo Horiuchi, Kikuo Maekawa, Yoichi Yamashita, Takashi Yoshimura: Standardising annotation schemes for Japanese discourse. LREC 1998: 731-738
- 1993
- [c1] Yasuo Horiuchi, Hozumi Tanaka: A Computer Accompaniment System with Independence. ICMC 1993
last updated on 2025-01-09 12:59 CET by the dblp team
all metadata released as open data under CC0 1.0 license