Yuya Chiba
2020 – today
2024
- [j8] Michimasa Inaba, Yuya Chiba, Zhiyang Qi, Ryuichiro Higashinaka, Kazunori Komatani, Yusuke Miyao, Takayuki Nagai: Travel Agency Task Dialogue Corpus: A Multimodal Dataset with Age-Diverse Speakers. ACM Trans. Asian Low Resour. Lang. Inf. Process. 23(9): 130:1-130:23 (2024)

2023
- [j7] Yuya Chiba, Ryuichiro Higashinaka: Dialogue Situation Recognition in Everyday Conversation From Audio, Visual, and Linguistic Information. IEEE Access 11: 70819-70832 (2023)
- [j6] Yuya Chiba, Ryuichiro Higashinaka: Analyzing Variations of Everyday Japanese Conversations Based on Semantic Labels of Functional Expressions. ACM Trans. Asian Low Resour. Lang. Inf. Process. 22(2): 52:1-52:26 (2023)
- [c40] Ao Guo, Atsumoto Ohashi, Yuya Chiba, Yuiko Tsunomori, Ryu Hirai, Ryuichiro Higashinaka: Personality-aware Natural Language Generation for Task-oriented Dialogue using Reinforcement Learning. RO-MAN 2023: 1823-1828

2022
- [c39] Michimasa Inaba, Yuya Chiba, Ryuichiro Higashinaka, Kazunori Komatani, Yusuke Miyao, Takayuki Nagai: Collection and Analysis of Travel Agency Task Dialogues with Age-Diverse Speakers. LREC 2022: 5759-5767
- [c38] Hiroaki Sugiyama, Masahiro Mizukami, Tsunehiro Arimoto, Hiromi Narimatsu, Yuya Chiba, Hideharu Nakajima, Toyomi Meguro: Empirical Analysis of Training Strategies of Transformer-Based Japanese Chit-Chat Systems. SLT 2022: 685-691

2021
- [c37] Yuya Chiba, Ryuichiro Higashinaka: Dialogue Situation Recognition for Everyday Conversation Using Multimodal Information. Interspeech 2021: 241-245
- [c36] Yoshihiro Yamazaki, Yuya Chiba, Takashi Nose, Akinori Ito: Neural Spoken-Response Generation Using Prosodic and Linguistic Context for Conversational Systems. Interspeech 2021: 246-250
- [c35] Ryota Yahagi, Yuya Chiba, Takashi Nose, Akinori Ito: Multimodal Dialogue Response Timing Estimation Using Dialogue Context Encoder. IWSDS 2021: 133-141
- [c34] Yuya Chiba, Ryuichiro Higashinaka: Variation across Everyday Conversations: Factor Analysis of Conversations using Semantic Categories of Functional Expressions. PACLIC 2021: 160-169
- [i1] Hiroaki Sugiyama, Masahiro Mizukami, Tsunehiro Arimoto, Hiromi Narimatsu, Yuya Chiba, Hideharu Nakajima, Toyomi Meguro: Empirical Analysis of Training Strategies of Transformer-based Japanese Chit-chat Systems. CoRR abs/2109.05217 (2021)

2020
- [j5] Kosuke Nakamura, Takashi Nose, Yuya Chiba, Akinori Ito: A Symbol-level Melody Completion Based on a Convolutional Neural Network with Generative Adversarial Learning. J. Inf. Process. 28: 248-257 (2020)
- [j4] Jiang Fu, Yuya Chiba, Takashi Nose, Akinori Ito: Automatic assessment of English proficiency for Japanese learners without reference sentences based on deep neural network acoustic models. Speech Commun. 116: 86-97 (2020)
- [c33] Rikiya Takahashi, Takashi Nose, Yuya Chiba, Akinori Ito: Successive Japanese Lyrics Generation Based on Encoder-Decoder Model. GCCE 2020: 126-127
- [c32] Ryota Yahagi, Yuya Chiba, Takashi Nose, Akinori Ito: Incremental Response Generation Using Prefix-to-Prefix Model for Dialogue System. GCCE 2020: 349-350
- [c31] Satoru Mizuochi, Yuya Chiba, Takashi Nose, Akinori Ito: Spoken Term Detection Based on Acoustic Models Trained in Multiple Languages for Zero-Resource Language. GCCE 2020: 351-352
- [c30] Satsuki Naijo, Yuya Chiba, Takashi Nose, Akinori Ito: Analysis and Estimation of Sentence Speakability for English Pronunciation Evaluation. GCCE 2020: 353-355
- [c29] Yoshihiro Yamazaki, Yuya Chiba, Takashi Nose, Akinori Ito: Filler Prediction Based on Bidirectional LSTM for Generation of Natural Response of Spoken Dialog. GCCE 2020: 360-361
- [c28] Yuya Chiba, Takashi Nose, Akinori Ito: Multi-Stream Attention-Based BLSTM with Feature Segmentation for Speech Emotion Recognition. INTERSPEECH 2020: 3301-3305
- [c27] Yoshihiro Yamazaki, Yuya Chiba, Takashi Nose, Akinori Ito: Construction and Analysis of a Multimodal Chat-talk Corpus for Dialog Systems Considering Interpersonal Closeness. LREC 2020: 443-448
2010 – 2019
2019
- [j3] Hafiyan Prafianto, Takashi Nose, Yuya Chiba, Akinori Ito: Improving human scoring of prosody using parametric speech synthesis. Speech Commun. 111: 14-21 (2019)
- [c26] Kenji Moriya, Rikuto Osawa, Yuya Chiba, Yoshiko Maruyama, Masahiro Nakagawa: Do Virtual Reality Images Provide Greater Relaxation Effects than 2D Images? ACIT 2019: 6:1-6:6

2018
- [c25] Shunsuke Tada, Yuya Chiba, Takashi Nose, Akinori Ito: Effect of Mutual Self-Disclosure in Spoken Dialog System on User Impression. APSIPA 2018: 806-810
- [c24] Jiang Fu, Yuya Chiba, Takashi Nose, Akinori Ito: Evaluation of English Speech Recognition for Japanese Learners Using DNN-Based Acoustic Models. IIH-MSP (2) 2018: 93-100
- [c23] Mai Yamanaka, Yuya Chiba, Takashi Nose, Akinori Ito: A Study on a Spoken Dialogue System with Cooperative Emotional Speech Synthesis Using Acoustic and Linguistic Information. IIH-MSP (2) 2018: 101-108
- [c22] Takashi Kimura, Takashi Nose, Shinji Hirooka, Yuya Chiba, Akinori Ito: Comparison of Speech Recognition Performance Between Kaldi and Google Cloud Speech API. IIH-MSP (2) 2018: 109-115
- [c21] Kosuke Nakamura, Takashi Nose, Yuya Chiba, Akinori Ito: Melody Completion Based on Convolutional Neural Networks and Generative Adversarial Learning. IIH-MSP (2) 2018: 116-123
- [c20] Hiroto Aoyama, Takashi Nose, Yuya Chiba, Akinori Ito: Improvement of Accent Sandhi Rules Based on Japanese Accent Dictionaries. IIH-MSP (2) 2018: 140-148
- [c19] Takahiro Furuya, Yuya Chiba, Takashi Nose, Akinori Ito: Data Collection and Analysis for Automatically Generating Record of Human Behaviors by Environmental Sound Recognition. IIH-MSP (2) 2018: 149-156
- [c18] Haoran Wu, Yuya Chiba, Takashi Nose, Akinori Ito: Analyzing Effect of Physical Expression on English Proficiency for Multimodal Computer-Assisted Language Learning. INTERSPEECH 2018: 1746-1750
- [c17] Yukiko Kageyama, Yuya Chiba, Takashi Nose, Akinori Ito: Improving User Impression in Spoken Dialog System with Gradual Speech Form Control. SIGDIAL Conference 2018: 235-240
- [c16] Yuya Chiba, Takashi Nose, Taketo Kase, Mai Yamanaka, Akinori Ito: An Analysis of the Effect of Emotional Speech Synthesis on Non-Task-Oriented Dialogue System. SIGDIAL Conference 2018: 371-375

2017
- [j2] Yuya Chiba, Takashi Nose, Akinori Ito: Cluster-based approach to discriminate the user's state whether a user is embarrassed or thinking to an answer to a prompt. J. Multimodal User Interfaces 11(2): 185-196 (2017)
- [c15] Yuya Chiba, Takashi Nose, Akinori Ito: Analysis of efficient multimodal features for estimating user's willingness to talk: Comparison of human-machine and human-human dialog. APSIPA 2017: 428-431
- [c14] Yukiko Kageyama, Yuya Chiba, Takashi Nose, Akinori Ito: Collection of Example Sentences for Non-task-Oriented Dialog Using a Spoken Dialog System and Comparison with Hand-Crafted DB. HCI (29) 2017: 458-464
- [c13] Hayato Mori, Yuya Chiba, Takashi Nose, Akinori Ito: Dialog-Based Interactive Movie Recommendation: Comparison of Dialog Strategies. IIH-MSP (2) 2017: 77-83
- [c12] Shunsuke Tada, Yuya Chiba, Takashi Nose, Akinori Ito: Response Selection of Interview-Based Dialog System Using User Focus and Semantic Orientation. IIH-MSP (2) 2017: 84-90
- [c11] Yusuke Yamada, Takashi Nose, Yuya Chiba, Akinori Ito, Takahiro Shinozaki: Development and Evaluation of Julius-Compatible Interface for Kaldi ASR. IIH-MSP (2) 2017: 91-96
- [c10] Sou Miyamoto, Takashi Nose, Suzunosuke Ito, Harunori Koike, Yuya Chiba, Akinori Ito, Takahiro Shinozaki: Voice Conversion from Arbitrary Speakers Based on Deep Neural Networks with Adversarial Learning. IIH-MSP (2) 2017: 97-103
- [c9] Kosuke Nakamura, Yuya Chiba, Takashi Nose, Akinori Ito: Evaluation of Nonlinear Tempo Modification Methods Based on Sinusoidal Modeling. IIH-MSP (2) 2017: 104-111
- [c8] Kazuki Sato, Takashi Nose, Akira Ito, Yuya Chiba, Akinori Ito, Takahiro Shinozaki: A Study on 2D Photo-Realistic Facial Animation Generation Using 3D Facial Feature Points and Deep Neural Networks. IIH-MSP (2) 2017: 112-118
- [c7] Isao Miyagawa, Yuya Chiba, Takashi Nose, Akinori Ito: Detection of Singing Mistakes from Singing Voice. IIH-MSP (2) 2017: 130-136

2016
- [c6] Yuya Chiba, Akinori Ito: Estimation of User's Willingness to Talk About the Topic: Analysis of Interviews Between Humans. IWSDS 2016: 411-419

2014
- [c5] Hafiyan Prafianto, Takashi Nose, Yuya Chiba, Akinori Ito, Kazuyuki Sato: A study on the effect of speech rate on perception of spoken easy Japanese using speech synthesis. ICAILP 2014: 476-479
- [c4] Noriko Totsuka, Yuya Chiba, Takashi Nose, Akinori Ito: Robot: Have I done something wrong? - Analysis of prosodic features of speech commands under the robot's unintended behavior. ICAILP 2014: 887-890
- [c3] Yuya Chiba, Masashi Ito, Takashi Nose, Akinori Ito: User Modeling by Using Bag-of-Behaviors for Building a Dialog System Sensitive to the Interlocutor's Internal State. SIGDIAL Conference 2014: 74-78

2013
- [c2] Yuya Chiba, Masashi Ito, Akinori Ito: Estimation of User's State during a Dialog Turn with Sequential Multi-modal Features. HCI (29) 2013: 572-576

2012
- [j1] Yuya Chiba, Akinori Ito: Estimating a User's Internal State before the First Input Utterance. Adv. Hum. Comput. Interact. 2012: 865362:1-865362:10 (2012)
- [c1] Yuya Chiba, Masashi Ito, Akinori Ito: Estimation of User's Internal State before the User's First Utterance Using Acoustic Features and Face Orientation. HSI 2012: 23-28
last updated on 2024-11-07 21:36 CET by the dblp team
all metadata released as open data under CC0 1.0 license