Tetsunari Inamura
2020 – today
2024
- [c78] Tetsunari Inamura, Reiko Gotoh, Madoka Matsumoto: Development of a Virtual Travel System to Enhance the Discovery of Aspiration and Pleasure. COMPSAC 2024: 2060-2064
- [c77] Vittorio Fiscale, Tetsunari Inamura, Agata Marta Soccini: Adaptive Training in Virtual Reality Through Dynamic Alien Motion Support. GEM 2024: 1-5
- [c76] Tetsunari Inamura, Kouhei Nagata, Nanami Takahashi: Implementation of a Virtual Success Experience System with Difficulty Adjustment for Enhancing Self-Efficacy. SII 2024: 259-265
- [c75] Yusuke Goutsu, Tetsunari Inamura: Effectiveness of Adaptive Difficulty Settings on Self-efficacy in VR Exercise. VRST 2024: 77:1-77:2
- [c74] Haruka Murakami, Vittorio Fiscale, Agata Marta Soccini, Tetsunari Inamura: White Lies in Virtual Reality: Impact on Enjoyment and Fatigue. VRST 2024: 81:1-81:2

2023
- [j37] Yoshiaki Mizuchi, Hiroki Yamada, Tetsunari Inamura: Evaluation of an online human-robot interaction competition platform based on virtual reality - case study in RCAP2021. Adv. Robotics 37(8): 510-517 (2023)
- [j36] Mitsunori Tada, Tetsunari Inamura: Editorial: Human Digital Twin Technology. Int. J. Autom. Technol. 17(3): 205 (2023)
- [j35] Tetsunari Inamura: Digital Twin of Experience for Human-Robot Collaboration Through Virtual Reality. Int. J. Autom. Technol. 17(3): 284-291 (2023)
- [j34] Alessandra Sciutti, Michael Beetz, Tetsunari Inamura, Ayorkor Korsah, Jean Oh, Giulio Sandini, Shingo Shimoda, David Vernon: The Present and the Future of Cognitive Robotics [TC Spotlight]. IEEE Robotics Autom. Mag. 30(3): 160-163 (2023)
- [c73] Yusuke Goutsu, Tetsunari Inamura: Instant Difficulty Adjustment: Predicting Success Rate of VR Kendama when Changing the Difficulty Level. AHs 2023: 346-348
- [c72] Yoshiaki Mizuchi, Yusuke Tanno, Tetsunari Inamura: Designing Evaluation Metrics for Quality of Human-Robot Interaction in Guiding Human Behavior. HAI 2023: 39-45
- [c71] Vittorio Fiscale, Tetsunari Inamura, Agata Marta Soccini: Enhancing Training and Learning in Virtual Reality: The Influence of Alien Motion on Sense of Embodiment. HAI 2023: 412-414
- [c70] Tetsunari Inamura: The Impact of the Order of Vicarious and Self Experiences in a VR Environment on Self-Efficacy. HAI 2023: 422-424

2022
- [j33] Akira Taniguchi, Michael Spranger, Hiroshi Yamakawa, Tetsunari Inamura: Editorial: Constructive approach to spatial cognition in intelligent robotics. Frontiers Neurorobotics 16 (2022)
- [j32] Agata Marta Soccini, Alessandro Clocchiatti, Tetsunari Inamura: Effects of Frequent Changes in Extended Self-Avatar Movements on Adaptation Performance. J. Robotics Mechatronics 34(4): 756-766 (2022)
- [j31] Taisuke Kobayashi, Shingo Murata, Tetsunari Inamura: Latent Representation in Human-Robot Interaction With Explicit Consideration of Periodic Dynamics. IEEE Trans. Hum. Mach. Syst. 52(5): 928-940 (2022)
- [c69] Tetsunari Inamura, Shinichirou Eitoku, Iwaki Toshima, Shinya Shimizu, Atsushi Fukayama, Shiro Ozawa, Takao Nakamura: Effect of repetitive motion intervention on self-avatar on the sense of self-individuality. HAI 2022: 167-175
- [c68] Yoshiaki Mizuchi, Kouichi Iwami, Tetsunari Inamura: VR and GUI based Human-Robot Interaction Behavior Collection for Modeling the Subjective Evaluation of the Interaction Quality. SII 2022: 375-382

2021
- [j30] Tetsunari Inamura, Yoshiaki Mizuchi, Hiroki Yamada: VR platform enabling crowdsourcing of embodied HRI experiments - case study of online robot competition. Adv. Robotics 35(11): 697-703 (2021)
- [j29] Tetsunari Inamura, Yoshiaki Mizuchi: SIGVerse: A Cloud-Based VR Platform for Research on Multimodal Human-Robot Interaction. Frontiers Robotics AI 8: 549360 (2021)
- [c67] Yusuke Goutsu, Tetsunari Inamura: Linguistic Descriptions of Human Motion with Generative Adversarial Seq2Seq Learning. ICRA 2021: 4281-4287
- [c66] Nanami Takahashi, Tetsunari Inamura, Yoshiaki Mizuchi, Yongwoon Choi: Evaluation of the Difference of Human Behavior between VR and Real Environments in Searching and Manipulating Objects in a Domestic Environment. RO-MAN 2021: 454-460
- [i9] Taisuke Kobayashi, Shingo Murata, Tetsunari Inamura: Latent Representation in Human-Robot Interaction with Explicit Consideration of Periodic Dynamics. CoRR abs/2106.08531 (2021)

2020
- [j28] Hiroyuki Okada, Tetsunari Inamura, Kazuyoshi Wada: Special issue on service robot technology - selected papers from WRS 2018. Adv. Robotics 34(3-4): 141 (2020)
- [j27] Yoshiaki Mizuchi, Tetsunari Inamura: Optimization of criterion for objective evaluation of HRI performance that approximates subjective evaluation: a case study in robot competition. Adv. Robotics 34(3-4): 142-156 (2020)
- [j26] Akira Taniguchi, Yoshinobu Hagiwara, Tadahiro Taniguchi, Tetsunari Inamura: Spatial concept-based navigation with human speech instructions via probabilistic inference on Bayesian generative model. Adv. Robotics 34(19): 1213-1228 (2020)
- [j25] Tetsunari Inamura, Amit Kumar Pandey, Swagat Kumar, Mary-Anne Williams, John-John Cabibihan, Laxmidhar Behera: Special Issue on Robot and Human Interactive Communication 2020. Adv. Robotics 34(20): 1279 (2020)
- [j24] Tetsunari Inamura, Amit Kumar Pandey, Swagat Kumar, Mary-Anne Williams, John-John Cabibihan, Laxmidhar Behera: Special Issue on Robot and Human Interactive Communication 2020 (Part II). Adv. Robotics 34(24): 1545 (2020)
- [j23] Akira Taniguchi, Yoshinobu Hagiwara, Tadahiro Taniguchi, Tetsunari Inamura: Improved and scalable online learning of spatial concepts and language models with mapping. Auton. Robots 44(6): 927-946 (2020)
- [c65] Fangkai Yang, Wenjie Yin, Tetsunari Inamura, Mårten Björkman, Christopher E. Peters: Group Behavior Recognition Using Attention- and Graph-Based Neural Networks. ECAI 2020: 1626-1633
- [i8] Akira Taniguchi, Yoshinobu Hagiwara, Tadahiro Taniguchi, Tetsunari Inamura: Spatial Concept-Based Navigation with Human Speech Instructions via Probabilistic Inference on Bayesian Generative Model. CoRR abs/2002.07381 (2020)
- [i7] Tetsunari Inamura, Yoshiaki Mizuchi: SIGVerse: A cloud-based VR platform for research on social and embodied human-robot interaction. CoRR abs/2005.00825 (2020)
2010 – 2019
2019
- [j22] Kazuhiro Nakadai, Emilia I. Barakova, Michita Imai, Tetsunari Inamura: Special issue on robot and human interactive communication. Adv. Robotics 33(7-8): 307-308 (2019)
- [j21] Tetsunari Inamura, Hiroki Yokoyama, Emre Ugur, Xavier Hinaut, Michael Beetz, Tadahiro Taniguchi: Section focused on machine learning methods for high-level cognitive capabilities in robotics. Adv. Robotics 33(11): 537-538 (2019)
- [j20] Tadashi Ogura, Tetsunari Inamura: Bidirectional estimation between context and motion in motion sequence in which context changes. Adv. Robotics 33(11): 550-565 (2019)
- [j19] Kazuhiro Nakadai, Emilia I. Barakova, Michita Imai, Tetsunari Inamura: Special issue on robot and human interactive communication. Adv. Robotics 33(15-16): 699 (2019)
- [j18] Tadahiro Taniguchi, Daichi Mochihashi, Takayuki Nagai, Satoru Uchida, Naoya Inoue, Ichiro Kobayashi, Tomoaki Nakamura, Yoshinobu Hagiwara, Naoto Iwahashi, Tetsunari Inamura: Survey on frontiers of language and robotics. Adv. Robotics 33(15-16): 700-730 (2019)
- [j17] Hiroyuki Okada, Tetsunari Inamura, Kazuyoshi Wada: What competitions were conducted in the service categories of the World Robot Summit? Adv. Robotics 33(17): 900-910 (2019)
- [c64] Miguel Vasco, Francisco S. Melo, David Martins de Matos, Ana Paiva, Tetsunari Inamura: Online Motion Concept Learning: A Novel Algorithm for Sample-Efficient Learning and Recognition of Human Actions. AAMAS 2019: 2244-2246
- [c63] Tetsunari Inamura, Yoshiaki Mizuchi: Robot Competition to Evaluate Guidance Skill for General Users in VR Environment. HRI 2019: 552-553
- [c62] Miguel Vasco, Francisco S. Melo, David Martins de Matos, Ana Paiva, Tetsunari Inamura: Learning Multimodal Representations for Sample-efficient Recognition of Human Actions. IROS 2019: 4288-4293
- [c61] Yoshiaki Mizuchi, Tetsunari Inamura: Estimation of Subjective Evaluation of HRI Performance Based on Objective Behaviors of Human and Robots. RoboCup 2019: 201-212
- [c60] Agata Marta Soccini, Marco Grangetto, Tetsunari Inamura, Sotaro Shimada: Virtual Hand Illusion: The Alien Finger Motion Experiment. VR 2019: 1165-1166
- [i6] Miguel Vasco, Francisco S. Melo, David Martins de Matos, Ana Paiva, Tetsunari Inamura: Learning multimodal representations for sample-efficient recognition of human actions. CoRR abs/1903.02511 (2019)

2018
- [j16] Akira Taniguchi, Tadahiro Taniguchi, Tetsunari Inamura: Unsupervised spatial lexical acquisition by updating a language model with place clues. Robotics Auton. Syst. 99: 166-180 (2018)
- [c59] Jeffrey Too Chuan Tan, Yoshiaki Mizuchi, Yoshinobu Hagiwara, Tetsunari Inamura: Representation of Embodied Collaborative Behaviors in Cyber-Physical Human-Robot Interaction with Immersive User Interfaces. HRI (Companion) 2018: 251-252
- [c58] Yoshiaki Mizuchi, Tetsunari Inamura: Evaluation of Human Behavior Difference with Restricted Field of View in Real and VR Environments. RO-MAN 2018: 196-201
- [c57] Tatsuya Sakato, Tetsunari Inamura: Evaluation of Rapid Active Learning Method for Motion Label Learning in Variable VR Environment. ROBIO 2018: 1935-1940
- [i5] Akira Taniguchi, Yoshinobu Hagiwara, Tadahiro Taniguchi, Tetsunari Inamura: SpCoSLAM 2.0: An Improved and Scalable Online Learning of Spatial Concepts and Language Models with Mapping. CoRR abs/1803.03481 (2018)

2017
- [j15] Tetsunari Inamura, Satoshi Unenaka, Satoshi Shibuya, Yukari Ohki, Yutaka Oouchida, Shin-Ichi Izumi: Development of VR platform for cloud-based neurorehabilitation and its application to research on sense of agency and ownership. Adv. Robotics 31(1-2): 97-106 (2017)
- [j14] Tomohiro Mimura, Yoshinobu Hagiwara, Tadahiro Taniguchi, Tetsunari Inamura: Bayesian body schema estimation using tactile information obtained through coordinated random movements. Adv. Robotics 31(3): 118-134 (2017)
- [c56] Jeffrey Too Chuan Tan, Yoshinobu Hagiwara, Tetsunari Inamura: Learning from Human Collaborative Experience: Robot Learning via Crowdsourcing of Human-Robot Interaction. HRI (Companion) 2017: 297-298
- [c55] Akira Taniguchi, Yoshinobu Hagiwara, Tadahiro Taniguchi, Tetsunari Inamura: Online spatial concept and lexical acquisition with simultaneous localization and mapping. IROS 2017: 811-818
- [c54] Tamas Bates, Karinne Ramirez-Amaro, Tetsunari Inamura, Gordon Cheng: On-line simultaneous learning and recognition of everyday activities from virtual reality performances. IROS 2017: 3510-3515
- [c53] Tetsunari Inamura, Yoshiaki Mizuchi: Competition Design to Evaluate Cognitive Functions in Human-Robot Interaction Based on Immersive VR. RoboCup 2017: 84-94
- [c52] Yoshiaki Mizuchi, Tetsunari Inamura: Cloud-based multimodal human-robot interaction simulator utilizing ROS and unity frameworks. SII 2017: 948-955
- [i4] Akira Taniguchi, Yoshinobu Hagiwara, Tadahiro Taniguchi, Tetsunari Inamura: Online Spatial Concept and Lexical Acquisition with Simultaneous Localization and Mapping. CoRR abs/1704.04664 (2017)

2016
- [j13] Akira Taniguchi, Tadahiro Taniguchi, Tetsunari Inamura: Spatial Concept Acquisition for a Mobile Robot That Integrates Self-Localization and Unsupervised Word Discovery From Spoken Sentences. IEEE Trans. Cogn. Dev. Syst. 8(4): 285-297 (2016)
- [c51] Tomohiro Mimura, Yoshinobu Hagiwara, Tadahiro Taniguchi, Tetsunari Inamura: Clustering Latent Sensor Distribution on Body Map for Generating Body Schema. IAS 2016: 3-18
- [c50] Tomohiro Mimura, Yoshinobu Hagiwara, Tadahiro Taniguchi, Tetsunari Inamura, Shiro Yano: Analysis of slow dynamics of kinematic structure estimation after physical disorder: Constructive approach toward phantom limb pain. MHS 2016: 1-7
- [i3] Akira Taniguchi, Tadahiro Taniguchi, Tetsunari Inamura: Spatial Concept Acquisition for a Mobile Robot that Integrates Self-Localization and Unsupervised Word Discovery from Spoken Sentences. CoRR abs/1602.01208 (2016)
- [i2] Tomohiro Mimura, Yoshinobu Hagiwara, Tadahiro Taniguchi, Tetsunari Inamura: Bayesian Body Schema Estimation using Tactile Information obtained through Coordinated Random Movements. CoRR abs/1612.00305 (2016)

2015
- [c49] Yoshinobu Hagiwara, Yoshiaki Mizuchi, Yongwoon Choi, Tetsunari Inamura: Cloud based VR System with Immersive Interfaces to Collect Human Gaze and Body Motion Behaviors. HRI (Extended Abstracts) 2015: 175-176
- [c48] Jekaterina Novikova, Leon Watts, Tetsunari Inamura: Modeling Human-Robot Collaboration in a Simulated Environment. HRI (Extended Abstracts) 2015: 181-182
- [c47] Jekaterina Novikova, Leon Adam Watts, Tetsunari Inamura: Emotionally expressive robot behavior improves human-robot collaboration. RO-MAN 2015: 7-12
- [c46] Yuka Ariki, Tetsunari Inamura, Shiro Ikeda, Jun Morimoto: Sparsely extracting stored movements to construct interfaces for humanoid end-effector control. ROBIO 2015: 1816-1821

2014
- [c45] Shengbo Xu, Yuki Inoue, Tetsunari Inamura, Hirotaka Moriguchi, Shinichi Honiden: Sample efficiency improvement on neuroevolution via estimation-based elimination strategy. AAMAS 2014: 1537-1538
- [c44] Jeffrey Too Chuan Tan, Tetsunari Inamura, Yoshinobu Hagiwara, Komei Sugiura, Takayuki Nagai, Hiroyuki Okada: A new dimension for RoboCup @home: human-robot interaction between virtual and real worlds. HRI 2014: 332
- [c43] Yuka Ariki, Tetsunari Inamura, Jun Morimoto: Observing human movements to construct a humanoid interface. Humanoids 2014: 342-347
- [c42] Karinne Ramirez-Amaro, Tetsunari Inamura, Emmanuel C. Dean-Leon, Michael Beetz, Gordon Cheng: Bootstrapping humanoid robot skills by extracting semantic representations of human-like activities from virtual reality. Humanoids 2014: 438-443

2013
- [j12] Raghvendra Jain, Tetsunari Inamura: Bayesian learning of tool affordances based on generalization of functional feature to estimate effects of unseen tools. Artif. Life Robotics 18(1-2): 95-103 (2013)
- [c41] Tetsunari Inamura, Jeffrey Too Chuan Tan: Development of RoboCup @home simulator: simulation platform that enables long-term large scale HRI. HRI 2013: 145-146
- [c40] Jeffrey Too Chuan Tan, Tetsunari Inamura: Integration of work sequence and embodied interaction for collaborative work based human-robot interaction. HRI 2013: 239-240
- [c39] Jeffrey Too Chuan Tan, Tetsunari Inamura: Embodied and multimodal human-robot interaction between virtual and real worlds. RO-MAN 2013: 296-297
- [c38] Bidan Huang, Joanna Bryson, Tetsunari Inamura: Learning motion primitives of object manipulation using Mimesis Model. ROBIO 2013: 1144-1150
- [c37] Tetsunari Inamura, Jeffrey Too Chuan Tan, Komei Sugiura, Takayuki Nagai, Hiroyuki Okada: Development of RoboCup@Home Simulation towards Long-term Large Scale HRI. RoboCup 2013: 672-680
- [c36] Yoshinobu Hagiwara, Tetsunari Inamura: Object recognition using lighting condition database based on long-time observation in virtual environment. SII 2013: 766-771
- [c35] Jeffrey Too Chuan Tan, Tetsunari Inamura, Komei Sugiura, Takayuki Nagai, Hiroyuki Okada: Human-Robot Interaction between Virtual and Real Worlds: Motivation from RoboCup @Home. ICSR 2013: 239-248
- [i1] Tetsunari Inamura, Tamim Asfour, Sethu Vijayakumar: Cognitive Social Robotics: Intelligence based on Embodied Experience and Social Interaction (NII Shonan Meeting 2013-14). NII Shonan Meet. Rep. 2013 (2013)

2012
- [c34] Raghvendra Jain, Tetsunari Inamura: Estimation of Suitable Action to Realize Given Novel Effect with Given Tool Using Bayesian Tool Affordances. AAAI 2012: 2429-2430
- [c33] Jeffrey Too Chuan Tan, Tetsunari Inamura: Extending chatterbot system into multimodal interaction framework with embodied contextual understanding. HRI 2012: 251-252
- [c32] Jeffrey Too Chuan Tan, Tetsunari Inamura: SIGVerse - A cloud computing architecture simulation platform for social human-robot interaction. ICRA 2012: 1310-1315
- [c31] Keisuke Okuno, Tetsunari Inamura: Analysis and modeling of emphatic motion use and symbolic expression use by observing humans' motion coaching task -models for robotic motion coaching system-. RO-MAN 2012: 640-645
- [c30] Jeffrey Too Chuan Tan, Feng Duan, Tetsunari Inamura: Multimodal human-robot interaction with Chatterbot system: Extending AIML towards supporting embodied interactions. ROBIO 2012: 1727-1732
- [c29] Tetsunari Inamura, Jeffrey Too Chuan Tan: Long-term large scale human-robot interaction platform through immersive VR system - Development of RoboCup @Home Simulator-. SII 2012: 242-247
- [c28] Keisuke Okuno, Tetsunari Inamura: A model to output optimal degrees of emphasis for teaching motion according to initial performance of human-learners-an empirically obtained model for robotic motion coaching system. SII 2012: 916-920

2011
- [j11] Saifuddin Md. Tareeq, Tetsunari Inamura: Management of Experience Data for Rapid Adaptation to New Preferences Based on Bayesian Significance Evaluation. Adv. Robotics 25(18): 2273-2291 (2011)
- [j10] R. M. Kuppan Chetty, M. Singaperumal, T. Nagarajan, Tetsunari Inamura: Coordination control of wheeled mobile robots - a hybrid approach. Int. J. Comput. Appl. Technol. 41(3/4): 195-204 (2011)
- [c27] Matei Negulescu, Tetsunari Inamura: Exploring sketching for robot collaboration. HRI 2011: 211-212
- [c26] Keisuke Okuno, Tetsunari Inamura: Motion coaching with emphatic motions and adverbial expressions for human beings by robotic system -method for controlling motions and expressions with sole parameter-. IROS 2011: 3381-3386
- [c25] Jeffrey Too Chuan Tan, Tetsunari Inamura: What are required to simulate interaction with robot? SIGVerse - A simulation platform for human-robot interaction. ROBIO 2011: 2878-2883
- [c24] Tetsunari Inamura: Human-robot Cooperation System using Shared Cyber Space that Connects to Real World - Development of SocioIntelliGenesis Simulator SIGVerse toward HRI. SIMULTECH 2011: 429-434

2010
- [j9] Saifuddin Md. Tareeq, Tetsunari Inamura: Rapid behavior adaptation for human-centered robots in a dynamic environment based on the integration of primitive confidences on multi-sensor elements. Artif. Life Robotics 15(4): 515-521 (2010)
- [j8] Tetsunari Inamura: Preface. Adv. Robotics 24(5-6): 627 (2010)
2000 – 2009
2009
- [j7] Yasuo Kuniyoshi, Tetsunari Inamura: Preface. Adv. Robotics 23(11): 1423-1424 (2009)
- [j6] Tetsunari Inamura, Kei Okada, Satoru Tokutsu, Naotaka Hatao, Masayuki Inaba, Hirochika Inoue: HRP-2W: A humanoid platform for research on support behavior in daily life environments. Robotics Auton. Syst. 57(2): 145-154 (2009)
- [c23] Tetsunari Inamura, Keisuke Okuno: Estimation of other's sensory patterns based on dialogue and shared motion experiences. Humanoids 2009: 617-623

2008
- [c22] Tetsunari Inamura, Tomohiro Shibata: Geometric proto-symbol manipulation towards language-based motion pattern synthesis and recognition. IROS 2008: 334-339
- [c21] Saifuddin Md. Tareeq, Tetsunari Inamura: A sample discarding strategy for rapid adaptation to new situation based on Bayesian behavior learning. ROBIO 2008: 1950-1955

2007
- [j5] Tetsunari Inamura: Preface. Adv. Robotics 21(13): 1471-1472 (2007)
- [j4] Tetsunari Inamura: Preface. Adv. Robotics 21(15): 1685-1686 (2007)
- [c20] Tetsunari Inamura, Tomohiro Shibata: Interpolation and Extrapolation of Motion Patterns in the Proto-symbol Space. ICONIP (2) 2007: 193-202

2006
- [c19] Naoki Kojo, Tetsunari Inamura, Kei Okada, Masayuki Inaba: Gesture Recognition for Humanoids using Proto-symbol Space. Humanoids 2006: 76-81
- [c18] Tetsunari Inamura, Kei Okada, Masayuki Inaba, Hirochika Inoue: HRP-2W: A Humanoid Platform for Research on Support Behavior in Daily life Environments. IAS 2006: 732-739
- [c17] Yuto Nakanishi, Ikuo Mizuuchi, Tomoaki Yoshikai, Tetsunari Inamura, Masayuki Inaba: Tendon Arrangement Based on Joint Torque Requirements for a Reinforceable Musculo-Skeletal Humanoid. IAS 2006: 786-793
- [c16] Naoki Kojo, Tetsunari Inamura, Masayuki Inaba: Behavior Induction by Geometric Relation between Symbols of Multi-sensory Pattern. IAS 2006: 875-882
- [c15] Tetsunari Inamura, Naoki Kojo, Masayuki Inaba: Situation Recognition and Behavior Induction based on Geometric Symbol Representation of Multimodal Sensorimotor Patterns. IROS 2006: 5147-5152
- [c14] Tetsunari Inamura, Tomohiro Kawaji, Tomoyuki Sonoda, Kei Okada, Masayuki Inaba: Cooperative Task Achievement System Between Humans and Robots Based on Stochastic Memory Model of Spatial Environment. JSAI 2006: 77-87

2005
- [j3] Tetsunari Inamura, Masayuki Inaba, Hirochika Inoue: A Dialogue Control Model Based on Ambiguity Evaluation of Users' Instructions and Stochastic Representation of Experiences. J. Robotics Mechatronics 17(6): 697-704 (2005)
- [c13] Yuto Nakanishi, Ikuo Mizuuchi, Tomoaki Yoshikai, Tetsunari Inamura, Masayuki Inaba: Pedaling by a redundant musculo-skeletal humanoid robot. Humanoids 2005: 68-73
- [c12] Tetsunari Inamura, Naoki Kojo, Tomoyuki Sonoda, Kazuyuki Sakamoto, Kei Okada, Masayuki Inaba: Intent imitation using wearable motion capturing system with on-line teaching of task attention. Humanoids 2005: 469-474

2004
- [j2] Tetsunari Inamura, Iwaki Toshima, Hiroaki Tanie, Yoshihiko Nakamura: Embodied Symbol Emergence Based on Mimesis Theory. Int. J. Robotics Res. 23(4-5): 363-377 (2004)
- [j1] Tetsunari Inamura, Masayuki Inaba, Hirochika Inoue: PEXIS: Probabilistic experience representation based adaptive interaction system for personal robots. Syst. Comput. Jpn. 35(6): 98-109 (2004)
- [c11] Marika Hayashi, Tetsunari Inamura, Masayuki Inaba, Hirochika Inoue: Acquisition of behavior modifier based on geometric proto-symbol manipulation and its application to motion generation. IROS 2004: 2036-2041
- [c10] Tetsunari Inamura, Masayuki Inaba, Hirochika Inoue: Dialogue control for task achievement based on evaluation of situational vagueness and stochastic representation of experiences. IROS 2004: 2861-2866

2003
- [c9] Tetsunari Inamura, Hiroaki Tanie, Yoshihiko Nakamura: Keyframe compression and decompression for time series data based on the continuous hidden Markov model. IROS 2003: 1487-1492
- [c8] Yoshihiko Nakamura, Tetsunari Inamura, Hiroaki Tanie: A Statistic Model of Embodied Symbol Emergence. ISRR 2003: 573-584

2002
- [c7] Tetsunari Inamura, Iwaki Toshima, Yoshihiko Nakamura: Acquisition and Embodiment of Motion Elements in Closed Mimesis Loop. ICRA 2002: 1539-1544
- [c6] Tetsunari Inamura, Yoshihiko Nakamura, Moriaki Shimozaki: Associative computational model of mirror neurons that connects missing link between behaviors and symbols. IROS 2002: 1032-1037
- [c5] Tetsunari Inamura, Iwaki Toshima, Yoshihiko Nakamura: Acquiring Motion Elements for Bidirectional Computation of Motion Recognition and Generation. ISER 2002: 372-381

2001
- [c4] Tetsunari Inamura, Yoshihiko Nakamura, Hideaki Ezaki, Iwaki Toshima: Imitation and Primitive Symbol Acquisition of Humanoids by the Integrated Mimesis Loop. ICRA 2001: 4208-4213
- [c3] Qiang Huang, Yoshihiko Nakamura, Tetsunari Inamura: Humanoids Walk with Feedforward Dynamic Pattern and Feedback Sensory Reflection. ICRA 2001: 4220-4225

2000
- [c2] Tetsunari Inamura, Masayuki Inaba, Hirochika Inoue: User adaptation of human-robot interaction model based on Bayesian network and introspection of interaction experience. IROS 2000: 2139-2144
1990 – 1999
1998
- [c1] Tetsunari Inamura, Tomohiro Shibata, Yoshio Matsumoto, Masayuki Inaba, Hirochika Inoue: Finding and following a human based on online visual feature determination through discourse. IROS 1998: 348-353