Yoshiyuki Ohmura
2020 – today
- 2024
  - [j6] Heecheol Kim, Yoshiyuki Ohmura, Yasuo Kuniyoshi: Goal-Conditioned Dual-Action Imitation Learning for Dexterous Dual-Arm Robot Manipulation. IEEE Trans. Robotics 40: 2287-2305 (2024)
  - [c22] Takayuki Komatsu, Yoshiyuki Ohmura, Yasuo Kuniyoshi: Ablation Study to Clarify the Mechanism of Object Segmentation in Multi-Object Representation Learning. ICDL 2024: 1-7
  - [c21] Heecheol Kim, Yoshiyuki Ohmura, Yasuo Kuniyoshi: Multi-task real-robot data with gaze attention for dual-arm fine manipulation. IROS 2024: 8516-8523
  - [i10] Heecheol Kim, Yoshiyuki Ohmura, Yasuo Kuniyoshi: Multi-task robot data for dual-arm fine manipulation. CoRR abs/2401.07603 (2024)
- 2023
  - [j5] Heecheol Kim, Yoshiyuki Ohmura, Akihiko Nagakubo, Yasuo Kuniyoshi: Training Robots Without Robots: Deep Imitation Learning for Master-to-Robot Policy Transfer. IEEE Robotics Autom. Lett. 8(5): 2906-2913 (2023)
  - [c20] Yoshia Abe, Yoshiyuki Ohmura, Shogo Yonekura, Hoshinori Kanazawa, Yasuo Kuniyoshi: Simulating Early Childhood Drawing Behaviors under Physical Constraints Using Reinforcement Learning. ICDL 2023: 156-163
  - [c19] Ryo Takatsuki, Yoshiyuki Ohmura, Yasuo Kuniyoshi: Unsupervised Judgment of Properties Based on Transformation Recognition. ICDL 2023: 409-414
  - [i9] Takayuki Komatsu, Yoshiyuki Ohmura, Yasuo Kuniyoshi: Ablation Study to Clarify the Mechanism of Object Segmentation in Multi-Object Representation Learning. CoRR abs/2310.03273 (2023)
- 2022
  - [c18] Takumi Takada, Wataru Shimaya, Yoshiyuki Ohmura, Yasuo Kuniyoshi: Disentangling Patterns and Transformations from One Sequence of Images with Shape-invariant Lie Group Transformer. ICDL 2022: 54-59
  - [c17] Heecheol Kim, Yoshiyuki Ohmura, Yasuo Kuniyoshi: Memory-based gaze prediction in deep imitation learning for robot manipulation. ICRA 2022: 2427-2433
  - [c16] Shogo Hamano, Heecheol Kim, Yoshiyuki Ohmura, Yasuo Kuniyoshi: Using human gaze in few-shot imitation learning for robot manipulation. IROS 2022: 8622-8629
  - [i8] Heecheol Kim, Yoshiyuki Ohmura, Yasuo Kuniyoshi: Memory-based gaze prediction in deep imitation learning for robot manipulation. CoRR abs/2202.04877 (2022)
  - [i7] Heecheol Kim, Yoshiyuki Ohmura, Akihiko Nagakubo, Yasuo Kuniyoshi: Training Robots without Robots: Deep Imitation Learning for Master-to-Robot Policy Transfer. CoRR abs/2202.09574 (2022)
  - [i6] Heecheol Kim, Yoshiyuki Ohmura, Yasuo Kuniyoshi: Robot peels banana with goal-conditioned dual-action deep imitation learning. CoRR abs/2203.09749 (2022)
  - [i5] Takumi Takada, Wataru Shimaya, Yoshiyuki Ohmura, Yasuo Kuniyoshi: Disentangling Patterns and Transformations from One Sequence of Images with Shape-invariant Lie Group Transformer. CoRR abs/2203.11210 (2022)
- 2021
  - [j4] Heecheol Kim, Yoshiyuki Ohmura, Yasuo Kuniyoshi: Gaze-Based Dual Resolution Deep Imitation Learning for High-Precision Dexterous Robot Manipulation. IEEE Robotics Autom. Lett. 6(2): 1630-1637 (2021)
  - [c15] Takumi Takada, Yoshiyuki Ohmura, Yasuo Kuniyoshi: Unsupervised Learning of Shape-invariant Lie Group Transformer by Embedding Ordinary Differential Equation. ICDL 2021: 1-6
  - [c14] Takayuki Komatsu, Yoshiyuki Ohmura, Yasuo Kuniyoshi: Unsupervised Temporal Segmentation Using Models That Discriminate Between Demonstrations and Unintentional Actions. IROS 2021: 8951-8956
  - [c13] Heecheol Kim, Yoshiyuki Ohmura, Yasuo Kuniyoshi: Transformer-based deep imitation learning for dual-arm robot manipulation. IROS 2021: 8965-8972
  - [i4] Heecheol Kim, Yoshiyuki Ohmura, Yasuo Kuniyoshi: Gaze-based dual resolution deep imitation learning for high-precision dexterous robot manipulation. CoRR abs/2102.01295 (2021)
  - [i3] Heecheol Kim, Yoshiyuki Ohmura, Yasuo Kuniyoshi: Transformer-based deep imitation learning for dual-arm robot manipulation. CoRR abs/2108.00385 (2021)
  - [i2] Takayuki Kanai, Yoshiyuki Ohmura, Akihiko Nagakubo, Yasuo Kuniyoshi: Third-party Evaluation of Robotic Hand Designs Using a Mechanical Glove. CoRR abs/2109.10501 (2021)
- 2020
  - [j3] Heecheol Kim, Yoshiyuki Ohmura, Yasuo Kuniyoshi: Using Human Gaze to Improve Robustness Against Irrelevant Objects in Robot Manipulation Tasks. IEEE Robotics Autom. Lett. 5(3): 4415-4422 (2020)
  - [c12] Izumi Karino, Yoshiyuki Ohmura, Yasuo Kuniyoshi: Identifying Critical States by the Action-Based Variance of Expected Return. ICANN (1) 2020: 366-378
  - [i1] Izumi Karino, Yoshiyuki Ohmura, Yasuo Kuniyoshi: Identifying Critical States by the Action-Based Variance of Expected Return. CoRR abs/2008.11332 (2020)
2010 – 2019
- 2019
  - [c11] Kento Sekiya, Yoshiyuki Ohmura, Yasuo Kuniyoshi: Generating an image of an object's appearance from somatosensory information during haptic exploration. IROS 2019: 8138-8143
- 2012
  - [c10] Takashi Sagisaka, Yoshiyuki Ohmura, Akihiko Nagakubo, Kazuyuki Ozaki, Yasuo Kuniyoshi: Development and Applications of High-Density Tactile Sensing Glove. EuroHaptics (1) 2012: 445-456
- 2011
  - [c9] Takashi Sagisaka, Yoshiyuki Ohmura, Yasuo Kuniyoshi, Akihiko Nagakubo, Kazuyuki Ozaki: High-density conformable tactile sensing glove. Humanoids 2011: 537-542
2000 – 2009
- 2009
  - [c8] Yuki Fujimori, Yoshiyuki Ohmura, Tatsuya Harada, Yasuo Kuniyoshi: Wearable motion capture suit with full-body tactile sensors. ICRA 2009: 3186-3193
  - [c7] Kunihiro Ogata, Daisuke Shiramatsu, Yoshiyuki Ohmura, Yasuo Kuniyoshi: Analyzing the "knack" of human piggyback motion based on simultaneous measurement of tactile and movement data as a basis for humanoid control. IROS 2009: 2531-2536
- 2007
  - [c6] Yoshiyuki Ohmura, Yasuo Kuniyoshi: Humanoid robot which can lift a 30kg box by whole body contact and tactile feedback. IROS 2007: 1136-1141
  - [c5] Yasuo Kuniyoshi, Yoshiyuki Ohmura, Akihiko Nagakubo: Whole Body Haptics for Augmented Humanoid Task Capabilities. ISRR 2007: 61-73
- 2006
  - [c4] Yoshiyuki Ohmura, Yasuo Kuniyoshi, Akihiko Nagakubo: Conformable and Scalable Tactile Sensor Skin for Curved Surfaces. ICRA 2006: 1348-1353
- 2004
  - [j2] Yasuo Kuniyoshi, Yoshiyuki Ohmura, Koji Terada, Akihiko Nagakubo: Dynamic Roll-and-Rise Motion by an Adult-Size Humanoid Robot. Int. J. Humanoid Robotics 1(3): 497-516 (2004)
  - [j1] Yasuo Kuniyoshi, Yoshiyuki Ohmura, Koji Terada, Akihiko Nagakubo, Shin'ichiro Eitoku, Tomoyuki Yamamoto: Embodied basis of invariant features in execution and perception of whole-body dynamic actions - knacks and focuses of Roll-and-Rise motion. Robotics Auton. Syst. 48(4): 189-201 (2004)
- 2003
  - [c3] Yasuo Kuniyoshi, Yasuaki Yorozu, Yoshiyuki Ohmura, Koji Terada, Takuya Otani, Akihiko Nagakubo, Tomoyuki Yamamoto: From Humanoid Embodiment to Theory of Mind. Embodied Artificial Intelligence 2003: 202-218
  - [c2] Koji Terada, Yoshiyuki Ohmura, Yasuo Kuniyoshi: Analysis and control of whole body dynamic humanoid motion - towards experiments on a roll-and-rise motion. IROS 2003: 1382-1387
  - [c1] Yasuo Kuniyoshi, Yoshiyuki Ohmura, Koji Terada, Tomoyuki Yamamoto, Akihiko Nagakubo: Exploiting the Global Dynamics Structure of Whole-Body Humanoid Motion - Getting the "Knack" of Roll-and-Rise Motion. ISRR 2003: 385-394
last updated on 2025-01-09 13:03 CET by the dblp team
all metadata released as open data under CC0 1.0 license