Nobutaka Shimada
2020 – today
- 2024
- [c62] Weihao Cai, Yoshiki Mori, Nobutaka Shimada: Generating Robot Action Sequences: An Efficient Vision-Language Models with Visual Prompts. IWIS 2024: 1-4
- [c61] Fumiya Honjo, Makoto Sanada, Yoshiki Mori, Nobutaka Shimada: Acquisition of Object Shape Using a HMD's Depth Sensor and MR Presentation of Manipulation Method. IWIS 2024: 1-4
- [c60] Yoshiki Mori, Zhongkui Wang, Nobutaka Shimada, Sadao Kawamura: A Detachable Distance Sensor Unit Using Optical Fiber for a Pneumatic-Driven Bellows Actuator. SII 2024: 227-232
- 2023
- [j9] Dinh Tuan Tran, Dung Duc Tran, Minh Anh Nguyen, Quyen Van Pham, Nobutaka Shimada, Joo-Ho Lee, Anh Quang Nguyen: MonoIS3DLoc: Simulation to Reality Learning Based Monocular Instance Segmentation to 3D Objects Localization From Aerial View. IEEE Access 11: 64170-64184 (2023)
- [c59] Yutong Zhou, Nobutaka Shimada: Vision + Language Applications: A Survey. CVPR Workshops 2023: 826-842
- [c58] Hiroki Fukada, Tadashi Matsuo, Nobutaka Shimada, Honda Atsushi, Ryuichi Yoshiura, Zhongkui Wang, Shinichi Hirai: Uncertainty-Aware Quantitative Grasping Control of Granular Foodstuff Based on a Deep Model that Outputs Regression Coefficients. IWIS 2023: 1-4
- [c57] Makoto Sanada, Nobutaka Shimada, Yoshiaki Shirai: Recalling Multiple Object Manipulation Candidates by Learning Based on Observation. IWIS 2023: 1-5
- [i3] Yutong Zhou, Nobutaka Shimada: Vision + Language Applications: A Survey. CoRR abs/2305.14598 (2023)
- 2022
- [j8] Dinh Tuan Tran, Nobutaka Shimada, Joo-Ho Lee: Triple-Sigmoid Activation Function for Deep Open-Set Recognition. IEEE Access 10: 77668-77678 (2022)
- [c56] Yutong Zhou, Nobutaka Shimada: ABLE: Aesthetic Box Lunch Editing. CEA++@MM 2022: 53-56
- 2021
- [j7] Tadahiro Taniguchi, Lotfi El Hafi, Yoshinobu Hagiwara, Akira Taniguchi, Nobutaka Shimada, Takanobu Nishiura: Semiotically adaptive cognition: toward the realization of remotely-operated service robots for the new normal symbiotic society. Adv. Robotics 35(11): 664-674 (2021)
- [c55] Yutong Zhou, Nobutaka Shimada: Generative Adversarial Network for Text-to-Face Synthesis and Manipulation with Pretrained BERT Model. FG 2021: 1-8
- [c54] Erika Aoki, Tadashi Matsuo, Nobutaka Shimada: Non-tactile Thumb Tip Measurement System for Encouraging Rehabilitation After Surgery. ICIC (1) 2021: 842-852
- [c53] Kyohei Yoshida, Tadashi Matsuo, Nobutaka Shimada: ROS2-Based Distributed System Implementation for Logging Indoor Human Activities. ICIC (1) 2021: 862-873
- [c52] Takaaki Fukui, Tadashi Matsuo, Nobutaka Shimada: Scene Descriptor Expressing Ambiguity in Information Recovery Based on Incomplete Partial Observation. IROS 2021: 2414-2419
- 2020
- [j6] Yutong Zhou, Nobutaka Shimada: Rain Streaks and Snowflakes Removal for Video Sequences via Motion Compensation and Matrix Completion. SN Comput. Sci. 1(6): 328 (2020)
2010 – 2019
- 2019
- [c51] Yutong Zhou, Nobutaka Shimada: Using Motion Compensation and Matrix Completion Algorithm to Remove Rain Streaks and Snow for Video Sequences. ACPR (1) 2019: 91-104
- [c50] Michiko Sakuma, Kiyomi Kuramochi, Nobutaka Shimada, Rie Ito: Positive and Negative Opinions About Living with Robots in Japanese University Students. HRI 2019: 640-641
- [c49] Makoto Sanada, Tadashi Matsuo, Nobutaka Shimada, Yoshiaki Shirai: Recalling Candidates of Grasping Method from an Object Image using Neural Network. IROS 2019: 634-639
- 2018
- [c48] Tadashi Matsuo, Takuya Kawakami, Yoko Ogawa, Nobutaka Shimada: Inference of Grasping Pattern from Object Image Based on Interaction Descriptor. ISIE 2018: 565-570
- 2017
- [j5] Tadashi Matsuo, Nobutaka Shimada: Construction of Latent Descriptor Space and Inference Model of Hand-Object Interactions. IEICE Trans. Inf. Syst. 100-D(6): 1350-1359 (2017)
- [c47] Tadashi Matsuo, Hiroya Fukuhara, Nobutaka Shimada: Transform invariant auto-encoder. IROS 2017: 2359-2364
- [i2] Tadashi Matsuo, Nobutaka Shimada: Construction of Latent Descriptor Space and Inference Model of Hand-Object Interactions. CoRR abs/1709.03739 (2017)
- [i1] Tadashi Matsuo, Hiroya Fukuhara, Nobutaka Shimada: Transform Invariant Auto-encoder. CoRR abs/1709.03754 (2017)
- 2016
- [c46] Hiroyuki Adachi, Seiko Myojin, Nobutaka Shimada: A Co-located Meeting Support System by Scoring Group Activity using Mobile Devices. AH 2016: 44:1-44:2
- 2015
- [c45] Yoko Ogawa, Nobutaka Shimada, Yoshiaki Shirai, Yoshimasa Kurumi, Masaru Komori: Temporal-spatial validation of knot-tying procedures using RGB-D sensor for training of surgical operation. MVA 2015: 263-266
- [c44] Hiroyuki Adachi, Seiko Myojin, Nobutaka Shimada: ScoringTalk: a tablet system scoring and visualizing conversation for balancing of participation. SIGGRAPH Asia Mobile Graphics and Interactive Applications 2015: 9:1-9:5
- [c43] Hiroyuki Adachi, Akimune Haruna, Seiko Myojin, Nobutaka Shimada: ScoringTalk and WatchingMeter: utterance and gaze visualization for co-located collaboration. SIGGRAPH Asia Mobile Graphics and Interactive Applications 2015: 36:1
- 2014
- [c42] Hiroyuki Adachi, Seiko Myojin, Nobutaka Shimada: Tablet system for sensing and visualizing statistical profiles of multi-party conversation. GCCE 2014: 407-411
- [c41] Shinya Morioka, Tadashi Matsuo, Yasuhiro Hiramoto, Nobutaka Shimada, Yoshiaki Shirai: Automatic Image Collection of Objects with Similar Function by Learning Human Grasping Forms. MPRSS 2014: 3-14
- 2013
- [c40] Tadashi Matsuo, Yoshiaki Shirai, Nobutaka Shimada: Construction of General HMMs from a Few Hand Motions for Sign Language Word Recognition. MVA 2013: 69-72
- 2012
- [c39] Sho Miyamoto, Tadashi Matsuo, Nobutaka Shimada, Yoshiaki Shirai: Real-time and precise 3-D hand posture estimation based on classification tree trained with variations of appearances. ICPR 2012: 453-456
- [c38] Seiko Myojin, Arata Sato, Nobutaka Shimada: Augmented reality card game based on user-specific information control. ACM Multimedia 2012: 1193-1196
2000 – 2009
- 2008
- [c37] Kazuhiro Maki, Noriaki Katayama, Nobutaka Shimada, Yoshiaki Shirai: Image-based automatic detection of indoor scene events and interactive inquiry. ICPR 2008: 1-4
- [c36] Tadashi Matsuo, Yoshiaki Shirai, Nobutaka Shimada: Automatic generation of HMM topology for sign language recognition. ICPR 2008: 1-4
- [c35] Takahide Tanaka, Satoshi Yamaguchi, Lee Jooho, Nobutaka Shimada, Hiromi T. Tanaka: Toward Volume-Based Haptic Collaborative Virtual Environment with Realistic Sensation. ISUC 2008: 268-273
- 2007
- [j4] Yasushi Makihara, Masao Takizawa, Yoshiaki Shirai, Nobutaka Shimada: Adaptation to change of lighting conditions for interactive object recognition. Syst. Comput. Jpn. 38(4): 52-62 (2007)
- [c34] Akihiro Imai, Nobutaka Shimada, Yoshiaki Shirai: Hand Posture Estimation in Complex Backgrounds by Considering Mis-match of Model. ACCV (1) 2007: 596-607
- [c33] Kazuma Haraguchi, Jun Miura, Nobutaka Shimada, Yoshiaki Shirai: Probabilistic map building considering sensor visibility. ICINCO-RA (1) 2007: 200-206
- [c32] Kazuma Haraguchi, Nobutaka Shimada, Yoshiaki Shirai, Jun Miura: Probabilistic map building considering sensor visibility for mobile robot. IROS 2007: 4115-4120
- 2006
- [j3] Atsushi Matsumoto, Yoshiaki Shirai, Nobutaka Shimada, Takuro Sakiyama, Jun Miura: Robust Face Recognition under Various Illumination Conditions. IEICE Trans. Inf. Syst. 89-D(7): 2157-2163 (2006)
- [c31] Kana Kawahigashi, Yoshiaki Shirai, Jun Miura, Nobutaka Shimada: Automatic Synthesis of Training Data for Sign Language Recognition Using HMM. ICCHP 2006: 623-626
- 2005
- [c30] Yasushi Makihara, Jun Miura, Yoshiaki Shirai, Nobutaka Shimada: Strategy for Displaying the Recognition Result in Interactive Vision. CW 2005: 467-474
- [c29] Atsushi Matsumoto, Nobutaka Shimada, Takuro Sakiyama, Jun Miura, Yoshiaki Shirai: Robust Face Recognition under various illumination conditions. MVA 2005: 414-417
- 2004
- [c28] Yasushi Hamada, Nobutaka Shimada, Yoshiaki Shirai: Hand Shape Estimation under Complex Backgrounds for Sign Language Recognition. FGR 2004: 589-594
- [c27] Akihiro Imai, Nobutaka Shimada, Yoshiaki Shirai: 3-D Hand Posture Recognition by Training Contour Variation. FGR 2004: 895-900
- [c26] Yasushi Makihara, Yoshiaki Shirai, Nobutaka Shimada: Online Learning of Color Transformation for Interactive Object Recognition under Various Lighting Conditions. ICPR (3) 2004: 161-164
- 2003
- [j2] Yoshinori Kuno, Nobutaka Shimada, Yoshiaki Shirai: Look where you're going [robotic wheelchair]. IEEE Robotics Autom. Mag. 10(1): 26-34 (2003)
- [c25] Jun Miura, Yoshiaki Shirai, Nobutaka Shimada, Yasushi Makihara, Masao Takizawa, Yoshio Yano: Development of a Personal Service Robot with User-Friendly Interfaces. FSR 2003: 427-436
- [c24] Yasushi Makihara, Masao Takizawa, Yoshiaki Shirai, Nobutaka Shimada: Object Recognition under Various Lighting Conditions. SCIA 2003: 899-906
- 2002
- [j1] Yasushi Makihara, Masao Takizawa, Kazuo Ninokata, Yoshiaki Shirai, Jun Miura, Nobutaka Shimada: A Service Robot Acting by Occasional Dialog - Object Recognition Using Dialog with User and Sensor-Based Manipulation -. J. Robotics Mechatronics 14(2): 124-132 (2002)
- [c23] Mun Ho Jeong, Yoshinori Kuno, Nobutaka Shimada, Yoshiaki Shirai: Two-Hand Gesture Recognition using Coupled Switching Linear Model. ICPR (1) 2002: 9-12
- [c22] Yuichi Araki, Nobutaka Shimada, Yoshiaki Shirai: Detection of Faces of Various Directions in Complex Backgrounds. ICPR (1) 2002: 409-412
- [c21] Mun Ho Jeong, Yoshinori Kuno, Nobutaka Shimada, Yoshiaki Shirai: Two-Hand Gesture Recognition Using Coupled Switching Linear Model. ICPR (3) 2002: 529-532
- [c20] Yasushi Makihara, Masao Takizawa, Yoshiaki Shirai, Jun Miura, Nobutaka Shimada: Object Recognition Supported by User Interaction for Service Robots. ICPR (3) 2002: 561-564
- 2001
- [c19] Mun Ho Jeong, Yoshinori Kuno, Nobutaka Shimada, Yoshiaki Shirai: Recognition of Shape-Changing Hand Gestures Based on Switching Linear Model. ICIAP 2001: 14-19
- [c18] Kyousuke Uchida, Yoshiaki Shirai, Nobutaka Shimada: Probabilistic method of real-time person detection using color image sequences. IROS 2001: 1983-1988
- [c17] Yoshifumi Murakami, Yoshinori Kuno, Nobutaka Shimada, Yoshiaki Shirai: Collision avoidance by observing pedestrians' faces for intelligent wheelchairs. IROS 2001: 2018-2023
- [c16] Yoshinori Kuno, Yoshifumi Murakami, Nobutaka Shimada: User and social interfaces by observing human faces for intelligent wheelchairs. PUI 2001: 8:1-8:4
- 2000
- [c15] Nobutaka Shimada, Yoshiaki Shirai, Yoshinori Kuno: Model Adaptation and Posture Estimation of Moving Articulated Object Using Monocular Camera. AMDO 2000: 159-172
- [c14] Yasushi Hamada, Nobutaka Shimada, Yoshiaki Shirai: Hand Shape Estimation Using Image Transition Network. Workshop on Human Motion 2000: 161-166
- [c13] Yoshinori Kuno, Teruhisa Murashima, Nobutaka Shimada, Yoshiaki Shirai: Interactive Gesture Interface for Intelligent Wheelchairs. IEEE International Conference on Multimedia and Expo (II) 2000: 789-792
- [c12] Shengshien Chong, Yoshinori Kuno, Nobutaka Shimada, Yoshiaki Shirai: Human-Robot Interface Based on Speech Understanding Assisted by Vision. ICMI 2000: 16-23
- [c11] Nobutaka Shimada, Kousuke Kimura, Yoshiaki Shirai, Yoshinori Kuno: Hand Posture Estimation by Combining 2-D Appearance-Based and 3-D Model-Based Approaches. ICPR 2000: 3709-3712
- [c10] Yoshinori Kuno, Teruhisa Murashima, Nobutaka Shimada, Yoshiaki Shirai: Intelligent Wheelchair Remotely Controlled by Interactive Gestures. ICPR 2000: 4672-4675
- [c9] Yoshinori Kuno, Teruhisa Murashima, Nobutaka Shimada, Yoshiaki Shirai: Understanding and learning of gestures through human-robot interaction. IROS 2000: 2133-2138
- [c8] Yoshifumi Murakami, Yoshinori Kuno, Nobutaka Shimada, Yoshiaki Shirai: Intelligent wheelchair moving among people based on their observations. SMC 2000: 1466-1471
1990 – 1999
- 1999
- [c7] Yoshinori Kuno, Satoru Nakanishi, Teruhisa Murashima, Nobutaka Shimada, Yoshiaki Shirai: Robotic Wheelchair Observing Its Inside and Outside. ICIAP 1999: 502-507
- [c6] Akihiko Iketani, Yoshinori Kuno, Nobutaka Shimada, Yoshiaki Shirai: Real-Time Surveillance System Detecting Persons in Complex Scenes. ICIAP 1999: 1112-1115
- [c5] Yoshinori Kuno, Satoru Nakanishi, Teruhisa Murashima, Nobutaka Shimada, Yoshiaki Shirai: Robotic Wheelchair with Three Control Modes. ICRA 1999: 2590-2595
- [c4] Satoru Nakanishi, Yoshinori Kuno, Nobutaka Shimada, Yoshiaki Shirai: Robotic wheelchair based on observations of both user and environment. IROS 1999: 912-917
- 1998
- [c3] Nobutaka Shimada, Yoshiaki Shirai, Yoshinori Kuno, Jun Miura: 3-D Pose Estimation and Model Refinement of an Articulated Object from a Monocular Image Sequence. ACCV (1) 1998: 672-679
- [c2] Nobutaka Shimada, Yoshiaki Shirai, Yoshinori Kuno, Jun Miura: Hand Gesture Estimation and Model Refinement Using Monocular Camera - Ambiguity Limitation by Inequality Constraints. FG 1998: 268-273
- [c1] Yoshihisa Adachi, Yoshinori Kuno, Nobutaka Shimada, Yoshiaki Shirai: Intelligent wheelchair using visual information on human faces. IROS 1998: 354-359