Ryo Yonetani
2020 – today
- 2025
- [i25] Jiaqi Bao, Ryo Yonetani: Path Planning using Instruction-Guided Probabilistic Roadmaps. CoRR abs/2502.16515 (2025)
- 2024
- [c42] Akira Kasuga, Ryo Yonetani: CXSimulator: A User Behavior Simulation using LLM Embeddings for Web-Marketing Campaign Assessment. CIKM 2024: 3817-3821
- [c41] Kohei Honda, Ryo Yonetani, Mai Nishimura, Tadashi Kozuno: When to Replan? An Adaptive Replanning Strategy for Autonomous Navigation using Deep Reinforcement Learning. ICRA 2024: 6650-6656
- [c40] Hikaru Asano, Ryo Yonetani, Taiki Sekii, Hiroki Ouchi: Text2Traj2Text: Learning-by-Synthesis Framework for Contextual Captioning of Human Movement Trajectories. INLG 2024: 289-302
- [c39] Ryo Yonetani, Jun Baba, Yasutaka Furukawa: RetailOpt: Opt-In, Easy-to-Deploy Trajectory Estimation from Smartphone Motion Data and Retail Facility Information. ISWC 2024: 125-132
- [i24] Ryo Yonetani, Jun Baba, Yasutaka Furukawa: RetailOpt: An Opt-In, Easy-to-Deploy Trajectory Estimation System Leveraging Smartphone Motion Data and Retail Facility Information. CoRR abs/2404.12548 (2024)
- [i23] Ryo Yonetani: TSPDiffuser: Diffusion Models as Learned Samplers for Traveling Salesperson Path Planning Problems. CoRR abs/2406.02858 (2024)
- [i22] Akira Kasuga, Ryo Yonetani: CXSimulator: A User Behavior Simulation using LLM Embeddings for Web-Marketing Campaign Assessment. CoRR abs/2407.21553 (2024)
- [i21] Hikaru Asano, Ryo Yonetani, Taiki Sekii, Hiroki Ouchi: Text2Traj2Text: Learning-by-Synthesis Framework for Contextual Captioning of Human Movement Trajectories. CoRR abs/2409.12670 (2024)
- [i20] Matthew Ishige, Yasuhiro Yoshimura, Ryo Yonetani: Opt-in Camera: Person Identification in Video via UWB Localization and Its Application to Opt-in Systems. CoRR abs/2409.19891 (2024)
- 2023
- [j10] Yasuhiro Nitta, Mariko Isogawa, Ryo Yonetani, Maki Sugimoto: Importance Rank-Learning of Objects in Urban Scenes for Assisting Visually Impaired People. IEEE Access 11: 62932-62941 (2023)
- [j9] Kazumi Kasaura, Shuwa Miura, Tadashi Kozuno, Ryo Yonetani, Kenta Hoshino, Yohei Hosoe: Benchmarking Actor-Critic Deep Reinforcement Learning Algorithms for Robotics Control With Action Constraints. IEEE Robotics Autom. Lett. 8(8): 4449-4456 (2023)
- [c38] Kazumi Kasaura, Ryo Yonetani, Mai Nishimura: Periodic Multi-Agent Path Planning. AAAI 2023: 6183-6191
- [c37] Hikaru Asano, Ryo Yonetani, Mai Nishimura, Tadashi Kozuno: Counterfactual Fairness Filter for Fair-Delay Multi-Robot Navigation. AAMAS 2023: 887-895
- [c36] Masafumi Endo, Tatsunori Taniai, Ryo Yonetani, Genya Ishigami: Risk-aware Path Planning via Probabilistic Fusion of Traversability Prediction for Planetary Rovers on Heterogeneous Terrains. ICRA 2023: 11852-11858
- [i19] Kazumi Kasaura, Ryo Yonetani, Mai Nishimura: Periodic Multi-Agent Path Planning. CoRR abs/2301.10910 (2023)
- [i18] Masafumi Endo, Tatsunori Taniai, Ryo Yonetani, Genya Ishigami: Risk-aware Path Planning via Probabilistic Fusion of Traversability Prediction for Planetary Rovers on Heterogeneous Terrains. CoRR abs/2303.01169 (2023)
- [i17] Kazumi Kasaura, Shuwa Miura, Tadashi Kozuno, Ryo Yonetani, Kenta Hoshino, Yohei Hosoe: Benchmarking Actor-Critic Deep Reinforcement Learning Algorithms for Robotics Control with Action Constraints. CoRR abs/2304.08743 (2023)
- [i16] Kohei Honda, Ryo Yonetani, Mai Nishimura, Tadashi Kozuno: When to Replan? An Adaptive Replanning Strategy for Autonomous Navigation using Deep Reinforcement Learning. CoRR abs/2304.12046 (2023)
- [i15] Hikaru Asano, Ryo Yonetani, Mai Nishimura, Tadashi Kozuno: Counterfactual Fairness Filter for Fair-Delay Multi-Robot Navigation. CoRR abs/2305.11465 (2023)
- 2022
- [j8] Kazumi Kasaura, Mai Nishimura, Ryo Yonetani: Prioritized Safe Interval Path Planning for Multi-Agent Pathfinding With Continuous Time on 2D Roadmaps. IEEE Robotics Autom. Lett. 7(4): 10494-10501 (2022)
- [c35] Keisuke Okumura, Ryo Yonetani, Mai Nishimura, Asako Kanezaki: CTRMs: Learning to Construct Cooperative Timed Roadmaps for Multi-agent Path Planning in Continuous Spaces. AAMAS 2022: 972-981
- [i14] Keisuke Okumura, Ryo Yonetani, Mai Nishimura, Asako Kanezaki: CTRMs: Learning to Construct Cooperative Timed Roadmaps for Multi-agent Path Planning in Continuous Spaces. CoRR abs/2201.09467 (2022)
- 2021
- [j7] Hiroaki Minoura, Ryo Yonetani, Mai Nishimura, Yoshitaka Ushiku: Crowd Density Forecasting by Modeling Patch-Based Dynamics. IEEE Robotics Autom. Lett. 6(1): 287-294 (2021)
- [c34] Ryo Yonetani, Tatsunori Taniai, Mohammadamin Barekatain, Mai Nishimura, Asako Kanezaki: Path Planning using Neural A* Search. ICML 2021: 12029-12039
- [c33] Felix von Drigalski, Kennosuke Hayashi, Yifei Huang, Ryo Yonetani, Masashi Hamaya, Kazutoshi Tanaka, Yoshihisa Ijiri: Precise Multi-Modal In-Hand Pose Estimation using Low-Precision Sensors for Robotic Assembly. ICRA 2021: 968-974
- [c32] Kazutoshi Tanaka, Ryo Yonetani, Masashi Hamaya, Robert Lee, Felix von Drigalski, Yoshihisa Ijiri: TRANS-AM: Transfer Learning by Aggregating Dynamics Models for Soft Robotic Assembly. ICRA 2021: 4627-4633
- [c31] Kazutoshi Tanaka, Masashi Hamaya, Devwrat Joshi, Felix von Drigalski, Ryo Yonetani, Takamitsu Matsubara, Yoshihisa Ijiri: Learning Robotic Contact Juggling. IROS 2021: 958-964
- [i13] Toshinori Kitamura, Ryo Yonetani: ShinRL: A Library for Evaluating RL Algorithms from Theoretical and Practical Perspectives. CoRR abs/2112.04123 (2021)
- 2020
- [c30] Navyata Sanghvi, Ryo Yonetani, Kris Kitani: MGpi: A Computational Model of Multiagent Group Perception and Interaction. AAMAS 2020: 1196-1205
- [c29] Rie Kamikubo, Naoya Kato, Keita Higuchi, Ryo Yonetani, Yoichi Sato: Support Strategies for Remote Guides in Assisting People with Visual Impairments for Effective Indoor Navigation. CHI 2020: 1-12
- [c28] Naoya Yoshida, Takayuki Nishio, Masahiro Morikura, Koji Yamamoto, Ryo Yonetani: Hybrid-FL for Wireless Networks: Cooperative Learning Mechanism Using Non-IID Data. ICC 2020: 1-7
- [c27] Jiaxin Ma, Ryo Yonetani, Zahid Iqbal: Adaptive Distillation for Decentralized Learning from Heterogeneous Clients. ICPR 2020: 7486-7492
- [c26] Mohammadamin Barekatain, Ryo Yonetani, Masashi Hamaya: MULTIPOLAR: Multi-Source Policy Aggregation for Transfer Reinforcement Learning between Diverse Environmental Dynamics. IJCAI 2020: 3108-3116
- [c25] Mai Nishimura, Ryo Yonetani: L2B: Learning to Balance the Safety-Efficiency Trade-off in Interactive Crowd-aware Robot Navigation. IROS 2020: 11004-11010
- [i12] Mai Nishimura, Ryo Yonetani: L2B: Learning to Balance the Safety-Efficiency Trade-off in Interactive Crowd-aware Robot Navigation. CoRR abs/2003.09207 (2020)
- [i11] Jiaxin Ma, Ryo Yonetani, Zahid Iqbal: Adaptive Distillation for Decentralized Learning from Heterogeneous Clients. CoRR abs/2008.07948 (2020)
- [i10] Ryo Yonetani, Tatsunori Taniai, Mohammadamin Barekatain, Mai Nishimura, Asako Kanezaki: Path Planning using Neural A* Search. CoRR abs/2009.07476 (2020)
2010 – 2019
- 2019
- [c24] Takayuki Nishio, Ryo Yonetani: Client Selection for Federated Learning with Heterogeneous Resources in Mobile Edge. ICC 2019: 1-7
- [c23] Nathawan Charoenkulvanich, Rie Kamikubo, Ryo Yonetani, Yoichi Sato: Assisting group activity analysis through hand detection and identification in multiple egocentric videos. IUI 2019: 570-574
- [i9] Navyata Sanghvi, Ryo Yonetani, Kris M. Kitani: Modeling Social Group Communication with Multi-Agent Imitation Learning. CoRR abs/1903.01537 (2019)
- [i8] Naoya Yoshida, Takayuki Nishio, Masahiro Morikura, Koji Yamamoto, Ryo Yonetani: Hybrid-FL: Cooperative Learning Mechanism Using Non-IID Data in Wireless Networks. CoRR abs/1905.07210 (2019)
- [i7] Ryo Yonetani, Tomohiro Takahashi, Atsushi Hashimoto, Yoshitaka Ushiku: Decentralized Learning of Generative Adversarial Networks from Multi-Client Non-iid Data. CoRR abs/1905.09684 (2019)
- [i6] Mohammadamin Barekatain, Ryo Yonetani, Masashi Hamaya: MULTIPOLAR: Multi-Source Policy Aggregation for Transfer Reinforcement Learning between Diverse Environmental Dynamics. CoRR abs/1909.13111 (2019)
- [i5] Hiroaki Minoura, Ryo Yonetani, Mai Nishimura, Yoshitaka Ushiku: Crowd Density Forecasting by Modeling Patch-based Dynamics. CoRR abs/1911.09814 (2019)
- 2018
- [j6] Ryo Yonetani, Kris M. Kitani, Yoichi Sato: Ego-Surfing: Person Localization in First-Person Videos Using Ego-Motion Signatures. IEEE Trans. Pattern Anal. Mach. Intell. 40(11): 2749-2761 (2018)
- [c22] Seita Kayukawa, Keita Higuchi, Ryo Yonetani, Masanori Nakamura, Yoichi Sato, Shigeo Morishima: Dynamic Object Scanning: Object-Based Elastic Timeline for Quickly Browsing First-Person Videos. CHI Extended Abstracts 2018
- [c21] Seita Kayukawa, Keita Higuchi, Ryo Yonetani, Masanori Nakamura, Yoichi Sato, Shigeo Morishima: Dynamic Object Scanning: Object-Based Elastic Timeline for Quickly Browsing First-Person Videos. CHI Extended Abstracts 2018
- [c20] Takuma Yagi, Karttikeya Mangalam, Ryo Yonetani, Yoichi Sato: Future Person Localization in First-Person Videos. CVPR 2018: 7593-7602
- [c19] Yuki Sugita, Keita Higuchi, Ryo Yonetani, Rie Kamikubo, Yoichi Sato: Browsing Group First-Person Videos with 3D Visualization. ISS 2018: 55-60
- [c18] Rie Kamikubo, Keita Higuchi, Ryo Yonetani, Hideki Koike, Yoichi Sato: Exploring the Role of Tunnel Vision Simulation in the Design Cycle of Accessible Interfaces. W4A 2018: 13:1-13:10
- [i4] Takayuki Nishio, Ryo Yonetani: Client Selection for Federated Learning with Heterogeneous Resources in Mobile Edge. CoRR abs/1804.08333 (2018)
- 2017
- [c17] Rie Kamikubo, Keita Higuchi, Ryo Yonetani, Hideki Koike, Yoichi Sato: Rapid Prototyping of Accessible Interfaces With Gaze-Contingent Tunnel Vision Simulation. ASSETS 2017: 387-388
- [c16] Keita Higuchi, Ryo Yonetani, Yoichi Sato: EgoScanning: Quickly Scanning First-Person Videos with Egocentric Elastic Timelines. CHI 2017: 6536-6546
- [c15] Ryo Yonetani, Vishnu Naresh Boddeti, Kris M. Kitani, Yoichi Sato: Privacy-Preserving Visual Learning Using Doubly Permuted Homomorphic Encryption. ICCV 2017: 2059-2069
- [c14] Yifei Huang, Minjie Cai, Hiroshi Kera, Ryo Yonetani, Keita Higuchi, Yoichi Sato: Temporal Localization and Spatial Segmentation of Joint Attention in Multiple First-Person Videos. ICCV Workshops 2017: 2313-2321
- [c13] Keita Higuchi, Ryo Yonetani, Yoichi Sato: Egoscanning: quickly scanning first-person videos with egocentric elastic timelines. SIGGRAPH ASIA Emerging Technologies 2017: 5:1-5:2
- [i3] Ryo Yonetani, Vishnu Naresh Boddeti, Kris M. Kitani, Yoichi Sato: Privacy-Preserving Visual Learning Using Doubly Permuted Homomorphic Encryption. CoRR abs/1704.02203 (2017)
- [i2] Takuma Yagi, Karttikeya Mangalam, Ryo Yonetani, Yoichi Sato: Future Person Localization in First-Person Videos. CoRR abs/1711.11217 (2017)
- 2016
- [c12] Keita Higuchi, Ryo Yonetani, Yoichi Sato: Can Eye Help You?: Effects of Visualizing Eye Fixations on Remote Collaboration Scenarios for Physical Tasks. CHI 2016: 5180-5190
- [c11] Hiroshi Kera, Ryo Yonetani, Keita Higuchi, Yoichi Sato: Discovering Objects of Joint Attention via First-Person Sensing. CVPR Workshops 2016: 361-369
- [c10] Ryo Yonetani, Kris M. Kitani, Yoichi Sato: Recognizing Micro-Actions and Reactions from Paired Egocentric Videos. CVPR 2016: 2629-2638
- [c9] Ryo Yonetani, Kris Makoto Kitani, Yoichi Sato: Visual Motif Discovery via First-Person Vision. ECCV (2) 2016: 187-203
- [i1] Ryo Yonetani, Kris Makoto Kitani, Yoichi Sato: Ego-Surfing First-Person Videos. CoRR abs/1606.04637 (2016)
- 2015
- [c8] Ryo Yonetani, Kris Makoto Kitani, Yoichi Sato: Ego-surfing first person videos. CVPR 2015: 5445-5454
- 2013
- [b1] Ryo Yonetani: Modeling Spatiotemporal Correlations between Video Saliency and Gaze Dynamics. Kyoto University, Japan, 2013
- [j5] Akisato Kimura, Ryo Yonetani, Takatsugu Hirayama: Computational Models of Human Visual Attention and Their Implementations: A Survey. IEICE Trans. Inf. Syst. 96-D(3): 562-578 (2013)
- [j4] Ryo Yonetani, Hiroaki Kawashima, Takashi Matsuyama: Learning Spatiotemporal Gaps between Where We Look and What We Focus on. Inf. Media Technol. 8(4): 1066-1070 (2013)
- [j3] Ryo Yonetani, Hiroaki Kawashima, Takashi Matsuyama: Learning Spatiotemporal Gaps between Where We Look and What We Focus on. IPSJ Trans. Comput. Vis. Appl. 5: 75-79 (2013)
- [c7] Kei Shimonishi, Hiroaki Kawashima, Ryo Yonetani, Erina Ishikawa, Takashi Matsuyama: Learning aspects of interest from Gaze. GazeIn@ICMI 2013: 41-44
- [c6] Ryo Yonetani, Hiroaki Kawashima, Takashi Matsuyama: Predicting where we look from spatiotemporal gaps. ICMI 2013: 421-428
- 2012
- [j2] Ryo Yonetani, Hiroaki Kawashima, Takatsugu Hirayama, Takashi Matsuyama: Mental Focus Analysis Using the Spatio-temporal Correlation between Visual Saliency and Eye Movements. Inf. Media Technol. 7(1): 496-505 (2012)
- [j1] Ryo Yonetani, Hiroaki Kawashima, Takatsugu Hirayama, Takashi Matsuyama: Mental Focus Analysis Using the Spatio-temporal Correlation between Visual Saliency and Eye Movements. J. Inf. Process. 20(1): 267-276 (2012)
- [c5] Ryo Yonetani, Akisato Kimura, Hitoshi Sakano, Ken Fukuchi: Single Image Segmentation with Estimated Depth. BMVC 2012: 1-11
- [c4] Ryo Yonetani, Hiroaki Kawashima, Takashi Matsuyama: Multi-mode saliency dynamics model for analyzing gaze and attention. ETRA 2012: 115-122
- [c3] Erina Ishikawa, Ryo Yonetani, Hiroaki Kawashima, Takatsugu Hirayama, Takashi Matsuyama: Semantic interpretation of eye movements using designed structures of displayed contents. GazeIn@ICMI 2012: 17:1-17:3
- [c2] Ryo Yonetani: Modeling video viewing behaviors for viewer state estimation. ACM Multimedia 2012: 1393-1396
- 2010
- [c1] Ryo Yonetani, Hiroaki Kawashima, Takatsugu Hirayama, Takashi Matsuyama: Gaze Probing: Event-Based Estimation of Objects Being Focused On. ICPR 2010: 101-104
last updated on 2025-03-22 01:18 CET by the dblp team
all metadata released as open data under CC0 1.0 license