Roozbeh Mottaghi
Person information
- affiliation: AI2, Allen Institute for Artificial Intelligence, Seattle, USA
2020 – today
- 2024
- [c65] Mukul Khanna, Ram Ramrakhya, Gunjan Chhablani, Sriram Yenamandra, Théophile Gervet, Matthew Chang, Zsolt Kira, Devendra Singh Chaplot, Dhruv Batra, Roozbeh Mottaghi: GOAT-Bench: A Benchmark for Multi-Modal Lifelong Navigation. CVPR 2024: 16373-16383
- [c64] Jiaman Li, Alexander Clegg, Roozbeh Mottaghi, Jiajun Wu, Xavier Puig, C. Karen Liu: Controllable Human-Object Interaction Synthesis. ECCV (41) 2024: 54-72
- [c63] So Yeon Min, Xavier Puig, Devendra Singh Chaplot, Tsung-Yen Yang, Akshara Rai, Priyam Parashar, Ruslan Salakhutdinov, Yonatan Bisk, Roozbeh Mottaghi: Situated Instruction Following. ECCV (61) 2024: 202-228
- [c62] Homanga Bharadhwaj, Roozbeh Mottaghi, Abhinav Gupta, Shubham Tulsiani: Track2Act: Predicting Point Tracks from Internet Videos Enables Generalizable Robot Manipulation. ECCV (76) 2024: 306-324
- [c61] Xavier Puig, Eric Undersander, Andrew Szot, Mikael Dallaire Cote, Tsung-Yen Yang, Ruslan Partsey, Ruta Desai, Alexander Clegg, Michal Hlavac, So Yeon Min, Vladimir Vondrus, Théophile Gervet, Vincent-Pierre Berges, John M. Turner, Oleksandr Maksymets, Zsolt Kira, Mrinal Kalakrishnan, Jitendra Malik, Devendra Singh Chaplot, Unnat Jain, Dhruv Batra, Akshara Rai, Roozbeh Mottaghi: Habitat 3.0: A Co-Habitat for Humans, Avatars, and Robots. ICLR 2024
- [i62] Mukul Khanna, Ram Ramrakhya, Gunjan Chhablani, Sriram Yenamandra, Théophile Gervet, Matthew Chang, Zsolt Kira, Devendra Singh Chaplot, Dhruv Batra, Roozbeh Mottaghi: GOAT-Bench: A Benchmark for Multi-Modal Lifelong Navigation. CoRR abs/2404.06609 (2024)
- [i61] Homanga Bharadhwaj, Roozbeh Mottaghi, Abhinav Gupta, Shubham Tulsiani: Track2Act: Predicting Point Tracks from Internet Videos enables Diverse Zero-shot Robot Manipulation. CoRR abs/2405.01527 (2024)
- [i60] Sriram Yenamandra, Arun Ramachandran, Mukul Khanna, Karmesh Yadav, Jay Vakil, Andrew Melnik, Michael Büttner, Leon Harz, Lyon Brown, Gora Chand Nandi, Arjun P. S, Gaurav Kumar Yadav, Rahul Kala, Robert Haschke, Yang Luo, Jinxin Zhu, Yansen Han, Bingyi Lu, Xuan Gu, Qinyuan Liu, Yaping Zhao, Qiting Ye, Chenxiao Dou, Yansong Chua, Volodymyr Kuzma, Vladyslav Humennyy, Ruslan Partsey, Jonathan Francis, Devendra Singh Chaplot, Gunjan Chhablani, Alexander Clegg, Théophile Gervet, Vidhi Jain, Ram Ramrakhya, Andrew Szot, Austin S. Wang, Tsung-Yen Yang, Aaron Edsinger, Charles C. Kemp, Binit Shah, Zsolt Kira, Dhruv Batra, Roozbeh Mottaghi, Yonatan Bisk, Chris Paxton: Towards Open-World Mobile Manipulation in Homes: Lessons from the Neurips 2023 HomeRobot Open Vocabulary Mobile Manipulation Challenge. CoRR abs/2407.06939 (2024)
- [i59] So Yeon Min, Xavier Puig, Devendra Singh Chaplot, Tsung-Yen Yang, Akshara Rai, Priyam Parashar, Ruslan Salakhutdinov, Yonatan Bisk, Roozbeh Mottaghi: Situated Instruction Following. CoRR abs/2407.12061 (2024)
- [i58] Matthew Chang, Gunjan Chhablani, Alexander Clegg, Mikael Dallaire Cote, Ruta Desai, Michal Hlavac, Vladimir Karashchuk, Jacob Krantz, Roozbeh Mottaghi, Priyam Parashar, Siddharth Patki, Ishita Prasad, Xavier Puig, Akshara Rai, Ram Ramrakhya, Daniel Tran, Joanne Truong, John M. Turner, Eric Undersander, Tsung-Yen Yang: PARTNR: A Benchmark for Planning and Reasoning in Embodied Multi-agent Tasks. CoRR abs/2411.00081 (2024)
- 2023
- [c60] Sriram Yenamandra, Arun Ramachandran, Karmesh Yadav, Austin S. Wang, Mukul Khanna, Théophile Gervet, Tsung-Yen Yang, Vidhi Jain, Alexander Clegg, John M. Turner, Zsolt Kira, Manolis Savva, Angel X. Chang, Devendra Singh Chaplot, Dhruv Batra, Roozbeh Mottaghi, Yonatan Bisk, Chris Paxton: HomeRobot: Open-Vocabulary Mobile Manipulation. CoRL 2023: 1975-2011
- [c59] Vincent-Pierre Berges, Andrew Szot, Devendra Singh Chaplot, Aaron Gokaslan, Roozbeh Mottaghi, Dhruv Batra, Eric Undersander: Galactic: Scaling End-to-End Reinforcement Learning for Rearrangement at 100k Steps-Per-Second. CVPR 2023: 13767-13777
- [c58] Klemen Kotar, Aaron Walsman, Roozbeh Mottaghi: ENTL: Embodied Navigation Trajectory Learner. ICCV 2023: 10829-10838
- [c57] Jacob Krantz, Théophile Gervet, Karmesh Yadav, Austin S. Wang, Chris Paxton, Roozbeh Mottaghi, Dhruv Batra, Jitendra Malik, Stefan Lee, Devendra Singh Chaplot: Navigating to Objects Specified by Images. ICCV 2023: 10882-10891
- [c56] Jiasen Lu, Christopher Clark, Rowan Zellers, Roozbeh Mottaghi, Aniruddha Kembhavi: UNIFIED-IO: A Unified Model for Vision, Language, and Multi-modal Tasks. ICLR 2023
- [c55] Matthew Wallingford, Aditya Kusupati, Alex Fang, Vivek Ramanujan, Aniruddha Kembhavi, Roozbeh Mottaghi, Ali Farhadi: Neural Radiance Field Codebooks. ICLR 2023
- [c54] Kuo-Hao Zeng, Luca Weihs, Roozbeh Mottaghi, Ali Farhadi: Moving Forward by Moving Backward: Embedding Action Impact over Action Semantics. ICLR 2023
- [c53] Matthew Wallingford, Vivek Ramanujan, Alex Fang, Aditya Kusupati, Roozbeh Mottaghi, Aniruddha Kembhavi, Ludwig Schmidt, Ali Farhadi: Neural Priming for Sample-Efficient Adaptation. NeurIPS 2023
- [i57] Matthew Wallingford, Aditya Kusupati, Alex Fang, Vivek Ramanujan, Aniruddha Kembhavi, Roozbeh Mottaghi, Ali Farhadi: Neural Radiance Field Codebooks. CoRR abs/2301.04101 (2023)
- [i56] Jacob Krantz, Théophile Gervet, Karmesh Yadav, Austin S. Wang, Chris Paxton, Roozbeh Mottaghi, Dhruv Batra, Jitendra Malik, Stefan Lee, Devendra Singh Chaplot: Navigating to Objects Specified by Images. CoRR abs/2304.01192 (2023)
- [i55] Klemen Kotar, Aaron Walsman, Roozbeh Mottaghi: ENTL: Embodied Navigation Trajectory Learner. CoRR abs/2304.02639 (2023)
- [i54] Kuo-Hao Zeng, Luca Weihs, Roozbeh Mottaghi, Ali Farhadi: Moving Forward by Moving Backward: Embedding Action Impact over Action Semantics. CoRR abs/2304.12289 (2023)
- [i53] Vincent-Pierre Berges, Andrew Szot, Devendra Singh Chaplot, Aaron Gokaslan, Roozbeh Mottaghi, Dhruv Batra, Eric Undersander: Galactic: Scaling End-to-End Reinforcement Learning for Rearrangement at 100k Steps-Per-Second. CoRR abs/2306.07552 (2023)
- [i52] Matthew Wallingford, Vivek Ramanujan, Alex Fang, Aditya Kusupati, Roozbeh Mottaghi, Aniruddha Kembhavi, Ludwig Schmidt, Ali Farhadi: Neural Priming for Sample-Efficient Adaptation. CoRR abs/2306.10191 (2023)
- [i51] Sriram Yenamandra, Arun Ramachandran, Karmesh Yadav, Austin S. Wang, Mukul Khanna, Théophile Gervet, Tsung-Yen Yang, Vidhi Jain, Alexander William Clegg, John M. Turner, Zsolt Kira, Manolis Savva, Angel X. Chang, Devendra Singh Chaplot, Dhruv Batra, Roozbeh Mottaghi, Yonatan Bisk, Chris Paxton: HomeRobot: Open-Vocabulary Mobile Manipulation. CoRR abs/2306.11565 (2023)
- [i50] Xavier Puig, Eric Undersander, Andrew Szot, Mikael Dallaire Cote, Tsung-Yen Yang, Ruslan Partsey, Ruta Desai, Alexander William Clegg, Michal Hlavac, So Yeon Min, Vladimir Vondrus, Théophile Gervet, Vincent-Pierre Berges, John M. Turner, Oleksandr Maksymets, Zsolt Kira, Mrinal Kalakrishnan, Jitendra Malik, Devendra Singh Chaplot, Unnat Jain, Dhruv Batra, Akshara Rai, Roozbeh Mottaghi: Habitat 3.0: A Co-Habitat for Humans, Avatars and Robots. CoRR abs/2310.13724 (2023)
- [i49] Matthew Chang, Théophile Gervet, Mukul Khanna, Sriram Yenamandra, Dhruv Shah, So Yeon Min, Kavit Shah, Chris Paxton, Saurabh Gupta, Dhruv Batra, Roozbeh Mottaghi, Jitendra Malik, Devendra Singh Chaplot: GOAT: GO to Any Thing. CoRR abs/2311.06430 (2023)
- [i48] Jiaman Li, Alexander Clegg, Roozbeh Mottaghi, Jiajun Wu, Xavier Puig, C. Karen Liu: Controllable Human-Object Interaction Synthesis. CoRR abs/2312.03913 (2023)
- 2022
- [j5] Luca Weihs, Amanda Rose Yuile, Renée Baillargeon, Cynthia Fisher, Gary Marcus, Roozbeh Mottaghi, Aniruddha Kembhavi: Benchmarking Progress to Infant-Level Physical Reasoning in AI. Trans. Mach. Learn. Res. 2022 (2022)
- [c52] Jialin Wu, Jiasen Lu, Ashish Sabharwal, Roozbeh Mottaghi: Multi-Modal Answer Validation for Knowledge-Based VQA. AAAI 2022: 2712-2721
- [c51] Sam Powers, Eliot Xing, Eric Kolve, Roozbeh Mottaghi, Abhinav Gupta: CORA: Benchmarks, Baselines, and Metrics as a Platform for Continual Reinforcement Learning Agents. CoLLAs 2022: 705-743
- [c50] Kshitij Dwivedi, Gemma Roig, Aniruddha Kembhavi, Roozbeh Mottaghi: What do navigation agents learn about their environment? CVPR 2022: 10266-10275
- [c49] Apoorv Khandelwal, Luca Weihs, Roozbeh Mottaghi, Aniruddha Kembhavi: Simple but Effective: CLIP Embeddings for Embodied AI. CVPR 2022: 14809-14818
- [c48] Samir Yitzhak Gadre, Kiana Ehsani, Shuran Song, Roozbeh Mottaghi: Continuous Scene Representations for Embodied AI. CVPR 2022: 14829-14839
- [c47] Klemen Kotar, Roozbeh Mottaghi: Interactron: Embodied Adaptive Object Detection. CVPR 2022: 14840-14849
- [c46] Dustin Schwenk, Apoorv Khandelwal, Christopher Clark, Kenneth Marino, Roozbeh Mottaghi: A-OKVQA: A Benchmark for Visual Question Answering Using World Knowledge. ECCV (8) 2022: 146-162
- [c45] Kiana Ehsani, Ali Farhadi, Aniruddha Kembhavi, Roozbeh Mottaghi: Object Manipulation via Visual Target Localization. ECCV (39) 2022: 321-337
- [c44] Matt Deitke, Eli VanderBilt, Alvaro Herrasti, Luca Weihs, Kiana Ehsani, Jordi Salvador, Winson Han, Eric Kolve, Aniruddha Kembhavi, Roozbeh Mottaghi: 🏘️ ProcTHOR: Large-Scale Embodied AI Using Procedural Generation. NeurIPS 2022
- [c43] Kunal Pratap Singh, Luca Weihs, Alvaro Herrasti, Jonghyun Choi, Aniruddha Kembhavi, Roozbeh Mottaghi: Ask4Help: Learning to Leverage an Expert for Embodied Tasks. NeurIPS 2022
- [i47] Klemen Kotar, Roozbeh Mottaghi: Interactron: Embodied Adaptive Object Detection. CoRR abs/2202.00660 (2022)
- [i46] Jiasen Lu, Jordi Salvador, Roozbeh Mottaghi, Aniruddha Kembhavi: ASC me to Do Anything: Multi-task Training for Embodied AI. CoRR abs/2202.06987 (2022)
- [i45] Kiana Ehsani, Ali Farhadi, Aniruddha Kembhavi, Roozbeh Mottaghi: Object Manipulation via Visual Target Localization. CoRR abs/2203.08141 (2022)
- [i44] Samir Yitzhak Gadre, Kiana Ehsani, Shuran Song, Roozbeh Mottaghi: Continuous Scene Representations for Embodied AI. CoRR abs/2203.17251 (2022)
- [i43] Dustin Schwenk, Apoorv Khandelwal, Christopher Clark, Kenneth Marino, Roozbeh Mottaghi: A-OKVQA: A Benchmark for Visual Question Answering using World Knowledge. CoRR abs/2206.01718 (2022)
- [i42] Matt Deitke, Eli VanderBilt, Alvaro Herrasti, Luca Weihs, Jordi Salvador, Kiana Ehsani, Winson Han, Eric Kolve, Ali Farhadi, Aniruddha Kembhavi, Roozbeh Mottaghi: ProcTHOR: Large-Scale Embodied AI Using Procedural Generation. CoRR abs/2206.06994 (2022)
- [i41] Kshitij Dwivedi, Gemma Roig, Aniruddha Kembhavi, Roozbeh Mottaghi: What do navigation agents learn about their environment? CoRR abs/2206.08500 (2022)
- [i40] Jiasen Lu, Christopher Clark, Rowan Zellers, Roozbeh Mottaghi, Aniruddha Kembhavi: Unified-IO: A Unified Model for Vision, Language, and Multi-Modal Tasks. CoRR abs/2206.08916 (2022)
- [i39] Matt Deitke, Dhruv Batra, Yonatan Bisk, Tommaso Campari, Angel X. Chang, Devendra Singh Chaplot, Changan Chen, Claudia Pérez-D'Arpino, Kiana Ehsani, Ali Farhadi, Li Fei-Fei, Anthony G. Francis, Chuang Gan, Kristen Grauman, David Hall, Winson Han, Unnat Jain, Aniruddha Kembhavi, Jacob Krantz, Stefan Lee, Chengshu Li, Sagnik Majumder, Oleksandr Maksymets, Roberto Martín-Martín, Roozbeh Mottaghi, Sonia Raychaudhuri, Mike Roberts, Silvio Savarese, Manolis Savva, Mohit Shridhar, Niko Sünderhauf, Andrew Szot, Ben Talbot, Joshua B. Tenenbaum, Jesse Thomason, Alexander Toshev, Joanne Truong, Luca Weihs, Jiajun Wu: Retrospectives on the Embodied AI Workshop. CoRR abs/2210.06849 (2022)
- [i38] Kunal Pratap Singh, Luca Weihs, Alvaro Herrasti, Jonghyun Choi, Aniruddha Kembhavi, Roozbeh Mottaghi: Ask4Help: Learning to Leverage an Expert for Embodied Tasks. CoRR abs/2211.09960 (2022)
- 2021
- [c42] Rowan Zellers, Ari Holtzman, Matthew E. Peters, Roozbeh Mottaghi, Aniruddha Kembhavi, Ali Farhadi, Yejin Choi: PIGLeT: Language Grounding Through Neuro-Symbolic Interaction in a 3D World. ACL/IJCNLP (1) 2021: 2040-2050
- [c41] Kiana Ehsani, Winson Han, Alvaro Herrasti, Eli VanderBilt, Luca Weihs, Eric Kolve, Aniruddha Kembhavi, Roozbeh Mottaghi: ManipulaTHOR: A Framework for Visual Object Manipulation. CVPR 2021: 4497-4506
- [c40] Luca Weihs, Matt Deitke, Aniruddha Kembhavi, Roozbeh Mottaghi: Visual Room Rearrangement. CVPR 2021: 5922-5931
- [c39] Kuo-Hao Zeng, Luca Weihs, Ali Farhadi, Roozbeh Mottaghi: Pushing It Out of the Way: Interactive Visual Navigation. CVPR 2021: 9868-9877
- [c38] Kunal Pratap Singh, Suvaansh Bhambri, Byeonghwi Kim, Roozbeh Mottaghi, Jonghyun Choi: Factorizing Perception and Policy for Interactive Instruction Following. ICCV 2021: 1868-1877
- [c37] Klemen Kotar, Gabriel Ilharco, Ludwig Schmidt, Kiana Ehsani, Roozbeh Mottaghi: Contrasting Contrastive Self-Supervised Representation Learning Pipelines. ICCV 2021: 9929-9939
- [c36] Prithvijit Chattopadhyay, Judy Hoffman, Roozbeh Mottaghi, Aniruddha Kembhavi: RobustNav: Towards Benchmarking Robustness in Embodied Navigation. ICCV 2021: 15671-15680
- [c35] Kiana Ehsani, Daniel Gordon, Thomas Hai Dang Nguyen, Roozbeh Mottaghi, Ali Farhadi: What Can You Learn From Your Muscles? Learning Visual Representation from Human Interactions. ICLR 2021
- [c34] Luca Weihs, Aniruddha Kembhavi, Kiana Ehsani, Sarah M. Pratt, Winson Han, Alvaro Herrasti, Eric Kolve, Dustin Schwenk, Roozbeh Mottaghi, Ali Farhadi: Learning Generalizable Visual Representations via Interactive Gameplay. ICLR 2021
- [c33] Peng Gao, Jiasen Lu, Hongsheng Li, Roozbeh Mottaghi, Aniruddha Kembhavi: Container: Context Aggregation Networks. NeurIPS 2021: 19160-19171
- [i37] Jialin Wu, Jiasen Lu, Ashish Sabharwal, Roozbeh Mottaghi: Multi-Modal Answer Validation for Knowledge-Based VQA. CoRR abs/2103.12248 (2021)
- [i36] Klemen Kotar, Gabriel Ilharco, Ludwig Schmidt, Kiana Ehsani, Roozbeh Mottaghi: Contrasting Contrastive Self-Supervised Representation Learning Models. CoRR abs/2103.14005 (2021)
- [i35] Luca Weihs, Matt Deitke, Aniruddha Kembhavi, Roozbeh Mottaghi: Visual Room Rearrangement. CoRR abs/2103.16544 (2021)
- [i34] Kiana Ehsani, Winson Han, Alvaro Herrasti, Eli VanderBilt, Luca Weihs, Eric Kolve, Aniruddha Kembhavi, Roozbeh Mottaghi: ManipulaTHOR: A Framework for Visual Object Manipulation. CoRR abs/2104.11213 (2021)
- [i33] Kuo-Hao Zeng, Luca Weihs, Ali Farhadi, Roozbeh Mottaghi: Pushing it out of the Way: Interactive Visual Navigation. CoRR abs/2104.14040 (2021)
- [i32] Rowan Zellers, Ari Holtzman, Matthew E. Peters, Roozbeh Mottaghi, Aniruddha Kembhavi, Ali Farhadi, Yejin Choi: PIGLeT: Language Grounding Through Neuro-Symbolic Interaction in a 3D World. CoRR abs/2106.00188 (2021)
- [i31] Peng Gao, Jiasen Lu, Hongsheng Li, Roozbeh Mottaghi, Aniruddha Kembhavi: Container: Context Aggregation Network. CoRR abs/2106.01401 (2021)
- [i30] Prithvijit Chattopadhyay, Judy Hoffman, Roozbeh Mottaghi, Aniruddha Kembhavi: RobustNav: Towards Benchmarking Robustness in Embodied Navigation. CoRR abs/2106.04531 (2021)
- [i29] Sam Powers, Eliot Xing, Eric Kolve, Roozbeh Mottaghi, Abhinav Gupta: CORA: Benchmarks, Baselines, and Metrics as a Platform for Continual Reinforcement Learning Agents. CoRR abs/2110.10067 (2021)
- [i28] Apoorv Khandelwal, Luca Weihs, Roozbeh Mottaghi, Aniruddha Kembhavi: Simple but Effective: CLIP Embeddings for Embodied AI. CoRR abs/2111.09888 (2021)
- 2020
- [c32] Matt Deitke, Winson Han, Alvaro Herrasti, Aniruddha Kembhavi, Eric Kolve, Roozbeh Mottaghi, Jordi Salvador, Dustin Schwenk, Eli VanderBilt, Matthew Wallingford, Luca Weihs, Mark Yatskar, Ali Farhadi: RoboTHOR: An Open Simulation-to-Real Embodied AI Platform. CVPR 2020: 3161-3171
- [c31] Mohit Shridhar, Jesse Thomason, Daniel Gordon, Yonatan Bisk, Winson Han, Roozbeh Mottaghi, Luke Zettlemoyer, Dieter Fox: ALFRED: A Benchmark for Interpreting Grounded Instructions for Everyday Tasks. CVPR 2020: 10737-10746
- [c30] Kuo-Hao Zeng, Roozbeh Mottaghi, Luca Weihs, Ali Farhadi: Visual Reaction: Learning to Play Catch With Your Drone. CVPR 2020: 11570-11579
- [c29] Jae Sung Park, Chandra Bhagavatula, Roozbeh Mottaghi, Ali Farhadi, Yejin Choi: VisualCOMET: Reasoning About the Dynamic Context of a Still Image. ECCV (5) 2020: 508-524
- [c28] Martin Lohmann, Jordi Salvador, Aniruddha Kembhavi, Roozbeh Mottaghi: Learning About Objects by Learning to Interact with Them. NeurIPS 2020
- [i27] Matt Deitke, Winson Han, Alvaro Herrasti, Aniruddha Kembhavi, Eric Kolve, Roozbeh Mottaghi, Jordi Salvador, Dustin Schwenk, Eli VanderBilt, Matthew Wallingford, Luca Weihs, Mark Yatskar, Ali Farhadi: RoboTHOR: An Open Simulation-to-Real Embodied AI Platform. CoRR abs/2004.06799 (2020)
- [i26] Jae Sung Park, Chandra Bhagavatula, Roozbeh Mottaghi, Ali Farhadi, Yejin Choi: Visual Commonsense Graphs: Reasoning about the Dynamic Context of a Still Image. CoRR abs/2004.10796 (2020)
- [i25] Martin Lohmann, Jordi Salvador, Aniruddha Kembhavi, Roozbeh Mottaghi: Learning About Objects by Learning to Interact with Them. CoRR abs/2006.09306 (2020)
- [i24] Dhruv Batra, Aaron Gokaslan, Aniruddha Kembhavi, Oleksandr Maksymets, Roozbeh Mottaghi, Manolis Savva, Alexander Toshev, Erik Wijmans: ObjectNav Revisited: On Evaluation of Embodied Agents Navigating to Objects. CoRR abs/2006.13171 (2020)
- [i23] Luca Weihs, Jordi Salvador, Klemen Kotar, Unnat Jain, Kuo-Hao Zeng, Roozbeh Mottaghi, Aniruddha Kembhavi: AllenAct: A Framework for Embodied AI Research. CoRR abs/2008.12760 (2020)
- [i22] Kiana Ehsani, Daniel Gordon, Thomas Hai Dang Nguyen, Roozbeh Mottaghi, Ali Farhadi: What Can You Learn from Your Muscles? Learning Visual Representation from Human Interactions. CoRR abs/2010.08539 (2020)
- [i21] Dhruv Batra, Angel X. Chang, Sonia Chernova, Andrew J. Davison, Jia Deng, Vladlen Koltun, Sergey Levine, Jitendra Malik, Igor Mordatch, Roozbeh Mottaghi, Manolis Savva, Hao Su: Rearrangement: A Challenge for Embodied AI. CoRR abs/2011.01975 (2020)
- [i20] Kunal Pratap Singh, Suvaansh Bhambri, Byeonghwi Kim, Roozbeh Mottaghi, Jonghyun Choi: MOCA: A Modular Object-Centric Approach for Interactive Instruction Following. CoRR abs/2012.03208 (2020)
2010 – 2019
- 2019
- [j4] Guy Barash, Mauricio Castillo-Effen, Niyati Chhaya, Peter Clark, Huáscar Espinoza, Eitan Farchi, Christopher W. Geib, Odd Erik Gundersen, Seán Ó hÉigeartaigh, José Hernández-Orallo, Chiori Hori, Xiaowei Huang, Kokil Jaidka, Pavan Kapanipathi, Sarah Keren, Seokhwan Kim, Marc Lanctot, Danny Lange, Julian J. McAuley, David R. Martinez, Marwan Mattar, Mausam, Martin Michalowski, Reuth Mirsky, Roozbeh Mottaghi, Joseph C. Osborn, Julien Pérolat, Martin Schmid, Arash Shaban-Nejad, Onn Shehory, Biplav Srivastava, William W. Streilein, Kartik Talamadupula, Julian Togelius, Koichiro Yoshino, Quanshi Zhang, Imed Zitouni: Reports of the Workshops Held at the 2019 AAAI Conference on Artificial Intelligence. AI Mag. 40(3): 67-78 (2019)
- [c27] Kenneth Marino, Mohammad Rastegari, Ali Farhadi, Roozbeh Mottaghi: OK-VQA: A Visual Question Answering Benchmark Requiring External Knowledge. CVPR 2019: 3195-3204
- [c26] Mitchell Wortsman, Kiana Ehsani, Mohammad Rastegari, Ali Farhadi, Roozbeh Mottaghi: Learning to Learn How to Learn: Self-Adaptive Visual Navigation Using Meta-Learning. CVPR 2019: 6750-6759
- [c25] Wei Yang, Xiaolong Wang, Ali Farhadi, Abhinav Gupta, Roozbeh Mottaghi: Visual Semantic Navigation using Scene Priors. ICLR (Poster) 2019
- [i19] Marwan Mattar, Roozbeh Mottaghi, Julian Togelius, Danny Lange: AAAI-2019 Workshop on Games and Simulations for Artificial Intelligence. CoRR abs/1903.02172 (2019)
- [i18] Kenneth Marino, Mohammad Rastegari, Ali Farhadi, Roozbeh Mottaghi: OK-VQA: A Visual Question Answering Benchmark Requiring External Knowledge. CoRR abs/1906.00067 (2019)
- [i17] Mohit Shridhar, Jesse Thomason, Daniel Gordon, Yonatan Bisk, Winson Han, Roozbeh Mottaghi, Luke Zettlemoyer, Dieter Fox: ALFRED: A Benchmark for Interpreting Grounded Instructions for Everyday Tasks. CoRR abs/1912.01734 (2019)
- [i16] Kuo-Hao Zeng, Roozbeh Mottaghi, Luca Weihs, Ali Farhadi: Visual Reaction: Learning to Play Catch with Your Drone. CoRR abs/1912.02155 (2019)
- [i15] Luca Weihs, Aniruddha Kembhavi, Winson Han, Alvaro Herrasti, Eric Kolve, Dustin Schwenk, Roozbeh Mottaghi, Ali Farhadi: Artificial Agents Learn Flexible Visual Representations by Playing a Hiding Game. CoRR abs/1912.08195 (2019)
- 2018
- [c24] Kiana Ehsani, Hessam Bagherinezhad, Joseph Redmon, Roozbeh Mottaghi, Ali Farhadi: Who Let the Dogs Out? Modeling Dog Behavior From Visual Data. CVPR 2018: 4051-4060
- [c23] Kiana Ehsani, Roozbeh Mottaghi, Ali Farhadi: SeGAN: Segmenting and Generating the Invisible. CVPR 2018: 6144-6153
- [i14] Kiana Ehsani, Hessam Bagherinezhad, Joseph Redmon, Roozbeh Mottaghi, Ali Farhadi: Who Let The Dogs Out? Modeling Dog Behavior From Visual Data. CoRR abs/1803.10827 (2018)
- [i13] Peter Anderson, Angel X. Chang, Devendra Singh Chaplot, Alexey Dosovitskiy, Saurabh Gupta, Vladlen Koltun, Jana Kosecka, Jitendra Malik, Roozbeh Mottaghi, Manolis Savva, Amir R. Zamir: On Evaluation of Embodied Navigation Agents. CoRR abs/1807.06757 (2018)
- [i12] Wei Yang, Xiaolong Wang, Ali Farhadi, Abhinav Gupta, Roozbeh Mottaghi: Visual Semantic Navigation using Scene Priors. CoRR abs/1810.06543 (2018)
- [i11] Mitchell Wortsman, Kiana Ehsani, Mohammad Rastegari, Ali Farhadi, Roozbeh Mottaghi: Learning to Learn How to Learn: Self-Adaptive Visual Navigation Using Meta-Learning. CoRR abs/1812.00971 (2018)
- 2017
- [c22] Yuke Zhu, Daniel Gordon, Eric Kolve, Dieter Fox, Li Fei-Fei, Abhinav Gupta, Roozbeh Mottaghi, Ali Farhadi: Visual Semantic Planning Using Deep Successor Representations. ICCV 2017: 483-492
- [c21] Roozbeh Mottaghi, Connor Schenck, Dieter Fox, Ali Farhadi: See the Glass Half Full: Reasoning About Liquid Containers, Their Volume and Content. ICCV 2017: 1889-1898
- [c20] Yuke Zhu, Roozbeh Mottaghi, Eric Kolve, Joseph J. Lim, Abhinav Gupta, Li Fei-Fei, Ali Farhadi: Target-driven visual navigation in indoor scenes using deep reinforcement learning. ICRA 2017: 3357-3364
- [i10] Roozbeh Mottaghi, Connor Schenck, Dieter Fox, Ali Farhadi: See the Glass Half Full: Reasoning about Liquid Containers, their Volume and Content. CoRR abs/1701.02718 (2017)
- [i9] Kiana Ehsani, Roozbeh Mottaghi, Ali Farhadi: SeGAN: Segmenting and Generating the Invisible. CoRR abs/1703.10239 (2017)
- [i8] Yuke Zhu, Daniel Gordon, Eric Kolve, Dieter Fox, Li Fei-Fei, Abhinav Gupta, Roozbeh Mottaghi, Ali Farhadi: Visual Semantic Planning using Deep Successor Representations. CoRR abs/1705.08080 (2017)
- [i7] Eric Kolve, Roozbeh Mottaghi, Daniel Gordon, Yuke Zhu, Abhinav Gupta, Ali Farhadi: AI2-THOR: An Interactive 3D Environment for Visual AI. CoRR abs/1712.05474 (2017)
- 2016
- [j3] Alan L. Yuille, Roozbeh Mottaghi: Complexity of Representation and Inference in Compositional Models with Part Sharing. J. Mach. Learn. Res. 17: 11:1-11:28 (2016)
- [j2] Roozbeh Mottaghi, Sanja Fidler, Alan L. Yuille, Raquel Urtasun, Devi Parikh: Human-Machine CRFs for Identifying Bottlenecks in Scene Understanding. IEEE Trans. Pattern Anal. Mach. Intell. 38(1): 74-87 (2016)
- [c19] Roozbeh Mottaghi, Hannaneh Hajishirzi, Ali Farhadi: A Task-Oriented Approach for Cost-Sensitive Recognition. CVPR 2016: 2203-2211
- [c18] Roozbeh Mottaghi, Hessam Bagherinezhad, Mohammad Rastegari, Ali Farhadi: Newtonian Image Understanding: Unfolding the Dynamics of Objects in Static Images. CVPR 2016: 3521-3529
- [c17] Yu Xiang, Wonhui Kim, Wei Chen, Jingwei Ji, Christopher B. Choy, Hao Su, Roozbeh Mottaghi, Leonidas J. Guibas, Silvio Savarese: ObjectNet3D: A Large Scale Database for 3D Object Recognition. ECCV (8) 2016: 160-176
- [c16] Roozbeh Mottaghi, Mohammad Rastegari, Abhinav Gupta, Ali Farhadi: "What Happens If..." Learning to Predict the Effect of Forces in Images. ECCV (4) 2016: 269-285
- [i6] Roozbeh Mottaghi, Mohammad Rastegari, Abhinav Gupta, Ali Farhadi: "What happens if..." Learning to Predict the Effect of Forces in Images. CoRR abs/1603.05600 (2016)
- [i5] Yuke Zhu, Roozbeh Mottaghi, Eric Kolve, Joseph J. Lim, Abhinav Gupta, Li Fei-Fei, Ali Farhadi: Target-driven Visual Navigation in Indoor Scenes using Deep Reinforcement Learning. CoRR abs/1609.05143 (2016)
- 2015
- [c15] Roozbeh Mottaghi, Yu Xiang, Silvio Savarese: A coarse-to-fine model for 3D pose estimation and sub-category recognition. CVPR 2015: 418-426
- [i4] Roozbeh Mottaghi, Yu Xiang, Silvio Savarese: A Coarse-to-Fine Model for 3D Pose Estimation and Sub-category Recognition. CoRR abs/1504.02764 (2015)
- [i3] Roozbeh Mottaghi, Hessam Bagherinezhad, Mohammad Rastegari, Ali Farhadi: Newtonian Image Understanding: Unfolding the Dynamics of Objects in Static Images. CoRR abs/1511.04048 (2015)
- 2014
- [c14] Roozbeh Mottaghi, Xianjie Chen, Xiaobai Liu, Nam-Gyu Cho, Seong-Whan Lee, Sanja Fidler, Raquel Urtasun, Alan L. Yuille: The Role of Context for Object Detection and Semantic Segmentation in the Wild. CVPR 2014: 891-898
- [c13] Xianjie Chen, Roozbeh Mottaghi, Xiaobai Liu, Sanja Fidler, Raquel Urtasun, Alan L. Yuille: Detect What You Can: Detecting and Representing Objects Using Holistic Models and Body Parts. CVPR 2014: 1979-1986
- [c12] Yu Xiang, Changkyu Song, Roozbeh Mottaghi, Silvio Savarese: Monocular Multiview Object Tracking with 3D Aspect Parts. ECCV (6) 2014: 220-235
- [c11] Yu Xiang, Roozbeh Mottaghi, Silvio Savarese: Beyond PASCAL: A benchmark for 3D object detection in the wild. WACV 2014: 75-82
- [i2] Xianjie Chen, Roozbeh Mottaghi, Xiaobai Liu, Sanja Fidler, Raquel Urtasun, Alan L. Yuille: Detect What You Can: Detecting and Representing Objects using Holistic Models and Body Parts. CoRR abs/1406.2031 (2014)
- [i1] Roozbeh Mottaghi, Sanja Fidler, Alan L. Yuille, Raquel Urtasun, Devi Parikh: Human-Machine CRFs for Identifying Bottlenecks in Holistic Scene Understanding. CoRR abs/1406.3906 (2014)
- 2013
- [b1] Roozbeh Mottaghi: Towards Scene Understanding: Object Detection, Segmentation, and Contextual Reasoning. University of California, Los Angeles, USA, 2013
- [c10] Roozbeh Mottaghi, Sanja Fidler, Jian Yao, Raquel Urtasun, Devi Parikh: Analyzing Semantic Segmentation Using Hybrid Human-Machine CRFs. CVPR 2013: 3143-3150
- [c9] Sanja Fidler, Roozbeh Mottaghi, Alan L. Yuille, Raquel Urtasun: Bottom-Up Segmentation for Top-Down Detection. CVPR 2013: 3294-3301
- [c8] Alan L. Yuille, Roozbeh Mottaghi: Complexity of Representation and Inference in Compositional Models with Part Sharing. ICLR 2013
- 2012
- [c7] Roozbeh Mottaghi: Augmenting deformable part models with irregular-shaped object patches. CVPR 2012: 3116-3123
- 2011
- [c6] Roozbeh Mottaghi, Ananth Ranganathan, Alan L. Yuille: A compositional approach to learning part-based models of objects. ICCV Workshops 2011: 561-568
2000 – 2009
- 2009
- [c5] Jinhan Lee, Roozbeh Mottaghi, Charles Pippin, Tucker R. Balch: Graph-based planning using local information for unknown outdoor environments. ICRA 2009: 1455-1460
- 2008
- [c4] Roozbeh Mottaghi, Michael Kaess, Ananth Ranganathan, Richard Roberts, Frank Dellaert: Place recognition-based fixed-lag smoothing for environments with unreliable GPS. ICRA 2008: 1862-1867
- 2007
- [j1] Roozbeh Mottaghi, Richard T. Vaughan: An integrated particle filter and potential field method applied to cooperative multi-robot target tracking. Auton. Robots 23(1): 19-35 (2007)
- 2006
- [c3] Roozbeh Mottaghi, Richard T. Vaughan: An Integrated Particle Filter and Potential Field Method for Cooperative Robot Target Tracking. ICRA 2006: 1342-1347
- 2005
- [c2] Roozbeh Mottaghi, Shahram Payandeh: Coordination of Multiple Agents for Probabilistic Object Tracking. CRV 2005: 162-167
- 2001
- [c1] Mohammad Taghi Manzuri, Hamid Reza Chitsaz, Reza Ghorbani, Pooya Karimian, Alireza Mirazi, Mehran Motamed, Roozbeh Mottaghi, Payam Sabzmeydani: Sharif CESR Small Size Robocup Team. RoboCup 2001: 595-598