


Martin A. Riedmiller
Person information
- affiliation: DeepMind, London, UK
- affiliation (former): Albert Ludwigs University Freiburg, Machine Learning Lab, Germany
2020 – today
- 2024
- [j24]Konstantinos Bousmalis, Giulia Vezzani, Dushyant Rao, Coline Manon Devin, Alex X. Lee, Maria Bauzá Villalonga, Todor Davchev, Yuxiang Zhou, Agrim Gupta, Akhil Raju, Antoine Laurens, Claudio Fantacci, Valentin Dalibard, Martina Zambelli, Murilo Fernandes Martins, Rugile Pevceviciute, Michiel Blokzijl, Misha Denil, Nathan Batchelor, Thomas Lampe, Emilio Parisotto, Konrad Zolna, Scott E. Reed, Sergio Gómez Colmenarejo, Jon Scholz, Abbas Abdolmaleki, Oliver Groth, Jean-Baptiste Regli, Oleg Sushkov, Thomas Rothörl, José Enrique Chen, Yusuf Aytar, Dave Barker, Joy Ortiz, Martin A. Riedmiller, Jost Tobias Springenberg, Raia Hadsell, Francesco Nori, Nicolas Heess:
RoboCat: A Self-Improving Generalist Agent for Robotic Manipulation. Trans. Mach. Learn. Res. 2024 (2024) - [c108]Dhruva Tirumala, Markus Wulfmeier, Ben Moran, Sandy H. Huang, Jan Humplik, Guy Lever, Tuomas Haarnoja, Leonard Hasenclever, Arunkumar Byravan, Nathan Batchelor, Neil Sreendra, Kushal Patel, Marlon Gwira, Francesco Nori, Martin A. Riedmiller, Nicolas Heess:
Learning Robot Soccer from Egocentric Vision with Deep Reinforcement Learning. CoRL 2024: 165-184 - [c107]Dhruva Tirumala, Thomas Lampe, José Enrique Chen, Tuomas Haarnoja, Sandy H. Huang, Guy Lever, Ben Moran, Tim Hertweck, Leonard Hasenclever, Martin A. Riedmiller, Nicolas Heess, Markus Wulfmeier:
Replay across Experiments: A Natural Extension of Off-Policy RL. ICLR 2024 - [c106]Jost Tobias Springenberg, Abbas Abdolmaleki, Jingwei Zhang, Oliver Groth, Michael Bloesch, Thomas Lampe, Philemon Brakel, Sarah Bechtle, Steven Kapturowski, Roland Hafner, Nicolas Heess, Martin A. Riedmiller:
Offline Actor-Critic Reinforcement Learning Scales to Large Models. ICML 2024 - [c105]Thomas Lampe, Abbas Abdolmaleki, Sarah Bechtle, Sandy H. Huang, Jost Tobias Springenberg, Michael Bloesch, Oliver Groth, Roland Hafner, Tim Hertweck, Michael Neunert, Markus Wulfmeier, Jingwei Zhang, Francesco Nori, Nicolas Heess, Martin A. Riedmiller:
Mastering Stacking of Diverse Shapes with Large-Scale Iterative Reinforcement Learning on Real Robots. ICRA 2024: 7772-7779 - [c104]Mohak Bhardwaj, Thomas Lampe, Michael Neunert, Francesco Romano, Abbas Abdolmaleki, Arunkumar Byravan, Markus Wulfmeier, Martin A. Riedmiller, Jonas Buchli:
Real-world fluid directed rigid body control via deep reinforcement learning. L4DC 2024: 414-427 - [c103]Markus Wulfmeier, Michael Bloesch, Nino Vieillard, Arun Ahuja, Jorg Bornschein, Sandy H. Huang, Artem Sokolov, Matt Barnes, Guillaume Desjardins, Alex Bewley, Sarah Bechtle, Jost Tobias Springenberg, Nikola Momchev, Olivier Bachem, Matthieu Geist, Martin A. Riedmiller:
Imitating Language via Scalable Inverse Reinforcement Learning. NeurIPS 2024 - [i63]Jost Tobias Springenberg, Abbas Abdolmaleki, Jingwei Zhang, Oliver Groth, Michael Bloesch, Thomas Lampe, Philemon Brakel, Sarah Bechtle, Steven Kapturowski, Roland Hafner, Nicolas Heess, Martin A. Riedmiller:
Offline Actor-Critic Reinforcement Learning Scales to Large Models. CoRR abs/2402.05546 (2024) - [i62]Mohak Bhardwaj, Thomas Lampe, Michael Neunert, Francesco Romano, Abbas Abdolmaleki, Arunkumar Byravan, Markus Wulfmeier, Martin A. Riedmiller, Jonas Buchli:
Real-World Fluid Directed Rigid Body Control via Deep Reinforcement Learning. CoRR abs/2402.06102 (2024) - [i61]Dhruva Tirumala, Markus Wulfmeier, Ben Moran, Sandy H. Huang, Jan Humplik, Guy Lever, Tuomas Haarnoja, Leonard Hasenclever, Arunkumar Byravan, Nathan Batchelor, Neil Sreendra, Kushal Patel, Marlon Gwira, Francesco Nori, Martin A. Riedmiller, Nicolas Heess:
Learning Robot Soccer from Egocentric Vision with Deep Reinforcement Learning. CoRR abs/2405.02425 (2024) - [i60]Markus Wulfmeier, Michael Bloesch, Nino Vieillard, Arun Ahuja, Jorg Bornschein, Sandy H. Huang, Artem Sokolov, Matt Barnes, Guillaume Desjardins, Alex Bewley, Sarah Maria Elisabeth Bechtle, Jost Tobias Springenberg, Nikola Momchev, Olivier Bachem, Matthieu Geist, Martin A. Riedmiller:
Imitating Language via Scalable Inverse Reinforcement Learning. CoRR abs/2409.01369 (2024) - [i59]Jingwei Zhang, Thomas Lampe, Abbas Abdolmaleki, Jost Tobias Springenberg, Martin A. Riedmiller:
Game On: Towards Language Models as RL Experimenters. CoRR abs/2409.03402 (2024) - [i58]Maria Bauzá, José Enrique Chen, Valentin Dalibard, Nimrod Gileadi, Roland Hafner, Murilo F. Martins, Joss Moore, Rugile Pevceviciute, Antoine Laurens, Dushyant Rao, Martina Zambelli, Martin A. Riedmiller, Jon Scholz, Konstantinos Bousmalis, Francesco Nori, Nicolas Heess:
DemoStart: Demonstration-led auto-curriculum applied to sim-to-real with multi-fingered robots. CoRR abs/2409.06613 (2024) - [i57]Abbas Abdolmaleki, Bilal Piot, Bobak Shahriari, Jost Tobias Springenberg, Tim Hertweck, Rishabh Joshi, Junhyuk Oh, Michael Bloesch, Thomas Lampe, Nicolas Heess, Jonas Buchli, Martin A. Riedmiller:
Preference Optimization as Probabilistic Inference. CoRR abs/2410.04166 (2024) - 2023
- [j23]Daniel J. Mankowitz, Andrea Michi, Anton Zhernov, Marco Gelmi, Marco Selvi, Cosmin Paduraru, Edouard Leurent, Shariq Iqbal, Jean-Baptiste Lespiau, Alex Ahern, Thomas Köppe, Kevin Millikin, Stephen Gaffney, Sophie Elster, Jackson Broshear, Chris Gamble, Kieran Milan, Robert Tung, Minjae Hwang, A. Taylan Cemgil, Mohammadamin Barekatain, Yujia Li, Amol Mandhane, Thomas Hubert, Julian Schrittwieser, Demis Hassabis, Pushmeet Kohli, Martin A. Riedmiller, Oriol Vinyals, David Silver:
Faster sorting algorithms discovered using deep reinforcement learning. Nat. 618(7964): 257-263 (2023) - [j22]Giulia Vezzani, Dhruva Tirumala, Markus Wulfmeier, Dushyant Rao, Abbas Abdolmaleki, Ben Moran, Tuomas Haarnoja, Jan Humplik, Roland Hafner, Michael Neunert, Claudio Fantacci, Tim Hertweck, Thomas Lampe, Fereshteh Sadeghi, Nicolas Heess, Martin A. Riedmiller:
SkillS: Adaptive Skill Sequencing for Efficient Temporally-Extended Exploration. Trans. Mach. Learn. Res. 2023 (2023) - [c102]Tim Seyde, Peter Werner, Wilko Schwarting, Igor Gilitschenski, Martin A. Riedmiller, Daniela Rus, Markus Wulfmeier:
Solving Continuous Control via Q-learning. ICLR 2023 - [i56]Jingwei Zhang, Jost Tobias Springenberg, Arunkumar Byravan, Leonard Hasenclever, Abbas Abdolmaleki, Dushyant Rao, Nicolas Heess, Martin A. Riedmiller:
Leveraging Jumpy Models for Planning and Fast Learning in Robotic Domains. CoRR abs/2302.12617 (2023) - [i55]Ingmar Schubert, Jingwei Zhang, Jake Bruce, Sarah Bechtle, Emilio Parisotto, Martin A. Riedmiller, Jost Tobias Springenberg, Arunkumar Byravan, Leonard Hasenclever, Nicolas Heess:
A Generalist Dynamics Model for Control. CoRR abs/2305.10912 (2023) - [i54]Konstantinos Bousmalis, Giulia Vezzani, Dushyant Rao, Coline Devin, Alex X. Lee, Maria Bauzá, Todor Davchev, Yuxiang Zhou, Agrim Gupta, Akhil Raju, Antoine Laurens, Claudio Fantacci, Valentin Dalibard, Martina Zambelli, Murilo F. Martins, Rugile Pevceviciute, Michiel Blokzijl, Misha Denil, Nathan Batchelor, Thomas Lampe, Emilio Parisotto, Konrad Zolna, Scott E. Reed, Sergio Gómez Colmenarejo, Jon Scholz, Abbas Abdolmaleki, Oliver Groth, Jean-Baptiste Regli, Oleg Sushkov, Thomas Rothörl, José Enrique Chen, Yusuf Aytar, Dave Barker, Joy Ortiz, Martin A. Riedmiller, Jost Tobias Springenberg, Raia Hadsell, Francesco Nori, Nicolas Heess:
RoboCat: A Self-Improving Foundation Agent for Robotic Manipulation. CoRR abs/2306.11706 (2023) - [i53]Norman Di Palo, Arunkumar Byravan, Leonard Hasenclever, Markus Wulfmeier, Nicolas Heess, Martin A. Riedmiller:
Towards A Unified Agent with Foundation Models. CoRR abs/2307.09668 (2023) - [i52]Brendan D. Tracey, Andrea Michi, Yuri Chervonyi, Ian Davies, Cosmin Paduraru, Nevena Lazic, Federico Felici, Timo Ewalds, Craig Donner, Cristian Galperti, Jonas Buchli, Michael Neunert, Andrea Huber, Jonathan Evens, Paula Kurylowicz, Daniel J. Mankowitz, Martin A. Riedmiller, The TCV Team:
Towards practical reinforcement learning for tokamak magnetic control. CoRR abs/2307.11546 (2023) - [i51]Nico Gürtler, Felix Widmaier, Cansu Sancaktar, Sebastian Blaes, Pavel Kolev, Stefan Bauer, Manuel Wüthrich, Markus Wulfmeier, Martin A. Riedmiller, Arthur Allshire, Qiang Wang, Robert McCarthy, Hangyeol Kim, Jongchan Baek, Wookyong Kwon, Shanliang Qian, Yasunori Toshimitsu, Mike Yan Michelis, Amirhossein Kazemipour, Arman Raayatsanati, Hehui Zheng, Barnabas Gavin Cangan, Bernhard Schölkopf, Georg Martius:
Real Robot Challenge 2022: Learning Dexterous Manipulation from Offline Data in the Real World. CoRR abs/2308.07741 (2023) - [i50]Shruti Mishra, Ankit Anand, Jordan Hoffmann, Nicolas Heess, Martin A. Riedmiller, Abbas Abdolmaleki, Doina Precup:
Policy composition in reinforcement learning via multi-objective policy optimization. CoRR abs/2308.15470 (2023) - [i49]Cristina Pinneri, Sarah Bechtle, Markus Wulfmeier, Arunkumar Byravan, Jingwei Zhang, William F. Whitney, Martin A. Riedmiller:
Equivariant Data Augmentation for Generalization in Offline Reinforcement Learning. CoRR abs/2309.07578 (2023) - [i48]Dhruva Tirumala, Thomas Lampe, José Enrique Chen, Tuomas Haarnoja, Sandy H. Huang, Guy Lever, Ben Moran, Tim Hertweck, Leonard Hasenclever, Martin A. Riedmiller, Nicolas Heess, Markus Wulfmeier:
Replay across Experiments: A Natural Extension of Off-Policy RL. CoRR abs/2311.15951 (2023) - [i47]Martin A. Riedmiller, Tim Hertweck, Roland Hafner:
Less is more - the Dispatcher/Executor principle for multi-task Reinforcement Learning. CoRR abs/2312.09120 (2023) - [i46]Thomas Lampe, Abbas Abdolmaleki, Sarah Bechtle, Sandy H. Huang, Jost Tobias Springenberg, Michael Bloesch, Oliver Groth, Roland Hafner, Tim Hertweck, Michael Neunert, Markus Wulfmeier, Jingwei Zhang, Francesco Nori, Nicolas Heess, Martin A. Riedmiller:
Mastering Stacking of Diverse Shapes with Large-Scale Iterative Reinforcement Learning on Real Robots. CoRR abs/2312.11374 (2023) - 2022
- [j21]Jonas Degrave, Federico Felici, Jonas Buchli, Michael Neunert, Brendan D. Tracey, Francesco Carpanese, Timo Ewalds, Roland Hafner, Abbas Abdolmaleki, Diego de Las Casas, Craig Donner, Leslie Fritz, Cristian Galperti, Andrea Huber, James Keeling, Maria Tsimpoukelli, Jackie Kay, Antoine Merle, Jean-Marc Moret, Seb Noury, Federico Pesamosca, David Pfau, Olivier Sauter, Cristian Sommariva, Stefano Coda, Basil Duval, Ambrogio Fasoli, Pushmeet Kohli, Koray Kavukcuoglu, Demis Hassabis, Martin A. Riedmiller:
Magnetic control of tokamak plasmas through deep reinforcement learning. Nat. 602(7897): 414-419 (2022) - [c101]Sasha Salter, Markus Wulfmeier, Dhruva Tirumala, Nicolas Heess, Martin A. Riedmiller, Raia Hadsell, Dushyant Rao:
MO2: Model-Based Offline Options. CoLLAs 2022: 902-919 - [c100]Arunkumar Byravan, Leonard Hasenclever, Piotr Trochim, Mehdi Mirza, Alessandro Davide Ialongo, Yuval Tassa, Jost Tobias Springenberg, Abbas Abdolmaleki, Nicolas Heess, Josh Merel, Martin A. Riedmiller:
Evaluating Model-Based Planning and Planner Amortization for Continuous Control. ICLR 2022 - [i45]Nathan Lambert, Markus Wulfmeier, William F. Whitney, Arunkumar Byravan, Michael Bloesch, Vibhavari Dasagi, Tim Hertweck, Martin A. Riedmiller:
The Challenges of Exploration for Offline Reinforcement Learning. CoRR abs/2201.11861 (2022) - [i44]Bobak Shahriari, Abbas Abdolmaleki, Arunkumar Byravan, Abe Friesen, Siqi Liu, Jost Tobias Springenberg, Nicolas Heess, Matt Hoffman, Martin A. Riedmiller:
Revisiting Gaussian mixture critics in off-policy reinforcement learning: a sample-based approach. CoRR abs/2204.10256 (2022) - [i43]Sasha Salter, Markus Wulfmeier, Dhruva Tirumala, Nicolas Heess, Martin A. Riedmiller, Raia Hadsell, Dushyant Rao:
MO2: Model-Based Offline Options. CoRR abs/2209.01947 (2022) - [i42]Tim Seyde, Peter Werner, Wilko Schwarting, Igor Gilitschenski, Martin A. Riedmiller, Daniela Rus, Markus Wulfmeier:
Solving Continuous Control via Q-learning. CoRR abs/2210.12566 (2022) - [i41]Giulia Vezzani, Dhruva Tirumala, Markus Wulfmeier, Dushyant Rao, Abbas Abdolmaleki, Ben Moran, Tuomas Haarnoja, Jan Humplik, Roland Hafner, Michael Neunert, Claudio Fantacci, Tim Hertweck, Thomas Lampe, Fereshteh Sadeghi, Nicolas Heess, Martin A. Riedmiller:
SkillS: Adaptive Skill Sequencing for Efficient Temporally-Extended Exploration. CoRR abs/2211.13743 (2022) - 2021
- [c99]Sandy H. Huang, Abbas Abdolmaleki, Giulia Vezzani, Philemon Brakel, Daniel J. Mankowitz, Michael Neunert, Steven Bohez, Yuval Tassa, Nicolas Heess, Martin A. Riedmiller, Raia Hadsell:
A Constrained Multi-Objective Reinforcement Learning Framework. CoRL 2021: 883-893 - [c98]Alex X. Lee, Coline Manon Devin, Yuxiang Zhou, Thomas Lampe, Konstantinos Bousmalis, Jost Tobias Springenberg, Arunkumar Byravan, Abbas Abdolmaleki, Nimrod Gileadi, David Khosid, Claudio Fantacci, José Enrique Chen, Akhil Raju, Rae Jeong, Michael Neunert, Antoine Laurens, Stefano Saliceti, Federico Casarini, Martin A. Riedmiller, Raia Hadsell, Francesco Nori:
Beyond Pick-and-Place: Tackling Robotic Stacking of Diverse Shapes. CoRL 2021: 1089-1131 - [c97]Michael Bloesch, Jan Humplik, Viorica Patraucean, Roland Hafner, Tuomas Haarnoja, Arunkumar Byravan, Noah Yamamoto Siegel, Saran Tunyasuvunakool, Federico Casarini, Nathan Batchelor, Francesco Romano, Stefano Saliceti, Martin A. Riedmiller, S. M. Ali Eslami, Nicolas Heess:
Towards Real Robot Learning in the Wild: A Case Study in Bipedal Locomotion. CoRL 2021: 1502-1511 - [c96]Martin A. Riedmiller, Jost Tobias Springenberg, Roland Hafner, Nicolas Heess:
Collect & Infer - a fresh look at data-efficient Reinforcement Learning. CoRL 2021: 1736-1744 - [c95]Markus Wulfmeier, Dushyant Rao, Roland Hafner, Thomas Lampe, Abbas Abdolmaleki, Tim Hertweck, Michael Neunert, Dhruva Tirumala, Noah Y. Siegel, Nicolas Heess, Martin A. Riedmiller:
Data-efficient Hindsight Off-policy Option Learning. ICML 2021: 11340-11350 - [c94]Markus Wulfmeier, Arunkumar Byravan, Tim Hertweck, Irina Higgins, Ankush Gupta, Tejas Kulkarni, Malcolm Reynolds, Denis Teplyashin, Roland Hafner, Thomas Lampe, Martin A. Riedmiller:
Representation Matters: Improving Perception and Exploration for Robotics. ICRA 2021: 6512-6519 - [c93]Nico Gürtler, Felix Widmaier, Cansu Sancaktar, Sebastian Blaes, Pavel Kolev, Stefan Bauer, Manuel Wüthrich, Markus Wulfmeier, Martin A. Riedmiller, Arthur Allshire, Qiang Wang, Robert McCarthy, Hangyeol Kim, Jongchan Baek, Wookyong Kwon, Shanliang Qian, Yasunori Toshimitsu, Mike Yan Michelis, Amirhossein Kazemipour, Arman Raayatsanati, Hehui Zheng, Barnabas Gavin Cangan, Bernhard Schölkopf, Georg Martius:
Real Robot Challenge 2022: Learning Dexterous Manipulation from Offline Data in the Real World. NeurIPS (Competition and Demos) 2021: 133-150 - [c92]Tim Seyde, Igor Gilitschenski, Wilko Schwarting, Bartolomeo Stellato, Martin A. Riedmiller, Markus Wulfmeier, Daniela Rus:
Is Bang-Bang Control All You Need? Solving Continuous Control with Bernoulli Policies. NeurIPS 2021: 27209-27221 - [i40]William F. Whitney, Michael Bloesch, Jost Tobias Springenberg, Abbas Abdolmaleki, Martin A. Riedmiller:
Rethinking Exploration for Sample-Efficient Policy Learning. CoRR abs/2101.09458 (2021) - [i39]Abbas Abdolmaleki, Sandy H. Huang, Giulia Vezzani, Bobak Shahriari, Jost Tobias Springenberg, Shruti Mishra, Dhruva TB, Arunkumar Byravan, Konstantinos Bousmalis, András György, Csaba Szepesvári, Raia Hadsell, Nicolas Heess, Martin A. Riedmiller:
On Multi-objective Policy Optimization as a Tool for Reinforcement Learning. CoRR abs/2106.08199 (2021) - [i38]Martin A. Riedmiller, Jost Tobias Springenberg, Roland Hafner, Nicolas Heess:
Collect & Infer - a fresh look at data-efficient Reinforcement Learning. CoRR abs/2108.10273 (2021) - [i37]Oliver Groth, Markus Wulfmeier, Giulia Vezzani, Vibhavari Dasagi, Tim Hertweck, Roland Hafner, Nicolas Heess, Martin A. Riedmiller:
Is Curiosity All You Need? On the Utility of Emergent Behaviours from Curious Exploration. CoRR abs/2109.08603 (2021) - [i36]Arunkumar Byravan, Leonard Hasenclever, Piotr Trochim, Mehdi Mirza, Alessandro Davide Ialongo, Yuval Tassa, Jost Tobias Springenberg, Abbas Abdolmaleki, Nicolas Heess, Josh Merel, Martin A. Riedmiller:
Evaluating model-based planning and planner amortization for continuous control. CoRR abs/2110.03363 (2021) - [i35]Alex X. Lee, Coline Devin, Yuxiang Zhou, Thomas Lampe, Konstantinos Bousmalis, Jost Tobias Springenberg, Arunkumar Byravan, Abbas Abdolmaleki, Nimrod Gileadi, David Khosid, Claudio Fantacci, José Enrique Chen, Akhil Raju, Rae Jeong, Michael Neunert, Antoine Laurens, Stefano Saliceti, Federico Casarini, Martin A. Riedmiller, Raia Hadsell, Francesco Nori:
Beyond Pick-and-Place: Tackling Robotic Stacking of Diverse Shapes. CoRR abs/2110.06192 (2021) - [i34]Tim Seyde, Igor Gilitschenski, Wilko Schwarting, Bartolomeo Stellato, Martin A. Riedmiller, Markus Wulfmeier, Daniela Rus:
Is Bang-Bang Control All You Need? Solving Continuous Control with Bernoulli Policies. CoRR abs/2111.02552 (2021) - 2020
- [c91]Roland Hafner, Tim Hertweck, Philipp Klöppner, Michael Bloesch, Michael Neunert, Markus Wulfmeier, Saran Tunyasuvunakool, Nicolas Heess, Martin A. Riedmiller:
Towards General and Autonomous Learning of Core Skills: A Case Study in Locomotion. CoRL 2020: 1084-1099 - [c90]Daniel J. Mankowitz, Nir Levine, Rae Jeong, Abbas Abdolmaleki, Jost Tobias Springenberg, Yuanyuan Shi, Jackie Kay, Todd Hester, Timothy A. Mann, Martin A. Riedmiller:
Robust Reinforcement Learning for Continuous Control with Model Misspecification. ICLR 2020 - [c89]Noah Y. Siegel, Jost Tobias Springenberg, Felix Berkenkamp, Abbas Abdolmaleki, Michael Neunert, Thomas Lampe, Roland Hafner, Nicolas Heess, Martin A. Riedmiller:
Keep Doing What Worked: Behavior Modelling Priors for Offline Reinforcement Learning. ICLR 2020 - [c88]H. Francis Song, Abbas Abdolmaleki, Jost Tobias Springenberg, Aidan Clark, Hubert Soyer, Jack W. Rae, Seb Noury, Arun Ahuja, Siqi Liu, Dhruva Tirumala, Nicolas Heess, Dan Belov, Martin A. Riedmiller, Matthew M. Botvinick:
V-MPO: On-Policy Maximum a Posteriori Policy Optimization for Discrete and Continuous Control. ICLR 2020 - [c87]Abbas Abdolmaleki, Sandy H. Huang, Leonard Hasenclever, Michael Neunert, H. Francis Song, Martina Zambelli, Murilo F. Martins, Nicolas Heess, Raia Hadsell, Martin A. Riedmiller:
A distributional view on multi-objective policy optimization. ICML 2020: 11-22 - [c86]Markus Wulfmeier, Abbas Abdolmaleki, Roland Hafner, Jost Tobias Springenberg, Michael Neunert, Noah Y. Siegel, Tim Hertweck, Thomas Lampe, Nicolas Heess, Martin A. Riedmiller:
Compositional Transfer in Hierarchical Reinforcement Learning. Robotics: Science and Systems 2020 - [i33]Michael Neunert, Abbas Abdolmaleki, Markus Wulfmeier, Thomas Lampe, Jost Tobias Springenberg, Roland Hafner, Francesco Romano, Jonas Buchli, Nicolas Heess, Martin A. Riedmiller:
Continuous-Discrete Reinforcement Learning for Hybrid Control in Robotics. CoRR abs/2001.00449 (2020) - [i32]Noah Y. Siegel, Jost Tobias Springenberg, Felix Berkenkamp, Abbas Abdolmaleki, Michael Neunert, Thomas Lampe, Roland Hafner, Nicolas Heess, Martin A. Riedmiller:
Keep Doing What Worked: Behavioral Modelling Priors for Offline Reinforcement Learning. CoRR abs/2002.08396 (2020) - [i31]Abbas Abdolmaleki, Sandy H. Huang, Leonard Hasenclever, Michael Neunert, H. Francis Song, Martina Zambelli, Murilo F. Martins, Nicolas Heess, Raia Hadsell, Martin A. Riedmiller:
A Distributional View on Multi-Objective Policy Optimization. CoRR abs/2005.07513 (2020) - [i30]Tim Hertweck, Martin A. Riedmiller, Michael Bloesch, Jost Tobias Springenberg, Noah Y. Siegel, Markus Wulfmeier, Roland Hafner, Nicolas Heess:
Simple Sensor Intentions for Exploration. CoRR abs/2005.07541 (2020) - [i29]Markus Wulfmeier, Dushyant Rao, Roland Hafner, Thomas Lampe, Abbas Abdolmaleki, Tim Hertweck, Michael Neunert, Dhruva Tirumala, Noah Y. Siegel, Nicolas Heess, Martin A. Riedmiller:
Data-efficient Hindsight Off-policy Option Learning. CoRR abs/2007.15588 (2020) - [i28]Roland Hafner, Tim Hertweck, Philipp Klöppner, Michael Bloesch, Michael Neunert, Markus Wulfmeier, Saran Tunyasuvunakool, Nicolas Heess, Martin A. Riedmiller:
Towards General and Autonomous Learning of Core Skills: A Case Study in Locomotion. CoRR abs/2008.12228 (2020) - [i27]Jost Tobias Springenberg, Nicolas Heess, Daniel J. Mankowitz, Josh Merel, Arunkumar Byravan, Abbas Abdolmaleki, Jackie Kay, Jonas Degrave, Julian Schrittwieser, Yuval Tassa, Jonas Buchli, Dan Belov, Martin A. Riedmiller:
Local Search for Policy Iteration in Continuous Control. CoRR abs/2010.05545 (2020) - [i26]Daniel J. Mankowitz, Dan A. Calian, Rae Jeong, Cosmin Paduraru, Nicolas Heess, Sumanth Dathathri, Martin A. Riedmiller, Timothy A. Mann:
Robust Constrained Reinforcement Learning for Continuous Control with Model Misspecification. CoRR abs/2010.10644 (2020) - [i25]Giulia Vezzani, Michael Neunert, Markus Wulfmeier, Rae Jeong, Thomas Lampe, Noah Y. Siegel, Roland Hafner, Abbas Abdolmaleki, Martin A. Riedmiller, Francesco Nori:
"What, not how": Solving an under-actuated insertion task from scratch. CoRR abs/2010.15492 (2020) - [i24]Markus Wulfmeier, Arunkumar Byravan, Tim Hertweck, Irina Higgins, Ankush Gupta, Tejas Kulkarni, Malcolm Reynolds, Denis Teplyashin, Roland Hafner, Thomas Lampe, Martin A. Riedmiller:
Representation Matters: Improving Perception and Exploration for Robotics. CoRR abs/2011.01758 (2020)
2010 – 2019
- 2019
- [j20]Jan M. Wülfing, Sreedhar S. Kumar, Joschka Boedecker, Martin A. Riedmiller, Ulrich Egert:
Adaptive long-term control of biological neural networks with Deep Reinforcement Learning. Neurocomputing 342: 66-74 (2019) - [c85]Arunkumar Byravan, Jost Tobias Springenberg, Abbas Abdolmaleki, Roland Hafner, Michael Neunert, Thomas Lampe, Noah Y. Siegel, Nicolas Heess, Martin A. Riedmiller:
Imagined Value Gradients: Model-Based Policy Optimization with Transferable Latent Dynamics Models. CoRL 2019: 566-589 - [c84]Michael Neunert, Abbas Abdolmaleki, Markus Wulfmeier, Thomas Lampe, Jost Tobias Springenberg, Roland Hafner, Francesco Romano, Jonas Buchli, Nicolas Heess, Martin A. Riedmiller:
Continuous-Discrete Reinforcement Learning for Hybrid Control in Robotics. CoRL 2019: 735-751 - [c83]Devin Schwab, Jost Tobias Springenberg, Murilo Fernandes Martins, Michael Neunert, Thomas Lampe, Abbas Abdolmaleki, Tim Hertweck, Roland Hafner, Francesco Nori, Martin A. Riedmiller:
Simultaneously Learning Vision and Feature-Based Control Policies for Real-World Ball-In-A-Cup. Robotics: Science and Systems 2019 - [i23]Carlos Florensa, Jonas Degrave, Nicolas Heess, Jost Tobias Springenberg, Martin A. Riedmiller:
Self-supervised Learning of Image Embedding for Continuous Control. CoRR abs/1901.00943 (2019) - [i22]Devin Schwab, Jost Tobias Springenberg, Murilo F. Martins, Thomas Lampe, Michael Neunert, Abbas Abdolmaleki, Tim Hertweck, Roland Hafner, Francesco Nori, Martin A. Riedmiller:
Simultaneously Learning Vision and Feature-based Control Policies for Real-world Ball-in-a-Cup. CoRR abs/1902.04706 (2019) - [i21]Daniel J. Mankowitz, Nir Levine, Rae Jeong, Abbas Abdolmaleki, Jost Tobias Springenberg, Timothy A. Mann, Todd Hester, Martin A. Riedmiller:
Robust Reinforcement Learning for Continuous Control with Model Misspecification. CoRR abs/1906.07516 (2019) - [i20]Markus Wulfmeier, Abbas Abdolmaleki, Roland Hafner, Jost Tobias Springenberg, Michael Neunert, Tim Hertweck, Thomas Lampe, Noah Y. Siegel, Nicolas Heess, Martin A. Riedmiller:
Regularized Hierarchical Policies for Compositional Transfer in Robotics. CoRR abs/1906.11228 (2019) - [i19]H. Francis Song, Abbas Abdolmaleki, Jost Tobias Springenberg, Aidan Clark, Hubert Soyer, Jack W. Rae, Seb Noury, Arun Ahuja, Siqi Liu, Dhruva Tirumala, Nicolas Heess, Dan Belov, Martin A. Riedmiller, Matthew M. Botvinick:
V-MPO: On-Policy Maximum a Posteriori Policy Optimization for Discrete and Continuous Control. CoRR abs/1909.12238 (2019) - [i18]Arunkumar Byravan, Jost Tobias Springenberg, Abbas Abdolmaleki, Roland Hafner, Michael Neunert, Thomas Lampe, Noah Y. Siegel, Nicolas Heess, Martin A. Riedmiller:
Imagined Value Gradients: Model-Based Policy Optimization with Transferable Latent Dynamics Models. CoRR abs/1910.04142 (2019) - [i17]Jonas Degrave, Abbas Abdolmaleki, Jost Tobias Springenberg, Nicolas Heess, Martin A. Riedmiller:
Quinoa: a Q-function You Infer Normalized Over Actions. CoRR abs/1911.01831 (2019) - 2018
- [c82]Jan Wülfing, Sreedhar S. Kumar, Joschka Boedecker, Martin A. Riedmiller, Ulrich Egert:
Controlling biological neural networks with deep reinforcement learning. ESANN 2018 - [c81]Abbas Abdolmaleki, Jost Tobias Springenberg, Yuval Tassa, Rémi Munos, Nicolas Heess, Martin A. Riedmiller:
Maximum a Posteriori Policy Optimisation. ICLR (Poster) 2018 - [c80]Karol Hausman, Jost Tobias Springenberg, Ziyu Wang, Nicolas Heess, Martin A. Riedmiller:
Learning an Embedding Space for Transferable Robot Skills. ICLR (Poster) 2018 - [c79]Martin A. Riedmiller, Roland Hafner, Thomas Lampe, Michael Neunert, Jonas Degrave, Tom Van de Wiele, Vlad Mnih, Nicolas Heess, Jost Tobias Springenberg:
Learning by Playing Solving Sparse Reward Tasks from Scratch. ICML 2018: 4341-4350 - [c78]Alvaro Sanchez-Gonzalez, Nicolas Heess, Jost Tobias Springenberg, Josh Merel, Martin A. Riedmiller, Raia Hadsell, Peter W. Battaglia:
Graph Networks as Learnable Physics Engines for Inference and Control. ICML 2018: 4467-4476 - [i16]Yuval Tassa, Yotam Doron, Alistair Muldal, Tom Erez, Yazhe Li, Diego de Las Casas, David Budden, Abbas Abdolmaleki, Josh Merel, Andrew Lefrancq, Timothy P. Lillicrap, Martin A. Riedmiller:
DeepMind Control Suite. CoRR abs/1801.00690 (2018) - [i15]Martin A. Riedmiller, Roland Hafner, Thomas Lampe, Michael Neunert, Jonas Degrave, Tom Van de Wiele, Volodymyr Mnih, Nicolas Heess, Jost Tobias Springenberg:
Learning by Playing - Solving Sparse Reward Tasks from Scratch. CoRR abs/1802.10567 (2018) - [i14]Alvaro Sanchez-Gonzalez, Nicolas Heess, Jost Tobias Springenberg, Josh Merel, Martin A. Riedmiller, Raia Hadsell, Peter W. Battaglia:
Graph networks as learnable physics engines for inference and control. CoRR abs/1806.01242 (2018) - [i13]Abbas Abdolmaleki, Jost Tobias Springenberg, Yuval Tassa, Rémi Munos, Nicolas Heess, Martin A. Riedmiller:
Maximum a Posteriori Policy Optimisation. CoRR abs/1806.06920 (2018) - [i12]Abbas Abdolmaleki, Jost Tobias Springenberg, Jonas Degrave, Steven Bohez, Yuval Tassa, Dan Belov, Nicolas Heess, Martin A. Riedmiller:
Relative Entropy Regularized Policy Iteration. CoRR abs/1812.02256 (2018) - 2017
- [i11]Ivaylo Popov, Nicolas Heess, Timothy P. Lillicrap, Roland Hafner, Gabriel Barth-Maron, Matej Vecerík, Thomas Lampe, Yuval Tassa, Tom Erez, Martin A. Riedmiller:
Data-efficient Deep Reinforcement Learning for Dexterous Manipulation. CoRR abs/1704.03073 (2017) - [i10]Rico Jonschkowski, Roland Hafner, Jonathan Scholz, Martin A. Riedmiller:
PVEs: Position-Velocity Encoders for Unsupervised Learning of Structured State Representations. CoRR abs/1705.09805 (2017) - [i9]Nicolas Heess, Dhruva TB, Srinivasan Sriram, Jay Lemmon, Josh Merel, Greg Wayne, Yuval Tassa, Tom Erez, Ziyu Wang, S. M. Ali Eslami, Martin A. Riedmiller, David Silver:
Emergence of Locomotion Behaviours in Rich Environments. CoRR abs/1707.02286 (2017) - [i8]Matej Vecerík, Todd Hester, Jonathan Scholz, Fumin Wang, Olivier Pietquin, Bilal Piot, Nicolas Heess, Thomas Rothörl, Thomas Lampe, Martin A. Riedmiller:
Leveraging Demonstrations for Deep Reinforcement Learning on Robotics Problems with Sparse Rewards. CoRR abs/1707.08817 (2017) - 2016
- [j19]Alexey Dosovitskiy, Philipp Fischer, Jost Tobias Springenberg, Martin A. Riedmiller, Thomas Brox:
Discriminative Unsupervised Feature Learning with Exemplar Convolutional Neural Networks. IEEE Trans. Pattern Anal. Mach. Intell. 38(9): 1734-1747 (2016) - [j18]Sreedhar S. Kumar, Jan Wülfing, Samora Okujeni, Joschka Boedecker, Martin A. Riedmiller, Ulrich Egert:
Autonomous Optimization of Targeted Stimulation of Neuronal Networks. PLoS Comput. Biol. 12(8) (2016) - [i7]Nicolas Heess, Gregory Wayne, Yuval Tassa, Timothy P. Lillicrap, Martin A. Riedmiller, David Silver:
Learning and Transfer of Modulated Locomotor Controllers. CoRR abs/1610.05182 (2016) - 2015
- [j17]Wendelin Böhmer, Jost Tobias Springenberg, Joschka Boedecker, Martin A. Riedmiller, Klaus Obermayer:
Autonomous Learning of State Representations for Control: An Emerging Field Aims to Autonomously Learn State Representations for Reinforcement Learning Agents from Their Real-World Sensor Observations. Künstliche Intell. 29(4): 353-362 (2015) - [j16]Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin A. Riedmiller, Andreas Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, Demis Hassabis:
Human-level control through deep reinforcement learning. Nat. 518(7540): 529-533 (2015) - [c77]Andreas Eitel, Jost Tobias Springenberg, Luciano Spinello, Martin A. Riedmiller, Wolfram Burgard:
Multimodal deep learning for robust RGB-D object recognition. IROS 2015: 681-687 - [c76]Manuel Watter, Jost Tobias Springenberg, Joschka Boedecker, Martin A. Riedmiller:
Embed to Control: A Locally Linear Latent Dynamics Model for Control from Raw Images. NIPS 2015: 2746-2754 - [c75]Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, Martin A. Riedmiller:
Striving for Simplicity: The All Convolutional Net. ICLR (Workshop) 2015 - [i6]Manuel Watter, Jost Tobias Springenberg, Joschka Boedecker, Martin A. Riedmiller:
Embed to Control: A Locally Linear Latent Dynamics Model for Control from Raw Images. CoRR abs/1506.07365 (2015) - [i5]Andreas Eitel, Jost Tobias Springenberg, Luciano Spinello, Martin A. Riedmiller, Wolfram Burgard:
Multimodal Deep Learning for Robust RGB-D Object Recognition. CoRR abs/1507.06821 (2015) - 2014
- [c74]Joschka Boedecker, Jost Tobias Springenberg, Jan Wülfing, Martin A. Riedmiller:
Approximate real-time optimal control based on sparse Gaussian process models. ADPRL 2014: 1-8 - [c73]David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, Martin A. Riedmiller:
Deterministic Policy Gradient Algorithms. ICML 2014: 387-395 - [c72]Thomas Lampe, Martin A. Riedmiller:
Approximate model-assisted Neural Fitted Q-Iteration. IJCNN 2014: 2698-2704 - [c71]Thomas Lampe, Lukas Dominique Josef Fiederer, Martin Voelker, Alexander Knorr, Martin A. Riedmiller, Tonio Ball:
A brain-computer interface for high-level remote control of an autonomous, reinforcement-learning-based robotic system for reaching and grasping. IUI 2014: 83-88 - [c70]Alexey Dosovitskiy, Jost Tobias Springenberg, Martin A. Riedmiller, Thomas Brox:
Discriminative Unsupervised Feature Learning with Convolutional Neural Networks. NIPS 2014: 766-774 - [c69]Jost Tobias Springenberg, Martin A. Riedmiller:
Improving Deep Neural Networks with Probabilistic Maxout Units. ICLR (Workshop Poster) 2014 - [i4]Alexey Dosovitskiy, Jost Tobias Springenberg, Martin A. Riedmiller, Thomas Brox:
Discriminative Unsupervised Feature Learning with Convolutional Neural Networks. CoRR abs/1406.6909 (2014) - 2013
- [c68]Manuel Blum, Martin A. Riedmiller:
Electricity Demand Forecasting using Gaussian Processes. AAAI Workshop: Trading Agent Design and Analysis 2013 - [c67]Manuel Blum, Martin A. Riedmiller:
Optimization of Gaussian process hyperparameters using Rprop. ESANN 2013 - [c66]Martin A. Riedmiller:
Learning machines that perceive, act and communicate. MLIS@IJCAI 2013: 5 - [c65]Thomas Lampe, Martin A. Riedmiller:
Acquiring visual servoing reaching and grasping skills using neural reinforcement learning. IJCNN 2013: 1-8 - [c64]João Alves, José Neves, Sascha Lange, Martin A. Riedmiller:
Improvement of a Web Browser Game Through the Knowledge Extracted from Player Behavior. KICSS 2013: 53-65 - [i3]Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, Martin A. Riedmiller:
Playing Atari with Deep Reinforcement Learning. CoRR abs/1312.5602 (2013) - 2012
- [c63]Jan Mattner, Sascha Lange, Martin A. Riedmiller:
Learn to Swing Up and Balance a Real Pole Based on Raw Visual Input Data. ICONIP (5) 2012: 126-133 - [c62]Jost Tobias Springenberg, Martin A. Riedmiller:
Learning Temporal Coherent Features through Life-Time Sparsity. ICONIP (1) 2012: 347-356 - [c61]Manuel Blum, Jost Tobias Springenberg, Jan Wülfing, Martin A. Riedmiller:
A learned feature descriptor for object recognition in RGB-D data. ICRA 2012: 1298-1303 - [c60]Sascha Lange, Martin A. Riedmiller, Arne Voigtländer:
Autonomous reinforcement learning on raw visual input data in a real world application. IJCNN 2012: 1-8 - [c59]Oliver Obst, Martin A. Riedmiller:
Taming the reservoir: Feedforward training for recurrent neural networks. IJCNN 2012: 1-7 - [c58]Jan Wülfing, Martin A. Riedmiller:
Unsupervised Learning of Local Features for Music Classification. ISMIR 2012: 139-144 - [p2]Sascha Lange, Thomas Gabel, Martin A. Riedmiller:
Batch Reinforcement Learning. Reinforcement Learning 2012: 45-73 - [p1]Martin A. Riedmiller:
10 Steps and Some Tricks to Set up Neural Reinforcement Controllers. Neural Networks: Tricks of the Trade (2nd ed.) 2012: 735-757 - 2011
- [j15]Roland Hafner, Martin A. Riedmiller:
Reinforcement learning in feedback control - Challenges and benchmarks from technical process control. Mach. Learn. 84(1-2): 137-169 (2011) - [c57]Andreas Witsch, Roland Reichle, Kurt Geihs, Sascha Lange, Martin A. Riedmiller:
Enhancing the episodic natural actor-critic algorithm by a regularisation term to stabilize learning of control structures. ADPRL 2011: 156-163 - [c56]Thomas Gabel, Christian Lutz, Martin A. Riedmiller:
Improved neural fitted Q iteration applied to a novel computer gaming and learning benchmark. ADPRL 2011: 279-286 - 2010
- [j14]Martin Lauer, Roland Hafner, Sascha Lange, Martin A. Riedmiller:
Cognitive concepts in autonomous soccer playing robots. Cogn. Syst. Res. 11(3): 287-309 (2010) - [c55]Sascha Lange, Martin A. Riedmiller:
Deep learning of visual control policies. ESANN 2010 - [c54]Sascha Lange, Martin A. Riedmiller:
Deep auto-encoder neural networks in reinforcement learning. IJCNN 2010: 1-8 - [c53]Thomas Gabel, Martin A. Riedmiller:
On Progress in RoboCup: The Simulation League Showcase. RoboCup 2010: 36-47
2000 – 2009
- 2009
- [j13]Martin A. Riedmiller, Thomas Gabel, Roland Hafner, Sascha Lange:
Reinforcement learning for robot soccer. Auton. Robots 27(1): 55-73 (2009) - [j12]Tim C. Kietzmann, Sascha Lange, Martin A. Riedmiller:
Computational object recognition: a biologically motivated approach. Biol. Cybern. 100(1): 59-79 (2009) - [j11]Stephan Timmer, Martin A. Riedmiller:
Efficient Identification of State in Reinforcement Learning. Künstliche Intell. 23(3): 5-11 (2009) - [c52]Tim C. Kietzmann, Martin A. Riedmiller:
The Neuro Slot Car Racer: Reinforcement Learning in a Real World Setting. ICMLA 2009: 311-316 - [e3]Sándor P. Fekete, Stefan Fischer, Martin A. Riedmiller, Subhash Suri:
Algorithmic Methods for Distributed Cooperative Systems, 06.09. - 11.09.2009. Dagstuhl Seminar Proceedings 09371, Schloss Dagstuhl - Leibniz-Zentrum für Informatik, Germany 2009 [contents] - [i2]Sándor P. Fekete, Stefan Fischer, Martin A. Riedmiller, Subhash Suri:
09371 Abstracts Collection - Algorithmic Methods for Distributed Cooperative Systems. Algorithmic Methods for Distributed Cooperative Systems 2009 - 2008
- [j10]Tim C. Kietzmann, Sascha Lange, Martin A. Riedmiller:
Incremental GRLVQ: Learning relevant features for 3D object recognition. Neurocomputing 71(13-15): 2868-2879 (2008) - [c51]Thomas Gabel, Martin A. Riedmiller:
Reinforcement learning for DEC-MDPs with changing action sets and partially ordered dependencies. AAMAS (3) 2008: 1333-1336 - [c50]Thomas Gabel, Martin A. Riedmiller:
Increasing Precision of Credible Case-Based Inference. ECCBR 2008: 225-239 - [c49]Thomas Gabel, Martin A. Riedmiller:
Evaluation of Batch-Mode Reinforcement Learning Methods for Solving DEC-MDPs with Changing Action Sets. EWRL 2008: 82-95 - [c48]Martin A. Riedmiller, Roland Hafner, Sascha Lange, Martin Lauer:
Learning to dribble on a real robot by success and failure. ICRA 2008: 2207-2208 - [c47]Thomas Gabel, Martin A. Riedmiller:
Joint Equilibrium Policy Search for Multi-Agent Scheduling Problems. MATES 2008: 61-72 - [c46]Thomas Gabel, Martin A. Riedmiller, Florian Trost:
A Case Study on Improving Defense Behavior in Soccer Simulation 2D: The NeuroHassle Approach. RoboCup 2008: 61-72 - 2007
- [c45]Martin A. Riedmiller, Thomas Gabel:
On Experiences in a Complex and Competitive Gaming Domain: Reinforcement Learning Meets RoboCup. CIG 2007: 17-23 - [c44]Thomas Gabel, Martin A. Riedmiller:
Scaling Adaptive Agent-Based Reactive Job-Shop Scheduling to Large-Scale Problems. CISched 2007: 259-266 - [c43]Stephan Timmer, Martin A. Riedmiller:
Safe Q-Learning on Complete History Spaces. ECML 2007: 394-405 - [c42]Arne Voigtländer, Sascha Lange, Martin Lauer, Martin A. Riedmiller:
Real-time 3D Ball Recognition using Perspective and Catadioptric Cameras. EMCR 2007 - [c41]Verena Heidrich-Meisner, Martin Lauer, Christian Igel, Martin A. Riedmiller:
Reinforcement learning in a nutshell. ESANN 2007: 277-288 - [c40]Martin A. Riedmiller, Michael Montemerlo, Hendrik Dahlkamp:
Learning to Drive a Real Car in 20 Minutes. FBIT 2007: 645-650 - [c39]Thomas Gabel, Martin A. Riedmiller:
An Analysis of Case-Based Value Function Approximation by Approximating State Transition Graphs. ICCBR 2007: 344-358 - [c38]Roland Hafner, Martin A. Riedmiller:
Neural Reinforcement Learning Controllers for a Real Robot Application. ICRA 2007: 2098-2103 - [c37]Heiko Müller, Martin Lauer, Roland Hafner, Sascha Lange, Artur Merke, Martin A. Riedmiller:
Making a Robot Learn to Play Soccer Using Reward and Punishment. KI 2007: 220-234 - 2006
- [j9]Martin A. Riedmiller, Thomas Gabel, Roland Hafner, Sascha Lange, Martin Lauer:
Die Brainstormers: Entwurfsprinzipien lernfähiger autonomer Roboter. Inform. Spektrum 29(3): 175-190 (2006) - [j8]Thomas Gabel, Martin A. Riedmiller:
Learning a Partial Behavior for a Competitive Robotic Soccer Agent. Künstliche Intell. 20(2): 18-23 (2006) - [c36]Thomas Gabel, Martin A. Riedmiller:
Reducing policy degradation in neuro-dynamic programming. ESANN 2006: 653-658 - [c35]Thomas Gabel, Martin A. Riedmiller:
Multi-agent Case-Based Reasoning for Cooperative Reinforcement Learners. ECCBR 2006: 32-46 - [c34]Sascha Lange, Martin A. Riedmiller:
Appearance-Based Robot Discrimination Using Eigenimages. RoboCup 2006: 499-506 - [e2]Hans-Dieter Burkhard, Martin A. Riedmiller, Uwe Schwiegelshohn, Manuela M. Veloso:
Multi-Robot Systems: Perception, Behaviors, Learning, and Action, 19.06. - 23.06.2006. Dagstuhl Seminar Proceedings 06251, Internationales Begegnungs- und Forschungszentrum für Informatik (IBFI), Schloss Dagstuhl, Germany 2006 [contents] - [i1]Hans-Dieter Burkhard, Martin A. Riedmiller, Uwe Schwiegelshohn, Manuela M. Veloso:
06251 Abstracts Collection - Multi-Robot Systems: Perception, Behaviors, Learning, and Action. Multi-Robot Systems: Perception, Behaviors, Learning, and Action 2006 - 2005
- [j7]Martin A. Riedmiller, Daniel Withopf:
Effective Methods for Reinforcement Learning in Large Multi-Agent Domains. it Inf. Technol. 47(5): 241-249 (2005) - [c33]Martin A. Riedmiller:
Neural Fitted Q Iteration - First Experiences with a Data Efficient Neural Reinforcement Learning Method. ECML 2005: 317-328 - [c32]Thomas Gabel, Martin A. Riedmiller:
CBR for State Value Function Approximation in Reinforcement Learning. ICCBR 2005: 206-221 - [c31]Martin Lauer, Sascha Lange, Martin A. Riedmiller:
Modeling Moving Objects in a Dynamically Changing Robot Application. KI 2005: 291-303 - [c30]Alexander Sung, Artur Merke, Martin A. Riedmiller:
Reinforcement Learning Using a Grid Based Function Approximator. Biomimetic Neural Learning for Intelligent Robots 2005: 235-244 - [c29]Martin Lauer, Sascha Lange, Martin A. Riedmiller:
Calculating the Perfect Match: An Efficient and Accurate Approach for Robot Self-localization. RoboCup 2005: 142-153 - [c28]Stephan Timmer, Martin A. Riedmiller:
Learning policies for abstract state spaces. SMC 2005: 3179-3184 - [c27]Martin A. Riedmiller, Daniel Withopf:
Comparing different methods to speed up reinforcement learning in a complex domain. SMC 2005: 3185-3190 - [c26]Martin A. Riedmiller:
Neural reinforcement learning to swing-up and balance a real pole. SMC 2005: 3191-3196 - [e1]Daniele Nardi, Martin A. Riedmiller, Claude Sammut, José Santos-Victor:
RoboCup 2004: Robot Soccer World Cup VIII. Lecture Notes in Computer Science 3276, Springer 2005, ISBN 3-540-25046-8 [contents] - 2004
- [j6]Enrico Pagello, Emanuele Menegatti, Ansgar Bredenfeld, Paulo Costa, Thomas Christaller, Adam Jacoff, Daniel Polani, Martin A. Riedmiller, Alessandro Saffiotti, Elizabeth Sklar, Takashi Tomoichi:
RoboCup-2003: New Scientific and Technical Advances. AI Mag. 25(2): 81-98 (2004) - [j5]Martin A. Riedmiller, François Fages, Malik Ghallab, Wolfgang Wahlster, Jörg H. Siekmann:
Invited talks. Künstliche Intell. 18(3): 44- (2004) - [j4]Ralf Schoknecht, Martin Spott, Martin A. Riedmiller:
Fynesse: An architecture for integrating prior knowledge in autonomously learning agents. Soft Comput. 8(6): 397-408 (2004) - [c25]Martin Lauer, Martin A. Riedmiller:
Reinforcement Learning for Stochastic Cooperative Multi-Agent Systems. AAMAS 2004: 1516-1517 - [c24]Martin A. Riedmiller:
Machine Learning for Autonomous Robots. KI 2004: 52-55 - [c23]Sascha Lange, Martin A. Riedmiller:
Evolution of Computer Vision Subsystems in Robot Navigation and Image Classification Tasks. RoboCup 2004: 184-195 - 2003
- [j3]Ralf Schoknecht, Martin A. Riedmiller:
Reinforcement learning on explicitly specified time scales. Neural Comput. Appl. 12(2): 61-80 (2003) - [c22]Ralf Schoknecht, Martin A. Riedmiller:
Learning to Control at Multiple Time Scales. ICANN 2003: 479-487 - [c21]Martin Lauer, Martin A. Riedmiller, Thomas Ragg, Walter Baum, Michael Wigbers:
The Smaller the Better: Comparison of Two Approaches for Sales Rate Prediction. IDA 2003: 451-461 - [c20]Roland Hafner, Martin A. Riedmiller:
Reinforcement learning on an omnidirectional mobile robot. IROS 2003: 418-423 - [c19]Enrico Pagello, Emanuele Menegatti, Ansgar Bredenfeld, Paulo Costa, Thomas Christaller, Adam Jacoff, Jeffrey Johnson, Martin A. Riedmiller, Alessandro Saffiotti, Takashi Tomoichi:
Overview of RoboCup 2003 Competition and Conferences. RoboCup 2003: 1-14 - [c18]Hans-Dieter Burkhard, Minoru Asada, Andrea Bonarini, Adam Jacoff, Daniele Nardi, Martin A. Riedmiller, Claude Sammut, Elizabeth Sklar, Manuela M. Veloso:
RoboCup: Yesterday, Today, and Tomorrow Workshop of the Executive Committee in Blaubeuren, October 2003. RoboCup 2003: 15-34 - 2002
- [c17]Ralf Schoknecht, Martin A. Riedmiller:
Speeding-up Reinforcement Learning with Multi-step Actions. ICANN 2002: 813-818 - 2001
- [c16]Artur Merke, Martin A. Riedmiller:
Karlsruhe Brainstormers - A Reinforcement Learning Approach to Robotic Soccer. RoboCup 2001: 435-440 - 2000
- [c15]Martin A. Riedmiller, Andrew W. Moore, Jeff G. Schneider:
Reinforcement Learning for Cooperating and Communicating Reactive Agents in Electrical Power Grids. Balancing Reactivity and Social Deliberation in Multi-Agent Systems 2000: 137-149 - [c14]Martin Lauer, Martin A. Riedmiller:
An Algorithm for Distributed Reinforcement Learning in Cooperative Multi-Agent Systems. ICML 2000: 535-542 - [c13]Sebastian Buck, Martin A. Riedmiller:
Learning Situation Dependent Success Rates of Actions in a RoboCup Scenario. PRICAI 2000: 809 - [c12]Martin A. Riedmiller, Artur Merke, David Meier, Andreas Hoffmann, Alex Sinner, Ortwin Thate, R. Ehrmann:
Karlsruhe Brainstormers - A Reinforcement Learning Approach to Robotic Soccer. RoboCup 2000: 367-372 - [c11]Martin A. Riedmiller, Artur Merke, David Meier, Andreas Hoffmann, Alex Sinner, Ortwin Thate:
Karlsruhe Brainstormers 2000 Team Description. RoboCup 2000: 485-488
1990 – 1999
- 1999
- [j2]Martin A. Riedmiller:
Concepts and Facilities of a Neural Reinforcement Learning Control Architecture for Technical Process Control. Neural Comput. Appl. 8(4): 323-338 (1999) - [c10]Jeff G. Schneider, Weng-Keen Wong, Andrew W. Moore, Martin A. Riedmiller:
Distributed Value Functions. ICML 1999: 371-378 - [c9]Simone C. Riedmiller, Martin A. Riedmiller:
A Neural Reinforcement Learning Approach to Learn Local Dispatching Policies in Production Scheduling. IJCAI 1999: 764-771 - [c8]Martin A. Riedmiller, Sebastian Buck, Artur Merke, R. Ehrmann, Ortwin Thate, S. Dilger, Alex Sinner, Andreas Hoffmann, Lutz Frommberger:
Karlsruhe Brainstormers - Design Principles. RoboCup 1999: 588-591 - 1998
- [j1]Karoly Santa, Michael Mews, Martin A. Riedmiller:
A Neural Approach for the Control of Piezoelectric Micromanipulation Robots. J. Intell. Robotic Syst. 22(3-4): 351-374 (1998) - 1997
- [b1]Martin A. Riedmiller:
Selbständig lernende neuronale Steuerungen. Karlsruhe Institute of Technology, VDI-Verlag 1997, ISBN 3-18-362608-X, pp. 1-192 - [c7]Martin A. Riedmiller:
Application of a self-learning controller with continuous control signals based on the DOE-approach. ESANN 1997 - [c6]Michael Wigbers, Martin A. Riedmiller:
A new method for the analysis of neural reference model control. ICNN 1997: 739-743 - 1996
- [c5]Martin A. Riedmiller:
Application of sequential reinforcement learning to control dynamic systems. ICNN 1996: 167-172 - [c4]Achim Stahlberger, Martin A. Riedmiller:
Fast Network Pruning and Feature Extraction by using the Unit-OBS Algorithm. NIPS 1996: 655-661 - 1995
- [c3]D. Koll, Martin A. Riedmiller, Heinrich Braun:
Massively Parallel Training of Multi Layer Perceptrons With Irregular Topologies. ICANNGA 1995: 293-296 - [c2]Barbara Janusz, Martin A. Riedmiller:
Self-learning neural control of a mobile robot. ICNN 1995: 2358-2363 - 1993
- [c1]Martin A. Riedmiller, Heinrich Braun:
A direct adaptive method for faster backpropagation learning: the RPROP algorithm. ICNN 1993: 586-591
last updated on 2025-03-15 23:26 CET by the dblp team
all metadata released as open data under CC0 1.0 license