Reinforcement Learning 2012
- Marco A. Wiering, Martijn van Otterlo (eds.): Reinforcement Learning. Adaptation, Learning, and Optimization 12, Springer 2012, ISBN 978-3-642-27644-6
- Martijn van Otterlo, Marco A. Wiering: Reinforcement Learning and Markov Decision Processes. 3-42
- Sascha Lange, Thomas Gabel, Martin A. Riedmiller: Batch Reinforcement Learning. 45-73
- Lucian Busoniu, Alessandro Lazaric, Mohammad Ghavamzadeh, Rémi Munos, Robert Babuska, Bart De Schutter: Least-Squares Methods for Policy Iteration. 75-109
- Todd Hester, Peter Stone: Learning and Using Models. 111-141
- Alessandro Lazaric: Transfer in Reinforcement Learning: A Framework and a Survey. 143-173
- Lihong Li: Sample Complexity Bounds of Exploration. 175-204
- Hado van Hasselt: Reinforcement Learning in Continuous State and Action Spaces. 207-251
- Martijn van Otterlo: Solving Relational and First-Order Logical Markov Decision Processes: A Survey. 253-292
- Bernhard Hengst: Hierarchical Approaches. 293-323
- Shimon Whiteson: Evolutionary Computation for Reinforcement Learning. 325-355
- Nikos Vlassis, Mohammad Ghavamzadeh, Shie Mannor, Pascal Poupart: Bayesian Reinforcement Learning. 359-386
- Matthijs T. J. Spaan: Partially Observable Markov Decision Processes. 387-414
- David Wingate: Predictively Defined Representations of State. 415-439
- Ann Nowé, Peter Vrancx, Yann-Michaël De Hauwere: Game Theory and Multi-agent Reinforcement Learning. 441-470
- Frans A. Oliehoek: Decentralized POMDPs. 471-503
- Ashvin Shah: Psychological and Neuroscientific Connections with Reinforcement Learning. 507-537
- István Szita: Reinforcement Learning in Games. 539-577
- Jens Kober, Jan Peters: Reinforcement Learning in Robotics: A Survey. 579-610
- Marco A. Wiering, Martijn van Otterlo: Conclusions, Future Directions and Outlook. 613-630