David Abel
2020 – today
- 2024
- [j1] David Abel, Mark K. Ho, Anna Harutyunyan: Three Dogmas of Reinforcement Learning. RLJ 2: 629-644 (2024)
- [c24] Andi Peng, Yuying Sun, Tianmin Shu, David Abel: Pragmatic Feature Preferences: Learning Reward-Relevant Preferences from Human Input. ICML 2024
- [i23] Andi Peng, Yuying Sun, Tianmin Shu, David Abel: Pragmatic Feature Preferences: Learning Reward-Relevant Preferences from Human Input. CoRR abs/2405.14769 (2024)
- [i22] David Abel, Mark K. Ho, Anna Harutyunyan: Three Dogmas of Reinforcement Learning. CoRR abs/2407.10583 (2024)
- [i21] Hyunin Lee, David Abel, Ming Jin, Javad Lavaei, Somayeh Sojoudi: A Black Swan Hypothesis in Markov Decision Process via Irrationality. CoRR abs/2407.18422 (2024)
- 2023
- [c23] Michael Bowling, John D. Martin, David Abel, Will Dabney: Settling the Reward Hypothesis. ICML 2023: 3003-3020
- [c22] David Abel, André Barreto, Benjamin Van Roy, Doina Precup, Hado Philip van Hasselt, Satinder Singh: A Definition of Continual Reinforcement Learning. NeurIPS 2023
- [i20] David Abel, André Barreto, Hado van Hasselt, Benjamin Van Roy, Doina Precup, Satinder Singh: On the Convergence of Bounded Agents. CoRR abs/2307.11044 (2023)
- [i19] David Abel, André Barreto, Benjamin Van Roy, Doina Precup, Hado van Hasselt, Satinder Singh: A Definition of Continual Reinforcement Learning. CoRR abs/2307.11046 (2023)
- 2022
- [c21] Jelena Luketina, Sebastian Flennerhag, Yannick Schroecker, David Abel, Tom Zahavy, Satinder Singh: Meta-Gradients in Non-Stationary Environments. CoLLAs 2022: 886-901
- [c20] David Abel, Will Dabney, Anna Harutyunyan, Mark K. Ho, Michael L. Littman, Doina Precup, Satinder Singh: On the Expressivity of Markov Reward (Extended Abstract). IJCAI 2022: 5254-5258
- [i18] David Abel: A Theory of Abstraction in Reinforcement Learning. CoRR abs/2203.00397 (2022)
- [i17] Jelena Luketina, Sebastian Flennerhag, Yannick Schroecker, David Abel, Tom Zahavy, Satinder Singh: Meta-Gradients in Non-Stationary Environments. CoRR abs/2209.06159 (2022)
- [i16] Michael Bowling, John D. Martin, David Abel, Will Dabney: Settling the Reward Hypothesis. CoRR abs/2212.10420 (2022)
- 2021
- [c19] Erwan Lecarpentier, David Abel, Kavosh Asadi, Yuu Jinnai, Emmanuel Rachelson, Michael L. Littman: Lipschitz Lifelong Reinforcement Learning. AAAI 2021: 8270-8278
- [c18] Tadashi Kozuno, Yunhao Tang, Mark Rowland, Rémi Munos, Steven Kapturowski, Will Dabney, Michal Valko, David Abel: Revisiting Peng's Q(λ) for Modern Reinforcement Learning. ICML 2021: 5794-5804
- [c17] David Abel, Will Dabney, Anna Harutyunyan, Mark K. Ho, Michael L. Littman, Doina Precup, Satinder Singh: On the Expressivity of Markov Reward. NeurIPS 2021: 7799-7812
- [i15] Tadashi Kozuno, Yunhao Tang, Mark Rowland, Rémi Munos, Steven Kapturowski, Will Dabney, Michal Valko, David Abel: Revisiting Peng's Q(λ) for Modern Reinforcement Learning. CoRR abs/2103.00107 (2021)
- [i14] Mark K. Ho, David Abel, Carlos G. Correa, Michael L. Littman, Jonathan D. Cohen, Thomas L. Griffiths: Control of mental representations in human planning. CoRR abs/2105.06948 (2021)
- [i13] David Abel, Cameron Allen, Dilip Arumugam, D. Ellis Hershkowitz, Michael L. Littman, Lawson L. S. Wong: Bad-Policy Density: A Measure of Reinforcement Learning Hardness. CoRR abs/2110.03424 (2021)
- [i12] David Abel, Will Dabney, Anna Harutyunyan, Mark K. Ho, Michael L. Littman, Doina Precup, Satinder Singh: On the Expressivity of Markov Reward. CoRR abs/2111.00876 (2021)
- 2020
- [b1] David Abel: A Theory of Abstraction in Reinforcement Learning. Brown University, USA, 2020
- [c16] Mark K. Ho, David Abel, Jonathan D. Cohen, Michael L. Littman, Thomas L. Griffiths: People Do Not Just Plan, They Plan to Plan. AAAI 2020: 1300-1307
- [c15] David Abel, Nate Umbanhowar, Khimya Khetarpal, Dilip Arumugam, Doina Precup, Michael L. Littman: Value Preserving State-Action Abstractions. AISTATS 2020: 1639-1650
- [c14] Khimya Khetarpal, Zafarali Ahmed, Gheorghe Comanici, David Abel, Doina Precup: What can I do here? A Theory of Affordances in Reinforcement Learning. ICML 2020: 5243-5253
- [i11] Erwan Lecarpentier, David Abel, Kavosh Asadi, Yuu Jinnai, Emmanuel Rachelson, Michael L. Littman: Lipschitz Lifelong Reinforcement Learning. CoRR abs/2001.05411 (2020)
- [i10] Kavosh Asadi, David Abel, Michael Littman: Learning State Abstractions for Transfer in Continuous Control. CoRR abs/2002.05518 (2020)
- [i9] Mark K. Ho, David Abel, Jonathan D. Cohen, Michael L. Littman, Thomas L. Griffiths: The Efficiency of Human Cognition Reflects Planned Information Processing. CoRR abs/2002.05769 (2020)
- [i8] Khimya Khetarpal, Zafarali Ahmed, Gheorghe Comanici, David Abel, Doina Precup: What can I do here? A Theory of Affordances in Reinforcement Learning. CoRR abs/2006.15085 (2020)
2010 – 2019
- 2019
- [c13] David Abel, Dilip Arumugam, Kavosh Asadi, Yuu Jinnai, Michael L. Littman, Lawson L. S. Wong: State Abstraction as Compression in Apprenticeship Learning. AAAI 2019: 3134-3142
- [c12] David Abel: A Theory of State Abstraction for Reinforcement Learning. AAAI 2019: 9876-9877
- [c11] David Abel: simple_rl: Reproducible Reinforcement Learning in Python. RML@ICLR 2019
- [c10] Yuu Jinnai, David Abel, David Ellis Hershkowitz, Michael L. Littman, George Dimitri Konidaris: Finding Options that Minimize Planning Time. ICML 2019: 3120-3129
- [c9] Yuu Jinnai, Jee Won Park, David Abel, George Dimitri Konidaris: Discovering Options for Exploration by Minimizing Cover Time. ICML 2019: 3130-3139
- [c8] David Abel, John Winder, Marie desJardins, Michael L. Littman: The Expected-Length Model of Options. IJCAI 2019: 1951-1958
- [i7] Yuu Jinnai, Jee Won Park, David Abel, George Dimitri Konidaris: Discovering Options for Exploration by Minimizing Cover Time. CoRR abs/1903.00606 (2019)
- 2018
- [c7] David Abel, Edward C. Williams, Stephen Brawner, Emily Reif, Michael L. Littman: Bandit-Based Solar Panel Control. AAAI 2018: 7713-7718
- [c6] David Abel, Dilip Arumugam, Lucas Lehnert, Michael L. Littman: State Abstractions for Lifelong Reinforcement Learning. ICML 2018: 10-19
- [c5] David Abel, Yuu Jinnai, Yue (Sophie) Guo, George Dimitri Konidaris, Michael L. Littman: Policy and Value Transfer in Lifelong Reinforcement Learning. ICML 2018: 20-29
- [i6] Yuu Jinnai, David Abel, Michael L. Littman, George Dimitri Konidaris: Finding Options that Minimize Planning Time. CoRR abs/1810.07311 (2018)
- [i5] Dilip Arumugam, David Abel, Kavosh Asadi, Nakul Gopalan, Christopher Grimm, Jun Ki Lee, Lucas Lehnert, Michael L. Littman: Mitigating Planner Overfitting in Model-Based Reinforcement Learning. CoRR abs/1812.01129 (2018)
- 2017
- [i4] David Abel, John Salvatier, Andreas Stuhlmüller, Owain Evans: Agent-Agnostic Human-in-the-Loop Reinforcement Learning. CoRR abs/1701.04079 (2017)
- [i3] David Abel, D. Ellis Hershkowitz, Michael L. Littman: Near Optimal Behavior via Approximate State Abstraction. CoRR abs/1701.04113 (2017)
- [i2] Christopher Grimm, Dilip Arumugam, Siddharth Karamcheti, David Abel, Lawson L. S. Wong, Michael L. Littman: Latent Attention Networks. CoRR abs/1706.00536 (2017)
- 2016
- [c4] David Abel, James MacGlashan, Michael L. Littman: Reinforcement Learning as a Framework for Ethical Decision Making. AAAI Workshop: AI, Ethics, and Society 2016
- [c3] David Abel, D. Ellis Hershkowitz, Michael L. Littman: Near Optimal Behavior via Approximate State Abstraction. ICML 2016: 2915-2923
- [i1] David Abel, Alekh Agarwal, Fernando Diaz, Akshay Krishnamurthy, Robert E. Schapire: Exploratory Gradient Boosting for Reinforcement Learning in Complex Domains. CoRR abs/1603.04119 (2016)
- 2015
- [c2] David Abel, D. Ellis Hershkowitz, Gabriel Barth-Maron, Stephen Brawner, Kevin O'Farrell, James MacGlashan, Stefanie Tellex: Goal-Based Action Priors. ICAPS 2015: 306-314
- 2014
- [c1] Gabriel Barth-Maron, David Abel, James MacGlashan, Stefanie Tellex: Affordances as Transferable Knowledge for Planning Agents. AAAI Fall Symposia 2014