Sarath Sreedharan
Person information
- affiliation: Colorado State University, Fort Collins, CO, USA
2020 – today
- 2025
- [j6] Sarath Sreedharan, Siddharth Srivastava, Subbarao Kambhampati: Explain it as simple as possible, but no simpler - Explanation via model simplification for addressing inferential gap. Artif. Intell. 340: 104279 (2025)
- 2024
- [j5] Sarath Sreedharan, Tathagata Chakraborti, Christian Muise, Subbarao Kambhampati: Planning with mental models - Balancing explanations and explicability. Artif. Intell. 335: 104181 (2024)
- [c48] Malek Mechergui, Sarath Sreedharan: Goal Alignment: Re-analyzing Value Alignment Problems Using Human-Aware AI. AAAI 2024: 10110-10118
- [c47] Turgay Caglar, Sirine Belhaj, Tathagata Chakraborti, Michael Katz, Sarath Sreedharan: Can LLMs Fix Issues with Reasoning Models? Towards More Likely Models for AI Planning. AAAI 2024: 20061-20069
- [c46] Dylan Pallickara, Sarath Sreedharan: A Wireframe-Based Approach for Classifying and Acquiring Proficiency in the American Sign Language (Student Abstract). AAAI 2024: 23606-23607
- [c45] Turgay Caglar, Sarath Sreedharan: HELP! Providing Proactive Support in the Presence of Knowledge Asymmetry. AAMAS 2024: 234-243
- [c44] Malek Mechergui, Sarath Sreedharan: Expectation Alignment: Handling Reward Misspecification in the Presence of Expectation Mismatch. NeurIPS 2024
- [i37] Sarath Sreedharan, Malek Mechergui: Handling Reward Misspecification in the Presence of Expectation Mismatch. CoRR abs/2404.08791 (2024)
- [i36] Kelsey Sikes, Sarah Keren, Sarath Sreedharan: Reducing Human-Robot Goal State Divergence with Environment Design. CoRR abs/2404.15184 (2024)
- [i35] Silvia Tulli, Stylianos Loukas Vasileiou, Sarath Sreedharan: Human-Modeling in Sequential Decision-Making: An Analysis through the Lens of Human-Aware AI. CoRR abs/2405.07773 (2024)
- [i34] Sarath Sreedharan, Anagha Kulkarni, Subbarao Kambhampati: Explainable Human-AI Interaction: A Planning Perspective. CoRR abs/2405.15804 (2024)
- 2023
- [j4] Sarath Sreedharan: Human-aware AI - A foundational framework for human-AI interaction. AI Mag. 44(4): 460-466 (2023)
- [c43] Sarath Sreedharan: Human-Aware AI - A Foundational Framework for Human-AI Interaction. AAAI 2023: 15455
- [c42] Brittany Cates, Anagha Kulkarni, Sarath Sreedharan: Planning for Attacker Entrapment in Adversarial Settings. ICAPS 2023: 86-94
- [c41] Sarath Sreedharan, Christian Muise, Subbarao Kambhampati: Generalizing Action Justification and Causal Links to Policies. ICAPS 2023: 417-426
- [c40] Malek Mechergui, Sarath Sreedharan: Goal Alignment: Re-analyzing Value Alignment Problems Using Human-Aware AI. AAMAS 2023: 2331-2333
- [c39] Zahra Zahedi, Mudit Verma, Sarath Sreedharan, Subbarao Kambhampati: Trust-Aware Planning: Modeling Trust Evolution in Iterated Human-Robot Interaction. HRI 2023: 281-289
- [c38] Manas Gaur, Efthymia Tsamoura, Sarath Sreedharan, Sudip Mittal: KiL 2023: 3rd International Workshop on Knowledge-infused Learning. KDD 2023: 5857-5858
- [c37] Lin Guan, Karthik Valmeekam, Sarath Sreedharan, Subbarao Kambhampati: Leveraging Pre-trained Large Language Models to Construct and Utilize World Models for Model-based Task Planning. NeurIPS 2023
- [c36] Sarath Sreedharan, Michael Katz: Optimistic Exploration in Reinforcement Learning Using Symbolic Model Estimates. NeurIPS 2023
- [c35] Karthik Valmeekam, Matthew Marquez, Alberto Olmo Hernandez, Sarath Sreedharan, Subbarao Kambhampati: PlanBench: An Extensible Benchmark for Evaluating Large Language Models on Planning and Reasoning about Change. NeurIPS 2023
- [c34] Karthik Valmeekam, Matthew Marquez, Sarath Sreedharan, Subbarao Kambhampati: On the Planning Abilities of Large Language Models - A Critical Investigation. NeurIPS 2023
- [c33] Indrajit Ray, Sarath Sreedharan, Rakesh Podder, Shadaab Kawnain Bashir, Indrakshi Ray: Explainable AI for Prioritizing and Deploying Defenses for Cyber-Physical System Resiliency. TPS-ISA 2023: 184-192
- [i33] Zahra Zahedi, Sarath Sreedharan, Subbarao Kambhampati: A Mental Model Based Theory of Trust. CoRR abs/2301.12569 (2023)
- [i32] Malek Mechergui, Sarath Sreedharan: Goal Alignment: A Human-Aware Account of Value Alignment Problem. CoRR abs/2302.00813 (2023)
- [i31] Karthik Valmeekam, Sarath Sreedharan, Matthew Marquez, Alberto Olmo Hernandez, Subbarao Kambhampati: On the Planning Abilities of Large Language Models (A Critical Investigation with a Proposed Benchmark). CoRR abs/2302.06706 (2023)
- [i30] Brittany Cates, Anagha Kulkarni, Sarath Sreedharan: Planning for Attacker Entrapment in Adversarial Settings. CoRR abs/2303.00822 (2023)
- [i29] Lin Guan, Karthik Valmeekam, Sarath Sreedharan, Subbarao Kambhampati: Leveraging Pre-trained Large Language Models to Construct and Utilize World Models for Model-based Task Planning. CoRR abs/2305.14909 (2023)
- [i28] Karthik Valmeekam, Matthew Marquez, Sarath Sreedharan, Subbarao Kambhampati: On the Planning Abilities of Large Language Models - A Critical Investigation. CoRR abs/2305.15771 (2023)
- [i27] Tathagata Chakraborti, Jungkoo Kang, Christian Muise, Sarath Sreedharan, Michael E. Walker, Daniel Szafir, Tom Williams: TOBY: A Tool for Exploring Data in Academic Survey Papers. CoRR abs/2306.10051 (2023)
- [i26] Turgay Caglar, Sirine Belhaj, Tathagata Chakraborti, Michael Katz, Sarath Sreedharan: Towards More Likely Models for AI Planning. CoRR abs/2311.13720 (2023)
- 2022
- [c32] Subbarao Kambhampati, Sarath Sreedharan, Mudit Verma, Yantian Zha, Lin Guan: Symbols as a Lingua Franca for Bridging Human-AI Chasm for Explainable and Advisable AI Systems. AAAI 2022: 12262-12267
- [c31] Karthik Valmeekam, Sarath Sreedharan, Sailik Sengupta, Subbarao Kambhampati: RADAR-X: An Interactive Mixed Initiative Planning Interface Pairing Contrastive Explanations and Revised Plan Suggestions. ICAPS 2022: 508-517
- [c30] Zahra Zahedi, Sarath Sreedharan, Mudit Verma, Subbarao Kambhampati: Modeling the Interplay between Human Trust and Monitoring. HRI 2022: 1119-1123
- [c29] Sarath Sreedharan, Utkarsh Soni, Mudit Verma, Siddharth Srivastava, Subbarao Kambhampati: Bridging the Gap: Providing Post-Hoc Symbolic Explanations for Sequential Decision-Making Problems with Inscrutable Representations. ICLR 2022
- [c28] Lin Guan, Sarath Sreedharan, Subbarao Kambhampati: Leveraging Approximate Symbolic Models for Reinforcement Learning via Skill Diversity. ICML 2022: 7949-7967
- [c27] Sarath Sreedharan, Pascal Bercher, Subbarao Kambhampati: On the Computational Complexity of Model Reconciliations. IJCAI 2022: 4657-4664
- [i25] Lin Guan, Sarath Sreedharan, Subbarao Kambhampati: Leveraging Approximate Symbolic Models for Reinforcement Learning via Skill Diversity. CoRR abs/2202.02886 (2022)
- [i24] Zahra Zahedi, Sarath Sreedharan, Subbarao Kambhampati: A Mental-Model Centric Landscape of Human-AI Symbiosis. CoRR abs/2202.09447 (2022)
- [i23] Karthik Valmeekam, Alberto Olmo Hernandez, Sarath Sreedharan, Subbarao Kambhampati: Large Language Models Still Can't Plan (A Benchmark for LLMs on Planning and Reasoning about Change). CoRR abs/2206.10498 (2022)
- [i22] Utkarsh Soni, Sarath Sreedharan, Mudit Verma, Lin Guan, Matthew Marquez, Subbarao Kambhampati: Towards customizable reinforcement learning agents: Enabling preference specification through online vocabulary expansion. CoRR abs/2210.15096 (2022)
- 2021
- [j3] Sarath Sreedharan, Tathagata Chakraborti, Subbarao Kambhampati: Foundations of explanations as model reconciliation. Artif. Intell. 301: 103558 (2021)
- [j2] Sarath Sreedharan, Siddharth Srivastava, Subbarao Kambhampati: Using state abstractions to compute personalized contrastive explanations for AI agent behavior. Artif. Intell. 301: 103570 (2021)
- [c26] Karthik Valmeekam, Sarath Sreedharan, Sailik Sengupta, Subbarao Kambhampati: RADAR-X: An Interactive Interface Pairing Contrastive Explanations with Revised Plan Suggestions. AAAI 2021: 16051-16053
- [c25] Sarath Sreedharan, Anagha Kulkarni, David E. Smith, Subbarao Kambhampati: A Unifying Bayesian Formulation of Measures of Interpretability in Human-AI Interaction. IJCAI 2021: 4602-4610
- [c24] Utkarsh Soni, Sarath Sreedharan, Subbarao Kambhampati: Not all users are the same: Providing personalized explanations for sequential decision making problems. IROS 2021: 6240-6247
- [i21] Sarath Sreedharan, Anagha Kulkarni, David E. Smith, Subbarao Kambhampati: A Unifying Bayesian Formulation of Measures of Interpretability in Human-AI. CoRR abs/2104.10743 (2021)
- [i20] Zahra Zahedi, Mudit Verma, Sarath Sreedharan, Subbarao Kambhampati: Trust-Aware Planning: Modeling Trust Evolution in Longitudinal Human-Robot Interaction. CoRR abs/2105.01220 (2021)
- [i19] Alberto Olmo Hernandez, Sarath Sreedharan, Subbarao Kambhampati: GPT3-to-plan: Extracting plans from text using GPT-3. CoRR abs/2106.07131 (2021)
- [i18] Utkarsh Soni, Sarath Sreedharan, Subbarao Kambhampati: Not all users are the same: Providing personalized explanations for sequential decision making problems. CoRR abs/2106.12207 (2021)
- [i17] Subbarao Kambhampati, Sarath Sreedharan, Mudit Verma, Yantian Zha, Lin Guan: Symbols as a Lingua Franca for Bridging Human-AI Chasm for Explainable and Advisable AI Systems. CoRR abs/2109.09904 (2021)
- 2020
- [c23] Sarath Sreedharan, Tathagata Chakraborti, Christian Muise, Subbarao Kambhampati: Hierarchical Expertise-Level Modeling for User Specific Robot-Behavior Explanations. AAAI 2020: 2518-2526
- [c22] Sarath Sreedharan, Siddharth Srivastava, Subbarao Kambhampati: TLdR: Policy Summarization for Factored SSP Problems Using Temporal Abstractions. ICAPS 2020: 272-280
- [c21] Sarath Sreedharan, Tathagata Chakraborti, Christian Muise, Yasaman Khazaeni, Subbarao Kambhampati: D3WA+ - A Case Study of XAIP in a Model Acquisition Task for Dialogue Planning. ICAPS 2020: 488-498
- [c20] Tathagata Chakraborti, Sarath Sreedharan, Subbarao Kambhampati: The Emerging Landscape of Explainable Automated Planning & Decision Making. IJCAI 2020: 4803-4811
- [c19] Anagha Kulkarni, Sarath Sreedharan, Sarah Keren, Tathagata Chakraborti, David E. Smith, Subbarao Kambhampati: Designing Environments Conducive to Interpretable Robot Behavior. IROS 2020: 10982-10989
- [i16] Sarath Sreedharan, Utkarsh Soni, Mudit Verma, Siddharth Srivastava, Subbarao Kambhampati: Bridging the Gap: Providing Post-Hoc Symbolic Explanations for Sequential Decision-Making Problems with Black Box Simulators. CoRR abs/2002.01080 (2020)
- [i15] Tathagata Chakraborti, Sarath Sreedharan, Subbarao Kambhampati: The Emerging Landscape of Explainable AI Planning and Decision Making. CoRR abs/2002.11697 (2020)
- [i14] Anagha Kulkarni, Sarath Sreedharan, Sarah Keren, Tathagata Chakraborti, David E. Smith, Subbarao Kambhampati: Designing Environments Conducive to Interpretable Robot Behavior. CoRR abs/2007.00820 (2020)
- [i13] Karthik Valmeekam, Sarath Sreedharan, Sailik Sengupta, Subbarao Kambhampati: RADAR-X: An Interactive Interface Pairing Contrastive Explanations with Revised Plan Suggestions. CoRR abs/2011.09644 (2020)
- [i12] Sarath Sreedharan, Tathagata Chakraborti, Yara Rizk, Yasaman Khazaeni: Explainable Composition of Aggregated Assistants. CoRR abs/2011.10707 (2020)
- [i11] Sarath Sreedharan, Anagha Kulkarni, Tathagata Chakraborti, David E. Smith, Subbarao Kambhampati: A Bayesian Account of Measures of Interpretability in Human-AI Interaction. CoRR abs/2011.10920 (2020)
2010 – 2019
- 2019
- [c18] Tathagata Chakraborti, Anagha Kulkarni, Sarath Sreedharan, David E. Smith, Subbarao Kambhampati: Explicability? Legibility? Predictability? Transparency? Privacy? Security? The Emerging Landscape of Interpretable Agent Behavior. ICAPS 2019: 86-96
- [c17] Tathagata Chakraborti, Sarath Sreedharan, Sachin Grover, Subbarao Kambhampati: Plan Explanations as Model Reconciliation. HRI 2019: 258-266
- [c16] Zahra Zahedi, Alberto Olmo Hernandez, Tathagata Chakraborti, Sarath Sreedharan, Subbarao Kambhampati: Towards Understanding User Preferences for Explanation Types in Model Reconciliation. HRI 2019: 648-649
- [c15] Sarath Sreedharan, Alberto Olmo Hernandez, Aditya Prasad Mishra, Subbarao Kambhampati: Model-Free Model Reconciliation. IJCAI 2019: 587-594
- [c14] Tathagata Chakraborti, Sarath Sreedharan, Subbarao Kambhampati: Balancing Explicability and Explanations in Human-Aware Planning. IJCAI 2019: 1335-1343
- [c13] Sarath Sreedharan, Siddharth Srivastava, David E. Smith, Subbarao Kambhampati: Why Can't You Do That HAL? Explaining Unsolvability of Planning Tasks. IJCAI 2019: 1422-1430
- [i10] Sarath Sreedharan, Alberto Olmo Hernandez, Aditya Prasad Mishra, Subbarao Kambhampati: Model-Free Model Reconciliation. CoRR abs/1903.07198 (2019)
- [i9] Sarath Sreedharan, Tathagata Chakraborti, Christian Muise, Subbarao Kambhampati: Planning with Explanatory Actions: A Joint Approach to Plan Explicability and Explanations in Human-Aware Planning. CoRR abs/1903.07269 (2019)
- [i8] Sarath Sreedharan, Siddharth Srivastava, David E. Smith, Subbarao Kambhampati: Why Couldn't You do that? Explaining Unsolvability of Classical Planning Problems in the Presence of Plan Advice. CoRR abs/1903.08218 (2019)
- 2018
- [c12] Sarath Sreedharan, Tathagata Chakraborti, Subbarao Kambhampati: Handling Model Uncertainty and Multiplicity in Explanations via Model Reconciliation. ICAPS 2018: 518-526
- [c11] Tathagata Chakraborti, Sarath Sreedharan, Subbarao Kambhampati: Explicability versus Explanations in Human-Aware Planning. AAMAS 2018: 2180-2182
- [c10] Sarath Sreedharan, Siddharth Srivastava, Subbarao Kambhampati: Hierarchical Expertise Level Modeling for User Specific Contrastive Explanations. IJCAI 2018: 4829-4836
- [c9] Tathagata Chakraborti, Sarath Sreedharan, Anagha Kulkarni, Subbarao Kambhampati: Projection-Aware Task Planning and Execution for Human-in-the-Loop Operation of Robots in a Mixed-Reality Workspace. IROS 2018: 4476-4482
- [i7] Tathagata Chakraborti, Sarath Sreedharan, Sachin Grover, Subbarao Kambhampati: Plan Explanations as Model Reconciliation - An Empirical Study. CoRR abs/1802.01013 (2018)
- [i6] Sarath Sreedharan, Siddharth Srivastava, Subbarao Kambhampati: Hierarchical Expertise-Level Modeling for User Specific Robot-Behavior Explanations. CoRR abs/1802.06895 (2018)
- [i5] Tathagata Chakraborti, Anagha Kulkarni, Sarath Sreedharan, David E. Smith, Subbarao Kambhampati: Explicability? Legibility? Predictability? Transparency? Privacy? Security? The Emerging Landscape of Interpretable Agent Behavior. CoRR abs/1811.09722 (2018)
- 2017
- [j1] Tuan Nguyen, Sarath Sreedharan, Subbarao Kambhampati: Robust planning with incomplete domain models. Artif. Intell. 245: 134-161 (2017)
- [c8] Sarath Sreedharan, Tathagata Chakraborti, Subbarao Kambhampati: Balancing Explicability and Explanation in Human-Aware Planning. AAAI Fall Symposia 2017: 61-68
- [c7] Sailik Sengupta, Tathagata Chakraborti, Sarath Sreedharan, Satya Gautam Vadlamudi, Subbarao Kambhampati: RADAR - A Proactive Decision Support System for Human-in-the-Loop Planning. AAAI Fall Symposia 2017: 269-276
- [c6] Sarath Sreedharan, Tathagata Chakraborti, Subbarao Kambhampati: Explanations as Model Reconciliation - A Multi-Agent Perspective. AAAI Fall Symposia 2017: 277-283
- [c5] Yu Zhang, Sarath Sreedharan, Anagha Kulkarni, Tathagata Chakraborti, Hankz Hankui Zhuo, Subbarao Kambhampati: Plan explicability and predictability for robot task planning. ICRA 2017: 1313-1320
- [c4] Tathagata Chakraborti, Sarath Sreedharan, Yu Zhang, Subbarao Kambhampati: Plan Explanations as Model Reconciliation: Moving Beyond Explanation as Soliloquy. IJCAI 2017: 156-163
- [i4] Tathagata Chakraborti, Sarath Sreedharan, Yu Zhang, Subbarao Kambhampati: Explanation Generation as Model Reconciliation in Multi-Model Planning. CoRR abs/1701.08317 (2017)
- [i3] Tathagata Chakraborti, Sarath Sreedharan, Anagha Kulkarni, Subbarao Kambhampati: Alternative Modes of Interaction in Proximal Human-in-the-Loop Operation of Robots. CoRR abs/1703.08930 (2017)
- [i2] Sarath Sreedharan, Tathagata Chakraborti, Subbarao Kambhampati: Balancing Explicability and Explanation in Human-Aware Planning. CoRR abs/1708.00543 (2017)
- 2016
- [c3] Yu Zhang, Sarath Sreedharan, Subbarao Kambhampati: A Formal Analysis of Required Cooperation in Multi-Agent Planning. ICAPS 2016: 335-344
- [c2] Tathagata Chakraborti, Sarath Sreedharan, Sailik Sengupta, T. K. Satish Kumar, Subbarao Kambhampati: Compliant Conditions for Polynomial Time Approximation of Operator Counts. SOCS 2016: 123-124
- [i1] Tathagata Chakraborti, Sarath Sreedharan, Sailik Sengupta, T. K. Satish Kumar, Subbarao Kambhampati: Compliant Conditions for Polynomial Time Approximation of Operator Counts. CoRR abs/1605.07989 (2016)
- 2015
- [c1] Yu Zhang, Sarath Sreedharan, Subbarao Kambhampati: Capability Models and Their Applications in Planning. AAMAS 2015: 1151-1159