Umang Bhatt
2020 – today
- 2024
- [j3] Umang Bhatt, Holli Sargeant: When Should Algorithms Resign? A Proposal for AI Governance. Computer 57(10): 99-103 (2024)
- [i38] Sreejan Kumar, Raja Marjieh, Byron Zhang, Declan Campbell, Michael Y. Hu, Umang Bhatt, Brenden M. Lake, Thomas L. Griffiths: Comparing Abstraction in Humans and Large Language Models Using Multimodal Serial Reproduction. CoRR abs/2402.03618 (2024)
- [i37] Umang Bhatt, Holli Sargeant: When Should Algorithms Resign? CoRR abs/2402.18326 (2024)
- [i36] Ilia Sucholutsky, Katherine M. Collins, Maya Malaviya, Nori Jacoby, Weiyang Liu, Theodore R. Sumers, Michalis Korakakis, Umang Bhatt, Mark K. Ho, Joshua B. Tenenbaum, Bradley C. Love, Zachary A. Pardos, Adrian Weller, Thomas L. Griffiths: Representational Alignment Supports Effective Machine Teaching. CoRR abs/2406.04302 (2024)
- [i35] Sanyam Kapoor, Nate Gruver, Manley Roberts, Katherine M. Collins, Arka Pal, Umang Bhatt, Adrian Weller, Samuel Dooley, Micah Goldblum, Andrew Gordon Wilson: Large Language Models Must Be Taught to Know What They Don't Know. CoRR abs/2406.08391 (2024)
- [i34] Katherine M. Collins, Valerie Chen, Ilia Sucholutsky, Hannah Rose Kirk, Malak Sadek, Holli Sargeant, Ameet Talwalkar, Adrian Weller, Umang Bhatt: Modulating Language Model Experiences through Frictions. CoRR abs/2407.12804 (2024)
- [i33] Katherine M. Collins, Ilia Sucholutsky, Umang Bhatt, Kartik Chandra, Lionel Wong, Mina Lee, Cedegao E. Zhang, Tan Zhi-Xuan, Mark K. Ho, Vikash Mansinghka, Adrian Weller, Joshua B. Tenenbaum, Thomas L. Griffiths: Building Machines that Learn and Think with People. CoRR abs/2408.03943 (2024)
- 2023
- [j2] Valerie Chen, Umang Bhatt, Hoda Heidari, Adrian Weller, Ameet Talwalkar: Perspectives on incorporating expert feedback into model updates. Patterns 4(7): 100780 (2023)
- [c27] Javier Abad Martinez, Umang Bhatt, Adrian Weller, Giovanni Cherubin: Approximating Full Conformal Prediction at Scale via Influence Functions. AAAI 2023: 6631-6639
- [c26] Mateo Espinosa Zarlenga, Pietro Barbiero, Zohreh Shams, Dmitry Kazhdan, Umang Bhatt, Adrian Weller, Mateja Jamnik: Towards Robust Metrics for Concept Representation Evaluation. AAAI 2023: 11791-11799
- [c25] Katherine Maeve Collins, Matthew Barker, Mateo Espinosa Zarlenga, Naveen Raman, Umang Bhatt, Mateja Jamnik, Ilia Sucholutsky, Adrian Weller, Krishnamurthy Dvijotham: Human Uncertainty in Concept-Based AI Systems. AIES 2023: 869-889
- [c24] Zeju Qiu, Weiyang Liu, Tim Z. Xiao, Zhen Liu, Umang Bhatt, Yucen Luo, Adrian Weller, Bernhard Schölkopf: Iterative Teaching by Data Hallucination. AISTATS 2023: 9892-9913
- [c23] Matthew Barker, Emma Kallina, Dhananjay Ashok, Katherine M. Collins, Ashley Casovan, Adrian Weller, Ameet Talwalkar, Valerie Chen, Umang Bhatt: FeedbackLogs: Recording and Incorporating Stakeholder Feedback into Machine Learning Pipelines. EAAMO 2023: 19:1-19:15
- [c22] Alan Chan, Rebecca Salganik, Alva Markelius, Chris Pang, Nitarshan Rajkumar, Dmitrii Krasheninnikov, Lauro Langosco, Zhonghao He, Yawen Duan, Micah Carroll, Michelle Lin, Alex Mayhew, Katherine M. Collins, Maryam Molamohammadi, John Burden, Wanru Zhao, Shalaleh Rismani, Konstantinos Voudouris, Umang Bhatt, Adrian Weller, David Krueger, Tegan Maharaj: Harms from Increasingly Agentic Algorithmic Systems. FAccT 2023: 651-666
- [c21] Vivek Palaniappan, Matthew Ashman, Katherine M. Collins, Juyeon Heo, Adrian Weller, Umang Bhatt: GeValDi: Generative Validation of Discriminative Models. Tiny Papers @ ICLR 2023
- [c20] Katherine M. Collins, Umang Bhatt, Weiyang Liu, Vihari Piratla, Ilia Sucholutsky, Bradley C. Love, Adrian Weller: Human-in-the-Loop Mixup. UAI 2023: 454-464
- [c19] Ilia Sucholutsky, Ruairidh M. Battleday, Katherine M. Collins, Raja Marjieh, Joshua C. Peterson, Pulkit Singh, Umang Bhatt, Nori Jacoby, Adrian Weller, Thomas L. Griffiths: On the informativeness of supervision signals. UAI 2023: 2036-2046
- [i32] Mateo Espinosa Zarlenga, Pietro Barbiero, Zohreh Shams, Dmitry Kazhdan, Umang Bhatt, Adrian Weller, Mateja Jamnik: Towards Robust Metrics for Concept Representation Evaluation. CoRR abs/2301.10367 (2023)
- [i31] Alan Chan, Rebecca Salganik, Alva Markelius, Chris Pang, Nitarshan Rajkumar, Dmitrii Krasheninnikov, Lauro Langosco, Zhonghao He, Yawen Duan, Micah Carroll, Michelle Lin, Alex Mayhew, Katherine M. Collins, Maryam Molamohammadi, John Burden, Wanru Zhao, Shalaleh Rismani, Konstantinos Voudouris, Umang Bhatt, Adrian Weller, David Krueger, Tegan Maharaj: Harms from Increasingly Agentic Algorithmic Systems. CoRR abs/2302.10329 (2023)
- [i30] Katherine M. Collins, Matthew Barker, Mateo Espinosa Zarlenga, Naveen Raman, Umang Bhatt, Mateja Jamnik, Ilia Sucholutsky, Adrian Weller, Krishnamurthy Dvijotham: Human Uncertainty in Concept-Based AI Systems. CoRR abs/2303.12872 (2023)
- [i29] Umang Bhatt, Valerie Chen, Katherine M. Collins, Parameswaran Kamalaruban, Emma Kallina, Adrian Weller, Ameet Talwalkar: Learning Personalized Decision Support Policies. CoRR abs/2304.06701 (2023)
- [i28] Katherine M. Collins, Albert Q. Jiang, Simon Frieder, Lionel Wong, Miri Zilka, Umang Bhatt, Thomas Lukasiewicz, Yuhuai Wu, Joshua B. Tenenbaum, William Hart, Timothy Gowers, Wenda Li, Adrian Weller, Mateja Jamnik: Evaluating Language Models for Mathematics through Interactions. CoRR abs/2306.01694 (2023)
- [i27] Matthew Barker, Katherine M. Collins, Krishnamurthy Dvijotham, Adrian Weller, Umang Bhatt: Selective Concept Models: Permitting Stakeholder Customisation at Test-Time. CoRR abs/2306.08424 (2023)
- [i26] Matthew Barker, Emma Kallina, Dhananjay Ashok, Katherine M. Collins, Ashley Casovan, Adrian Weller, Ameet Talwalkar, Valerie Chen, Umang Bhatt: FeedbackLogs: Recording and Incorporating Stakeholder Feedback into Machine Learning Pipelines. CoRR abs/2307.15475 (2023)
- 2022
- [j1] John Zerilli, Umang Bhatt, Adrian Weller: How transparency modulates trust in artificial intelligence. Patterns 3(4): 100455 (2022)
- [c18] Dan Ley, Umang Bhatt, Adrian Weller: Diverse, Global and Amortised Counterfactual Explanations for Uncertainty Estimates. AAAI 2022: 7390-7398
- [c17] Julius von Kügelgen, Amir-Hossein Karimi, Umang Bhatt, Isabel Valera, Adrian Weller, Bernhard Schölkopf: On the Fairness of Causal Algorithmic Recourse. AAAI 2022: 9584-9594
- [c16] Yuxin Xiao, Paul Pu Liang, Umang Bhatt, Willie Neiswanger, Ruslan Salakhutdinov, Louis-Philippe Morency: Uncertainty Quantification with Pre-trained Language Models: A Large-Scale Empirical Analysis. EMNLP (Findings) 2022: 7273-7284
- [c15] Katherine M. Collins, Umang Bhatt, Adrian Weller: Eliciting and Learning with Soft Labels from Every Annotator. HCOMP 2022: 40-52
- [c14] Varun Babbar, Umang Bhatt, Adrian Weller: On the Utility of Prediction Sets in Human-AI Teams. IJCAI 2022: 2457-2463
- [i25] Javier Abad Martinez, Umang Bhatt, Adrian Weller, Giovanni Cherubin: Approximating Full Conformal Prediction at Scale via Influence Functions. CoRR abs/2202.01315 (2022)
- [i24] Varun Babbar, Umang Bhatt, Adrian Weller: On the Utility of Prediction Sets in Human-AI Teams. CoRR abs/2205.01411 (2022)
- [i23] Valerie Chen, Umang Bhatt, Hoda Heidari, Adrian Weller, Ameet Talwalkar: Perspectives on Incorporating Expert Feedback into Model Updates. CoRR abs/2205.06905 (2022)
- [i22] Katherine M. Collins, Umang Bhatt, Adrian Weller: Eliciting and Learning with Soft Labels from Every Annotator. CoRR abs/2207.00810 (2022)
- [i21] Ana Lucic, Sheeraz Ahmad, Amanda Furtado Brinhosa, Vera Liao, Himani Agrawal, Umang Bhatt, Krishnaram Kenthapadi, Alice Xiang, Maarten de Rijke, Nicholas Drabowski: Towards the Use of Saliency Maps for Explaining Low-Quality Electrocardiograms to End Users. CoRR abs/2207.02726 (2022)
- [i20] Yuxin Xiao, Paul Pu Liang, Umang Bhatt, Willie Neiswanger, Ruslan Salakhutdinov, Louis-Philippe Morency: Uncertainty Quantification with Pre-trained Language Models: A Large-Scale Empirical Analysis. CoRR abs/2210.04714 (2022)
- [i19] Zeju Qiu, Weiyang Liu, Tim Z. Xiao, Zhen Liu, Umang Bhatt, Yucen Luo, Adrian Weller, Bernhard Schölkopf: Iterative Teaching by Data Hallucination. CoRR abs/2210.17467 (2022)
- [i18] Katherine M. Collins, Umang Bhatt, Weiyang Liu, Vihari Piratla, Bradley C. Love, Adrian Weller: Web-based Elicitation of Human Perception on mixup Data. CoRR abs/2211.01202 (2022)
- 2021
- [c13] Matt Chapman-Rounds, Umang Bhatt, Erik Pazos, Marc-Andre Schulz, Konstantinos Georgatzis: FIMAP: Feature Importance by Minimal Adversarial Perturbation. AAAI 2021: 11433-11441
- [c12] Umang Bhatt, Javier Antorán, Yunfeng Zhang, Q. Vera Liao, Prasanna Sattigeri, Riccardo Fogliato, Gabrielle Gauthier Melançon, Ranganath Krishnan, Jason Stanley, Omesh Tickoo, Lama Nachman, Rumi Chunara, Madhulika Srikumar, Adrian Weller, Alice Xiang: Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty. AIES 2021: 401-413
- [c11] Umang Bhatt, Adrian Weller, Giovanni Cherubin: Fast conformal classification using influence functions. COPA 2021: 303-305
- [c10] Javier Antorán, Umang Bhatt, Tameem Adel, Adrian Weller, José Miguel Hernández-Lobato: Getting a CLUE: A Method for Explaining Uncertainty Estimates. ICLR 2021
- [i17] Ana Lucic, Madhulika Srikumar, Umang Bhatt, Alice Xiang, Ankur Taly, Q. Vera Liao, Maarten de Rijke: A Multistakeholder Approach Towards Evaluating AI Transparency Mechanisms. CoRR abs/2103.14976 (2021)
- [i16] Dan Ley, Umang Bhatt, Adrian Weller: δ-CLUE: Diverse Sets of Explanations for Uncertainty Estimates. CoRR abs/2104.06323 (2021)
- [i15] Andrei Margeloiu, Matthew Ashman, Umang Bhatt, Yanzhi Chen, Mateja Jamnik, Adrian Weller: Do Concept Bottleneck Models Learn as Intended? CoRR abs/2105.04289 (2021)
- [i14] Umang Bhatt, Isabel Chien, Muhammad Bilal Zafar, Adrian Weller: DIVINE: Diverse Influential Training Points for Data Visualization and Model Refinement. CoRR abs/2107.05978 (2021)
- [i13] Dan Ley, Umang Bhatt, Adrian Weller: Diverse, Global and Amortised Counterfactual Explanations for Uncertainty Estimates. CoRR abs/2112.02646 (2021)
- 2020
- [c9] Botty Dimanov, Umang Bhatt, Mateja Jamnik, Adrian Weller: You Shouldn't Trust Me: Learning Models Which Conceal Unfairness From Multiple Explanation Methods. SafeAI@AAAI 2020: 63-73
- [c8] Botty Dimanov, Umang Bhatt, Mateja Jamnik, Adrian Weller: You Shouldn't Trust Me: Learning Models Which Conceal Unfairness from Multiple Explanation Methods. ECAI 2020: 2473-2480
- [c7] Umang Bhatt, Alice Xiang, Shubham Sharma, Adrian Weller, Ankur Taly, Yunhan Jia, Joydeep Ghosh, Ruchir Puri, José M. F. Moura, Peter Eckersley: Explainable machine learning in deployment. FAT* 2020: 648-657
- [c6] Brian Davis, Umang Bhatt, Kartikeya Bhardwaj, Radu Marculescu, José M. F. Moura: On Network Science and Mutual Information for Explaining Deep Neural Networks. ICASSP 2020: 8399-8403
- [c5] Umang Bhatt, Adrian Weller, José M. F. Moura: Evaluating and Aggregating Feature-based Model Explanations. IJCAI 2020: 3016-3022
- [i12] Umang Bhatt, Adrian Weller, José M. F. Moura: Evaluating and Aggregating Feature-based Model Explanations. CoRR abs/2005.00631 (2020)
- [i11] Javier Antorán, Umang Bhatt, Tameem Adel, Adrian Weller, José Miguel Hernández-Lobato: Getting a CLUE: A Method for Explaining Uncertainty Estimates. CoRR abs/2006.06848 (2020)
- [i10] Umang Bhatt, McKane Andrus, Adrian Weller, Alice Xiang: Machine Learning Explainability for External Stakeholders. CoRR abs/2007.05408 (2020)
- [i9] Julius von Kügelgen, Umang Bhatt, Amir-Hossein Karimi, Isabel Valera, Adrian Weller, Bernhard Schölkopf: On the Fairness of Causal Algorithmic Recourse. CoRR abs/2010.06529 (2020)
- [i8] Umang Bhatt, Yunfeng Zhang, Javier Antorán, Q. Vera Liao, Prasanna Sattigeri, Riccardo Fogliato, Gabrielle Gauthier Melançon, Ranganath Krishnan, Jason Stanley, Omesh Tickoo, Lama Nachman, Rumi Chunara, Adrian Weller, Alice Xiang: Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty. CoRR abs/2011.07586 (2020)
2010 – 2019
- 2019
- [c4] Umang Bhatt, Pradeep Ravikumar, José M. F. Moura: Building Human-Machine Trust via Interpretability. AAAI 2019: 9919-9920
- [c3] Umang Bhatt, Brian Davis, José M. F. Moura: Diagnostic Model Explanations: A Medical Narrative. AAAI Spring Symposium: Interpretable AI for Well-being 2019
- [c2] Aaron M. Roth, Samantha Reig, Umang Bhatt, Jonathan Shulgach, Tamara Amin, Afsaneh Doryab, Fei Fang, Manuela Veloso: A Robot's Expressive Language Affects Human Strategy and Perceptions in a Competitive Game. RO-MAN 2019: 1-8
- [i7] Brian Davis, Umang Bhatt, Kartikeya Bhardwaj, Radu Marculescu, José M. F. Moura: NIF: A Framework for Quantifying Neural Information Flow in Deep Networks. CoRR abs/1901.08557 (2019)
- [i6] Umang Bhatt, Pradeep Ravikumar, José M. F. Moura: Towards Aggregating Weighted Feature Attributions. CoRR abs/1901.10040 (2019)
- [i5] Umang Bhatt, Alice Xiang, Shubham Sharma, Adrian Weller, Ankur Taly, Yunhan Jia, Joydeep Ghosh, Ruchir Puri, José M. F. Moura, Peter Eckersley: Explainable Machine Learning in Deployment. CoRR abs/1909.06342 (2019)
- [i4] Aaron M. Roth, Samantha Reig, Umang Bhatt, Jonathan Shulgach, Tamara Amin, Afsaneh Doryab, Fei Fang, Manuela Veloso: A Robot's Expressive Language Affects Human Strategy and Perceptions in a Competitive Game. CoRR abs/1910.11459 (2019)
- 2018
- [c1] Umang Bhatt: Maintaining the Humanity of Our Models. AAAI Spring Symposia 2018
- [i3] Aaron M. Roth, Umang Bhatt, Tamara Amin, Afsaneh Doryab, Fei Fang, Manuela M. Veloso: The Impact of Humanoid Affect Expression on Human Behavior in a Game-Theoretic Setting. CoRR abs/1806.03671 (2018)
- 2017
- [i2] Umang Bhatt, Shouvik Mani, Edgar Xi, J. Zico Kolter: Intelligent Pothole Detection and Road Condition Assessment. CoRR abs/1710.02595 (2017)
- [i1] Umang Bhatt: Maintaining The Humanity of Our Models. CoRR abs/1711.05791 (2017)
last updated on 2025-01-09 13:24 CET by the dblp team
all metadata released as open data under CC0 1.0 license