


Harsha Nori
2020 – today
- 2025
- [i22] Saibo Geng, Hudson Cooper, Michal Moskal, Samuel Jenkins, Julian Berman, Nathan Ranchin, Robert West, Eric Horvitz, Harsha Nori: Generating Structured Outputs from Language Models: Benchmark and Studies. CoRR abs/2501.10868 (2025)
- 2024
- [j2] Tomas M. Bosschieter, Zifei Xu, Hui Lan, Benjamin J. Lengerich, Harsha Nori, Ian S. Painter, Vivienne Souter, Rich Caruana: Interpretable Predictive Models to Understand Risk Factors for Maternal and Fetal Outcomes. J. Heal. Informatics Res. 8(1): 65-87 (2024)
- [c14] Zinan Lin, Sivakanth Gopi, Janardhan Kulkarni, Harsha Nori, Sergey Yekhanin: Differentially Private Synthetic Data via Foundation Model APIs 1: Images. ICLR 2024
- [c13] Chulin Xie, Zinan Lin, Arturs Backurs, Sivakanth Gopi, Da Yu, Huseyin A. Inan, Harsha Nori, Haotian Jiang, Huishuai Zhang, Yin Tat Lee, Bo Li, Sergey Yekhanin: Differentially Private Synthetic Data via Foundation Model APIs 2: Text. ICML 2024
- [i21] Sebastian Bordt, Benjamin J. Lengerich, Harsha Nori, Rich Caruana: Data Science with LLMs and Interpretable Models. CoRR abs/2402.14474 (2024)
- [i20] Chulin Xie, Zinan Lin, Arturs Backurs, Sivakanth Gopi, Da Yu, Huseyin A. Inan, Harsha Nori, Haotian Jiang, Huishuai Zhang, Yin Tat Lee, Bo Li, Sergey Yekhanin: Differentially Private Synthetic Data via Foundation Model APIs 2: Text. CoRR abs/2403.01749 (2024)
- [i19] Sebastian Bordt, Harsha Nori, Rich Caruana: Elephants Never Forget: Testing Language Models for Memorization of Tabular Data. CoRR abs/2403.06644 (2024)
- [i18] Sebastian Bordt, Harsha Nori, Vanessa Rodrigues, Besmira Nushi, Rich Caruana: Elephants Never Forget: Memorization and Learning of Tabular Data in Large Language Models. CoRR abs/2404.06209 (2024)
- [i17] Andreas Mueller, Julien Siems, Harsha Nori, David Salinas, Arber Zela, Rich Caruana, Frank Hutter: GAMformer: In-Context Learning for Generalized Additive Models. CoRR abs/2410.04560 (2024)
- [i16] Harsha Nori, Naoto Usuyama, Nicholas King, Scott Mayer McKinney, Xavier Fernandes, Sheng Zhang, Eric Horvitz: From Medprompt to o1: Exploration of Run-Time Strategies for Medical Challenge Problems and Beyond. CoRR abs/2411.03590 (2024)
- [i15] Kyle O'Brien, David Majercak, Xavier Fernandes, Richard Edgar, Jingya Chen, Harsha Nori, Dean Carignan, Eric Horvitz, Forough Poursabzi-Sangdeh: Steering Language Model Refusal with Sparse Autoencoders. CoRR abs/2411.11296 (2024)
- 2023
- [c12] Charvi Rastogi, Marco Túlio Ribeiro, Nicholas King, Harsha Nori, Saleema Amershi: Supporting Human-AI Collaboration in Auditing LLMs with LLMs. AIES 2023: 913-926
- [i14] Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott M. Lundberg, Harsha Nori, Hamid Palangi, Marco Túlio Ribeiro, Yi Zhang: Sparks of Artificial General Intelligence: Early experiments with GPT-4. CoRR abs/2303.12712 (2023)
- [i13] Harsha Nori, Nicholas King, Scott Mayer McKinney, Dean Carignan, Eric Horvitz: Capabilities of GPT-4 on Medical Challenge Problems. CoRR abs/2303.13375 (2023)
- [i12] Zinan Lin, Sivakanth Gopi, Janardhan Kulkarni, Harsha Nori, Sergey Yekhanin: Differentially Private Synthetic Data via Foundation Model APIs 1: Images. CoRR abs/2305.15560 (2023)
- [i11] Benjamin J. Lengerich, Sebastian Bordt, Harsha Nori, Mark E. Nunnally, Yin Aphinyanaphongs, Manolis Kellis, Rich Caruana: LLMs Understand Glass-Box Models, Discover Surprises, and Suggest Repairs. CoRR abs/2308.01157 (2023)
- [i10] Tomas M. Bosschieter, Zifei Xu, Hui Lan, Benjamin J. Lengerich, Harsha Nori, Ian S. Painter, Vivienne Souter, Rich Caruana: Interpretable Predictive Models to Understand Risk Factors for Maternal and Fetal Outcomes. CoRR abs/2310.10203 (2023)
- [i9] Harsha Nori, Yin Tat Lee, Sheng Zhang, Dean Carignan, Richard Edgar, Nicolò Fusi, Nicholas King, Jonathan Larson, Yuanzhi Li, Weishung Liu, Renqian Luo, Scott Mayer McKinney, Robert Osazuwa Ness, Hoifung Poon, Tao Qin, Naoto Usuyama, Christopher M. White, Eric Horvitz: Can Generalist Foundation Models Outcompete Special-Purpose Tuning? Case Study in Medicine. CoRR abs/2311.16452 (2023)
- 2022
- [c11] Fengshi Niu, Harsha Nori, Brian Quistorff, Rich Caruana, Donald Ngwe, Aadharsh Kannan: Differentially Private Estimation of Heterogeneous Causal Effects. CLeaR 2022: 618-633
- [c10] Zijie J. Wang, Alex Kale, Harsha Nori, Peter Stella, Mark E. Nunnally, Duen Horng Chau, Mihaela Vorvoreanu, Jennifer Wortman Vaughan, Rich Caruana: Interpretability, Then What? Editing Machine Learning Models to Reflect Human Knowledge and Values. KDD 2022: 4132-4142
- [c9] Rich Caruana, Harsha Nori: Why Data Scientists Prefer Glassbox Machine Learning: Algorithms, Differential Privacy, Editing and Bias Mitigation. KDD 2022: 4776-4777
- [c8] Qinghao Hu, Harsha Nori, Peng Sun, Yonggang Wen, Tianwei Zhang: Primo: Practical Learning-Augmented Systems with Interpretable Models. USENIX ATC 2022: 519-538
- [i8] Fengshi Niu, Harsha Nori, Brian Quistorff, Rich Caruana, Donald Ngwe, Aadharsh Kannan: Differentially Private Estimation of Heterogeneous Causal Effects. CoRR abs/2202.11043 (2022)
- [i7] Zijie J. Wang, Alex Kale, Harsha Nori, Peter Stella, Mark E. Nunnally, Duen Horng Chau, Mihaela Vorvoreanu, Jennifer Wortman Vaughan, Rich Caruana: Interpretability, Then What? Editing Machine Learning Models to Reflect Human Knowledge and Values. CoRR abs/2206.15465 (2022)
- [i6] Tomas M. Bosschieter, Zifei Xu, Hui Lan, Benjamin J. Lengerich, Harsha Nori, Kristin Sitcov, Vivienne Souter, Rich Caruana: Using Interpretable Machine Learning to Predict Maternal and Fetal Outcomes. CoRR abs/2207.05322 (2022)
- 2021
- [j1] Alex Okeson, Rich Caruana, Nick Craswell, Kori Inkpen, Scott M. Lundberg, Harsha Nori, Hanna M. Wallach, Jennifer Wortman Vaughan: Summarize with Caution: Comparing Global Feature Attributions. IEEE Data Eng. Bull. 44(4): 14-27 (2021)
- [c7] Harsha Nori, Rich Caruana, Zhiqi Bu, Judy Hanwen Shen, Janardhan Kulkarni: Accuracy, Interpretability, and Differential Privacy via Explainable Boosting. ICML 2021: 8227-8237
- [c6] Harmanpreet Kaur, Harsha Nori, Samuel Jenkins, Rich Caruana, Hanna M. Wallach, Jennifer Wortman Vaughan: Interpreting Interpretability: Understanding Data Scientists' Use of Interpretability Tools for Machine Learning. DaSH@KDD 2021
- [c5] Zhi Chen, Sarah Tan, Harsha Nori, Kori Inkpen, Yin Lou, Rich Caruana: Using Explainable Boosting Machines (EBMs) to Detect Common Flaws in Data. PKDD/ECML Workshops (1) 2021: 534-551
- [i5] Harsha Nori, Rich Caruana, Zhiqi Bu, Judy Hanwen Shen, Janardhan Kulkarni: Accuracy, Interpretability, and Differential Privacy via Explainable Boosting. CoRR abs/2106.09680 (2021)
- [i4] Zijie J. Wang, Alex Kale, Harsha Nori, Peter Stella, Mark E. Nunnally, Duen Horng Chau, Mihaela Vorvoreanu, Jennifer Wortman Vaughan, Rich Caruana: GAM Changer: Editing Generalized Additive Models with Interactive Visualization. CoRR abs/2112.03245 (2021)
- 2020
- [c4] Harmanpreet Kaur, Harsha Nori, Samuel Jenkins, Rich Caruana, Hanna M. Wallach, Jennifer Wortman Vaughan: Interpreting Interpretability: Understanding Data Scientists' Use of Interpretability Tools for Machine Learning. CHI 2020: 1-14
- [c3] Rich Caruana, Scott M. Lundberg, Marco Túlio Ribeiro, Harsha Nori, Samuel Jenkins: Intelligible and Explainable Machine Learning: Best Practices and Practical Challenges. KDD 2020: 3511-3512
2010 – 2019
- 2019
- [c2] Joshua Allen, Bolin Ding, Janardhan Kulkarni, Harsha Nori, Olga Ohrimenko, Sergey Yekhanin: An Algorithmic Framework For Differentially Private Data Analysis on Trusted Processors. NeurIPS 2019: 13635-13646
- [i3] Harsha Nori, Samuel Jenkins, Paul Koch, Rich Caruana: InterpretML: A Unified Framework for Machine Learning Interpretability. CoRR abs/1909.09223 (2019)
- 2018
- [c1] Bolin Ding, Harsha Nori, Paul Li, Joshua Allen: Comparing Population Means Under Local Differential Privacy: With Significance and Power. AAAI 2018: 26-33
- [i2] Bolin Ding, Harsha Nori, Paul Li, Joshua Allen: Comparing Population Means under Local Differential Privacy: with Significance and Power. CoRR abs/1803.09027 (2018)
- [i1] Joshua Allen, Bolin Ding, Janardhan Kulkarni, Harsha Nori, Olga Ohrimenko, Sergey Yekhanin: An Algorithmic Framework For Differentially Private Data Analysis on Trusted Processors. CoRR abs/1807.00736 (2018)
last updated on 2025-02-24 22:31 CET by the dblp team
all metadata released as open data under CC0 1.0 license