Nicholas Carlini
Person information
- affiliation: Google, USA
2020 – today
- 2024
- [c64]Andong Hua, Jindong Gu, Zhiyu Xue, Nicholas Carlini, Eric Wong, Yao Qin:
Initialization Matters for Adversarial Transfer Learning. CVPR 2024: 24831-24840 - [c63]Nicholas Carlini, Daniel Paleka, Krishnamurthy Dj Dvijotham, Thomas Steinke, Jonathan Hayase, A. Feder Cooper, Katherine Lee, Matthew Jagielski, Milad Nasr, Arthur Conmy, Eric Wallace, David Rolnick, Florian Tramèr:
Stealing part of a production language model. ICML 2024 - [c62]Florian Tramèr, Gautam Kamath, Nicholas Carlini:
Position: Considerations for Differentially Private Learning with Large-Scale Public Pretraining. ICML 2024 - [c61]Edoardo Debenedetti, Nicholas Carlini, Florian Tramèr:
Evading Black-box Classifiers Without Breaking Eggs. SaTML 2024: 408-424 - [c60]Nicholas Carlini, Matthew Jagielski, Christopher A. Choquette-Choo, Daniel Paleka, Will Pearce, Hyrum S. Anderson, Andreas Terzis, Kurt Thomas, Florian Tramèr:
Poisoning Web-Scale Training Datasets is Practical. SP 2024: 407-425 - [c59]Edoardo Debenedetti, Giorgio Severi, Milad Nasr, Christopher A. Choquette-Choo, Matthew Jagielski, Eric Wallace, Nicholas Carlini, Florian Tramèr:
Privacy Side Channels in Machine Learning Systems. USENIX Security Symposium 2024 - [i88]Jonathan Hayase, Ema Borevkovic, Nicholas Carlini, Florian Tramèr, Milad Nasr:
Query-Based Adversarial Prompt Generation. CoRR abs/2402.12329 (2024) - [i87]Nicholas Carlini, Daniel Paleka, Krishnamurthy (Dj) Dvijotham, Thomas Steinke, Jonathan Hayase, A. Feder Cooper, Katherine Lee, Matthew Jagielski, Milad Nasr, Arthur Conmy, Eric Wallace, David Rolnick, Florian Tramèr:
Stealing Part of a Production Language Model. CoRR abs/2403.06634 (2024) - [i86]Sanghyun Hong, Nicholas Carlini, Alexey Kurakin:
Diffusion Denoising as a Certified Defense against Clean-label Poisoning. CoRR abs/2403.11981 (2024) - [i85]Yuxin Wen, Leo Marchyok, Sanghyun Hong, Jonas Geiping, Tom Goldstein, Nicholas Carlini:
Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models. CoRR abs/2404.01231 (2024) - [i84]Yiming Zhang, Avi Schwarzschild, Nicholas Carlini, Zico Kolter, Daphne Ippolito:
Forcing Diffuse Distributions out of Language Models. CoRR abs/2404.10859 (2024) - [i83]Nicholas Carlini:
Cutting through buggy adversarial example defenses: fixing 1 line of code breaks Sabre. CoRR abs/2405.03672 (2024) - [i82]Robert Hönig, Javier Rando, Nicholas Carlini, Florian Tramèr:
Adversarial Perturbations Cannot Reliably Protect Artists From Generative AI. CoRR abs/2406.12027 (2024) - [i81]Nicholas Carlini, Jorge Chávez-Saab, Anna Hambitzer, Francisco Rodríguez-Henríquez, Adi Shamir:
Polynomial Time Cryptanalytic Extraction of Deep Neural Networks in the Hard-Label Setting. CoRR abs/2410.05750 (2024) - [i80]Yiming Zhang, Javier Rando, Ivan Evtimov, Jianfeng Chi, Eric Michael Smith, Nicholas Carlini, Florian Tramèr, Daphne Ippolito:
Persistent Pre-Training Poisoning of LLMs. CoRR abs/2410.13722 (2024) - [i79]Nicholas Carlini, Milad Nasr:
Remote Timing Attacks on Efficient Language Model Inference. CoRR abs/2410.17175 (2024) - [i78]Itay Yona, Ilia Shumailov, Jamie Hayes, Nicholas Carlini:
Stealing User Prompts from Mixture of Experts. CoRR abs/2410.22884 (2024) - [i77]Nicholas Carlini, Jorge Chávez-Saab, Anna Hambitzer, Francisco Rodríguez-Henríquez, Adi Shamir:
Polynomial Time Cryptanalytic Extraction of Deep Neural Networks in the Hard-Label Setting. IACR Cryptol. ePrint Arch. 2024: 1580 (2024) - 2023
- [j1]Clark W. Barrett, Brad Boyd, Elie Bursztein, Nicholas Carlini, Brad Chen, Jihye Choi, Amrita Roy Chowdhury, Mihai Christodorescu, Anupam Datta, Soheil Feizi, Kathleen Fisher, Tatsunori Hashimoto, Dan Hendrycks, Somesh Jha, Daniel Kang, Florian Kerschbaum, Eric Mitchell, John C. Mitchell, Zulfikar Ramzan, Khawaja Shams, Dawn Song, Ankur Taly, Diyi Yang:
Identifying and Mitigating the Security Risks of Generative AI. Found. Trends Priv. Secur. 6(1): 1-52 (2023) - [c58]Nicholas Carlini, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Florian Tramèr, Chiyuan Zhang:
Quantifying Memorization Across Neural Language Models. ICLR 2023 - [c57]Nicholas Carlini, Florian Tramèr, Krishnamurthy (Dj) Dvijotham, Leslie Rice, Mingjie Sun, J. Zico Kolter:
(Certified!!) Adversarial Robustness for Free! ICLR 2023 - [c56]Matthew Jagielski, Om Thakkar, Florian Tramèr, Daphne Ippolito, Katherine Lee, Nicholas Carlini, Eric Wallace, Shuang Song, Abhradeep Guha Thakurta, Nicolas Papernot, Chiyuan Zhang:
Measuring Forgetting of Memorized Training Examples. ICLR 2023 - [c55]Chawin Sitawarin, Kornrapat Pongmala, Yizheng Chen, Nicholas Carlini, David A. Wagner:
Part-Based Models Improve Adversarial Robustness. ICLR 2023 - [c54]Chawin Sitawarin, Florian Tramèr, Nicholas Carlini:
Preprocessors Matter! Realistic Decision-Based Attacks on Machine Learning Systems. ICML 2023: 32008-32032 - [c53]Daphne Ippolito, Florian Tramèr, Milad Nasr, Chiyuan Zhang, Matthew Jagielski, Katherine Lee, Christopher A. Choquette-Choo, Nicholas Carlini:
Preventing Generation of Verbatim Memorization in Language Models Gives a False Sense of Privacy. INLG 2023: 28-53 - [c52]Daphne Ippolito, Nicholas Carlini, Katherine Lee, Milad Nasr, Yun William Yu:
Reverse-Engineering Decoding Strategies Given Blackbox Access to a Language Generation System. INLG 2023: 396-406 - [c51]Nicholas Carlini, Milad Nasr, Christopher A. Choquette-Choo, Matthew Jagielski, Irena Gao, Pang Wei Koh, Daphne Ippolito, Florian Tramèr, Ludwig Schmidt:
Are aligned neural networks adversarially aligned? NeurIPS 2023 - [c50]Matthew Jagielski, Milad Nasr, Katherine Lee, Christopher A. Choquette-Choo, Nicholas Carlini, Florian Tramèr:
Students Parrot Their Teachers: Membership Inference on Model Distillation. NeurIPS 2023 - [c49]Zhouxing Shi, Nicholas Carlini, Ananth Balashankar, Ludwig Schmidt, Cho-Jui Hsieh, Alex Beutel, Yao Qin:
Effective Robustness against Natural Distribution Shifts for Models with Different Training Data. NeurIPS 2023 - [c48]Chiyuan Zhang, Daphne Ippolito, Katherine Lee, Matthew Jagielski, Florian Tramèr, Nicholas Carlini:
Counterfactual Memorization in Neural Language Models. NeurIPS 2023 - [c47]Sanghyun Hong, Nicholas Carlini, Alexey Kurakin:
Publishing Efficient On-device Models Increases Adversarial Vulnerability. SaTML 2023: 271-290 - [c46]Milad Nasr, Jamie Hayes, Thomas Steinke, Borja Balle, Florian Tramèr, Matthew Jagielski, Nicholas Carlini, Andreas Terzis:
Tight Auditing of Differentially Private Machine Learning. USENIX Security Symposium 2023: 1631-1648 - [c45]Nicholas Carlini, Jamie Hayes, Milad Nasr, Matthew Jagielski, Vikash Sehwag, Florian Tramèr, Borja Balle, Daphne Ippolito, Eric Wallace:
Extracting Training Data from Diffusion Models. USENIX Security Symposium 2023: 5253-5270 - [i76]Nicholas Carlini, Jamie Hayes, Milad Nasr, Matthew Jagielski, Vikash Sehwag, Florian Tramèr, Borja Balle, Daphne Ippolito, Eric Wallace:
Extracting Training Data from Diffusion Models. CoRR abs/2301.13188 (2023) - [i75]Zhouxing Shi, Nicholas Carlini, Ananth Balashankar, Ludwig Schmidt, Cho-Jui Hsieh, Alex Beutel, Yao Qin:
Effective Robustness against Natural Distribution Shifts for Models with Different Training Data. CoRR abs/2302.01381 (2023) - [i74]Milad Nasr, Jamie Hayes, Thomas Steinke, Borja Balle, Florian Tramèr, Matthew Jagielski, Nicholas Carlini, Andreas Terzis:
Tight Auditing of Differentially Private Machine Learning. CoRR abs/2302.07956 (2023) - [i73]Nicholas Carlini, Matthew Jagielski, Christopher A. Choquette-Choo, Daniel Paleka, Will Pearce, Hyrum S. Anderson, Andreas Terzis, Kurt Thomas, Florian Tramèr:
Poisoning Web-Scale Training Datasets is Practical. CoRR abs/2302.10149 (2023) - [i72]Keane Lucas, Matthew Jagielski, Florian Tramèr, Lujo Bauer, Nicholas Carlini:
Randomness in ML Defenses Helps Persistent Attackers and Hinders Evaluators. CoRR abs/2302.13464 (2023) - [i71]Matthew Jagielski, Milad Nasr, Christopher A. Choquette-Choo, Katherine Lee, Nicholas Carlini:
Students Parrot Their Teachers: Membership Inference on Model Distillation. CoRR abs/2303.03446 (2023) - [i70]Edoardo Debenedetti, Nicholas Carlini, Florian Tramèr:
Evading Black-box Classifiers Without Breaking Eggs. CoRR abs/2306.02895 (2023) - [i69]Nicholas Carlini, Milad Nasr, Christopher A. Choquette-Choo, Matthew Jagielski, Irena Gao, Anas Awadalla, Pang Wei Koh, Daphne Ippolito, Katherine Lee, Florian Tramèr, Ludwig Schmidt:
Are aligned neural networks adversarially aligned? CoRR abs/2306.15447 (2023) - [i68]Nikhil Kandpal, Matthew Jagielski, Florian Tramèr, Nicholas Carlini:
Backdoor Attacks for In-Context Learning with Language Models. CoRR abs/2307.14692 (2023) - [i67]Nicholas Carlini:
A LLM Assisted Exploitation of AI-Guardian. CoRR abs/2307.15008 (2023) - [i66]Clark W. Barrett, Brad Boyd, Elie Bursztein, Nicholas Carlini, Brad Chen, Jihye Choi, Amrita Roy Chowdhury, Mihai Christodorescu, Anupam Datta, Soheil Feizi, Kathleen Fisher, Tatsunori Hashimoto, Dan Hendrycks, Somesh Jha, Daniel Kang, Florian Kerschbaum, Eric Mitchell, John C. Mitchell, Zulfikar Ramzan, Khawaja Shams, Dawn Song, Ankur Taly, Diyi Yang:
Identifying and Mitigating the Security Risks of Generative AI. CoRR abs/2308.14840 (2023) - [i65]Daphne Ippolito, Nicholas Carlini, Katherine Lee, Milad Nasr, Yun William Yu:
Reverse-Engineering Decoding Strategies Given Blackbox Access to a Language Generation System. CoRR abs/2309.04858 (2023) - [i64]Edoardo Debenedetti, Giorgio Severi, Nicholas Carlini, Christopher A. Choquette-Choo, Matthew Jagielski, Milad Nasr, Eric Wallace, Florian Tramèr:
Privacy Side Channels in Machine Learning Systems. CoRR abs/2309.05610 (2023) - [i63]A. Feder Cooper, Katherine Lee, James Grimmelmann, Daphne Ippolito, Christopher Callison-Burch, Christopher A. Choquette-Choo, Niloofar Mireshghallah, Miles Brundage, David Mimno, Madiha Zahrah Choksi, Jack M. Balkin, Nicholas Carlini, Christopher De Sa, Jonathan Frankle, Deep Ganguli, Bryant Gipson, Andres Guadamuz, Swee Leng Harris, Abigail Z. Jacobs, Elizabeth Joh, Gautam Kamath, Mark Lemley, Cass Matthews, Christine McLeavey, Corynne McSherry, Milad Nasr, Paul Ohm, Adam Roberts, Tom Rubin, Pamela Samuelson, Ludwig Schubert, Kristen Vaccaro, Luis Villa, Felix Wu, Elana Zeide:
Report of the 1st Workshop on Generative AI and Law. CoRR abs/2311.06477 (2023) - [i62]Milad Nasr, Nicholas Carlini, Jonathan Hayase, Matthew Jagielski, A. Feder Cooper, Daphne Ippolito, Christopher A. Choquette-Choo, Eric Wallace, Florian Tramèr, Katherine Lee:
Scalable Extraction of Training Data from (Production) Language Models. CoRR abs/2311.17035 (2023) - [i61]Andong Hua, Jindong Gu, Zhiyu Xue, Nicholas Carlini, Eric Wong, Yao Qin:
Initialization Matters for Adversarial Transfer Learning. CoRR abs/2312.05716 (2023) - 2022
- [c44]Katherine Lee, Daphne Ippolito, Andrew Nystrom, Chiyuan Zhang, Douglas Eck, Chris Callison-Burch, Nicholas Carlini:
Deduplicating Training Data Makes Language Models Better. ACL (1) 2022: 8424-8445 - [c43]Florian Tramèr, Reza Shokri, Ayrton San Joaquin, Hoang Le, Matthew Jagielski, Sanghyun Hong, Nicholas Carlini:
Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets. CCS 2022: 2779-2792 - [c42]David Berthelot, Rebecca Roelofs, Kihyuk Sohn, Nicholas Carlini, Alexey Kurakin:
AdaMatch: A Unified Approach to Semi-Supervised Learning and Domain Adaptation. ICLR 2022 - [c41]Oliver Bryniarski, Nabeel Hingun, Pedro Pachuca, Vincent Wang, Nicholas Carlini:
Evading Adversarial Example Detection Defenses with Orthogonal Projected Gradient Descent. ICLR 2022 - [c40]Nicholas Carlini, Andreas Terzis:
Poisoning and Backdooring Contrastive Learning. ICLR 2022 - [c39]Evani Radiya-Dixit, Sanghyun Hong, Nicholas Carlini, Florian Tramèr:
Data Poisoning Won't Save You From Facial Recognition. ICLR 2022 - [c38]Sanghyun Hong, Nicholas Carlini, Alexey Kurakin:
Handcrafted Backdoors in Deep Neural Networks. NeurIPS 2022 - [c37]Nicholas Carlini, Matthew Jagielski, Chiyuan Zhang, Nicolas Papernot, Andreas Terzis, Florian Tramèr:
The Privacy Onion Effect: Memorization is Relative. NeurIPS 2022 - [c36]Maura Pintor, Luca Demetrio, Angelo Sotgiu, Ambra Demontis, Nicholas Carlini, Battista Biggio, Fabio Roli:
Indicators of Attack Failure: Debugging and Improving Optimization of Adversarial Examples. NeurIPS 2022 - [c35]Roland S. Zimmermann, Wieland Brendel, Florian Tramèr, Nicholas Carlini:
Increasing Confidence in Adversarial Robustness Evaluations. NeurIPS 2022 - [c34]Nicholas Carlini, Steve Chien, Milad Nasr, Shuang Song, Andreas Terzis, Florian Tramèr:
Membership Inference Attacks From First Principles. SP 2022: 1897-1914 - [i60]Nicholas Carlini, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Florian Tramèr, Chiyuan Zhang:
Quantifying Memorization Across Neural Language Models. CoRR abs/2202.07646 (2022) - [i59]Florian Tramèr, Andreas Terzis, Thomas Steinke, Shuang Song, Matthew Jagielski, Nicholas Carlini:
Debugging Differential Privacy: A Case Study for Privacy Auditing. CoRR abs/2202.12219 (2022) - [i58]Florian Tramèr, Reza Shokri, Ayrton San Joaquin, Hoang Le, Matthew Jagielski, Sanghyun Hong, Nicholas Carlini:
Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets. CoRR abs/2204.00032 (2022) - [i57]Nicholas Carlini, Matthew Jagielski, Chiyuan Zhang, Nicolas Papernot, Andreas Terzis, Florian Tramèr:
The Privacy Onion Effect: Memorization is Relative. CoRR abs/2206.10469 (2022) - [i56]Nicholas Carlini, Florian Tramèr, Krishnamurthy Dvijotham, J. Zico Kolter:
(Certified!!) Adversarial Robustness for Free! CoRR abs/2206.10550 (2022) - [i55]Roland S. Zimmermann, Wieland Brendel, Florian Tramèr, Nicholas Carlini:
Increasing Confidence in Adversarial Robustness Evaluations. CoRR abs/2206.13991 (2022) - [i54]Matthew Jagielski, Om Thakkar, Florian Tramèr, Daphne Ippolito, Katherine Lee, Nicholas Carlini, Eric Wallace, Shuang Song, Abhradeep Thakurta, Nicolas Papernot, Chiyuan Zhang:
Measuring Forgetting of Memorized Training Examples. CoRR abs/2207.00099 (2022) - [i53]Chawin Sitawarin, Kornrapat Pongmala, Yizheng Chen, Nicholas Carlini, David A. Wagner:
Part-Based Models Improve Adversarial Robustness. CoRR abs/2209.09117 (2022) - [i52]Nicholas Carlini, Vitaly Feldman, Milad Nasr:
No Free Lunch in "Privacy for Free: How does Dataset Condensation Help Privacy". CoRR abs/2209.14987 (2022) - [i51]Chawin Sitawarin, Florian Tramèr, Nicholas Carlini:
Preprocessors Matter! Realistic Decision-Based Attacks on Machine Learning Systems. CoRR abs/2210.03297 (2022) - [i50]Daphne Ippolito, Florian Tramèr, Milad Nasr, Chiyuan Zhang, Matthew Jagielski, Katherine Lee, Christopher A. Choquette-Choo, Nicholas Carlini:
Preventing Verbatim Memorization in Language Models Gives a False Sense of Privacy. CoRR abs/2210.17546 (2022) - [i49]Florian Tramèr, Gautam Kamath, Nicholas Carlini:
Considerations for Differentially Private Learning with Large-Scale Public Pretraining. CoRR abs/2212.06470 (2022) - [i48]Sanghyun Hong, Nicholas Carlini, Alexey Kurakin:
Publishing Efficient On-device Models Increases Adversarial Vulnerability. CoRR abs/2212.13700 (2022) - [i47]Battista Biggio, Nicholas Carlini, Pavel Laskov, Konrad Rieck, Antonio Emanuele Cinà:
Security of Machine Learning (Dagstuhl Seminar 22281). Dagstuhl Reports 12(7): 41-61 (2022) - 2021
- [c33]Nicholas Carlini:
Session details: Session 1: Adversarial Machine Learning. AISec@CCS 2021 - [c32]Nicholas Carlini:
Session details: Session 2A: Machine Learning for Cybersecurity. AISec@CCS 2021 - [c31]Christopher A. Choquette-Choo, Florian Tramèr, Nicholas Carlini, Nicolas Papernot:
Label-Only Membership Inference Attacks. ICML 2021: 1964-1974 - [c30]Nicholas Carlini:
How Private is Machine Learning? IH&MMSec 2021: 3 - [c29]Nicholas Carlini, Samuel Deng, Sanjam Garg, Somesh Jha, Saeed Mahloujifar, Mohammad Mahmoody, Abhradeep Thakurta, Florian Tramèr:
Is Private Learning Possible with Instance Encoding? SP 2021: 410-427 - [c28]Milad Nasr, Shuang Song, Abhradeep Thakurta, Nicolas Papernot, Nicholas Carlini:
Adversary Instantiation: Lower Bounds for Differentially Private Machine Learning. SP 2021: 866-882 - [c27]Nicholas Carlini:
Poisoning the Unlabeled Dataset of Semi-Supervised Learning. USENIX Security Symposium 2021: 1577-1592 - [c26]Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom B. Brown, Dawn Song, Úlfar Erlingsson, Alina Oprea, Colin Raffel:
Extracting Training Data from Large Language Models. USENIX Security Symposium 2021: 2633-2650 - [e2]Nicholas Carlini, Ambra Demontis, Yizheng Chen:
AISec@CCS 2021: Proceedings of the 14th ACM Workshop on Artificial Intelligence and Security, Virtual Event, Republic of Korea, 15 November 2021. ACM 2021, ISBN 978-1-4503-8657-9 [contents] - [i46]Milad Nasr, Shuang Song, Abhradeep Thakurta, Nicolas Papernot, Nicholas Carlini:
Adversary Instantiation: Lower Bounds for Differentially Private Machine Learning. CoRR abs/2101.04535 (2021) - [i45]Nicholas Carlini:
Poisoning the Unlabeled Dataset of Semi-Supervised Learning. CoRR abs/2105.01622 (2021) - [i44]Sanghyun Hong, Nicholas Carlini, Alexey Kurakin:
Handcrafted Backdoors in Deep Neural Networks. CoRR abs/2106.04690 (2021) - [i43]David Berthelot, Rebecca Roelofs, Kihyuk Sohn, Nicholas Carlini, Alex Kurakin:
AdaMatch: A Unified Approach to Semi-Supervised Learning and Domain Adaptation. CoRR abs/2106.04732 (2021) - [i42]Nicholas Carlini, Andreas Terzis:
Poisoning and Backdooring Contrastive Learning. CoRR abs/2106.09667 (2021) - [i41]Maura Pintor, Luca Demetrio, Angelo Sotgiu, Giovanni Manca, Ambra Demontis, Nicholas Carlini, Battista Biggio, Fabio Roli:
Indicators of Attack Failure: Debugging and Improving Optimization of Adversarial Examples. CoRR abs/2106.09947 (2021) - [i40]Oliver Bryniarski, Nabeel Hingun, Pedro Pachuca, Vincent Wang, Nicholas Carlini:
Evading Adversarial Example Detection Defenses with Orthogonal Projected Gradient Descent. CoRR abs/2106.15023 (2021) - [i39]Katherine Lee, Daphne Ippolito, Andrew Nystrom, Chiyuan Zhang, Douglas Eck, Chris Callison-Burch, Nicholas Carlini:
Deduplicating Training Data Makes Language Models Better. CoRR abs/2107.06499 (2021) - [i38]Nicholas Carlini, Sanjam Garg, Somesh Jha, Saeed Mahloujifar, Mohammad Mahmoody, Florian Tramèr:
NeuraCrypt is not private. CoRR abs/2108.07256 (2021) - [i37]Dan Hendrycks, Nicholas Carlini, John Schulman, Jacob Steinhardt:
Unsolved Problems in ML Safety. CoRR abs/2109.13916 (2021) - [i36]Nicholas Carlini, Steve Chien, Milad Nasr, Shuang Song, Andreas Terzis, Florian Tramèr:
Membership Inference Attacks From First Principles. CoRR abs/2112.03570 (2021) - [i35]Chiyuan Zhang, Daphne Ippolito, Katherine Lee, Matthew Jagielski, Florian Tramèr, Nicholas Carlini:
Counterfactual Memorization in Neural Language Models. CoRR abs/2112.12938 (2021) - 2020
- [c25]Sadia Afroz, Nicholas Carlini, Ambra Demontis:
AISec'20: 13th Workshop on Artificial Intelligence and Security. CCS 2020: 2143-2144 - [c24]Nicholas Carlini, Matthew Jagielski, Ilya Mironov:
Cryptanalytic Extraction of Neural Network Models. CRYPTO (3) 2020: 189-218 - [c23]Nicholas Carlini, Hany Farid:
Evading Deepfake-Image Detectors with White- and Black-Box Attacks. CVPR Workshops 2020: 2804-2813 - [c22]David Berthelot, Nicholas Carlini, Ekin D. Cubuk, Alex Kurakin, Kihyuk Sohn, Han Zhang, Colin Raffel:
ReMixMatch: Semi-Supervised Learning with Distribution Matching and Augmentation Anchoring. ICLR 2020 - [c21]Florian Tramèr, Jens Behrmann, Nicholas Carlini, Nicolas Papernot, Jörn-Henrik Jacobsen:
Fundamental Tradeoffs between Invariance and Sensitivity to Adversarial Perturbations. ICML 2020: 9561-9571 - [c20]Kihyuk Sohn, David Berthelot, Nicholas Carlini, Zizhao Zhang, Han Zhang, Colin Raffel, Ekin Dogus Cubuk, Alexey Kurakin, Chun-Liang Li:
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence. NeurIPS 2020 - [c19]Rohan Taori, Achal Dave, Vaishaal Shankar, Nicholas Carlini, Benjamin Recht, Ludwig Schmidt:
Measuring Robustness to Natural Distribution Shifts in Image Classification. NeurIPS 2020 - [c18]Florian Tramèr, Nicholas Carlini, Wieland Brendel, Aleksander Madry:
On Adaptive Attacks to Adversarial Example Defenses. NeurIPS 2020 - [c17]Matthew Jagielski, Nicholas Carlini, David Berthelot, Alex Kurakin, Nicolas Papernot:
High Accuracy and High Fidelity Extraction of Neural Networks. USENIX Security Symposium 2020: 1345-1362 - [i34]Kihyuk Sohn, David Berthelot, Chun-Liang Li, Zizhao Zhang, Nicholas Carlini, Ekin D. Cubuk, Alex Kurakin, Han Zhang, Colin Raffel:
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence. CoRR abs/2001.07685 (2020) - [i33]Florian Tramèr, Jens Behrmann, Nicholas Carlini, Nicolas Papernot, Jörn-Henrik Jacobsen:
Fundamental Tradeoffs between Invariance and Sensitivity to Adversarial Perturbations. CoRR abs/2002.04599 (2020) - [i32]Florian Tramèr, Nicholas Carlini, Wieland Brendel, Aleksander Madry:
On Adaptive Attacks to Adversarial Example Defenses. CoRR abs/2002.08347 (2020) - [i31]Nicholas Carlini, Matthew Jagielski, Ilya Mironov:
Cryptanalytic Extraction of Neural Network Models. CoRR abs/2003.04884 (2020) - [i30]Nicholas Carlini, Hany Farid:
Evading Deepfake-Image Detectors with White- and Black-Box Attacks. CoRR abs/2004.00622 (2020) - [i29]Rohan Taori, Achal Dave, Vaishaal Shankar, Nicholas Carlini, Benjamin Recht, Ludwig Schmidt:
Measuring Robustness to Natural Distribution Shifts in Image Classification. CoRR abs/2007.00644 (2020) - [i28]Christopher A. Choquette-Choo, Florian Tramèr, Nicholas Carlini, Nicolas Papernot:
Label-Only Membership Inference Attacks. CoRR abs/2007.14321 (2020) - [i27]Nicholas Carlini:
A Partial Break of the Honeypots Defense to Catch Adversarial Attacks. CoRR abs/2009.10975 (2020) - [i26]Guneet S. Dhillon, Nicholas Carlini:
Erratum Concerning the Obfuscated Gradients Attack on Stochastic Activation Pruning. CoRR abs/2010.00071 (2020) - [i25]Nicholas Carlini, Samuel Deng, Sanjam Garg, Somesh Jha, Saeed Mahloujifar, Mohammad Mahmoody, Shuang Song, Abhradeep Thakurta, Florian Tramèr:
An Attack on InstaHide: Is Private Learning Possible with Instance Encoding? CoRR abs/2011.05315 (2020) - [i24]Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom B. Brown, Dawn Song, Úlfar Erlingsson, Alina Oprea, Colin Raffel:
Extracting Training Data from Large Language Models. CoRR abs/2012.07805 (2020)
2010 – 2019
- 2019
- [c16]Sadia Afroz, Battista Biggio, Nicholas Carlini, Yuval Elovici, Asaf Shabtai:
AISec'19: 12th ACM Workshop on Artificial Intelligence and Security. CCS 2019: 2707-2708 - [c15]Justin Gilmer, Nicolas Ford, Nicholas Carlini, Ekin D. Cubuk:
Adversarial Examples Are a Natural Consequence of Test Error in Noise. ICML 2019: 2280-2289 - [c14]Yao Qin, Nicholas Carlini, Garrison W. Cottrell, Ian J. Goodfellow, Colin Raffel:
Imperceptible, Robust, and Targeted Adversarial Examples for Automatic Speech Recognition. ICML 2019: 5231-5240 - [c13]David Berthelot, Nicholas Carlini, Ian J. Goodfellow, Nicolas Papernot, Avital Oliver, Colin Raffel:
MixMatch: A Holistic Approach to Semi-Supervised Learning. NeurIPS 2019: 5050-5060 - [c12]Nicholas Carlini, Chang Liu, Úlfar Erlingsson, Jernej Kos, Dawn Song:
The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks. USENIX Security Symposium 2019: 267-284 - [e1]Lorenzo Cavallaro, Johannes Kinder, Sadia Afroz, Battista Biggio, Nicholas Carlini, Yuval Elovici, Asaf Shabtai:
Proceedings of the 12th ACM Workshop on Artificial Intelligence and Security, AISec@CCS 2019, London, UK, November 15, 2019. ACM 2019, ISBN 978-1-4503-6833-9 [contents] - [i23]Nic Ford, Justin Gilmer, Nicholas Carlini, Ekin Dogus Cubuk:
Adversarial Examples Are a Natural Consequence of Test Error in Noise. CoRR abs/1901.10513 (2019) - [i22]Nicholas Carlini:
Is AmI (Attacks Meet Interpretability) Robust to Adversarial Examples? CoRR abs/1902.02322 (2019) - [i21]Nicholas Carlini, Anish Athalye, Nicolas Papernot, Wieland Brendel, Jonas Rauber, Dimitris Tsipras, Ian J. Goodfellow, Aleksander Madry, Alexey Kurakin:
On Evaluating Adversarial Robustness. CoRR abs/1902.06705 (2019) - [i20]Yao Qin, Nicholas Carlini, Ian J. Goodfellow, Garrison W. Cottrell, Colin Raffel:
Imperceptible, Robust, and Targeted Adversarial Examples for Automatic Speech Recognition. CoRR abs/1903.10346 (2019) - [i19]Jörn-Henrik Jacobsen, Jens Behrmann, Nicholas Carlini, Florian Tramèr, Nicolas Papernot:
Exploiting Excessive Invariance caused by Norm-Bounded Adversarial Robustness. CoRR abs/1903.10484 (2019) - [i18]Alexander Ratner, Dan Alistarh, Gustavo Alonso, David G. Andersen, Peter Bailis, Sarah Bird, Nicholas Carlini, Bryan Catanzaro, Eric S. Chung, Bill Dally, Jeff Dean, Inderjit S. Dhillon, Alexandros G. Dimakis, Pradeep Dubey, Charles Elkan, Grigori Fursin, Gregory R. Ganger, Lise Getoor, Phillip B. Gibbons, Garth A. Gibson, Joseph E. Gonzalez, Justin Gottschlich, Song Han, Kim M. Hazelwood, Furong Huang, Martin Jaggi, Kevin G. Jamieson, Michael I. Jordan, Gauri Joshi, Rania Khalaf, Jason Knight, Jakub Konecný, Tim Kraska, Arun Kumar, Anastasios Kyrillidis, Jing Li, Samuel Madden, H. Brendan McMahan, Erik Meijer, Ioannis Mitliagkas, Rajat Monga, Derek Gordon Murray, Dimitris S. Papailiopoulos, Gennady Pekhimenko, Theodoros Rekatsinas, Afshin Rostamizadeh, Christopher Ré, Christopher De Sa, Hanie Sedghi, Siddhartha Sen, Virginia Smith, Alex Smola, Dawn Song, Evan Randall Sparks, Ion Stoica, Vivienne Sze, Madeleine Udell, Joaquin Vanschoren, Shivaram Venkataraman, Rashmi Vinayak, Markus Weimer, Andrew Gordon Wilson, Eric P. Xing, Matei Zaharia, Ce Zhang, Ameet Talwalkar:
SysML: The New Frontier of Machine Learning Systems. CoRR abs/1904.03257 (2019) - [i17]David Berthelot, Nicholas Carlini, Ian J. Goodfellow, Nicolas Papernot, Avital Oliver, Colin Raffel:
MixMatch: A Holistic Approach to Semi-Supervised Learning. CoRR abs/1905.02249 (2019) - [i16]Nicholas Carlini:
A critique of the DeepSec Platform for Security Analysis of Deep Learning Models. CoRR abs/1905.07112 (2019) - [i15]Steven Chen, Nicholas Carlini, David A. Wagner:
Stateful Detection of Black-Box Adversarial Attacks. CoRR abs/1907.05587 (2019) - [i14]Matthew Jagielski, Nicholas Carlini, David Berthelot, Alex Kurakin, Nicolas Papernot:
High-Fidelity Extraction of Neural Network Models. CoRR abs/1909.01838 (2019) - [i13]Nicholas Carlini, Úlfar Erlingsson, Nicolas Papernot:
Distribution Density, Tails, and Outliers in Machine Learning: Metrics and Applications. CoRR abs/1910.13427 (2019) - [i12]David Berthelot, Nicholas Carlini, Ekin D. Cubuk, Alex Kurakin, Kihyuk Sohn, Han Zhang, Colin Raffel:
ReMixMatch: Semi-Supervised Learning with Distribution Alignment and Augmentation Anchoring. CoRR abs/1911.09785 (2019) - 2018
- [b1]Nicholas Carlini:
Evaluation and Design of Robust Neural Network Defenses. University of California, Berkeley, USA, 2018 - [c11]Anish Athalye, Nicholas Carlini, David A. Wagner:
Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples. ICML 2018: 274-283 - [c10]Nicholas Carlini, David A. Wagner:
Audio Adversarial Examples: Targeted Attacks on Speech-to-Text. IEEE Symposium on Security and Privacy Workshops 2018: 1-7 - [i11]Nicholas Carlini, David A. Wagner:
Audio Adversarial Examples: Targeted Attacks on Speech-to-Text. CoRR abs/1801.01944 (2018) - [i10]Anish Athalye, Nicholas Carlini, David A. Wagner:
Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples. CoRR abs/1802.00420 (2018) - [i9]Nicholas Carlini, Chang Liu, Jernej Kos, Úlfar Erlingsson, Dawn Song:
The Secret Sharer: Measuring Unintended Neural Network Memorization & Extracting Secrets. CoRR abs/1802.08232 (2018) - [i8]Anish Athalye, Nicholas Carlini:
On the Robustness of the CVPR 2018 White-Box Adversarial Example Defenses. CoRR abs/1804.03286 (2018) - [i7]Tom B. Brown, Nicholas Carlini, Chiyuan Zhang, Catherine Olsson, Paul F. Christiano, Ian J. Goodfellow:
Unrestricted Adversarial Examples. CoRR abs/1809.08352 (2018) - 2017
- [c9]Nicholas Carlini, David A. Wagner:
Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods. AISec@CCS 2017: 3-14 - [c8]Nicholas Carlini, David A. Wagner:
Towards Evaluating the Robustness of Neural Networks. IEEE Symposium on Security and Privacy 2017: 39-57 - [c7]Warren He, James Wei, Xinyun Chen, Nicholas Carlini, Dawn Song:
Adversarial Example Defense: Ensembles of Weak Defenses are not Strong. WOOT 2017 - [i6]Nicholas Carlini, David A. Wagner:
Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods. CoRR abs/1705.07263 (2017) - [i5]Warren He, James Wei, Xinyun Chen, Nicholas Carlini, Dawn Song:
Adversarial Example Defenses: Ensembles of Weak Defenses are not Strong. CoRR abs/1706.04701 (2017) - [i4]Nicholas Carlini, Guy Katz, Clark W. Barrett, David L. Dill:
Ground-Truth Adversarial Examples. CoRR abs/1709.10207 (2017) - [i3]Nicholas Carlini, David A. Wagner:
MagNet and "Efficient Defenses Against Adversarial Attacks" are Not Robust to Adversarial Examples. CoRR abs/1711.08478 (2017) - 2016
- [c6]Nicholas Carlini, Pratyush Mishra, Tavish Vaidya, Yuankai Zhang, Micah Sherr, Clay Shields, David A. Wagner, Wenchao Zhou:
Hidden Voice Commands. USENIX Security Symposium 2016: 513-530 - [i2]Nicholas Carlini, David A. Wagner:
Defensive Distillation is Not Robust to Adversarial Examples. CoRR abs/1607.04311 (2016) - [i1]Nicholas Carlini, David A. Wagner:
Towards Evaluating the Robustness of Neural Networks. CoRR abs/1608.04644 (2016) - 2015
- [c5]Nicholas Carlini, Antonio Barresi, Mathias Payer, David A. Wagner, Thomas R. Gross:
Control-Flow Bending: On the Effectiveness of Control-Flow Integrity. USENIX Security Symposium 2015: 161-176 - 2014
- [c4]Nicholas Carlini, David A. Wagner:
ROP is Still Dangerous: Breaking Modern Defenses. USENIX Security Symposium 2014: 385-399 - 2013
- [c3]Eric Kim, Nicholas Carlini, Andrew Chang, George Yiu, Kai Wang, David A. Wagner:
Improved Support for Machine-assisted Ballot-level Audits. EVT/WOTE 2013 - 2012
- [c2]Nicholas Carlini, Adrienne Porter Felt, David A. Wagner:
An Evaluation of the Google Chrome Extension Security Architecture. USENIX Security Symposium 2012: 97-111 - [c1]Kai Wang, Nicholas Carlini, Eric Kim, Ivan Motyashov, Daniel Nguyen, David A. Wagner:
Operator-Assisted Tabulation of Optical Scan Ballots. EVT/WOTE 2012