Jascha Sohl-Dickstein
Person information
- affiliation: Google Brain, Mountain View, CA, USA
- affiliation (PhD 2012): UC Berkeley, Redwood Center for Theoretical Neuroscience, CA, USA
2020 – today
- 2024
- [j6]Avi Singh, John D. Co-Reyes, Rishabh Agarwal, Ankesh Anand, Piyush Patil, Xavier Garcia, Peter J. Liu, James Harrison, Jaehoon Lee, Kelvin Xu, Aaron T. Parisi, Abhishek Kumar, Alexander A. Alemi, Alex Rizkowsky, Azade Nova, Ben Adlam, Bernd Bohnet, Gamaleldin Fathy Elsayed, Hanie Sedghi, Igor Mordatch, Isabelle Simpson, Izzeddin Gur, Jasper Snoek, Jeffrey Pennington, Jiri Hron, Kathleen Kenealy, Kevin Swersky, Kshiteej Mahajan, Laura Culp, Lechao Xiao, Maxwell L. Bileschi, Noah Constant, Roman Novak, Rosanne Liu, Tris Warkentin, Yundi Qian, Yamini Bansal, Ethan Dyer, Behnam Neyshabur, Jascha Sohl-Dickstein, Noah Fiedel:
Beyond Human Data: Scaling Self-Training for Problem-Solving with Language Models. Trans. Mach. Learn. Res. 2024 (2024) - [c57]Mitchell Wortsman, Peter J. Liu, Lechao Xiao, Katie E. Everett, Alexander A. Alemi, Ben Adlam, John D. Co-Reyes, Izzeddin Gur, Abhishek Kumar, Roman Novak, Jeffrey Pennington, Jascha Sohl-Dickstein, Kelvin Xu, Jaehoon Lee, Justin Gilmer, Simon Kornblith:
Small-scale proxies for large-scale Transformer training instabilities. ICLR 2024 - [c56]Katie E. Everett, Lechao Xiao, Mitchell Wortsman, Alexander A. Alemi, Roman Novak, Peter J. Liu, Izzeddin Gur, Jascha Sohl-Dickstein, Leslie Pack Kaelbling, Jaehoon Lee, Jeffrey Pennington:
Scaling Exponents Across Parameterizations and Optimizers. ICML 2024 - [c55]Meredith Ringel Morris, Jascha Sohl-Dickstein, Noah Fiedel, Tris Warkentin, Allan Dafoe, Aleksandra Faust, Clément Farabet, Shane Legg:
Position: Levels of AGI for Operationalizing Progress on the Path to AGI. ICML 2024 - [i83]Jascha Sohl-Dickstein:
The boundary of neural network trainability is fractal. CoRR abs/2402.06184 (2024) - [i82]Brian Lester, Jaehoon Lee, Alex Alemi, Jeffrey Pennington, Adam Roberts, Jascha Sohl-Dickstein, Noah Constant:
Training LLMs over Neurally Compressed Text. CoRR abs/2404.03626 (2024) - [i81]Katie Everett, Lechao Xiao, Mitchell Wortsman, Alexander A. Alemi, Roman Novak, Peter J. Liu, Izzeddin Gur, Jascha Sohl-Dickstein, Leslie Pack Kaelbling, Jaehoon Lee, Jeffrey Pennington:
Scaling Exponents Across Parameterizations and Optimizers. CoRR abs/2407.05872 (2024) - [i80]Jiri Hron, Laura Culp, Gamaleldin F. Elsayed, Rosanne Liu, Ben Adlam, Maxwell L. Bileschi, Bernd Bohnet, JD Co-Reyes, Noah Fiedel, C. Daniel Freeman, Izzeddin Gur, Kathleen Kenealy, Jaehoon Lee, Peter J. Liu, Gaurav Mishra, Igor Mordatch, Azade Nova, Roman Novak, Aaron Parisi, Jeffrey Pennington, Alex Rizkowsky, Isabelle Simpson, Hanie Sedghi, Jascha Sohl-Dickstein, Kevin Swersky, Sharad Vikram, Tris Warkentin, Lechao Xiao, Kelvin Xu, Jasper Snoek, Simon Kornblith:
Training Language Models on the Knowledge Graph: Insights on Hallucinations and Their Detectability. CoRR abs/2408.07852 (2024) - 2023
- [j5]Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R. Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, Agnieszka Kluska, Aitor Lewkowycz, Akshat Agarwal, Alethea Power, Alex Ray, Alex Warstadt, Alexander W. Kocurek, Ali Safaya, Ali Tazarv, Alice Xiang, Alicia Parrish, Allen Nie, Aman Hussain, Amanda Askell, Amanda Dsouza, Ambrose Slone, Ameet Rahane, Anantharaman S. Iyer, Anders Andreassen, Andrea Madotto, Andrea Santilli, Andreas Stuhlmüller, Andrew M. Dai, Andrew La, Andrew K. Lampinen, Andy Zou, Angela Jiang, Angelica Chen, Anh Vuong, Animesh Gupta, Anna Gottardi, Antonio Norelli, Anu Venkatesh, Arash Gholamidavoodi, Arfa Tabassum, Arul Menezes, Arun Kirubarajan, Asher Mullokandov, Ashish Sabharwal, Austin Herrick, Avia Efrat, Aykut Erdem, Ayla Karakas, B. Ryan Roberts, Bao Sheng Loe, Barret Zoph, Bartlomiej Bojanowski, Batuhan Özyurt, Behnam Hedayatnia, Behnam Neyshabur, Benjamin Inden, Benno Stein, Berk Ekmekci, Bill Yuchen Lin, Blake Howald, Bryan Orinion, Cameron Diao, Cameron Dour, Catherine Stinson, Cedrick Argueta, Cèsar Ferri Ramírez, Chandan Singh, Charles Rathkopf, Chenlin Meng, Chitta Baral, Chiyu Wu, Chris Callison-Burch, Chris Waites, Christian Voigt, Christopher D. Manning, Christopher Potts, Cindy Ramirez, Clara E. Rivera, Clemencia Siro, Colin Raffel, Courtney Ashcraft, Cristina Garbacea, Damien Sileo, Dan Garrette, Dan Hendrycks, Dan Kilman, Dan Roth, Daniel Freeman, Daniel Khashabi, Daniel Levy, Daniel Moseguí González, Danielle Perszyk, Danny Hernandez, Danqi Chen, Daphne Ippolito, Dar Gilboa, David Dohan, David Drakard, David Jurgens, Debajyoti Datta, Deep Ganguli, Denis Emelin, Denis Kleyko, Deniz Yuret, Derek Chen, Derek Tam, Dieuwke Hupkes, Diganta Misra, Dilyar Buzan, Dimitri Coelho Mollo, Diyi Yang, Dong-Ho Lee, Dylan Schrader, Ekaterina Shutova, Ekin Dogus Cubuk, Elad Segal, Eleanor Hagerman, Elizabeth Barnes, Elizabeth Donoway, Ellie Pavlick, Emanuele Rodolà, Emma Lam, Eric Chu, Eric Tang, Erkut Erdem, Ernie Chang, Ethan A. Chi, Ethan Dyer, Ethan J. Jerzak, Ethan Kim, Eunice Engefu Manyasi, Evgenii Zheltonozhskii, Fanyue Xia, Fatemeh Siar, Fernando Martínez-Plumed, Francesca Happé, François Chollet, Frieda Rong, Gaurav Mishra, Genta Indra Winata, Gerard de Melo, Germán Kruszewski, Giambattista Parascandolo, Giorgio Mariani, Gloria Wang, Gonzalo Jaimovitch-López, Gregor Betz, Guy Gur-Ari, Hana Galijasevic, Hannah Kim, Hannah Rashkin, Hannaneh Hajishirzi, Harsh Mehta, Hayden Bogar, Henry Shevlin, Hinrich Schütze, Hiromu Yakura, Hongming Zhang, Hugh Mee Wong, Ian Ng, Isaac Noble, Jaap Jumelet, Jack Geissinger, Jackson Kernion, Jacob Hilton, Jaehoon Lee, Jaime Fernández Fisac, James B. Simon, James Koppel, James Zheng, James Zou, Jan Kocon, Jana Thompson, Janelle Wingfield, Jared Kaplan, Jarema Radom, Jascha Sohl-Dickstein, Jason Phang, Jason Wei, Jason Yosinski, Jekaterina Novikova, Jelle Bosscher, Jennifer Marsh, Jeremy Kim, Jeroen Taal, Jesse H. Engel, Jesujoba Alabi, Jiacheng Xu, Jiaming Song, Jillian Tang, Joan Waweru, John Burden, John Miller, John U. Balis, Jonathan Batchelder, Jonathan Berant, Jörg Frohberg, Jos Rozen, José Hernández-Orallo, Joseph Boudeman, Joseph Guerr, Joseph Jones, Joshua B. Tenenbaum, Joshua S. Rule, Joyce Chua, Kamil Kanclerz, Karen Livescu, Karl Krauth, Karthik Gopalakrishnan, Katerina Ignatyeva, Katja Markert, Kaustubh D. 
Dhole, Kevin Gimpel, Kevin Omondi, Kory Mathewson, Kristen Chiafullo, Ksenia Shkaruta, Kumar Shridhar, Kyle McDonell, Kyle Richardson, Laria Reynolds, Leo Gao, Li Zhang, Liam Dugan, Lianhui Qin, Lidia Contreras Ochando, Louis-Philippe Morency, Luca Moschella, Lucas Lam, Lucy Noble, Ludwig Schmidt, Luheng He, Luis Oliveros Colón, Luke Metz, Lütfi Kerem Senel, Maarten Bosma, Maarten Sap, Maartje ter Hoeve, Maheen Farooqi, Manaal Faruqui, Mantas Mazeika, Marco Baturan, Marco Marelli, Marco Maru, María José Ramírez-Quintana, Marie Tolkiehn, Mario Giulianelli, Martha Lewis, Martin Potthast, Matthew L. Leavitt, Matthias Hagen, Mátyás Schubert, Medina Baitemirova, Melody Arnaud, Melvin McElrath, Michael A. Yee, Michael Cohen, Michael Gu, Michael I. Ivanitskiy, Michael Starritt, Michael Strube, Michal Swedrowski, Michele Bevilacqua, Michihiro Yasunaga, Mihir Kale, Mike Cain, Mimee Xu, Mirac Suzgun, Mitch Walker, Mo Tiwari, Mohit Bansal, Moin Aminnaseri, Mor Geva, Mozhdeh Gheini, Mukund Varma T., Nanyun Peng, Nathan A. Chi, Nayeon Lee, Neta Gur-Ari Krakover, Nicholas Cameron, Nicholas Roberts, Nick Doiron, Nicole Martinez, Nikita Nangia, Niklas Deckers, Niklas Muennighoff, Nitish Shirish Keskar, Niveditha Iyer, Noah Constant, Noah Fiedel, Nuan Wen, Oliver Zhang, Omar Agha, Omar Elbaghdadi, Omer Levy, Owain Evans, Pablo Antonio Moreno Casares, Parth Doshi, Pascale Fung, Paul Pu Liang, Paul Vicol, Pegah Alipoormolabashi, Peiyuan Liao, Percy Liang, Peter Chang, Peter Eckersley, Phu Mon Htut, Pinyu Hwang, Piotr Milkowski, Piyush Patil, Pouya Pezeshkpour, Priti Oli, Qiaozhu Mei, Qing Lyu, Qinlang Chen, Rabin Banjade, Rachel Etta Rudolph, Raefer Gabriel, Rahel Habacker, Ramon Risco, Raphaël Millière, Rhythm Garg, Richard Barnes, Rif A. Saurous, Riku Arakawa, Robbe Raymaekers, Robert Frank, Rohan Sikand, Roman Novak, Roman Sitelew, Ronan LeBras, Rosanne Liu, Rowan Jacobs, Rui Zhang, Ruslan Salakhutdinov, Ryan Chi, Ryan Lee, Ryan Stovall, Ryan Teehan, Rylan Yang, Sahib Singh, Saif M. Mohammad, Sajant Anand, Sam Dillavou, Sam Shleifer, Sam Wiseman, Samuel Gruetter, Samuel R. Bowman, Samuel S. Schoenholz, Sanghyun Han, Sanjeev Kwatra, Sarah A. Rous, Sarik Ghazarian, Sayan Ghosh, Sean Casey, Sebastian Bischoff, Sebastian Gehrmann, Sebastian Schuster, Sepideh Sadeghi, Shadi Hamdan, Sharon Zhou, Shashank Srivastava, Sherry Shi, Shikhar Singh, Shima Asaadi, Shixiang Shane Gu, Shubh Pachchigar, Shubham Toshniwal, Shyam Upadhyay, Shyamolima (Shammie) Debnath, Siamak Shakeri, Simon Thormeyer, Simone Melzi, Siva Reddy, Sneha Priscilla Makini, Soo-Hwan Lee, Spencer Torene, Sriharsha Hatwar, Stanislas Dehaene, Stefan Divic, Stefano Ermon, Stella Biderman, Stephanie Lin, Stephen Prasad, Steven T. Piantadosi, Stuart M. Shieber, Summer Misherghi, Svetlana Kiritchenko, Swaroop Mishra, Tal Linzen, Tal Schuster, Tao Li, Tao Yu, Tariq Ali, Tatsu Hashimoto, Te-Lin Wu, Théo Desbordes, Theodore Rothschild, Thomas Phan, Tianle Wang, Tiberius Nkinyili, Timo Schick, Timofei Kornev, Titus Tunduny, Tobias Gerstenberg, Trenton Chang, Trishala Neeraj, Tushar Khot, Tyler Shultz, Uri Shaham, Vedant Misra, Vera Demberg, Victoria Nyamai, Vikas Raunak, Vinay V. 
Ramasesh, Vinay Uday Prabhu, Vishakh Padmakumar, Vivek Srikumar, William Fedus, William Saunders, William Zhang, Wout Vossen, Xiang Ren, Xiaoyu Tong, Xinran Zhao, Xinyi Wu, Xudong Shen, Yadollah Yaghoobzadeh, Yair Lakretz, Yangqiu Song, Yasaman Bahri, Yejin Choi, Yichi Yang, Yiding Hao, Yifu Chen, Yonatan Belinkov, Yu Hou, Yufang Hou, Yuntao Bai, Zachary Seid, Zhuoye Zhao, Zijian Wang, Zijie J. Wang, Zirui Wang, Ziyi Wu:
Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models. Trans. Mach. Learn. Res. 2023 (2023) - [c54]Yilun Du, Conor Durkan, Robin Strudel, Joshua B. Tenenbaum, Sander Dieleman, Rob Fergus, Jascha Sohl-Dickstein, Arnaud Doucet, Will Sussman Grathwohl:
Reduce, Reuse, Recycle: Compositional Generation with Energy-Based Diffusion Models and MCMC. ICML 2023: 8489-8510 - [c53]Oscar Li, James Harrison, Jascha Sohl-Dickstein, Virginia Smith, Luke Metz:
Variance-Reduced Gradient Estimation via Noise-Reuse in Online Evolution Strategies. NeurIPS 2023 - [i79]Yilun Du, Conor Durkan, Robin Strudel, Joshua B. Tenenbaum, Sander Dieleman, Rob Fergus, Jascha Sohl-Dickstein, Arnaud Doucet, Will Grathwohl:
Reduce, Reuse, Recycle: Compositional Generation with Energy-Based Diffusion Models and MCMC. CoRR abs/2302.11552 (2023) - [i78]Oscar Li, James Harrison, Jascha Sohl-Dickstein, Virginia Smith, Luke Metz:
Noise-Reuse in Online Evolution Strategies. CoRR abs/2304.12180 (2023) - [i77]Mitchell Wortsman, Peter J. Liu, Lechao Xiao, Katie Everett, Alex Alemi, Ben Adlam, John D. Co-Reyes, Izzeddin Gur, Abhishek Kumar, Roman Novak, Jeffrey Pennington, Jascha Sohl-Dickstein, Kelvin Xu, Jaehoon Lee, Justin Gilmer, Simon Kornblith:
Small-scale proxies for large-scale Transformer training instabilities. CoRR abs/2309.14322 (2023) - [i76]Meredith Ringel Morris, Jascha Sohl-Dickstein, Noah Fiedel, Tris Warkentin, Allan Dafoe, Aleksandra Faust, Clément Farabet, Shane Legg:
Levels of AGI: Operationalizing Progress on the Path to AGI. CoRR abs/2311.02462 (2023) - [i75]C. Daniel Freeman, Laura Culp, Aaron Parisi, Maxwell L. Bileschi, Gamaleldin F. Elsayed, Alex Rizkowsky, Isabelle Simpson, Alex Alemi, Azade Nova, Ben Adlam, Bernd Bohnet, Gaurav Mishra, Hanie Sedghi, Igor Mordatch, Izzeddin Gur, Jaehoon Lee, John D. Co-Reyes, Jeffrey Pennington, Kelvin Xu, Kevin Swersky, Kshiteej Mahajan, Lechao Xiao, Rosanne Liu, Simon Kornblith, Noah Constant, Peter J. Liu, Roman Novak, Yundi Qian, Noah Fiedel, Jascha Sohl-Dickstein:
Frontier Language Models are not Robust to Adversarial Arithmetic, or "What do I need to say so you agree 2+2=5?" CoRR abs/2311.07587 (2023) - [i74]Avi Singh, John D. Co-Reyes, Rishabh Agarwal, Ankesh Anand, Piyush Patil, Xavier Garcia, Peter J. Liu, James Harrison, Jaehoon Lee, Kelvin Xu, Aaron Parisi, Abhishek Kumar, Alex Alemi, Alex Rizkowsky, Azade Nova, Ben Adlam, Bernd Bohnet, Gamaleldin F. Elsayed, Hanie Sedghi, Igor Mordatch, Isabelle Simpson, Izzeddin Gur, Jasper Snoek, Jeffrey Pennington, Jiri Hron, Kathleen Kenealy, Kevin Swersky, Kshiteej Mahajan, Laura Culp, Lechao Xiao, Maxwell L. Bileschi, Noah Constant, Roman Novak, Rosanne Liu, Tris Warkentin, Yundi Qian, Yamini Bansal, Ethan Dyer, Behnam Neyshabur, Jascha Sohl-Dickstein, Noah Fiedel:
Beyond Human Data: Scaling Self-Training for Problem-Solving with Language Models. CoRR abs/2312.06585 (2023) - 2022
- [c52]Luke Metz, C. Daniel Freeman, James Harrison, Niru Maheswaranathan, Jascha Sohl-Dickstein:
Practical Tradeoffs between Memory, Compute, and Performance in Learned Optimizers. CoLLAs 2022: 142-164 - [c51]Jiri Hron, Roman Novak, Jeffrey Pennington, Jascha Sohl-Dickstein:
Wide Bayesian neural networks have a simple weight posterior: theory and accelerated sampling. ICML 2022: 8926-8945 - [c50]Roman Novak, Jascha Sohl-Dickstein, Samuel S. Schoenholz:
Fast Finite Width Neural Tangent Kernel. ICML 2022: 17018-17044 - [c49]Paul Vicol, Luke Metz, Jascha Sohl-Dickstein:
Unbiased Gradient Estimation in Unrolled Computation Graphs with Persistent Evolution Strategies (Extended Abstract). IJCAI 2022: 5354-5358 - [c48]James Harrison, Luke Metz, Jascha Sohl-Dickstein:
A Closer Look at Learned Optimization: Stability, Robustness, and Inductive Biases. NeurIPS 2022 - [i73]Luke Metz, C. Daniel Freeman, James Harrison, Niru Maheswaranathan, Jascha Sohl-Dickstein:
Practical tradeoffs between memory, compute, and performance in learned optimizers. CoRR abs/2203.11860 (2022) - [i72]Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R. Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, Agnieszka Kluska, Aitor Lewkowycz, Akshat Agarwal, Alethea Power, Alex Ray, Alex Warstadt, Alexander W. Kocurek, Ali Safaya, Ali Tazarv, Alice Xiang, Alicia Parrish, Allen Nie, Aman Hussain, Amanda Askell, Amanda Dsouza, Ambrose Slone, Ameet Rahane, Anantharaman S. Iyer, Anders Andreassen, Andrea Madotto, Andrea Santilli, Andreas Stuhlmüller, Andrew M. Dai, Andrew La, Andrew K. Lampinen, Andy Zou, Angela Jiang, Angelica Chen, Anh Vuong, Animesh Gupta, Anna Gottardi, Antonio Norelli, Anu Venkatesh, Arash Gholamidavoodi, Arfa Tabassum, Arul Menezes, Arun Kirubarajan, Asher Mullokandov, Ashish Sabharwal, Austin Herrick, Avia Efrat, Aykut Erdem, Ayla Karakas, B. Ryan Roberts, Bao Sheng Loe, Barret Zoph, Bartlomiej Bojanowski, Batuhan Özyurt, Behnam Hedayatnia, Behnam Neyshabur, Benjamin Inden, Benno Stein, Berk Ekmekci, Bill Yuchen Lin, Blake Howald, Bryan Orinion, Cameron Diao, Cameron Dour, Catherine Stinson, Cedrick Argueta, Cèsar Ferri Ramírez, Chandan Singh, Charles Rathkopf, Chenlin Meng, Chitta Baral, Chiyu Wu, Chris Callison-Burch, Chris Waites, Christian Voigt, Christopher D. Manning, Christopher Potts, Cindy Ramirez, Clara E. Rivera, Clemencia Siro, Colin Raffel, Courtney Ashcraft, Cristina Garbacea, Damien Sileo, Dan Garrette, Dan Hendrycks, Dan Kilman, Dan Roth, Daniel Freeman, Daniel Khashabi, Daniel Levy, Daniel Moseguí González, Danielle Perszyk, Danny Hernandez, Danqi Chen, Daphne Ippolito, Dar Gilboa, David Dohan, David Drakard, David Jurgens, Debajyoti Datta, Deep Ganguli, Denis Emelin, Denis Kleyko, Deniz Yuret, Derek Chen, Derek Tam, Dieuwke Hupkes, Diganta Misra, Dilyar Buzan, Dimitri Coelho Mollo, Diyi Yang, Dong-Ho Lee, Dylan Schrader, Ekaterina Shutova, Ekin Dogus Cubuk, Elad Segal, Eleanor Hagerman, Elizabeth Barnes, Elizabeth Donoway, Ellie Pavlick, Emanuele Rodolà, Emma Lam, Eric Chu, Eric Tang, Erkut Erdem, Ernie Chang, Ethan A. Chi, Ethan Dyer, Ethan J. Jerzak, Ethan Kim, Eunice Engefu Manyasi, Evgenii Zheltonozhskii, Fanyue Xia, Fatemeh Siar, Fernando Martínez-Plumed, Francesca Happé, François Chollet, Frieda Rong, Gaurav Mishra, Genta Indra Winata, Gerard de Melo, Germán Kruszewski, Giambattista Parascandolo, Giorgio Mariani, Gloria Wang, Gonzalo Jaimovitch-López, Gregor Betz, Guy Gur-Ari, Hana Galijasevic, Hannah Kim, Hannah Rashkin, Hannaneh Hajishirzi, Harsh Mehta, Hayden Bogar, Henry Shevlin, Hinrich Schütze, Hiromu Yakura, Hongming Zhang, Hugh Mee Wong, Ian Ng, Isaac Noble, Jaap Jumelet, Jack Geissinger, Jackson Kernion, Jacob Hilton, Jaehoon Lee, Jaime Fernández Fisac, James B. Simon, James Koppel, James Zheng, James Zou, Jan Kocon, Jana Thompson, Janelle Wingfield, Jared Kaplan, Jarema Radom, Jascha Sohl-Dickstein, Jason Phang, Jason Wei, Jason Yosinski, Jekaterina Novikova, Jelle Bosscher, Jennifer Marsh, Jeremy Kim, Jeroen Taal, Jesse H. Engel, Jesujoba Alabi, Jiacheng Xu, Jiaming Song, Jillian Tang, Joan Waweru, John Burden, John Miller, John U. Balis, Jonathan Batchelder, Jonathan Berant, Jörg Frohberg, Jos Rozen, José Hernández-Orallo, Joseph Boudeman, Joseph Guerr, Joseph Jones, Joshua B. Tenenbaum, Joshua S. Rule, Joyce Chua, Kamil Kanclerz, Karen Livescu, Karl Krauth, Karthik Gopalakrishnan, Katerina Ignatyeva, Katja Markert, Kaustubh D. 
Dhole, Kevin Gimpel, Kevin Omondi, Kory Mathewson, Kristen Chiafullo, Ksenia Shkaruta, Kumar Shridhar, Kyle McDonell, Kyle Richardson, Laria Reynolds, Leo Gao, Li Zhang, Liam Dugan, Lianhui Qin, Lidia Contreras Ochando, Louis-Philippe Morency, Luca Moschella, Lucas Lam, Lucy Noble, Ludwig Schmidt, Luheng He, Luis Oliveros Colón, Luke Metz, Lütfi Kerem Senel, Maarten Bosma, Maarten Sap, Maartje ter Hoeve, Maheen Farooqi, Manaal Faruqui, Mantas Mazeika, Marco Baturan, Marco Marelli, Marco Maru, María José Ramírez-Quintana, Marie Tolkiehn, Mario Giulianelli, Martha Lewis, Martin Potthast, Matthew L. Leavitt, Matthias Hagen, Mátyás Schubert, Medina Baitemirova, Melody Arnaud, Melvin McElrath, Michael A. Yee, Michael Cohen, Michael Gu, Michael I. Ivanitskiy, Michael Starritt, Michael Strube, Michal Swedrowski, Michele Bevilacqua, Michihiro Yasunaga, Mihir Kale, Mike Cain, Mimee Xu, Mirac Suzgun, Mitch Walker, Mo Tiwari, Mohit Bansal, Moin Aminnaseri, Mor Geva, Mozhdeh Gheini, Mukund Varma T., Nanyun Peng, Nathan A. Chi, Nayeon Lee, Neta Gur-Ari Krakover, Nicholas Cameron, Nicholas Roberts, Nick Doiron, Nicole Martinez, Nikita Nangia, Niklas Deckers, Niklas Muennighoff, Nitish Shirish Keskar, Niveditha Iyer, Noah Constant, Noah Fiedel, Nuan Wen, Oliver Zhang, Omar Agha, Omar Elbaghdadi, Omer Levy, Owain Evans, Pablo Antonio Moreno Casares, Parth Doshi, Pascale Fung, Paul Pu Liang, Paul Vicol, Pegah Alipoormolabashi, Peiyuan Liao, Percy Liang, Peter Chang, Peter Eckersley, Phu Mon Htut, Pinyu Hwang, Piotr Milkowski, Piyush Patil, Pouya Pezeshkpour, Priti Oli, Qiaozhu Mei, Qing Lyu, Qinlang Chen, Rabin Banjade, Rachel Etta Rudolph, Raefer Gabriel, Rahel Habacker, Ramon Risco, Raphaël Millière, Rhythm Garg, Richard Barnes, Rif A. Saurous, Riku Arakawa, Robbe Raymaekers, Robert Frank, Rohan Sikand, Roman Novak, Roman Sitelew, Ronan LeBras, Rosanne Liu, Rowan Jacobs, Rui Zhang, Ruslan Salakhutdinov, Ryan Chi, Ryan Lee, Ryan Stovall, Ryan Teehan, Rylan Yang, Sahib Singh, Saif M. Mohammad, Sajant Anand, Sam Dillavou, Sam Shleifer, Sam Wiseman, Samuel Gruetter, Samuel R. Bowman, Samuel S. Schoenholz, Sanghyun Han, Sanjeev Kwatra, Sarah A. Rous, Sarik Ghazarian, Sayan Ghosh, Sean Casey, Sebastian Bischoff, Sebastian Gehrmann, Sebastian Schuster, Sepideh Sadeghi, Shadi Hamdan, Sharon Zhou, Shashank Srivastava, Sherry Shi, Shikhar Singh, Shima Asaadi, Shixiang Shane Gu, Shubh Pachchigar, Shubham Toshniwal, Shyam Upadhyay, Shyamolima (Shammie) Debnath, Siamak Shakeri, Simon Thormeyer, Simone Melzi, Siva Reddy, Sneha Priscilla Makini, Soo-Hwan Lee, Spencer Torene, Sriharsha Hatwar, Stanislas Dehaene, Stefan Divic, Stefano Ermon, Stella Biderman, Stephanie Lin, Stephen Prasad, Steven T. Piantadosi, Stuart M. Shieber, Summer Misherghi, Svetlana Kiritchenko, Swaroop Mishra, Tal Linzen, Tal Schuster, Tao Li, Tao Yu, Tariq Ali, Tatsu Hashimoto, Te-Lin Wu, Théo Desbordes, Theodore Rothschild, Thomas Phan, Tianle Wang, Tiberius Nkinyili, Timo Schick, Timofei Kornev, Titus Tunduny, Tobias Gerstenberg, Trenton Chang, Trishala Neeraj, Tushar Khot, Tyler Shultz, Uri Shaham, Vedant Misra, Vera Demberg, Victoria Nyamai, Vikas Raunak, Vinay V. 
Ramasesh, Vinay Uday Prabhu, Vishakh Padmakumar, Vivek Srikumar, William Fedus, William Saunders, William Zhang, Wout Vossen, Xiang Ren, Xiaoyu Tong, Xinran Zhao, Xinyi Wu, Xudong Shen, Yadollah Yaghoobzadeh, Yair Lakretz, Yangqiu Song, Yasaman Bahri, Yejin Choi, Yichi Yang, Yiding Hao, Yifu Chen, Yonatan Belinkov, Yu Hou, Yufang Hou, Yuntao Bai, Zachary Seid, Zhuoye Zhao, Zijian Wang, Zijie J. Wang, Zirui Wang, Ziyi Wu:
Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models. CoRR abs/2206.04615 (2022) - [i71]Jiri Hron, Roman Novak, Jeffrey Pennington, Jascha Sohl-Dickstein:
Wide Bayesian neural networks have a simple weight posterior: theory and accelerated sampling. CoRR abs/2206.07673 (2022) - [i70]Roman Novak, Jascha Sohl-Dickstein, Samuel S. Schoenholz:
Fast Finite Width Neural Tangent Kernel. CoRR abs/2206.08720 (2022) - [i69]David Dohan, Winnie Xu, Aitor Lewkowycz, Jacob Austin, David Bieber, Raphael Gontijo Lopes, Yuhuai Wu, Henryk Michalewski, Rif A. Saurous, Jascha Sohl-Dickstein, Kevin Murphy, Charles Sutton:
Language Model Cascades. CoRR abs/2207.10342 (2022) - [i68]James Harrison, Luke Metz, Jascha Sohl-Dickstein:
A Closer Look at Learned Optimization: Stability, Robustness, and Inductive Biases. CoRR abs/2209.11208 (2022) - [i67]Luke Metz, James Harrison, C. Daniel Freeman, Amil Merchant, Lucas Beyer, James Bradbury, Naman Agrawal, Ben Poole, Igor Mordatch, Adam Roberts, Jascha Sohl-Dickstein:
VeLO: Training Versatile Learned Optimizers by Scaling Up. CoRR abs/2211.09760 (2022) - [i66]Louis Kirsch, James Harrison, Jascha Sohl-Dickstein, Luke Metz:
General-Purpose In-Context Learning by Meta-Learning Transformers. CoRR abs/2212.04458 (2022) - 2021
- [c47]Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, Ben Poole:
Score-Based Generative Modeling through Stochastic Differential Equations. ICLR 2021 - [c46]Paul Vicol, Luke Metz, Jascha Sohl-Dickstein:
Unbiased Gradient Estimation in Unrolled Computation Graphs with Persistent Evolution Strategies. ICML 2021: 10553-10563 - [c45]Neha S. Wadia, Daniel Duckworth, Samuel S. Schoenholz, Ethan Dyer, Jascha Sohl-Dickstein:
Whitening and Second Order Optimization Both Make Information in the Dataset Unusable During Training, and Can Reduce or Prevent Generalization. ICML 2021: 10617-10629 - [c44]Niru Maheswaranathan, David Sussillo, Luke Metz, Ruoxi Sun, Jascha Sohl-Dickstein:
Reverse engineering learned optimizers reveals known and novel mechanisms. NeurIPS 2021: 19910-19922 - [i65]Luke Metz, C. Daniel Freeman, Niru Maheswaranathan, Jascha Sohl-Dickstein:
Training Learned Optimizers with Randomly Initialized Learned Optimizers. CoRR abs/2101.07367 (2021) - [i64]James Martens, Andy Ballard, Guillaume Desjardins, Grzegorz Swirszcz, Valentin Dalibard, Jascha Sohl-Dickstein, Samuel S. Schoenholz:
Rapid training of deep neural networks without skip connections or normalization layers using Deep Kernel Shaping. CoRR abs/2110.01765 (2021) - [i63]Kaustubh D. Dhole, Varun Gangal, Sebastian Gehrmann, Aadesh Gupta, Zhenhao Li, Saad Mahamood, Abinaya Mahendiran, Simon Mille, Ashish Srivastava, Samson Tan, Tongshuang Wu, Jascha Sohl-Dickstein, Jinho D. Choi, Eduard H. Hovy, Ondrej Dusek, Sebastian Ruder, Sajant Anand, Nagender Aneja, Rabin Banjade, Lisa Barthe, Hanna Behnke, Ian Berlot-Attwell, Connor Boyle, Caroline Brun, Marco Antonio Sobrevilla Cabezudo, Samuel Cahyawijaya, Emile Chapuis, Wanxiang Che, Mukund Choudhary, Christian Clauss, Pierre Colombo, Filip Cornell, Gautier Dagan, Mayukh Das, Tanay Dixit, Thomas Dopierre, Paul-Alexis Dray, Suchitra Dubey, Tatiana Ekeinhor, Marco Di Giovanni, Tanya Goyal, Rishabh Gupta, Louanes Hamla, Sang Han, Fabrice Harel-Canada, Antoine Honore, Ishan Jindal, Przemyslaw K. Joniak, Denis Kleyko, Venelin Kovatchev, Kalpesh Krishna, Ashutosh Kumar, Stefan Langer, Seungjae Ryan Lee, Corey James Levinson, Hualou Liang, Kaizhao Liang, Zhexiong Liu, Andrey Lukyanenko, Vukosi Marivate, Gerard de Melo, Simon Meoni, Maxime Meyer, Afnan Mir, Nafise Sadat Moosavi, Niklas Muennighoff, Timothy Sum Hon Mun, Kenton Murray, Marcin Namysl, Maria Obedkova, Priti Oli, Nivranshu Pasricha, Jan Pfister, Richard Plant, Vinay Prabhu, Vasile Pais, Libo Qin, Shahab Raji, Pawan Kumar Rajpoot, Vikas Raunak, Roy Rinberg, Nicholas Roberts, Juan Diego Rodriguez, Claude Roux, Paulo Henrique Santos Vasconcellos, Ananya B. Sai, Robin M. Schmidt, Thomas Scialom, Tshephisho Sefara, Saqib Shamsi, Xudong Shen, Yiwen Shi, Haoyue Shi, Anna Shvets, Nick Siegel, Damien Sileo, Jamie Simon, Chandan Singh, Roman Sitelew, Priyank Soni, Taylor Sorensen, William Soto, Aman Srivastava, K. V. Aditya Srivatsa, Tony Sun, Mukund Varma T., A. Tabassum, Fiona Anting Tan, Ryan Teehan, Mo Tiwari, Marie Tolkiehn, Athena Wang, Zijian Wang, Zijie J. Wang, Gloria Wang, Fuxuan Wei, Bryan Wilie, Genta Indra Winata, Xinyi Wu, Witold Wydmanski, Tianbao Xie, Usama Yaseen, Michael A. Yee, Jing Zhang, Yue Zhang:
NL-Augmenter: A Framework for Task-Sensitive Natural Language Augmentation. CoRR abs/2112.02721 (2021) - [i62]Paul Vicol, Luke Metz, Jascha Sohl-Dickstein:
Unbiased Gradient Estimation in Unrolled Computation Graphs with Persistent Evolution Strategies. CoRR abs/2112.13835 (2021) - 2020
- [c43]Roman Novak, Lechao Xiao, Jiri Hron, Jaehoon Lee, Alexander A. Alemi, Jascha Sohl-Dickstein, Samuel S. Schoenholz:
Neural Tangents: Fast and Easy Infinite Neural Networks in Python. ICLR 2020 - [c42]Jiri Hron, Yasaman Bahri, Jascha Sohl-Dickstein, Roman Novak:
Infinite attention: NNGP and NTK for deep attention networks. ICML 2020: 4376-4386 - [c41]Tong Che, Ruixiang Zhang, Jascha Sohl-Dickstein, Hugo Larochelle, Liam Paull, Yuan Cao, Yoshua Bengio:
Your GAN is Secretly an Energy-based Model and You Should Use Discriminator Driven Latent Sampling. NeurIPS 2020 - [c40]Jaehoon Lee, Samuel S. Schoenholz, Jeffrey Pennington, Ben Adlam, Lechao Xiao, Roman Novak, Jascha Sohl-Dickstein:
Finite Versus Infinite Neural Networks: an Empirical Study. NeurIPS 2020 - [i61]Jascha Sohl-Dickstein, Roman Novak, Samuel S. Schoenholz, Jaehoon Lee:
On the infinite width limit of neural networks with a standard parameterization. CoRR abs/2001.07301 (2020) - [i60]Luke Metz, Niru Maheswaranathan, Ruoxi Sun, C. Daniel Freeman, Ben Poole, Jascha Sohl-Dickstein:
Using a thousand optimization tasks to learn hyperparameter search strategies. CoRR abs/2002.11887 (2020) - [i59]Aitor Lewkowycz, Yasaman Bahri, Ethan Dyer, Jascha Sohl-Dickstein, Guy Gur-Ari:
The large learning rate phase of deep learning: the catapult mechanism. CoRR abs/2003.02218 (2020) - [i58]Tong Che, Ruixiang Zhang, Jascha Sohl-Dickstein, Hugo Larochelle, Liam Paull, Yuan Cao, Yoshua Bengio:
Your GAN is Secretly an Energy-based Model and You Should use Discriminator Driven Latent Sampling. CoRR abs/2003.06060 (2020) - [i57]Jiri Hron, Yasaman Bahri, Jascha Sohl-Dickstein, Roman Novak:
Infinite attention: NNGP and NTK for deep attention networks. CoRR abs/2006.10540 (2020) - [i56]Jiri Hron, Yasaman Bahri, Roman Novak, Jeffrey Pennington, Jascha Sohl-Dickstein:
Exact posterior distributions of wide Bayesian neural networks. CoRR abs/2006.10541 (2020) - [i55]Jascha Sohl-Dickstein, Peter Battaglino, Michael Robert DeWeese:
A new method for parameter estimation in probabilistic models: Minimum probability flow. CoRR abs/2007.09240 (2020) - [i54]Jaehoon Lee, Samuel S. Schoenholz, Jeffrey Pennington, Ben Adlam, Lechao Xiao, Roman Novak, Jascha Sohl-Dickstein:
Finite Versus Infinite Neural Networks: an Empirical Study. CoRR abs/2007.15801 (2020) - [i53]Neha S. Wadia, Daniel Duckworth, Samuel S. Schoenholz, Ethan Dyer, Jascha Sohl-Dickstein:
Whitening and second order optimization both destroy information about the dataset, and can make generalization impossible. CoRR abs/2008.07545 (2020) - [i52]Luke Metz, Niru Maheswaranathan, C. Daniel Freeman, Ben Poole, Jascha Sohl-Dickstein:
Tasks, stability, architecture, and compute: Training more effective learned optimizers, and using them to train themselves. CoRR abs/2009.11243 (2020) - [i51]Vinay Rao, Jascha Sohl-Dickstein:
Is Batch Norm unique? An empirical investigation and prescription to emulate the best properties of common normalizers without batch dependence. CoRR abs/2010.10687 (2020) - [i50]Niru Maheswaranathan, David Sussillo, Luke Metz, Ruoxi Sun, Jascha Sohl-Dickstein:
Reverse engineering learned optimizers reveals known and novel mechanisms. CoRR abs/2011.02159 (2020) - [i49]Daniel S. Park, Jaehoon Lee, Daiyi Peng, Yuan Cao, Jascha Sohl-Dickstein:
Towards NNGP-guided Neural Architecture Search. CoRR abs/2011.06006 (2020) - [i48]Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, Ben Poole:
Score-Based Generative Modeling through Stochastic Differential Equations. CoRR abs/2011.13456 (2020) - [i47]Michael Laskin, Luke Metz, Seth Nabarrao, Mark Saroufim, Badreddine Noune, Carlo Luschi, Jascha Sohl-Dickstein, Pieter Abbeel:
Parallel Training of Deep Networks with Local Updates. CoRR abs/2012.03837 (2020)
2010 – 2019
- 2019
- [j4]Christopher J. Shallue, Jaehoon Lee, Joseph M. Antognini, Jascha Sohl-Dickstein, Roy Frostig, George E. Dahl:
Measuring the Effects of Data Parallelism on Neural Network Training. J. Mach. Learn. Res. 20: 112:1-112:49 (2019) - [c39]Laurent Dinh, Jascha Sohl-Dickstein, Razvan Pascanu, Hugo Larochelle:
A RAD approach to deep mixture models. DGS@ICLR 2019 - [c38]Gamaleldin F. Elsayed, Ian J. Goodfellow, Jascha Sohl-Dickstein:
Adversarial Reprogramming of Neural Networks. ICLR (Poster) 2019 - [c37]Luke Metz, Niru Maheswaranathan, Brian Cheung, Jascha Sohl-Dickstein:
Meta-Learning Update Rules for Unsupervised Representation Learning. ICLR 2019 - [c36]Roman Novak, Lechao Xiao, Yasaman Bahri, Jaehoon Lee, Greg Yang, Jiri Hron, Daniel A. Abolafia, Jeffrey Pennington, Jascha Sohl-Dickstein:
Bayesian Deep Convolutional Networks with Many Channels are Gaussian Processes. ICLR (Poster) 2019 - [c35]Greg Yang, Jeffrey Pennington, Vinay Rao, Jascha Sohl-Dickstein, Samuel S. Schoenholz:
A Mean Field Theory of Batch Normalization. ICLR (Poster) 2019 - [c34]Niru Maheswaranathan, Luke Metz, George Tucker, Dami Choi, Jascha Sohl-Dickstein:
Guided evolutionary strategies: augmenting random search with surrogate gradients. ICML 2019: 4264-4273 - [c33]Luke Metz, Niru Maheswaranathan, Jeremy Nixon, C. Daniel Freeman, Jascha Sohl-Dickstein:
Understanding and correcting pathologies in the training of learned optimizers. ICML 2019: 4556-4565 - [c32]Daniel S. Park, Jascha Sohl-Dickstein, Quoc V. Le, Samuel L. Smith:
The Effect of Network Width on Stochastic Gradient Descent and Generalization: an Empirical Study. ICML 2019: 5042-5051 - [c31]Mahdi Karami, Dale Schuurmans, Jascha Sohl-Dickstein, Laurent Dinh, Daniel Duckworth:
Invertible Convolutional Flow. NeurIPS 2019: 5636-5646 - [c30]Jaehoon Lee, Lechao Xiao, Samuel S. Schoenholz, Yasaman Bahri, Roman Novak, Jascha Sohl-Dickstein, Jeffrey Pennington:
Wide Neural Networks of Any Depth Evolve as Linear Models Under Gradient Descent. NeurIPS 2019: 8570-8581 - [i46]Jascha Sohl-Dickstein, Kenji Kawaguchi:
Eliminating all bad Local Minima from Loss Landscapes without even adding an Extra Unit. CoRR abs/1901.03909 (2019) - [i45]Jaehoon Lee, Lechao Xiao, Samuel S. Schoenholz, Yasaman Bahri, Jascha Sohl-Dickstein, Jeffrey Pennington:
Wide Neural Networks of Any Depth Evolve as Linear Models Under Gradient Descent. CoRR abs/1902.06720 (2019) - [i44]Greg Yang, Jeffrey Pennington, Vinay Rao, Jascha Sohl-Dickstein, Samuel S. Schoenholz:
A Mean Field Theory of Batch Normalization. CoRR abs/1902.08129 (2019) - [i43]Laurent Dinh, Jascha Sohl-Dickstein, Razvan Pascanu, Hugo Larochelle:
A RAD approach to deep mixture models. CoRR abs/1903.07714 (2019) - [i42]Daniel S. Park, Jascha Sohl-Dickstein, Quoc V. Le, Samuel L. Smith:
The Effect of Network Width on Stochastic Gradient Descent and Generalization: an Empirical Study. CoRR abs/1905.03776 (2019) - [i41]Luke Metz, Niru Maheswaranathan, Jonathon Shlens, Jascha Sohl-Dickstein, Ekin D. Cubuk:
Using learned optimizers to make models robust to input noise. CoRR abs/1906.03367 (2019) - [i40]Stephan Hoyer, Jascha Sohl-Dickstein, Sam Greydanus:
Neural reparameterization improves structural optimization. CoRR abs/1909.04240 (2019) - [i39]Roman Novak, Lechao Xiao, Jiri Hron, Jaehoon Lee, Alexander A. Alemi, Jascha Sohl-Dickstein, Samuel S. Schoenholz:
Neural Tangents: Fast and Easy Infinite Neural Networks in Python. CoRR abs/1912.02803 (2019) - 2018
- [c29]Jaehoon Lee, Yasaman Bahri, Roman Novak, Samuel S. Schoenholz, Jeffrey Pennington, Jascha Sohl-Dickstein:
Deep Neural Networks as Gaussian Processes. ICLR (Poster) 2018 - [c28]Daniel Levy, Matthew D. Hoffman, Jascha Sohl-Dickstein:
Generalizing Hamiltonian Monte Carlo with Neural Networks. ICLR (Poster) 2018 - [c27]Luke Metz, Niru Maheswaranathan, Brian Cheung, Jascha Sohl-Dickstein:
Learning to Learn Without Labels. ICLR (Workshop) 2018 - [c26]Roman Novak, Yasaman Bahri, Daniel A. Abolafia, Jeffrey Pennington, Jascha Sohl-Dickstein:
Sensitivity and Generalization in Neural Networks: an Empirical Study. ICLR (Poster) 2018 - [c25]Lechao Xiao, Yasaman Bahri, Jascha Sohl-Dickstein, Samuel S. Schoenholz, Jeffrey Pennington:
Dynamical Isometry and a Mean Field Theory of CNNs: How to Train 10,000-Layer Vanilla Convolutional Neural Networks. ICML 2018: 5389-5398 - [c24]Gamaleldin F. Elsayed, Shreya Shankar, Brian Cheung, Nicolas Papernot, Alexey Kurakin, Ian J. Goodfellow, Jascha Sohl-Dickstein:
Adversarial Examples that Fool both Computer Vision and Time-Limited Humans. NeurIPS 2018: 3914-3924 - [c23]Joseph M. Antognini, Jascha Sohl-Dickstein:
PCA of high dimensional random walks with comparison to neural network training. NeurIPS 2018: 10328-10337 - [i38]Gamaleldin F. Elsayed, Shreya Shankar, Brian Cheung, Nicolas Papernot, Alex Kurakin, Ian J. Goodfellow, Jascha Sohl-Dickstein:
Adversarial Examples that Fool both Human and Computer Vision. CoRR abs/1802.08195 (2018) - [i37]Roman Novak, Yasaman Bahri, Daniel A. Abolafia, Jeffrey Pennington, Jascha Sohl-Dickstein:
Sensitivity and Generalization in Neural Networks: an Empirical Study. CoRR abs/1802.08760 (2018) - [i36]Luke Metz, Niru Maheswaranathan, Brian Cheung, Jascha Sohl-Dickstein:
Learning Unsupervised Learning Rules. CoRR abs/1804.00222 (2018) - [i35]Lechao Xiao, Yasaman Bahri, Jascha Sohl-Dickstein, Samuel S. Schoenholz, Jeffrey Pennington:
Dynamical Isometry and a Mean Field Theory of CNNs: How to Train 10,000-Layer Vanilla Convolutional Neural Networks. CoRR abs/1806.05393 (2018) - [i34]Joseph M. Antognini, Jascha Sohl-Dickstein:
PCA of high dimensional random walks with comparison to neural network training. CoRR abs/1806.08805 (2018) - [i33]Samuel L. Smith, Daniel Duckworth, Quoc V. Le, Jascha Sohl-Dickstein:
Stochastic natural gradient descent draws posterior samples in function space. CoRR abs/1806.09597 (2018) - [i32]Niru Maheswaranathan, Luke Metz, George Tucker, Jascha Sohl-Dickstein:
Guided evolutionary strategies: escaping the curse of dimensionality in random search. CoRR abs/1806.10230 (2018) - [i31]Gamaleldin F. Elsayed, Ian J. Goodfellow, Jascha Sohl-Dickstein:
Adversarial Reprogramming of Neural Networks. CoRR abs/1806.11146 (2018) - [i30]Roman Novak, Lechao Xiao, Jaehoon Lee, Yasaman Bahri, Daniel A. Abolafia, Jeffrey Pennington, Jascha Sohl-Dickstein:
Bayesian Convolutional Neural Networks with Many Channels are Gaussian Processes. CoRR abs/1810.05148 (2018) - [i29]Luke Metz, Niru Maheswaranathan, Jeremy Nixon, C. Daniel Freeman, Jascha Sohl-Dickstein:
Learned optimizers that outperform SGD on wall-clock and test loss. CoRR abs/1810.10180 (2018) - [i28]Christopher J. Shallue, Jaehoon Lee, Joseph M. Antognini, Jascha Sohl-Dickstein, Roy Frostig, George E. Dahl:
Measuring the Effects of Data Parallelism on Neural Network Training. CoRR abs/1811.03600 (2018) - 2017
- [j3]Badr F. Albanna, Christopher Hillar, Jascha Sohl-Dickstein, Michael Robert DeWeese:
Minimum and Maximum Entropy Distributions for Binary Systems with Known Means and Pairwise Correlations. Entropy 19(8): 427 (2017) - [c22]Jasmine Collins, Jascha Sohl-Dickstein, David Sussillo:
Capacity and Trainability in Recurrent Neural Networks. ICLR (Poster) 2017 - [c21]Laurent Dinh, Jascha Sohl-Dickstein, Samy Bengio:
Density estimation using Real NVP. ICLR (Poster) 2017 - [c20]Justin Gilmer, Colin Raffel, Samuel S. Schoenholz, Maithra Raghu, Jascha Sohl-Dickstein:
Explaining the Learning Dynamics of Direct Feedback Alignment. ICLR (Workshop) 2017 - [c19]Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein:
Unrolled Generative Adversarial Networks. ICLR (Poster) 2017 - [c18]Samuel S. Schoenholz, Justin Gilmer, Surya Ganguli, Jascha Sohl-Dickstein:
Deep Information Propagation. ICLR (Poster) 2017 - [c17]George Tucker, Andriy Mnih, Chris J. Maddison, Jascha Sohl-Dickstein:
REBAR: Low-variance, unbiased gradient estimates for discrete latent variable models. ICLR (Workshop) 2017 - [c16]Jakob N. Foerster, Justin Gilmer, Jascha Sohl-Dickstein, Jan Chorowski, David Sussillo:
Input Switched Affine Networks: An RNN Architecture Designed for Interpretability. ICML 2017: 1136-1145 - [c15]Maithra Raghu, Ben Poole, Jon M. Kleinberg, Surya Ganguli, Jascha Sohl-Dickstein:
On the Expressive Power of Deep Neural Networks. ICML 2017: 2847-2854 - [c14]Olga Wichrowska, Niru Maheswaranathan, Matthew W. Hoffman, Sergio Gomez Colmenarejo, Misha Denil, Nando de Freitas, Jascha Sohl-Dickstein:
Learned Optimizers that Scale and Generalize. ICML 2017: 3751-3760 - [c13]George Tucker, Andriy Mnih, Chris J. Maddison, Dieterich Lawson, Jascha Sohl-Dickstein:
REBAR: Low-variance, unbiased gradient estimates for discrete latent variable models. NIPS 2017: 2627-2636 - [c12]Maithra Raghu, Justin Gilmer, Jason Yosinski, Jascha Sohl-Dickstein:
SVCCA: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability. NIPS 2017: 6076-6085 - [i27]Olga Wichrowska, Niru Maheswaranathan, Matthew W. Hoffman, Sergio Gomez Colmenarejo, Misha Denil, Nando de Freitas, Jascha Sohl-Dickstein:
Learned Optimizers that Scale and Generalize. CoRR abs/1703.04813 (2017) - [i26]George Tucker, Andriy Mnih, Chris J. Maddison, Jascha Sohl-Dickstein:
REBAR: Low-variance, unbiased gradient estimates for discrete latent variable models. CoRR abs/1703.07370 (2017) - [i25]Maithra Raghu, Justin Gilmer, Jason Yosinski, Jascha Sohl-Dickstein:
SVCCA: Singular Vector Canonical Correlation Analysis for Deep Understanding and Improvement. CoRR abs/1706.05806 (2017) - [i24]Samuel S. Schoenholz, Jeffrey Pennington, Jascha Sohl-Dickstein:
A Correspondence Between Random Neural Networks and Statistical Field Theory. CoRR abs/1710.06570 (2017) - [i23]Jaehoon Lee, Yasaman Bahri, Roman Novak, Samuel S. Schoenholz, Jeffrey Pennington, Jascha Sohl-Dickstein:
Deep Neural Networks as Gaussian Processes. CoRR abs/1711.00165 (2017) - [i22]Daniel Levy, Matthew D. Hoffman, Jascha Sohl-Dickstein:
Generalizing Hamiltonian Monte Carlo with Neural Networks. CoRR abs/1711.09268 (2017) - 2016
- [c11]Ben Poole, Subhaneil Lahiri, Maithra Raghu, Jascha Sohl-Dickstein, Surya Ganguli:
Exponential expressivity in deep neural networks through transient chaos. NIPS 2016: 3360-3368 - [i21]Subhaneil Lahiri, Jascha Sohl-Dickstein, Surya Ganguli:
A universal tradeoff between power, precision and speed in physical communication. CoRR abs/1603.07758 (2016) - [i20]Laurent Dinh, Jascha Sohl-Dickstein, Samy Bengio:
Density estimation using Real NVP. CoRR abs/1605.08803 (2016) - [i19]Maithra Raghu, Ben Poole, Jon M. Kleinberg, Surya Ganguli, Jascha Sohl-Dickstein:
On the expressive power of deep neural networks. CoRR abs/1606.05336 (2016) - [i18]Ben Poole, Subhaneil Lahiri, Maithra Raghu, Jascha Sohl-Dickstein, Surya Ganguli:
Exponential expressivity in deep neural networks through transient chaos. CoRR abs/1606.05340 (2016) - [i17]Samuel S. Schoenholz, Justin Gilmer, Surya Ganguli, Jascha Sohl-Dickstein:
Deep Information Propagation. CoRR abs/1611.01232 (2016) - [i16]Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein:
Unrolled Generative Adversarial Networks. CoRR abs/1611.02163 (2016) - [i15]Maithra Raghu, Ben Poole, Jon M. Kleinberg, Surya Ganguli, Jascha Sohl-Dickstein:
Survey of Expressivity in Deep Neural Networks. CoRR abs/1611.08083 (2016) - [i14]Jakob N. Foerster, Justin Gilmer, Jan Chorowski, Jascha Sohl-Dickstein, David Sussillo:
Intelligible Language Modeling with Input Switched Affine Networks. CoRR abs/1611.09434 (2016) - [i13]Jasmine Collins, Jascha Sohl-Dickstein, David Sussillo:
Capacity and Trainability in Recurrent Neural Networks. CoRR abs/1611.09913 (2016) - [i12]Ben Poole, Alexander A. Alemi, Jascha Sohl-Dickstein, Anelia Angelova:
Improved generator objectives for GANs. CoRR abs/1612.02780 (2016) - 2015
- [j2]Jascha Sohl-Dickstein, Santani Teng, Benjamin M. Gaub, Chris C. Rodgers, Crystal Li, Michael Robert DeWeese, Nicol S. Harper:
A Device for Human Ultrasonic Echolocation. IEEE Trans. Biomed. Eng. 62(6): 1526-1534 (2015) - [c10]Jascha Sohl-Dickstein, Eric A. Weiss, Niru Maheswaranathan, Surya Ganguli:
Deep Unsupervised Learning using Nonequilibrium Thermodynamics. ICML 2015: 2256-2265 - [c9]Chris Piech, Jonathan Bassen, Jonathan Huang, Surya Ganguli, Mehran Sahami, Leonidas J. Guibas, Jascha Sohl-Dickstein:
Deep Knowledge Tracing. NIPS 2015: 505-513 - [i11]Jascha Sohl-Dickstein, Eric A. Weiss, Niru Maheswaranathan, Surya Ganguli:
Deep Unsupervised Learning using Nonequilibrium Thermodynamics. CoRR abs/1503.03585 (2015) - [i10]Jascha Sohl-Dickstein, Diederik P. Kingma:
Technical Note on Equivalence Between Recurrent Neural Network Time Series Models and Variational Bayesian Models. CoRR abs/1504.08025 (2015) - [i9]Chris Piech, Jonathan Spencer, Jonathan Huang, Surya Ganguli, Mehran Sahami, Leonidas J. Guibas, Jascha Sohl-Dickstein:
Deep Knowledge Tracing. CoRR abs/1506.05908 (2015) - 2014
- [j1]Urs Köster, Jascha Sohl-Dickstein, Charles M. Gray, Bruno A. Olshausen:
Modeling Higher-Order Correlations within Cortical Microcolumns. PLoS Comput. Biol. 10(7) (2014) - [c8]Jascha Sohl-Dickstein, Ben Poole, Surya Ganguli:
Fast large-scale optimization by unifying stochastic gradient and quasi-Newton methods. ICML 2014: 604-612 - [c7]Jascha Sohl-Dickstein, Mayur Mudigonda, Michael Robert DeWeese:
Hamiltonian Monte Carlo Without Detailed Balance. ICML 2014: 719-726 - [i8]Ben Poole, Jascha Sohl-Dickstein, Surya Ganguli:
Analyzing noise in autoencoders and deep networks. CoRR abs/1406.1831 (2014) - 2013
- [c6]Eliana Feasley, Chris Klaiber, James Irwin, Jace Kohlmeier, Jascha Sohl-Dickstein:
Controlled experiments on millions of students to personalize learning. AIED Workshops 2013 - [c5]Joseph Jay Williams, Dave Paunesku, Benjamin Heley, Jascha Sohl-Dickstein:
Measurably Increasing Motivation in MOOCs. AIED Workshops 2013 - [i7]Jascha Sohl-Dickstein, Ben Poole, Surya Ganguli:
An adaptive low dimensional quasi-Newton sum of functions optimizer. CoRR abs/1311.2115 (2013) - 2012
- [b1]Jascha Sohl-Dickstein:
Efficient Methods for Unsupervised Learning of Probabilistic Models. University of California, Berkeley, USA, 2012 - [c4]Lucas Theis, Jascha Sohl-Dickstein, Matthias Bethge:
Training sparse natural image models with a fast Gibbs sampler of an extended state space. NIPS 2012: 1133-1141 - [i6]Jascha Sohl-Dickstein:
The Natural Gradient by Analogy to Signal Whitening, and Recipes and Tricks for its Use. CoRR abs/1205.1828 (2012) - [i5]Jascha Sohl-Dickstein, Benjamin J. Culpepper:
Hamiltonian Annealed Importance Sampling for partition function estimation. CoRR abs/1205.1925 (2012) - [i4]Jascha Sohl-Dickstein:
Hamiltonian Monte Carlo with Reduced Momentum Flips. CoRR abs/1205.1939 (2012) - [i3]Jascha Sohl-Dickstein:
Efficient Methods for Unsupervised Learning of Probabilistic Models. CoRR abs/1205.4295 (2012) - 2011
- [c3]Ching Ming Wang, Jascha Sohl-Dickstein, Ivana Tosic, Bruno A. Olshausen:
Lie Group Transformation Models for Predictive Video Coding. DCC 2011: 83-92 - [c2]Benjamin J. Culpepper, Jascha Sohl-Dickstein, Bruno A. Olshausen:
Building a better probabilistic model of images by factorization. ICCV 2011: 2011-2017 - [c1]Jascha Sohl-Dickstein, Peter Battaglino, Michael Robert DeWeese:
Minimum Probability Flow Learning. ICML 2011: 905-912 - 2010
- [i2]Jascha Sohl-Dickstein, Jimmy C. Wang, Bruno A. Olshausen:
An Unsupervised Algorithm For Learning Lie Group Transformations. CoRR abs/1001.1027 (2010)
2000 – 2009
- 2009
- [i1]Jascha Sohl-Dickstein, Peter Battaglino, Michael Robert DeWeese:
Minimum Probability Flow Learning. CoRR abs/0906.4779 (2009)