Zhiyuan Li 0005
Person information
- unicode name: 李志远
- affiliation: Toyota Technological Institute at Chicago (TTIC), IL, USA
- affiliation: Stanford University, Department of Computer Science, Stanford, CA, USA
- affiliation (PhD 2022): Princeton University, Department of Computer Science, Princeton, NJ, USA
Other persons with the same name
- Zhiyuan Li (aka: Zhi-yuan Li, Zhi-Yuan Li, ZhiYuan Li) — disambiguation page
- Zhiyuan Li 0001 — Purdue University, Department of Computer Science, West Lafayette, IN, USA (and 1 more)
- Zhiyuan Li 0002 (aka: Zhi-yuan Li 0002) — Jiangsu University, School of Computer Science and Communication Engineering, Zhenjiang, China (and 1 more)
- Zhiyuan Li 0003 (aka: Zhi-yuan Li 0003) — Inner Mongolia University of Technology, Department of Mathematics, Hohhot, China
- Zhiyuan Li 0004 — BNU-HKBU United International College, Zhuhai, China (and 1 more)
- Zhiyuan Li 0006 — University of South Carolina, Department of Computer Science and Engineering, Columbia, SC, USA
- Zhiyuan Li 0007 — Rochester Institute of Technology, Rochester, NY, USA
- Zhiyuan Li 0008 — Motorola, Schaumburg, IL, USA (and 1 more)
- Zhiyuan Li 0009 — Wuhan University, School of Information Management, Wuhan, China
- Zhiyuan Li 0010 — University of Illinois at Urbana-Champaign, Department of Mechanical Science and Engineering, Urbana, IL, USA
- Zhiyuan Li 0011 — University of Cincinnati, Department of Electrical Engineering and Computer Science, Cincinnati, OH, USA (and 1 more)
2020 – today
- 2024
- [c33] Hong Liu, Zhiyuan Li, David Leo Wright Hall, Percy Liang, Tengyu Ma: Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training. ICLR 2024
- [c32] Kaifeng Lyu, Jikai Jin, Zhiyuan Li, Simon Shaolei Du, Jason D. Lee, Wei Hu: Dichotomy of Early and Late Phase Implicit Biases Can Provably Induce Grokking. ICLR 2024
- [c31] Runzhe Wang, Sadhika Malladi, Tianhao Wang, Kaifeng Lyu, Zhiyuan Li: The Marginal Value of Momentum for Small Learning Rate SGD. ICLR 2024
- [c30] Khashayar Gatmiry, Zhiyuan Li, Sashank J. Reddi, Stefanie Jegelka: Simplicity Bias via Global Convergence of Sharpness Minimization. ICML 2024
- [i34] Zhiyuan Li, Hong Liu, Denny Zhou, Tengyu Ma: Chain of Thought Empowers Transformers to Solve Inherently Serial Problems. CoRR abs/2402.12875 (2024)
- [i33] Kaiyue Wen, Zhiyuan Li, Jason Wang, David Hall, Percy Liang, Tengyu Ma: Understanding Warmup-Stable-Decay Learning Rates: A River Valley Loss Landscape Perspective. CoRR abs/2410.05192 (2024)
- [i32] Khashayar Gatmiry, Zhiyuan Li, Sashank J. Reddi, Stefanie Jegelka: Simplicity Bias via Global Convergence of Sharpness Minimization. CoRR abs/2410.16401 (2024)
- 2023
- [c29] Kaiyue Wen, Tengyu Ma, Zhiyuan Li: How Sharpness-Aware Minimization Minimizes Sharpness? ICLR 2023
- [c28] Jikai Jin, Zhiyuan Li, Kaifeng Lyu, Simon Shaolei Du, Jason D. Lee: Understanding Incremental Learning of Gradient Descent: A Fine-grained Analysis of Matrix Sensing. ICML 2023: 15200-15238
- [c27] Hong Liu, Sang Michael Xie, Zhiyuan Li, Tengyu Ma: Same Pre-training Loss, Better Downstream: Implicit Bias Matters for Language Models. ICML 2023: 22188-22214
- [c26] Khashayar Gatmiry, Zhiyuan Li, Tengyu Ma, Sashank J. Reddi, Stefanie Jegelka, Ching-Yao Chuang: What is the Inductive Bias of Flatness Regularization? A Study of Deep Matrix Factorization Models. NeurIPS 2023
- [c25] Kaiyue Wen, Zhiyuan Li, Tengyu Ma: Sharpness Minimization Algorithms Do Not Only Minimize Sharpness To Achieve Better Generalization. NeurIPS 2023
- [i31] Jikai Jin, Zhiyuan Li, Kaifeng Lyu, Simon S. Du, Jason D. Lee: Understanding Incremental Learning of Gradient Descent: A Fine-grained Analysis of Matrix Sensing. CoRR abs/2301.11500 (2023)
- [i30] Hong Liu, Zhiyuan Li, David Hall, Percy Liang, Tengyu Ma: Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training. CoRR abs/2305.14342 (2023)
- [i29] Khashayar Gatmiry, Zhiyuan Li, Ching-Yao Chuang, Sashank J. Reddi, Tengyu Ma, Stefanie Jegelka: The Inductive Bias of Flatness Regularization for Deep Matrix Factorization. CoRR abs/2306.13239 (2023)
- [i28] Kaiyue Wen, Zhiyuan Li, Tengyu Ma: Sharpness Minimization Algorithms Do Not Only Minimize Sharpness To Achieve Better Generalization. CoRR abs/2307.11007 (2023)
- [i27] Runzhe Wang, Sadhika Malladi, Tianhao Wang, Kaifeng Lyu, Zhiyuan Li: The Marginal Value of Momentum for Small Learning Rate SGD. CoRR abs/2307.15196 (2023)
- [i26] Kaifeng Lyu, Jikai Jin, Zhiyuan Li, Simon S. Du, Jason D. Lee, Wei Hu: Dichotomy of Early and Late Phase Implicit Biases Can Provably Induce Grokking. CoRR abs/2311.18817 (2023)
- 2022
- [b1] Zhiyuan Li: Bridging Theory and Practice in Deep Learning: Optimization and Generalization. Princeton University, USA, 2022
- [c24] Zhiyuan Li, Tianhao Wang, Sanjeev Arora: What Happens after SGD Reaches Zero Loss? - A Mathematical Framework. ICLR 2022
- [c23] Sanjeev Arora, Zhiyuan Li, Abhishek Panigrahi: Understanding Gradient Descent on the Edge of Stability in Deep Learning. ICML 2022: 948-1024
- [c22] Zhiyuan Li, Srinadh Bhojanapalli, Manzil Zaheer, Sashank J. Reddi, Sanjiv Kumar: Robust Training of Neural Networks Using Scale Invariant Architectures. ICML 2022: 12656-12684
- [c21] Zhiyuan Li, Tianhao Wang, Jason D. Lee, Sanjeev Arora: Implicit Bias of Gradient Descent on Reparametrized Models: On Equivalence to Mirror Descent. NeurIPS 2022
- [c20] Zhiyuan Li, Tianhao Wang, Dingli Yu: Fast Mixing of Stochastic Gradient Descent with Normalization and Weight Decay. NeurIPS 2022
- [c19] Kaifeng Lyu, Zhiyuan Li, Sanjeev Arora: Understanding the Generalization Benefit of Normalization Layers: Sharpness Reduction. NeurIPS 2022
- [i25] Zhiyuan Li, Srinadh Bhojanapalli, Manzil Zaheer, Sashank J. Reddi, Sanjiv Kumar: Robust Training of Neural Networks using Scale Invariant Architectures. CoRR abs/2202.00980 (2022)
- [i24] Sanjeev Arora, Zhiyuan Li, Abhishek Panigrahi: Understanding Gradient Descent on Edge of Stability in Deep Learning. CoRR abs/2205.09745 (2022)
- [i23] Kaifeng Lyu, Zhiyuan Li, Sanjeev Arora: Understanding the Generalization Benefit of Normalization Layers: Sharpness Reduction. CoRR abs/2206.07085 (2022)
- [i22] Zhiyuan Li, Tianhao Wang, Jason D. Lee, Sanjeev Arora: Implicit Bias of Gradient Descent on Reparametrized Models: On Equivalence to Mirror Descent. CoRR abs/2207.04036 (2022)
- [i21] Hong Liu, Sang Michael Xie, Zhiyuan Li, Tengyu Ma: Same Pre-training Loss, Better Downstream: Implicit Bias Matters for Language Models. CoRR abs/2210.14199 (2022)
- [i20] Kaiyue Wen, Tengyu Ma, Zhiyuan Li: How Does Sharpness-Aware Minimization Minimize Sharpness? CoRR abs/2211.05729 (2022)
- 2021
- [c18] Zhiyuan Li, Yi Zhang, Sanjeev Arora: Why Are Convolutional Nets More Sample-Efficient than Fully-Connected Nets? ICLR 2021
- [c17] Zhiyuan Li, Yuping Luo, Kaifeng Lyu: Towards Resolving the Implicit Bias of Gradient Descent for Matrix Factorization: Greedy Low-Rank Learning. ICLR 2021
- [c16] Zhiyuan Li, Sadhika Malladi, Sanjeev Arora: On the Validity of Modeling SGD with Stochastic Differential Equations (SDEs). NeurIPS 2021: 12712-12725
- [c15] Kaifeng Lyu, Zhiyuan Li, Runzhe Wang, Sanjeev Arora: Gradient Descent on Two-layer Nets: Margin Maximization and Simplicity Bias. NeurIPS 2021: 12978-12991
- [c14] Simon S. Du, Wei Hu, Zhiyuan Li, Ruoqi Shen, Zhao Song, Jiajun Wu: When is particle filtering efficient for planning in partially observed linear dynamical systems? UAI 2021: 728-737
- [i19] Zhiyuan Li, Sadhika Malladi, Sanjeev Arora: On the Validity of Modeling SGD with Stochastic Differential Equations (SDEs). CoRR abs/2102.12470 (2021)
- [i18] Zhiyuan Li, Tianhao Wang, Sanjeev Arora: What Happens after SGD Reaches Zero Loss? - A Mathematical Framework. CoRR abs/2110.06914 (2021)
- [i17] Kaifeng Lyu, Zhiyuan Li, Runzhe Wang, Sanjeev Arora: Gradient Descent on Two-layer Nets: Margin Maximization and Simplicity Bias. CoRR abs/2110.13905 (2021)
- 2020
- [c13] Zhiyuan Li, Sanjeev Arora: An Exponential Learning Rate Schedule for Deep Learning. ICLR 2020
- [c12] Sanjeev Arora, Simon S. Du, Zhiyuan Li, Ruslan Salakhutdinov, Ruosong Wang, Dingli Yu: Harnessing the Power of Infinitely Wide Deep Nets on Small-data Tasks. ICLR 2020
- [c11] Wei Hu, Zhiyuan Li, Dingli Yu: Simple and Effective Regularization Methods for Training on Noisily Labeled Data with Generalization Guarantee. ICLR 2020
- [c10] Zhiyuan Li, Kaifeng Lyu, Sanjeev Arora: Reconciling Modern Deep Learning with Traditional Optimization Analyses: The Intrinsic Learning Rate. NeurIPS 2020
- [i16] Simon S. Du, Wei Hu, Zhiyuan Li, Ruoqi Shen, Zhao Song, Jiajun Wu: When is Particle Filtering Efficient for POMDP Sequential Planning? CoRR abs/2006.05975 (2020)
- [i15] Zhiyuan Li, Kaifeng Lyu, Sanjeev Arora: Reconciling Modern Deep Learning with Traditional Optimization Analyses: The Intrinsic Learning Rate. CoRR abs/2010.02916 (2020)
- [i14] Zhiyuan Li, Yi Zhang, Sanjeev Arora: Why Are Convolutional Nets More Sample-Efficient than Fully-Connected Nets? CoRR abs/2010.08515 (2020)
- [i13] Zhiyuan Li, Yuping Luo, Kaifeng Lyu: Towards Resolving the Implicit Bias of Gradient Descent for Matrix Factorization: Greedy Low-Rank Learning. CoRR abs/2012.09839 (2020)
2010 – 2019
- 2019
- [c9] Sanjeev Arora, Zhiyuan Li, Kaifeng Lyu: Theoretical Analysis of Auto Rate-Tuning by Batch Normalization. ICLR (Poster) 2019
- [c8] Behnam Neyshabur, Zhiyuan Li, Srinadh Bhojanapalli, Yann LeCun, Nathan Srebro: The role of over-parametrization in generalization of neural networks. ICLR (Poster) 2019
- [c7] Sanjeev Arora, Simon S. Du, Wei Hu, Zhiyuan Li, Ruosong Wang: Fine-Grained Analysis of Optimization and Generalization for Overparameterized Two-Layer Neural Networks. ICML 2019: 322-332
- [c6] Sanjeev Arora, Simon S. Du, Wei Hu, Zhiyuan Li, Ruslan Salakhutdinov, Ruosong Wang: On Exact Computation with an Infinitely Wide Neural Net. NeurIPS 2019: 8139-8148
- [c5] Rohith Kuditipudi, Xiang Wang, Holden Lee, Yi Zhang, Zhiyuan Li, Wei Hu, Rong Ge, Sanjeev Arora: Explaining Landscape Connectivity of Low-cost Solutions for Multilayer Nets. NeurIPS 2019: 14574-14583
- [i12] Sanjeev Arora, Simon S. Du, Wei Hu, Zhiyuan Li, Ruosong Wang: Fine-Grained Analysis of Optimization and Generalization for Overparameterized Two-Layer Neural Networks. CoRR abs/1901.08584 (2019)
- [i11] Sanjeev Arora, Simon S. Du, Wei Hu, Zhiyuan Li, Ruslan Salakhutdinov, Ruosong Wang: On Exact Computation with an Infinitely Wide Neural Net. CoRR abs/1904.11955 (2019)
- [i10] Wei Hu, Zhiyuan Li, Dingli Yu: Understanding Generalization of Deep Neural Networks Trained with Noisy Labels. CoRR abs/1905.11368 (2019)
- [i9] Rohith Kuditipudi, Xiang Wang, Holden Lee, Yi Zhang, Zhiyuan Li, Wei Hu, Sanjeev Arora, Rong Ge: Explaining Landscape Connectivity of Low-cost Solutions for Multilayer Nets. CoRR abs/1906.06247 (2019)
- [i8] Sanjeev Arora, Simon S. Du, Zhiyuan Li, Ruslan Salakhutdinov, Ruosong Wang, Dingli Yu: Harnessing the Power of Infinitely Wide Deep Nets on Small-data Tasks. CoRR abs/1910.01663 (2019)
- [i7] Zhiyuan Li, Sanjeev Arora: An Exponential Learning Rate Schedule for Deep Learning. CoRR abs/1910.07454 (2019)
- [i6] Zhiyuan Li, Ruosong Wang, Dingli Yu, Simon S. Du, Wei Hu, Ruslan Salakhutdinov, Sanjeev Arora: Enhanced Convolutional Neural Tangent Kernels. CoRR abs/1911.00809 (2019)
- 2018
- [c4] Elad Hazan, Wei Hu, Yuanzhi Li, Zhiyuan Li: Online Improper Learning with an Approximation Oracle. NeurIPS 2018: 5657-5665
- [i5] Elad Hazan, Wei Hu, Yuanzhi Li, Zhiyuan Li: Online Improper Learning with an Approximation Oracle. CoRR abs/1804.07837 (2018)
- [i4] Behnam Neyshabur, Zhiyuan Li, Srinadh Bhojanapalli, Yann LeCun, Nathan Srebro: Towards Understanding the Role of Over-Parametrization in Generalization of Neural Networks. CoRR abs/1805.12076 (2018)
- [i3] Sanjeev Arora, Zhiyuan Li, Kaifeng Lyu: Theoretical Analysis of Auto Rate-Tuning by Batch Normalization. CoRR abs/1812.03981 (2018)
- 2017
- [c3] Zhiyuan Li, Yicheng Liu, Pingzhong Tang, Tingting Xu, Wei Zhan: Stability of Generalized Two-sided Markets with Transaction Thresholds. AAMAS 2017: 290-298
- 2016
- [c2] Yexiang Xue, Zhiyuan Li, Stefano Ermon, Carla P. Gomes, Bart Selman: Solving Marginal MAP Problems with NP Oracles and Parity Constraints. NIPS 2016: 1127-1135
- [c1] Dylan J. Foster, Zhiyuan Li, Thodoris Lykouris, Karthik Sridharan, Éva Tardos: Learning in Games: Robustness of Fast Convergence. NIPS 2016: 4727-4735
- [i2] Dylan J. Foster, Zhiyuan Li, Thodoris Lykouris, Karthik Sridharan, Éva Tardos: Fast Convergence of Common Learning Algorithms in Games. CoRR abs/1606.06244 (2016)
- [i1] Yexiang Xue, Zhiyuan Li, Stefano Ermon, Carla P. Gomes, Bart Selman: Solving Marginal MAP Problems with NP Oracles and Parity Constraints. CoRR abs/1610.02591 (2016)
last updated on 2025-01-27 00:46 CET by the dblp team
all metadata released as open data under CC0 1.0 license