Nan Jiang 0008
Person information
- affiliation: University of Illinois at Urbana-Champaign, Urbana, IL, USA
- affiliation (former): University of Michigan, Ann Arbor, MI, USA
Other persons with the same name
- Nan Jiang — disambiguation page
- Nan Jiang 0001 — Chinese Academy of Sciences, Institute of Software, Beijing, China
- Nan Jiang 0002 — Zhejiang University School of Medicine, Hangzhou First People's Hospital, China
- Nan Jiang 0003 — Missouri University of Science and Technology, Rolla, MO, USA (and 2 more)
- Nan Jiang 0004 — Samsung Research China-Beijing, Beijing, China (and 1 more)
- Nan Jiang 0005 — Beijing University of Technology, Faculty of Information Technology, China
- Nan Jiang 0006 — Bournemouth University, Faculty of Science and Technology, Fern Barrow, Poole, United Kingdom (and 1 more)
- Nan Jiang 0007 — Colorado School of Mines, Computer Science Department, Golden, CO, USA (and 1 more)
- Nan Jiang 0009 — NVIDIA Corporation, St. Louis, USA (and 1 more)
- Nan Jiang 0010 — Beihang University, State Key Laboratory of Software Development Environment, China
- Nan Jiang 0011 — Arizona State University, Department of Electrical Engineering, Tempe, AZ, USA
- Nan Jiang 0012 — Purdue University, West Lafayette, IN, USA
- Nan Jiang 0013 — East China Jiaotong University, Nanchang, China
- Nan Jiang 0014 — National University of Defense Technology, Changsha, China
- Nan Jiang 0015 — Shandong University, Weihai, China
- Nan Jiang 0016 — Huazhong University of Science and Technology, Wuhan, China
- Nan Jiang 0017 — University of Minnesota, Minneapolis, MN, USA
- Nan Jiang 0018 — Jiangsu Normal University, Xuzhou, China
- Nan Jiang 0019 — Texas A&M University, College Station, TX, USA
- Nan Jiang 0020 — Zhejiang Sci-Tech University, Hangzhou, Zhejiang, China (and 1 more)
- Nan Jiang 0021 — China University of Geosciences, Wuhan, China
2020 – today
- 2024
- [j1] Aditya Modi, Jinglin Chen, Akshay Krishnamurthy, Nan Jiang, Alekh Agarwal: Model-Free Representation Learning and Exploration in Low-Rank MDPs. J. Mach. Learn. Res. 25: 6:1-6:76 (2024)
- [c51] Yong Lin, Hangyu Lin, Wei Xiong, Shizhe Diao, Jianmeng Liu, Jipeng Zhang, Rui Pan, Haoxiang Wang, Wenbin Hu, Hanning Zhang, Hanze Dong, Renjie Pi, Han Zhao, Nan Jiang, Heng Ji, Yuan Yao, Tong Zhang: Mitigating the Alignment Tax of RLHF. EMNLP 2024: 580-606
- [c50] Philip Amortila, Dylan J. Foster, Nan Jiang, Ayush Sekhari, Tengyang Xie: Harnessing Density Ratios for Online Reinforcement Learning. ICLR 2024
- [c49] Wei Xiong, Hanze Dong, Chenlu Ye, Ziqi Wang, Han Zhong, Heng Ji, Nan Jiang, Tong Zhang: Iterative Preference Learning from Human Feedback: Bridging Theory and Practice for RLHF under KL-constraint. ICML 2024
- [i47] Philip Amortila, Dylan J. Foster, Nan Jiang, Ayush Sekhari, Tengyang Xie: Harnessing Density Ratios for Online Reinforcement Learning. CoRR abs/2401.09681 (2024)
- [i46] Chenlu Ye, Wei Xiong, Yuheng Zhang, Nan Jiang, Tong Zhang: A Theoretical Analysis of Nash Learning from Human Feedback under General KL-Regularized Preference. CoRR abs/2402.07314 (2024)
- [i45] Hanze Dong, Wei Xiong, Bo Pang, Haoxiang Wang, Han Zhao, Yingbo Zhou, Nan Jiang, Doyen Sahoo, Caiming Xiong, Tong Zhang: RLHF Workflow: From Reward Modeling to Online RLHF. CoRR abs/2405.07863 (2024)
- [i44] Yuheng Zhang, Dian Yu, Baolin Peng, Linfeng Song, Ye Tian, Mingyue Huo, Nan Jiang, Haitao Mi, Dong Yu: Iterative Nash Policy Optimization: Aligning LLMs with General Preferences via No-Regret Learning. CoRR abs/2407.00617 (2024)
- 2023
- [c48] Audrey Huang, Jinglin Chen, Nan Jiang: Extended Abstract: Learning in Low-rank MDPs with Density Features. CISS 2023: 1-3
- [c47] Tengyang Xie, Dylan J. Foster, Yu Bai, Nan Jiang, Sham M. Kakade: The Role of Coverage in Online Reinforcement Learning. ICLR 2023
- [c46] Philip Amortila, Nan Jiang, Csaba Szepesvári: The Optimal Approximation Factors in Misspecified Off-Policy Value Function Estimation. ICML 2023: 768-790
- [c45] Audrey Huang, Jinglin Chen, Nan Jiang: Reinforcement Learning in Low-rank MDPs with Density Features. ICML 2023: 13710-13752
- [c44] Mohak Bhardwaj, Tengyang Xie, Byron Boots, Nan Jiang, Ching-An Cheng: Adversarial Model for Offline Reinforcement Learning. NeurIPS 2023
- [c43] Masatoshi Uehara, Haruka Kiyohara, Andrew Bennett, Victor Chernozhukov, Nan Jiang, Nathan Kallus, Chengchun Shi, Wen Sun: Future-Dependent Value-Based Off-Policy Evaluation in POMDPs. NeurIPS 2023
- [i43] Audrey Huang, Jinglin Chen, Nan Jiang: Reinforcement Learning in Low-Rank MDPs with Density Features. CoRR abs/2302.02252 (2023)
- [i42] Mohak Bhardwaj, Tengyang Xie, Byron Boots, Nan Jiang, Ching-An Cheng: Adversarial Model for Offline Reinforcement Learning. CoRR abs/2302.11048 (2023)
- [i41] Philip Amortila, Nan Jiang, Csaba Szepesvári: The Optimal Approximation Factors in Misspecified Off-Policy Value Function Estimation. CoRR abs/2307.13332 (2023)
- [i40] Yong Lin, Hangyu Lin, Wei Xiong, Shizhe Diao, Jianmeng Liu, Jipeng Zhang, Rui Pan, Haoxiang Wang, Wenbin Hu, Hanning Zhang, Hanze Dong, Renjie Pi, Han Zhao, Nan Jiang, Yuan Yao, Tong Zhang: Mitigating the Alignment Tax of RLHF. CoRR abs/2309.06256 (2023)
- [i39] Wei Xiong, Hanze Dong, Chenlu Ye, Han Zhong, Nan Jiang, Tong Zhang: Gibbs Sampling from Human Feedback: A Provable KL-constrained Framework for RLHF. CoRR abs/2312.11456 (2023)
- 2022
- [c42] Jiawei Huang, Nan Jiang: On the Convergence Rate of Off-Policy Policy Optimization Methods with Density-Ratio Correction. AISTATS 2022: 2658-2705
- [c41] Wenhao Zhan, Baihe Huang, Audrey Huang, Nan Jiang, Jason D. Lee: Offline Reinforcement Learning with Realizability and Single-policy Concentrability. COLT 2022: 2730-2775
- [c40] Jiawei Huang, Jinglin Chen, Li Zhao, Tao Qin, Nan Jiang, Tie-Yan Liu: Towards Deployment-Efficient Reinforcement Learning: Lower Bound and Optimality. ICLR 2022
- [c39] Ching-An Cheng, Tengyang Xie, Nan Jiang, Alekh Agarwal: Adversarially Trained Actor Critic for Offline Reinforcement Learning. ICML 2022: 3852-3878
- [c38] Chengchun Shi, Masatoshi Uehara, Jiawei Huang, Nan Jiang: A Minimax Learning Approach to Off-Policy Evaluation in Confounded Partially Observable Markov Decision Processes. ICML 2022: 20057-20094
- [c37] Philip Amortila, Nan Jiang, Dhruv Madeka, Dean P. Foster: A Few Expert Queries Suffices for Sample-Efficient RL with Resets and Linear Value Approximation. NeurIPS 2022
- [c36] Jinglin Chen, Aditya Modi, Akshay Krishnamurthy, Nan Jiang, Alekh Agarwal: On the Statistical Efficiency of Reward-Free Exploration in Non-Linear RL. NeurIPS 2022
- [c35] Audrey Huang, Nan Jiang: Beyond the Return: Off-policy Function Estimation under User-specified Error-measuring Distributions. NeurIPS 2022
- [c34] Jiawei Huang, Li Zhao, Tao Qin, Wei Chen, Nan Jiang, Tie-Yan Liu: Tiered Reinforcement Learning: Pessimism in the Face of Uncertainty and Constant Regret. NeurIPS 2022
- [c33] Tengyang Xie, Akanksha Saran, Dylan J. Foster, Lekan P. Molu, Ida Momennejad, Nan Jiang, Paul Mineiro, John Langford: Interaction-Grounded Learning with Action-Inclusive Feedback. NeurIPS 2022
- [c32] Jinglin Chen, Nan Jiang: Offline reinforcement learning under value and density-ratio realizability: The power of gaps. UAI 2022: 378-388
- [i38] Ching-An Cheng, Tengyang Xie, Nan Jiang, Alekh Agarwal: Adversarially Trained Actor Critic for Offline Reinforcement Learning. CoRR abs/2202.02446 (2022)
- [i37] Wenhao Zhan, Baihe Huang, Audrey Huang, Nan Jiang, Jason D. Lee: Offline Reinforcement Learning with Realizability and Single-policy Concentrability. CoRR abs/2202.04634 (2022)
- [i36] Jiawei Huang, Jinglin Chen, Li Zhao, Tao Qin, Nan Jiang, Tie-Yan Liu: Towards Deployment-Efficient Reinforcement Learning: Lower Bound and Optimality. CoRR abs/2202.06450 (2022)
- [i35] Jinglin Chen, Nan Jiang: Offline Reinforcement Learning Under Value and Density-Ratio Realizability: the Power of Gaps. CoRR abs/2203.13935 (2022)
- [i34] Jiawei Huang, Li Zhao, Tao Qin, Wei Chen, Nan Jiang, Tie-Yan Liu: Tiered Reinforcement Learning: Pessimism in the Face of Uncertainty and Constant Regret. CoRR abs/2205.12418 (2022)
- [i33] Tengyang Xie, Akanksha Saran, Dylan J. Foster, Lekan P. Molu, Ida Momennejad, Nan Jiang, Paul Mineiro, John Langford: Interaction-Grounded Learning with Action-inclusive Feedback. CoRR abs/2206.08364 (2022)
- [i32] Jinglin Chen, Aditya Modi, Akshay Krishnamurthy, Nan Jiang, Alekh Agarwal: On the Statistical Efficiency of Reward-Free Exploration in Non-Linear RL. CoRR abs/2206.10770 (2022)
- [i31] Philip Amortila, Nan Jiang, Dhruv Madeka, Dean P. Foster: A Few Expert Queries Suffices for Sample-Efficient RL with Resets and Linear Value Approximation. CoRR abs/2207.08342 (2022)
- [i30] Masatoshi Uehara, Haruka Kiyohara, Andrew Bennett, Victor Chernozhukov, Nan Jiang, Nathan Kallus, Chengchun Shi, Wen Sun: Future-Dependent Value-Based Off-Policy Evaluation in POMDPs. CoRR abs/2207.13081 (2022)
- [i29] Tengyang Xie, Dylan J. Foster, Yu Bai, Nan Jiang, Sham M. Kakade: The Role of Coverage in Online Reinforcement Learning. CoRR abs/2210.04157 (2022)
- [i28] Audrey Huang, Nan Jiang: Beyond the Return: Off-policy Function Estimation under User-specified Error-measuring Distributions. CoRR abs/2210.15543 (2022)
- [i27] Tengyang Xie, Mohak Bhardwaj, Nan Jiang, Ching-An Cheng: ARMOR: A Model-based Framework for Improving Arbitrary Baseline Policies with Offline Data. CoRR abs/2211.04538 (2022)
- 2021
- [c31] Priyank Agrawal, Jinglin Chen, Nan Jiang: Improved Worst-Case Regret Bounds for Randomized Least-Squares Value Iteration. AAAI 2021: 6566-6573
- [c30] Cameron Voloshin, Nan Jiang, Yisong Yue: Minimax Model Learning. AISTATS 2021: 1612-1620
- [c29] Gellért Weisz, Philip Amortila, Barnabás Janzer, Yasin Abbasi-Yadkori, Nan Jiang, Csaba Szepesvári: On Query-efficient Planning in MDPs under Linear Realizability of the Optimal State-value Function. COLT 2021: 4355-4385
- [c28] Tengyang Xie, Nan Jiang: Batch Value-function Approximation with Only Realizability. ICML 2021: 11404-11413
- [c27] Cameron Voloshin, Hoang Minh Le, Nan Jiang, Yisong Yue: Empirical Study of Off-Policy Policy Evaluation for Reinforcement Learning. NeurIPS Datasets and Benchmarks 2021
- [c26] Tengyang Xie, Ching-An Cheng, Nan Jiang, Paul Mineiro, Alekh Agarwal: Bellman-consistent Pessimism for Offline Reinforcement Learning. NeurIPS 2021: 6683-6694
- [c25] Tengyang Xie, Nan Jiang, Huan Wang, Caiming Xiong, Yu Bai: Policy Finetuning: Bridging Sample-Efficient Offline and Online Reinforcement Learning. NeurIPS 2021: 27395-27407
- [i26] Gellért Weisz, Philip Amortila, Barnabás Janzer, Yasin Abbasi-Yadkori, Nan Jiang, Csaba Szepesvári: On Query-efficient Planning in MDPs under Linear Realizability of the Optimal State-value Function. CoRR abs/2102.02049 (2021)
- [i25] Masatoshi Uehara, Masaaki Imaizumi, Nan Jiang, Nathan Kallus, Wen Sun, Tengyang Xie: Finite Sample Analysis of Minimax Offline Reinforcement Learning: Completeness, Fast Rates and First-Order Efficiency. CoRR abs/2102.02981 (2021)
- [i24] Aditya Modi, Jinglin Chen, Akshay Krishnamurthy, Nan Jiang, Alekh Agarwal: Model-free Representation Learning and Exploration in Low-rank MDPs. CoRR abs/2102.07035 (2021)
- [i23] Cameron Voloshin, Nan Jiang, Yisong Yue: Minimax Model Learning. CoRR abs/2103.02084 (2021)
- [i22] Jiawei Huang, Nan Jiang: On the Convergence Rate of Off-Policy Policy Optimization Methods with Density-Ratio Correction. CoRR abs/2106.00993 (2021)
- [i21] Tengyang Xie, Nan Jiang, Huan Wang, Caiming Xiong, Yu Bai: Policy Finetuning: Bridging Sample-Efficient Offline and Online Reinforcement Learning. CoRR abs/2106.04895 (2021)
- [i20] Tengyang Xie, Ching-An Cheng, Nan Jiang, Paul Mineiro, Alekh Agarwal: Bellman-consistent Pessimism for Offline Reinforcement Learning. CoRR abs/2106.06926 (2021)
- [i19] Chengchun Shi, Masatoshi Uehara, Nan Jiang: A Minimax Learning Approach to Off-Policy Evaluation in Partially Observable Markov Decision Processes. CoRR abs/2111.06784 (2021)
- 2020
- [c24] Aditya Modi, Nan Jiang, Ambuj Tewari, Satinder Singh: Sample Complexity of Reinforcement Learning using Linearly Combined Model Ensembles. AISTATS 2020: 2010-2020
- [c23] Jiawei Huang, Nan Jiang: From Importance Sampling to Doubly Robust Policy Gradient. ICML 2020: 4434-4443
- [c22] Masatoshi Uehara, Jiawei Huang, Nan Jiang: Minimax Weight and Q-Function Learning for Off-Policy Evaluation. ICML 2020: 9659-9668
- [c21] Nan Jiang, Jiawei Huang: Minimax Value Interval for Off-Policy Evaluation and Policy Optimization. NeurIPS 2020
- [c20] Tengyang Xie, Nan Jiang: Q* Approximation Schemes for Batch Reinforcement Learning: A Theoretical Comparison. UAI 2020: 550-559
- [i18] Nan Jiang, Jiawei Huang: Minimax Confidence Interval for Off-Policy Evaluation and Policy Optimization. CoRR abs/2002.02081 (2020)
- [i17] Tengyang Xie, Nan Jiang: Q* Approximation Schemes for Batch Reinforcement Learning: A Theoretical Comparison. CoRR abs/2003.03924 (2020)
- [i16] Tengyang Xie, Nan Jiang: Batch Value-function Approximation with Only Realizability. CoRR abs/2008.04990 (2020)
- [i15] Priyank Agrawal, Jinglin Chen, Nan Jiang: Improved Worst-Case Regret Bounds for Randomized Least-Squares Value Iteration. CoRR abs/2010.12163 (2020)
- [i14] Philip Amortila, Nan Jiang, Tengyang Xie: A Variant of the Wang-Foster-Kakade Lower Bound for the Discounted Setting. CoRR abs/2011.01075 (2020)
2010 – 2019
- 2019
- [c19] Wen Sun, Nan Jiang, Akshay Krishnamurthy, Alekh Agarwal, John Langford: Model-based RL in Contextual Decision Processes: PAC bounds and Exponential Improvements over Model-free Approaches. COLT 2019: 2898-2933
- [c18] Jinglin Chen, Nan Jiang: Information-Theoretic Considerations in Batch Reinforcement Learning. ICML 2019: 1042-1051
- [c17] Simon S. Du, Akshay Krishnamurthy, Nan Jiang, Alekh Agarwal, Miroslav Dudík, John Langford: Provably efficient RL with Rich Observations via Latent State Decoding. ICML 2019: 1665-1674
- [c16] Yu Bai, Tengyang Xie, Nan Jiang, Yu-Xiang Wang: Provably Efficient Q-Learning with Low Switching Cost. NeurIPS 2019: 8002-8011
- [i13] Simon S. Du, Akshay Krishnamurthy, Nan Jiang, Alekh Agarwal, Miroslav Dudík, John Langford: Provably efficient RL with Rich Observations via Latent State Decoding. CoRR abs/1901.09018 (2019)
- [i12] Jinglin Chen, Nan Jiang: Information-Theoretic Considerations in Batch Reinforcement Learning. CoRR abs/1905.00360 (2019)
- [i11] Yu Bai, Tengyang Xie, Nan Jiang, Yu-Xiang Wang: Provably Efficient Q-Learning with Low Switching Cost. CoRR abs/1905.12849 (2019)
- [i10] Jiawei Huang, Nan Jiang: From Importance Sampling to Doubly Robust Policy Gradient. CoRR abs/1910.09066 (2019)
- [i9] Aditya Modi, Nan Jiang, Ambuj Tewari, Satinder Singh: Sample Complexity of Reinforcement Learning using Linearly Combined Model Ensembles. CoRR abs/1910.10597 (2019)
- [i8] Masatoshi Uehara, Nan Jiang: Minimax Weight and Q-Function Learning for Off-Policy Evaluation. CoRR abs/1910.12809 (2019)
- [i7] Cameron Voloshin, Hoang Minh Le, Nan Jiang, Yisong Yue: Empirical Study of Off-Policy Policy Evaluation for Reinforcement Learning. CoRR abs/1911.06854 (2019)
- 2018
- [c15] Aditya Modi, Nan Jiang, Satinder Singh, Ambuj Tewari: Markov Decision Processes with Continuous Side Information. ALT 2018: 597-618
- [c14] Nan Jiang, Alekh Agarwal: Open Problem: The Dependence of Sample Complexity Lower Bounds on Planning Horizon. COLT 2018: 3395-3398
- [c13] Hoang Minh Le, Nan Jiang, Alekh Agarwal, Miroslav Dudík, Yisong Yue, Hal Daumé III: Hierarchical Imitation and Reinforcement Learning. ICML 2018: 2923-2932
- [c12] Christoph Dann, Nan Jiang, Akshay Krishnamurthy, Alekh Agarwal, John Langford, Robert E. Schapire: On Oracle-Efficient PAC RL with Rich Observations. NeurIPS 2018: 1429-1439
- [c11] Nan Jiang, Alex Kulesza, Satinder Singh: Completing State Representations using Spectral Learning. NeurIPS 2018: 4333-4342
- [i6] Hoang Minh Le, Nan Jiang, Alekh Agarwal, Miroslav Dudík, Yisong Yue, Hal Daumé III: Hierarchical Imitation and Reinforcement Learning. CoRR abs/1803.00590 (2018)
- [i5] Christoph Dann, Nan Jiang, Akshay Krishnamurthy, Alekh Agarwal, John Langford, Robert E. Schapire: On Polynomial Time PAC Reinforcement Learning with Rich Observations. CoRR abs/1803.00606 (2018)
- [i4] Wen Sun, Nan Jiang, Akshay Krishnamurthy, Alekh Agarwal, John Langford: Model-Based Reinforcement Learning in Contextual Decision Processes. CoRR abs/1811.08540 (2018)
- 2017
- [c10] Nan Jiang, Akshay Krishnamurthy, Alekh Agarwal, John Langford, Robert E. Schapire: Contextual Decision Processes with low Bellman rank are PAC-Learnable. ICML 2017: 1704-1713
- [c9] Kareem Amin, Nan Jiang, Satinder Singh: Repeated Inverse Reinforcement Learning. NIPS 2017: 1815-1824
- [i3] Kareem Amin, Nan Jiang, Satinder Singh: Repeated Inverse Reinforcement Learning. CoRR abs/1705.05427 (2017)
- [i2] Aditya Modi, Nan Jiang, Satinder Singh, Ambuj Tewari: Markov Decision Processes with Continuous Side Information. CoRR abs/1711.05726 (2017)
- 2016
- [c8] Nan Jiang, Alex Kulesza, Satinder Singh: Improving Predictive State Representations via Gradient Descent. AAAI 2016: 1709-1715
- [c7] Nan Jiang, Satinder Singh, Ambuj Tewari: On Structural Properties of MDPs that Bound Loss Due to Shallow Planning. IJCAI 2016: 1640-1647
- [c6] Nan Jiang, Alex Kulesza, Satinder Singh, Richard L. Lewis: The Dependence of Effective Planning Horizon on Model Accuracy. IJCAI 2016: 4180-4189
- [i1] Nan Jiang, Akshay Krishnamurthy, Alekh Agarwal, John Langford, Robert E. Schapire: Contextual Decision Processes with Low Bellman Rank are PAC-Learnable. CoRR abs/1610.09512 (2016)
- 2015
- [c5] Alex Kulesza, Nan Jiang, Satinder Singh: Spectral Learning of Predictive State Representations with Insufficient Statistics. AAAI 2015: 2715-2721
- [c4] Alex Kulesza, Nan Jiang, Satinder Singh: Low-Rank Spectral Learning with Weighted Loss Functions. AISTATS 2015
- [c3] Nan Jiang, Alex Kulesza, Satinder Singh, Richard L. Lewis: The Dependence of Effective Planning Horizon on Model Accuracy. AAMAS 2015: 1181-1189
- [c2] Nan Jiang, Alex Kulesza, Satinder Singh: Abstraction Selection in Model-based Reinforcement Learning. ICML 2015: 179-188
- 2014
- [c1] Nan Jiang, Satinder Singh, Richard L. Lewis: Improving UCT planning via approximate homomorphisms. AAMAS 2014: 1289-1296