Journal of Machine Learning Research, Volume 23, 2022
- Subhabrata Majumdar, George Michailidis:
Joint Estimation and Inference for Data Integration Problems based on Multiple Multi-layered Gaussian Graphical Models. 1:1-1:53 - Shaogao Lv, Heng Lian:
Debiased Distributed Learning for Sparse Partial Linear Models in High Dimensions. 2:1-2:32 - Keith D. Levin, Asad Lodhia, Elizaveta Levina:
Recovering shared structure from multiple networks with unknown edge distributions. 3:1-3:48 - Lorenzo Rimella, Nick Whiteley:
Exploiting locality in high-dimensional Factorial hidden Markov models. 4:1-4:34 - Guillaume Ausset, Stéphan Clémençon, François Portier:
Empirical Risk Minimization under Random Censorship. 5:1-5:59 - Xi Peng, Yunfan Li, Ivor W. Tsang, Hongyuan Zhu, Jiancheng Lv, Joey Tianyi Zhou:
XAI Beyond Classification: Interpretable Neural Clustering. 6:1-6:28 - Justin D. Silverman, Kimberly Roche, Zachary C. Holmes, Lawrence A. David, Sayan Mukherjee:
Bayesian Multinomial Logistic Normal Models through Marginally Latent Matrix-T Processes. 7:1-7:42 - Michael Fairbank, Spyridon Samothrakis, Luca Citi:
Deep Learning in Target Space. 8:1-8:46 - Utkarsh Sharma, Jared Kaplan:
Scaling Laws from the Data Manifold Dimension. 9:1-9:34 - Florentina Bunea, Seth Strimas-Mackey, Marten H. Wegkamp:
Interpolating Predictors in High-Dimensional Factor Regression. 10:1-10:60 - Ali Devran Kara, Serdar Yüksel:
Near Optimality of Finite Memory Feedback Policies in Partially Observed Markov Decision Processes. 11:1-11:46 - Jayakumar Subramanian, Amit Sinha, Raihan Seraj, Aditya Mahajan:
Approximate Information State for Approximate Planning and Reinforcement Learning in Partially Observed Systems. 12:1-12:83 - Dimitris Bertsimas, Ryan Cory-Wright, Jean Pauphilet:
Solving Large-Scale Sparse PCA to Certifiable (Near) Optimality. 13:1-13:35 - Sarbojit Roy, Soham Sarkar, Subhajit Dutta, Anil Kumar Ghosh:
On Generalizations of Some Distance Based Classifiers for HDLSS Data. 14:1-14:41 - Alasdair Paren, Leonard Berrada, Rudra P. K. Poudel, M. Pawan Kumar:
A Stochastic Bundle Method for Interpolation. 15:1-15:57 - Kaixuan Wei, Angelica I. Avilés-Rivero, Jingwei Liang, Ying Fu, Hua Huang, Carola-Bibiane Schönlieb:
TFPnP: Tuning-free Plug-and-Play Proximal Algorithms with Applications to Inverse Imaging Problems. 16:1-16:48 - Michele Peruzzi, David B. Dunson:
Spatial Multivariate Trees for Big Data Bayesian Regression. 17:1-17:40 - Xuebin Zheng, Bingxin Zhou, Yu Guang Wang, Xiaosheng Zhuang:
Decimated Framelet System on Graphs and Fast G-Framelet Transforms. 18:1-18:68 - Oxana A. Manita, Mark A. Peletier, Jacobus W. Portegies, Jaron Sanders, Albert Senen-Cerda:
Universal Approximation in Dropout Neural Networks. 19:1-19:46 - Tomojit Ghosh, Michael Kirby:
Supervised Dimensionality Reduction and Visualization using Centroid-Encoder. 20:1-20:34 - Jakob Drefs, Enrico Guiraud, Jörg Lücke:
Evolutionary Variational Optimization of Generative Models. 21:1-21:51 - Ali Eshragh, Fred Roosta, Asef Nazari, Michael W. Mahoney:
LSAR: Efficient Leverage Score Sampling Algorithm for the Analysis of Big Time Series Data. 22:1-22:36 - Yuangang Pan, Ivor W. Tsang, Weijie Chen, Gang Niu, Masashi Sugiyama:
Fast and Robust Rank Aggregation against Model Misspecification. 23:1-23:35 - Derek Driggs, Jingwei Liang, Carola-Bibiane Schönlieb:
On Biased Stochastic Gradient Estimation. 24:1-24:43 - Maxime Vono, Daniel Paulin, Arnaud Doucet:
Efficient MCMC Sampling with Dimension-Free Convergence Rate using ADMM-type Splitting. 25:1-25:69 - Emir Demirovic, Anna Lukina, Emmanuel Hebrard, Jeffrey Chan, James Bailey, Christopher Leckie, Kotagiri Ramamohanarao, Peter J. Stuckey:
MurTree: Optimal Decision Trees via Dynamic Programming and Search. 26:1-26:47 - Narayana Santhanam, Venkatachalam Anantharam, Wojciech Szpankowski:
Data-Derived Weak Universal Consistency. 27:1-27:55 - Mohammed Rayyan Sheriff, Debasish Chatterjee:
Novel Min-Max Reformulations of Linear Inverse Problems. 28:1-28:46 - Kaiyi Ji, Junjie Yang, Yingbin Liang:
Theoretical Convergence of Multi-Step Model-Agnostic Meta-Learning. 29:1-29:41 - Augusto Fasano, Daniele Durante:
A Class of Conjugate Priors for Multinomial Probit Models which Includes the Multivariate Normal One. 30:1-30:26 - Jaouad Mourtada, Stéphane Gaïffas:
An improper estimator with optimal excess risk in misspecified density estimation and logistic regression. 31:1-31:49 - Horia Mania, Michael I. Jordan, Benjamin Recht:
Active Learning for Nonlinear System Identification with Guarantees. 32:1-32:30 - Tri M. Le, Bertrand S. Clarke:
Model Averaging Is Asymptotically Better Than Model Selection For Prediction. 33:1-33:53 - Weijing Tang, Jiaqi Ma, Qiaozhu Mei, Ji Zhu:
SODEN: A Scalable Continuous-Time Survival Model through Ordinary Differential Equation Networks. 34:1-34:29 - Guojun Zhang, Pascal Poupart, Yaoliang Yu:
Optimality and Stability in Non-Convex Smooth Games. 35:1-35:71 - Feihu Huang, Shangqian Gao, Jian Pei, Heng Huang:
Accelerated Zeroth-Order and First-Order Momentum Methods from Mini to Minimax Optimization. 36:1-36:70 - Matteo Pegoraro, Mario Beraha:
Projected Statistical Methods for Distributional Data on the Real Line with the Wasserstein Metric. 37:1-37:59 - Lorenzo Pacchiardi, Ritabrata Dutta:
Score Matched Neural Exponential Families for Likelihood-Free Inference. 38:1-38:71 - Jeremiah Birrell, Paul Dupuis, Markos A. Katsoulakis, Yannis Pantazis, Luc Rey-Bellet:
(f, Γ)-Divergences: Interpolating between f-Divergences and Integral Probability Metrics. 39:1-39:70 - Nikita Puchkin, Vladimir G. Spokoiny:
Structure-adaptive Manifold Estimation. 40:1-40:62 - Timothy I. Cannings, Yingying Fan:
The correlation-assisted missing data estimator. 41:1-41:49 - Zhong Li, Jiequn Han, Weinan E, Qianxiao Li:
Approximation and Optimization Theory for Linear Continuous-Time Recurrent Neural Networks. 42:1-42:85 - Rory Mitchell, Joshua Cooper, Eibe Frank, Geoffrey Holmes:
Sampling Permutations for Shapley Value Estimation. 43:1-43:46 - Si Liu, Risheek Garrepalli, Dan Hendrycks, Alan Fern, Debashis Mondal, Thomas G. Dietterich:
PAC Guarantees and Effective Algorithms for Detecting Novel Categories. 44:1-44:47 - Kevin O'Connor, Kevin McGoff, Andrew B. Nobel:
Optimal Transport for Stationary Markov Chains via Policy Iteration. 45:1-45:52 - Wanrong Zhu, Zhipeng Lou, Wei Biao Wu:
Beyond Sub-Gaussian Noises: Sharp Concentration Analysis for Stochastic Gradient Descent. 46:1-46:22 - Jonathan Ho, Chitwan Saharia, William Chan, David J. Fleet, Mohammad Norouzi, Tim Salimans:
Cascaded Diffusion Models for High Fidelity Image Generation. 47:1-47:33 - Zhiyan Ding, Shi Chen, Qin Li, Stephen J. Wright:
Overparameterization of Deep ResNet: Zero Loss and Mean-field Analysis. 48:1-48:65 - Xinyi Wang, Lang Tong:
Innovations Autoencoder and its Application in One-class Anomalous Sequence Detection. 49:1-49:27 - Luong Ha Nguyen, James-A. Goulet:
Analytically Tractable Hidden-States Inference in Bayesian Neural Networks. 50:1-50:33 - Dominique Benielli, Baptiste Bauvin, Sokol Koço, Riikka Huusari, Cécile Capponi, Hachem Kadri, François Laviolette:
Toolbox for Multimodal Learn (scikit-multimodallearn). 51:1-51:7 - Zijun Gao, Trevor Hastie:
LinCDE: Conditional Density Estimation via Lindsey's Method. 52:1-52:55 - Philipp Bach, Victor Chernozhukov, Malte S. Kurz, Martin Spindler:
DoubleML - An Object-Oriented Implementation of Double Machine Learning in Python. 53:1-53:6 - Marius Lindauer, Katharina Eggensperger, Matthias Feurer, André Biedenkapp, Difan Deng, Carolin Benjamins, Tim Ruhkopf, René Sass, Frank Hutter:
SMAC3: A Versatile Bayesian Optimization Package for Hyperparameter Optimization. 54:1-54:9 - Terrance D. Savitsky, Matthew R. Williams, Jingchen Hu:
Bayesian Pseudo Posterior Mechanism under Asymptotic Differential Privacy. 55:1-55:37 - Victor Guilherme Turrisi da Costa, Enrico Fini, Moin Nabi, Nicu Sebe, Elisa Ricci:
solo-learn: A Library of Self-supervised Methods for Visual Representation Learning. 56:1-56:6 - Han Zhao, Geoffrey J. Gordon:
Inherent Tradeoffs in Learning Fair Representations. 57:1-57:26 - Craig M. Lewis, Francesco Grossetti:
A Statistical Approach for Optimal Topic Model Identification. 58:1-58:20 - Carlos Fernández-Loría, Foster J. Provost:
Causal Classification: Treatment Effect Estimation vs. Outcome Prediction. 59:1-59:35 - Xun Zhang, William B. Haskell, Zhisheng Ye:
A Unifying Framework for Variance-Reduced Algorithms for Finding Zeroes of Monotone Operators. 60:1-60:44 - Hengrui Luo, Giovanni Nattino, Matthew T. Pratola:
Sparse Additive Gaussian Process Regression. 61:1-61:34 - Manfred Jaeger:
The AIM and EM Algorithms for Learning from Coarse Data. 62:1-62:55 - Ben Sherwood, Adam Maidman:
Additive nonlinear quantile regression in ultra-high dimension. 63:1-63:47 - Abhishek Roy, Krishnakumar Balasubramanian, Saeed Ghadimi, Prasant Mohapatra:
Stochastic Zeroth-Order Optimization under Nonstationarity and Nonconvexity. 64:1-64:47 - Tianyi Lin, Nhat Ho, Marco Cuturi, Michael I. Jordan:
On the Complexity of Approximating Multimarginal Optimal Transport. 65:1-65:43 - Aaron J. Molstad:
New Insights for the Multivariate Square-Root Lasso. 66:1-66:52 - Chiyuan Zhang, Samy Bengio, Yoram Singer:
Are All Layers Created Equal? 67:1-67:28 - Wei Zhu, Qiang Qiu, A. Robert Calderbank, Guillermo Sapiro, Xiuyuan Cheng:
Scaling-Translation-Equivariant Networks with Decomposed Convolutional Filters. 68:1-68:45 - Alex Olshevsky:
Asymptotic Network Independence and Step-Size for a Distributed Subgradient Method. 69:1-69:32 - Asad Haris, Noah Simon, Ali Shojaie:
Generalized Sparse Additive Models. 70:1-70:56 - Wanjun Liu, Xiufan Yu, Runze Li:
Multiple-Splitting Projection Test for High-Dimensional Mean Vectors. 71:1-71:27 - Susanna Lange, Kyle Helfrich, Qiang Ye:
Batch Normalization Preconditioning for Neural Network Training. 72:1-72:41 - George Wynne, Andrew B. Duncan:
A Kernel Two-Sample Test for Functional Data. 73:1-73:51 - Ba-Hien Tran, Simone Rossi, Dimitrios Milios, Maurizio Filippone:
All You Need is a Good Functional Prior for Bayesian Deep Learning. 74:1-74:56 - Gábor Melis, András György, Phil Blunsom:
Mutual Information Constraints for Monte-Carlo Objectives to Prevent Posterior Collapse Especially in Language Modelling. 75:1-75:36 - Lilian Besson, Emilie Kaufmann, Odalric-Ambrym Maillard, Julien Seznec:
Efficient Change-Point Detection for Tackling Piecewise-Stationary Bandits. 77:1-77:40 - Yu-Guan Hsieh, Franck Iutzeler, Jérôme Malick, Panayotis Mertikopoulos:
Multi-Agent Online Optimization with Delays: Asynchronicity, Adaptivity, and Optimism. 78:1-78:49 - Yuling Yao, Aki Vehtari, Andrew Gelman:
Stacking for Non-mixing Bayesian Computations: The Curse and Blessing of Multimodal Posteriors. 79:1-79:45 - Marta Catalano, Pierpaolo De Blasi, Antonio Lijoi, Igor Prünster:
Posterior Asymptotics for Boosted Hierarchical Dirichlet Process Mixtures. 80:1-80:23 - David G. Harris, Thomas W. Pensyl, Aravind Srinivasan, Khoa Trinh:
Dependent randomized rounding for clustering and partition systems with knapsack constraints. 81:1-81:41 - Boxin Zhao, Y. Samuel Wang, Mladen Kolar:
FuDGE: A Method to Estimate a Functional Differential Graph in a High-Dimensional Setting. 82:1-82:82 - Yichi Zhang, Molei Liu, Matey Neykov, Tianxi Cai:
Prior Adaptive Semi-supervised Learning with Application to EHR Phenotyping. 83:1-83:25 - Rajarshi Guhaniyogi, Cheng Li, Terrance D. Savitsky, Sanvesh Srivastava:
Distributed Bayesian Varying Coefficient Modeling Using a Gaussian Process Prior. 84:1-84:59 - Zhanrui Cai, Runze Li, Yaowu Zhang:
A Distribution Free Conditional Independence Test with Applications to Causal Discovery. 85:1-85:41 - Chao Shen, Yu-Ting Lin, Hau-Tieng Wu:
Robust and scalable manifold learning via landmark diffusion for long-term medical signal processing. 86:1-86:30 - Rafael Izbicki, Gilson Y. Shimizu, Rafael Bassi Stern:
CD-split and HPD-split: Efficient Conformal Regions in High Dimensions. 87:1-87:32 - Hongzhi Liu, Yingpeng Du, Zhonghai Wu:
Generalized Ambiguity Decomposition for Ranking Ensemble Learning. 88:1-88:36 - Ines Chami, Sami Abu-El-Haija, Bryan Perozzi, Christopher Ré, Kevin Murphy:
Machine Learning on Graphs: A Model and Comprehensive Taxonomy. 89:1-89:64 - Xi Chen, Bo Jiang, Tianyi Lin, Shuzhong Zhang:
Accelerating Adaptive Cubic Regularization of Newton's Method via Random Sampling. 90:1-90:38 - Eran Malach, Shai Shalev-Shwartz:
When Hardness of Approximation Meets Hardness of Learning. 91:1-91:24 - Paz Fink Shustin, Haim Avron:
Gauss-Legendre Features for Gaussian Process Regression. 92:1-92:47 - Jakob Raymaekers, Ruben H. Zamar:
Regularized K-means Through Hard-Thresholding. 93:1-93:48 - Kweku Abraham, Ismael Castillo, Elisabeth Gassiat:
Multiple Testing in Nonparametric Hidden Markov Models: An Empirical Bayes Approach. 94:1-94:57 - Jan Niklas Böhm, Philipp Berens, Dmitry Kobak:
Attraction-Repulsion Spectrum in Neighbor Embeddings. 95:1-95:32 - Chunxiao Li, Cynthia Rudin, Tyler H. McCormick:
Rethinking Nonlinear Instrumental Variable Models through Prediction Validity. 96:1-96:55 - Daniel Sanz-Alonso, Ruiyi Yang:
Unlabeled Data Help in Graph-Based Semi-Supervised Learning: A Bayesian Nonparametrics Perspective. 97:1-97:28 - Hsiang-Fu Yu, Kai Zhong, Jiong Zhang, Wei-Cheng Chang, Inderjit S. Dhillon:
PECOS: Prediction for Enormous and Correlated Output Spaces. 98:1-98:32 - Qiong Zhang, Jiahua Chen:
Distributed Learning of Finite Gaussian Mixtures. 99:1-99:40 - Hannes Köhler, Andreas Christmann:
Total Stability of SVMs and Localized SVMs. 100:1-100:41 - Xiangyu Yang, Jiashan Wang, Hao Wang:
Towards An Efficient Approach for the Nonconvex ℓp Ball Projection: Algorithm and Analysis. 101:1-101:31 - Efstathia Bura, Liliana Forzani, Rodrigo García Arancibia, Pamela Llop, Diego Tomassi:
Sufficient reductions in regression with mixed predictors. 102:1-102:47 - Nir Weinberger, Guy Bresler:
The EM Algorithm is Adaptively-Optimal for Unbalanced Symmetric Gaussian Mixtures. 103:1-103:79 - F. Richard Guo, Emilija Perkovic:
Efficient Least Squares for Estimating Total Effects under Linearity and Causal Sufficiency. 104:1-104:41 - Michael Puthawala, Konik Kothari, Matti Lassas, Ivan Dokmanic, Maarten V. de Hoop:
Globally Injective ReLU Networks. 105:1-105:55 - Bokun Wang, Shiqian Ma, Lingzhou Xue:
Riemannian Stochastic Proximal Gradient Methods for Nonsmooth Optimization over the Stiefel Manifold. 106:1-106:33 - Christoffer Löffler, Christopher Mutschler:
IALE: Imitating Active Learner Ensembles. 107:1-107:29 - Daniel R. Kowal:
Bayesian subset selection and variable importance for interpretable prediction and classification. 108:1-108:38 - Kayvan Sadeghi, Terry Soo:
Conditions and Assumptions for Constraint-based Causal Structure Learning. 109:1-109:34 - Jun Ho Yoon, Seyoung Kim:
EiGLasso for Scalable Sparse Kronecker-Sum Inverse Covariance Estimation. 110:1-110:39 - Masaaki Imaizumi, Kenji Fukumizu:
Advantage of Deep Neural Networks for Estimating Functions with Singularity on Hypersurfaces. 111:1-111:54 - Shu Hu, Yiming Ying, Xin Wang, Siwei Lyu:
Sum of Ranked Range Loss for Supervised Learning. 112:1-112:44 - José Correa, Andrés Cristi, Boris Epstein, José A. Soto:
The Two-Sided Game of Googol. 113:1-113:37 - Kwan Ho Ryan Chan, Yaodong Yu, Chong You, Haozhi Qi, John Wright, Yi Ma:
ReduNet: A White-box Deep Network from the Principle of Maximizing Rate Reduction. 114:1-114:103 - Linh Tran, Maja Pantic, Marc Peter Deisenroth:
Cauchy-Schwarz Regularized Autoencoder. 115:1-115:37 - Jian Huang, Yuling Jiao, Zhen Li, Shiao Liu, Yang Wang, Yunfei Yang:
An Error Analysis of Generative Adversarial Networks for Learning Distributions. 116:1-116:43 - Chelsea Sidrane, Amir Maleki, Ahmed Irfan, Mykel J. Kochenderfer:
OVERT: An Algorithm for Safety Verification of Neural Network Control Policies for Nonlinear Systems. 117:1-117:45 - Hanyuan Hang, Yuchao Cai, Hanfang Yang, Zhouchen Lin:
Under-bagging Nearest Neighbors for Imbalanced Classification. 118:1-118:63 - Lei Wu, Jihao Long:
A spectral-based analysis of the separation between two-layer neural networks and linear methods. 119:1-119:34 - William Fedus, Barret Zoph, Noam Shazeer:
Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity. 120:1-120:39 - Huang Fang, Nicholas J. A. Harvey, Victor S. Portella, Michael P. Friedlander:
Online Mirror Descent and Dual Averaging: Keeping Pace in the Dynamic Case. 121:1-121:38 - Luca Venturi, Samy Jelassi, Tristan Ozuch, Joan Bruna:
Depth separation beyond radial functions. 122:1-122:56 - Jian-Feng Cai, Jingyang Li, Dong Xia:
Provable Tensor-Train Format Tensor Completion by Riemannian Optimization. 123:1-123:77 - Julien Herzen, Francesco Lässig, Samuele Giuliano Piazzetta, Thomas Neuer, Léo Tafti, Guillaume Raille, Tomas Van Pottelbergh, Marek Pasieka, Andrzej Skrodzki, Nicolas Huguenin, Maxime Dumonal, Jan Koscisz, Dennis Bader, Frédérick Gusset, Mounir Benheddi, Camila Williamson, Michal Kosinski, Matej Petrik, Gaël Grosch:
Darts: User-Friendly Modern Machine Learning for Time Series. 124:1-124:6 - Niladri S. Chatterji, Philip M. Long:
Foolish Crowds Support Benign Overfitting. 125:1-125:12 - Sreejith Sreekumar, Ziv Goldfeld:
Neural Estimation of Statistical Divergences. 126:1-126:75 - Haoyuan Chen, Liang Ding, Rui Tuo:
Kernel Packet: An Exact and Scalable Algorithm for Gaussian Process Regression with Matérn Correlations. 127:1-127:32 - Jiaoyang Huang, Daniel Zhengyu Huang, Qing Yang, Guang Cheng:
Power Iteration for Tensor PCA. 128:1-128:47 - Washim Uddin Mondal, Mridul Agarwal, Vaneet Aggarwal, Satish V. Ukkusuri:
On the Approximation of Cooperative Heterogeneous Multi-Agent Reinforcement Learning (MARL) using Mean Field Control (MFC). 129:1-129:46 - Alexander Shevchenko, Vyacheslav Kungurtsev, Marco Mondelli:
Mean-field Analysis of Piecewise Linear Solutions for Wide ReLU Networks. 130:1-130:55 - Julie Nutini, Issam H. Laradji, Mark Schmidt:
Let's Make Block Coordinate Descent Converge Faster: Faster Greedy Rules, Message-Passing, Active-Set Complexity, and Superlinear Convergence. 131:1-131:74 - Jeremias Knoblauch, Jack Jewson, Theodoros Damoulas:
An Optimization-centric View on Bayes' Rule: Reviewing and Generalizing Variational Inference. 132:1-132:109 - Samson J. Koelle, Hanyu Zhang, Marina Meila, Yu-Chia Chen:
Manifold Coordinates with Physical Meaning. 133:1-133:57 - Shaohan Chen, Nikolaos V. Sahinidis, Chuanhou Gao:
Transfer Learning in Information Criteria-based Feature Selection. 134:1-134:105 - Jeremias Sulam, Chong You, Zhihui Zhu:
Recovery and Generalization in Over-Realized Dictionary Learning. 135:1-135:23 - Quanming Yao, Yaqing Wang, Bo Han, James T. Kwok:
Low-rank Tensor Learning with Nonconvex Overlapped Nuclear Norm Regularization. 136:1-136:60 - Tianyi Lin, Nhat Ho, Michael I. Jordan:
On the Efficiency of Entropic Regularized Algorithms for Optimal Transport. 137:1-137:42 - Samuel Herrmann, Cristina Zucca:
Exact simulation of diffusion first exit times: algorithm acceleration. 138:1-138:20 - Ilai Bistritz, Zhengyuan Zhou, Xi Chen, Nicholas Bambos, José H. Blanchet:
No Weighted-Regret Learning in Adversarial Bandits with Delays. 139:1-139:43 - Yahya Sattar, Samet Oymak:
Non-asymptotic and Accurate Learning of Nonlinear Dynamical Systems. 140:1-140:49 - Konstantinos Pantazis, Avanti Athreya, Jesús Arroyo, William N. Frost, Evan S. Hill, Vince Lyzinski:
The Importance of Being Correlated: Implications of Dependence in Joint Spectral Inference across Multiple Networks. 141:1-141:77 - Roy Mitz, Yoel Shkolnisky:
A Perturbation-Based Kernel Approximation Framework. 142:1-142:26 - Alex A. Gorodetsky, Cosmin Safta, John D. Jakeman:
Reverse-mode differentiation in arbitrary tensor network format: with application to supervised learning. 143:1-143:29 - Aaron Defazio, Samy Jelassi:
A Momentumized, Adaptive, Dual Averaged Gradient Method. 144:1-144:34 - Andrew Patterson, Adam White, Martha White:
A Generalized Projected Bellman Error for Off-policy Value Estimation in Reinforcement Learning. 145:1-145:61 - Andrea Patane, Arno Blaas, Luca Laurenti, Luca Cardelli, Stephen Roberts, Marta Kwiatkowska:
Adversarial Robustness Guarantees for Gaussian Processes. 146:1-146:55 - Marco Avella-Medina, José Luis Montiel Olea, Cynthia Rush, Amilcar Velez:
On the Robustness to Misspecification of α-posteriors and Their Variational Approximations. 147:1-147:51 - Hanbaek Lyu, Christopher Strohmeier, Deanna Needell:
Online Nonnegative CP-dictionary Learning for Markovian Data. 148:1-148:50 - Quentin Bertrand, Quentin Klopfenstein, Mathurin Massias, Mathieu Blondel, Samuel Vaiter, Alexandre Gramfort, Joseph Salmon:
Implicit Differentiation for Fast Hyperparameter Selection in Non-Smooth Convex Learning. 149:1-149:43 - Michaël Allouche, Stéphane Girard, Emmanuel Gobet:
EV-GAN: Simulation of extreme events with ReLU neural networks. 150:1-150:39 - Edward Wagstaff, Fabian B. Fuchs, Martin Engelcke, Michael A. Osborne, Ingmar Posner:
Universal Approximation of Functions on Sets. 151:1-151:56 - Sébastien Forestier, Rémy Portelas, Yoan Mollard, Pierre-Yves Oudeyer:
Intrinsically Motivated Goal Exploration Processes with Automatic Curriculum Learning. 152:1-152:41 - Shangtong Zhang, Shimon Whiteson:
Truncated Emphatic Temporal Difference Methods for Prediction and Control. 153:1-153:59 - Yanwei Jia, Xun Yu Zhou:
Policy Evaluation and Temporal-Difference Learning in Continuous Time and Space: A Martingale Approach. 154:1-154:55 - Guy Hacohen, Daphna Weinshall:
Principal Components Bias in Over-parameterized Linear Models, and its Manifestation in Deep Neural Networks. 155:1-155:46 - Yingying Zhang, Yan-Yong Zhao, Heng Lian:
Statistical Rates of Convergence for Functional Partially Linear Support Vector Machines for Classification. 156:1-156:24 - Vladimir Pestov:
A universally consistent learning rule with a universally monotone error. 157:1-157:27 - Arun S. Maiya:
ktrain: A Low-Code Library for Augmented Machine Learning. 158:1-158:6 - Martin Emil Jakobsen, Rajen Dinesh Shah, Peter Bühlmann, Jonas Peters:
Structure Learning for Directed Trees. 159:1-159:97 - Nikola Konstantinov, Christoph H. Lampert:
Fairness-Aware PAC Learning from Corrupted Data. 160:1-160:60 - Olympio Hacquard, Krishnakumar Balasubramanian, Gilles Blanchard, Clément Levrard, Wolfgang Polonik:
Topologically penalized regression on manifolds. 161:1-161:39 - Dachao Lin, Haishan Ye, Zhihua Zhang:
Explicit Convergence Rates of Greedy and Random Quasi-Newton Methods. 162:1-162:40 - Tian Tong, Cong Ma, Ashley Prater-Bennette, Erin E. Tripp, Yuejie Chi:
Scaling and Scalability: Provable Nonconvex Low-Rank Tensor Estimation from Incomplete Measurements. 163:1-163:77 - Antoine Dedieu, Rahul Mazumder, Haoyue Wang:
Solving L1-regularized SVMs and Related Linear Programs: Revisiting the Effectiveness of Column and Constraint Generation. 164:1-164:41 - Ingrid Blaschzyk, Ingo Steinwart:
Improved Classification Rates for Localized SVMs. 165:1-165:59 - Fredrik D. Johansson, Uri Shalit, Nathan Kallus, David A. Sontag:
Generalization Bounds and Representation Learning for Estimation of Potential Outcomes and Causal Effects. 166:1-166:50 - Michal Derezinski, Manfred K. Warmuth, Daniel Hsu:
Unbiased estimators for random design regression. 167:1-167:46 - Lucas Henrique Sousa Mello, Flávio Miguel Varejão, Alexandre Loureiros Rodrigues:
A Worst Case Analysis of Calibrated Label Ranking Multi-label Classification Method. 168:1-168:30 - Hai Shu, Zhe Qu, Hongtu Zhu:
D-GCCA: Decomposition-based Generalized Canonical Correlation Analysis for Multi-view High-dimensional Data. 169:1-169:64 - Tim Coleman, Wei Peng, Lucas Mentch:
Scalable and Efficient Hypothesis Testing with Random Forests. 170:1-170:35 - Aidan N. Gomez, Oscar Key, Kuba Perlin, Stephen Gou, Nick Frosst, Jeff Dean, Yarin Gal:
Interlocking Backpropagation: Improving depthwise model-parallelism. 171:1-171:28 - Yuanyu Wan, Guanghui Wang, Wei-Wei Tu, Lijun Zhang:
Projection-free Distributed Online Learning with Sublinear Communication Complexity. 172:1-172:53 - Diego Granziol, Stefan Zohren, Stephen Roberts:
Learning Rates as a Function of Batch Size: A Random Matrix Theory Approach to Neural Network Training. 173:1-173:65 - Ali Ghadirzadeh, Petra Poklukar, Karol Arndt, Chelsea Finn, Ville Kyrki, Danica Kragic, Mårten Björkman:
Training and Evaluation of Deep Policies Using Reinforcement Learning and Generative Models. 174:1-174:37 - Idan Attias, Aryeh Kontorovich, Yishay Mansour:
Improved Generalization Bounds for Adversarially Robust Learning. 175:1-175:31 - Ilya Chevyrev, Harald Oberhauser:
Signature Moments to Characterize Laws of Stochastic Processes. 176:1-176:42 - Ping Ma, Yongkai Chen, Xinlian Zhang, Xin Xing, Jingyi Ma, Michael W. Mahoney:
Asymptotic Analysis of Sampling Estimators for Randomized Numerical Linear Algebra Algorithms. 177:1-177:45 - Matteo Basei, Xin Guo, Anran Hu, Yufei Zhang:
Logarithmic Regret for Episodic Continuous-Time Linear-Quadratic Reinforcement Learning over a Finite-Time Horizon. 178:1-178:34 - Aurélien Garivier, Hédi Hadiji, Pierre Ménard, Gilles Stoltz:
KL-UCB-Switch: Optimal Regret Bounds for Stochastic Bandits from Both a Distribution-Dependent and a Distribution-Free Viewpoints. 179:1-179:66 - Huaqing Jin, Yanyuan Ma, Fei Jiang:
Matrix Completion with Covariate Information and Informative Missingness. 180:1-180:62 - David Holzmüller, Ingo Steinwart:
Training Two-Layer ReLU Networks with Gradient Descent is Inconsistent. 181:1-181:82 - Alfonso Landeros, Oscar Hernan Madrid Padilla, Hua Zhou, Kenneth Lange:
Extensions to the Proximal Distance Method of Constrained Optimization. 182:1-182:45 - Yichen Zhou, Giles Hooker:
Boulevard: Regularized Stochastic Gradient Boosted Trees and Their Limiting Distribution. 183:1-183:44 - Indrajit Ghosh, Anirban Bhattacharya, Debdeep Pati:
Statistical Optimality and Stability of Tangent Transform Algorithms in Logit Models. 184:1-184:42 - Bhumika Mistry, Katayoun Farrahi, Jonathon S. Hare:
A Primer for Neural Arithmetic Logic Modules. 185:1-185:58 - Song Liu, Takafumi Kanamori, Daniel J. Williams:
Estimating Density Models with Truncation Boundaries using Score Matching. 186:1-186:38 - Nicolás García Trillos, Ryan Murray:
Adversarial Classification: Necessary Conditions and Geometric Flows. 187:1-187:38 - Noa Ben-David, Sivan Sabato:
Active Structure Learning of Bayesian Networks in an Observational Setting. 188:1-188:38 - Tianlong Chen, Xiaohan Chen, Wuyang Chen, Howard Heaton, Jialin Liu, Zhangyang Wang, Wotao Yin:
Learning to Optimize: A Primer and A Benchmark. 189:1-189:59 - Pedro F. Felzenszwalb, Caroline J. Klivans, Alice Paul:
Clustering with Semidefinite Programming and Fixed Point Iteration. 190:1-190:23 - Benny Avelin, Anders Karlsson:
Deep Limits and a Cut-Off Phenomenon for Neural Networks. 191:1-191:29 - Leon Bungert, Tim Roith, Daniel Tenbrinck, Martin Burger:
A Bregman Learning Framework for Sparse Neural Networks. 192:1-192:43 - Wenjia Wang, Bing-Yi Jing:
Gaussian process regression: Optimality, robustness, and relationship with kernel ridge regression. 193:1-193:67 - Anna Bonnet, Claire Lacour, Franck Picard, Vincent Rivoirard:
Uniform deconvolution for Poisson Point Processes. 194:1-194:36 - Yang Yu, Shih-Kang Chao, Guang Cheng:
Distributed Bootstrap for Simultaneous Inference Under High Dimensionality. 195:1-195:77 - Anastasis Kratsios, Leonie Papon:
Universal Approximation Theorems for Differentiable Geometric Deep Learning. 196:1-196:73 - Xuhong Li, Haoyi Xiong, Xingjian Li, Xuanyu Wu, Zeyu Chen, Dejing Dou:
InterpretDL: Explaining Deep Models in PaddlePaddle. 197:1-197:6 - Subha Maity, Yuekai Sun, Moulinath Banerjee:
Meta-analysis of heterogeneous data: integrative sparse regression in high-dimensions. 198:1-198:50 - Jongkyeong Kang, Seung Jun Shin:
A Forward Approach for Sufficient Dimension Reduction in Binary Classification. 199:1-199:31 - Katherine Tsai, Mladen Kolar, Oluwasanmi Koyejo:
A Nonconvex Framework for Structured Dynamic Covariance Recovery. 200:1-200:91 - Quentin Duchemin, Yohann de Castro, Claire Lacour:
Three rates of convergence or separation via U-statistics in a dependent framework. 201:1-201:59 - Jin Zhu, Xueqin Wang, Liyuan Hu, Junhao Huang, Kangkang Jiang, Yanhang Zhang, Shiyun Lin, Junxian Zhu:
abess: A Fast Best-Subset Selection Library in Python and R. 202:1-202:7 - Jon Cockayne, Matthew M. Graham, Chris J. Oates, Timothy John Sullivan, Onur Teymur:
Testing Whether a Learning Procedure is Calibrated. 203:1-203:36 - Baoluo Sun, Yifan Cui, Eric Tchetgen Tchetgen:
Selective Machine Learning of the Average Treatment Effect with an Invalid Instrumental Variable. 204:1-204:40 - Dennis Nieman, Botond Szabó, Harry van Zanten:
Contraction rates for sparse variational approximations in Gaussian process regression. 205:1-205:26 - Hoai An Le Thi, Hoang Phuc Hau Luu, Hoai Minh Le, Tao Pham Dinh:
Stochastic DCA with Variance Reduction and Applications in Machine Learning. 206:1-206:44 - Ji Chen, Xiaodong Li, Zongming Ma:
Nonconvex Matrix Completion with Linearly Parameterized Factors. 207:1-207:35 - Mikhail Usvyatsov, Rafael Ballester-Ripoll, Konrad Schindler:
tntorch: Tensor Network Learning with PyTorch. 208:1-208:6 - Kaichao You, Yong Liu, Ziyang Zhang, Jianmin Wang, Michael I. Jordan, Mingsheng Long:
Ranking and Tuning Pre-trained Models: A New Paradigm for Exploiting Model Hubs. 209:1-209:47 - Michael Pearce, Elena A. Erosheva:
A Unified Statistical Learning Model for Rankings and Scores with Application to Grant Panel Review. 210:1-210:33 - Feng Zhou, Quyu Kong, Zhijie Deng, Jichao Kan, Yixuan Zhang, Cheng Feng, Jun Zhu:
Efficient Inference for Dynamic Flexible Interactions of Neural Populations. 211:1-211:49 - Mridul Agarwal, Vaneet Aggarwal, Kamyar Azizzadenesheli:
Multi-Agent Multi-Armed Bandits with Limited Communication. 212:1-212:24 - Lars H. B. Olsen, Ingrid Kristine Glad, Martin Jullum, Kjersti Aas:
Using Shapley Values and Variational Autoencoders to Explain Predictive Models with Dependent Mixed Features. 213:1-213:51 - Yoav Freund, Yi-An Ma, Tong Zhang:
When is the Convergence Time of Langevin Algorithms Dimension Independent? A Composite Optimization Viewpoint. 214:1-214:32 - Georgios Kissas, Jacob H. Seidman, Leonardo Ferreira Guilhoto, Victor M. Preciado, George J. Pappas, Paris Perdikaris:
Learning Operators with Coupled Attention. 215:1-215:63 - Zhen Huang, Nabarun Deb, Bodhisattva Sen:
Kernel Partial Correlation Coefficient - a Measure of Conditional Dependence. 216:1-216:58 - Bo Shen, Weijun Xie, Zhenyu James Kong:
Smooth Robust Tensor Completion for Background/Foreground Separation with Missing Pixels: Novel Algorithm with Convergence Guarantee. 217:1-217:40 - Nicolas Boullé, Seick Kim, Tianyi Shi, Alex Townsend:
Learning Green's functions associated with time-dependent partial differential equations. 218:1-218:34 - Diviyan Kalainathan, Olivier Goudet, Isabelle Guyon, David Lopez-Paz, Michèle Sebag:
Structural Agnostic Modeling: Adversarial Learning of Causal Graphs. 219:1-219:62 - Alireza Fallah, Mert Gürbüzbalaban, Asuman E. Ozdaglar, Umut Simsekli, Lingjiong Zhu:
Robust Distributed Accelerated Stochastic Gradient Methods for Multi-Agent Networks. 220:1-220:96 - Dhruva Tirumala, Alexandre Galashov, Hyeonwoo Noh, Leonard Hasenclever, Razvan Pascanu, Jonathan Schwarz, Guillaume Desjardins, Wojciech Marian Czarnecki, Arun Ahuja, Yee Whye Teh, Nicolas Heess:
Behavior Priors for Efficient Reinforcement Learning. 221:1-221:68 - Huan Li, Zhouchen Lin, Yongchun Fang:
Variance Reduced EXTRA and DIGing and Their Optimal Acceleration for Strongly Convex Decentralized Optimization. 222:1-222:41 - Qiang Zhou, Sinno Jialin Pan:
On Acceleration for Convex Composite Minimization with Noise-Corrupted Gradients and Approximate Proximal Mapping. 223:1-223:59 - Lucas Mentch, Siyu Zhou:
Getting Better from Worse: Augmented Bagging and A Cautionary Tale of Variable Importance. 224:1-224:32 - Charvi Rastogi, Sivaraman Balakrishnan, Nihar B. Shah, Aarti Singh:
Two-Sample Testing on Ranked Preference Data and the Role of Modeling Assumptions. 225:1-225:48 - Alexander D'Amour, Katherine A. Heller, Dan Moldovan, Ben Adlam, Babak Alipanahi, Alex Beutel, Christina Chen, Jonathan Deaton, Jacob Eisenstein, Matthew D. Hoffman, Farhad Hormozdiari, Neil Houlsby, Shaobo Hou, Ghassen Jerfel, Alan Karthikesalingam, Mario Lucic, Yi-An Ma, Cory Y. McLean, Diana Mincu, Akinori Mitani, Andrea Montanari, Zachary Nado, Vivek Natarajan, Christopher Nielson, Thomas F. Osborne, Rajiv Raman, Kim Ramasamy, Rory Sayres, Jessica Schrouff, Martin Seneviratne, Shannon Sequeira, Harini Suresh, Victor Veitch, Max Vladymyrov, Xuezhi Wang, Kellie Webster, Steve Yadlowsky, Taedong Yun, Xiaohua Zhai, D. Sculley:
Underspecification Presents Challenges for Credibility in Modern Machine Learning. 226:1-226:61 - Hao Chen, Lili Zheng, Raed Al Kontar, Garvesh Raskutti:
Gaussian Process Parameter Estimation Using Mini-batch Stochastic Gradient Descent: Convergence Guarantees and Empirical Benefits. 227:1-227:59 - Sébastien Gadat, Ioana Gavra:
Asymptotic Study of Stochastic Adaptive Algorithms in Non-convex Landscape. 228:1-228:54 - Congliang Chen, Li Shen, Fangyu Zou, Wei Liu:
Towards Practical Adam: Non-Convexity, Convergence Theory, and Mini-Batch Acceleration. 229:1-229:47 - Alex Bird, Christopher K. I. Williams, Christopher Hawthorne:
Multi-Task Dynamical Systems. 230:1-230:52 - Hiroaki Sasaki, Takashi Takenouchi:
Representation Learning for Maximization of MI, Nonlinear ICA and Nonlinear Subspaces with Robust Density Ratio Estimation. 231:1-231:55 - Fabio Sigrist:
Gaussian Process Boosting. 232:1-232:46 - Wenlong Mou, Nicolas Flammarion, Martin J. Wainwright, Peter L. Bartlett:
An Efficient Sampling Algorithm for Non-smooth Composite Potentials. 233:1-233:50 - Oscar Hernan Madrid Padilla, Yi Yu, Carey E. Priebe:
Change point localization in dependent dynamic nonparametric random dot product graphs. 234:1-234:59 - Arnak S. Dalalyan, Avetik G. Karagulyan, Lionel Riou-Durand:
Bounding the Error of Discretized Langevin Algorithms for Non-Strongly Log-Concave Targets. 235:1-235:38 - Chencheng Cai, Rong Chen, Han Xiao:
KoPA: Automated Kronecker Product Approximation. 236:1-236:44 - Yang Zhou, Mark Koudstaal, Dengdeng Yu, Dehan Kong, Fang Yao:
Nonparametric Principal Subspace Regression. 237:1-237:28 - Prashanth L. A., Sanjay P. Bhat:
A Wasserstein Distance Approach for Concentration of Empirical Risk Estimates. 238:1-238:61 - Zhize Li, Jian Li:
Simple and Optimal Stochastic Gradient Methods for Nonsmooth Nonconvex Optimization. 239:1-239:61 - Harsh Parikh, Cynthia Rudin, Alexander Volfovsky:
MALTS: Matching After Learning to Stretch. 240:1-240:42 - Xinwei Shen, Furui Liu, Hanze Dong, Qing Lian, Zhitang Chen, Tong Zhang:
Weakly Supervised Disentangled Generative Causal Representation Learning. 241:1-241:55 - Yang Ni, Francesco C. Stingo, Veerabhadran Baladandayuthapani:
Bayesian Covariate-Dependent Gaussian Graphical Models with Varying Structure. 242:1-242:29 - Ines Wilms, Jacob Bien:
Tree-based Node Aggregation in Sparse Graphical Models. 243:1-243:36 - Yaniv Yacoby, Weiwei Pan, Finale Doshi-Velez:
Mitigating the Effects of Non-Identifiability on Inference for Bayesian Neural Networks with Latent Variables. 244:1-244:54 - Mehdi Molkaraie:
Mappings for Marginal Probabilities with Applications to Models in Statistical Physics. 245:1-245:36 - Lorenzo Nespoli, Vasco Medici:
Multivariate Boosted Trees and Applications to Forecasting and Control. 246:1-246:47 - Oscar Hernan Madrid Padilla, Wesley Tansey, Yanzhen Chen:
Quantile regression with ReLU Networks: Estimators and minimax rates. 247:1-247:42 - Huiming Lin, Meng Li:
Double Spike Dirichlet Priors for Structured Weighting. 248:1-248:28 - Long Feng, Junhui Wang:
Projected Robust PCA with Application to Smooth Image Recovery. 249:1-249:41 - Daiqi Gao, Yufeng Liu, Donglin Zeng:
Non-asymptotic Properties of Individualized Treatment Rules from Sequentially Rule-Adaptive Trials. 250:1-250:42 - Abhijin Adiga, Chris J. Kuhlman, Madhav V. Marathe, S. S. Ravi, Daniel J. Rosenkrantz, Richard Edwin Stearns:
Using Active Queries to Infer Symmetric Node Functions of Graph Dynamical Systems. 251:1-251:43 - Diego A. Velázquez, Pau Rodríguez, Josep M. Gonfaus, F. Xavier Roca, Jordi Gonzàlez:
A Closer Look at Embedding Propagation for Manifold Smoothing. 252:1-252:27 - Alan Chan, Hugo Silva, Sungsu Lim, Tadashi Kozuno, A. Rupam Mahmood, Martha White:
Greedification Operators for Policy Optimization: Investigating Forward and Reverse KL Divergences. 253:1-253:79 - Minh-Lien Jeanne Nguyen, Claire Lacour, Vincent Rivoirard:
Adaptive Greedy Algorithm for Moderately Large Dimensions in Kernel Conditional Density Estimation. 254:1-254:74 - Shi Dong, Benjamin Van Roy, Zhengyuan Zhou:
Simple Agent, Complex Environment: Efficient Reinforcement Learning with Agent States. 255:1-255:54 - Michael Muehlebach, Michael I. Jordan:
On Constraints in First-Order Optimization: A View from Non-Smooth Dynamical Systems. 256:1-256:47 - André F. T. Martins, Marcos V. Treviso, António Farinhas, Pedro M. Q. Aguiar, Mário A. T. Figueiredo, Mathieu Blondel, Vlad Niculae:
Sparse Continuous Distributions and Fenchel-Young Losses. 257:1-257:74 - Assaf Rabinowicz, Saharon Rosset:
Tree-Based Models for Correlated Data. 258:1-258:31 - Shiwei Lan:
Learning Temporal Evolution of Spatial Dependence with Generalized Spatiotemporal Gaussian Process Models. 259:1-259:53 - Arnulf Jentzen, Adrian Riekert:
A proof of convergence for the gradient descent optimization method with random initializations in the training of neural networks with ReLU activation for piecewise linear target functions. 260:1-260:50 - Matthias Feurer, Katharina Eggensperger, Stefan Falkner, Marius Lindauer, Frank Hutter:
Auto-Sklearn 2.0: Hands-free AutoML via Meta-Learning. 261:1-261:61 - Muxuan Liang, Young-Geun Choi, Yang Ning, Maureen A. Smith, Ying-Qi Zhao:
Estimation and inference on high-dimensional individualized treatment rule in observational data using split-and-pooled de-correlated score. 262:1-262:65 - Niladri S. Chatterji, Philip M. Long, Peter L. Bartlett:
The Interplay Between Implicit Bias and Benign Overfitting in Two-Layer Linear Networks. 263:1-263:48 - José Henrique de Morais Goulart, Romain Couillet, Pierre Comon:
A Random Matrix Perspective on Random Tensors. 264:1-264:36 - Ion Necoara, Nitesh Kumar Singh:
Stochastic subgradient for composite convex optimization with functional constraints. 265:1-265:35 - Daren Wang, Zifeng Zhao, Yi Yu, Rebecca Willett:
Functional Linear Regression with Mixed Predictors. 266:1-266:94 - Jiayi Weng, Huayu Chen, Dong Yan, Kaichao You, Alexis Duburcq, Minghao Zhang, Yi Su, Hang Su, Jun Zhu:
Tianshou: A Highly Modularized Deep Reinforcement Learning Library. 267:1-267:6 - Kit C. Chan, Umar Islambekov, Alexey Luchinsky, Rebecca Sanders:
A Computationally Efficient Framework for Vector Representation of Persistence Diagrams. 268:1-268:33 - Ruixuan Zhao, Xin He, Junhui Wang:
Learning linear non-Gaussian directed acyclic graph with diverging number of nodes. 269:1-269:34 - Keru Wu, Scott C. Schmidler, Yuansi Chen:
Minimax Mixing Time of the Metropolis-Adjusted Langevin Algorithm for Log-Concave Sampling. 270:1-270:63 - Kun Chen, Ruipeng Dong, Wanwan Xu, Zemin Zheng:
Fast Stagewise Sparse Factor Regression. 271:1-271:45 - Kean Ming Tan, Heather Battey, Wen-Xin Zhou:
Communication-Constrained Distributed Quantile Regression with Optimal Statistical Guarantees. 272:1-272:61 - Cyrill Scheidegger, Julia Hörrmann, Peter Bühlmann:
The Weighted Generalised Covariance Measure. 273:1-273:68 - Shengyi Huang, Rousslan Fernand Julien Dossa, Chang Ye, Jeff Braga, Dipam Chakraborty, Kinal Mehta, João G. M. Araújo:
CleanRL: High-quality Single-file Implementations of Deep Reinforcement Learning Algorithms. 274:1-274:18 - Yanwei Jia, Xun Yu Zhou:
Policy Gradient and Actor-Critic Learning in Continuous Time and Space: Theory and Algorithms. 275:1-275:50 - Shijun Zhang, Zuowei Shen, Haizhao Yang:
Deep Network Approximation: Achieving Arbitrary Accuracy with Fixed Number of Neurons. 276:1-276:60 - Nicolò Cesa-Bianchi, Tommaso Cesari, Roberto Colomboni, Claudio Gentile, Yishay Mansour:
Nonstochastic Bandits with Composite Anonymous Feedback. 277:1-277:24 - Chiwoo Park:
Jump Gaussian Process Model for Estimating Piecewise Continuous Regression Functions. 278:1-278:37 - Amichai Painsky:
Convergence Guarantees for the Good-Turing Estimator. 279:1-279:37 - Parisa Ghane, Ulisses Braga-Neto:
Generalized Resubstitution for Classification Error Estimation. 280:1-280:30 - Nicholas M. Boffi, Stephen Tu, Jean-Jacques E. Slotine:
Nonparametric adaptive control and prediction: theory and randomized algorithms. 281:1-281:46 - Lin Xiao:
On the Convergence Rates of Policy Gradient Methods. 282:1-282:36 - Adrien Corenflos, Nicolas Chopin, Simo Särkkä:
De-Sequentialized Monte Carlo: a parallel-in-time particle smoother. 283:1-283:39 - Chuyang Ke, Jean Honorio:
Exact Partitioning of High-order Models with a Novel Convex Tensor Cone Relaxation. 284:1-284:28 - Shir Chorev, Philip Tannor, Dan Ben Israel, Noam Bressler, Itay Gabbay, Nir Hutnik, Jonatan Liberman, Matan Perlmutter, Yurii Romanyshyn, Lior Rokach:
Deepchecks: A Library for Testing and Validating Machine Learning Models and Data. 285:1-285:6 - Yong Zheng Ong, Zuowei Shen, Haizhao Yang:
Integral Autoencoder Network for Discretization-Invariant Learning. 286:1-286:45 - Haiyun He, Hanshu Yan, Vincent Y. F. Tan:
Information-Theoretic Characterization of the Generalization Error for Iterative Semi-Supervised Learning. 287:1-287:52 - Francesco Martinuzzi, Chris Rackauckas, Anas Abdelrehim, Miguel D. Mahecha, Karin Mora:
ReservoirComputing.jl: An Efficient and Modular Library for Reservoir Computing Models. 288:1-288:8 - Laura Forastiere, Fabrizia Mealli, Albert Wu, Edoardo M. Airoldi:
Estimating Causal Effects under Network Interference with Bayesian Generalized Propensity Scores. 289:1-289:61 - Davoud Ataee Tarzanagh, George Michailidis:
Regularized and Smooth Double Core Tensor Factorization for Heterogeneous Data. 290:1-290:49 - Lukasz Kidzinski, Francis K. C. Hui, David I. Warton, Trevor J. Hastie:
Generalized Matrix Factorization: efficient algorithms for fitting generalized linear latent variable models to large data arrays. 291:1-291:29 - Qiuping Wang, Ting Yan, Binyan Jiang, Chenlei Leng:
Two-mode Networks: Inference with as Many Parameters as Actors and Differential Privacy. 292:1-292:38 - Daron Anderson, Douglas J. Leith:
Expected Regret and Pseudo-Regret are Equivalent When the Optimal Arm is Unique. 293:1-293:12 - Bernardo Fichera, Aude Billard:
Linearization and Identification of Multiple-Attractor Dynamical Systems through Laplacian Eigenmaps. 294:1-294:35 - Rohit Bhattacharya, Razieh Nabi, Ilya Shpitser:
Semiparametric Inference For Causal Effects In Graphical Models With Hidden Variables. 295:1-295:76 - Dimitris Bertsimas, Jack Dunn, Ivan S. Paskov:
Stable Classification. 296:1-296:53 - Pierre-Cyril Aubin-Frankowski, Zoltán Szabó:
Handling Hard Affine SDP Shape Constraints in RKHSs. 297:1-297:54 - Simon Mandlík, Matej Racinsky, Viliam Lisý, Tomás Pevný:
JsonGrinder.jl: automated differentiable neural architecture for embedding arbitrary JSON data. 298:1-298:5 - Zeda Li, Scott A. Bruce, Tian Cai:
Interpretable Classification of Categorical Time Series Using the Spectral Envelope and Optimal Scalings. 299:1-299:31 - Vo Nguyen Le Duy, Ichiro Takeuchi:
More Powerful Conditional Selective Inference for Generalized Lasso by Parametric Programming. 300:1-300:37 - T. Tony Cai, Rong Ma:
Theoretical Foundations of t-SNE for Visualizing High-Dimensional Clustered Data. 301:1-301:54 - Yutian Chen, Liyuan Xu, Çaglar Gülçehre, Tom Le Paine, Arthur Gretton, Nando de Freitas, Arnaud Doucet:
On Instrumental Variable Regression for Deep Offline Policy Evaluation. 302:1-302:40 - Alice Gatti, Zhixiong Hu, Tess E. Smidt, Esmond G. Ng, Pieter Ghysels:
Graph Partitioning and Sparse Matrix Ordering using Reinforcement Learning and Graph Neural Networks. 303:1-303:28 - Sumit Mukherjee, Subhabrata Sen:
Variational Inference in high-dimensional linear regression. 304:1-304:56 - Anna C. Neufeld, Lucy L. Gao, Daniela M. Witten:
Tree-Values: Selective Inference for Regression Trees. 305:1-305:43 - Lu Zhang, Bob Carpenter, Andrew Gelman, Aki Vehtari:
Pathfinder: Parallel quasi-Newton variational inference. 306:1-306:49 - Songhua Wu, Tongliang Liu, Bo Han, Jun Yu, Gang Niu, Masashi Sugiyama:
Learning from Noisy Pairwise Similarity and Unlabeled Data. 307:1-307:34 - Hong T. M. Chu, Kim-Chuan Toh, Yangjing Zhang:
On Regularized Square-root Regression Problems: Distributionally Robust Interpretation and Fast Computations. 308:1-308:39 - Sjoerd Dirksen, Martin Genzel, Laurent Jacques, Alexander Stollenwerk:
The Separation Capacity of Random Neural Networks. 309:1-309:47 - Shujie Ma, Liangjun Su, Yichong Zhang:
Detecting Latent Communities in Network Formation Models. 310:1-310:61 - Tenghui Li, Guoxu Zhou, Yuning Qiu, Qibin Zhao:
Toward Understanding Convolutional Neural Networks from Volterra Convolution Perspective. 311:1-311:50 - Zirui Sun, Mingwei Dai, Yao Wang, Shao-Bo Lin:
Nystrom Regularization for Time Series Forecasting. 312:1-312:42 - Adam Block, Zeyu Jia, Yury Polyanskiy, Alexander Rakhlin:
Intrinsic Dimension Estimation Using Wasserstein Distance. 313:1-313:37 - Guy Kornowski, Ohad Shamir:
Oracle Complexity in Nonsmooth Nonconvex Optimization. 314:1-314:44 - Takuma Seno, Michita Imai:
d3rlpy: An Offline Deep Reinforcement Learning Library. 315:1-315:20 - Tian Lan, Sunil Srinivasa, Huan Wang, Stephan Zheng:
WarpDrive: Fast End-to-End Deep Multi-Agent Reinforcement Learning on a GPU. 316:1-316:6 - Hao Dong, Yuedong Wang:
Nonparametric Neighborhood Selection in Graphical Models. 317:1-317:36 - Jeff Calder, Mahmood Ettehad:
Hamilton-Jacobi equations on graphs with applications to semi-supervised learning and data depth. 318:1-318:62 - Zhuotong Chen, Qianxiao Li, Zheng Zhang:
Self-Healing Robust Neural Networks via Closed-Loop Control. 319:1-319:54 - Yidong Zhou, Hans-Georg Müller:
Network Regression with Graph Laplacians. 320:1-320:41 - Nima Hamidi, Mohsen Bayati:
On Low-rank Trace Regression under General Sampling Distribution. 321:1-321:49 - Fengnan Gao, Zongming Ma, Hongsong Yuan:
Community detection in sparse latent space models. 322:1-322:50 - Nhat Ho, Chiao-Yu Yang, Michael I. Jordan:
Convergence Rates for Gaussian Mixtures of Experts. 323:1-323:81 - Yang Liu, Anthony C. Constantinou, Zhigao Guo:
Improving Bayesian Network Structure Learning in the Presence of Measurement Error. 324:1-324:28 - Luigi Carratino, Moustapha Cissé, Rodolphe Jenatton, Jean-Philippe Vert:
On Mixup Regularization. 325:1-325:31 - Rishi Sonthalia, Anna C. Gilbert:
Project and Forget: Solving Large-Scale Metric Constrained Problems. 326:1-326:54 - Mattes Mollenhauer, Stefan Klus, Christof Schütte, Péter Koltai:
Kernel Autocovariance Operators of Stationary Processes: Estimation and Convergence. 327:1-327:34 - Brian Swenson, Ryan Murray, H. Vincent Poor, Soummya Kar:
Distributed Stochastic Gradient Descent: Nonconvexity, Nonsmoothness, and Convergence to Local Minima. 328:1-328:62 - Jonathan Bunton, Paulo Tabuada:
Joint Continuous and Discrete Model Selection via Submodularity. 329:1-329:42 - Xing Fan, Marianna Pensky, Feng Yu, Teng Zhang:
ALMA: Alternating Minimization Algorithm for Clustering Mixture Multilayer Network. 330:1-330:46 - Ulrike Schneider, Patrick Tardivel:
The Geometry of Uniqueness, Sparsity and Clustering in Penalized Estimation. 331:1-331:36 - HaiYing Wang, Jae Kwang Kim:
Maximum sampled conditional likelihood for informative subsampling. 332:1-332:50 - Domagoj Cevid, Loris Michel, Jeffrey Näf, Peter Bühlmann, Nicolai Meinshausen:
Distributional Random Forests: Heterogeneity Adjustment and Multivariate Distributional Regression. 333:1-333:79 - Michael K. Cohen, Marcus Hutter, Neel Nanda:
Fully General Online Imitation Learning. 334:1-334:30 - Jaime Roquero Gimenez, Dominik Rothenhäusler:
Causal Aggregation: Estimation and Inference of Causal Effects by Constraint-Based Data Fusion. 335:1-335:60 - Agniva Chowdhury, Gregory Dexter, Palma London, Haim Avron, Petros Drineas:
Faster Randomized Interior Point Methods for Tall/Wide Linear Programs. 336:1-336:48 - Nicholas Sterge, Bharath K. Sriperumbudur:
Statistical Optimality and Computational Efficiency of Nystrom Kernel PCA. 337:1-337:32 - Marian-Andrei Rizoiu, Alexander Soen, Shidi Li, Pio Calderon, Leanne Dong, Aditya Krishna Menon, Lexing Xie:
Interval-censored Hawkes processes. 338:1-338:84 - Ting Hu, Yunwen Lei:
Early Stopping for Iterative Regularization with General Loss Functions. 339:1-339:36 - Han Zhao, Chen Dan, Bryon Aragam, Tommi S. Jaakkola, Geoffrey J. Gordon, Pradeep Ravikumar:
Fundamental Limits and Tradeoffs in Invariant Representation Learning. 340:1-340:49 - Chihao Zhang, Yiling Elaine Chen, Shihua Zhang, Jingyi Jessica Li:
Information-theoretic Classification Accuracy: A Criterion that Guides Data-driven Combination of Ambiguous Outcome Labels in Multi-class Classification. 341:1-341:65 - Rémi Leluc, François Portier:
SGD with Coordinate Sampling: Theory and Practice. 342:1-342:47 - Shangtong Zhang, Remi Tachet des Combes, Romain Laroche:
Global Optimality and Finite Sample Analysis of Softmax Off-Policy Actor Critic under State Distribution Mismatch. 343:1-343:91 - Luc Brogat-Motte, Alessandro Rudi, Céline Brouard, Juho Rousu, Florence d'Alché-Buc:
Vector-Valued Least-Squares Regression under Output Regularity Assumptions. 344:1-344:50 - Nan Jiang, Maosen Zhang, Willem-Jan van Hoeve, Yexiang Xue:
Constraint Reasoning Embedded Structured Prediction. 345:1-345:40 - Subha Maity, Yuekai Sun, Moulinath Banerjee:
Minimax optimal approaches to the label shift problem in non-parametric settings. 346:1-346:45 - El Mehdi Achour, François Malgouyres, Franck Mamalet:
Existence, Stability and Scalability of Orthogonal Convolutional Neural Networks. 347:1-347:56 - Jian Cao, Joseph Guinness, Marc G. Genton, Matthias Katzfuss:
Scalable Gaussian-process regression and variable selection using Vecchia approximations. 348:1-348:30 - Francesco Ceccon, Jordan Jalving, Joshua Haddad, Alexander Thebelt, Calvin Tsay, Carl D. Laird, Ruth Misener:
OMLT: Optimization & Machine Learning Toolkit. 349:1-349:8 - Yuexi Wang, Tetsuya Kaji, Veronika Rocková:
Approximate Bayesian Computation via Classification. 350:1-350:49 - Imanol Arrieta Ibarra, Paman Gujral, Jonathan Tannen, Mark Tygert, Cherie Xu:
Metrics of Calibration for Probabilistic Predictions. 351:1-351:54