Maxim Raginsky
Person information
- affiliation: University of Illinois at Urbana-Champaign, Coordinated Science Laboratory, Urbana, IL, USA
- affiliation: Duke University, Department of Electrical and Computer Engineering, Durham, NC, USA
- affiliation (PhD 202): Northwestern University, Department of Electrical and Computer Engineering, Evanston, IL, USA
2020 – today
- 2024
- [j35] Shizhuo Dylan Zhang, Curt Tigges, Zory Zhang, Stella Biderman, Maxim Raginsky, Talia Ringer: Transformer-Based Models Are Not Yet Perfect At Learning to Emulate Structural Recursion. Trans. Mach. Learn. Res. 2024 (2024)
- [c75] Joshua Hanson, Maxim Raginsky: Rademacher complexity of neural ODEs via Chen-Fliess series. L4DC 2024: 758-769
- [i64] Shizhuo Dylan Zhang, Curt Tigges, Zory Zhang, Stella Biderman, Maxim Raginsky, Talia Ringer: Transformer-Based Models Are Not Yet Perfect At Learning to Emulate Structural Recursion. CoRR abs/2401.12947 (2024)
- [i63] Joshua Hanson, Maxim Raginsky: Rademacher Complexity of Neural ODEs via Chen-Fliess Series. CoRR abs/2401.16655 (2024)
- [i62] Tanya Veeravalli, Maxim Raginsky: Revisiting Stochastic Realization Theory using Functional Itô Calculus. CoRR abs/2402.10157 (2024)
- 2023
- [j34] Belinda Tzen, Anant Raj, Maxim Raginsky, Francis R. Bach: Variational Principles for Mirror Descent and Mirror Langevin Dynamics. IEEE Control. Syst. Lett. 7: 1542-1547 (2023)
- [j33] Naci Saldi, Tamer Basar, Maxim Raginsky: Partially Observed Discrete-Time Risk-Sensitive Mean Field Games. Dyn. Games Appl. 13(3): 929-960 (2023)
- [c74] Tanya Veeravalli, Maxim Raginsky: A Constructive Approach to Function Realization by Neural Stochastic Differential Equations. CDC 2023: 6364-6369
- [c73] Yifeng Chu, Maxim Raginsky: Majorizing Measures, Codes, and Information. ISIT 2023: 660-665
- [c72] Tanya Veeravalli, Maxim Raginsky: Nonlinear Controllability and Function Representation by Neural Stochastic Differential Equations. L4DC 2023: 838-850
- [c71] Yifeng Chu, Maxim Raginsky: A unified framework for information-theoretic generalization bounds. NeurIPS 2023
- [i61] Belinda Tzen, Anant Raj, Maxim Raginsky, Francis R. Bach: Variational Principles for Mirror Descent and Mirror Langevin Dynamics. CoRR abs/2303.09532 (2023)
- [i60] Yifeng Chu, Maxim Raginsky: A Chain Rule for the Expected Suprema of Bernoulli Processes. CoRR abs/2304.14474 (2023)
- [i59] Yifeng Chu, Maxim Raginsky: Majorizing Measures, Codes, and Information. CoRR abs/2305.02960 (2023)
- [i58] Yifeng Chu, Maxim Raginsky: A unified framework for information-theoretic generalization bounds. CoRR abs/2305.11042 (2023)
- [i57] Shizhuo Dylan Zhang, Curt Tigges, Stella Biderman, Maxim Raginsky, Talia Ringer: Can Transformers Learn to Solve Problems Recursively? CoRR abs/2305.14699 (2023)
- [i56] Tanya Veeravalli, Maxim Raginsky: A Constructive Approach to Function Realization by Neural Stochastic Differential Equations. CoRR abs/2307.00215 (2023)
- [i55] Fredrik Hellström, Giuseppe Durisi, Benjamin Guedj, Maxim Raginsky: Generalization Bounds: Perspectives from Information Theory and PAC-Bayes. CoRR abs/2309.04381 (2023)
- 2022
- [j32] Ali Devran Kara, Maxim Raginsky, Serdar Yüksel: Robustness to incorrect models and data-driven learning in average-cost optimal stochastic control. Autom. 139: 110179 (2022)
- [j31] Aolin Xu, Maxim Raginsky: Minimum Excess Risk in Bayesian Learning. IEEE Trans. Inf. Theory 68(12): 7935-7955 (2022)
- [c70] Joshua Hanson, Maxim Raginsky: Fitting an immersed submanifold to data via Sussmann's orbit theorem. CDC 2022: 5323-5328
- [c69] Alan Yang, Jie Xiong, Maxim Raginsky, Elyse Rosenbaum: Input-to-State Stable Neural Ordinary Differential Equations with Applications to Transient Modeling of Circuits. L4DC 2022: 663-675
- [e1] Po-Ling Loh, Maxim Raginsky: Conference on Learning Theory, 2-5 July 2022, London, UK. Proceedings of Machine Learning Research 178, PMLR 2022 [contents]
- [i54] Alan Yang, Jie Xiong, Maxim Raginsky, Elyse Rosenbaum: Input-to-State Stable Neural Ordinary Differential Equations with Applications to Transient Modeling of Circuits. CoRR abs/2202.06453 (2022)
- [i53] Joshua Hanson, Maxim Raginsky: Fitting an immersed submanifold to data via Sussmann's orbit theorem. CoRR abs/2204.01119 (2022)
- [i52] Tanya Veeravalli, Maxim Raginsky: Nonlinear controllability and function representation by neural stochastic differential equations. CoRR abs/2212.00896 (2022)
- 2021
- [c68] Mehmet A. Donmez, Jeffrey Ludwig, Maxim Raginsky, Andrew C. Singer: EE-Grad: Exploration and Exploitation for Cost-Efficient Mini-Batch SGD. ACSCC 2021: 490-497
- [c67] Todd P. Coleman, Maxim Raginsky: Sampling, variational Bayesian inference, and conditioned stochastic differential equations. CDC 2021: 3054-3059
- [c66] Joshua Hanson, Maxim Raginsky, Eduardo D. Sontag: Learning Recurrent Neural Net Models of Nonlinear Systems. L4DC 2021: 425-435
- [c65] Jie Xiong, Alan Yang, Maxim Raginsky, Elyse Rosenbaum: Neural Networks for Transient Modeling of Circuits: Invited Paper. MLCAD 2021: 1-7
- [c64] Hrayr Harutyunyan, Maxim Raginsky, Greg Ver Steeg, Aram Galstyan: Information-theoretic generalization bounds for black-box learning algorithms. NeurIPS 2021: 24670-24682
- [i51] Hrayr Harutyunyan, Maxim Raginsky, Greg Ver Steeg, Aram Galstyan: Information-theoretic generalization bounds for black-box learning algorithms. CoRR abs/2110.01584 (2021)
- 2020
- [j30] Naci Saldi, Tamer Basar, Maxim Raginsky: Approximate Markov-Nash Equilibria for Discrete-Time Risk-Sensitive Mean-Field Games. Math. Oper. Res. 45(4): 1596-1620 (2020)
- [j29] Naveen Goela, Maxim Raginsky: Channel Polarization Through the Lens of Blackwell Measures. IEEE Trans. Inf. Theory 66(10): 6222-6241 (2020)
- [c63] Joshua Hanson, Maxim Raginsky: Universal Simulation of Stable Dynamical Systems by Recurrent Neural Nets. L4DC 2020: 384-392
- [c62] Alan Yang, AmirEmad Ghassami, Maxim Raginsky, Negar Kiyavash, Elyse Rosenbaum: Model-Augmented Conditional Mutual Information Estimation for Feature Selection. UAI 2020: 1139-1148
- [i50] Belinda Tzen, Maxim Raginsky: A mean-field theory of lazy training in two-layer neural nets: entropic regularization and controlled McKean-Vlasov dynamics. CoRR abs/2002.01987 (2020)
- [i49] Naci Saldi, Tamer Basar, Maxim Raginsky: Partially Observed Discrete-Time Risk-Sensitive Mean Field Games. CoRR abs/2003.11987 (2020)
- [i48] Joshua Hanson, Maxim Raginsky, Eduardo D. Sontag: Learning Recurrent Neural Net Models of Nonlinear Systems. CoRR abs/2011.09573 (2020)
- [i47] Aolin Xu, Maxim Raginsky: Minimum Excess Risk in Bayesian Learning. CoRR abs/2012.14868 (2020)
2010 – 2019
- 2019
- [j28] Naci Saldi, Tamer Basar, Maxim Raginsky: Approximate Nash Equilibria in Partially Observed Stochastic Games with Mean-Field Interactions. Math. Oper. Res. 44(3): 1006-1033 (2019)
- [j27] Jaeho Lee, Maxim Raginsky: Learning Finite-Dimensional Coding Schemes with Nonlinear Reconstruction Maps. SIAM J. Math. Data Sci. 1(3): 617-642 (2019)
- [c61] Naci Saldi, Tamer Basar, Maxim Raginsky: Partially-Observed Discrete-Time Risk-Sensitive Mean-Field Games. CDC 2019: 317-322
- [c60] Noyan C. Sevüktekin, Maxim Raginsky, Andrew C. Singer: Linear Noisy Networks with Stochastic Components. CDC 2019: 5386-5391
- [c59] Ali Devran Kara, Maxim Raginsky, Serdar Yüksel: Robustness to Incorrect Models in Average-Cost Optimal Stochastic Control. CDC 2019: 7970-7975
- [c58] Belinda Tzen, Maxim Raginsky: Theoretical guarantees for sampling and inference in generative models with latent diffusions. COLT 2019: 3084-3114
- [c57] Joshua Hanson, Maxim Raginsky: Universal Approximation of Input-Output Maps by Temporal Convolutional Nets. NeurIPS 2019: 14048-14058
- [i46] Belinda Tzen, Maxim Raginsky: Theoretical guarantees for sampling and inference in generative models with latent diffusions. CoRR abs/1903.01608 (2019)
- [i45] Belinda Tzen, Maxim Raginsky: Neural Stochastic Differential Equations: Deep Latent Gaussian Models in the Diffusion Limit. CoRR abs/1905.09883 (2019)
- [i44] Joshua Hanson, Maxim Raginsky: Universal Approximation of Input-Output Maps by Temporal Convolutional Nets. CoRR abs/1906.09211 (2019)
- [i43] Alan Yang, AmirEmad Ghassami, Maxim Raginsky, Negar Kiyavash, Elyse Rosenbaum: Model-Augmented Nearest-Neighbor Estimation of Conditional Mutual Information for Feature Selection. CoRR abs/1911.04628 (2019)
- 2018
- [j26] Naci Saldi, Tamer Basar, Maxim Raginsky: Markov-Nash Equilibria in Mean-Field Games with Discounted Cost. SIAM J. Control. Optim. 56(6): 4256-4287 (2018)
- [j25] Soomin Lee, Angelia Nedic, Maxim Raginsky: Coordinate Dual Averaging for Decentralized Online Optimization With Nonseparable Global Objectives. IEEE Trans. Control. Netw. Syst. 5(1): 34-44 (2018)
- [j24] Ehsan Shafieepoorfard, Maxim Raginsky: Sequential Empirical Coordination Under an Output Entropy Constraint. IEEE Trans. Inf. Theory 64(10): 6830-6841 (2018)
- [c56] Yanina Shkel, Maxim Raginsky, Sergio Verdú: Sequential prediction with coded side information under logarithmic loss. ALT 2018: 753-769
- [c55] Belinda Tzen, Tengyuan Liang, Maxim Raginsky: Local Optimality and Generalization Guarantees for the Langevin Algorithm via Empirical Metastability. COLT 2018: 857-875
- [c54] Yang Xiu, Samuel Sagan, Advika Battini, Xiao Ma, Maxim Raginsky, Elyse Rosenbaum: Stochastic modeling of air electrostatic discharge parameters. IRPS 2018: 2
- [c53] Yanina Shkel, Maxim Raginsky, Sergio Verdú: Universal Compression, List Decoding, and Logarithmic Loss. ISIT 2018: 206-210
- [c52] Jaeho Lee, Maxim Raginsky: Minimax Statistical Learning with Wasserstein distances. NeurIPS 2018: 2692-2701
- [i42] Belinda Tzen, Tengyuan Liang, Maxim Raginsky: Local Optimality and Generalization Guarantees for the Langevin Algorithm via Empirical Metastability. CoRR abs/1802.06439 (2018)
- [i41] Naci Saldi, Tamer Basar, Maxim Raginsky: Discrete-time Risk-sensitive Mean-field Games. CoRR abs/1808.03929 (2018)
- [i40] Naveen Goela, Maxim Raginsky: Channel Polarization through the Lens of Blackwell Measures. CoRR abs/1809.05073 (2018)
- [i39] Jaeho Lee, Maxim Raginsky: Learning finite-dimensional coding schemes with nonlinear reconstruction maps. CoRR abs/1812.09658 (2018)
- 2017
- [j23] Soomin Lee, Angelia Nedic, Maxim Raginsky: Stochastic Dual Averaging for Decentralized Online Optimization on Time-Varying Communication Graphs. IEEE Trans. Autom. Control. 62(12): 6407-6414 (2017)
- [j22] Aolin Xu, Maxim Raginsky: Information-Theoretic Lower Bounds on Bayes Risk in Decentralized Estimation. IEEE Trans. Inf. Theory 63(3): 1580-1600 (2017)
- [j21] Aolin Xu, Maxim Raginsky: Information-Theoretic Lower Bounds for Distributed Function Computation. IEEE Trans. Inf. Theory 63(4): 2314-2337 (2017)
- [c51] Ehsan Shafieepoorfard, Maxim Raginsky: Rationally inattentive Markov decision processes over a finite horizon. ACSSC 2017: 621-627
- [c50] Naci Saldi, Tamer Basar, Maxim Raginsky: Markov-Nash equilibria in mean-field games with discounted cost. ACC 2017: 3676-3681
- [c49] Maxim Raginsky, Alexander Rakhlin, Matus Telgarsky: Non-convex learning via Stochastic Gradient Langevin Dynamics: a nonasymptotic analysis. COLT 2017: 1674-1703
- [c48] Yanina Shkel, Maxim Raginsky, Sergio Verdú: Universal lossy compression under logarithmic loss. ISIT 2017: 1157-1161
- [c47] Aolin Xu, Maxim Raginsky: Information-theoretic analysis of generalization capability of learning algorithms. NIPS 2017: 2524-2533
- [i38] Maxim Raginsky, Alexander Rakhlin, Matus Telgarsky: Non-convex learning via Stochastic Gradient Langevin Dynamics: a nonasymptotic analysis. CoRR abs/1702.03849 (2017)
- [i37] Naci Saldi, Tamer Basar, Maxim Raginsky: Approximate Nash Equilibria in Partially Observed Stochastic Games with Mean-Field Interactions. CoRR abs/1705.02036 (2017)
- [i36] Mehmet A. Donmez, Maxim Raginsky, Andrew C. Singer: EE-Grad: Exploration and Exploitation for Cost-Efficient Mini-Batch SGD. CoRR abs/1705.07070 (2017)
- [i35] Mehmet A. Donmez, Maxim Raginsky, Andrew C. Singer, Lav R. Varshney: Cost-Performance Tradeoffs in Fusing Unreliable Computational Units. CoRR abs/1705.07779 (2017)
- [i34] Aolin Xu, Maxim Raginsky: Information-theoretic analysis of generalization capability of learning algorithms. CoRR abs/1705.07809 (2017)
- [i33] Jaeho Lee, Maxim Raginsky: Minimax Statistical Learning and Domain Adaptation with Wasserstein Distances. CoRR abs/1705.07815 (2017)
- [i32] Ehsan Shafieepoorfard, Maxim Raginsky: Sequential Empirical Coordination Under an Output Entropy Constraint. CoRR abs/1710.10255 (2017)
- 2016
- [j20] Maxim Raginsky, Angelia Nedic: Online Discrete Optimization in Social Networks in the Presence of Knightian Uncertainty. Oper. Res. 64(3): 662-679 (2016)
- [j19] Mehmet A. Donmez, Maxim Raginsky, Andrew C. Singer: Online Optimization Under Adversarial Perturbations. IEEE J. Sel. Top. Signal Process. 10(2): 256-269 (2016)
- [j18] Ehsan Shafieepoorfard, Maxim Raginsky, Sean P. Meyn: Rationally Inattentive Control of Markov Processes. SIAM J. Control. Optim. 54(2): 987-1016 (2016)
- [j17] Maxim Raginsky: Strong Data Processing Inequalities and Φ-Sobolev Inequalities for Discrete Channels. IEEE Trans. Inf. Theory 62(6): 3355-3389 (2016)
- [c46] Mehmet A. Donmez, Maxim Raginsky, Andrew C. Singer, Lav R. Varshney: Cost-performance tradeoffs in unreliable computation architectures. ACSSC 2016: 215-219
- [c45] Peng Guan, Maxim Raginsky, Rebecca Willett, Daphney-Stavroula Zois: Regret minimization algorithms for single-controller zero-sum stochastic games. CDC 2016: 7075-7080
- [c44] Ehsan Shafieepoorfard, Maxim Raginsky: Sequential empirical coordination under an output entropy constraint. CDC 2016: 7347-7352
- [c43] Maxim Raginsky: Channel polarization and Blackwell measures. ISIT 2016: 56-60
- [c42] Maxim Raginsky, Alexander Rakhlin, Matthew Tsao, Yihong Wu, Aolin Xu: Information-theoretic analysis of stability and bias of learning algorithms. ITW 2016: 26-30
- [c41] Daphney-Stavroula Zois, Maxim Raginsky: Active object detection on graphs via locally informative trees. MLSP 2016: 1-6
- [i31] Aryeh Kontorovich, Maxim Raginsky: Concentration of measure without independence: a unified approach via the martingale method. CoRR abs/1602.00721 (2016)
- [i30] Aolin Xu, Maxim Raginsky: Information-Theoretic Lower Bounds on Bayes Risk in Decentralized Estimation. CoRR abs/1607.00550 (2016)
- [i29] Naci Saldi, Tamer Basar, Maxim Raginsky: Markov-Nash Equilibria in Mean-Field Games with Discounted Cost. CoRR abs/1612.07878 (2016)
- 2015
- [j16] Richard S. Laugesen, Prashant G. Mehta, Sean P. Meyn, Maxim Raginsky: Poisson's Equation in Nonlinear Filtering. SIAM J. Control. Optim. 53(1): 501-525 (2015)
- [c40] Angelia Nedic, Soomin Lee, Maxim Raginsky: Decentralized online optimization with global objectives and local communication. ACC 2015: 4497-4503
- [c39] Aolin Xu, Maxim Raginsky: Converses for distributed estimation via strong data processing inequalities. ISIT 2015: 2376-2380
- [c38] Jaeho Lee, Maxim Raginsky, Pierre Moulin: On MMSE estimation from quantized observations in the nonasymptotic regime. ISIT 2015: 2924-2928
- [i28] Ehsan Shafieepoorfard, Maxim Raginsky, Sean P. Meyn: Rationally inattentive control of Markov processes. CoRR abs/1502.03762 (2015)
- [i27] Aolin Xu, Maxim Raginsky: Converses for distributed estimation via strong data processing inequalities. CoRR abs/1504.06028 (2015)
- [i26] Jaeho Lee, Maxim Raginsky, Pierre Moulin: On MMSE estimation from quantized observations in the nonasymptotic regime. CoRR abs/1504.06029 (2015)
- [i25] Soomin Lee, Angelia Nedic, Maxim Raginsky: Decentralized Online Optimization with Global Objectives and Local Communication. CoRR abs/1508.07933 (2015)
- [i24] Aolin Xu, Maxim Raginsky: Information-theoretic lower bounds for distributed function computation. CoRR abs/1509.00514 (2015)
- [i23] Maxim Raginsky, Igal Sason: Concentration of Measure Inequalities and Their Communication and Information-Theoretic Applications. CoRR abs/1510.02947 (2015)
- 2014
- [j15] Peng Guan, Maxim Raginsky, Rebecca M. Willett: Online Markov Decision Processes With Kullback-Leibler Control Cost. IEEE Trans. Autom. Control. 59(6): 1423-1438 (2014)
- [c37] Peng Guan, Maxim Raginsky, Rebecca Willett: From minimax value to low-regret algorithms for online Markov decision processes. ACC 2014: 471-476
- [c36] Maxim Raginsky, Angelia Nedic: Online discrete optimization in social networks. ACC 2014: 3796-3801
- [c35] Richard S. Laugesen, Prashant G. Mehta, Sean P. Meyn, Maxim Raginsky: Poisson's equation in nonlinear filtering. CDC 2014: 4185-4190
- [c34] Aolin Xu, Maxim Raginsky: A new information-theoretic lower bound for distributed function computation. ISIT 2014: 2227-2231
- [i22] Peng Guan, Maxim Raginsky, Rebecca Willett: Online Markov decision processes with Kullback-Leibler control cost. CoRR abs/1401.3198 (2014)
- [i21] Maxim Raginsky: Strong data processing inequalities and Φ-Sobolev inequalities for discrete channels. CoRR abs/1411.3575 (2014)
- 2013
- [j14] Maxim Raginsky, Igal Sason: Concentration of Measure Inequalities in Information Theory, Communications, and Coding. Found. Trends Commun. Inf. Theory 10(1-2): 1-246 (2013)
- [j13] Maxim Raginsky: Empirical Processes, Typical Sequences, and Coordinated Actions in Standard Borel Spaces. IEEE Trans. Inf. Theory 59(3): 1288-1301 (2013)
- [c33] Ehsan Shafieepoorfard, Maxim Raginsky, Sean P. Meyn: Rational inattention in controlled Markov processes. ACC 2013: 6790-6797
- [c32] Ehsan Shafieepoorfard, Maxim Raginsky: Rational inattention in scalar LQG control. CDC 2013: 5733-5739
- [c31] Maxim Raginsky, Igal Sason: Refined bounds on the empirical distribution of good channel codes via concentration inequalities. ISIT 2013: 221-225
- [c30] Maxim Raginsky: Logarithmic Sobolev inequalities and strong data processing theorems for discrete channels. ISIT 2013: 419-423
- [c29] Maxim Raginsky: Learning joint quantizers for reconstruction and prediction. ITW 2013: 1-5
- [i20] Maxim Raginsky, Angelia Nedic: Online discrete optimization in social networks in the presence of Knightian uncertainty. CoRR abs/1307.0473 (2013)
- [i19] Peng Guan, Maxim Raginsky, Rebecca Willett: Relax but stay in control: from value to algorithms for online Markov decision processes. CoRR abs/1310.7300 (2013)
- 2012
- [j12] Kalyani Krishnamurthy, Rebecca Willett, Maxim Raginsky: Target detection performance bounds in compressive imaging. EURASIP J. Adv. Signal Process. 2012: 205 (2012)
- [j11] Maxim Raginsky, Rebecca M. Willett, Corinne Horn, Jorge G. Silva, Roummel F. Marcia: Sequential Anomaly Detection in the Presence of Noise and Limited Feedback. IEEE Trans. Inf. Theory 58(8): 5544-5562 (2012)
- [c28] Peng Guan, Maxim Raginsky, Rebecca Willett: Online Markov decision processes with Kullback-Leibler control cost. ACC 2012: 1388-1393
- [c27] Maxim Raginsky, Jake V. Bouvrie: Continuous-time stochastic Mirror Descent on a network: Variance reduction, consensus, convergence. CDC 2012: 6793-6800
- [i18] Maxim Raginsky, Igal Sason: Concentration of Measure Inequalities in Information Theory, Communications and Coding. CoRR abs/1212.4663 (2012)
- 2011
- [j10] Maxim Raginsky, Alexander Rakhlin: Information-Based Complexity, Feedback and Dynamics in Convex Programming. IEEE Trans. Inf. Theory 57(10): 7036-7056 (2011)
- [j9] Maxim Raginsky, Sina Jafarpour, Zachary T. Harmany, Roummel F. Marcia, Rebecca M. Willett, A. Robert Calderbank: Performance Bounds for Expander-Based Compressed Sensing in Poisson Noise. IEEE Trans. Signal Process. 59(9): 4139-4153 (2011)
- [c26] Maxim Raginsky: Directed information and Pearl's causal calculus. Allerton 2011: 958-965
- [c25] Maxim Raginsky, Nooshin Kiarashi, Rebecca Willett: Decentralized Online Convex Programming with local information. ACC 2011: 5363-5369
- [c24] Maxim Raginsky: Shannon meets Blackwell and Le Cam: Channels, codes, and statistical experiments. ISIT 2011: 1220-1224
- [c23] Maxim Raginsky, Alexander Rakhlin: Lower Bounds for Passive and Active Learning. NIPS 2011: 1026-1034
- [i17] Maxim Raginsky: Directed information and Pearl's causal calculus. CoRR abs/1110.0718 (2011)
- 2010
- [j8] Kalyani Krishnamurthy, Maxim Raginsky, Rebecca Willett: Multiscale Photon-Limited Spectral Image Reconstruction. SIAM J. Imaging Sci. 3(3): 619-645 (2010)
- [j7] Maxim Raginsky, Rebecca Willett, Zachary T. Harmany, Roummel F. Marcia: Compressed sensing performance bounds under Poisson noise. IEEE Trans. Signal Process. 58(8): 3990-4002 (2010)
- [c22] Maxim Raginsky: Divergence-based characterization of fundamental limitations of adaptive dynamical systems. Allerton 2010: 107-114
- [c21] Maxim Raginsky, Alexander Rakhlin, Serdar Yüksel: Online Convex Programming and regularization in adaptive control. CDC 2010: 1957-1962
- [c20] Maxim Raginsky, Sina Jafarpour, Rebecca Willett, A. Robert Calderbank: Fishing in Poisson streams: Focusing on the whales, ignoring the minnows. CISS 2010: 1-6
- [c19] Kalyani Krishnamurthy, Maxim Raginsky, Rebecca Willett: Hyperspectral target detection from incoherent projections. ICASSP 2010: 3550-3553
- [c18] Kalyani Krishnamurthy, Maxim Raginsky, Rebecca Willett: Hyperspectral target detection from incoherent projections: Nonequiprobable targets and inhomogeneous SNR. ICIP 2010: 1357-1360
- [c17] Todd P. Coleman, Maxim Raginsky: Mutual information saddle points in channels of exponential family type. ISIT 2010: 1355-1359
- [c16] Maxim Raginsky: Empirical processes and typical sequences. ISIT 2010: 1458-1462
- [i16] Maxim Raginsky, Sina Jafarpour, Rebecca Willett, A. Robert Calderbank: Fishing in Poisson streams: focusing on the whales, ignoring the minnows. CoRR abs/1003.2836 (2010)
- [i15] Maxim Raginsky, Sina Jafarpour, Zachary T. Harmany, Roummel F. Marcia, Rebecca Willett, A. Robert Calderbank: Performance bounds for expander-based compressed sensing in Poisson noise. CoRR abs/1007.2377 (2010)
- [i14] Maxim Raginsky: Empirical processes, typical sequences and coordinated actions in standard Borel spaces. CoRR abs/1009.0282 (2010)
- [i13] Maxim Raginsky, Alexander Rakhlin: Information-based complexity, feedback and dynamics in sequential convex programming. CoRR abs/1010.2285 (2010)
- [i12] Maxim Raginsky: Divergence-based characterization of fundamental limitations of adaptive dynamical systems. CoRR abs/1010.2286 (2010)
2000 – 2009
- 2009
- [j6] Svetlana Lazebnik, Maxim Raginsky: Supervised Learning of Quantizer Codebooks by Information Loss Minimization. IEEE Trans. Pattern Anal. Mach. Intell. 31(7): 1294-1309 (2009)
- [j5] Maxim Raginsky: Joint universal lossy coding and identification of stationary mixing sources with general alphabets. IEEE Trans. Inf. Theory 55(5): 1945-1960 (2009)
- [j4] Avon L. Fernandes, Maxim Raginsky, Todd P. Coleman: A low-complexity universal scheme for rate-constrained distributed regression using a wireless sensor network. IEEE Trans. Signal Process. 57(5): 1731-1744 (2009)
- [c15] Maxim Raginsky, Alexander Rakhlin: Information complexity of black-box convex optimization: A new look via feedback information theory. Allerton 2009: 803-510
- [c14] Svetlana Lazebnik, Maxim Raginsky: An empirical Bayes approach to contextual region classification. CVPR 2009: 2380-2387
- [c13] Rebecca M. Willett, Maxim Raginsky: Performance bounds on compressed sensing with Poisson noise. ISIT 2009: 174-178
- [c12] Maxim Raginsky: Achievability results for statistical learning under communication constraints. ISIT 2009: 1328-1332
- [c11] Maxim Raginsky, Roummel F. Marcia, Jorge G. Silva, Rebecca M. Willett: Sequential probability assignment via online convex programming using exponential families. ISIT 2009: 1338-1342
- [c10] Maxim Raginsky, Svetlana Lazebnik: Locality-sensitive binary codes from shift-invariant kernels. NIPS 2009: 1509-1517
- [i11] Rebecca Willett, Maxim Raginsky: Minimax risk for Poisson compressed sensing. CoRR abs/0901.1900 (2009)
- [i10] Maxim Raginsky: Joint universal lossy coding and identification of stationary mixing sources with general alphabets. CoRR abs/0901.1904 (2009)
- [i9] Maxim Raginsky: Achievability results for statistical learning under communication constraints. CoRR abs/0901.1905 (2009)
- [i8] Maxim Raginsky, Zachary T. Harmany, Roummel F. Marcia, Rebecca Willett: Compressed sensing performance bounds under Poisson noise. CoRR abs/0910.5146 (2009)
- [i7] Sina Jafarpour, Rebecca Willett, Maxim Raginsky, A. Robert Calderbank: Performance Bounds for Expander-based Compressed Sensing in the presence of Poisson Noise. CoRR abs/0911.1368 (2009)
- [i6] Maxim Raginsky, Roummel F. Marcia, Jorge G. Silva, Rebecca Willett: Sequential anomaly detection in the presence of noise and limited feedback. CoRR abs/0911.2904 (2009)
- 2008
- [j3] Maxim Raginsky, Thomas J. Anastasio: Cooperation in self-organizing map networks enhances information transmission in the presence of input background activity. Biol. Cybern. 98(3): 195-211 (2008)
- [j2] Maxim Raginsky: Joint Fixed-Rate Universal Lossy Coding and Identification of Continuous-Alphabet Memoryless Sources. IEEE Trans. Inf. Theory 54(7): 3059-3077 (2008)
- [c9] Maxim Raginsky: On the information capacity of Gaussian channels under small peak power constraints. Allerton 2008: 286-293
- [c8] Avon L. Fernandes, Maxim Raginsky, Todd P. Coleman: A low-complexity universal scheme for rate-constrained distributed regression using a wireless sensor network. ICASSP 2008: 2269-2272
- [c7] Maxim Raginsky: Universal Wyner-Ziv coding of discrete memoryless sources with known side information statistics. ISIT 2008: 2167-2171
- [c6] Maxim Raginsky, Svetlana Lazebnik, Rebecca Willett, Jorge G. Silva: Near-minimax recursive density estimation on the binary hypercube. NIPS 2008: 1305-1312
- 2007
- [c5] Maxim Raginsky: Joint Universal Lossy Coding and Identification of Stationary Mixing Sources. ISIT 2007: 1961-1965
- [c4] Svetlana Lazebnik, Maxim Raginsky: Learning Nearest-Neighbor Quantizers from Labeled Data by Information Loss Minimization. AISTATS 2007: 251-258
- [i5] Maxim Raginsky: Learning from compressed observations. CoRR abs/0704.0671 (2007)
- [i4] Maxim Raginsky: Joint universal lossy coding and identification of stationary mixing sources. CoRR abs/0704.2644 (2007)
- 2006
- [c3] Maxim Raginsky: Joint Universal Lossy Coding and Identification of I.I.D. Vector Sources. ISIT 2006: 577-581
- [i3] Maxim Raginsky: Joint universal lossy coding and identification of i.i.d. vector sources. CoRR abs/cs/0601074 (2006)
- 2005
- [c2] Maxim Raginsky: A complexity-regularized quantization approach to nonlinear dimensionality reduction. ISIT 2005: 352-356
- [c1] Maxim Raginsky, Svetlana Lazebnik: Estimation of Intrinsic Dimensionality Using High-Rate Vector Quantization. NIPS 2005: 1105-1112
- [i2] Maxim Raginsky: A complexity-regularized quantization approach to nonlinear dimensionality reduction. CoRR abs/cs/0501091 (2005)
- [i1] Maxim Raginsky: Joint fixed-rate universal lossy coding and identification of continuous-alphabet memoryless sources. CoRR abs/cs/0512015 (2005)
- 2003
- [j1] Maxim Raginsky: Scaling and Renormalization in Fault-Tolerant Quantum Computers. Quantum Inf. Process. 2(3): 249-258 (2003)
last updated on 2025-01-21 00:17 CET by the dblp team
all metadata released as open data under CC0 1.0 license