Daniel Soudry
2020 – today
- 2024
- [c48] Yaniv Blumenfeld, Itay Hubara, Daniel Soudry: Towards Cheaper Inference in Deep Networks with Lower Bit-Width Accumulators. ICLR 2024
- [c47] Daniel Goldfarb, Itay Evron, Nir Weinberger, Daniel Soudry, Paul Hand: The Joint Effect of Task Similarity and Overparameterization on Catastrophic Forgetting - An Analytical Model. ICLR 2024
- [c46] Gon Buzaglo, Itamar Harel, Mor Shpigel Nacson, Alon Brutzkus, Nathan Srebro, Daniel Soudry: How Uniform Random Weights Induce Non-uniform Bias: Typical Interpolating Neural Networks Generalize with Narrow Teachers. ICML 2024
- [i55] Daniel Goldfarb, Itay Evron, Nir Weinberger, Daniel Soudry, Paul Hand: The Joint Effect of Task Similarity and Overparameterization on Catastrophic Forgetting - An Analytical Model. CoRR abs/2401.12617 (2024)
- [i54] Yaniv Blumenfeld, Itay Hubara, Daniel Soudry: Towards Cheaper Inference in Deep Networks with Lower Bit-Width Accumulators. CoRR abs/2401.14110 (2024)
- [i53] Gon Buzaglo, Itamar Harel, Mor Shpigel Nacson, Alon Brutzkus, Nathan Srebro, Daniel Soudry: How Uniform Random Weights Induce Non-uniform Bias: Typical Interpolating Neural Networks Generalize with Narrow Teachers. CoRR abs/2402.06323 (2024)
- [i52] Dan Qiao, Kaiqi Zhang, Esha Singh, Daniel Soudry, Yu-Xiang Wang: Stable Minima Cannot Overfit in Univariate ReLU Networks: Generalization by Large Step Sizes. CoRR abs/2406.06838 (2024)
- [i51] Maxim Fishman, Brian Chmiel, Ron Banner, Daniel Soudry: Scaling FP8 training to trillion-token LLMs. CoRR abs/2409.12517 (2024)
- [i50] Edan Kinderman, Itay Hubara, Haggai Maron, Daniel Soudry: Foldable SuperNets: Scalable Merging of Transformers with Different Initializations and Tasks. CoRR abs/2410.01483 (2024)
- [i49] Itamar Harel, William M. Hoza, Gal Vardi, Itay Evron, Nathan Srebro, Daniel Soudry: Provable Tempered Overfitting of Minimal Nets and Typical Nets. CoRR abs/2410.19092 (2024)
- [i48] Hrithik Ravi, Clayton Scott, Daniel Soudry, Yutong Wang: The Implicit Bias of Gradient Descent on Separable Multiclass Data. CoRR abs/2411.01350 (2024)
- 2023
- [c45] Itay Evron, Ophir Onn, Tamar Weiss Orzech, Hai Azeroual, Daniel Soudry: The Role of Codeword-to-Class Assignments in Error-Correcting Codes: An Empirical Study. AISTATS 2023: 8053-8077
- [c44] Hagay Michaeli, Tomer Michaeli, Daniel Soudry: Alias-Free Convnets: Fractional Shift Invariance via Polynomial Activations. CVPR 2023: 16333-16342
- [c43] Brian Chmiel, Ron Banner, Elad Hoffer, Hilla Ben-Yaacov, Daniel Soudry: Accurate Neural Training with 4-bit Matrix Multiplications at Standard Formats. ICLR 2023
- [c42] Brian Chmiel, Itay Hubara, Ron Banner, Daniel Soudry: Minimum Variance Unbiased N:M Sparsity for the Neural Gradients. ICLR 2023
- [c41] Mor Shpigel Nacson, Rotem Mulayoff, Greg Ongie, Tomer Michaeli, Daniel Soudry: The Implicit Bias of Minima Stability in Multivariate Shallow ReLU Networks. ICLR 2023
- [c40] Itay Evron, Edward Moroshko, Gon Buzaglo, Maroun Khriesh, Badea Marjieh, Nathan Srebro, Daniel Soudry: Continual Learning in Linear Classification on Separable Data. ICML 2023: 9440-9484
- [c39] Itai Kreisler, Mor Shpigel Nacson, Daniel Soudry, Yair Carmon: Gradient Descent Monotonically Decreases the Sharpness of Gradient Flow Solutions in Scalar Networks and Beyond. ICML 2023: 17684-17744
- [c38] Niv Giladi, Shahar Gottlieb, Moran Shkolnik, Asaf Karnieli, Ron Banner, Elad Hoffer, Kfir Y. Levy, Daniel Soudry: DropCompute: simple and more robust distributed synchronous training via compute variance reduction. NeurIPS 2023
- [c37] Chen Zeno, Greg Ongie, Yaniv Blumenfeld, Nir Weinberger, Daniel Soudry: How do Minimum-Norm Shallow Denoisers Look in Function Space? NeurIPS 2023
- [c36] Ev Zisselman, Itai Lavie, Daniel Soudry, Aviv Tamar: Explore to Generalize in Zero-Shot RL. NeurIPS 2023
- [i47] Itay Evron, Ophir Onn, Tamar Weiss Orzech, Hai Azeroual, Daniel Soudry: The Role of Codeword-to-Class Assignments in Error-Correcting Codes: An Empirical Study. CoRR abs/2302.05334 (2023)
- [i46] Hagay Michaeli, Tomer Michaeli, Daniel Soudry: Alias-Free Convnets: Fractional Shift Invariance via Polynomial Activations. CoRR abs/2303.08085 (2023)
- [i45] Itai Kreisler, Mor Shpigel Nacson, Daniel Soudry, Yair Carmon: Gradient Descent Monotonically Decreases the Sharpness of Gradient Flow Solutions in Scalar Networks and Beyond. CoRR abs/2305.13064 (2023)
- [i44] Ev Zisselman, Itai Lavie, Daniel Soudry, Aviv Tamar: Explore to Generalize in Zero-Shot RL. CoRR abs/2306.03072 (2023)
- [i43] Itay Evron, Edward Moroshko, Gon Buzaglo, Maroun Khriesh, Badea Marjieh, Nathan Srebro, Daniel Soudry: Continual Learning in Linear Classification on Separable Data. CoRR abs/2306.03534 (2023)
- [i42] Niv Giladi, Shahar Gottlieb, Moran Shkolnik, Asaf Karnieli, Ron Banner, Elad Hoffer, Kfir Yehuda Levy, Daniel Soudry: DropCompute: simple and more robust distributed synchronous training via compute variance reduction. CoRR abs/2306.10598 (2023)
- [i41] Mor Shpigel Nacson, Rotem Mulayoff, Greg Ongie, Tomer Michaeli, Daniel Soudry: The Implicit Bias of Minima Stability in Multivariate Shallow ReLU Networks. CoRR abs/2306.17499 (2023)
- [i40] Chen Zeno, Greg Ongie, Yaniv Blumenfeld, Nir Weinberger, Daniel Soudry: How do Minimum-Norm Shallow Denoisers Look in Function Space? CoRR abs/2311.06748 (2023)
- 2022
- [c35] Aviv Tamar, Daniel Soudry, Ev Zisselman: Regularization Guarantees Generalization in Bayesian Reinforcement Learning through Algorithmic Stability. AAAI 2022: 8423-8431
- [c34] Itay Evron, Edward Moroshko, Rachel A. Ward, Nathan Srebro, Daniel Soudry: How catastrophic can catastrophic forgetting be in linear regression? COLT 2022: 4028-4079
- [c33] Matan Haroush, Tzviel Frostig, Ruth Heller, Daniel Soudry: A Statistical Framework for Efficient Out of Distribution Detection in Deep Neural Networks. ICLR 2022
- [c32] Mor Shpigel Nacson, Kavya Ravichandran, Nathan Srebro, Daniel Soudry: Implicit Bias of the Step Size in Linear Diagonal Neural Networks. ICML 2022: 16270-16295
- [i39] Brian Chmiel, Itay Hubara, Ron Banner, Daniel Soudry: Optimal Fine-Grained N:M Sparsity for Activations and Neural Gradients. CoRR abs/2203.10991 (2022)
- [i38] Itay Evron, Edward Moroshko, Rachel A. Ward, Nati Srebro, Daniel Soudry: How catastrophic can catastrophic forgetting be in linear regression? CoRR abs/2205.09588 (2022)
- 2021
- [j13] Chen Zeno, Itay Golan, Elad Hoffer, Daniel Soudry: Task-Agnostic Continual Learning Using Online Variational Bayes With Fixed-Point Updates. Neural Comput. 33(11): 3139-3177 (2021)
- [c31] Brian Chmiel, Liad Ben-Uri, Moran Shkolnik, Elad Hoffer, Ron Banner, Daniel Soudry: Neural gradients are near-lognormal: improved quantized and sparse training. ICLR 2021
- [c30] Shahar Azulay, Edward Moroshko, Mor Shpigel Nacson, Blake E. Woodworth, Nathan Srebro, Amir Globerson, Daniel Soudry: On the Implicit Bias of Initialization Shape: Beyond Infinitesimal Mirror Descent. ICML 2021: 468-477
- [c29] Itay Hubara, Yury Nahshan, Yair Hanani, Ron Banner, Daniel Soudry: Accurate Post Training Quantization With Small Calibration Sets. ICML 2021: 4466-4475
- [c28] Niv Giladi, Zvika Ben-Haim, Sella Nevo, Yossi Matias, Daniel Soudry: Physics-Aware Downsampling with Deep Learning for Scalable Flood Modeling. NeurIPS 2021: 1378-1389
- [c27] Rotem Mulayoff, Tomer Michaeli, Daniel Soudry: The Implicit Bias of Minima Stability: A View from Function Space. NeurIPS 2021: 17749-17761
- [c26] Itay Hubara, Brian Chmiel, Moshe Island, Ron Banner, Joseph Naor, Daniel Soudry: Accelerated Sparse Neural Training: A Provable and Efficient Method to Find N:M Transposable Masks. NeurIPS 2021: 21099-21111
- [i37] Itay Hubara, Brian Chmiel, Moshe Island, Ron Banner, Seffi Naor, Daniel Soudry: Accelerated Sparse Neural Training: A Provable and Efficient Method to Find N:M Transposable Masks. CoRR abs/2102.08124 (2021)
- [i36] Shahar Azulay, Edward Moroshko, Mor Shpigel Nacson, Blake E. Woodworth, Nathan Srebro, Amir Globerson, Daniel Soudry: On the Implicit Bias of Initialization Shape: Beyond Infinitesimal Mirror Descent. CoRR abs/2102.09769 (2021)
- [i35] Matan Haroush, Tzviel Frostig, Ruth Heller, Daniel Soudry: Statistical Testing for Efficient Out of Distribution Detection in Deep Neural Networks. CoRR abs/2102.12967 (2021)
- [i34] Niv Giladi, Zvika Ben-Haim, Sella Nevo, Yossi Matias, Daniel Soudry: Physics-Aware Downsampling with Deep Learning for Scalable Flood Modeling. CoRR abs/2106.07218 (2021)
- [i33] Aviv Tamar, Daniel Soudry, Ev Zisselman: Regularization Guarantees Generalization in Bayesian Reinforcement Learning through Algorithmic Stability. CoRR abs/2109.11792 (2021)
- [i32] Brian Chmiel, Ron Banner, Elad Hoffer, Hilla Ben-Yaacov, Daniel Soudry: Logarithmic Unbiased Quantization: Practical 4-bit Training in Deep Learning. CoRR abs/2112.10769 (2021)
- 2020
- [j12] Zhihui Zhu, Daniel Soudry, Yonina C. Eldar, Michael B. Wakin: The Global Optimization Geometry of Shallow Linear Neural Networks. J. Math. Imaging Vis. 62(3): 279-292 (2020)
- [c25] Blake E. Woodworth, Suriya Gunasekar, Jason D. Lee, Edward Moroshko, Pedro Savarese, Itay Golan, Daniel Soudry, Nathan Srebro: Kernel and Rich Regimes in Overparametrized Models. COLT 2020: 3635-3673
- [c24] Elad Hoffer, Tal Ben-Nun, Itay Hubara, Niv Giladi, Torsten Hoefler, Daniel Soudry: Augment Your Batch: Improving Generalization Through Instance Repetition. CVPR 2020: 8126-8135
- [c23] Matan Haroush, Itay Hubara, Elad Hoffer, Daniel Soudry: The Knowledge Within: Methods for Data-Free Model Compression. CVPR 2020: 8491-8499
- [c22] Niv Giladi, Mor Shpigel Nacson, Elad Hoffer, Daniel Soudry: At Stability's Edge: How to Adjust Hyperparameters to Preserve Minima Selection in Asynchronous Training of Neural Networks? ICLR 2020
- [c21] Greg Ongie, Rebecca Willett, Daniel Soudry, Nathan Srebro: A Function Space View of Bounded Norm Infinite Width ReLU Nets: The Multivariate Case. ICLR 2020
- [c20] Yaniv Blumenfeld, Dar Gilboa, Daniel Soudry: Beyond Signal Propagation: Is Feature Diversity Necessary in Deep Neural Network Initialization? ICML 2020: 960-969
- [c19] Edward Moroshko, Blake E. Woodworth, Suriya Gunasekar, Jason D. Lee, Nati Srebro, Daniel Soudry: Implicit Bias in Deep Linear Classification: Initialization Scale vs Training Accuracy. NeurIPS 2020
- [i31] Blake E. Woodworth, Suriya Gunasekar, Jason D. Lee, Edward Moroshko, Pedro Savarese, Itay Golan, Daniel Soudry, Nathan Srebro: Kernel and Rich Regimes in Overparametrized Models. CoRR abs/2002.09277 (2020)
- [i30] Brian Chmiel, Liad Ben-Uri, Moran Shkolnik, Elad Hoffer, Ron Banner, Daniel Soudry: Neural gradients are lognormally distributed: understanding sparse and quantized training. CoRR abs/2006.08173 (2020)
- [i29] Itay Hubara, Yury Nahshan, Yair Hanani, Ron Banner, Daniel Soudry: Improving Post Training Neural Quantization: Layer-wise Calibration and Integer Programming. CoRR abs/2006.10518 (2020)
- [i28] Yaniv Blumenfeld, Dar Gilboa, Daniel Soudry: Beyond Signal Propagation: Is Feature Diversity Necessary in Deep Neural Network Initialization? CoRR abs/2007.01038 (2020)
- [i27] Edward Moroshko, Suriya Gunasekar, Blake E. Woodworth, Jason D. Lee, Nathan Srebro, Daniel Soudry: Implicit Bias in Deep Linear Classification: Initialization Scale vs Training Accuracy. CoRR abs/2007.06738 (2020)
- [i26] Chen Zeno, Itay Golan, Elad Hoffer, Daniel Soudry: Task Agnostic Continual Learning Using Online Variational Bayes with Fixed-Point Updates. CoRR abs/2010.00373 (2020)
2010 – 2019
- 2019
- [c18] Mor Shpigel Nacson, Nathan Srebro, Daniel Soudry: Stochastic Gradient Descent on Separable Data: Exact Convergence with a Fixed Learning Rate. AISTATS 2019: 3051-3059
- [c17] Mor Shpigel Nacson, Jason D. Lee, Suriya Gunasekar, Pedro Henrique Pamplona Savarese, Nathan Srebro, Daniel Soudry: Convergence of Gradient Descent on Separable Data. AISTATS 2019: 3420-3428
- [c16] Pedro Savarese, Itay Evron, Daniel Soudry, Nathan Srebro: How do infinite width bounded norm networks look in function space? COLT 2019: 2667-2690
- [c15] Mor Shpigel Nacson, Suriya Gunasekar, Jason D. Lee, Nathan Srebro, Daniel Soudry: Lexicographic and Depth-Sensitive Margins in Homogeneous and Non-Homogeneous Deep Models. ICML 2019: 4683-4692
- [c14] Yaniv Blumenfeld, Dar Gilboa, Daniel Soudry: A Mean Field Theory of Quantized Deep Networks: The Quantization-Depth Trade-Off. NeurIPS 2019: 7036-7046
- [c13] Ron Banner, Yury Nahshan, Daniel Soudry: Post training 4-bit quantization of convolutional networks for rapid-deployment. NeurIPS 2019: 7948-7956
- [i25] Elad Hoffer, Tal Ben-Nun, Itay Hubara, Niv Giladi, Torsten Hoefler, Daniel Soudry: Augment your batch: better training with larger batches. CoRR abs/1901.09335 (2019)
- [i24] Pedro Savarese, Itay Evron, Daniel Soudry, Nathan Srebro: How do infinite width bounded norm networks look in function space? CoRR abs/1902.05040 (2019)
- [i23] Mor Shpigel Nacson, Suriya Gunasekar, Jason D. Lee, Nathan Srebro, Daniel Soudry: Lexicographic and Depth-Sensitive Margins in Homogeneous and Non-Homogeneous Deep Models. CoRR abs/1905.07325 (2019)
- [i22] Yaniv Blumenfeld, Dar Gilboa, Daniel Soudry: A Mean Field Theory of Quantized Deep Networks: The Quantization-Depth Trade-Off. CoRR abs/1906.00771 (2019)
- [i21] Elad Hoffer, Berry Weinstein, Itay Hubara, Tal Ben-Nun, Torsten Hoefler, Daniel Soudry: Mix & Match: training convnets with mixed image sizes for improved accuracy, speed and scale resiliency. CoRR abs/1908.08986 (2019)
- [i20] Niv Giladi, Mor Shpigel Nacson, Elad Hoffer, Daniel Soudry: At Stability's Edge: How to Adjust Hyperparameters to Preserve Minima Selection in Asynchronous Training of Neural Networks? CoRR abs/1909.12340 (2019)
- [i19] Greg Ongie, Rebecca Willett, Daniel Soudry, Nathan Srebro: A Function Space View of Bounded Norm Infinite Width ReLU Nets: The Multivariate Case. CoRR abs/1910.01635 (2019)
- [i18] Matan Haroush, Itay Hubara, Elad Hoffer, Daniel Soudry: The Knowledge Within: Methods for Data-Free Model Compression. CoRR abs/1912.01274 (2019)
- [i17] Tzofnat Greenberg-Toledo, Ben Perach, Daniel Soudry, Shahar Kvatinsky: MTJ-Based Hardware Synapse Design for Quantized Deep Neural Networks. CoRR abs/1912.12636 (2019)
- 2018
- [j11] Daniel Soudry, Elad Hoffer, Mor Shpigel Nacson, Suriya Gunasekar, Nathan Srebro: The Implicit Bias of Gradient Descent on Separable Data. J. Mach. Learn. Res. 19: 70:1-70:57 (2018)
- [j10] Philippa J. Karoly, Levin Kuhlmann, Daniel Soudry, David B. Grayden, Mark J. Cook, Dean R. Freestone: Seizure pathways: A model-based investigation. PLoS Comput. Biol. 14(10) (2018)
- [c12] Elad Hoffer, Itay Hubara, Daniel Soudry: Fix your classifier: the marginal value of training the last weight layer. ICLR (Poster) 2018
- [c11] Daniel Soudry, Elad Hoffer: Exponentially vanishing sub-optimal local minima in multilayer neural networks. ICLR (Workshop) 2018
- [c10] Daniel Soudry, Elad Hoffer, Mor Shpigel Nacson, Nathan Srebro: The Implicit Bias of Gradient Descent on Separable Data. ICLR (Poster) 2018
- [c9] Suriya Gunasekar, Jason D. Lee, Daniel Soudry, Nathan Srebro: Characterizing Implicit Bias in Terms of Optimization Geometry. ICML 2018: 1827-1836
- [c8] Elad Hoffer, Ron Banner, Itay Golan, Daniel Soudry: Norm matters: efficient and accurate normalization schemes in deep networks. NeurIPS 2018: 2164-2174
- [c7] Ron Banner, Itay Hubara, Elad Hoffer, Daniel Soudry: Scalable methods for 8-bit training of neural networks. NeurIPS 2018: 5151-5159
- [c6] Suriya Gunasekar, Jason D. Lee, Daniel Soudry, Nati Srebro: Implicit Bias of Gradient Descent on Linear Convolutional Networks. NeurIPS 2018: 9482-9491
- [i16] Elad Hoffer, Itay Hubara, Daniel Soudry: Fix your classifier: the marginal value of training the last weight layer. CoRR abs/1801.04540 (2018)
- [i15] Elad Hoffer, Shai Fine, Daniel Soudry: On the Blindspots of Convolutional Networks. CoRR abs/1802.05187 (2018)
- [i14] Suriya Gunasekar, Jason D. Lee, Daniel Soudry, Nathan Srebro: Characterizing Implicit Bias in Terms of Optimization Geometry. CoRR abs/1802.08246 (2018)
- [i13] Elad Hoffer, Ron Banner, Itay Golan, Daniel Soudry: Norm matters: efficient and accurate normalization schemes in deep networks. CoRR abs/1803.01814 (2018)
- [i12] Mor Shpigel Nacson, Jason D. Lee, Suriya Gunasekar, Nathan Srebro, Daniel Soudry: Convergence of Gradient Descent on Separable Data. CoRR abs/1803.01905 (2018)
- [i11] Chen Zeno, Itay Golan, Elad Hoffer, Daniel Soudry: Bayesian Gradient Descent: Online Variational Bayes Learning with Increased Robustness to Catastrophic Forgetting and Weight Pruning. CoRR abs/1803.10123 (2018)
- [i10] Zhihui Zhu, Daniel Soudry, Yonina C. Eldar, Michael B. Wakin: The Global Optimization Geometry of Shallow Linear Neural Networks. CoRR abs/1805.04938 (2018)
- [i9] Ron Banner, Itay Hubara, Elad Hoffer, Daniel Soudry: Scalable Methods for 8-bit Training of Neural Networks. CoRR abs/1805.11046 (2018)
- [i8] Suriya Gunasekar, Jason D. Lee, Daniel Soudry, Nathan Srebro: Implicit Bias of Gradient Descent on Linear Convolutional Networks. CoRR abs/1806.00468 (2018)
- [i7] Mor Shpigel Nacson, Nathan Srebro, Daniel Soudry: Stochastic Gradient Descent on Separable Data: Exact Convergence with a Fixed Learning Rate. CoRR abs/1806.01796 (2018)
- [i6] Ron Banner, Yury Nahshan, Elad Hoffer, Daniel Soudry: ACIQ: Analytical Clipping for Integer Quantization of neural networks. CoRR abs/1810.05723 (2018)
- 2017
- [j9] Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, Yoshua Bengio: Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations. J. Mach. Learn. Res. 18: 187:1-187:30 (2017)
- [j8] Johannes Friedrich, Weijian Yang, Daniel Soudry, Yu Mu, Misha B. Ahrens, Rafael Yuste, Darcy S. Peterka, Liam Paninski: Multi-scale approaches for high-speed imaging and analysis of large neural populations. PLoS Comput. Biol. 13(8) (2017)
- [c5] Elad Hoffer, Itay Hubara, Daniel Soudry: Train longer, generalize better: closing the generalization gap in large batch training of neural networks. NIPS 2017: 1731-1741
- [i5] Elad Hoffer, Itay Hubara, Daniel Soudry: Train longer, generalize better: closing the generalization gap in large batch training of neural networks. CoRR abs/1705.08741 (2017)
- [i4] Daniel Soudry, Elad Hoffer, Nathan Srebro: The Implicit Bias of Gradient Descent on Separable Data. CoRR abs/1710.10345 (2017)
- 2016
- [c4] Eyal Rosenthal, Sergey Greshnikov, Daniel Soudry, Shahar Kvatinsky: A fully analog memristor-based neural network with online gradient training. ISCAS 2016: 1394-1397
- [c3] Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, Yoshua Bengio: Binarized Neural Networks. NIPS 2016: 4107-4115
- [i3] Daniel Soudry, Yair Carmon: No bad local minima: Data independent training error guarantees for multilayer neural networks. CoRR abs/1605.08361 (2016)
- [i2] Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, Yoshua Bengio: Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations. CoRR abs/1609.07061 (2016)
- 2015
- [j7] Daniel Soudry, Suraj Keshri, Patrick Stinson, Min-hwan Oh, Garud Iyengar, Liam Paninski: Efficient "Shotgun" Inference of Neural Connectivity from Highly Sub-sampled Activity Data. PLoS Comput. Biol. 11(10) (2015)
- [j6] Daniel Soudry, Dotan Di Castro, Asaf Gal, Avinoam Kolodny, Shahar Kvatinsky: Memristor-Based Multilayer Neural Networks With Online Gradient Descent Training. IEEE Trans. Neural Networks Learn. Syst. 26(10): 2408-2421 (2015)
- [i1] Zhiyong Cheng, Daniel Soudry, Zexi Mao, Zhen-zhong Lan: Training Binary Multilayer Neural Networks for Image Classification using Expectation Backpropagation. CoRR abs/1503.03562 (2015)
- 2014
- [j5] Daniel Soudry, Ron Meir: The neuronal response at extended timescales: a linearized spiking input-output relation. Frontiers Comput. Neurosci. 8: 29 (2014)
- [j4] Daniel Soudry, Ron Meir: The neuronal response at extended timescales: long-term correlations without long-term memory. Frontiers Comput. Neurosci. 8: 35 (2014)
- [j3] Danilo Pezo, Daniel Soudry, Patricio Orio: Diffusion approximation-based simulation of stochastic ion channels: which method to use? Frontiers Comput. Neurosci. 8: 139 (2014)
- [c2] Daniel Soudry, Itay Hubara, Ron Meir: Expectation Backpropagation: Parameter-Free Training of Multilayer Neural Networks with Continuous or Discrete Weights. NIPS 2014: 963-971
- 2012
- [j2] Daniel Soudry, Ron Meir: Conductance-Based Neuron Models and the Slow Dynamics of Excitability. Frontiers Comput. Neurosci. 6: 4 (2012)
- [c1] Dmitri B. Chklovskii, Daniel Soudry: "Neuronal spike generation mechanism as an oversampling, noise-shaping A-to-D converter". NIPS 2012: 512-520
- 2010
- [j1] Daniel Soudry, Ron Meir: History-Dependent Dynamics in a Generic Model of Ion Channels - An Analytic Study. Frontiers Comput. Neurosci. 4: 3 (2010)
last updated on 2024-12-12 21:57 CET by the dblp team
all metadata released as open data under CC0 1.0 license