- 2022
- Ali Abbasi, Parsa Nooralinejad, Vladimir Braverman, Hamed Pirsiavash, Soheil Kolouri: Sparsity and Heterogeneous Dropout for Continual Learning in the Null Space of Neural Activations. CoLLAs 2022: 617-628
- Ahmed Akakzia, Olivier Sigaud: Learning Object-Centered Autotelic Behaviors with Graph Neural Networks. CoLLAs 2022: 351-365
- Rahaf Aljundi, Daniel Olmeda Reino, Nikolay Chumerin, Richard E. Turner: Continual Novelty Detection. CoLLAs 2022: 1004-1025
- Christopher Beckham, Issam H. Laradji, Pau Rodríguez, David Vázquez, Derek Nowrouzezahrai, Christopher Pal: Overcoming challenges in leveraging GANs for few-shot data augmentation. CoLLAs 2022: 255-280
- Leonard Bereska, Efstratios Gavves: Continual Learning of Dynamical Systems With Competitive Federated Reservoir Computing. CoLLAs 2022: 335-350
- Alessandro Betti, Lapo Faggi, Marco Gori, Matteo Tiezzi, Simone Marullo, Enrico Meloni, Stefano Melacci: Continual Learning through Hamilton Equations. CoLLAs 2022: 201-212
- Prashant Shivaram Bhat, Bahram Zonooz, Elahe Arani: Task Agnostic Representation Consolidation: a Self-supervised based Continual Learning Approach. CoLLAs 2022: 390-405
- Prashant Shivaram Bhat, Bahram Zonooz, Elahe Arani: Consistency is the Key to Further Mitigating Catastrophic Forgetting in Continual Learning. CoLLAs 2022: 1195-1212
- Lucas Caccia, Jing Xu, Myle Ott, Marc'Aurelio Ranzato, Ludovic Denoyer: On Anytime Learning at Macroscale. CoLLAs 2022: 165-182
- Stephanie C. Y. Chan, Andrew Kyle Lampinen, Pierre Harvey Richemond, Felix Hill: Zipfian Environments for Reinforcement Learning. CoLLAs 2022: 406-429
- Mingxi Cheng, Tingyang Sun, Shahin Nazarian, Paul Bogdan: Trustworthiness Evaluation and Trust-Aware Design of CNN Architectures. CoLLAs 2022: 1086-1102
- Hugo Cisneros, Tomás Mikolov, Josef Sivic: Benchmarking Learning Efficiency in Deep Reservoir Computing. CoLLAs 2022: 532-547
- Liam Collins, Aryan Mokhtari, Sanjay Shakkottai: How Does the Task Landscape Affect MAML Performance? CoLLAs 2022: 23-59
- Nicholas Corrado, Yuxiao Qu, Josiah P. Hanna: Simulation-Acquired Latent Action Spaces for Dynamics Generalization. CoLLAs 2022: 661-682
- Zachary Alan Daniels, Aswin Raghavan, Jesse Hostetler, Abrar Rahman, Indranil Sur, Michael R. Piacentino, Ajay Divakaran, Roberto Corizzo, Kamil Faber, Nathalie Japkowicz, Michael Baron, James Seale Smith, Sahana Pramod Joshi, Zsolt Kira, Cameron Ethan Taylor, Mustafa Burak Gurbuz, Constantine Dovrolis, Tyler L. Hayes, Christopher Kanan, Jhair Gallardo: Model-Free Generative Replay for Lifelong Reinforcement Learning: Application to Starcraft-2. CoLLAs 2022: 1120-1145
- Sepideh Esmaeilpour, Lei Shu, Bing Liu: Open Set Recognition Via Augmentation-Based Similarity Learning. CoLLAs 2022: 875-885
- Kilian Fatras, Hiroki Naganuma, Ioannis Mitliagkas: Optimal Transport meets Noisy Label Robust Loss and MixUp Regularization for Domain Adaptation. CoLLAs 2022: 966-981
- Maia Fraser, Vincent Létourneau: Inexperienced RL Agents Can't Get It Right: Lower Bounds on Regret at Finite Sample Complexity. CoLLAs 2022: 327-334
- Christina M. Funke, Paul Vicol, Kuan-Chieh Wang, Matthias Kümmerer, Richard S. Zemel, Matthias Bethge: Disentanglement and Generalization Under Correlation Shifts. CoLLAs 2022: 116-141
- Cristina Garbacea, Qiaozhu Mei: Adapting Pre-trained Language Models to Low-Resource Text Simplification: The Path Matters. CoLLAs 2022: 1103-1119
- Martin Gauch, Maximilian Beck, Thomas Adler, Dmytro Kotsur, Stefan Fiel, Hamid Eghbal-zadeh, Johannes Brandstetter, Johannes Kofler, Markus Holzleitner, Werner Zellinger, Daniel Klotz, Sepp Hochreiter, Sebastian Lehner: Few-Shot Learning by Dimensionality Reduction in Gradient Space. CoLLAs 2022: 1043-1064
- Shruthi Gowda, Bahram Zonooz, Elahe Arani: InBiaseD: Inductive Bias Distillation to Improve Generalization and Robustness through Shape-awareness. CoLLAs 2022: 1026-1042
- Valentin Guillet, Dennis George Wilson, Emmanuel Rachelson: Neural Distillation as a State Representation Bottleneck in Reinforcement Learning. CoLLAs 2022: 798-818
- Simon Guiroy, Christopher Pal, Gonçalo Mordido, Sarath Chandar: Improving Meta-Learning Generalization with Activation-Based Early-Stopping. CoLLAs 2022: 213-230
- Meghna Gummadi, David Kent, Jorge A. Mendez, Eric Eaton: SHELS: Exclusive Feature Sets for Novelty Detection and Continual Learning Without Class Boundaries. CoLLAs 2022: 1065-1085
- NareshKumar Gurulingan, Elahe Arani, Bahram Zonooz: Curbing Task Interference using Representation Similarity-Guided Multi-Task Feature Sharing. CoLLAs 2022: 937-951
- Tyler L. Hayes, Christopher Kanan: Online Continual Learning for Embedded Devices. CoLLAs 2022: 744-766
- Xu Ji, Razvan Pascanu, R. Devon Hjelm, Balaji Lakshminarayanan, Andrea Vedaldi: Test Sample Accuracy Scales with Training Sample Density in Neural Networks. CoLLAs 2022: 629-646
- Alex Kearney, Anna Koop, Johannes Günther, Patrick M. Pilarski: What Should I Know? Using Meta-Gradient Descent for Predictive Feature Discovery in a Single Stream of Experience. CoLLAs 2022: 604-616
- Gyuhak Kim, Bing Liu, Zixuan Ke: A Multi-Head Model for Continual Learning via Out-of-Distribution Replay. CoLLAs 2022: 548-563