Continual Learning Comparison Hub
20 papers · average viability 5.0
Recent advances in continual learning address the persistent challenge of catastrophic forgetting, particularly in dynamic environments where data streams evolve over time. Frameworks such as SPRInG and Routing without Forgetting improve personalization and adaptability in large language models and transformers, respectively, through selective parametric adaptation and energy-based associative retrieval. These approaches let models maintain performance across tasks without extensive retraining, reducing computational cost. Methods such as Local Classifier Alignment and Semantic Geometry Preservation improve the alignment between model backbones and task-specific classifiers, yielding better generalization and stability. Dream2Learn takes a different route, restructuring knowledge through internally generated synthetic experiences. Collectively, these efforts point toward more efficient, scalable solutions for real-world applications, from personalized AI assistants to robust visual recognition systems, by enabling models to learn continuously without sacrificing previously acquired knowledge.
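To ground the recurring theme of catastrophic forgetting, here is a minimal sketch of experience replay with reservoir sampling, one classic mitigation. This is a generic textbook illustration, not the method of any paper listed below; the class and variable names are made up for the example.

```python
import random

class ReplayBuffer:
    """Fixed-size memory of past examples, filled by reservoir sampling.

    Reservoir sampling keeps a uniform sample over the whole stream,
    so earlier tasks stay represented even as new data keeps arriving.
    """

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.memory = []
        self.seen = 0                      # total examples observed so far
        self.rng = random.Random(seed)

    def add(self, example):
        self.seen += 1
        if len(self.memory) < self.capacity:
            self.memory.append(example)
        else:
            # Replace a stored example with probability capacity / seen.
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.memory[j] = example

    def sample(self, k):
        # Replayed examples get mixed into each new training batch.
        return self.rng.sample(self.memory, min(k, len(self.memory)))

# Simulate a stream of 3 sequential "tasks", 1000 examples each:
buffer = ReplayBuffer(capacity=100)
for task_id in range(3):
    for x in range(1000):
        buffer.add((task_id, x))

# After the stream, memory still holds examples from every task,
# which is exactly what lets rehearsal counteract forgetting.
tasks_in_memory = {t for t, _ in buffer.memory}
```

During training, a small batch from `buffer.sample(k)` would be interleaved with the current task's data so that gradients keep reflecting old tasks.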
Top Papers
- Attention Retention for Continual Learning with Vision Transformers (7.0)
A novel framework for attention retention to solve catastrophic forgetting in Vision Transformers, enhancing continual learning.
- Routing without Forgetting (7.0)
RwF is a novel transformer architecture that enhances continual learning by dynamically routing representations without task identifiers.
- Representation Finetuning for Continual Learning (7.0)
CoRe introduces a novel framework for continual learning by shifting the finetuning paradigm from weight space to representation space.
- Grow, Assess, Compress: Adaptive Backbone Scaling for Memory-Efficient Class Incremental Learning (7.0)
GRACE adaptively scales model capacity for class incremental learning, balancing plasticity and stability while reducing memory footprint.
- SPRInG: Continual LLM Personalization via Selective Parametric Adaptation and Retrieval-Interpolated Generation (7.0)
SPRInG provides continual personalization of Large Language Models, addressing preference drift with robust semi-parametric adaptation.
- LCA: Local Classifier Alignment for Continual Learning (7.0)
LCA introduces a novel loss function to enhance classifier alignment in continual learning, mitigating catastrophic forgetting.
- Shared LoRA Subspaces for almost Strict Continual Learning (6.0)
Share is a scalable, efficient approach that replaces multiple LoRA adapters with shared subspaces across diverse tasks and modalities.
- Why Do Neural Networks Forget: A Study of Collapse in Continual Learning (5.0)
A novel approach to understand and mitigate catastrophic forgetting in neural networks by analyzing structural collapse.
- Continual Learning through Control Minimization (5.0)
A novel continual learning approach that minimizes control effort for task integration.
- Dream2Learn: Structured Generative Dreaming for Continual Learning (5.0)
Dream2Learn uses synthetic experiences to enhance continual learning by autonomously generating novel classes through diffusion models.
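Several entries above build on LoRA-style adapters. As background, here is a minimal pure-Python sketch of the low-rank update idea underlying LoRA: a frozen weight matrix is adapted by adding a rank-r product of two small trainable matrices. This illustrates the generic technique only, with made-up dimensions; it is not the Share method or any other listed paper's mechanism.

```python
# A frozen weight matrix W (d_out x d_in) is adapted by a rank-r
# product B @ A; only A and B are trained, so an adapter costs
# r * (d_in + d_out) parameters instead of d_in * d_out.

def matmul(X, Y):
    # Naive matrix product for small illustrative matrices.
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def matadd(X, Y):
    return [[a + b for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]

d_out, d_in, r = 4, 6, 2
W = [[0.0] * d_in for _ in range(d_out)]   # frozen base weights
A = [[1.0] * d_in for _ in range(r)]       # trainable, r x d_in
B = [[0.5] * r for _ in range(d_out)]      # trainable, d_out x r

# Effective weights used at inference: W + B @ A.
W_adapted = matadd(W, matmul(B, A))

# Adapter parameter count vs. a full finetune of W:
adapter_params = r * (d_in + d_out)        # 20
full_params = d_in * d_out                 # 24
```

Keeping one (A, B) pair per task is cheap but multiplies adapters with the number of tasks, which is the scaling pressure that shared-subspace approaches aim to relieve.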