Continual Learning

14 papers · 4.6 viability · +33% (30d)

State of the Field

Recent work in continual learning targets the persistent challenge of catastrophic forgetting, particularly in dynamic environments where models must adapt to evolving data streams. New frameworks built on selective adaptation and attention retention help models distinguish genuine shifts in user preferences from transient noise, improving personalization in applications such as large language models. Techniques that exploit shared low-rank subspaces and geometric redundancy allow new tasks to be integrated without extensive retraining, sharply reducing computational overhead. Other approaches preserve structural integrity while retaining the plasticity needed to learn new information, which matters for applications ranging from image classification to natural language understanding. As these methods mature, they promise to make continual learning more practical and efficient, paving the way for robust AI systems capable of lifelong learning across diverse domains.
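To make the "shared low-rank subspaces" idea above concrete, here is a minimal NumPy sketch of a LoRA-style adapter: the pretrained weight `W` stays frozen, and each new task trains only a small low-rank update `A @ B`. All names and dimensions here are illustrative, not taken from any specific paper in this list.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank = 64, 64, 4

# Frozen pretrained weight (stands in for one layer of a large model).
W = rng.standard_normal((d_out, d_in)) * 0.02

# Low-rank adapter: only A and B would be trained per task; W stays fixed.
A = np.zeros((d_out, rank))                     # zero init: adapter starts as a no-op
B = rng.standard_normal((rank, d_in)) * 0.02

def forward(x):
    """y = (W + A @ B) @ x — base model plus a low-rank task-specific update."""
    return W @ x + A @ (B @ x)

x = rng.standard_normal(d_in)
# With A = 0 the adapter contributes nothing: output equals the base model's.
assert np.allclose(forward(x), W @ x)
# The per-task trainable parameters are far fewer than a full weight update.
assert A.size + B.size < W.size
```

Because each task only adds (or reuses) a tiny `A`, `B` pair, forgetting is confined to the adapter subspace rather than the full weight matrix — which is the efficiency argument the summary is gesturing at.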

Last updated Feb 26, 2026

Papers

1–10 of 14
Research Paper · Jan 15, 2026

SPRInG: Continual LLM Personalization via Selective Parametric Adaptation and Retrieval-Interpolated Generation

Personalizing Large Language Models typically relies on static retrieval or one-time adaptation, assuming user preferences remain invariant over time. However, real-world interactions are dynamic, whe...

7.0 viability
Research Paper · Feb 5, 2026 · B2B

Attention Retention for Continual Learning with Vision Transformers

Continual learning (CL) empowers AI systems to progressively acquire knowledge from non-stationary data streams. However, catastrophic forgetting remains a critical challenge. In this work, we identif...

7.0 viability
Research Paper · Feb 5, 2026

Shared LoRA Subspaces for almost Strict Continual Learning

Adapting large pretrained models to new tasks efficiently and continually is crucial for real-world deployment but remains challenging due to catastrophic forgetting and the high cost of retraining. W...

6.0 viability
Research Paper · Feb 4, 2026 · B2B · Education

Continual Learning through Control Minimization

Catastrophic forgetting remains a fundamental challenge for neural networks when tasks are trained sequentially. In this work, we reformulate continual learning as a control problem where learning and...

5.0 viability
Research Paper · Jan 8, 2026

Key-Value Pair-Free Continual Learner via Task-Specific Prompt-Prototype

Continual learning aims to enable models to acquire new knowledge while retaining previously learned information. Prompt-based methods have shown remarkable performance in this domain; however, they t...

5.0 viability
Research Paper · Jan 26, 2026

Beyond Retention: Orchestrating Structural Safety and Plasticity in Continual Learning for LLMs

Continual learning in Large Language Models (LLMs) faces the critical challenge of balancing stability (retaining old knowledge) and plasticity (learning new tasks). While Experience Replay (ER) is a ...

5.0 viability
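The paper above treats Experience Replay (ER) as its baseline. ER is simple enough to sketch in a few lines: keep a bounded buffer of past examples (reservoir sampling keeps it an approximately uniform sample of everything seen) and mix them into training on new tasks. This is a generic illustration of the baseline, not the paper's own method.

```python
import random

class ReplayBuffer:
    """Reservoir-sampled experience replay buffer: holds a bounded,
    approximately uniform sample of all examples seen so far."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.data = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, example):
        """Standard reservoir sampling: the i-th example is kept
        with probability capacity / i once the buffer is full."""
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = example

    def sample(self, k):
        """Draw a minibatch of old examples to interleave with new-task data."""
        return self.rng.sample(self.data, min(k, len(self.data)))

# Stream 1000 examples through a buffer of 100; it stays bounded.
buf = ReplayBuffer(capacity=100)
for i in range(1000):
    buf.add(i)
assert len(buf.data) == 100
```

During training on a new task, each gradient step would combine a batch of new data with `buf.sample(k)` of old data — the stability/plasticity trade-off the abstract describes then becomes a question of how the two losses are weighted.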
Research Paper · Feb 3, 2026 · B2B

PLATE: Plasticity-Tunable Efficient Adapters for Geometry-Aware Continual Learning

We develop a continual learning method for pretrained models that requires no access to old-task data, addressing a practical barrier in foundation model adaptation where pretraining distributi...

5.0 viability
Research Paper · Mar 2, 2026

Dream2Learn: Structured Generative Dreaming for Continual Learning

Continual learning requires balancing plasticity and stability while mitigating catastrophic forgetting. Inspired by human dreaming as a mechanism for internal simulation and knowledge restructuring, ...

5.0 viability
Research Paper · Feb 23, 2026

GOAL: Geometrically Optimal Alignment for Continual Generalized Category Discovery

Continual Generalized Category Discovery (C-GCD) requires identifying novel classes from unlabeled data while retaining knowledge of known classes over time. Existing methods typically update classifi...

5.0 viability
Research Paper · Feb 23, 2026

Representation Stability in a Minimal Continual Learning Agent

Continual learning systems are increasingly deployed in environments where retraining or reset is infeasible, yet many approaches emphasize task performance rather than the evolution of internal repre...

4.0 viability
Page 1 of 2