MSSR: Memory-Aware Adaptive Replay for Continual LLM Fine-Tuning

BUILDER'S SANDBOX

Build This Paper

Use an AI coding agent to implement this research.

OpenAI Codex (AI Agent)
Lightweight coding agent in your terminal.

Claude Code (AI Agent)
Agentic coding tool for terminal workflows.

AntiGravity IDE (Scaffolding)
AI agent mindset installer and workflow scaffolder.

Cursor (IDE)
AI-first code editor built on VS Code.

VS Code (IDE)
Free, open-source editor by Microsoft.

Estimated implementation cost: $10K-$14K over 6-10 weeks.

Founder's Pitch

"MSSR is an adaptive replay framework for continual fine-tuning of LLMs that mitigates catastrophic forgetting while ensuring rapid adaptation."

Category: LLM Training · Score: 8
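
The page gives only this one-line description of MSSR, so the sketch below illustrates the generic idea the pitch names rather than the paper's actual method: keep a bounded memory buffer of examples from earlier tasks and mix a fraction of them into each fine-tuning batch, so the model revisits old data while adapting to new data. The replay ratio, buffer size, reservoir update, and the tiny linear stand-in for the LLM are all illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of replay-based continual fine-tuning. This is an assumed,
# generic setup for illustration only; MSSR's actual memory-selection and
# replay-scheduling rules are not described on this page. A tiny linear
# model stands in for the LLM so the loop runs end to end.
import random
import torch
from torch import nn


def continual_finetune(model, tasks, replay_ratio=0.25, buffer_size=512,
                       batch_size=8, lr=1e-3):
    """Fine-tune on each task in sequence, mixing examples replayed from a
    bounded memory buffer into every batch to reduce catastrophic forgetting."""
    buffer, seen = [], 0                       # memory of (x, y) pairs from past tasks
    optim = torch.optim.AdamW(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()

    for task in tasks:                         # tasks arrive one after another
        random.shuffle(task)
        for i in range(0, len(task), batch_size):
            new = task[i:i + batch_size]
            # Replay step: pad the current batch with samples drawn from memory.
            k = min(len(buffer), int(replay_ratio * len(new)))
            batch = new + random.sample(buffer, k) if k else new
            x = torch.stack([b[0] for b in batch])
            y = torch.stack([b[1] for b in batch])
            optim.zero_grad()
            loss_fn(model(x), y).backward()
            optim.step()
        # Reservoir sampling keeps the memory buffer bounded across tasks.
        for ex in task:
            seen += 1
            if len(buffer) < buffer_size:
                buffer.append(ex)
            else:
                j = random.randrange(seen)
                if j < buffer_size:
                    buffer[j] = ex


if __name__ == "__main__":
    torch.manual_seed(0)
    model = nn.Linear(16, 4)                   # stand-in for an LLM
    tasks = [[(torch.randn(16), torch.randn(4)) for _ in range(64)]
             for _ in range(3)]                # three synthetic "tasks"
    continual_finetune(model, tasks)
```

A "memory-aware adaptive" method would presumably replace the uniform random sampling above with a smarter selection and scheduling policy; that logic is not recoverable from this page.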

Commercial Viability Breakdown (0-10 scale)

High Potential: 5 (2/4 signals)
Quick Build: 10 (4/4 signals)
Series A Potential: 7.5 (3/4 signals)

Sources used for this analysis:

arXiv Paper: full-text PDF analysis of the research paper
GitHub Repository: code availability, stars, and contributor activity
Citation Network: Semantic Scholar citations and co-citation patterns
Community Predictions: crowd-sourced unicorn probability assessments

Analysis model: GPT-4o · Last scored: 3/10/2026
