Papers
Research Paper·Feb 8, 2026
Online Domain-aware LLM Decoding for Continual Domain Evolution
LLMs are typically fine-tuned offline on domain-specific data, assuming a static domain. In practice, domain knowledge evolves continuously through new regulations, products, services, and interaction...
6.0 viability
Research Paper·Feb 12, 2026
Manifold-Aware Temporal Domain Generalization for Large Language Models
Temporal distribution shifts are pervasive in real-world deployments of Large Language Models (LLMs), where data evolves continuously over time. While Temporal Domain Generalization (TDG) seeks to mod...
6.0 viability
Research Paper·Feb 17, 2026
Updating Parametric Knowledge with Context Distillation Retains Post-Training Capabilities
Post-training endows pretrained LLMs with a variety of desirable skills, including instruction-following, reasoning, and others. However, these post-trained LLMs only encode knowledge up to a cut-off ...
2.0 viability