
BUILDER'S SANDBOX

Build This Paper

Use an AI coding agent to implement this research.

OpenAI Codex (AI Agent)
Lightweight coding agent in your terminal.

Claude Code (AI Agent)
Agentic coding tool for terminal workflows.

AntiGravity IDE (Scaffolding)
AI agent mindset installer and workflow scaffolder.

Cursor (IDE)
AI-first code editor built on VS Code.

VS Code (IDE)
Free, open-source editor by Microsoft.

Estimated $10K - $14K over 6-10 weeks.



Founder's Pitch

"MaT-LoRA enables efficient temporal adaptation of LLMs through manifold-aware low-rank reparameterization."

LLM Adaptation · Score: 6
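The pitch's "low-rank reparameterization" refers to the LoRA family of techniques: a frozen pretrained weight matrix is adapted by adding a trainable product of two small matrices. The sketch below shows the generic idea only, not the paper's manifold-aware MaT-LoRA method; all dimensions and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, rank = 64, 64, 4  # illustrative sizes; rank << d_in

# Frozen pretrained weight (never updated during adaptation).
W = rng.standard_normal((d_out, d_in))

# Trainable low-rank factors. B starts at zero so the adapted
# layer initially computes exactly the pretrained function.
A = rng.standard_normal((rank, d_in)) * 0.01
B = np.zeros((d_out, rank))

def adapted_forward(x):
    """Forward pass through the reparameterized layer W + B @ A."""
    return (W + B @ A) @ x

x = rng.standard_normal(d_in)
# With B = 0, the adapted output matches the frozen layer.
assert np.allclose(adapted_forward(x), W @ x)
```

Only `A` and `B` (rank * (d_in + d_out) parameters) are trained per adaptation step, which is what makes per-time-period fine-tuning of a large model tractable.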

Commercial Viability Breakdown (0-10 scale)

High Potential: 2.5 (1/4 signals)
Quick Build: 10 (4/4 signals)
Series A Potential: 5 (2/4 signals)

Sources used for this analysis

arXiv Paper: Full-text PDF analysis of the research paper
GitHub Repository: Code availability, stars, and contributor activity
Citation Network: Semantic Scholar citations and co-citation patterns
Community Predictions: Crowd-sourced unicorn probability assessments

Analysis model: GPT-4o · Last scored: 2/12/2026
