Gradients Must Earn Their Influence: Unifying SFT with Generalized Entropic Objectives

Founder's Pitch

"Dynamic Entropy Fine-Tuning (DEFT) offers a novel approach to improve supervised fine-tuning of models by dynamically adjusting token-level weighting according to predictive distribution concentration."

NLP Optimization · Score: 4

Commercial Viability Breakdown (0-10 scale)

High Potential: 2.5 (1/4 signals)
Quick Build: 0 (0/4 signals)
Series A Potential: 0 (0/4 signals)

Sources used for this analysis

arXiv Paper: full-text PDF analysis of the research paper
GitHub Repository: code availability, stars, and contributor activity
Citation Network: Semantic Scholar citations and co-citation patterns
Community Predictions: crowd-sourced unicorn probability assessments

Analysis model: GPT-4o · Last scored: 2/11/2026
