LLM Optimization
Papers in LLM Optimization
11 papers
- Token-Level LLM Collaboration via FusionRoute
  FusionRoute optimizes collaboration between domain-specialized language models at the token level for efficient, high-performance decoding. (A hedged routing sketch follows this list.)
  LLM Optimization · Viability: 7.0
- Identifying Good and Bad Neurons for Task-Level Controllable LLMs
  NeuronLLM enhances LLMs by identifying and controlling task-critical neurons for improved NLP performance. (A neuron-scoring sketch follows this list.)
  LLM Optimization · Viability: 6.0
- Benchmarking Post-Training Quantization of Large Language Models under Microscaling Floating Point Formats
  Benchmarks low-precision microscaling floating point formats for computationally efficient LLM inference. (An MXFP4-style sketch follows this list.)
  LLM Optimization · Viability: 7.0
- Disentangling Task Conflicts in Multi-Task LoRA via Orthogonal Gradient Projection
  Ortho-LoRA resolves gradient conflicts between tasks through orthogonal gradient projection, enabling robust parameter-efficient multi-task training. (A projection sketch follows this list.)
  LLM Optimization · Viability: 5.0
- PROTEUS: SLA-Aware Routing via Lagrangian RL for Multi-LLM Serving Systems
  PROTEUS routes requests across multiple LLMs to meet SLA targets while preserving accuracy and cutting cost. (A dual-ascent sketch follows this list.)
  LLM Optimization · Viability: 7.0
- Optimizing Prompts for Large Language Models: A Causal Approach
  Causal Prompt Optimization tailors LLM prompts to specific queries, reducing dependence on costly real-time evaluations in enterprise workflows.
  LLM Optimization · Viability: 8.0
- Following the Teacher's Footsteps: Scheduled Checkpoint Distillation for Domain-Specific LLMs
  Efficiently distills large language models for domain-specific tasks using a schedule of teacher checkpoints.
  LLM Optimization · Viability: 3.0
- LLM-as-RNN: A Recurrent Language Model for Memory Updates and Sequence Prediction
  Turns frozen LLMs into error-correcting, recurrent sequence predictors with interpretable memory updates.
  LLM Optimization · Viability: 8.0
- HeteroCache: A Dynamic Retrieval Approach to Heterogeneous KV Cache Compression for Long-Context LLM Inference
  HeteroCache is a high-performance, training-free dynamic compression framework for LLM inference on long-context tasks. (A cache-eviction sketch follows this list.)
  LLM Optimization · Viability: 7.0
- The Flexibility Trap: Why Arbitrary Order Limits Reasoning Potential in Diffusion Language Models
  JustGRPO improves reasoning in diffusion language models by simplifying generation-order handling.
  LLM Optimization · Viability: 7.0
- What Makes Low-Bit Quantization-Aware Training Work for Reasoning LLMs? A Systematic Study
  A systematic study of low-bit quantization-aware training for fast, accurate LLM reasoning. (A fake-quantization sketch follows this list.)
  LLM Optimization · Viability: 6.0
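
The FusionRoute blurb doesn't describe its router, so here is a minimal sketch of generic token-level collaboration, assuming two stand-in domain experts and a simple entropy-based confidence rule for picking which model emits each token; the names and the heuristic are illustrative, not FusionRoute's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = 8  # toy vocabulary size

def expert_a(context):  # stand-in for a domain-specialized LM
    return rng.dirichlet(np.ones(VOCAB))

def expert_b(context):  # stand-in for a second specialist
    return rng.dirichlet(np.ones(VOCAB))

def route_step(context):
    """Let the most confident expert emit the next token."""
    dists = [expert_a(context), expert_b(context)]
    # Confidence heuristic: lower entropy = a more peaked, confident expert.
    entropies = [-(p * np.log(p)).sum() for p in dists]
    chosen = dists[int(np.argmin(entropies))]
    return int(np.argmax(chosen))  # greedy decode from the chosen expert

context = []
for _ in range(5):
    context.append(route_step(context))
print("decoded token ids:", context)
```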
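
For the good/bad-neuron entry, a sketch of one common attribution recipe (not necessarily NeuronLLM's): score each neuron by how differently it activates on task inputs versus generic inputs, then mask the least task-relevant ones at inference time. The activation arrays and the cutoff of eight neurons are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
hidden = rng.normal(loc=0.3, size=(200, 64))  # activations on task inputs
baseline = rng.normal(size=(200, 64))         # activations on generic inputs

# Score each neuron by the shift in its mean activation under the task.
score = np.abs(hidden.mean(axis=0) - baseline.mean(axis=0))

bad = np.argsort(score)[:8]  # least task-sensitive ("bad") neurons
mask = np.ones(64)
mask[bad] = 0.0              # suppress them when serving the task

task_activations = hidden * mask
print("suppressed neurons:", bad)
```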
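
For the microscaling benchmark entry, a sketch of MXFP4-style post-training quantization, assuming the usual OCP MX layout: a 32-element block shares one power-of-two scale and each element is rounded to the FP4 (E2M1) value grid. The scale-selection rule (align the block max with FP4's largest exponent) is a common convention and may differ from the paper's setup.

```python
import numpy as np

# Non-negative values representable by FP4 E2M1 (sign handled separately).
FP4_E2M1 = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def mx_quantize(block):
    """Simulate MXFP4: one shared power-of-two scale per 32-value block,
    elements rounded to the nearest FP4 (E2M1) magnitude."""
    amax = np.abs(block).max()
    if amax == 0:
        return block.copy()
    # Align the block maximum with E2M1's top exponent (2, since max = 6.0).
    scale = 2.0 ** (np.floor(np.log2(amax)) - 2)
    scaled = block / scale
    idx = np.abs(np.abs(scaled)[:, None] - FP4_E2M1[None, :]).argmin(axis=1)
    return np.sign(scaled) * FP4_E2M1[idx] * scale

x = np.random.default_rng(2).normal(size=32)
xq = mx_quantize(x)
print("max abs quantization error:", np.abs(x - xq).max())
```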
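
Ortho-LoRA's exact projection isn't given in the blurb; the sketch below shows the standard conflict-removal rule (PCGrad-style) that orthogonal gradient projection methods build on: when two task gradients point against each other, subtract from one its component along the other.

```python
import numpy as np

def project_out_conflict(g_i, g_j):
    """If g_i conflicts with g_j (negative inner product), remove from g_i
    its component along g_j so the tasks stop fighting over the update."""
    dot = g_i @ g_j
    if dot < 0:
        g_i = g_i - (dot / (g_j @ g_j)) * g_j
    return g_i

g_task1 = np.array([1.0, 2.0, -1.0])
g_task2 = np.array([-1.0, 0.5, 1.0])

g1 = project_out_conflict(g_task1, g_task2)
print(g1 @ g_task2)  # ~0.0: the conflicting component is gone
```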
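
PROTEUS couples routing with Lagrangian RL; this toy simulation shows the dual-ascent idea underneath such schemes: route by quality minus a latency price λ, and raise λ whenever observed latency violates the SLA. The model qualities, latencies, SLA value, and step size are all made up.

```python
import numpy as np

rng = np.random.default_rng(3)
SLA_LATENCY = 1.0      # hypothetical latency target (seconds)
lam, eta = 0.0, 0.05   # Lagrange multiplier (latency price) and step size

# Two hypothetical endpoints: (expected quality, expected latency).
MODELS = [(0.70, 0.3), (0.90, 1.8)]

for _ in range(500):
    # Route by the Lagrangian-relaxed score: quality minus priced latency.
    scores = [q - lam * t for q, t in MODELS]
    q, t = MODELS[int(np.argmax(scores))]
    latency = t + rng.normal(0, 0.05)  # noisy observed latency
    # Dual ascent: raise the price on SLA violations, relax it otherwise.
    lam = max(0.0, lam + eta * (latency - SLA_LATENCY))

print(f"latency price settles near lambda = {lam:.2f}")
```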
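
HeteroCache's dynamic retrieval mechanism isn't spelled out above; below is a generic heavy-hitter KV-eviction sketch of the kind long-context compression systems build on: rank cached tokens by the attention mass they receive and keep only a fixed budget. The shapes and single-query scoring are simplifications (real systems accumulate scores across steps and heads).

```python
import numpy as np

rng = np.random.default_rng(4)
T, D, BUDGET = 128, 16, 32  # cached tokens, head dim, retained entries

K = rng.normal(size=(T, D))  # cached keys
V = rng.normal(size=(T, D))  # cached values
q = rng.normal(size=(D,))    # current query

# Score each cached token by its softmax attention weight for this query.
logits = K @ q / np.sqrt(D)
attn = np.exp(logits - logits.max())
attn /= attn.sum()

# Keep only the BUDGET tokens carrying the most attention mass.
keep = np.sort(np.argsort(attn)[-BUDGET:])
K_small, V_small = K[keep], V[keep]

print(f"kept {BUDGET}/{T} entries covering {attn[keep].sum():.0%} of attention")
```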
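
For the low-bit QAT study, a minimal PyTorch sketch of the mechanism every such method shares: fake quantization with a straight-through estimator, so the forward pass sees rounded low-bit weights while gradients flow through the rounding unchanged. The 4-bit width and per-tensor max scaling are illustrative choices, not the paper's recipe.

```python
import torch

def fake_quant(x, bits=4):
    """Uniform fake quantization with a straight-through estimator (STE):
    forward rounds to the low-bit integer grid, backward acts as identity."""
    qmax = 2 ** (bits - 1) - 1
    scale = x.detach().abs().max().clamp(min=1e-8) / qmax
    q = (x / scale).round().clamp(-qmax - 1, qmax) * scale
    return x + (q - x).detach()  # STE trick: grad of output w.r.t. x is 1

w = torch.randn(8, 8, requires_grad=True)
loss = (fake_quant(w) ** 2).sum()
loss.backward()
print("gradient flows through rounding:", w.grad is not None)
```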