LLM Reasoning Comparison Hub
10 papers - avg viability 6.7
Top Papers
- RouteGoT: Node-Adaptive Routing for Cost-Efficient Graph of Thoughts Reasoning (7.0)
RouteGoT optimizes LLM reasoning by adaptively routing tasks to different models based on predicted difficulty and budget constraints, significantly reducing token usage while maintaining accuracy.
- Evaluation of Deontic Conditional Reasoning in Large Language Models: The Case of Wason's Selection Task (7.0)
Evaluates LLMs' deontic conditional reasoning on a Wason Selection Task dataset, with an eye toward building a bias detection and mitigation tool.
- LEAD: Breaking the No-Recovery Bottleneck in Long-Horizon Reasoning (7.0)
LEAD enhances LLM long-horizon reasoning by mitigating the no-recovery bottleneck through lookahead validation and overlapping rollouts, enabling more stable and accurate task execution.
- Reforming the Mechanism: Editing Reasoning Patterns in LLMs with Circuit Reshaping (7.0)
REdit selectively modifies reasoning patterns in LLMs by reshaping neural circuits to improve generality and locality of edits.
- Improving reasoning at inference time via uncertainty minimisation (7.0)
Improve LLM reasoning by selecting the most self-certain thought at each step, enhancing accuracy with minimal computational overhead.
- Learning When to Sample: Confidence-Aware Self-Consistency for Efficient LLM Chain-of-Thought Reasoning (7.0)
A confidence-aware framework decides when to sample additional reasoning paths, reducing inference cost while maintaining accuracy.
- DiSCTT: Consensus-Guided Self-Curriculum for Efficient Test-Time Adaptation in Reasoning (7.0)
DiSCTT dynamically optimizes LLM reasoning at test-time using a consensus-guided self-curriculum, improving accuracy and efficiency.
- $\textbf{Re}^{2}$: Unlocking LLM Reasoning via Reinforcement Learning with Re-solving (7.0)
Improve LLM reasoning by enabling models to abandon unproductive paths and restart, leading to better performance and efficiency.
- Efficient Paths and Dense Rewards: Probabilistic Flow Reasoning for Large Language Models (6.0)
CoT-Flow offers an efficient probabilistic reasoning framework for enhancing LLM reasoning with step-wise information gain.
- Thinking to Recall: How Reasoning Unlocks Parametric Knowledge in LLMs (5.0)
Reasoning at inference time helps LLMs recall parametric knowledge, improving answer accuracy.
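Several entries above name concrete mechanisms. RouteGoT's idea of routing each task to a model based on predicted difficulty and a token budget can be sketched as follows; this is a hypothetical illustration, not the paper's algorithm, and `difficulty_fn`, the per-call costs, and the 0.5 threshold are all assumptions:

```python
def route(task, difficulty_fn, budget_tokens, small_cost=1, large_cost=4):
    """Toy difficulty- and budget-aware router (illustrative only):
    send a task to the large model when it looks hard and the budget
    can still afford the pricier call; otherwise use the small model."""
    hard = difficulty_fn(task) > 0.5  # assumed difficulty threshold
    if hard and budget_tokens >= large_cost:
        return "large", budget_tokens - large_cost
    return "small", budget_tokens - small_cost

# Usage: an easy task stays on the cheap model, a hard one is escalated
model, remaining = route("add 2 and 3", lambda t: 0.2, budget_tokens=10)
print(model, remaining)   # small 9
model, remaining = route("prove the lemma", lambda t: 0.9, budget_tokens=10)
print(model, remaining)   # large 6
```

The budget check is what keeps cost bounded: once the remaining budget cannot cover a large-model call, every task falls back to the small model.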
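The uncertainty-minimisation entry describes selecting the most self-certain thought at each step. A minimal sketch, assuming self-certainty is scored as mean token log-probability (the candidate texts and log-prob values below are illustrative, not real model output):

```python
def avg_logprob(token_logprobs):
    """Mean token log-probability; higher means the model is more
    confident in (less uncertain about) the generated text."""
    return sum(token_logprobs) / len(token_logprobs)

def pick_most_certain(candidates):
    """candidates: list of (text, token_logprobs) pairs sampled for one
    reasoning step. Keep the candidate the model is most certain of."""
    return max(candidates, key=lambda c: avg_logprob(c[1]))

# Two candidate continuations for a single step (toy numbers)
candidates = [
    ("Step: factor the quadratic", [-0.2, -0.1, -0.3]),
    ("Step: guess and check",      [-1.5, -2.0, -0.9]),
]
best = pick_most_certain(candidates)
print(best[0])   # Step: factor the quadratic
```

Because only one candidate survives each step, the extra cost over greedy decoding is the handful of sampled candidates per step, matching the summary's claim of minimal overhead.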
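The confidence-aware self-consistency entry is about drawing chain-of-thought samples only until the answer distribution looks settled, instead of always paying for a fixed large sample. A sketch under stated assumptions (the `threshold`, `min_samples`, and the stand-in `sample_fn` are all hypothetical, not the paper's settings):

```python
from collections import Counter

def adaptive_self_consistency(sample_fn, max_samples=16,
                              threshold=0.8, min_samples=3):
    """Draw samples one at a time; stop early once the leading answer's
    empirical agreement reaches `threshold`. Returns (answer, n_used)."""
    answers = []
    for _ in range(max_samples):
        answers.append(sample_fn())
        if len(answers) >= min_samples:
            top, count = Counter(answers).most_common(1)[0]
            if count / len(answers) >= threshold:
                return top, len(answers)
    return Counter(answers).most_common(1)[0][0], len(answers)

# Stand-in for an LLM call: a fixed stream of final answers
stream = iter(["42", "42", "42", "41", "42"])
answer, used = adaptive_self_consistency(lambda: next(stream))
print(answer, used)   # 42 3 -- stops after 3 agreeing samples, not 16
```

On easy questions the vote converges after a few samples, which is where the cost savings come from; hard, high-disagreement questions still consume the full budget.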