State of the Field
Recent advances in AI reasoning focus on strengthening large language models (LLMs) through better supervision and structured frameworks. Techniques such as fine-grained credit assignment and hybrid actor-refiner collaboration help models separate effective reasoning steps from erroneous ones, particularly in complex, multi-turn tasks. Frameworks that synthesize modular reasoning skills and optimize evidence retrieval tackle the sparse-reward problem in long-context scenarios, making AI systems more efficient and accurate. Other work integrates neuro-symbolic methods to bolster commonsense reasoning and dynamic rule adaptation, both crucial for real-world applications. Together, these developments improve the reasoning accuracy of LLMs and promise practical benefits in customer-service automation, data analysis, and decision support, leading to more reliable AI-driven solutions across industries.
Papers
MatchTIR: Fine-Grained Supervision for Tool-Integrated Reasoning via Bipartite Matching
Tool-Integrated Reasoning (TIR) empowers large language models (LLMs) to tackle complex tasks by interleaving reasoning steps with external tool interactions. However, existing reinforcement learning ...
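The abstract is truncated here, but the general idea of bipartite matching for fine-grained credit can be sketched: match each tool call in a trajectory to the reasoning step it best supports, then credit steps by their matched call's relevance. The scores, names, and brute-force matcher below are illustrative assumptions, not MatchTIR's actual formulation.

```python
# Illustrative sketch only: credit assignment via maximum-weight bipartite
# matching between reasoning steps and tool calls, brute-forced over
# permutations for clarity. Scores are hypothetical.
from itertools import permutations

# Hypothetical relevance scores: scores[step][tool_call].
scores = [
    [0.9, 0.1, 0.0],
    [0.2, 0.8, 0.3],
    [0.0, 0.4, 0.7],
]

def best_matching(scores):
    """Return the step -> tool-call assignment maximizing total relevance."""
    n = len(scores)
    best = max(permutations(range(n)),
               key=lambda perm: sum(scores[i][perm[i]] for i in range(n)))
    return {i: best[i] for i in range(n)}

assignment = best_matching(scores)
# Each step's credit is the relevance of its matched tool call.
credit = {step: scores[step][call] for step, call in assignment.items()}
print(credit)  # {0: 0.9, 1: 0.8, 2: 0.7}
```

A real implementation would replace the permutation search with the Hungarian algorithm, which scales polynomially.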
TRIM: Hybrid Inference via Targeted Stepwise Routing in Multi-Step Reasoning Tasks
Multi-step reasoning tasks like mathematical problem solving are vulnerable to cascading failures, where a single incorrect step leads to complete solution breakdown. Current LLM routing methods assig...
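The routing idea can be sketched as dispatching each reasoning step to a cheap or strong model depending on an estimated per-step difficulty. The threshold, model names, and difficulty scores below are hypothetical placeholders, not TRIM's actual routing policy.

```python
# Illustrative sketch only: targeted stepwise routing. Hard steps go to a
# strong (expensive) model, easy steps to a cheap one. All names and the
# threshold are hypothetical.

def route_step(step_text, difficulty, threshold=0.5):
    """Route a single reasoning step based on its estimated difficulty."""
    return "strong-model" if difficulty > threshold else "cheap-model"

# Hypothetical (step, difficulty) pairs from a math solution.
steps = [("add the two fractions", 0.2),
         ("set up the induction hypothesis", 0.8)]
plan = [route_step(text, d) for text, d in steps]
print(plan)  # ['cheap-model', 'strong-model']
```

The point of stepwise (rather than per-query) routing is that a single hard step can be escalated without paying strong-model cost for the whole solution.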
Evidence-Augmented Policy Optimization with Reward Co-Evolution for Long-Context Reasoning
While Reinforcement Learning (RL) has advanced LLM reasoning, applying it to long-context scenarios is hindered by sparsity of outcome rewards. This limitation fails to penalize ungrounded "lucky gues...
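The sparse-reward problem described here is often attacked by reward shaping: adding a dense bonus for answers grounded in the right evidence, so ungrounded lucky guesses score lower. The weighting and scoring below are an illustrative assumption, not the paper's actual objective.

```python
# Illustrative sketch only: densify a sparse outcome reward with an evidence
# bonus for citing genuinely relevant passages. The alpha weight and span
# representation are hypothetical.

def shaped_reward(answer_correct, cited_spans, gold_spans, alpha=0.5):
    """Outcome reward plus a bonus proportional to gold-evidence recall."""
    outcome = 1.0 if answer_correct else 0.0
    overlap = len(set(cited_spans) & set(gold_spans))
    evidence = overlap / max(len(gold_spans), 1)
    return outcome + alpha * evidence

# An ungrounded "lucky guess" scores lower than a grounded correct answer.
print(shaped_reward(True, [], ["p3", "p7"]))            # 1.0
print(shaped_reward(True, ["p3", "p7"], ["p3", "p7"]))  # 1.5
```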
Agentic Proposing: Enhancing Large Language Model Reasoning via Compositional Skill Synthesis
Advancing complex reasoning in large language models relies on high-quality, verifiable datasets, yet human annotation remains cost-prohibitive and difficult to scale. Current synthesis paradigms ofte...
Search-R2: Enhancing Search-Integrated Reasoning via Actor-Refiner Collaboration
Search-integrated reasoning enables language agents to transcend static parametric knowledge by actively querying external sources. However, training these agents via reinforcement learning is hindere...
Latent Chain-of-Thought as Planning: Decoupling Reasoning from Verbalization
Chain-of-Thought (CoT) empowers Large Language Models (LLMs) to tackle complex problems, but remains constrained by the computational cost and reasoning path collapse when grounded in discrete token s...
A Balanced Neuro-Symbolic Approach for Commonsense Abductive Logic
Although Large Language Models (LLMs) have demonstrated impressive formal reasoning abilities, they often break down when problems require complex proof planning. One promising approach for improving ...
Code over Words: Overcoming Semantic Inertia via Code-Grounded Reasoning
LLMs struggle with Semantic Inertia: the inability to inhibit pre-trained priors (e.g., "Lava is Dangerous") when dynamic, in-context rules contradict them. We probe this phenomenon using Baba Is You,...
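Code-grounded reasoning of this kind can be sketched as storing in-context rules in an explicit data structure that is consulted before any default prior, so a rule rewrite (as in Baba Is You) cleanly overrides the pretrained association. The rule names and lookup scheme below are hypothetical illustrations, not the paper's method.

```python
# Illustrative sketch only: in-context rules as code. A rule lookup takes
# precedence over default (pretrained-style) priors such as "lava is
# dangerous". All names are hypothetical.

DEFAULT_PRIORS = {"lava": "dangerous", "water": "safe"}

def query(entity, in_context_rules):
    """In-context rules override defaults; unknown entities stay unknown."""
    return in_context_rules.get(entity, DEFAULT_PRIORS.get(entity, "unknown"))

# The level rewrites the rule: "Lava is Safe".
rules = {"lava": "safe"}
print(query("lava", rules))   # safe  (rule overrides the prior)
print(query("water", rules))  # safe  (falls back to the default)
```

Making the override explicit in code, rather than leaving it implicit in text, is what lets the reasoner inhibit the conflicting prior deterministically.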
Learning Structured Reasoning via Tractable Trajectory Control
Large language models can exhibit emergent reasoning behaviors, often manifested as recurring lexical patterns (e.g., "wait," indicating verification). However, complex reasoning trajectories remain s...
ITLC at SemEval-2026 Task 11: Normalization and Deterministic Parsing for Formal Reasoning in LLMs
Large language models suffer from content effects in reasoning tasks, particularly in multi-lingual contexts. We introduce a novel method that reduces these biases through explicit structural abstract...