State of the Field
Recent work in AI benchmarking focuses on evaluating large language models (LLMs) across diverse real-world scenarios. New benchmarks such as DSAEval and AgentDrive address the complexities of data science and autonomous systems, respectively, by providing structured datasets that reflect the multifaceted nature of these fields. DSAEval evaluates LLMs on a wide array of data science tasks, revealing strengths on structured data but persistent challenges in unstructured domains. AgentDrive contributes a comprehensive dataset of autonomous driving scenarios for training and assessing reasoning capabilities in dynamic environments. Benchmarks such as Gaia2 and PhysicsMind further underscore the need for robust evaluation in asynchronous settings and physical reasoning, respectively. Collectively, these efforts aim to refine AI models for practical applications, addressing commercial needs in automation, data analysis, and decision-making, while also revealing critical gaps in current model capabilities that require further research and development.
Papers
DSAEval: Evaluating Data Science Agents on a Wide Range of Real-World Data Science Problems
Recent LLM-based data agents aim to automate data science tasks ranging from data analysis to deep learning. However, the open-ended nature of real-world data science problems, which often span multip...
AgentDrive: An Open Benchmark Dataset for Agentic AI Reasoning with LLM-Generated Scenarios in Autonomous Systems
The rapid advancement of large language models (LLMs) has sparked growing interest in their integration into autonomous systems for reasoning-driven perception, planning, and decision-making. However,...
PhysicsMind: Sim and Real Mechanics Benchmarking for Physical Reasoning and Prediction in Foundational VLMs and World Models
Modern foundational Multimodal Large Language Models (MLLMs) and video world models have advanced significantly in mathematical, common-sense, and visual reasoning, but their grasp of the underlying p...
ConstraintBench: Benchmarking LLM Constraint Reasoning on Direct Optimization
Large language models are increasingly applied to operational decision-making where the underlying structure is constrained optimization. Existing benchmarks evaluate whether LLMs can formulate optimi...
ARC Prize 2025: Technical Report
The ARC-AGI benchmark series serves as a critical measure of few-shot generalization on novel tasks, a core aspect of intelligence. The ARC Prize 2025 global competition targeted the newly released AR...
Gaia2: Benchmarking LLM Agents on Dynamic and Asynchronous Environments
We introduce Gaia2, a benchmark for evaluating large language model agents in realistic, asynchronous environments. Unlike prior static or synchronous evaluations, Gaia2 introduces scenarios where env...
Retrieval-Infused Reasoning Sandbox: A Benchmark for Decoupling Retrieval and Reasoning Capabilities
Despite strong performance on existing benchmarks, it remains unclear whether large language models can reason over genuinely novel scientific information. Most evaluations score end-to-end RAG pipeli...
Bi-Level Prompt Optimization for Multimodal LLM-as-a-Judge
Large language models (LLMs) have become widely adopted as automated judges for evaluating AI-generated content. Despite their success, aligning LLM-based evaluations with human judgments remains chal...
Valet: A Standardized Testbed of Traditional Imperfect-Information Card Games
AI algorithms for imperfect-information games are typically compared using performance metrics on individual games, making it difficult to assess robustness across game choices. Card games are a natur...
SE-Bench: Benchmarking Self-Evolution with Knowledge Internalization
True self-evolution requires agents to act as lifelong learners that internalize novel experiences to solve future problems. However, rigorously measuring this foundational capability is hindered by t...