LLMs
LLMs is a research field in our research taxonomy.
Related papers
- TEA-Bench: A Systematic Benchmarking of Tool-enhanced Emotional Support Dialogue Agent
- Domain-Adaptation through Synthetic Data: Fine-Tuning Large Language Models for German Law
- SYMPHONY: Synergistic Multi-agent Planning with Heterogeneous Language Model Assembly
- Can LLMs Compress (and Decompress)? Evaluating Code Understanding and Execution via Invertibility
- MiRAGE: A Multiagent Framework for Generating Multimodal Multihop Question-Answer Dataset for RAG Evaluation
- DSC2025 -- ViHallu Challenge: Detecting Hallucination in Vietnamese LLMs
- Detecting and Correcting Hallucinations in LLM-Generated Code via Deterministic AST Analysis
- Veri-Sure: A Contract-Aware Multi-Agent Framework with Temporal Tracing and Formal Verification for Correct RTL Code Generation
- Multi-Persona Thinking for Bias Mitigation in Large Language Models
- MedRedFlag: Investigating how LLMs Redirect Misconceptions in Real-World Health Communication
- DiagLink: A Dual-User Diagnostic Assistance System by Synergizing Experts with LLMs and Knowledge Graphs
- CORE: Toward Ubiquitous 6G Intelligence Through Collaborative Orchestration of Large Language Model Agents Over Hierarchical Edge
- Assessing the Business Process Modeling Competences of Large Language Models
- H-AIM: Orchestrating LLMs, PDDL, and Behavior Trees for Hierarchical Multi-Robot Planning
- Conversation for Non-verifiable Learning: Self-Evolving LLMs through Meta-Evaluation
- Up to 36x Speedup: Mask-based Parallel Inference Paradigm for Key Information Extraction in MLLMs
- AgenticSCR: An Autonomous Agentic Secure Code Review for Immature Vulnerabilities Detection
- HESTIA: A Hessian-Guided Differentiable Quantization-Aware Training Framework for Extremely Low-Bit LLMs
- GameTalk: Training LLMs for Strategic Conversation
- A Decompilation-Driven Framework for Malware Detection with Large Language Models