Papers
Research Paper·Jan 26, 2026
HalluGuard: Demystifying Data-Driven and Reasoning-Driven Hallucinations in LLMs
The reliability of Large Language Models (LLMs) in high-stakes domains such as healthcare, law, and scientific discovery is often compromised by hallucinations. These failures typically stem from two ...
7.0 viability
Research Paper·Mar 18, 2026
Mitigating LLM Hallucinations through Domain-Grounded Tiered Retrieval
Large Language Models (LLMs) have achieved unprecedented fluency but remain susceptible to "hallucinations" - the generation of factually incorrect or ungrounded content. This limitation is particular...
7.0 viability
Research Paper·Jan 27, 2026
Rewarding Intellectual Humility: Learning When Not to Answer in Large Language Models
Large Language Models (LLMs) often produce hallucinated or unverifiable content, undermining their reliability in factual domains. This work investigates Reinforcement Learning with Verifiable Rewards...
7.0 viability