Top papers
- Compression Favors Consistency, Not Truth: When and Why Language Models Prefer Correct Information (8.0)
- Quantifying the Necessity of Chain of Thought through Opaque Serial Depth (5.0)
- Think Before You Lie: How Reasoning Improves Honesty (4.0)
- A Unified Assessment of the Poverty of the Stimulus Argument for Neural Language Models (3.0)
- Algorithmic Consequences of Particle Filters for Sentence Processing: Amplified Garden-Paths and Digging-In Effects (3.0)
- One Language, Two Scripts: Probing Script-Invariance in LLM Concept Representations (2.0)
- Lost in the Middle at Birth: An Exact Theory of Transformer Position Bias (2.0)