NLP Research Comparison Hub
8 papers - avg viability 3.6
Top Papers
- Compression Favors Consistency, Not Truth: When and Why Language Models Prefer Correct Information (8.0)
A study revealing how language models prioritize consistent information over truth, with implications for model training.
- Quantifying the Necessity of Chain of Thought through Opaque Serial Depth (5.0)
A method to quantify reasoning depth in LLMs, enhancing interpretability and monitoring.
- Think Before You Lie: How Reasoning Improves Honesty (4.0)
A study exploring how reasoning in LLMs can enhance honesty by navigating the representational space of deceptive and honest responses.
- A Unified Assessment of the Poverty of the Stimulus Argument for Neural Language Models (3.0)
An evaluation suite testing whether neural language models can generalize linguistic phenomena without innate syntactic constraints.
- Algorithmic Consequences of Particle Filters for Sentence Processing: Amplified Garden-Paths and Digging-In Effects (3.0)
This paper examines how particle-filter models of sentence processing predict amplified garden-path and digging-in effects under structural ambiguity (a toy sketch follows this list).
- One Language, Two Scripts: Probing Script-Invariance in LLM Concept Representations (2.0)
This research uses Serbian digraphia (the same words written in Cyrillic and Latin script) to probe whether sparse autoencoder features capture abstract, script-invariant meaning (a probe sketch also follows this list).
- Lost in the Middle at Birth: An Exact Theory of Transformer Position Bias (2.0)
This paper develops an exact theory of position bias as an inherent geometric property of transformer LLMs.
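To make the particle-filter entry concrete: the core idea is that a bounded sampler over incremental parse hypotheses can lose the correct analysis entirely, producing garden-path failures that worsen the longer the ambiguous region lasts. Below is a minimal toy sketch of that dynamic; the two-hypothesis setup and all likelihood values are illustrative assumptions, not numbers from the paper.

```python
import random

# Toy bounded-memory particle filter over two parse hypotheses for the
# locally ambiguous reduced-relative sentence "the horse raced past the
# barn fell". All likelihoods are assumed for illustration.

HYPOTHESES = ["main-verb", "reduced-relative"]

def word_likelihood(hypothesis, word):
    """P(next word | parse hypothesis), assumed values."""
    table = {
        ("main-verb", "raced"): 0.9, ("reduced-relative", "raced"): 0.1,
        ("main-verb", "past"): 0.8,  ("reduced-relative", "past"): 0.6,
        ("main-verb", "the"): 0.8,   ("reduced-relative", "the"): 0.7,
        ("main-verb", "barn"): 0.8,  ("reduced-relative", "barn"): 0.7,
        # The final verb is only consistent with the reduced relative.
        ("main-verb", "fell"): 0.01, ("reduced-relative", "fell"): 0.9,
    }
    return table.get((hypothesis, word), 0.5)

def run_filter(words, n_particles=100, seed=0):
    rng = random.Random(seed)
    particles = [rng.choice(HYPOTHESES) for _ in range(n_particles)]
    for word in words:
        weights = [word_likelihood(p, word) for p in particles]
        # Resampling squeezes out low-weight hypotheses; the longer the
        # ambiguous region, the harder recovery gets ("digging in").
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return particles

particles = run_filter(["raced", "past", "the", "barn", "fell"])
share = particles.count("reduced-relative") / len(particles)
print(f"reduced-relative share after disambiguation: {share:.2f}")
```

Shrinking `n_particles` makes the correct analysis more likely to go extinct before the disambiguating "fell" arrives, after which resampling cannot resurrect it; that is the amplification effect the title points to.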
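Similarly, for the script-invariance entry, a probe can be as simple as encoding hidden states for matched Cyrillic/Latin word pairs through a sparse autoencoder and measuring overlap of the active features. The sketch below assumes a plain ReLU SAE; the names `W_enc`, `b_enc`, and the random inputs are hypothetical placeholders, not the paper's actual setup.

```python
import numpy as np

# Sketch of a script-invariance probe: encode two hidden states (same
# Serbian word in Cyrillic vs. Latin script) with a sparse autoencoder
# and compare which features fire. Inputs here are random placeholders.

def sae_features(h, W_enc, b_enc):
    """Encode a hidden state into sparse features (assumed ReLU SAE)."""
    return np.maximum(h @ W_enc + b_enc, 0.0)

def active_set(features, threshold=0.0):
    """Indices of features firing above the threshold."""
    return set(np.flatnonzero(features > threshold))

def script_invariance(h_cyrillic, h_latin, W_enc, b_enc):
    """Jaccard overlap of active SAE features across the two scripts."""
    a = active_set(sae_features(h_cyrillic, W_enc, b_enc))
    b = active_set(sae_features(h_latin, W_enc, b_enc))
    return len(a & b) / max(len(a | b), 1)

# Toy dimensions: 16-d hidden states, 64 SAE features.
rng = np.random.default_rng(0)
W_enc, b_enc = rng.normal(size=(16, 64)), np.zeros(64)
h_cyr, h_lat = rng.normal(size=16), rng.normal(size=16)
print(f"feature overlap: {script_invariance(h_cyr, h_lat, W_enc, b_enc):.2f}")
```

A genuinely script-invariant feature set would push the overlap toward 1.0 for true Cyrillic/Latin pairs of the same word; the random vectors here only exercise the plumbing.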