SommBench: Assessing Sommelier Expertise of Language Models


BUILDER'S SANDBOX

Build This Paper

Use an AI coding agent to implement this research.

OpenAI Codex (AI Agent): Lightweight coding agent in your terminal.
Claude Code (AI Agent): Agentic coding tool for terminal workflows.
AntiGravity IDE (Scaffolding): AI agent mindset installer and workflow scaffolder.
Cursor (IDE): AI-first code editor built on VS Code.
VS Code (IDE): Free, open-source editor by Microsoft.

Estimated build cost: $9K - $13K over 6-10 weeks.



Founder's Pitch

"SommBench is a multilingual benchmark for assessing sommelier expertise in language models."

Benchmarking · Score: 8
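
The page gives only this one-line description of SommBench, so the paper's actual task format is not specified here. As a rough, hypothetical illustration of what "assessing sommelier expertise in language models" across languages could look like, the sketch below scores a model on multiple-choice wine questions and reports per-language accuracy. The SommItem schema, its field names, and the multiple-choice assumption are invented for this example, not taken from the paper.

```python
from dataclasses import dataclass

# Hypothetical item schema; SommBench's real format is not described on this page.
@dataclass
class SommItem:
    question: str        # e.g. a pairing or appellation question
    choices: list[str]   # candidate answers
    answer: int          # index of the correct choice
    language: str        # language code, since the pitch calls the benchmark multilingual

def score(model_fn, items: list[SommItem]) -> dict[str, float]:
    """Per-language accuracy for a model_fn mapping a prompt string to a choice index."""
    per_lang: dict[str, list[int]] = {}
    for item in items:
        prompt = item.question + "\n" + "\n".join(
            f"{i}. {c}" for i, c in enumerate(item.choices)
        )
        pred = model_fn(prompt)
        per_lang.setdefault(item.language, []).append(int(pred == item.answer))
    return {lang: sum(hits) / len(hits) for lang, hits in per_lang.items()}

if __name__ == "__main__":
    demo = [SommItem("Which grape is primary in Barolo?",
                     ["Nebbiolo", "Sangiovese", "Barbera"], 0, "en")]
    print(score(lambda prompt: 0, demo))  # stub model that always picks choice 0
```

Swapping the stub lambda for a real model call would turn this into an end-to-end evaluation loop.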

Commercial Viability Breakdown

Scores are on a 0-10 scale.

High Potential: 5 (2/4 signals)
Quick Build: 7.5 (3/4 signals)
Series A Potential: 10 (4/4 signals)

Sources used for this analysis

arXiv Paper: Full-text PDF analysis of the research paper.
GitHub Repository: Code availability, stars, and contributor activity.
Citation Network: Semantic Scholar citations and co-citation patterns.
Community Predictions: Crowd-sourced unicorn probability assessments.

Analysis model: GPT-4o · Last scored: 3/12/2026

