Qwen
Qwen is a family of open large language models developed by Alibaba Cloud; it appears as a model entry in our research taxonomy.
Related papers:
- TikZilla: Scaling Text-to-TikZ with High-Quality Data and Reinforcement Learning
- NEX: Neuron Explore-Exploit Scoring for Label-Free Chain-of-Thought Selection and Model Ranking
- Bypassing AI Control Protocols via Agent-as-a-Proxy Attacks
- T2S-Bench & Structure-of-Thought: Benchmarking and Prompting Comprehensive Text-to-Structure Reasoning
- STAPO: Stabilizing Reinforcement Learning for LLMs by Silencing Rare Spurious Tokens
- Towards Autonomous Memory Agents
- Raising Bars, Not Parameters: LilMoo Compact Language Model for Hindi
- Precision over Diversity: High-Precision Reward Generalizes to Robust Instruction Following
- EyeLayer: Integrating Human Attention Patterns into LLM-Based Code Summarization
- Small Language Models for Privacy-Preserving Clinical Information Extraction in Low-Resource Languages
- Human Values in a Single Sentence: Moral Presence, Hierarchies, and Transformer Ensembles on the Schwartz Continuum
- NeuroProlog: Multi-Task Fine-Tuning for Neurosymbolic Mathematical Reasoning via the Cocktail Effect
- MACD: Model-Aware Contrastive Decoding via Counterfactual Data
- ProToken: Token-Level Attribution for Federated Large Language Models
- Spatio-Temporal Token Pruning for Efficient High-Resolution GUI Agents
- Importance of Prompt Optimisation for Error Detection in Medical Notes Using Language Models
- Beyond Dominant Patches: Spatial Credit Redistribution For Grounded Vision-Language Models
- ProactiveMobile: A Comprehensive Benchmark for Boosting Proactive Intelligence on Mobile Devices
- ArcFlow: Unleashing 2-Step Text-to-Image Generation via High-Precision Non-Linear Flow Distillation
- Evaluating Zero-Shot and One-Shot Adaptation of Small Language Models in Leader-Follower Interaction