
BUILDER'S SANDBOX

Build This Paper

Use an AI coding agent to implement this research.

OpenAI Codex (AI Agent)

Lightweight coding agent in your terminal.

Claude Code (AI Agent)

Agentic coding tool for terminal workflows.

AntiGravity IDE (Scaffolding)

AI agent mindset installer and workflow scaffolder.

Cursor (IDE)

AI-first code editor built on VS Code.

VS Code (IDE)

Free, open-source editor by Microsoft.

Estimated $9K-$13K over 6-10 weeks.

See exactly what it costs to build this, with comparisons to 3 similar funded startups.

7-day free trial. Cancel anytime.

Discover the researchers behind this paper and find similar experts.



Founder's Pitch

"Enhance next-item recommendation systems with position-aware sequential attention technology for improved prediction accuracy."

Recommendation Systems · Score: 5

Commercial Viability Breakdown (0-10 scale)

High Potential: 2.5 (1/4 signals)
Quick Build: 7.5 (3/4 signals)
Series A Potential: 0 (0/4 signals)

Sources used for this analysis

arXiv Paper: Full-text PDF analysis of the research paper
GitHub Repository: Code availability, stars, and contributor activity
Citation Network: Semantic Scholar citations and co-citation patterns
Community Predictions: Crowd-sourced unicorn probability assessments

Analysis model: GPT-4o · Last scored: 2/24/2026

Explore the full citation network and related research.

Understand the commercial significance and market impact.

Get detailed profiles of the research team.