SYNAPSE: Framework for Neuron Analysis and Perturbation in Sequence Encoding



Founder's Pitch

"Develop a lightweight framework, SYNAPSE, for stress-testing and understanding Transformer models' robustness without retraining."
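The pitch describes stress-testing a Transformer by perturbing individual neurons rather than retraining. As an illustration of that general idea only (the paper's actual API is not shown here; the function names, the linear readout, and the toy activations below are all assumptions), a minimal sketch of neuron ablation scores each hidden unit by how much zeroing it changes a downstream readout:

```python
# Illustrative neuron-ablation sketch: zero one hidden unit at a time and
# measure the change in a toy readout. Not the paper's implementation.

def readout(hidden, weights):
    """Toy linear readout: mean-pool token activations, dot with weights."""
    dim = len(weights)
    pooled = [sum(tok[d] for tok in hidden) / len(hidden) for d in range(dim)]
    return sum(p * w for p, w in zip(pooled, weights))

def ablate_neuron(hidden, neuron):
    """Copy of the activations with one hidden unit silenced everywhere."""
    return [[0.0 if d == neuron else v for d, v in enumerate(tok)]
            for tok in hidden]

def neuron_impact(hidden, weights, neuron):
    """Absolute change in the readout when a single neuron is zeroed."""
    return abs(readout(hidden, weights)
               - readout(ablate_neuron(hidden, neuron), weights))

# Example: 3 tokens, 4 hidden units; unit 2 carries most of the signal.
hidden = [[0.1, 0.0, 2.0, 0.3],
          [0.2, 0.1, 1.5, 0.0],
          [0.0, 0.2, 2.5, 0.1]]
weights = [0.5, 0.5, 1.0, 0.5]
impacts = [neuron_impact(hidden, weights, n) for n in range(4)]
print(max(range(4), key=lambda n: impacts[n]))  # → 2 (the dominant neuron)
```

In a real model the same loop would run over actual layer activations (e.g. captured with forward hooks), which is what makes the approach cheap: no gradient updates or retraining are needed, only repeated forward passes.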

AI Model Analysis · Score: 3

Commercial Viability Breakdown (0-10 scale)

High Potential: 0/10 (0/4 signals)
Quick Build: 10/10 (4/4 signals)
Series A Potential: 2.5/10 (1/4 signals)

Sources used for this analysis:

arXiv Paper: full-text PDF analysis of the research paper
GitHub Repository: code availability, stars, and contributor activity
Citation Network: Semantic Scholar citations and co-citation patterns
Community Predictions: crowd-sourced unicorn probability assessments

Analysis model: GPT-4o · Last scored: 3/9/2026

