Open source · MIT licensed · Built in public

Every prompt you type,
perfected.

Vague in. Sharp out. PrePrompt sits between you and your LLM — intercepts every prompt, scores complexity in <1ms, and rewrites vague requests into precise specifications automatically, inside Claude Code and Cursor.

Claude Code · Cursor · Python 3.11+ · ~$0.001/prompt
preprompt — active session
You typed
write a middleware that validates tokens and handles refresh
PrePrompt sent
Write a FastAPI middleware class that validates JWT Bearer tokens using python-jose with RS256. Extract token from Authorization header, verify signature and expiry, attach decoded payload to request.state.user, return HTTP 401 with structured error body on failure. Handle refresh by checking expiry within 5 minutes and issuing new token via /auth/refresh.
PrePrompt +65 · FastAPI context injected from memory · 2 requirements expanded to 6
Try it live

See PrePrompt in action

Type a vague prompt or click an example. Watch the classifier score it and Haiku rewrite it in real time.

preprompt — demo
Your prompt
Classifier analysis
Click "Forge it" to analyze...
Optimized prompt
Output appears here...
Install in 3 commands
git clone https://github.com/yashdeeptehlan/preprompt
cd preprompt && ./scripts/install.sh
Restart Claude Code or Cursor — done. ✓ active
How it works

Install once. Works everywhere.
Gets smarter.

🪝

INTERCEPT

Registers as a global UserPromptSubmit hook in Claude Code and an MCP server in Cursor. Every prompt passes through before reaching the LLM.
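The hook contract can be sketched as a small Python script. The field name `prompt` and the "empty output means pass through" convention below are assumptions for illustration; check Claude Code's hooks documentation for the exact input schema (in practice the hook reads JSON on stdin and its stdout is injected as context):

```python
def classify(prompt: str) -> bool:
    # Stand-in for the real sub-millisecond heuristic classifier.
    return len(prompt.split()) < 12

def optimize(prompt: str) -> str:
    # Stand-in for the Claude Haiku rewrite call.
    return f"[optimized] {prompt}"

def handle(payload: dict) -> str:
    """Process one hook invocation: payload is the parsed stdin JSON."""
    prompt = payload.get("prompt", "")
    # Returning an empty string means the prompt passes through untouched.
    return optimize(prompt) if classify(prompt) else ""

# Claude Code would invoke the hook with JSON on stdin and inject stdout
# as context; here we call handle() directly for illustration.
print(handle({"prompt": "write a middleware that validates tokens"}))
```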

CLASSIFY

Pure Python heuristics score every prompt 0–100 in under 1ms. No API call. Simple prompts pass through untouched. Score ≥38 triggers optimization.
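A minimal sketch of what heuristic scoring like this could look like. The signals and weights below are illustrative stand-ins, not PrePrompt's actual classifier (which lives in the open-source repo); only the 0–100 range and the ≥38 threshold come from the description above:

```python
import re

# Hypothetical vagueness signals -- stand-ins for the real feature set.
VAGUE_WORDS = {"something", "stuff", "somehow", "maybe", "etc"}

def score_prompt(prompt: str) -> int:
    """Score 0-100: higher means vaguer / more in need of optimization."""
    words = prompt.lower().split()
    score = 0
    if len(words) < 12:                          # short prompts carry little spec
        score += 30
    score += 15 * sum(w in VAGUE_WORDS for w in words)
    if not re.search(r"[A-Z][a-z]+[A-Z]|\w+\.\w+|`", prompt):
        score += 20                              # no identifiers or code tokens
    if "?" not in prompt and len(words) < 25:
        score += 10                              # terse imperative, no constraints
    return min(score, 100)

THRESHOLD = 38  # scores at or above this are sent to the optimizer

def should_optimize(prompt: str) -> bool:
    return score_prompt(prompt) >= THRESHOLD
```

Because scoring is a handful of string operations with no I/O, it stays well under a millisecond even on long prompts.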

OPTIMIZE

Claude Haiku rewrites flagged prompts with your full stack context injected — framework, language, style preferences learned from past sessions.
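Context injection can be pictured as assembling the rewrite instruction from high-confidence memory entries before the Haiku call. The memory schema, confidence cutoff, and prompt template here are assumptions for illustration, not PrePrompt's actual internals:

```python
def build_optimizer_prompt(
    user_prompt: str,
    memory: dict[str, tuple[str, float]],  # key -> (value, confidence)
) -> str:
    """Assemble the rewrite instruction sent to the optimizer model."""
    # Keep only facts the memory layer is reasonably confident about.
    context = ", ".join(
        f"{key}={value}" for key, (value, conf) in memory.items() if conf >= 0.7
    )
    return (
        "Rewrite the following request into a precise, complete specification. "
        f"Known developer context: {context or 'none'}.\n\n"
        f"Request: {user_prompt}"
    )

memory = {"framework": ("fastapi", 0.92), "language": ("python", 0.88)}
print(build_optimizer_prompt("write a middleware that validates tokens", memory))
```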

Stack Memory

It learns your stack. Permanently.

After a few sessions, PrePrompt knows you use FastAPI, prefer typed code, and work with SQLite. It injects that context into every optimization — without you saying a word.

  • Confidence compounds with each prompt
  • Resets automatically when you switch stacks
  • Stored locally — never leaves your machine
$ preprompt-memory
PrePrompt — learned stack memory
──────────────────────────────────────────────
framework   fastapi   0.92   (47x)
language    python    0.88   (31x)
database    sqlite    0.80   (12x)
style       typed     0.75   (8x)
──────────────────────────────────────────────
Tip: more prompts = better optimization context
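The "confidence compounds with each prompt, resets when you switch stacks" behavior could be modeled with an EMA-style update like this. The formula is hypothetical; the real rule lives in the open-source memory layer:

```python
def update_memory(
    memory: dict[str, tuple[str, float, int]],  # key -> (value, confidence, count)
    key: str,
    observed: str,
    alpha: float = 0.2,
) -> None:
    """Record one observation of a stack attribute (e.g. framework=fastapi)."""
    value, conf, count = memory.get(key, (observed, 0.0, 0))
    if observed == value:
        # Confirming signal: confidence compounds toward 1.0.
        conf = conf + alpha * (1.0 - conf)
        memory[key] = (value, round(conf, 2), count + 1)
    else:
        # Conflicting signal: assume a stack switch and reset to the new value.
        memory[key] = (observed, alpha, 1)

mem: dict[str, tuple[str, float, int]] = {}
for _ in range(10):
    update_memory(mem, "framework", "fastapi")   # confidence climbs
update_memory(mem, "framework", "django")       # stack switch resets it
```

Repeated confirmations asymptotically approach full confidence, while a single conflicting observation drops back to the base rate for the new stack.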
See a full session

Watch PrePrompt work across a real dev session

5 turns. PrePrompt intercepts only when it matters.

session replay
Click replay to start...
< 1ms
Classifier latency
~$0.001
Per optimization
0
Lock errors (SQLite WAL)
MIT
License

Built in the open.

PrePrompt is MIT licensed. The classifier, optimizer, memory layer, and IDE integrations are all open source. Come build with us.

Improve the classifier

Tune scoring weights and signals for different languages and domains.

Help Wanted

Add IDE support

Windsurf, Zed, VS Code integrations — each needs a rules file and testing.

Pending

Build the dashboard

Real-time visualization of optimization history, cost savings, stack memory.

RFC

Get early access.

Free forever for open source. Shape the roadmap. Join developers already using PrePrompt in their daily workflow.

Free beta access · Shape the roadmap · No spam

Frequently Asked Questions

Is it free?
The core MCP server is 100% open source and free forever. Enterprise features — team memory, audit logs, hosted dashboard — will be paid.
Does it send my prompts anywhere?
No. Everything runs locally. The only external call is to Anthropic's API for optimization — the same call your IDE makes anyway. Your data never leaves your machine.
Which IDEs are supported?
Claude Code and Cursor work fully automatically. Any MCP-compatible IDE (Windsurf, Zed) works with manual setup. More coming in Phase 7.
How much does the API cost?
~$0.001 per optimized prompt. Most prompts are never sent — the classifier skips simple ones. Typical usage: $1–3/month.
Will it slow down my workflow?
No. The classifier runs in under 1ms locally using pure Python heuristics. The API call adds ~1–2 seconds on complex prompts only — less time than you'd spend rewriting.
How do I install it?
git clone https://github.com/yashdeeptehlan/preprompt
cd preprompt && ./scripts/install.sh
# Restart Claude Code or Cursor — done.