AI × CODE

Prompts Are Source Code

EVAN REISER / WORK IN PROGRESS / 4 MIN READ

I lost 846 prompts on a Tuesday.

A path bug in my capture script had been silently writing to /prompts instead of ./prompts for eleven days. Every design decision, every correction, every "actually, do it this way instead." Gone. Not in the filesystem. Not in git history. Not recoverable.
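The failure mode is mundane: a bare relative path resolves against whatever directory the process happens to be running from, which for a hook may not be your repository at all. A minimal sketch of the bug and the fix, anchoring the archive to the script's own location (function names here are hypothetical, not my actual capture script):

```python
from pathlib import Path

# Buggy: "prompts" resolves against the current working directory.
# If the hook process runs from "/", writes land in /prompts and
# never reach the repository -- silently, for as long as it takes
# you to notice.
def capture_path_buggy(filename: str) -> Path:
    return Path("prompts") / filename

# Fixed: anchor the archive to the script's own location, so the
# result is the same no matter where the caller is running from.
def capture_path_fixed(filename: str) -> Path:
    return Path(__file__).resolve().parent / "prompts" / filename
```

The fix is one line; the lesson is that anything writing to an archive you depend on should use absolute paths and fail loudly when it can't.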

I've lost code before. You restore from a backup, rewrite a function from memory. Annoying but survivable. Losing those prompts felt categorically different. I could regenerate every Python file from those prompts in an afternoon. I could not regenerate those prompts from the Python files in a lifetime.

The Inversion

Your prompts are the latent, embedded information about what you actually care about. The intent, the constraints, the reasoning, the corrections. The code is a derivative artifact, compiled from those prompts by an LLM the same way machine code is compiled from C. Any code file can be regenerated from its originating prompt in minutes. The prompts can't be reconstructed from the code, ever. When you internalize that, the entire economics of software engineering inverts.

Code review is reviewing compiler output. The meaningful question isn't "is this function correct?" It's "did the human express their intent correctly?" Review the prompts. The code follows.

Technical debt is prompt debt. Messy code regenerates clean in minutes. Unclear intent, missing constraints, decisions made without context: those propagate into every future generation. When the prompts are wrong, every build inherits the error.

Dead code is sabotage. In a traditional codebase, deprecated code is clutter. In an AI-native codebase, it's poison. The AI reads everything in context. Old code that no longer reflects current intent actively misleads the next generation. Delete it. If you need it back, the prompt that created it is in the archive.

Architecture becomes fluid. Every prompt builds on top of the current architecture without knowing where you're heading. You end up in a local maximum. But if the prompts are archived, you can do periodic clean rebuilds: feed all your prompts to the AI at once and let it make different foundational decisions with the full picture. Architecture stops being a one-way commitment.

Migration scripts are a tax on the old paradigm. I don't write them. When a schema needs to change, I delete the old one and regenerate from the prompt that describes what I actually want. Migration code exists because changing code used to be expensive. It isn't anymore.

Onboarding is reading the prompts. The prompt archive is the complete record of what was built, why, what was tried, what was rejected. The code tells you what the system does. The prompts tell you why. New engineers don't need to spend weeks reading a codebase to understand the system. They read the decision log.

The Archive

I now have 1,000+ prompts auto-captured across two projects. Every human message gets committed to prompts/YYYY-MM-DD-<session-hash>.md automatically, version-controlled and searchable.
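The capture mechanism itself is small. The sketch below assumes the hook receives a JSON payload on stdin with `session_id` and `prompt` fields, as Claude Code's UserPromptSubmit hook provides; the short-hash length and file format are my own choices, and the git commit is handled separately:

```python
import json
import sys
from datetime import date
from pathlib import Path

def capture(payload: dict, base_dir: Path = Path("prompts")) -> Path:
    """Append one prompt to today's per-session archive file.

    Filename: prompts/YYYY-MM-DD-<session-hash>.md. The session id
    in the name keeps concurrent sessions from clobbering each other.
    """
    base_dir.mkdir(parents=True, exist_ok=True)
    session = payload["session_id"][:8]  # a short hash is enough
    out = base_dir / f"{date.today().isoformat()}-{session}.md"
    with out.open("a", encoding="utf-8") as f:
        f.write(payload["prompt"].rstrip() + "\n\n---\n\n")
    return out

if __name__ == "__main__":
    # Assumed hook contract: JSON payload arrives on stdin.
    capture(json.loads(sys.stdin.read()))
```

Appending with a separator, rather than one file per message, keeps each session readable as a single conversation.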

The archive records something code never can: the evolution of thinking. My early prompts are vague. "Make a system that handles the data pipeline." Later prompts encode learned constraints: "Process each document in isolation, never batch, because cross-document context bleeding caused false positives in the February run." That trajectory from vague to precise is the actual engineering knowledge. It lives in the prompts, and it compounds.

What You Can Do Today

Add this to your CLAUDE.md or project instructions:

## Prompt Capture

Prompts are source code. Code is compiled output.

1. Capture every prompt automatically as it's submitted.
   Write to prompts/YYYY-MM-DD-<session-hash>.md
   so concurrent sessions never conflict.
2. Version-control the prompts directory.
3. When understanding a past decision, search prompts first, code second.
4. Never write migration scripts for schemas only AI touches.
   Delete and regenerate.
5. When incremental changes feel like they're fighting the architecture,
   do a clean rebuild from the full prompt archive.
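Step 5 is mechanical once the archive exists: concatenate every prompt file, oldest first, into one document to hand the AI as rebuild context. A minimal sketch, assuming the filename scheme above (everything else here is illustrative):

```python
from pathlib import Path

def rebuild_context(prompt_dir: Path = Path("prompts")) -> str:
    """Concatenate the full prompt archive, oldest first, into a
    single document for a clean architectural rebuild."""
    parts = []
    # YYYY-MM-DD-<hash>.md filenames sort chronologically by name,
    # so a plain lexical sort gives the evolution of thinking in order.
    for f in sorted(prompt_dir.glob("*.md")):
        parts.append(f"## {f.name}\n\n{f.read_text(encoding='utf-8')}")
    return "\n\n".join(parts)
```

The per-file headers matter: they let the AI weigh late, precise prompts over early, vague ones when the two conflict.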

Try this prompt in your own project:

Analyze this project's relationship between human intent and code. For each of these questions, give a specific answer with evidence: (1) If I lost all the code but kept every prompt/instruction I'd ever written, how long would it take to regenerate? (2) If I lost all prompts but kept the code, what decisions would be impossible to reconstruct? (3) What migration or compatibility code exists only because changing code used to be expensive? (4) Where would a clean rebuild from scratch produce better architecture than the current incremental result? Identify the highest-impact change and implement it.

We keep treating AI-generated code with the same reverence we give human-written code. Protecting it, versioning it, migrating it, reviewing it line by line. Meanwhile the actual source of truth, the human's words and decisions, scrolls off the terminal and disappears forever.

Protect the thing that can't be regenerated.

-Evan

SOURCES

  1. Projects referenced: anonymized production systems with 2,100+ and 4,700+ commits respectively, built exclusively with Claude Code between January and March 2026
  2. Prompt capture: UserPromptSubmit hook, bash script, auto-committed to version-controlled prompts/ directory