The Concept

Why Intent Engineering exists, what it rejects, and what changes when intent becomes the scarce resource.

The bottleneck has moved

Software engineering spent decades optimizing for one constraint: human-written code was expensive. Patterns, code review, CI/CD, microservices, agile: all management systems for scarce code.

That constraint collapsed. AI agents can now generate, review, test, and deploy code at machine speed.

The new bottleneck is intent. When code is cheap, the scarce resource is deciding what to build, why it matters, and what must stay out. Without that, AI produces polished waste.

Intent Engineering accepts the shift: everything other than intent is a choice, including stack, architecture, and timeline. The human job is Why, What, Not, and Learnings. AI handles the rest.

What Intent Engineering is not

It is not prompt engineering: prompt engineering tunes one conversation, while Intent Engineering defines the persistent document every conversation inherits.

It is not a design doc: design docs prescribe How, while Intent Documents refuse to, because stack, architecture, and timelines are implementation choices.

It is not a PRD: PRDs try to be comprehensive and static, while Intent Documents stay minimal, evolve through exploration, and can be killed.

It is not a framework or tool: it is a markdown file and a discipline.

The three layers

An Intent Document has exactly three layers because humans only own three things here: Why, What, and Not. Everything else is implementation choice.

Why — the reason this exists

Why is the claim about value: whose pain matters, what outcome counts as success, and what evidence would change your mind. In seed and exploring, it is a wager; in clarified, it is a commitment.
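As a sketch, for a hypothetical internal reporting tool, a Why section in the exploring state might read like this (the project and its details are illustrative, not prescribed by Intent Engineering):

```markdown
## Why
Finance teams rebuild the same monthly report by hand every cycle. (?)
Success: a team produces its report in under five minutes, no spreadsheet.
Would change my mind: pilot users keep exporting to Excel and editing there.
```

Note the uncertainty marker and the falsifier: a Why that names the evidence that would kill it is a wager you can actually settle.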

What — the thing being built

What is the surface area of the promise: features, user flows, and edge cases. It must be concrete enough for AI to derive work from, but silent on How; "users can export reports as PDF" belongs here, "use puppeteer" does not.
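Continuing the PDF-export example from the sentence above, a What section stays at the level of promises, never tools (the entries here are illustrative):

```markdown
## What
- Users can export any report as a PDF.
- Exports preserve charts and table formatting.
- Edge case: reports over 100 pages export without timing out.
```

Nothing in the list names a library, a service, or a file format internal to the implementation; each line is something a user could verify.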

Not — the boundaries that must not be crossed

Not is where seriousness shows: security constraints, scope limits, quality bars, forbidden patterns. Without Not, AI will optimize against assumptions you never wrote down.
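Staying with the same hypothetical reporting tool, a Not section makes the unwritten assumptions explicit (the constraints below are illustrative):

```markdown
## Not
- No report data leaves our infrastructure for rendering.
- No user-facing configuration of export layout in v1.
- No silent failure: every failed export surfaces an error to the user.
```

Each line is a boundary an agent could otherwise cross while "optimizing": shipping to a third-party rendering API, gold-plating a settings screen, or swallowing errors to make tests green.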

Learnings — the evolution engine

Clarity usually comes late. An idea starts as seed: vague, fragile, and wrong in interesting ways. The way forward is exploration.

Learnings is the timestamped record of what you tried, what reality said, and what changed.

For humans: it kills revisionist history. When the project pivots, the reason is in Learnings, not in memory.

For AI: it carries context code never will. The agent sees not just the current shape, but the path that produced it.
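A Learnings entry pairs a dated experiment with the change it forced; for the hypothetical export feature above, entries might look like this (dates and findings are invented for illustration):

```markdown
## Learnings
- 2025-01-12: Prototyped browser-side rendering; large reports froze the
  tab. What now requires server-side export for reports over 20 pages.
- 2025-01-20: Three of five interviewed users wanted CSV, not PDF. Added
  CSV to What; PDF stays, marked (?) until the next interview round.
```

The format matters less than the discipline: what was tried, what reality said, what changed in Why/What/Not as a result.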

The lifecycle

Intent moves through four states:

| State | Description | What to do |
| --- | --- | --- |
| seed | Idea only. Why is a guess; What and Not are empty or vague. | Write the hypothesis. Run the first test. |
| exploring | Hypothesis under pressure through prototypes, research, and interviews. | Run experiments. Record Learnings. Rewrite Why/What/Not as needed. |
| clarified | Why, What, and Not are explicit. Uncertainty markers are gone. | Hand off to AI. Let it build. |
| killed | Evidence says do not build, or an existing solution already wins. | Record why. Stop. This is a good outcome. |

The hardest transition is any state → killed. Starting projects is cheap; stopping bad ones is judgment.

Relationship to existing concepts

| Concept | Scope | Relationship to Intent Engineering |
| --- | --- | --- |
| Context Engineering | Per-task, ephemeral | The Intent Document is the persistent context above all tasks. |
| Harness Engineering | Tool and environment setup | The Not layer should compile into harness rules: CLAUDE.md, linters, CI. |
| Spec-driven Development | Feature specification | Specs are derived from What. Humans stop hand-authoring them. |
| Vibe Coding | Code generation | Vibe coding gets leverage only when intent is sharp. |
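As a sketch of "Not compiles into harness rules": a constraint written once in the Intent Document can be restated as a standing instruction in a harness file such as CLAUDE.md, so agents enforce it on every task. The wording of both fragments below is illustrative:

```markdown
<!-- INTENT.md -->
## Not
- No new runtime dependencies without an entry in Learnings.

<!-- CLAUDE.md -->
- Before adding any dependency, check INTENT.md. If the addition is not
  justified there, stop and ask instead of installing.
```

The same translation can target linters and CI: one boundary in Not, one mechanical check per tool.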

The automation pipeline

In a mature pipeline, humans touch two points: intent at the top, judgment at the bottom.

  [Intent Document]     ← Human writes and maintains
         ↓
  [Spec Generation]     ← AI derives task list from Why/What/Not
         ↓
  [Implementation]      ← AI agents write code in parallel
         ↓
  [Verification]        ← AI cross-reviews, tests, lints
         ↓
  [Deployment]          ← CI/CD automates release
         ↓
  [Feedback]            ← User data flows back
         ↓
  [Learn & Decide]      ← Human evaluates, updates intent or kills

Everything between intent and feedback is automatable now. Tech stack, architecture, task breakdown, verification flow, deployment cadence: choices. The irreducible human job is Why, What, Not, and Learnings.

Why it works

Constraints breed clarity. Why/What/Not removes the places people hide: roadmap theater, architecture cosplay, false precision. If you cannot state intent cleanly, you are not ready to build.

AI is better at How than you are. It has broader implementation recall than any individual engineer. Over-specifying How caps the solution at your local maximum.

Killing is productive. A documented kill is not waste; it is paid-down uncertainty. The Learnings from a dead project often outlast the code from a shipped one.

Intent precision scales. One sharp sentence in Why propagates through task generation, review, tests, and deployment. Vague intent multiplies ambiguity; sharp intent multiplies coherence.

Getting started

Create INTENT.md at the project root and write only Why, What, Not, and Learnings. Mark uncertainty with (?) until evidence removes it.
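A first INTENT.md can be this small; the section contents below are placeholders to replace, not prescribed text:

```markdown
# INTENT

## Why
<the pain, and the outcome that counts as success> (?)

## What
- <first concrete promise> (?)

## Not
- <first boundary>

## Learnings
- <date>: created. Next test: <cheapest experiment>.
```

Every line with (?) is a debt to be paid by an experiment before the document can claim the clarified state.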

The mechanics are in the Quick Start Guide; the discipline is refusing to smuggle How into the document.

→ Quick Start Guide
