Agent Chain Deep-Dive

How the researcher → planner → executor → verifier chain works, why each agent gets a fresh 200K context, and why CONTEXT.md is the most important file you influence


Each GSD phase spawns a chain of specialized agents. They don’t share context — they hand off through files. This is intentional: file-based handoffs preserve the full output of each agent and let every successor start at peak quality.

The chain

/gsd:discuss-phase N → CONTEXT.md (your locked decisions)

/gsd:plan-phase N
  → Researcher (×4 parallel) → RESEARCH.md
  → Planner reads CONTEXT.md + RESEARCH.md → PLAN.md files
  → Checker verifies plans (up to 3 loops)

/gsd:execute-phase N
  → Wave 1: Executor A (fresh 200K ctx) → commit
  → Wave 1: Executor B (fresh 200K ctx) → commit  [parallel]
  → Wave 2: Executor C (fresh 200K ctx) → commit   [after Wave 1]

/gsd:verify-work N → VERIFICATION.md

Each agent is spawned fresh. None inherits the conversation history of the agent before it.
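The handoff artifacts above might sit together in a phase directory. This layout is illustrative; only the filenames come from the chain description:

```
phases/N/
├── CONTEXT.md       # locked decisions from discuss-phase
├── RESEARCH.md      # researcher output
├── 01-PLAN.md       # planner output, one file per plan
├── 02-PLAN.md
└── VERIFICATION.md  # verify-work report
```

Because every handoff is a file on disk, you can inspect or edit any link in the chain between commands.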

Why fresh context per agent?

Each executor starts at peak quality. No accumulated garbage from prior tasks, no context pressure from the discussion or planning phases. A 10-plan phase executes with the same quality at plan 10 as at plan 1.

Without fresh contexts, quality would degrade continuously across the phase. The executor at plan 8 would be running at 80%+ context saturation — rushed outputs, missed edge cases, skipped verification.

CONTEXT.md is your control point

Your /gsd:discuss-phase answers are written to CONTEXT.md. The planner reads it. The executor reads the plan, which embeds your decisions. You influence the executor without being in the loop.

This means the discuss step has more leverage than it appears. A clear, specific discussion produces a CONTEXT.md that shapes every plan — and through the plans, every executor. Vague discussion produces vague plans that leave decisions to the executor.

$ /gsd:discuss-phase N
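A resulting CONTEXT.md might look like the sketch below. The structure and field names here are hypothetical, not GSD's actual format; the point is that locked, specific decisions propagate, while anything left vague becomes an executor's judgment call:

```markdown
# Phase N Context

## Locked decisions
- Auth: session cookies, not JWTs
- Migrations: additive-only in this phase

## Out of scope
- Rate limiting (deferred)
```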

Waves = parallel execution

Plans in the same wave have no file conflicts and can run simultaneously. Dependent plans are placed in later waves and wait for their wave to complete. GSD uses this to maximize throughput without creating race conditions.
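The scheduling rule can be sketched as a small function: a plan with no dependencies lands in wave 1, and every other plan lands one wave after its latest dependency. This is a minimal sketch of that rule, not GSD's actual orchestrator code; the plan names are made up, and it assumes the dependency graph is acyclic:

```python
# Hypothetical wave assignment from depends_on metadata.
# Assumes an acyclic dependency graph.

def assign_waves(plans: dict[str, list[str]]) -> dict[str, int]:
    """Map each plan name to its wave number.

    plans maps a plan name to the list of plans it depends on.
    """
    waves: dict[str, int] = {}

    def wave_of(name: str) -> int:
        if name not in waves:
            # No dependencies -> wave 1; otherwise one wave
            # after the latest dependency finishes.
            deps = plans[name]
            waves[name] = 1 + max((wave_of(d) for d in deps), default=0)
        return waves[name]

    for name in plans:
        wave_of(name)
    return waves

plans = {
    "01-schema": [],
    "02-api": [],
    "03-ui": ["01-schema", "02-api"],
}
print(assign_waves(plans))  # {'01-schema': 1, '02-api': 1, '03-ui': 2}
```

Plans 01 and 02 share wave 1 and run in parallel; plan 03 waits for both, matching the Wave 1 / Wave 2 split in the execute-phase diagram above.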

Two frontmatter fields in each plan, wave: and depends_on:, control scheduling. The orchestrator reads them and assigns parallel or sequential execution accordingly.
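In frontmatter terms, the two fields might look like this. The field names come from the text above; the values and surrounding structure are illustrative:

```yaml
---
wave: 2
depends_on:
  - 01-schema
  - 02-api
---
```

A plan with wave: 1 and an empty depends_on: list is eligible to run immediately alongside the rest of its wave.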