Part I: Build It (The Weekend Sprint)

Chapter 1 – The Minimum Viable Factory (Your First Loop)

Build a tiny loop: code changes, docs sync, checks decide.

You can have it running in about an hour. We’ll keep the language plain until you’ve seen it run, then we’ll name the parts so we can reuse them precisely.

Quick Bootstrap: Minimum Viable Factory (MVF) v0 (keep code and docs in sync)

Three Surfaces You Can Trust (Map · Terrain · Ledger)

This loop gets practical when three things are explicit: what should be true, what actually runs, and what changed last time.

  • Map (versioned description): the README, spec, or contract you want to keep honest.

  • Terrain (what actually runs): the code and config your system actually runs.

  • Ledger (change record): the diff and check output from the last run.
Example project tree (paths used throughout this chapter):

aoi_code/
├── Makefile
├── factory/tools/
│   ├── sync_public_interfaces.py
│   └── validate_map_alignment.py
└── product/
    ├── src/tax_calculator.py
    └── docs/architecture.md

We’re going to build a tiny maintenance loop that proves the basic shape: observe the code, propose a patch to the doc, apply it inside one bounded surface, and run a hard check that decides pass or fail.

That check is also your first example of Physics in this book: a hard rule the change script cannot argue with.

In this Minimum Viable Factory (MVF) v0, the goal is still implicit. It lives in CLI (command-line interface) flags and make targets. You can think of that as a rough mission. In Chapter 7, we make it explicit as a typed run request called a Mission Object. You can diff it, version it, and re-run it.

This first loop is deterministic on purpose. It lets you see the mechanics without keys, SDKs, or network calls. Later, you can swap in a model-based change script (an Effector) and keep everything else unchanged.
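To make that swap concrete, here is a minimal sketch of the Effector contract (the names are illustrative, not the companion repo's API): anything that observes the Terrain and returns a proposed patch satisfies it.

```python
from typing import Callable

# Hypothetical contract: an Effector maps an observation of the Terrain
# (here, a list of public signatures) to a proposed doc section.
Effector = Callable[[list[str]], str]

def deterministic_effector(public_signatures: list[str]) -> str:
    """MVF v0 style: derive the doc bullets mechanically from the code."""
    return "\n".join(f"- `{sig}`" for sig in sorted(public_signatures))

# A model-backed effector would satisfy the same contract, e.g.:
#   def model_effector(sigs): return call_llm(render_prompt(sigs))
# Everything downstream (diff, Validator, Ledger) stays unchanged.

patch = deterministic_effector(
    ["normalize_country(country)", "calculate_tax(amount, country, rate)"]
)
print(patch)
```

Because the contract is just "observation in, patch out," swapping the deterministic body for a model call changes nothing about how the patch is diffed, validated, or recorded.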

What can go wrong (when you add a model)

MVF v0 avoids model failure modes by design. The moment you replace the deterministic change script with a model-driven one, a few Day 2 issues show up: outputs that don't parse, patches that touch more than you intended, and retry loops that never stop.

Chapters 2 and 5 show the main fixes: strict output contracts, strict parsing, validator findings fed back into retries, and hard stop conditions. Appendix B is the quick diagnostic guide when the loop misbehaves.

For the reusable hardening checklist and output-contract patterns, use Appendix C. Chapter 1 names the failure modes; Appendix C is the copy/paste kit.

All runnable code for MVF v0 lives in the companion repo: kjwise/aoi_code on GitHub.

First, clone the repository and run the full loop:

# Clone the companion code repo
git clone https://github.com/kjwise/aoi_code.git

# Enter the directory
cd aoi_code

# Run the sync + validate loop
make all

This command first runs the change script (sync_public_interfaces.py) to update the documentation surface, then runs the check (validate_map_alignment.py) to confirm the docs and code still agree.
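The real script's internals may differ, but the core idea behind sync_public_interfaces.py fits in a few lines: walk the source file's AST and collect the public (non-underscore) top-level function signatures. A sketch under that assumption:

```python
import ast

def public_signatures(source: str) -> list[str]:
    """Extract `name(arg, ...)` strings for top-level, non-underscore functions."""
    tree = ast.parse(source)
    sigs = []
    for node in tree.body:
        if isinstance(node, ast.FunctionDef) and not node.name.startswith("_"):
            args = ", ".join(a.arg for a in node.args.args)
            sigs.append(f"{node.name}({args})")
    return sigs

src = "def calculate_tax(amount, country, rate): ...\ndef _helper(): ...\n"
print(public_signatures(src))  # the private _helper is filtered out
```

The validator can then re-derive the same list from the code and compare it against the doc, which is why the check is deterministic: both sides compute from the same inputs.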

On the current companion repo checkout, make all produces output like this. The commands, paths, and validator names are real; the exact diff hunk is illustrative because line numbers can move as the demo evolves:

python3 factory/tools/sync_public_interfaces.py --src product/src --doc product/docs/architecture.md --apply
--- product/docs/architecture.md
+++ product/docs/architecture.md
@@ -2,7 +2,9 @@
 ## Public Interfaces
-- (generated)
+- `calculate_tax(amount, country, rate)`
+
+- `normalize_country(country)`
 ## Notes
[effector] applied patch to product/docs/architecture.md
python3 factory/tools/validate_map_alignment.py --src product/src --doc product/docs/architecture.md
[validator] map_terrain_sync=pass

Run it a second time to confirm idempotence: the change script should report no drift detected and the check should still pass.

On a fresh clone, product/docs/architecture.md still contains the placeholder - (generated), so make validate alone fails until you sync. That initial failure is intentional: the demo starts with visible drift, then repairs it deterministically.

If you want to reset the demo between runs:

git restore product/docs/architecture.md

If you want to read the implementation details, start with factory/tools/sync_public_interfaces.py and factory/tools/validate_map_alignment.py in the companion repo.

What the tools do (interfaces + flow)

The two scripts are intentionally small and deterministic: sync_public_interfaces.py scans product/src for public interfaces and patches the Public Interfaces section of product/docs/architecture.md (with --apply it writes the change; without it, it only proposes). validate_map_alignment.py re-derives the same list from the code and fails if the doc disagrees, reporting findings like missing_in_map=[...].

You now have a minimal loop that finds drift, proposes a patch, applies it inside one bounded surface, and proves the docs and code still align.


No mystery: a Validator can be this small

The companion repo gives you a fuller example, but the underlying mechanism is not hiding anything exotic. A Validator can literally be a thin wrapper around an existing deterministic check:

#!/usr/bin/env python3
"""Minimal Validator: wrap one deterministic check and return pass/fail."""
import subprocess
import sys

# Run one bounded check (here, a single pytest file) and capture its exit code.
result = subprocess.run(
    ["pytest", "tests/test_public_interfaces.py", "-q"],
    text=True,
)
if result.returncode == 0:
    print("[validator] PASS")
    sys.exit(0)

# A non-zero exit blocks the change; propagate the original exit code for CI.
print("[validator] FAIL")
sys.exit(result.returncode)

That is already enough for the Chapter 1 contract: run one bounded check and return pass/fail. The rest of the book adds better scope control, better evidence, and better composition. It does not add hidden complexity.

The command panel: make

In the companion repo, those scripts are already wired behind a stable interface (make sync, make validate, make all).

Once you've synced, running make sync followed by make validate produces this stable second-run output:

[effector] no drift detected (Map matches Terrain)
[validator] map_terrain_sync=pass

That’s the same loop, now behind a stable interface.

Why make shows up in this book

In this book, make is the default command panel: one stable command surface that humans, CI, and agents can all call. In Chapter 7 we give that pattern a formal name. For now, make is just the book’s default runner.

You can replace make with any task runner. The point is the interface, not the tool: a stable set of entry points (sync, validate, all) that humans, CI, and agents can all call the same way.

If you’re a Bazel/Nx/Turborepo shop, mentally substitute your monorepo task runner; Chapter 7’s Driver Pattern shows how to put a stable command surface in front of whatever you already run.

Hardening ladder: MVF v0 → v0.2 → v0.5 → v1 → v2

The factory grows by adding stronger checks and stronger governance, not by jumping straight to more autonomy. This ladder gives you a practical path from a tiny deterministic loop to a more capable system.

Now name the parts

You just ran a small loop against a real boundary. The book gives the parts short names so we can talk about them without repeating “code / doc / change script / check” every time.

You can keep reading these as “code / doc / change script / check” for now. The short names matter because later chapters combine the same parts in bigger loops.

Terrain = code you run

The product/src/ directory is our Terrain: the code that actually runs.

Map = doc or spec you keep aligned

The product/docs/architecture.md file is our Map: a structured representation of the Terrain that we can keep aligned.

Git = Ledger (change history)

Git is the Ledger. Every commit records what changed, when, and by whom. In a fuller Software Development as Code (SDaC) system, each automated run also records which mission, inputs, validators, and outputs were involved.

Effector = bounded writer

The sync tool is an Effector: it observes the code and proposes a patch to a versioned description of the code. Later, you can put a model call inside an Effector, but the contract stays the same: output a diff, then prove it is allowed.

Validator = hard check

The validator script is the hard check. In the book, that kind of check is called a Validator. If Map and Terrain disagree, it fails and blocks the change. No debates. No “looks fine.”

The Loop in Action

What you’ve built is the core loop:

  1. Edit the code: you change product/src/.

  2. Run the sync script: make sync proposes or applies a patch to product/docs/architecture.md. A git diff shows the change.

  3. Run the check: make validate enforces whether the doc and code still align.

  4. Review the diff and decide: if the validator passes, you can commit the change to Git. If it fails, the change is blocked and you refine.
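Those four steps can be modeled end to end in a few lines of pure Python. This is a toy, not the companion repo's code: Map and Terrain are reduced to sets of names, sync copies one onto the other, and validate is the hard equality check.

```python
def sync(terrain: set[str], map_: set[str]) -> set[str]:
    """Effector: propose a Map that matches the Terrain (here, just copy it)."""
    return set(terrain)

def validate(terrain: set[str], map_: set[str]) -> bool:
    """Validator: hard pass/fail check that Map and Terrain agree."""
    return terrain == map_

terrain = {"calculate_tax", "normalize_country"}
map_ = {"(generated)"}               # fresh clone: placeholder doc, visible drift

assert not validate(terrain, map_)   # make validate alone fails
map_ = sync(terrain, map_)           # make sync repairs the drift
assert validate(terrain, map_)       # make validate now passes
assert sync(terrain, map_) == map_   # a second sync changes nothing (idempotent)
```

Idempotence is the property to hold onto: running the loop twice must not produce a second diff, or the Ledger fills with noise.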

This loop gives you a doc you can review, checks you can trust, and a Git trail you can inspect later. That is the whole point: AI can help with the change, but the system still explains what happened.

If you want a frame: the loop can improve code, and later it can help improve the process around the code too, but only inside hard boundaries and with a record of what happened.

At this point, you have a working factory. It’s minimal, but it demonstrates the core components.

Where this leads: in Chapter 13 (The Hofstadter Bridge), we take this same pattern (one bounded block + one Effector + one Validator) and apply it to documentation itself. That “micro-loop” is the smallest step toward turning parts of the docs into rules instead of leaving them as guidance.

SDaC does not require knowing the destination up front. It requires knowing what a safe iteration looks like.

Actionable: What you can do this week

  1. Run MVF v0 (recommended path):

    cd aoi_code
    make all

    If you want a clean baseline between runs:

    git restore product/docs/architecture.md
  2. Break it (create drift):

    • Add a new public function to product/src/tax_calculator.py (for example def tax_rate(country: str) -> float: ...).

    • Run only the validator:

      make validate

    It should fail with missing_in_map=[...].

  3. Fix it (restore alignment):

    • Run make sync and then make validate again.

    • Inspect the diff:

      git diff

    If the diff touches more than you intended, treat it as a blast-radius failure. Appendix B has a quick diagnostic for “slice too large / too small.” Even without a large language model (LLM), the failure shape is the same: too much context or too much freedom produces surprise edits.

  4. Harden one step:

    • Add one more Validator (formatting, type checks, or a schema Validator) and make all depend on it.
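As a sketch of what that extra Validator might look like (a hypothetical helper, not the repo's code), here is a formatting check over the doc surface: every bullet in the Public Interfaces section must be a backtick-quoted signature.

```python
import re

def validate_doc_format(doc_text: str) -> list[str]:
    """Return findings for bullets in the Public Interfaces section
    that are not backtick-quoted signatures like `name(args)`."""
    findings = []
    in_section = False
    for line in doc_text.splitlines():
        if line.startswith("## "):
            # Track whether we are inside the section this check governs.
            in_section = line.strip() == "## Public Interfaces"
            continue
        if in_section and line.strip().startswith("- "):
            if not re.fullmatch(r"- `\w+\([^)]*\)`", line.strip()):
                findings.append(f"bad_bullet={line.strip()!r}")
    return findings

doc = (
    "## Public Interfaces\n"
    "- `calculate_tax(amount, country, rate)`\n"
    "- plain text bullet\n"
    "## Notes\n"
)
findings = validate_doc_format(doc)
```

Wire a check like this behind its own make target, make all depend on it, and the gate tightens without touching the rest of the loop.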

This is the shape of the Minimum Viable Factory: small blast radius, concrete diffs, and pass/fail gates.
