Part VI Appendices

Appendix C: Pattern Library

This appendix serves as a practical library of patterns, templates, and configurations for various components of a Software Development as Code (SDaC) loop. Exhausted engineers and technical leaders need blueprints, not just concepts. Here, you’ll find concrete examples for defining missions, crafting robust validators for common technology stacks, configuring effectors to integrate with your existing workflows, and establishing circuit breakers for safety and control.

These patterns are designed to be immediately actionable. Treat them as starting points, adapting them to your specific environment and requirements. Each pattern emphasizes clarity, auditable configuration, and a focus on verifiable outcomes.

How to use this appendix

Treat this appendix as a library with five shelves:

If a chapter points here, it should be because you want a reusable artifact, not because the main text ran out of room. If a detail does not fit one of these shelves cleanly, keep it in the chapter that needs it instead of turning this appendix into a junk drawer.

Artifact taxonomy (keep these surfaces separate)

This appendix includes three different kinds of artifacts. They should not be collapsed into one schema:

If your local tooling uses different field names, translate at the runner boundary. Do not teach readers a second Mission Object schema.

Canonical reusable artifacts

Use this appendix as the book’s single copy-and-specialize home for reusable loop artifacts:

Other chapters should explain when to use these artifacts. This appendix should hold the reusable wording and shapes.

Minimum Viable Factory (MVF) v0: demo tools (companion repo)

Chapter 1’s MVF v0 loop lives in the companion code repo: github.com/kjwise/aoi_code.

If you want to read or modify the runnable tools, start here:

The contract is intentionally tiny:

Effector interface:

python3 factory/tools/sync_public_interfaces.py \
  --src <terrain_dir> \
  --doc <map_file> \
  [--apply]

Validator interface:

python3 factory/tools/validate_map_alignment.py \
  --src <terrain_dir> \
  --doc <map_file>

Logic flow (sketch):

# Effector
signatures = extract_public_signatures(src)
doc_after = rewrite_heading_block(doc_before, heading="## Public Interfaces", items=signatures)
diff = unified_diff(doc_before, doc_after)
if apply: write(doc_after)

# Validator
terrain = extract_public_signatures(src)
map_items = parse_items_from_heading_block(doc_text, heading="## Public Interfaces")
ok = sets_match(terrain, map_items)  # PASS only when Map and Terrain agree

This is the smallest useful Map surface: one bounded heading block that can’t drift without tripping a deterministic gate.
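The heading-block parse and set comparison are small enough to sketch in full. This is an illustrative reimplementation, not the companion repo's code; the function names mirror the logic-flow sketch:

```python
def parse_items_from_heading_block(doc_text: str, heading: str) -> set[str]:
    """Collect bullet items under `heading`, stopping at the next heading."""
    items, in_block = set(), False
    for line in doc_text.splitlines():
        if line.strip() == heading:
            in_block = True
        elif in_block and line.lstrip().startswith("#"):
            break  # the next heading closes the bounded block
        elif in_block and line.lstrip().startswith("- "):
            items.add(line.lstrip()[2:].strip())
    return items

def sets_match(terrain: set[str], map_items: set[str]) -> bool:
    return terrain == map_items

doc = """# Map
## Public Interfaces
- def load(path: str) -> bytes
- def save(path: str, data: bytes) -> None
## Notes
- prose the validator ignores
"""
terrain = {"def load(path: str) -> bytes",
           "def save(path: str, data: bytes) -> None"}
print(sets_match(terrain, parse_items_from_heading_block(doc, "## Public Interfaces")))
```

Because the block is bounded by headings, any drift (an added, removed, or reworded item) flips the comparison deterministically.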

The Three-Tier Frame (Substrate → Loop → Governance)

Use this as a completeness checklist for autonomous engineering. If you’re missing one layer, the system fails in a predictable way.

Substrate (existential foundation)

Context Graph implementation guide (tool-agnostic)

You do not need a fancy platform to get value from a Context Graph. You need stable identity, stable edges, and a way to query “what is relevant to this task?” without dumping the repo.

Two artifacts make this practical:

Some systems start with packets (a deterministic “prep” builder) and add the graph later. Others start with a coarse graph (workspace packages and dependencies) and improve the slicer over time. Either path works if the outputs are deterministic and logged.

Minimal data model

Identity is computed, not inferred

Treat identity as a deterministic extraction problem: read manifests and attach the result to nodes. A node should carry enough identity for the Driver Pattern to select the right mechanism without guessing.

Storage options (pick the simplest that works)

Incremental rebuild strategy

  1. Detect changed files (Git diff, file watcher, or CI inputs).
  2. Re-extract nodes for those files.
  3. Re-derive edges that touch those nodes.
  4. Persist a new graph snapshot with a content hash and extractor version.
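Steps 1 and 4 can be sketched with stdlib tooling (illustrative; `changed_files` assumes it runs inside a Git checkout):

```python
import hashlib
import json
import subprocess

def changed_files(base: str = "HEAD~1") -> list[str]:
    """Step 1: ask Git which files changed since `base`."""
    out = subprocess.run(
        ["git", "diff", "--name-only", base],
        capture_output=True, text=True, check=True,
    )
    return [p for p in out.stdout.splitlines() if p]

def snapshot_id(graph: dict, extractor_version: str) -> str:
    """Step 4: content-hash the graph together with the extractor version."""
    payload = json.dumps(graph, sort_keys=True) + extractor_version
    return hashlib.sha256(payload.encode()).hexdigest()[:12]

# Same graph + same extractor => same snapshot id; bump either and it changes.
print(snapshot_id({"nodes": ["backend/src/services/product_service.py"]}, "v1"))
```

Hashing both the graph content and the extractor version means a sensor upgrade produces a visibly new snapshot instead of silently mutating an old one.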

Query shape: anchor → slice

Most practical queries start from an anchor (a failing test, a function, a file path) and pull a bounded neighborhood:

Task packet shape (what you hand to the model)

Treat the task packet as an auditable build artifact:

Illustrative scale (not a benchmark)

For a medium polyglot repo, it’s common to end up with tens of thousands of nodes and edges once you include files, symbols, doc sections, and test cases.

One illustrative shape:

The point is not the absolute size. The point is that a slice is small, bounded, and reproducible.
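One way to keep a slice reproducible is to content-hash the packet itself. A minimal sketch, with illustrative field names:

```python
import hashlib
import json

def build_task_packet(mission_id: str, anchor: str, slice_items: list[str]) -> dict:
    """Bundle a bounded slice with enough metadata to reproduce and audit it."""
    body = {
        "mission_id": mission_id,
        "anchor": anchor,
        "slice": sorted(slice_items),  # deterministic ordering before hashing
    }
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "packet_hash": digest[:12]}

packet = build_task_packet(
    "fix-pagination-off-by-one",
    anchor="backend/tests/services/test_product_service.py::test_page_boundary",
    slice_items=["backend/src/services/product_service.py"],
)
```

Logging `packet_hash` next to each loop run makes "what did the model actually see?" a lookup instead of an archaeology project.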

Implementation options for code repositories (from light to heavy)

You can “roll your own” Context Graph, or you can adopt parts of existing graph tooling. The correct choice depends on the questions you need to answer.

Common options:

The rule is to build what you query. Don’t build a graph because graphs are fashionable. Define the packet queries you need, then implement the minimal graph and sensors that answer them deterministically.

Loop (operational mechanisms)

Governance (automated sovereignty)

Reference implementation note: Dream in the genAIbook engine

The book is not a “here is the finished system” handoff. But it can be useful to see one real Dream loop wiring so you know what “Depth 2” looks like in practice.

In this repository, Dream is exposed as Make targets:

make dream
make dream-loop

Under the hood it runs python -m tools.dream, which builds a core.workflow.Workflow with explicit Steps:

The important pattern is visible: Dream does not invent new capabilities from text. It chooses from an allowlisted action set, runs deterministic gates, and emits evidence you can audit.

Meta-Patterns (the stance)

These are the stances that make the mechanisms compose.

System Message / Output Contract Patterns

In an SDaC loop, Prep selects a bounded slice and Validation grades the artifact. The system message is the instruction layer that turns a general-purpose model into a component with an output contract.

The goal is determinism at the interface: machine-parseable output, no commentary, and a fail-closed mode when constraints can’t be satisfied.

Different platforms call this a system message or instruction layer. The name doesn’t matter. The output contract does.

This section is the canonical copy-and-specialize source for output-contract text used elsewhere in the book.

Keep the system message stable per Effector. Put task-specific intent in the Mission Object and the slice.

The Effector system message (diff-only)

Use this when the model’s job is to act as an Effector that proposes a patch.

You are an Effector component in a deterministic SDaC loop.

You are not a chat interface. You do not explain. You do not teach.
Your output is machine-consumed and will be validated by deterministic Physics.

Output contract:
- Output MUST be a single unified diff (git-style patch).
- Do NOT wrap the diff in Markdown fences.
- Do NOT include commentary, headings, or prose before/after the diff.
- The diff MUST use paths relative to the repository root.

Scope and safety:
- Only modify files that are explicitly in the scope allowlist provided in the Mission Object.
- Do not touch protected paths or policy surfaces unless the Mission Object explicitly allows it.
- Keep the diff minimal: smallest change that satisfies the acceptance criteria.
- Preserve existing style and formatting unless the Mission Object requires otherwise.

Failure mode (fail closed):
- If you cannot comply with the output contract or constraints, output a single JSON object (and nothing else):
  {"status":"fail","reason":"<one sentence>","blocking_issue":"<short label>"}

You will be given:
- A Mission Object (intent, scope, constraints, acceptance criteria)
- A context slice (evidence: Map + Terrain + failing signals)

Now produce the diff.

The Effector system message (JSON-only)

Use this when the Effector’s artifact is structured data (a plan, a report, a mission draft, a config snippet) and you need the model to “shut up and output JSON.”

You are an Effector component in a deterministic SDaC loop.

Your output is machine-consumed and will be parsed strictly.

Output contract:
- Output MUST be a single JSON object.
- Do NOT wrap the JSON in Markdown fences.
- Do NOT include any additional keys beyond the schema provided in the Mission Object.
- Do NOT include commentary or trailing text.

Failure mode (fail closed):
- If you cannot produce valid JSON that matches the schema, output:
  {"status":"fail","reason":"<one sentence>","blocking_issue":"<short label>"}

Now output the JSON.
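On the runner side, this contract is cheap to enforce. A minimal sketch of the strict parse (illustrative, not a prescribed implementation):

```python
import json

def parse_effector_output(raw: str) -> dict:
    """Strict parse: a single JSON object, nothing else; fail closed otherwise."""
    obj = json.loads(raw)  # raises on fences, commentary, or trailing text
    if not isinstance(obj, dict):
        raise ValueError("expected a single JSON object")
    return obj

result = parse_effector_output(
    '{"status":"fail","reason":"schema mismatch","blocking_issue":"schema"}'
)
```

Any Markdown fence or prose the model emits becomes a parse error, which is exactly the behavior you want: a contract violation should fail loudly, not get cleaned up heuristically.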

Notes (what this pattern does and doesn’t do)

Mission Object Templates (Chapter 7 canon)

Mission Objects are the executable contracts that drive work through the loop (Chapter 7). The templates below intentionally use the Chapter 7 schema and field names: the core contract (mission_id, mission_version, goal, scope, quality_gate, fallbacks) plus common optional fields (dependencies, constraints, acceptance_criteria, budgets, rollback_on, and scope.edit_regions) when the mission needs them.

Tooling differs. If your runner uses different field names, map them internally, but keep the manuscript canon stable. In other words: translate in code, not in docs.

This section is the canonical copy-and-specialize source for Mission templates used elsewhere in the book.

Minimal Mission Object skeleton (copy, then specialize)

mission_id: change-one-surface
mission_version: 1
goal: "Make one bounded change, proven by a deterministic gate"
scope:
  modify:
    - path/to/one_or_two_files.ext
  read_only: []
  do_not_touch:
    - .github/**
    - governance/**
budgets:
  max_files_changed: 2
  max_lines_changed: 120
quality_gate:
  cmd: "make test"
rollback_on:
  - "quality_gate_fail"
  - "scope_violation"
fallbacks:
  max_iterations: 3
  on_fail: revert

This is the smallest useful Mission Object shape. It is the right default when you are introducing Missions to a team for the first time.
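A runner can check the core contract before accepting a mission. A minimal sketch, assuming the YAML has already been parsed into a dict:

```python
# The Chapter 7 core contract fields, per the schema described above.
CORE_FIELDS = {"mission_id", "mission_version", "goal",
               "scope", "quality_gate", "fallbacks"}

def validate_core_contract(mission: dict) -> None:
    """Reject a mission that is missing any core contract field."""
    missing = CORE_FIELDS - mission.keys()
    if missing:
        raise ValueError(f"mission missing core fields: {sorted(missing)}")
```

This keeps malformed missions out of the loop at intake time, before any model call or file write happens.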

Production Mission Object (feature-style example)

This template outlines a production-style mission to implement a new API endpoint. It adds dependencies, constraints, acceptance criteria, and a stricter scope boundary while keeping the same Chapter 7 schema.

mission_id: add-user-profile-endpoint
mission_version: 1
goal: "Add GET /api/v1/users/{user_id} for user profiles"
dependencies:
  - backend/src/schemas/user.py
  - backend/tests/api/v1/test_users.py
scope:
  modify:
    - backend/src/api/v1/users.py
    - backend/tests/api/v1/test_users.py
  read_only:
    - backend/src/schemas/user.py
  do_not_touch:
    - .github/**
    - governance/**
    - infra/**
constraints:
  forbidden:
    - "Skip authentication/authorization checks"
acceptance_criteria:
  must_be_true:
    - "GET /api/v1/users/{id} returns 200 for valid ID and authorized user"
    - "GET /api/v1/users/{id} returns 404 for invalid ID"
    - "GET /api/v1/users/{id} returns 401 for unauthenticated requests"
    - "GET /api/v1/users/{id} returns 403 for unauthorized users"
budgets:
  max_files_changed: 2
  max_lines_changed: 250
quality_gate:
  cmd: "pytest -q backend/tests/api/v1/test_users.py"
rollback_on:
  - "quality_gate_fail"
  - "scope_violation"
fallbacks:
  max_iterations: 5
  on_fail: escalate

This is the “full-fat” version readers should copy when they need a real contract with bounded scope, explicit gates, and a clear failure policy.

Bug Fix Mission Object variant

This variant focuses on reproducing, fixing, and verifying a specific bug. It keeps the same Mission Object anatomy but changes the acceptance criteria and rollback posture.

mission_id: fix-pagination-off-by-one
mission_version: 1
goal: "Fix pagination off-by-one in product list"
scope:
  modify:
    - backend/src/services/product_service.py
    - backend/tests/services/test_product_service.py
  read_only:
    - backend/src/repositories/product_repository.py
  do_not_touch:
    - .github/**
    - governance/**
constraints:
  forbidden:
    - "Change public API response shape"
acceptance_criteria:
  must_be_true:
    - "No duplicate items appear across page boundaries"
    - "A new regression test reproduces the old bug and passes after the fix"
budgets:
  max_files_changed: 2
  max_lines_changed: 180
quality_gate:
  cmd: "pytest -q backend/tests/services/test_product_service.py"
rollback_on:
  - "quality_gate_fail"
  - "scope_violation"
fallbacks:
  max_iterations: 4
  on_fail: revert

Unknown constraint policy (fail fast, don’t hand-wave)

If a Mission Object introduces a constraint that your runner cannot enforce yet, you have two safe options:

Do not silently ignore the field. A constraint that cannot be enforced is not Physics yet.
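A minimal sketch of the fail-fast posture, assuming the runner keeps an explicit allowlist of constraint keys it can actually enforce:

```python
# What this (hypothetical) runner can enforce today; extend as enforcement lands.
ENFORCEABLE_CONSTRAINTS = {"forbidden"}

def check_constraints(mission: dict) -> None:
    """Fail fast on any constraint key the runner cannot enforce."""
    unknown = set(mission.get("constraints", {})) - ENFORCEABLE_CONSTRAINTS
    if unknown:
        raise ValueError(f"unenforceable constraint(s): {sorted(unknown)}")

check_constraints({"constraints": {"forbidden": ["Skip auth checks"]}})  # OK
```

The allowlist makes the gap visible: adding a new constraint type to a mission forces someone to either implement enforcement or consciously reject the mission.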

Day 2 hardening checklist

When you swap Chapter 1’s deterministic loop for a stochastic Effector, use this as the default hardening order. This is the book’s canonical Day 2 checklist.

Runner Configuration: Validator Recipes by Stack

Validators are the bedrock of SDaC as defined in this book, providing deterministic gates that ensure quality, correctness, and adherence to standards.

These YAML snippets are runner configuration, not Mission Objects. They describe how the local runner invokes validators, parses findings, and hints the next move to the Judge. Keep them out of the Mission schema itself.

Python Development Stack

A robust Python validation pipeline typically includes schema validation (e.g., Pydantic), static type checking, linting, and unit tests.

runner_validation_profile:
  validators:
    - id: PythonTypeChecker
      description: "Ensures type annotations are consistent and correct."
      command: ["mypy", "--strict", "src/"]
      cwd: "backend/"
      judge_hint: iterate
      findings_parser:
        type: "regex"
        # Example: file:line: error: message
        pattern: "^(.*?):(\\d+): error: (.*)$"
        groups: ["file", "line", "message"]

    - id: PythonLinter
      description: "Checks code style and common pitfalls."
      command: ["flake8", "src/"]
      cwd: "backend/"
      judge_hint: iterate
      findings_parser:
        type: "regex"
        # Example: file:line:col: message
        pattern: "^(.*?):(\\d+):(\\d+): (.*)$"
        groups: ["file", "line", "column", "message"]

    - id: PythonUnitTests
      description: "Executes unit tests and checks for failures."
      command:
        - pytest
        - --cov=src
        - --json-report
        - --json-report-file=.pytest_cache/report.json
      cwd: "backend/"
      judge_hint: iterate
      findings_parser:
        type: "jsonpath"
        path: "$.summary.failed"
      pass_when: "value == 0"

    - id: PydanticSchemaValidation
      description: "Ensures generated API schemas are valid Pydantic models (requires custom script)."
      command: ["python", "scripts/validate_pydantic_schemas.py", "src/schemas"]
      cwd: "backend/"
      judge_hint: iterate
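A runner might execute one validator entry and apply the regex findings parser like this (a sketch; the `echo` command is a stand-in so the example runs anywhere):

```python
import re
import subprocess

def run_regex_validator(command, cwd, pattern, groups):
    """Run one validator command and turn matching stdout lines into findings."""
    proc = subprocess.run(command, cwd=cwd, capture_output=True, text=True)
    findings = [
        dict(zip(groups, m.groups()))
        for line in proc.stdout.splitlines()
        if (m := re.match(pattern, line))
    ]
    return {"exit_code": proc.returncode, "findings": findings}

# Stand-in command; in the profile above this would be ["mypy", "--strict", "src/"].
result = run_regex_validator(
    ["echo", "src/app.py:12: error: Incompatible return value"],
    cwd=".",
    pattern=r"^(.*?):(\d+): error: (.*)$",
    groups=["file", "line", "message"],
)
```

Structured findings, rather than raw tool output, are what let the Judge do something smarter than "it failed, try again."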

TypeScript/Node Development Stack

For TypeScript, critical validators include the TypeScript compiler, ESLint for code quality, and a unit testing framework like Jest.

runner_validation_profile:
  validators:
    - id: TypeScriptCompiler
      description: "Verifies TypeScript syntax and type correctness."
      command: ["tsc", "--noEmit"]
      cwd: "frontend/"
      judge_hint: iterate
      findings_parser:
        type: "regex"
        # Example: file(line,col): error TSxxxx: message
        pattern: "^(.*?)\\((\\d+),(\\d+)\\): error (TS\\d+): (.*)$"
        groups: ["file", "line", "column", "errorCode", "message"]

    - id: ESLintLinter
      description: "Enforces code style and best practices for TypeScript/JavaScript."
      command: ["eslint", "--format", "json", "src/", "--max-warnings=0"]
      cwd: "frontend/"
      judge_hint: iterate
      findings_parser:
        type: "jsonpath"
        path: "$.[?(@.errorCount > 0)].errorCount"
      pass_when: "value.length == 0"

    - id: JestUnitTests
      description: "Runs Jest unit tests and reports failures."
      command: ["jest", "--ci", "--json", "--outputFile=jest-report.json"]
      cwd: "frontend/"
      judge_hint: iterate
      findings_parser:
        type: "jsonpath"
        path: "$.numFailedTests"
      pass_when: "value == 0"

    - id: ZodSchemaValidation
      description: "Ensures data schemas adhere to Zod definitions (for API contracts)."
      command: ["node", "scripts/validate_zod_schemas.js", "src/schemas"]
      cwd: "frontend/"
      judge_hint: iterate

Terraform/Infrastructure as Code Stack

Infrastructure as Code (IaC) requires specialized validation for syntax, best practices, security, and idempotency.

runner_validation_profile:
  validators:
    - id: TerraformSyntaxValidate
      description: "Checks Terraform configuration syntax and argument validity."
      command: ["terraform", "validate"]
      cwd: "infra/aws/"
      judge_hint: iterate

    - id: TFLint
      description: "Lints Terraform code for errors, warnings, and best practices."
      command: ["tflint", "--recursive", "--format", "json"]
      cwd: "infra/aws/"
      judge_hint: iterate
      findings_parser:
        type: "jsonpath"
        path: "$.issues"
      pass_when: "value.length == 0"

    - id: CheckovSecurityScan
      description: "Scans Terraform for common security misconfigurations and policy violations."
      command: ["checkov", "-f", ".", "--framework", "terraform", "--output", "json"]
      cwd: "infra/aws/"
      judge_hint: iterate
      findings_parser:
        type: "jsonpath"
        path: "$.results.failed_checks"
      pass_when: "value.length == 0"

    - id: TerraformPlanDiffCheck
      description: "Generates a Terraform plan and checks if any resource changes are proposed."
      command:
        - bash
        - -c
        - >-
            terraform plan -no-color |
            grep 'No changes. Your infrastructure matches the configuration.'
      cwd: "infra/aws/"
      judge_hint: escalate
      pass_when: "exit_code == 0"

Runner Configuration: Effector Patterns

Effectors are the final stage of a successful SDaC loop, taking the validated changes and applying them to the external world. These snippets are runner configuration for how an approved artifact is applied after the Judge returns PASS.

Git Commit Effector

This effector commits the generated and validated changes to a Git repository, potentially pushing to a remote.

effector_profile:
  type: "git_commit"
  config:
    auto_push: true
    branch_prefix: "sdac-" # New branch names will start with 'sdac-'
    branch_name_template: "{mission_id}-v{mission_version}-{run_id_short}" # e.g., sdac-fix-pagination-off-by-one-v1-abc123
    commit_message_template: |
      sdac({mission_id}@{mission_version}): {goal}

      Automated by SDaC loop.
      Audit trace: {audit_trace_url}
    git_repo_path: "./" # Relative path to the git repository
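Template rendering should fail closed on a missing placeholder rather than emit a half-filled message. A minimal sketch using the commit template above:

```python
COMMIT_TEMPLATE = (
    "sdac({mission_id}@{mission_version}): {goal}\n"
    "\n"
    "Automated by SDaC loop.\n"
    "Audit trace: {audit_trace_url}\n"
)

def render_commit_message(ctx: dict) -> str:
    """Fill the template; a missing key raises KeyError instead of guessing."""
    return COMMIT_TEMPLATE.format_map(ctx)

msg = render_commit_message({
    "mission_id": "fix-pagination-off-by-one",
    "mission_version": 1,
    "goal": "Fix pagination off-by-one in product list",
    "audit_trace_url": "https://ci.example.com/runs/abc123",  # illustrative URL
})
```

The same rendering rule applies to branch names and PR bodies: a `KeyError` at render time is far cheaper than a commit whose audit trace link is literally `{audit_trace_url}`.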

Pull Request Creation Effector (GitHub)

After committing, this pattern automatically opens a pull request on GitHub, ready for human review.

effector_profile:
  type: "github_pull_request"
  config:
    git_repo_path: "./"
    base_branch: "main"
    title_template: "[SDaC] {mission_id}@{mission_version}: {goal}"
    body_template: |
      This Pull Request was automatically generated by an SDaC loop.

      **Mission:** {mission_id}@{mission_version}

      **Goal:** {goal}

      **Audit Trace:** {audit_trace_url}

      Please review the changes carefully.
    labels: ["sdac-automated", "awaiting-review"]
    reviewers: ["@dev-lead", "@qa-engineer"] # Optional: Request specific reviewers
    draft: true # Creates a draft PR initially

CI/CD Trigger Effector (Webhook)

This effector sends a webhook to trigger a CI/CD pipeline, allowing further automated checks or deployment.

effector_profile:
  type: "webhook"
  config:
    url: "https://api.github.com/repos/{github_org}/{github_repo}/dispatches" # GitHub Actions example
    method: "POST"
    headers:
      Authorization: "Bearer {github_token}"
      Accept: "application/vnd.github.v3+json"
    payload:
      event_type: "sdac_changes_detected"
      client_payload:
        mission_id: "{mission_id}"
        branch: "{branch_name}"
        commit_sha: "{commit_sha}"
        repo_name: "{github_repo}"

Local File System Effector

For local development and rapid iteration, this effector saves the generated changes to a specific local directory.

effector_profile:
  type: "local_filesystem"
  config:
    output_directory: "sdac_artifacts/{mission_id}"
    filename_template: "{mission_id}-v{mission_version}_changes.patch"
    format: "diff" # Can also be 'full_files' for complete changed files

Policy Configuration: Circuit Breakers

Circuit breakers are critical safety mechanisms that prevent an SDaC loop from running out of control, consuming excessive resources, or making too many unsuccessful attempts. These are policy configuration, not Mission Object fields.

Max Iterations Breaker

This is one of the simplest circuit breakers: limit the number of times the Map-Updater can attempt to satisfy a mission.

circuit_breaker_policy:
  type: "max_iterations"
  config:
    limit: 10 # Stop after 10 attempts by the Map-Updater
    on_break: "log_and_notify"
    notification_channel: "slack_#sdac-alerts"

Time Limit Breaker

Prevents a single SDaC loop execution from running for an unreasonably long time, indicating a potential stall or complex problem.

circuit_breaker_policy:
  type: "time_limit"
  config:
    duration_seconds: 900 # Stop after 15 minutes (900 seconds)
    on_break: "log_and_cancel" # Cancel any ongoing Map-Updater processes
    notification_channel: "pagerduty_on_call"

Validation Failure Threshold Breaker

Halts the loop if the Map-Updater consistently fails validation, suggesting it’s unable to produce correct output or the mission is ill-defined.

circuit_breaker_policy:
  type: "validation_failure_threshold"
  config:
    consecutive_failures: 3 # Stop if 3 consecutive validation runs fail
    total_failures: 5       # Stop if 5 total validation runs fail across all iterations
    on_break: "log_and_suspend" # Suspend the loop, require manual intervention to resume
    notification_channel: "email_sdac-owners"

Human Intervention Breaker

Introduces a manual approval step into the loop, allowing engineers to review changes before further automation proceeds. This is particularly useful for sensitive operations or after a certain number of automatic retries.

circuit_breaker_policy:
  type: "manual_approval"
  config:
    require_after_iterations: 3 # After 3 automatic iterations, require human approval
    prompt_message: "SDaC loop has made 3 attempts. Review proposed changes before proceeding."
    notification_channel: "slack_#sdac-approvals"
    approval_timeout_minutes: 60 # If no approval within 60 minutes, break the loop
    on_break: "log_and_discard_changes"

Actionable: What you can do this week

  1. Choose a stack: Select one of the “Validator Recipes by Stack” that matches a common technology in your codebase (e.g., Python, TypeScript, Terraform).

  2. Pick a simple mission: Start from the canonical minimal Mission Object or adapt one of the production-style variants above for a small, isolated change.

  3. Configure a local loop: Set up a basic SDaC environment that uses your chosen Mission Object, one runner validation profile, and the “Local File System Effector.” Your goal is to see the system generate changes, run the validators against them, and then output a diff to a local directory.

  4. Experiment with circuit breakers: Start with the “Max Iterations Breaker” and set a low limit (e.g., 2 or 3) to observe how the system handles reaching that threshold.
