Chapter 16 – Appendix C: Pattern Library
This appendix serves as a practical library of proven patterns, templates, and configurations for various components of a Software Development as Code (SDaC) loop. Exhausted engineers and technical leaders need blueprints, not just concepts. Here, you’ll find concrete examples for defining missions, crafting robust validators for common technology stacks, configuring effectors to integrate with your existing workflows, and establishing circuit breakers for safety and control.
These patterns are designed to be immediately actionable. Treat them as starting points, adapting them to your specific environment and requirements. Each pattern emphasizes clarity, auditable configuration, and a focus on verifiable outcomes.
MVF v0: demo tools (full source)
Chapter 1 uses two tiny scripts to demonstrate the substrate: one Effector that proposes/applies a diff, and one Validator that enforces PASS/FAIL. To keep Chapter 1 readable, the full runnable source lives here.
Save these files as:
- `factory/tools/sync_public_interfaces.py` (Effector)
- `factory/tools/validate_map_alignment.py` (Validator)

sync_public_interfaces.py (Effector)
from __future__ import annotations

import argparse
import ast
import difflib
from pathlib import Path


def _public_function_signatures(src_root: Path) -> list[str]:
    signatures: set[str] = set()
    for path in sorted(src_root.rglob("*.py")):
        module = ast.parse(path.read_text(encoding="utf-8"))
        for node in module.body:
            if isinstance(node, ast.FunctionDef) and not node.name.startswith("_"):
                args = [a.arg for a in node.args.args]
                signatures.add(f"{node.name}({', '.join(args)})")
    return sorted(signatures)


def _rewrite_public_interfaces_block(doc_text: str, signatures: list[str]) -> str:
    lines = doc_text.splitlines(keepends=True)
    heading = "## Public Interfaces"
    start = next((i for i, line in enumerate(lines) if line.rstrip() == heading), None)
    if start is None:
        raise ValueError(f"Heading not found: {heading}")
    end = next(
        (i for i in range(start + 1, len(lines)) if lines[i].startswith("## ")),
        len(lines),
    )
    replacement = [heading + "\n", "\n"]
    for sig in signatures:
        replacement.append(f"- `{sig}`\n")
    replacement.append("\n")
    return "".join(lines[:start] + replacement + lines[end:])


def main() -> int:
    parser = argparse.ArgumentParser(
        description="MVF demo Effector: sync Public Interfaces in Map from Terrain."
    )
    parser.add_argument("--src", type=Path, required=True, help="Source root (Terrain)")
    parser.add_argument("--doc", type=Path, required=True, help="Docs file (Map)")
    parser.add_argument(
        "--apply", action="store_true", help="Apply the diff to the Map file"
    )
    args = parser.parse_args()

    signatures = _public_function_signatures(args.src)
    before = args.doc.read_text(encoding="utf-8")
    after = _rewrite_public_interfaces_block(before, signatures)

    if before == after:
        print("[effector] no drift detected (Map matches Terrain)")
        return 0

    diff = difflib.unified_diff(
        before.splitlines(),
        after.splitlines(),
        fromfile=str(args.doc),
        tofile=str(args.doc),
        lineterm="",
    )
    print("\n".join(diff))

    if args.apply:
        args.doc.write_text(after, encoding="utf-8")
        print(f"[effector] applied patch to {args.doc}")
    return 0


if __name__ == "__main__":
    raise SystemExit(main())

validate_map_alignment.py (Validator)
from __future__ import annotations

import argparse
import ast
import re
import sys
from pathlib import Path


def _public_function_signatures(src_root: Path) -> list[str]:
    signatures: set[str] = set()
    for path in sorted(src_root.rglob("*.py")):
        module = ast.parse(path.read_text(encoding="utf-8"))
        for node in module.body:
            if isinstance(node, ast.FunctionDef) and not node.name.startswith("_"):
                args = [a.arg for a in node.args.args]
                signatures.add(f"{node.name}({', '.join(args)})")
    return sorted(signatures)


def _map_signatures(doc_text: str) -> list[str]:
    lines = doc_text.splitlines()
    start = next(
        (i for i, line in enumerate(lines) if line.strip() == "## Public Interfaces"),
        None,
    )
    if start is None:
        return []
    end = next(
        (i for i in range(start + 1, len(lines)) if lines[i].startswith("## ")),
        len(lines),
    )
    block = "\n".join(lines[start:end])
    # Match backtick-wrapped signatures like `name(arg1, arg2)`.
    return sorted(set(re.findall(r"`([^`]+\([^`]*\))`", block)))


def main() -> int:
    parser = argparse.ArgumentParser(description="MVF demo Validator: Map/Terrain sync.")
    parser.add_argument("--src", type=Path, required=True)
    parser.add_argument("--doc", type=Path, required=True)
    args = parser.parse_args()

    terrain = _public_function_signatures(args.src)
    map_sigs = _map_signatures(args.doc.read_text(encoding="utf-8"))

    missing = [s for s in terrain if s not in map_sigs]
    extra = [s for s in map_sigs if s not in terrain]

    if missing or extra:
        print("[validator] map_terrain_sync_fail", file=sys.stderr)
        if missing:
            print(f"  missing_in_map={missing}", file=sys.stderr)
        if extra:
            print(f"  extra_in_map={extra}", file=sys.stderr)
        return 1

    print("[validator] map_terrain_sync=pass")
    return 0


if __name__ == "__main__":
    raise SystemExit(main())

The Three-Tier Frame (Substrate → Loop → Governance)
Use this as a completeness checklist for autonomous engineering. If you’re missing one layer, the system fails in a predictable way.
Substrate (existential foundation)
Map / Terrain / Ledger: explicit intent, explicit reality, and an auditable history.
Physics: deterministic rules that halt bad changes (lint, schema, tests, architectural constraints).
Context Graph: topology and identity you can query (nodes: apps/packages; edges: dependencies). Nodes carry language/framework identity so Drivers select the right mechanisms without guessing.
Mission Object (Chapter 7): the unit-of-work contract + state of record (goal, scope, budgets, gates, lifecycle status).
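To make the Mission Object concrete, here is a minimal sketch as a typed record. The field names are illustrative only, not a required schema; Chapter 7 defines the contract itself.

```python
from dataclasses import dataclass


@dataclass
class MissionObject:
    """Illustrative unit-of-work contract; field names are examples, not a spec."""
    mission_id: str
    goal: str
    scope: list              # path allowlist (blast radius)
    budgets: dict            # e.g. {"max_iterations": 7}
    gates: list              # validator names that must all pass
    status: str = "pending"  # lifecycle: pending -> running -> passed / failed


mission = MissionObject(
    mission_id="abc123",
    goal="Implement user profile GET API endpoint",
    scope=["backend/src/api/v1/users.py"],
    budgets={"max_iterations": 7},
    gates=["PythonTypeChecker", "PythonUnitTests"],
)
```

Because the Mission Object is the state of record, it should be serialized and versioned alongside the code it governs, not kept in chat history.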
Context Graph implementation guide (tool-agnostic)
You do not need a fancy platform to get value from a Context Graph. You need stable identity, stable edges, and a way to query “what is relevant to this task?” without dumping the repo.
Two artifacts make this practical:
- A graph snapshot (repo-wide): deterministic nodes and edges you can query.
- A task packet (per Mission Object): a small, bounded slice pulled from the snapshot (plus live sensor output) with provenance and budgets.
Some systems start with packets (a deterministic “prep” builder) and add the graph later. Others start with a coarse graph (workspace packages and dependencies) and improve the slicer over time. Either path works if the outputs are deterministic and logged.
Minimal data model
- Node: stable `id`, `kind`, and optional identity (language/build) plus pointers to raw artifacts (file path, symbol name, doc heading).
- Edge: typed relations (`imports`, `depends_on`, `documents`, `exercises`, `validated_by`) with stable `src` and `dst`.
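A minimal sketch of this data model in plain dataclasses. The field names mirror the bullets above; nothing here is framework-specific.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Node:
    id: str               # stable, e.g. "file:backend/src/api/v1/users.py"
    kind: str             # "file" | "symbol" | "doc_section" | "test_case"
    identity: tuple = ()  # optional (language, build) identity
    artifact: str = ""    # pointer to the raw artifact (path, symbol, heading)


@dataclass(frozen=True)
class Edge:
    src: str  # stable Node id
    dst: str  # stable Node id
    rel: str  # "imports" | "depends_on" | "documents" | "exercises" | "validated_by"
```

Frozen dataclasses keep node and edge identity immutable, which is what makes snapshots and diffs between snapshots meaningful.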
Identity is computed, not inferred
Treat identity as a deterministic extraction problem: read manifests and attach the result to nodes. A node should carry enough identity for the Driver Pattern to select the right mechanism without guessing.
Storage options (pick the simplest that works)
- In-memory adjacency maps (small repos, fast iteration).
- A file-backed index (JSON/JSONL) that you can rebuild deterministically.
- A lightweight database when you need richer queries or incremental updates.
Incremental rebuild strategy
- Detect changed files (Git diff, file watcher, or CI inputs).
- Re-extract nodes for those files.
- Re-derive edges that touch those nodes.
- Persist a new graph snapshot with a content hash and extractor version.
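The persistence step can be as simple as hashing a canonical serialization, so two runs over the same inputs provably produce the same snapshot. A minimal sketch (the snapshot field names are illustrative):

```python
import hashlib
import json

EXTRACTOR_VERSION = "0.1.0"  # bump whenever extraction logic changes


def snapshot(nodes: list, edges: list) -> dict:
    """Serialize the graph deterministically and stamp it with a content hash."""
    body = json.dumps(
        {
            "nodes": sorted(nodes, key=lambda n: n["id"]),
            "edges": sorted(edges, key=lambda e: (e["src"], e["rel"], e["dst"])),
        },
        sort_keys=True,
    )
    return {
        "extractor_version": EXTRACTOR_VERSION,
        "content_hash": hashlib.sha256(body.encode()).hexdigest(),
        "graph": body,
    }
```

Sorting before hashing is the whole trick: extraction order no longer matters, so the content hash changes if and only if the graph changes.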
Query shape: anchor → slice
Most practical queries start from an anchor (a failing test, a function, a file path) and pull a bounded neighborhood:
- include “neighbors” by relation type (imports, exercised-by, documented-by)
- cap the result (`max_nodes`, token budget)
- order results deterministically (so the same query yields the same packet)
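A minimal sketch of that anchor-to-slice query as a capped, deterministically ordered breadth-first walk over an adjacency map. The adjacency shape is an assumption, not a required format.

```python
from collections import deque


def slice_graph(adj, anchors, allowed_rels, max_nodes=50):
    """Bounded neighborhood around the anchors.

    adj: {node_id: [(rel, neighbor_id), ...]} adjacency map.
    Returns node ids in a deterministic visit order, capped at max_nodes.
    """
    order = sorted(anchors)       # deterministic starting order
    seen = set(order)
    queue = deque(order)
    while queue and len(order) < max_nodes:
        current = queue.popleft()
        # sort neighbors so the same query always yields the same packet
        for rel, nbr in sorted(adj.get(current, [])):
            if rel in allowed_rels and nbr not in seen:
                seen.add(nbr)
                order.append(nbr)
                queue.append(nbr)
                if len(order) >= max_nodes:
                    break
    return order
```

The cap plus the sorted expansion gives you both properties at once: the slice is small, and re-running the query reproduces it byte for byte.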
Task packet shape (what you hand to the model)
Treat the task packet as an auditable build artifact:
- anchors: “what is being changed” (file, symbol, failing test, contract path)
- Map first: contracts, rules, validators, allowed edit region
- Terrain next: the minimum implementation excerpts and evidence needed
- budgets: `max_nodes`, `max_tokens`, freshness window, safety filters
- provenance: commit hash + graph snapshot hash + extractor version
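One way to sketch the packet builder under those constraints: Map sections always ship whole, Terrain excerpts are trimmed from the end to respect the token budget, and provenance is stamped on. The four-characters-per-token estimate is a crude assumption; use your tokenizer in practice.

```python
def build_task_packet(anchors, map_sections, terrain_excerpts,
                      commit_hash, snapshot_hash, max_tokens=8000):
    """Assemble an auditable task packet: Map first, Terrain trimmed to budget."""
    used = sum(len(s) // 4 for s in map_sections)  # crude ~4-chars-per-token estimate
    kept_terrain = []
    for excerpt in terrain_excerpts:
        cost = len(excerpt) // 4
        if used + cost > max_tokens:
            break  # Terrain is the first thing to drop; Map always ships whole
        kept_terrain.append(excerpt)
        used += cost
    return {
        "anchors": anchors,
        "map": map_sections,          # contracts, rules, allowed edit region
        "terrain": kept_terrain,      # minimum implementation evidence
        "budgets": {"max_tokens": max_tokens},
        "provenance": {"commit": commit_hash, "graph_snapshot": snapshot_hash},
    }
```

Because the packet carries the commit hash and snapshot hash, any reviewer can rebuild exactly what the model saw.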
Illustrative scale (not a benchmark)
For a medium polyglot repo, it’s common to end up with tens of thousands of nodes and edges once you include files, symbols, doc sections, and test cases.
One illustrative shape:
- nodes: ~15,000
- edges: ~45,000
- serialized graph: a few megabytes (depending on how much you store per node)
The point is not the absolute size. The point is that a slice is small, bounded, and reproducible.
Implementation options for code repositories (from light to heavy)
You can “roll your own” Context Graph, or you can adopt parts of existing graph tooling. The correct choice depends on the questions you need to answer.
Common options:
Workspace/package graph (manifests + dependency edges): nodes are packages/apps; edges come from build manifests. This is enough for polyglot identity and driver selection, and it supports governance checks like coupling hotspots.
Test coverage graph (coverage-derived edges): treat “exercises” as an edge from tests to code regions. This makes test selection and safe refactors more deterministic without needing a full call graph.
Parser-backed symbol index (syntax trees + import graph): use language parsers (for example Tree-sitter) to index files, symbols, and imports across a polyglot repo. This is the usual on-ramp to reliable repo-level slicing.
Program analysis graphs (security-grade structure): for deeper semantic slicing, some teams build a code property graph that unifies syntax structure, control flow, and data flow. These graphs are common in vulnerability analysis, and can be sliced into small subgraphs for focused reasoning (for example via Joern).
Graph-backed retrieval (graph slices + text retrieval): treat the graph as the structural skeleton, then rank nodes with semantic search and expand through edges. Many “graph rag” approaches are just this: retrieve a small subgraph, then render it into a task packet.
Graph database backends (optional): if you need complex traversals, incremental updates, or multiple consumers, store your graph in a queryable backend (for example Neo4j or Memgraph). If your queries are simple, a file-backed index is often enough.
The rule is to build what you query. Don’t build a graph because graphs are fashionable. Define the packet queries you need, then implement the minimal graph and sensors that answer them deterministically.
Loop (operational mechanisms)
Deterministic Sandwich: prep → model → validation.
Feedback Injection: feed the failing signal back into the next iteration (stderr, linter lines, test output).
Driver Pattern: decouple “what to do” from “how to do it” across polyglot stacks.
Salvage Protocol: move failed/reverted attempts into a Quarantine directory (for example `.sdac/workflow-quarantine/`) so useful fragments are recoverable.
Governance (automated sovereignty)
Scope Guard: blast-radius control via explicit path allowlists.
Mission Gate: run acceptance criteria without running the model.
Dream Daemon: scheduled maintenance that emits Missions from entropy signals.
Immutable Infrastructure: protect graders and guardrails from self-editing.
Directives: human-authored behavioral laws (for example `agent_directives.md`). Unlike Physics, Directives constrain the agent instruction layer and should be treated as protected surfaces.
Mirror: keep the system's self-image consistent with its implementation.
Reference implementation note: Dream in the genAIbook engine
The book is not a “here is the finished system” handoff. But it can be useful to see one real Dream loop wiring so you know what “Depth 2” looks like in practice.
In this repository, Dream is exposed as Make targets:
make dream
make dream-loop

Under the hood it runs `python -m tools.dream`, which builds a `core.workflow.Workflow` with explicit Steps:
- read prime directive
- scan entropy (coverage + static analysis + duplication)
- decide action (`test`, `refactor`, `split`, `dedupe`, or `reflect`)
- run the selected action via Make targets
- run the full Immune System suite as a post-check
The important pattern is visible: Dream does not invent new capabilities from text. It chooses from an allowlisted action set, runs deterministic gates, and emits evidence you can audit.
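A schematic of that allowlisted decide step. The thresholds and signal names here are invented for illustration; the real logic lives in `tools.dream`.

```python
ALLOWED_ACTIONS = {"test", "refactor", "split", "dedupe", "reflect"}


def decide_action(entropy: dict) -> str:
    """Map entropy signals to an action; the result is guaranteed to be allowlisted."""
    if entropy.get("coverage", 1.0) < 0.80:        # threshold is an assumption
        action = "test"
    elif entropy.get("duplication", 0.0) > 0.10:   # threshold is an assumption
        action = "dedupe"
    else:
        action = "reflect"
    assert action in ALLOWED_ACTIONS  # Dream never invents capabilities from text
    return action
```

The assertion is the pattern in miniature: even if the decision logic changes, the action space stays closed, so every Dream run remains auditable against a fixed menu.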
Meta-Patterns (the stance)
These are the stances that make the mechanisms compose.
Don’t Chat, Compile: use chat for ideation, but run production work through versioned Mission Objects and deterministic runners that produce diffs and logs.
Physics is Law: if a change fails Physics, it does not exist. No warnings. No loopholes.
Recursion: the system uses the same toolchain to maintain itself (Dream Daemon emits Missions; the loop validates those changes under the same gates).
Mission Templates
Missions are the initial executable contracts given to your SDaC system. They define the intent, scope, and expected outcomes, guiding the Map-Updater’s generation process. Clear missions are crucial for deterministic and controllable output.
Basic Feature Development Mission
This template outlines a mission to implement a new API endpoint. It specifies the functional requirements, scope, and expected validation.
mission_name: "Implement user profile GET API endpoint"
description: |
  Add a new REST API endpoint to retrieve user profile information by ID.
  The endpoint should be `/api/v1/users/{user_id}`.
  It must return a JSON object conforming to the `UserProfileSchema`.
  Authentication and authorization checks are required.
scope:
  - "backend/src/api/v1/users.py"
  - "backend/src/schemas/user.py"
  - "backend/tests/api/v1/test_users.py"
requirements:
  - "authentication_enforced"
  - "authorization_checked_for_owner_or_admin"
  - "response_matches_user_profile_schema"
  - "unit_tests_pass"
acceptance_criteria:
  - "GET /api/v1/users/{id} returns 200 OK for valid ID and authorized user."
  - "GET /api/v1/users/{id} returns 404 Not Found for invalid ID."
  - "GET /api/v1/users/{id} returns 401 Unauthorized for unauthenticated requests."
  - "GET /api/v1/users/{id} returns 403 Forbidden for unauthorized users trying to access other profiles."
output_format: "git_diff_with_commit_message"
context_files:
  - "backend/src/models/user.py"
  - "backend/src/services/user_service.py"
  - "backend/pyproject.toml"
max_iterations: 7  # Maximum attempts for the Map-Updater

Bug Fix Mission
This template focuses on reproducing, fixing, and verifying a specific bug. The emphasis is on clear reproduction steps and test-driven verification.
mission_name: "Fix pagination off-by-one error in product list"
description: |
  Users report that when navigating to the second page of the product list,
  the first item of the second page is a duplicate of the last item of the first page.
  This appears to be an off-by-one error in the pagination logic.
scope:
  - "backend/src/services/product_service.py"
  - "backend/tests/services/test_product_service.py"
requirements:
  - "reproduce_bug_with_new_test_case"
  - "fix_verified_by_passing_new_test"
  - "all_existing_tests_pass"
acceptance_criteria:
  - "Pagination correctly fetches distinct items across page boundaries."
  - "No duplicate items appear when navigating through pages."
output_format: "git_diff_with_commit_message"
context_files:
  - "backend/src/services/product_service.py"
  - "backend/src/repositories/product_repository.py"
  - "backend/tests/services/test_product_service.py"
max_iterations: 4

Validator Recipes by Stack
Validators are the bedrock of SDaC, providing deterministic gates that ensure quality, correctness, and adherence to standards. These recipes provide common configurations for various technology stacks.
Python Development Stack
A robust Python validation pipeline typically includes schema validation (e.g., Pydantic), static type checking, linting, and unit tests.
validators:
  - name: PythonTypeChecker
    description: "Ensures type annotations are consistent and correct."
    command: ["mypy", "--strict", "src/"]
    cwd: "backend/"
    on_fail: "fail_and_retry"
    output_parser:
      type: "regex"
      pattern: "^(.*?):(\\d+): error: (.*)$"  # Example: file:line: error: message
      groups: ["file", "line", "message"]
  - name: PythonLinter
    description: "Checks code style and common pitfalls."
    command: ["flake8", "src/"]
    cwd: "backend/"
    on_fail: "fail_and_retry"
    output_parser:
      type: "regex"
      pattern: "^(.*?):(\\d+):(\\d+): (.*)$"  # Example: file:line:col: message
      groups: ["file", "line", "column", "message"]
  - name: PythonUnitTests
    description: "Executes unit tests and checks for failures."
    command: ["pytest", "--cov=src", "--json-report", "--json-report-file=.pytest_cache/report.json"]
    cwd: "backend/"
    on_fail: "fail_and_retry"
    output_parser:
      type: "jsonpath"
      path: "$.summary.failed"
      success_criteria: "value == 0"  # Assumes value is the number of failed tests
  - name: PydanticSchemaValidation
    description: "Ensures generated API schemas are valid Pydantic models (requires custom script)."
    command: ["python", "scripts/validate_pydantic_schemas.py", "src/schemas"]
    cwd: "backend/"
    on_fail: "fail_and_retry"
    # Assuming the script exits with 0 for success, non-0 for failure.

TypeScript/Node Development Stack
For TypeScript, critical validators include the TypeScript compiler, ESLint for code quality, and a unit testing framework like Jest.
validators:
  - name: TypeScriptCompiler
    description: "Verifies TypeScript syntax and type correctness."
    command: ["tsc", "--noEmit"]
    cwd: "frontend/"
    on_fail: "fail_and_retry"
    output_parser:
      type: "regex"
      pattern: "^(.*?)\\((\\d+),(\\d+)\\): error (TS\\d+): (.*)$"  # Example: file(line,col): error TSxxxx: message
      groups: ["file", "line", "column", "errorCode", "message"]
  - name: ESLintLinter
    description: "Enforces code style and best practices for TypeScript/JavaScript."
    command: ["eslint", "--format", "json", "src/", "--max-warnings=0"]
    cwd: "frontend/"
    on_fail: "fail_and_retry"
    output_parser:
      type: "jsonpath"
      path: "$[?(@.errorCount > 0)].errorCount"  # Collect error counts for files with errors
      success_criteria: "value.length == 0"  # The array of error counts should be empty
  - name: JestUnitTests
    description: "Runs Jest unit tests and reports failures."
    command: ["jest", "--ci", "--json", "--outputFile=jest-report.json"]
    cwd: "frontend/"
    on_fail: "fail_and_retry"
    output_parser:
      type: "jsonpath"
      path: "$.numFailedTests"
      success_criteria: "value == 0"
  - name: ZodSchemaValidation
    description: "Ensures data schemas adhere to Zod definitions (e.g., for API contracts)."
    command: ["node", "scripts/validate_zod_schemas.js", "src/schemas"]
    cwd: "frontend/"
    on_fail: "fail_and_retry"
    # Assuming the script exits with 0 for success.

Terraform/Infrastructure as Code Stack
Infrastructure as Code (IaC) requires specialized validation for syntax, best practices, security, and idempotency.
validators:
  - name: TerraformSyntaxValidate
    description: "Checks Terraform configuration syntax and argument validity."
    command: ["terraform", "validate"]
    cwd: "infra/aws/"
    on_fail: "fail_and_retry"
  - name: TFLint
    description: "Lints Terraform code for errors, warnings, and best practices."
    command: ["tflint", "--recursive", "--format", "json"]
    cwd: "infra/aws/"
    on_fail: "fail_and_retry"
    output_parser:
      type: "jsonpath"
      path: "$.issues"
      success_criteria: "value.length == 0"
  - name: CheckovSecurityScan
    description: "Scans Terraform for common security misconfigurations and policy violations."
    command: ["checkov", "-d", ".", "--framework", "terraform", "--output", "json"]
    cwd: "infra/aws/"
    on_fail: "fail_and_retry"
    output_parser:
      type: "jsonpath"
      path: "$.results.failed_checks"
      success_criteria: "value.length == 0"
  - name: TerraformPlanDiffCheck
    description: "Generates a Terraform plan and checks if any resource changes are proposed."
    command: ["bash", "-c", "terraform plan -no-color | grep 'No changes. Your infrastructure matches the configuration.'"]
    cwd: "infra/aws/"
    on_fail: "fail_and_retry"  # If the plan proposes changes, something is wrong or the mission wasn't meant to change anything.
    success_criteria: "exit_code == 0"  # grep returns 0 if the string is found (no changes)

Effector Patterns
Effectors are the final stage of a successful SDaC loop, taking the validated changes and applying them to the external world. These patterns integrate with common development workflows.
Git Commit Effector
This effector commits the generated and validated changes to a Git repository, potentially pushing to a remote.
effector:
  type: "git_commit"
  config:
    auto_push: true
    branch_prefix: "sdac-"  # New branch names will start with 'sdac-'
    branch_name_template: "{mission_name_slug}-{mission_id_short}"  # e.g., sdac-implement-user-profile-abc123
    commit_message_template: |
      feat({mission_name_slug}): {description_summary}

      Automated by SDaC loop for mission: {mission_id}
      See audit trace: {audit_trace_url}
    git_repo_path: "./"  # Relative path to the git repository

Pull Request Creation Effector (GitHub)
After committing, this pattern automatically opens a pull request on GitHub, ready for human review.
effector:
  type: "github_pull_request"
  config:
    git_repo_path: "./"
    base_branch: "main"
    title_template: "[SDaC] {mission_name} - Automated Change"
    body_template: |
      This Pull Request was automatically generated by an SDaC loop.

      **Mission:** {mission_name}
      **Description:** {description}
      **Mission ID:** {mission_id}
      **Audit Trace:** {audit_trace_url}

      Please review the changes carefully.
    labels: ["sdac-automated", "awaiting-review"]
    reviewers: ["@dev-lead", "@qa-engineer"]  # Optional: Request specific reviewers
    draft: true  # Creates a draft PR initially

CI/CD Trigger Effector (Webhook)
This effector sends a webhook to trigger a CI/CD pipeline, allowing further automated checks or deployment.
effector:
  type: "webhook"
  config:
    url: "https://api.github.com/repos/{github_org}/{github_repo}/dispatches"  # GitHub Actions example
    method: "POST"
    headers:
      Authorization: "Bearer {github_token}"
      Accept: "application/vnd.github.v3+json"
    payload:
      event_type: "sdac_changes_detected"
      client_payload:
        mission_id: "{mission_id}"
        branch: "{branch_name}"
        commit_sha: "{commit_sha}"
        repo_name: "{github_repo}"

Local File System Effector
For local development and rapid iteration, this effector saves the generated changes to a specific local directory.
effector:
  type: "local_filesystem"
  config:
    output_directory: "sdac_artifacts/{mission_id}"
    filename_template: "{mission_name_slug}_changes.patch"
    format: "diff"  # Can also be 'full_files' for complete changed files

Circuit Breaker Configurations
Circuit breakers are critical safety mechanisms that prevent an SDaC loop from running out of control, consuming excessive resources, or making too many unsuccessful attempts.
Max Iterations Breaker
This is the simplest and most common circuit breaker, limiting the number of times the Map-Updater can attempt to satisfy a mission.
circuit_breaker:
  type: "max_iterations"
  config:
    limit: 10  # Stop after 10 attempts by the Map-Updater
    on_break: "log_and_notify"
    notification_channel: "slack_#sdac-alerts"

Time Limit Breaker
Prevents a single SDaC loop execution from running for an unreasonably long time, indicating a potential stall or complex problem.
circuit_breaker:
  type: "time_limit"
  config:
    duration_seconds: 900  # Stop after 15 minutes (900 seconds)
    on_break: "log_and_cancel"  # Cancel any ongoing Map-Updater processes
    notification_channel: "pagerduty_on_call"

Validation Failure Threshold Breaker
Halts the loop if the Map-Updater consistently fails validation, suggesting it’s unable to produce correct output or the mission is ill-defined.
circuit_breaker:
  type: "validation_failure_threshold"
  config:
    consecutive_failures: 3  # Stop if 3 consecutive validation runs fail
    total_failures: 5  # Stop if 5 total validation runs fail across all iterations
    on_break: "log_and_suspend"  # Suspend the loop; require manual intervention to resume
    notification_channel: "email_sdac-owners"

Human Intervention Breaker
Introduces a manual approval step into the loop, allowing engineers to review changes before further automation proceeds. This is particularly useful for sensitive operations or after a certain number of automatic retries.
circuit_breaker:
  type: "manual_approval"
  config:
    require_after_iterations: 3  # After 3 automatic iterations, require human approval
    prompt_message: "SDaC loop has made 3 attempts. Review proposed changes before proceeding."
    notification_channel: "slack_#sdac-approvals"
    approval_timeout_minutes: 60  # If no approval within 60 minutes, break the loop
    on_break: "log_and_discard_changes"

Actionable: What you can do this week
Choose a stack: Select one of the “Validator Recipes by Stack” that matches a common technology in your codebase (e.g., Python, TypeScript, Terraform).
Pick a simple mission: Adapt the “Basic Feature Development Mission” or “Bug Fix Mission” template to a small, isolated change in your chosen stack.
Configure a local loop: Set up a basic SDaC environment that uses your chosen mission, the selected validator recipe, and the “Local File System Effector.” Your goal is to see the system generate changes, run the validators against them, and then output a diff to a local directory.
Experiment with circuit breakers: Start with the “Max Iterations Breaker” and set a low limit (e.g., 2 or 3) to observe how the system handles reaching that threshold.