Appendix C: Pattern Library
This appendix serves as a practical library of patterns, templates, and configurations for various components of a Software Development as Code (SDaC) loop. Exhausted engineers and technical leaders need blueprints, not just concepts. Here, you’ll find concrete examples for defining missions, crafting robust validators for common technology stacks, configuring effectors to integrate with your existing workflows, and establishing circuit breakers for safety and control.
These patterns are designed to be immediately actionable. Treat them as starting points, adapting them to your specific environment and requirements. Each pattern emphasizes clarity, auditable configuration, and a focus on verifiable outcomes.
How to use this appendix
Treat this appendix as a library with five shelves:
- Mission Object Templates: minimal and production-shaped work contracts.
- Validator Recipes by Stack: runner-side checks for Python, TypeScript, Terraform, and similar surfaces.
- Effector / Output Contract Patterns: stable system-message and diff/JSON output shapes.
- Runner / Policy Configuration: local adapter logic, allowlists, and governance wiring.
- Circuit Breakers: stop conditions, budgets, and human-intervention policies.
If a chapter points here, it should be because you want a reusable artifact, not because the main text ran out of room. If a detail does not fit one of these shelves cleanly, keep it in the chapter that needs it instead of turning this appendix into a junk drawer.
Artifact taxonomy (keep these surfaces separate)
This appendix includes three different kinds of artifacts. They should not be collapsed into one schema:
- Mission Objects: executable work contracts. These should use Chapter 7’s field names and validation posture: the core contract (mission_id, mission_version, goal, scope, quality_gate, fallbacks) plus optional fields such as dependencies, constraints, acceptance_criteria, budgets, rollback_on, and scope.edit_regions when deterministic gates can enforce them.
- Runner configuration: local implementation details for how your runner invokes validators, parses findings, and applies Effectors. These are adapter-layer details, not Mission Object fields.
- Policy configuration: circuit breakers, approvals, notification rules, and other governance settings that shape loop behavior.
If your local tooling uses different field names, translate at the runner boundary. Do not teach readers a second Mission Object schema.
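As a minimal sketch of translating at the runner boundary: the local names below (job_id, check_cmd) are hypothetical examples of an internal runner schema, not names from this book; the canonical Chapter 7 names stay untouched in the docs.

```python
# Translate canonical Mission Object field names at the runner boundary.
# "job_id" and "check_cmd" are hypothetical internal names for illustration.
CANON_TO_LOCAL = {
    "mission_id": "job_id",
    "quality_gate": "check_cmd",
}

def to_local(mission: dict) -> dict:
    """Map canonical Mission Object keys onto local runner keys,
    passing unknown keys through unchanged."""
    return {CANON_TO_LOCAL.get(k, k): v for k, v in mission.items()}

local = to_local({"mission_id": "fix-pagination", "goal": "fix bug"})
```

The translation lives in one small adapter function, so the manuscript schema never leaks a second set of names.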
Canonical reusable artifacts
Use this appendix as the book’s single copy-and-specialize home for reusable loop artifacts:
- canonical Effector output-contract patterns
- canonical minimal and production Mission Object templates
- the small Day 2 hardening checklist
Other chapters should explain when to use these artifacts. This appendix should hold the reusable wording and shapes.
Minimum Viable Factory (MVF) v0: demo tools (companion repo)
Chapter 1’s MVF v0 loop lives in the companion code repo:
github.com/kjwise/aoi_code.
If you want to read or modify the runnable tools, start here:
- factory/tools/sync_public_interfaces.py — Effector (diff-first; apply optional)
- factory/tools/validate_map_alignment.py — Validator (Map/Terrain alignment)
- Makefile — command panel (make sync, make validate, make all)
The contract is intentionally tiny:
Effector interface:
python3 factory/tools/sync_public_interfaces.py \
--src <terrain_dir> \
--doc <map_file> \
[--apply]
Validator interface:
python3 factory/tools/validate_map_alignment.py \
--src <terrain_dir> \
--doc <map_file>
Logic flow (sketch):
# Effector
signatures = extract_public_signatures(src)
doc_after = rewrite_heading_block(doc_before, heading="## Public Interfaces", items=signatures)
diff = unified_diff(doc_before, doc_after)
if apply: write(doc_after)
# Validator
terrain = extract_public_signatures(src)
map_items = parse_items_from_heading_block(doc_text, heading="## Public Interfaces")
exit(0 if sets_match(terrain, map_items) else 1)  # pass only when Map and Terrain agree
This is the smallest useful Map surface: one bounded heading block that can’t drift without tripping a deterministic gate.
The Three-Tier Frame (Substrate → Loop → Governance)
Use this as a completeness checklist for autonomous engineering. If you’re missing one layer, the system fails in a predictable way.
Substrate (existential foundation)
Map / Terrain / Ledger: explicit intent, explicit reality, and an auditable history.
Physics: deterministic rules that halt bad changes (lint, schema, tests, architectural constraints).
Context Graph: topology and identity you can query (nodes: apps/packages; edges: dependencies). Nodes carry language/framework identity so Drivers select the right mechanisms without guessing.
Mission Object (Chapter 7): the unit-of-work contract + state of record (goal, scope, budgets, gates, lifecycle status).
Context Graph implementation guide (tool-agnostic)
You do not need a fancy platform to get value from a Context Graph. You need stable identity, stable edges, and a way to query “what is relevant to this task?” without dumping the repo.
Two artifacts make this practical:
- A graph snapshot (repo-wide): deterministic nodes and edges you can query.
- A task packet (per Mission Object): a small, bounded slice pulled from the snapshot (plus live sensor output) with provenance and budgets.
Some systems start with packets (a deterministic “prep” builder) and add the graph later. Others start with a coarse graph (workspace packages and dependencies) and improve the slicer over time. Either path works if the outputs are deterministic and logged.
Minimal data model
- Node: stable id, kind, and optional identity (language/build) plus pointers to raw artifacts (file path, symbol name, doc heading).
- Edge: typed relations (imports, depends_on, documents, exercises, validated_by) with stable src and dst.
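One possible shape for this data model, sketched as Python dataclasses; the field choices beyond id, kind, src, and dst are illustrative assumptions, not a required schema:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Node:
    id: str                 # stable identifier, e.g. "pkg:backend"
    kind: str               # "package", "file", "symbol", "doc_section", ...
    identity: dict = field(default_factory=dict)  # language/build facts
    artifact: str = ""      # pointer to the raw artifact (path, heading)

@dataclass(frozen=True)
class Edge:
    src: str                # Node.id
    dst: str                # Node.id
    relation: str           # "imports", "depends_on", "documents", ...

n = Node(id="pkg:backend", kind="package", identity={"language": "python"})
e = Edge(src="pkg:frontend", dst="pkg:backend", relation="depends_on")
```

Keeping nodes and edges this plain makes every storage option below (in-memory, JSON, database) a trivial serialization target.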
Identity is computed, not inferred
Treat identity as a deterministic extraction problem: read manifests and attach the result to nodes. A node should carry enough identity for the Driver Pattern to select the right mechanism without guessing.
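A sketch of that deterministic extraction, assuming identity comes only from which manifests exist; the manifest-to-facts table is an illustrative assumption you would extend for your stacks:

```python
# Identity is computed from manifests, never inferred from code content.
MANIFEST_RULES = {
    "pyproject.toml": {"language": "python"},
    "package.json": {"language": "javascript"},
    "go.mod": {"language": "go"},
    "Cargo.toml": {"language": "rust"},
}

def compute_identity(manifest_names) -> dict:
    """Given the manifest filenames present in a package directory,
    return deterministic identity facts to attach to the node."""
    identity = {}
    for manifest, facts in MANIFEST_RULES.items():
        if manifest in manifest_names:
            identity.update(facts)
            identity["manifest"] = manifest
    return identity

ident = compute_identity(["pyproject.toml", "README.md"])
```

Because the same directory listing always yields the same identity dict, a Driver can select mechanisms from it without guessing.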
Storage options (pick the simplest that works)
- In-memory adjacency maps (small repos, fast iteration).
- A file-backed index (JSON/JSONL) that you can rebuild deterministically.
- A lightweight database when you need richer queries or incremental updates.
Incremental rebuild strategy
- Detect changed files (Git diff, file watcher, or CI inputs).
- Re-extract nodes for those files.
- Re-derive edges that touch those nodes.
- Persist a new graph snapshot with a content hash and extractor version.
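The last step above (persist with a content hash and extractor version) can be sketched as follows; the snapshot schema is an illustrative assumption, but the sorting is the point, since it makes the hash independent of extraction order:

```python
import hashlib
import json

EXTRACTOR_VERSION = "0.1.0"  # bump when extraction logic changes

def make_snapshot(nodes, edges):
    """Return a persist-ready graph snapshot with a deterministic content hash."""
    body = {
        "nodes": sorted(nodes),
        "edges": sorted(edges),
        "extractor_version": EXTRACTOR_VERSION,
    }
    canonical = json.dumps(body, sort_keys=True)
    body["content_hash"] = hashlib.sha256(canonical.encode()).hexdigest()
    return body

s1 = make_snapshot(["a", "b"], [["a", "b"]])
s2 = make_snapshot(["b", "a"], [["a", "b"]])  # different input order, same hash
```

Two rebuilds over identical content produce identical hashes, which is what lets you log provenance and detect drift cheaply.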
Query shape: anchor → slice
Most practical queries start from an anchor (a failing test, a function, a file path) and pull a bounded neighborhood:
- include “neighbors” by relation type (imports, exercised-by, documented-by)
- cap the result (max_nodes, token budget)
- order results deterministically (so the same query yields the same packet)
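The anchor-to-slice query above can be sketched as a bounded breadth-first expansion; the adjacency format {node: [(relation, neighbor), ...]} and the example node names are assumptions for illustration:

```python
from collections import deque

def slice_graph(adjacency, anchor, relations, max_nodes):
    """Expand outward from an anchor, filtered by relation type, capped at
    max_nodes, visiting neighbors in sorted order for reproducibility."""
    seen = [anchor]
    queue = deque([anchor])
    while queue and len(seen) < max_nodes:
        node = queue.popleft()
        for relation, neighbor in sorted(adjacency.get(node, [])):
            if relation in relations and neighbor not in seen:
                seen.append(neighbor)
                queue.append(neighbor)
                if len(seen) >= max_nodes:
                    break
    return seen

adj = {
    "test_users.py": [("exercises", "users.py")],
    "users.py": [("imports", "schemas.py"), ("imports", "db.py")],
}
nodes = slice_graph(adj, "test_users.py", {"exercises", "imports"}, max_nodes=3)
```

Same anchor, same graph, same budget: same slice, every time.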
Task packet shape (what you hand to the model)
Treat the task packet as an auditable build artifact:
- anchors: “what is being changed” (file, symbol, failing test, contract path)
- Map first: contracts, rules, validators, allowed edit region
- Terrain next: the minimum implementation excerpts and evidence needed
- budgets: max_nodes, max_tokens, freshness window, safety filters
- provenance: commit hash + graph snapshot hash + extractor version
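A sketch of the packet as an auditable artifact with the sections listed above; every concrete value (paths, hashes) is a placeholder for illustration:

```python
def build_task_packet(anchors, map_slice, terrain_slice, budgets,
                      commit, graph_hash):
    """Assemble a bounded, provenance-stamped packet for the model."""
    return {
        "anchors": anchors,              # what is being changed
        "map": map_slice,                # contracts, rules, validators first
        "terrain": terrain_slice,        # minimal implementation excerpts
        "budgets": budgets,              # max_nodes, max_tokens, ...
        "provenance": {
            "commit": commit,
            "graph_snapshot_hash": graph_hash,
            "extractor_version": "0.1.0",
        },
    }

packet = build_task_packet(
    anchors=["tests/test_users.py::test_get_profile"],
    map_slice=["## Public Interfaces"],
    terrain_slice=["backend/src/api/v1/users.py (excerpt)"],
    budgets={"max_nodes": 50, "max_tokens": 8000},
    commit="deadbeef",
    graph_hash="abc123",
)
```

Logging this dict next to the model call is what makes a packet reproducible and auditable rather than an ad hoc prompt.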
Illustrative scale (not a benchmark)
For a medium polyglot repo, it’s common to end up with tens of thousands of nodes and edges once you include files, symbols, doc sections, and test cases.
One illustrative shape:
- nodes: ~15,000
- edges: ~45,000
- serialized graph: a few megabytes (depending on how much you store per node)
The point is not the absolute size. The point is that a slice is small, bounded, and reproducible.
Implementation options for code repositories (from light to heavy)
You can “roll your own” Context Graph, or you can adopt parts of existing graph tooling. The correct choice depends on the questions you need to answer.
Common options:
Workspace/package graph (manifests + dependency edges): nodes are packages/apps; edges come from build manifests. This is enough for polyglot identity and driver selection, and it supports governance checks like coupling hotspots.
Test coverage graph (coverage-derived edges): treat “exercises” as an edge from tests to code regions. This makes test selection and safe refactors more deterministic without needing a full call graph.
Parser-backed symbol index (syntax trees + import graph): use language parsers (for example Tree-sitter) to index files, symbols, and imports across a polyglot repo. This is the usual on-ramp to reliable repo-level slicing.
Program analysis graphs (security-grade structure): for deeper semantic slicing, some teams build a code property graph that unifies syntax structure, control flow, and data flow. These graphs are common in vulnerability analysis, and can be sliced into small subgraphs for focused reasoning (for example via Joern).
Graph-backed retrieval (graph slices + text retrieval): treat the graph as the structural skeleton, then rank nodes with semantic search and expand through edges. Many “graph rag” approaches are just this: retrieve a small subgraph, then render it into a task packet.
Graph database backends (optional): if you need complex traversals, incremental updates, or multiple consumers, store your graph in a queryable backend (for example Neo4j or Memgraph). If your queries are simple, a file-backed index is often enough.
The rule is to build what you query. Don’t build a graph because graphs are fashionable. Define the packet queries you need, then implement the minimal graph and sensors that answer them deterministically.
Loop (operational mechanisms)
Deterministic Sandwich: prep → model → validation.
Feedback Injection: feed the failing signal back into the next iteration (stderr, linter lines, test output).
Driver Pattern: decouple “what to do” from “how to do it” across polyglot stacks.
Salvage Protocol: move failed/reverted attempts into a Quarantine directory (for example .sdac/workflow-quarantine/) so useful fragments are recoverable.
Governance (automated sovereignty)
Scope Guard: blast-radius control via explicit path allowlists.
Mission Gate: run acceptance criteria without running the model.
Dream Daemon: scheduled maintenance that emits Missions from entropy signals.
Immutable Infrastructure: protect graders and guardrails from self-editing.
Directives: human-authored behavioral laws (for example agent_directives.md). Unlike Physics, Directives constrain the agent instruction layer and should be treated as protected surfaces.
Mirror: keep the system’s self-image consistent with its implementation.
Reference implementation note: Dream in the genAIbook engine
The book is not a “here is the finished system” handoff. But it can be useful to see one real Dream loop wiring so you know what “Depth 2” looks like in practice.
In this repository, Dream is exposed as Make targets:
make dream
make dream-loop
Under the hood it runs python -m tools.dream, which
builds a core.workflow.Workflow with explicit Steps:
- read prime directive
- scan entropy (coverage + static analysis + duplication)
- decide action (test, refactor, split, dedupe, or reflect)
- run the selected action via Make targets
- run the full Immune System suite as a post-check
The important pattern is visible: Dream does not invent new capabilities from text. It chooses from an allowlisted action set, runs deterministic gates, and emits evidence you can audit.
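A sketch of that allowlisted decide step; the action names mirror the list above, but the entropy signals and thresholds here are illustrative assumptions, not the engine’s actual values:

```python
# Dream chooses from a fixed action set; it never executes free-form text.
ALLOWED_ACTIONS = {"test", "refactor", "split", "dedupe", "reflect"}

def decide_action(entropy: dict) -> str:
    """Map deterministic entropy signals to exactly one allowlisted action.
    Thresholds are illustrative placeholders."""
    if entropy.get("coverage", 1.0) < 0.8:
        return "test"
    if entropy.get("duplication", 0.0) > 0.1:
        return "dedupe"
    if entropy.get("max_file_lines", 0) > 800:
        return "split"
    return "reflect"

action = decide_action({"coverage": 0.6})
assert action in ALLOWED_ACTIONS  # anything outside the allowlist is a bug
```

The assertion is the governance point: the decision function can only ever emit something the post-check suite knows how to gate.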
Meta-Patterns (the stance)
These are the stances that make the mechanisms compose.
Don’t Chat, Compile: use chat for ideation, but run production work through versioned Mission Objects and deterministic runners that produce diffs and logs.
Physics is Law: if a change fails Physics, it does not exist. No warnings. No loopholes.
Recursion: the system uses the same toolchain to maintain itself (Dream Daemon emits Missions; the loop validates those changes under the same gates).
System Message / Output Contract Patterns
In an SDaC loop, Prep selects a bounded slice and
Validation grades the artifact. The system message is the
instruction layer that turns a general-purpose model into a component
with an output contract.
The goal is determinism at the interface: machine-parseable output, no commentary, and a fail-closed mode when constraints can’t be satisfied.
Different platforms call this a system message or instruction layer. The name doesn’t matter. The output contract does.
This section is the canonical copy-and-specialize source for output-contract text used elsewhere in the book.
Keep the system message stable per Effector. Put task-specific intent in the Mission Object and the slice.
The Effector system message (diff-only)
Use this when the model’s job is to act as an Effector that proposes a patch.
You are an Effector component in a deterministic SDaC loop.
You are not a chat interface. You do not explain. You do not teach.
Your output is machine-consumed and will be validated by deterministic Physics.
Output contract:
- Output MUST be a single unified diff (git-style patch).
- Do NOT wrap the diff in Markdown fences.
- Do NOT include commentary, headings, or prose before/after the diff.
- The diff MUST use paths relative to the repository root.
Scope and safety:
- Only modify files that are explicitly in the scope allowlist provided in the Mission Object.
- Do not touch protected paths or policy surfaces unless the Mission Object explicitly allows it.
- Keep the diff minimal: smallest change that satisfies the acceptance criteria.
- Preserve existing style and formatting unless the Mission Object requires otherwise.
Failure mode (fail closed):
- If you cannot comply with the output contract or constraints, output a single JSON object (and nothing else):
{"status":"fail","reason":"<one sentence>","blocking_issue":"<short label>"}
You will be given:
- A Mission Object (intent, scope, constraints, acceptance criteria)
- A context slice (evidence: Map + Terrain + failing signals)
Now produce the diff.
The Effector system message (JSON-only)
Use this when the Effector’s artifact is structured data (a plan, a report, a mission draft, a config snippet) and you need the model to “shut up and output JSON.”
You are an Effector component in a deterministic SDaC loop.
Your output is machine-consumed and will be parsed strictly.
Output contract:
- Output MUST be a single JSON object.
- Do NOT wrap the JSON in Markdown fences.
- Do NOT include any additional keys beyond the schema provided in the Mission Object.
- Do NOT include commentary or trailing text.
Failure mode (fail closed):
- If you cannot produce valid JSON that matches the schema, output:
{"status":"fail","reason":"<one sentence>","blocking_issue":"<short label>"}
Now output the JSON.
Notes (what this pattern does and doesn’t do)
- This pattern makes the stochastic step easier to integrate: the Judge can immediately validate “is it parseable?” before spending effort on deeper Physics.
- It does not replace Validators. A model that outputs a perfectly formed diff can still be wrong; Physics decides.
- If your platform supports strict structured outputs (JSON schema, tool calls), use them. Treat them as output-shape constraints, not as correctness guarantees.
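A sketch of the first Judge step these notes imply, assuming a Python runner and the fail-closed JSON shape shown above; the function name and tuple return shape are assumptions for illustration:

```python
import json

def precheck(raw_output: str):
    """Cheap 'is it parseable?' gate before deeper Physics. Returns
    ("fail", reason) for unparseable or fail-closed output, ("ok", obj)
    for a JSON artifact worth grading further."""
    try:
        obj = json.loads(raw_output)
    except json.JSONDecodeError:
        return ("fail", "output is not a single JSON object")
    if not isinstance(obj, dict):
        return ("fail", "output is not a JSON object")
    if obj.get("status") == "fail":
        return ("fail", obj.get("reason", "model failed closed"))
    return ("ok", obj)
```

Note that a PASS here only means the artifact is well-formed; the Validators still decide whether it is correct.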
Mission Object Templates (Chapter 7 canon)
Mission Objects are the executable contracts that drive work through
the loop (Chapter 7). The templates below intentionally use the Chapter
7 schema and field names: the core contract (mission_id,
mission_version, goal, scope,
quality_gate, fallbacks) plus common optional
fields (dependencies, constraints,
acceptance_criteria, budgets,
rollback_on, and scope.edit_regions) when the
mission needs them.
Tooling differs. If your runner uses different field names, map them internally, but keep the manuscript canon stable. In other words: translate in code, not in docs.
This section is the canonical copy-and-specialize source for Mission templates used elsewhere in the book.
Minimal Mission Object skeleton (copy, then specialize)
mission_id: change-one-surface
mission_version: 1
goal: "Make one bounded change, proven by a deterministic gate"
scope:
modify:
- path/to/one_or_two_files.ext
read_only: []
do_not_touch:
- .github/**
- governance/**
budgets:
max_files_changed: 2
max_lines_changed: 120
quality_gate:
cmd: "make test"
rollback_on:
- "quality_gate_fail"
- "scope_violation"
fallbacks:
max_iterations: 3
on_fail: revert
This is the smallest useful Mission Object shape. It is the right default when you are introducing Missions to a team for the first time.
Production Mission Object (feature-style example)
This template outlines a production-style mission to implement a new API endpoint. It adds dependencies, constraints, acceptance criteria, and a stricter scope boundary while keeping the same Chapter 7 schema.
mission_id: add-user-profile-endpoint
mission_version: 1
goal: "Add GET /api/v1/users/{user_id} for user profiles"
dependencies:
- backend/src/schemas/user.py
- backend/tests/api/v1/test_users.py
scope:
modify:
- backend/src/api/v1/users.py
- backend/tests/api/v1/test_users.py
read_only:
- backend/src/schemas/user.py
do_not_touch:
- .github/**
- governance/**
- infra/**
constraints:
forbidden:
- "Skip authentication/authorization checks"
acceptance_criteria:
must_be_true:
- "GET /api/v1/users/{id} returns 200 for valid ID and authorized user"
- "GET /api/v1/users/{id} returns 404 for invalid ID"
- "GET /api/v1/users/{id} returns 401 for unauthenticated requests"
- "GET /api/v1/users/{id} returns 403 for unauthorized users"
budgets:
max_files_changed: 2
max_lines_changed: 250
quality_gate:
cmd: "pytest -q backend/tests/api/v1/test_users.py"
rollback_on:
- "quality_gate_fail"
- "scope_violation"
fallbacks:
max_iterations: 5
on_fail: escalate
This is the “full-fat” version readers should copy when they need a real contract with bounded scope, explicit gates, and a clear failure policy.
Bug Fix Mission Object variant
This variant focuses on reproducing, fixing, and verifying a specific bug. It keeps the same Mission Object anatomy but changes the acceptance criteria and rollback posture.
mission_id: fix-pagination-off-by-one
mission_version: 1
goal: "Fix pagination off-by-one in product list"
scope:
modify:
- backend/src/services/product_service.py
- backend/tests/services/test_product_service.py
read_only:
- backend/src/repositories/product_repository.py
do_not_touch:
- .github/**
- governance/**
constraints:
forbidden:
- "Change public API response shape"
acceptance_criteria:
must_be_true:
- "No duplicate items appear across page boundaries"
- "A new regression test reproduces the old bug and passes after the fix"
budgets:
max_files_changed: 2
max_lines_changed: 180
quality_gate:
cmd: "pytest -q backend/tests/services/test_product_service.py"
rollback_on:
- "quality_gate_fail"
- "scope_violation"
fallbacks:
max_iterations: 4
on_fail: revert
Unknown constraint policy (fail fast, don’t hand-wave)
If a Mission Object introduces a constraint that your runner cannot enforce yet, you have two safe options:
- fail fast with an explicit “unknown constraint” error
- escalate to human review
Do not silently ignore the field. A constraint that cannot be enforced is not Physics yet.
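A fail-fast sketch of that policy; the set of enforceable keys is an illustrative assumption, and the point is that unenforceable constraints surface as explicit output instead of being silently dropped:

```python
# Keys this (hypothetical) runner actually knows how to enforce.
ENFORCEABLE = {"forbidden", "max_files_changed", "max_lines_changed"}

def unknown_constraints(mission: dict) -> list:
    """Return constraint/budget keys the runner cannot enforce yet.
    A non-empty result should abort or escalate, never be ignored."""
    declared = set(mission.get("constraints", {})) | set(mission.get("budgets", {}))
    return sorted(declared - ENFORCEABLE)

pending = unknown_constraints({"constraints": {"quantum_safe": True}})
# a runner would raise or escalate here when pending is non-empty
```

Until a constraint shows up in the enforceable set, it is intent, not Physics.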
Day 2 hardening checklist
When you swap Chapter 1’s deterministic loop for a stochastic Effector, use this as the default hardening order. This is the book’s canonical Day 2 checklist.
- Freeze the output contract first: pick diff-only or JSON-only, parse strictly, and fail closed on extra prose or malformed structure.
- Move intent into a versioned Mission Object with bounded scope, explicit quality_gate, fallbacks, and budgets before you broaden autonomy.
- Enforce scope mechanically: modify, read_only, do_not_touch, and scope.edit_regions when only one surface should move.
- Make validator failure reusable: emit structured findings (file_path, line_number or unknown, error_code, message) and retry from those findings, not from vague summaries.
- Add circuit breakers before you trust retries: max_iterations, diff/file budgets, and a no-progress stop when the same failure repeats.
- Salvage near-misses: keep the failed diff plus findings in quarantine, then revert the Terrain by default when the loop does not converge.
- Re-run the passing mission once more. If run 2 still changes anything, you still have drift. Keep policies, validators, CI, and mission schema immutable/code-owned while you tighten the loop.
Runner Configuration: Validator Recipes by Stack
Validators are the bedrock of SDaC as defined in this book, providing deterministic gates that ensure quality, correctness, and adherence to standards.
These YAML snippets are runner configuration, not Mission Objects. They describe how the local runner invokes validators, parses findings, and hints the next move to the Judge. Keep them out of the Mission schema itself.
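To make “parses findings” concrete, here is a sketch of a regex findings parser of the kind these profiles configure, using a mypy-style pattern; the structured-finding keys (file_path, line_number, message) follow the shape used elsewhere in this appendix:

```python
import re

# Matches mypy-style lines: "file:line: error: message"
PATTERN = re.compile(r"^(.*?):(\d+): error: (.*)$")

def parse_findings(raw: str):
    """Turn raw validator stdout into structured findings the Judge can
    feed back into the next iteration."""
    findings = []
    for line in raw.splitlines():
        m = PATTERN.match(line)
        if m:
            findings.append({
                "file_path": m.group(1),
                "line_number": int(m.group(2)),
                "message": m.group(3),
            })
    return findings

out = 'src/app.py:12: error: Incompatible return value type (got "str", expected "int")'
findings = parse_findings(out)
```

Structured findings, not raw logs, are what make retries targeted instead of vague.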
Python Development Stack
A robust Python validation pipeline typically includes schema validation (e.g., Pydantic), static type checking, linting, and unit tests.
runner_validation_profile:
validators:
- id: PythonTypeChecker
description: "Ensures type annotations are consistent and correct."
command: ["mypy", "--strict", "src/"]
cwd: "backend/"
judge_hint: iterate
findings_parser:
type: "regex"
# Example: file:line: error: message
pattern: "^(.*?):(\\d+): error: (.*)$"
groups: ["file", "line", "message"]
- id: PythonLinter
description: "Checks code style and common pitfalls."
command: ["flake8", "src/"]
cwd: "backend/"
judge_hint: iterate
findings_parser:
type: "regex"
# Example: file:line:col: message
pattern: "^(.*?):(\\d+):(\\d+): (.*)$"
groups: ["file", "line", "column", "message"]
- id: PythonUnitTests
description: "Executes unit tests and checks for failures."
command:
- pytest
- --cov=src
- --json-report
- --json-report-file=.pytest_cache/report.json
cwd: "backend/"
judge_hint: iterate
findings_parser:
type: "jsonpath"
path: "$.summary.failed"
pass_when: "value == 0"
- id: PydanticSchemaValidation
description: "Ensures generated API schemas are valid Pydantic models (requires custom script)."
command: ["python", "scripts/validate_pydantic_schemas.py", "src/schemas"]
cwd: "backend/"
judge_hint: iterate
TypeScript/Node Development Stack
For TypeScript, critical validators include the TypeScript compiler, ESLint for code quality, and a unit testing framework like Jest.
runner_validation_profile:
validators:
- id: TypeScriptCompiler
description: "Verifies TypeScript syntax and type correctness."
command: ["tsc", "--noEmit"]
cwd: "frontend/"
judge_hint: iterate
findings_parser:
type: "regex"
# Example: file(line,col): error TSxxxx: message
pattern: "^(.*?)\\((\\d+),(\\d+)\\): error (TS\\d+): (.*)$"
groups: ["file", "line", "column", "errorCode", "message"]
- id: ESLintLinter
description: "Enforces code style and best practices for TypeScript/JavaScript."
command: ["eslint", "--format", "json", "src/", "--max-warnings=0"]
cwd: "frontend/"
judge_hint: iterate
findings_parser:
type: "jsonpath"
path: "$.[?(@.errorCount > 0)].errorCount"
pass_when: "value.length == 0"
- id: JestUnitTests
description: "Runs Jest unit tests and reports failures."
command: ["jest", "--ci", "--json", "--outputFile=jest-report.json"]
cwd: "frontend/"
judge_hint: iterate
findings_parser:
type: "jsonpath"
path: "$.numFailedTests"
pass_when: "value == 0"
- id: ZodSchemaValidation
description: "Ensures data schemas adhere to Zod definitions (for API contracts)."
command: ["node", "scripts/validate_zod_schemas.js", "src/schemas"]
cwd: "frontend/"
judge_hint: iterate
Terraform/Infrastructure as Code Stack
Infrastructure as Code (IaC) requires specialized validation for syntax, best practices, security, and idempotency.
runner_validation_profile:
validators:
- id: TerraformSyntaxValidate
description: "Checks Terraform configuration syntax and argument validity."
command: ["terraform", "validate"]
cwd: "infra/aws/"
judge_hint: iterate
- id: TFLint
description: "Lints Terraform code for errors, warnings, and best practices."
command: ["tflint", "--recursive", "--format", "json"]
cwd: "infra/aws/"
judge_hint: iterate
findings_parser:
type: "jsonpath"
path: "$.issues"
pass_when: "value.length == 0"
- id: CheckovSecurityScan
description: "Scans Terraform for common security misconfigurations and policy violations."
command: ["checkov", "-f", ".", "--framework", "terraform", "--output", "json"]
cwd: "infra/aws/"
judge_hint: iterate
findings_parser:
type: "jsonpath"
path: "$.results.failed_checks"
pass_when: "value.length == 0"
- id: TerraformPlanDiffCheck
description: "Generates a Terraform plan and checks if any resource changes are proposed."
command:
- bash
- -c
- >-
terraform plan -no-color |
grep 'No changes. Your infrastructure matches the configuration.'
cwd: "infra/aws/"
judge_hint: escalate
pass_when: "exit_code == 0"
Runner Configuration: Effector Patterns
Effectors are the final stage of a successful SDaC loop, taking the
validated changes and applying them to the external world. These
snippets are runner configuration for how an approved
artifact is applied after the Judge returns PASS.
Git Commit Effector
This effector commits the generated and validated changes to a Git repository, potentially pushing to a remote.
effector_profile:
type: "git_commit"
config:
auto_push: true
branch_prefix: "sdac-" # New branch names will start with 'sdac-'
branch_name_template: "{mission_id}-v{mission_version}-{run_id_short}" # e.g., sdac-fix-pagination-off-by-one-v1-abc123
commit_message_template: |
sdac({mission_id}@{mission_version}): {goal}
Automated by SDaC loop.
Audit trace: {audit_trace_url}
git_repo_path: "./" # Relative path to the git repository
Pull Request Creation Effector (GitHub)
After committing, this pattern automatically opens a pull request on GitHub, ready for human review.
effector_profile:
type: "github_pull_request"
config:
git_repo_path: "./"
base_branch: "main"
title_template: "[SDaC] {mission_id}@{mission_version}: {goal}"
body_template: |
This Pull Request was automatically generated by an SDaC loop.
**Mission:** {mission_id}@{mission_version}
**Goal:** {goal}
**Audit Trace:** {audit_trace_url}
Please review the changes carefully.
labels: ["sdac-automated", "awaiting-review"]
reviewers: ["@dev-lead", "@qa-engineer"] # Optional: Request specific reviewers
draft: true # Creates a draft PR initially
CI/CD Trigger Effector (Webhook)
This effector sends a webhook to trigger a CI/CD pipeline, allowing further automated checks or deployment.
effector_profile:
type: "webhook"
config:
url: "https://api.github.com/repos/{github_org}/{github_repo}/dispatches" # GitHub Actions example
method: "POST"
headers:
Authorization: "Bearer {github_token}"
Accept: "application/vnd.github.v3+json"
payload:
event_type: "sdac_changes_detected"
client_payload:
mission_id: "{mission_id}"
branch: "{branch_name}"
commit_sha: "{commit_sha}"
repo_name: "{github_repo}"
Local File System Effector
For local development and rapid iteration, this effector saves the generated changes to a specific local directory.
effector_profile:
type: "local_filesystem"
config:
output_directory: "sdac_artifacts/{mission_id}"
filename_template: "{mission_id}-v{mission_version}_changes.patch"
format: "diff" # Can also be 'full_files' for complete changed files
Policy Configuration: Circuit Breakers
Circuit breakers are critical safety mechanisms that prevent an SDaC loop from running out of control, consuming excessive resources, or making too many unsuccessful attempts. These are policy configuration, not Mission Object fields.
Max Iterations Breaker
This is one of the simplest circuit breakers: limit the number of times the Map-Updater can attempt to satisfy a mission.
circuit_breaker_policy:
type: "max_iterations"
config:
limit: 10 # Stop after 10 attempts by the Map-Updater
on_break: "log_and_notify"
notification_channel: "slack_#sdac-alerts"
Time Limit Breaker
Prevents a single SDaC loop execution from running for an unreasonably long time, indicating a potential stall or complex problem.
circuit_breaker_policy:
type: "time_limit"
config:
duration_seconds: 900 # Stop after 15 minutes (900 seconds)
on_break: "log_and_cancel" # Cancel any ongoing Map-Updater processes
notification_channel: "pagerduty_on_call"
Validation Failure Threshold Breaker
Halts the loop if the Map-Updater consistently fails validation, suggesting it’s unable to produce correct output or the mission is ill-defined.
circuit_breaker_policy:
type: "validation_failure_threshold"
config:
consecutive_failures: 3 # Stop if 3 consecutive validation runs fail
total_failures: 5 # Stop if 5 total validation runs fail across all iterations
on_break: "log_and_suspend" # Suspend the loop, require manual intervention to resume
notification_channel: "email_sdac-owners"
Human Intervention Breaker
Introduces a manual approval step into the loop, allowing engineers to review changes before further automation proceeds. This is particularly useful for sensitive operations or after a certain number of automatic retries.
circuit_breaker_policy:
type: "manual_approval"
config:
require_after_iterations: 3 # After 3 automatic iterations, require human approval
prompt_message: "SDaC loop has made 3 attempts. Review proposed changes before proceeding."
notification_channel: "slack_#sdac-approvals"
approval_timeout_minutes: 60 # If no approval within 60 minutes, break the loop
on_break: "log_and_discard_changes"
Actionable: What you can do this week
Choose a stack: Select one of the “Validator Recipes by Stack” that matches a common technology in your codebase (e.g., Python, TypeScript, Terraform).
Pick a simple mission: Start from the canonical minimal Mission Object or adapt one of the production-style variants above for a small, isolated change.
Configure a local loop: Set up a basic SDaC environment that uses your chosen Mission Object, one runner validation profile, and the “Local File System Effector.” Your goal is to see the system generate changes, run the validators against them, and then output a diff to a local directory.
Experiment with circuit breakers: Start with the “Max Iterations Breaker” and set a low limit (e.g., 2 or 3) to observe how the system handles reaching that threshold.