Chapter 9 – The Dream Daemon (Background Maintenance)
We’ve explored how to build individual Mission Objects to encapsulate intent (Chapter 7) and how Map-Updaters keep our system’s understanding of its environment fresh (Chapter 8). These mechanisms shine when triggered by explicit events: a code commit, a schema change, or a manual command. But many maintenance tasks aren’t tied to specific events; they’re about the slow accumulation of entropy. Documentation drifts, dependencies age, and conventions subtly diverge.
This is where background maintenance comes in. For systems that evolve over months and years, you need mechanisms to systematically combat entropy, not just react to it.
This chapter introduces the Dream Daemon as a pattern: a controller that turns entropy signals into bounded maintenance work. We’ll start at Depth 0 (sensors only) and then outline deeper implementations you can grow into.
Dream is a Control Loop, Not a Schedule
A scheduler is just a clock. The Dream Daemon is the loop that turns measured entropy into bounded work.
Start with the lowest-risk version: measure entropy and emit a report. Scheduling is optional. Automated changes come later, after you trust your sensors, budgets, and gates. The mistake is thinking “cron” is the architecture. Cron is packaging.
Scrum, Kanban, and every other process are also control loops: select work, execute, inspect, adapt. The difference is enforcement. Those loops run on meetings and social contracts. Dream is the same posture compiled into executable artifacts: Sensors emit signals, a deterministic ranker selects targets, Effectors produce bounded diffs, Validators grade, and governance gates decide what is admitted.
The Dream Loop: Sense → Decide → Act → Verify
At its best, Dream is a controller with a simple posture:
- Sense: run entropy sensors and collect signals.
- Decide: rank + budget, then pick work (or defer).
- Act: dispatch to an allowlisted action to produce a diff.
- Verify: run the same Validators you require for merges.
The pattern stays the same. What changes is the depth of implementation: at Depth 0 you only Sense + Decide. A human performs Act. Verification still runs through the Immune System.
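A minimal sketch of that controller, assuming hypothetical sense_entropy, rank, emit_report, dispatch_allowlisted_action, run_validators, and open_proposal helpers; the loop body after the early return only applies at Depth 1 and above:
def dream_cycle(roots, depth=0, budget=1):
    signals = sense_entropy(roots)                      # Sense: run every sensor
    ranked = rank(signals)                              # Decide: deterministic ordering
    if depth == 0:
        return emit_report(ranked[:10])                 # Depth 0: report only, no writes
    for target in ranked[:budget]:                      # Budget: one unit of work per cycle
        diff = dispatch_allowlisted_action(target)      # Act: bounded diff from a fixed catalog
        verdict = run_validators(diff)                  # Verify: same gates required for merges
        open_proposal(diff, verdict, evidence=target)   # Depth 1: human-approved PR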
Implementation Depths (Start at Depth 0)
You can implement Dream as a ladder. Each step adds autonomy, but also raises the governance bar.
Recommended rollout (safe default posture)
- Weeks 1–2: Depth 0 — run the scan weekly, tune sensors, and build trust in the evidence. No writes.
- Weeks 3–4: Depth 0.5 — schedule the report (nightly/weekly), still read-only. No diffs.
- Promote to Depth 1 only when all are true:
  - Sensors are reproducible (same finding appears across runs; low false positives).
  - Budgets are explicit (one target per cycle, hard diff limits, protected paths).
  - Governance is wired (required checks, CODEOWNERS/branch protection, no bypass).
  - Review + rollback are real (who reviews Dream PRs, and how you revert safely).
Depth 0: Sensors Only (default)
Depth 0 is a deterministic entropy scan that outputs a ranked worklist with evidence. It does not open PRs. It does not modify files. It produces targets.
This is enough to change team behavior because it makes maintenance specific:
- you stop arguing about “what to clean up”
- you stop doing maintenance only when something breaks
- you get a consistent stream of small, reviewable targets
Example output (signals as data):
signal=coverage_low file=services/billing/tax.py cov=58.0
signal=complexity_high file=services/payments/routing.py cc=22
signal=duplication_high file=services/api/handlers.py dup=3.1
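Each of these lines can come from a small, deterministic sensor. As an illustration only, here is a sketch of a coverage sensor, assuming a coverage.json report in the format produced by coverage.py’s “coverage json” command and an arbitrary 70% threshold:
import json
from pathlib import Path

def coverage_low_signals(report="coverage.json", threshold=70.0):
    # Emit one signal line per file whose coverage falls below the threshold.
    data = json.loads(Path(report).read_text())
    signals = []
    for filename, stats in data.get("files", {}).items():
        pct = stats["summary"]["percent_covered"]
        if pct < threshold:
            signals.append(f"signal=coverage_low file={filename} cov={pct:.1f}")
    return sorted(signals)  # deterministic: same input always yields the same report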
At Depth 0, the “Decide” step is simply ranking + budgeting the report:
signals = sense_entropy(roots)                                        # Sense: run every sensor
ranked = rank(signals, key=["severity", "blast_radius", "recency"])   # Decide: deterministic ordering
emit_report(ranked[:10])                                              # Budget: top 10 targets, with evidence
Then a human chooses one item and runs a normal SDaC loop (Ouroboros or a one-shot Refactor) against that bounded target, under the same Physics gates.
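The rank call above is pseudocode. A deterministic version might sort on numeric severity, blast radius, and recency scores, breaking ties by file path; a sketch, assuming each signal is a dict carrying those fields:
def rank(signals, key=("severity", "blast_radius", "recency")):
    # Higher scores first, ties broken by file path so the order is stable across runs.
    return sorted(
        signals,
        key=lambda s: tuple(-float(s.get(k, 0)) for k in key) + (s.get("file", ""),),
    )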
Depth 0.5: Scheduled reports (no diffs)
Depth 0.5 is Depth 0 on a schedule.
You run the entropy scan nightly or weekly and publish the report as an artifact: a dashboard entry, a ticket, or a message with the top findings and their evidence. Nothing is modified. No diffs are generated. The automation is real, but it is read-only.
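The publishing step can stay trivially simple. A sketch, assuming the ranked worklist from Depth 0 and a hypothetical reports/ directory; the nightly or weekly trigger can be any cron job or CI schedule:
import datetime
from pathlib import Path

def publish_report(ranked, out_dir="reports", top_n=10):
    # Write the top findings to a dated Markdown artifact. Read-only: no source files are touched.
    today = datetime.date.today().isoformat()
    lines = [f"Dream report {today}", ""]
    lines += [f"- {finding}" for finding in ranked[:top_n]]
    path = Path(out_dir) / f"dream-{today}.md"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text("\n".join(lines) + "\n")
    return path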
This is the lowest-friction path to enterprise adoption: you get a consistent maintenance signal without triggering the fear that “the agent is writing code in the background.”
Depth 1: Scheduled, human-approved proposals
Once you trust your sensors, you can move from “report” to “proposal”:
- the daemon picks one target per run
- it runs an allowlisted action to produce a diff
- it runs the Immune System suite
- it opens a PR for review
At this depth, the daemon is not “creative.” It is a work scheduler for a fixed catalog of maintenance actions.
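A sketch of one such run, assuming hypothetical pick_target, ALLOWLISTED_ACTIONS, run_immune_system, and defer helpers (and the illustrative objects they return), with the GitHub CLI (gh pr create) for the proposal step:
import subprocess

def depth1_cycle(roots):
    target = pick_target(sense_entropy(roots))       # one target per run
    action = ALLOWLISTED_ACTIONS[target.kind]        # fixed catalog, no free-form edits
    branch = action.apply(target)                    # produce a bounded diff on a branch
    verdict = run_immune_system(branch)              # same Validators required for merges
    if not verdict.passed:
        return defer(target, verdict)                # escalate or file a ticket, never "try harder"
    subprocess.run(
        ["gh", "pr", "create",
         "--title", f"dream: {target.summary}",
         "--body", verdict.evidence],
        check=True,
    )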
Depth 2: Autonomous selection (bounded)
At higher autonomy, Dream becomes a controller: it chooses which strategy to apply based on signal shape (coverage gaps, complexity hotspots, duplication spikes). This is where you must tighten budgets and allowlists:
- one unit of work per cycle
- hard diff budgets (files/lines)
- protected paths enforced by policy
- escalation rules (defer, file ticket) instead of “try harder”
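One way to keep these constraints enforceable rather than aspirational is to check every produced diff against an explicit policy object before it can become a proposal. A sketch, with illustrative limits:
from dataclasses import dataclass

@dataclass
class DreamBudget:
    max_files: int = 5
    max_changed_lines: int = 200
    protected_paths: tuple = ("migrations/", "infra/", ".github/")

    def admits(self, diff_stats):
        # diff_stats: mapping of changed file path -> number of changed lines.
        if len(diff_stats) > self.max_files:
            return False
        if sum(diff_stats.values()) > self.max_changed_lines:
            return False
        return not any(
            path.startswith(prefix)
            for path in diff_stats
            for prefix in self.protected_paths
        )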
Depth 3: Autonomous merge (aspirational)
Auto-merge is possible, but only after Depth 1–2 are stable and your governance is mature. If you cannot explain why a diff exists and reproduce the verification, you cannot auto-merge it.
Security: Hostile Terrain and Instruction Injection
So far, we’ve focused on protecting the repository from the agent (scope limits, Validators, protected graders). You also need the other half of the threat model: protecting the agent from hostile input. This is why Input Hygiene is a governance concern (Chapter 12) and why Prep must sanitize (Chapter 2).
Dream increases the amount of Terrain text your system reads. That turns comments, tickets, and logs into an input channel, and you should treat it as adversarial.
The concrete attack example and the Prep hardening posture live in Chapter 12. For Dream, keep a simple rule: untrusted text never becomes authority. It can only become evidence attached to a Mission Object compiled from allowlisted templates, with explicit budgets and gates.
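That rule can be expressed directly in how the Mission Object is compiled: the instruction text comes only from an allowlisted template, and anything read from the Terrain is attached as inert evidence. A sketch, with hypothetical template names:
ALLOWLISTED_TEMPLATES = {
    "raise_coverage": "Add tests for {file} until coverage reaches {target_cov}%.",
    "reduce_complexity": "Refactor {file} to bring cyclomatic complexity under {max_cc}.",
}

def compile_mission(template_id, params, terrain_excerpts):
    # Untrusted text lands in the evidence field only; it never shapes the instruction.
    return {
        "instruction": ALLOWLISTED_TEMPLATES[template_id].format(**params),
        "budget": {"max_files": 3, "max_changed_lines": 150},
        "evidence": list(terrain_excerpts),  # quoted for reviewers, never executed or obeyed
    }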
Actionable: What you can do this week
- Pick one entropy sensor: coverage gaps, complexity hotspots, duplication, or drift between a Map surface and Terrain.
- Implement Depth 0: write a deterministic scan that emits a ranked worklist with evidence. No writes.
- Run it weekly: treat the output like a backlog generator, not a one-time audit.
- Fix one item: pick the top target and run a bounded loop with explicit Physics gates.
- Only then add autonomy: when you can predict the failure modes, add Depth 1 proposals and keep them human-approved.