An architecture is a story until somebody onboards a second consumer. It holds up only if the operating experience is small enough to fit on a sticky note. Conduit’s does. Most of the running cost is reading the artifacts, not maintaining the pipeline that produces them.

This cairn is the runbook view: what you’d hand a teammate adding a new project to Conduit, or a future-you who came back after six months and forgot what the moving parts are.

Onboarding a new project

Adding a new project to Conduit is a five-step task. Most of those steps are mechanical.

flowchart LR
  S[Step 1<br/>state.json] --> P[Step 2<br/>project.json]
  P --> W[Step 3<br/>workflow YAMLs]
  W --> SE[Step 4<br/>secrets]
  SE --> A[Step 5<br/>access setting]
  A --> D[Done. Push to main.]

The five steps:

Step 1: .conduit/state.json. A small JSON file with the starting state. Either fresh for a new project (no prior anchors), or seeded at the migration commit if the project had a previous publisher (so you don’t reprocess history):

{
  "next_adr": 1,
  "anchors": {
    "devblog":       "<commit-sha-or-empty-on-fresh>",
    "adr":           "<commit-sha-or-empty-on-fresh>",
    "sprint-report": "<commit-sha-or-empty-on-fresh>",
    "release-notes": "<commit-sha-or-empty-on-fresh>"
  }
}

The action’s first run will treat empty anchors as “start from the root commit” and produce a bounded first range, but seeding at the migration commit avoids drafting against the entire history. For a fresh project with no prior history, seeding to the most recent commit is fine. The first artifact ships from the next push onward.
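
A minimal seeding sketch, assuming jq is available and you want every anchor at the same commit (use the migration commit’s SHA or HEAD, per the guidance above):

mkdir -p .conduit
SEED_SHA=$(git rev-parse HEAD)   # or substitute the migration commit's SHA
jq -n --arg sha "$SEED_SHA" '{
  next_adr: 1,
  anchors: {
    "devblog":       $sha,
    "adr":           $sha,
    "sprint-report": $sha,
    "release-notes": $sha
  }
}' > .conduit/state.json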

Step 2: .conduit/project.json. The project’s identity card. Five fields: name, description, color, icon, repo_url. The consumer site reads this to render the project’s landing page, recent-activity feed, and color accents.
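
A plausible project.json; the five field names are the contract, while the values here are made up for illustration:

{
  "name": "strike",
  "description": "Fast incident tooling for the on-call rotation",
  "color": "#4f7cac",
  "icon": "zap",
  "repo_url": "https://github.com/Constructured/osprey-strike"
}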

Step 3: two workflow YAMLs. publish-conduit-devblog.yml (push trigger, ~50 lines) and publish-conduit-decisions.yml (cron + workflow_dispatch, ~55 lines). Both are thin shells that resolve to a single uses: Constructured/conduit/.github/actions/publish@v1 line. Copy from an existing consumer; change the artifact_types input and the trigger schedule if you want a different cadence.
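
A sketch of the push-triggered shell. The uses: line and the artifact_types input come from the action’s contract; the secret-passing input names are assumptions, so copy the real names from an existing consumer:

name: publish-conduit-devblog
on:
  push:
    branches: [main]
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0   # full history so the anchor..HEAD ranges resolve
      - uses: Constructured/conduit/.github/actions/publish@v1
        with:
          artifact_types: devblog
          gemini_api_key: ${{ secrets.GEMINI_API_KEY }}                   # assumed input name
          conduit_dispatch_token: ${{ secrets.CONDUIT_DISPATCH_TOKEN }}   # assumed input name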

Step 4: two repo secrets. GEMINI_API_KEY (Google API key with access to the chosen model) and CONDUIT_DISPATCH_TOKEN (GitHub PAT with repo scope sufficient to post repository_dispatch events to the Conduit repo). These are managed through the project repo’s GitHub Settings, not by the action.
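
Both can be provisioned from a terminal with the standard gh route, which prompts for the value so nothing lands in shell history:

gh secret set GEMINI_API_KEY --repo <owner>/<project-repo>
gh secret set CONDUIT_DISPATCH_TOKEN --repo <owner>/<project-repo>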

Step 5: the access setting on the action’s host repo. Conduit is a private repo. For another repo in the same GitHub organization to consume actions from it, the action’s host repo needs actions/permissions/access set to organization. This is a one-time setting per host-org pair (already done for Constructured), but it is invisible in workflow YAML and easy to forget when standing up a new org.
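
The setting is checkable before the first run; this is the read counterpart of the fix in the tip below:

gh api repos/<owner>/<action-host-repo>/actions/permissions/access
# healthy output: {"access_level":"organization"}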

Tip

If onboarding fails with Unable to resolve action ‘constructured/conduit’, repository not found, that’s almost certainly the access setting. Run gh api -X PUT repos/<owner>/<action-host-repo>/actions/permissions/access -f access_level=organization to fix it.

That’s the entire onboarding. Total time: ~15 minutes if you are copying from an existing consumer, plus however long it takes to provision the two secrets through your team’s credential management.

What happens on a workflow run

From a single trigger to a published artifact, the run goes through a fixed series of steps. Knowing this sequence makes the failure modes legible.

sequenceDiagram
  participant Trigger as Trigger<br/>(push or cron)
  participant Workflow as Workflow YAML
  participant Action as Composite Action
  participant Timbers as timbers CLI
  participant Gemini as Gemini API
  participant Conduit as Conduit repo

  Trigger->>Workflow: fire
  Workflow->>Action: uses: ...@v1
  Action->>Action: Install timbers
  Action->>Action: Install canonical templates
  Action->>Gemini: Probe key (models.list)
  Action->>Action: Read state.json, compute per-type ranges
  Action->>Timbers: query --range
  Action->>Timbers: draft <template>
  Timbers->>Gemini: generateContent
  Gemini-->>Timbers: artifact body
  Action->>Action: Wrap with frontmatter, build bundle
  alt artifact_count > 0
    Action->>Conduit: dispatch event_type=project-artifact-bundle
    Action->>Workflow: commit state.json bump
  else quiet window
    Action->>Workflow: silent (no commit)
  end
  Action->>Workflow: surface op_failures (red if > 0)

The two paths at the end are the important property. A run with at least one artifact dispatches and commits. A run with zero artifacts and zero operational failures stays silent — no commit, no notice. A run with operational failures fails the workflow loudly, after dispatch and commit succeed for whichever types did work, so partial-success anchors still land and the failure is visible alongside.
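
In shell terms the terminal logic looks roughly like this; variable and helper names are illustrative, not the action’s actual script:

# Ship first: dispatch and anchor-commit for whichever types produced artifacts.
if [ "$artifact_count" -gt 0 ]; then
  dispatch_bundle     # repository_dispatch to the Conduit repo (hypothetical helper)
  commit_state_bump   # commit the new anchors in state.json (hypothetical helper)
fi
# Only then decide between green-silent and red.
if [ "$op_failures" -gt 0 ]; then
  echo "::error::${op_failures} operational failure(s); see the log above"
  exit 1
fi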

The probe step deserves its own paragraph. The action calls Google’s models.list endpoint with the supplied key on every run and logs HTTP 200 plus the count of accessible models when healthy, or the upstream error body when rejected. This is observability for the credential, which used to be the most painful failure mode — timbers’ “LLM request failed” is opaque about why; Google’s error response is precise. The probe never fails the workflow on its own; it is informational. But a downstream draft failure now ships with the actual upstream context attached.
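
Reproducing the probe by hand is useful when debugging a key. A sketch against Google’s public models endpoint, not the action’s literal step:

curl -sS -o /tmp/models.json -w 'HTTP %{http_code}\n' \
  -H "x-goog-api-key: ${GEMINI_API_KEY}" \
  https://generativelanguage.googleapis.com/v1beta/models
jq '.models | length' /tmp/models.json   # healthy keys report a model count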

Versioning policy

The action ships at @v1, and that tag is deliberately a moving reference rather than a frozen one. Patch fixes — bug fixes, prompt tweaks, observability additions — re-tag v1 in place. Consumers pinned to @v1 pick the change up on their next workflow run, with no per-repo PR.

A future breaking change to the bundle envelope or the input contract would ship as @v2. The migration story for v1 → v2 is per-consumer and deliberate: each consumer updates its uses: pin when ready, with @v1 continuing to work in the meantime. We have not needed v2 yet.

The versioning matrix in plain English:

Change                          Tag motion          Consumer cost
Bug fix in publisher script     retag v1 in place   none — picked up on next run
Template improvement            retag v1 in place   none — picked up on next run
New optional input              retag v1 in place   none — existing callers ignore the new input
Required input change           bump to v2          each consumer updates pin at its own pace
Bundle envelope schema break    bump to v2          each consumer updates pin at its own pace
Output shape change             bump to v2          each consumer updates pin at its own pace
Warning

Re-tagging v1 is a force-push to the tag. Anyone who fetched the old v1 locally has a stale cache; CI consumers re-resolve the tag at workflow runtime so they pick up the new commit transparently. The blast radius is “humans with stale local clones” plus possibly some CI caching layers, but never CI execution itself. We have done it twice without trouble.
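
Mechanically, the re-tag is two git commands from a conduit checkout at the commit v1 should point to:

git tag -f v1              # move the tag to the current commit
git push origin v1 --force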

For consumers that want stricter immutability, pinning to a specific commit SHA (@<40-char-sha>) instead of @v1 is supported and documented. We did this during the first migration window before v1 was stable. The trade-off is “you lose patch fixes until you bump the SHA,” which is sometimes the right answer.

Operational failures and how to read them

Three failure modes have shown up in production, and each looks distinct in the workflow log:

The credential is bad. The probe step shows HTTP 401 or HTTP 403 with Google’s error body inline. timbers’ draft step then fails with Error: LLM request failed, the script records op_failures > 0, and the final step fails the workflow red. The remediation is “rotate the secret,” which is a project-side credential change with no Conduit-side action needed.

The credential is fine but the model isn’t accessible. The probe shows HTTP 200 with a model count, but if the configured model input isn’t in the list, the draft step fails for that artifact type. This used to be invisible until we added the probe; now the workflow log carries an explicit list of what is accessible, which usually identifies the right alias to switch to. The action default is flash (= gemini-3-flash-preview), which most active Google API keys support.

The repository_dispatch is rejected. This is rare and usually means the CONDUIT_DISPATCH_TOKEN lacks the right scope, or the consumer repo’s webhook is misconfigured. The dispatch step fails with a 422 or 404 from gh api. The most common subform is the client_payload is not an object 422, a bug we fixed in the action by switching from gh api -F to jq | gh api --input -. If you still see it on your own integration, you have copied an old recipe.
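
The fixed shape builds the JSON body with jq and pipes it in, so client_payload arrives as a real object; the payload fields here are illustrative:

jq -n --slurpfile bundle bundle.json '{
  event_type: "project-artifact-bundle",
  client_payload: { bundle: $bundle[0] }
}' | gh api repos/Constructured/conduit/dispatches --method POST --input -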

Definition

“Loud failure” in this system means: workflow goes red, the upstream error body is in the log, and the artifacts that did succeed still ship. There is no swallowed exception, no silent skip on operational error, no “partial run pretends to be a quiet week.”

Recovering from any of these is a project-side action. The Conduit-side cost is roughly zero in normal failure modes — the consumer site doesn’t care that a specific run failed, because it only sees what was successfully dispatched.

Backfill and replay

Two scenarios need a way to override the natural anchor-driven flow:

Backfill after a long quiet period. A repo went silent for a month, then resumed activity. The natural anchors[type]..HEAD range is now too wide — the MAX_ENTRIES cap (default 30) might trip, and the workflow refuses to silently drop entries. Either accept tail-drop by setting MAX_ENTRIES_OVERRIDE=1, or use the override below.

Re-running against a specific window. Maybe a previous run dispatched a bad artifact, you’ve fixed the prompt, and you want to re-draft the same range. The decisions workflow exposes a workflow_dispatch input called since_sha that forces every requested type to use the supplied SHA as its anchor:

gh workflow run publish-conduit-decisions.yml \
  --ref main \
  --field since_sha=<sha> \
  --field artifact_types=adr,sprint-report

The action validates since_sha up front: it must be reachable from HEAD. A typo or truncated SHA fails loud rather than falling silently to a HEAD-relative window.
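
The check maps to one git primitive; a sketch of the validation:

# Succeeds only when $SINCE_SHA is an ancestor of (or equal to) HEAD;
# an unknown SHA makes git exit non-zero with a fatal error.
git merge-base --is-ancestor "$SINCE_SHA" HEAD || {
  echo "::error::since_sha ${SINCE_SHA} is not reachable from HEAD"
  exit 1
}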

Tip

The override applies to every requested type for the duration of that run, but does not modify state.json permanently. The next scheduled run reads the original anchors. Treat since_sha as a transient backfill knob, not a configuration knob.

This is the same mechanism that makes migrating from a different publisher clean. If a project had a previous (non-Conduit) workflow that tracked a last_published_sha, you can seed state.json at the migration commit and run a one-off since_sha dispatch covering the gap window if you want backfill.

Templates as a shared resource

The canonical timbers prompts ship inside the action and get installed into the runner’s ~/.config/timbers/templates/ on every run. timbers’ resolution order is project → global → built-in, which means:

  • A project that has no .timbers/templates/ of its own (the default state) gets the canonical Conduit prompts.
  • A project that wants to override a single template (say, devblog.md) can drop one file in its own .timbers/templates/. Only that file overrides; the other templates still come from the action.
  • A project’s overrides do not affect other consumers — each runner is isolated to its own project repo’s checkout.

This is the cleanest split we found. Every consumer gets the same default voice. A project that legitimately needs to differ (different tone, different audience) can override one template at a time without forking the whole publisher.
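
Overriding one template for one project is then a single file drop; devblog.md is the example from the list above, and my-devblog-prompt.md stands in for your edited prompt:

mkdir -p .timbers/templates
cp my-devblog-prompt.md .timbers/templates/devblog.md   # project level wins over the action's install
git add .timbers/templates/devblog.md
git commit -m "Override devblog voice for this project"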

Editing a prompt becomes a one-PR change against the conduit repo. Every consumer picks it up on the next run after the v1 retag. That property is what makes prompt iteration manageable across multiple projects without going insane.

Scenario: Updating the decision-log voice across all projects
@Bob "The ADR voice has been too dry. Can we make it lead with a one-line verdict before the context?"
@Q Open a PR on conduit's timbers-templates/decision-log.md, get it reviewed, merge, retag v1. Strike picks it up on the next push; vantage on the next Sunday cron. No project-side PRs needed unless somebody wants to override.

The cost of changing the system’s editorial voice is “review one prompt PR.” That is a property worth protecting.

What’s deferred until the system grows

Several things are deliberately not built yet, with the trade-offs visible.

Q reviewer support across all repos. The team’s automated Claude Code reviewer (Q) currently reviews PRs in osprey-strike but not in conduit, vantage, or cairns. Until that ships, the PR-review workflow rule we use (“re-request review after fix, wait for re-review, converge”) is one-sided in those repos. The rule is in place; the loop will close when the reviewer matures. Tracked in beads.

Multi-cycle convergence loop. The longer-term goal is for the reviewer to participate in a structured back-and-forth: the implementer pushes fixes, the reviewer looks again, and a circuit breaker stops the loop after a configurable number of exchanges (default 3) if no convergence is reached. The mechanics on the implementer side are written into our project rules; the mechanics on the reviewer side are pending. Tracked in beads.

Release-notes drafting. The action accepts release-notes as an artifact type, but the per-type loop currently rejects it with a clear error rather than silently producing nothing. A future tag-push workflow will wire up the actual draft logic and add release-notes to the valid set.

Migration from option 2 to option 3. The current architecture (publisher in the consumer repo, executed in each project repo’s CI) is the right fit for our scale. If the consumer count grows past five-or-so, the conduit-side puller (option 3) becomes worth considering. The migration is real work but not a one-way door — the publisher logic moves into a conduit-side script, project-side workflows get deleted, and state moves to conduit-side files. We do not need this yet.

Customer-facing slice. Conduit publishes for an internal audience. A future customer-facing slice (filtered to release notes plus selected devblogs) would require a permission model on the consumer side and a different bundle-envelope tag indicating “this is for the public feed.” Not built; not blocked; will be built when there is a real customer-facing need.

Key Takeaway

The deferred work is visible rather than hidden. Every “we’ll do this later” item is captured either as a bead, a callout in the action’s README, or a note in this trail. That makes the deferral honest instead of accidental.

Summary

  1. Onboarding is five mechanical steps. Two JSON files, two workflow YAMLs, two secrets, one access setting. Roughly 15 minutes for the second project after the first.
  2. The action's run sequence has three terminal states. Green-with-artifact, green-silent, red. Each is visually distinct in the workflow UI; lumping them was a real bug.
  3. Versioning is asymmetric. Patch fixes retag v1 in place; breaking changes ship as v2. Consumers pin to @v1 for floating patches or to a SHA for strict immutability.
  4. Failure modes are observable. The Gemini key probe surfaces credential issues before the draft step runs; op_failures distinguishes operational failure from quiet weeks.
  5. Backfill and replay are first-class. The since_sha workflow_dispatch input lets you re-run against any reachable commit, with up-front validation that fails loud on a typo.
  6. Templates are a shared resource. Canonical prompts ship in the action; project overrides are file-by-file. Editing voice across all consumers is one PR.
  7. Deferred work is visible. What's not built yet — Q reviewer for non-Strike repos, multi-cycle convergence, release-notes drafting, option-3 migration, customer-facing slice — is captured in beads or in the action's README, not assumed.
Open questions

  • The probe currently runs on every workflow invocation, which is one extra HTTP call per run. Is that the right cost-benefit ratio, or should the probe be opt-in (and miss a class of failures it currently catches)?
  • The five-step onboarding still requires per-project secrets management. Would a centralized credential approach (an org-level secret with selective repo access) be a meaningful simplification, or would it introduce more risk than it removes?
  • If we wanted to drop the cron schedule for the decisions workflow and trigger it instead on "tag push" or "PR-merge with a label," what would change about the cadence semantics? Would the LLM still get a wide-enough window to consolidate iterations, or would we be back to the per-PR problem?
Further reading

  1. The Work That Writes Itself — First cairn in this trail. Why Conduit exists at all.
  2. How Conduit is Shaped — Second cairn in this trail. The architectural choices that this cairn's operations rest on.
  3. Timbers — The development ledger Conduit reads from. Required reading if you want to understand what shapes the artifacts the publisher produces.
  4. GitHub Composite Actions — The mechanism the publisher uses to share logic across N project repos. The action's action.yml and build-bundle.sh in the conduit repo are the canonical references.
  5. GitHub Actions permissions/access endpoint — The setting that allows other repos in the same org to consume actions from a private repo. Required configuration for cross-repo composite actions; easy to forget.
  6. Google Gemini Models — The list of available Gemini models and their aliases. Useful when the probe step shows a model count and you need to choose a different alias.
  7. Cloudflare Pages — The hosting layer for the Conduit static site. Build-on-push deployment; no runtime configuration to manage.