The “Never Funded” Problem

Every team has a backlog of internal tooling projects that would genuinely help but will never get built. Not because they’re bad ideas, but because the ROI math doesn’t work. A knowledge base that takes two sprints to build and requires ongoing maintenance can’t compete with product work when you have four engineers and a launch to hit. It’s the classic “Tragedy of the Internal Tool”: everyone agrees it should exist, and nobody can justify the time.

So the knowledge stays where it is: scattered across Slack channels with a 90-day retention window, buried in Google Docs with titles like “Notes from Q3 Planning (FINAL) (2),” or in the head of whoever happened to be in the room. New team members spend their first weeks doing archaeology. When someone leaves, their context leaves with them.

This week, the ROI math changed. Cairns — the site you’re reading this on — went from nothing to a deployed, self-improving knowledge base over a long weekend, with less than 8 hours of actual human effort. Not a prototype. Not a demo. A production system with SSO, full-text search, inline annotations, automated content generation, drift detection, and a feedback loop that turns reader comments into fixes within the hour.

Key Takeaway

The economics of internal tooling have fundamentally shifted. Projects that lived in the “nice to have, never funded” category are now in the “build it Saturday, use it Monday” category. The constraint isn’t engineering time anymore — it’s deciding what to build first.

What We Actually Built

Cairns is a static knowledge base built on Eleventy 3.x, deployed to Cloudflare Pages behind Google SSO. Articles are markdown files. Search is Pagefind, a build-time indexer that serves fully client-side search from a static index, with no search server. Dark mode, Mermaid diagrams, and a table of contents come free from the templating layer.

But the static site is just the visible surface. Underneath, there’s an automation layer:

  • Weekly article generation — a cron that either picks up team suggestions from Slack or autonomously selects topics based on what’s missing from the knowledge base
  • Hourly issue monitoring — annotation-generated GitHub issues get triaged, classified, and fixed automatically
  • Content drift detection — weekly comparison of articles against their source documents, flagging anything that’s fallen out of date
  • Tag cleanup and cross-link audits — automated maintenance that keeps the knowledge graph healthy
  • Mid-week engagement nudge — a Slack message that surfaces recent articles without being annoying

None of this requires a server. The crons run through OpenClaw’s scheduling system. The site rebuilds on every push to main. Cloudflare handles the CDN, SSL, and access control.

Tip

Static doesn’t mean simple. The “static site” framing undersells the architecture — Cairns has more automated feedback loops than most dynamic applications. The key insight is that all the intelligence lives in the build pipeline and the agent layer, not the runtime.

The Weekend Timeline

Here’s what actually happened, as close to real time as memory allows. The clock started Friday morning, March 21. Elapsed time was about 72 hours, Friday through Sunday; total human effort was probably under 8 hours.

Day 1 — Friday, March 21

Morning: Bob had already built gorewood/cairns — an open-source Eleventy knowledge base template — in a short burst before this project. That open-source version has its own history worth exploring. For Constructured, Bob forked that base and started customizing. The template was clean but generic — a blog skeleton with basic navigation. First task: make it ours. Custom color palette (the dark purples and greens you see now), restructured layouts, team branding. Q handled the CSS customization while Bob defined the content model — what frontmatter fields we’d need, how tags should work, what “a cairn” actually means. The fork-and-customize approach saved roughly a day compared to building from scratch. Eleventy’s template system is clean enough that restyling was mostly CSS, not fighting the framework.

Afternoon: First articles written. “The Quiet Teammate” — a field report on deploying Q as organizational infrastructure. “The Memory Problem” — how persistent context changes what an agent can do. Both written by Q with Bob reviewing, editing, and steering the voice. Deployed locally, iterated on the reading experience. By end of day: a working knowledge base with two published articles and a clear content format.

Evening: Bob started on the automation layer. The weekly article cron was the first piece — research a topic, write the article, build, commit, push, announce to Slack. The issue monitor came next: watch for GitHub issues with the content-feedback label, triage them, fix content issues automatically, create PRs for anything structural. Noam deployed to Cloudflare Pages with Google SSO.

Day 2–3 — Saturday–Sunday, March 22–23

Saturday morning: The trailhead redesign. The default homepage was a flat list of articles — fine for a blog, wrong for a knowledge base. Bob designed a three-tier layout: featured trails at the top (multi-part series with reading time and part counts), a featured cairn in the middle, and recent articles at the bottom. Added a guide page explaining how everything works.

Saturday afternoon: Moved search from a dedicated page to the header — always available, one click away. Restructured article directories from flat (src/articles/slug.md) to date-organized (src/articles/YYYY/MM/slug.md). Added the library view (tag-organized) and archives view (chronological). Built the content drift detection cron.

Sunday morning: The annotation system. This is the piece that closes the feedback loop. Readers select text in any article, a floating toolbar appears, they add a comment. Annotations accumulate in the browser. When ready, one click bundles all annotations into a GitHub issue with section deep links. The annotation system is vanilla JavaScript — no frameworks, no server, just browser APIs (Selection, Range, localStorage). It’s about 400 lines of code. The entire annotation-to-fix loop was designed, built, and tested in a single morning session.
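The core of that flow is pure logic: turning a pile of stored annotations into one GitHub issue body with deep links. Here is a hedged sketch of that bundling step, with the browser capture side (Selection, Range, localStorage) stubbed out and every name illustrative rather than taken from the real Cairns code:

```javascript
// Sketch of the annotation-bundling step. In the browser, each annotation
// would be captured via the Selection/Range APIs and kept in localStorage;
// here the store is a plain array. Field names are assumptions.
function bundleAnnotations(pageUrl, annotations) {
  const lines = [`Feedback on ${pageUrl}`, ""];
  for (const a of annotations) {
    lines.push(`### [${a.sectionId}](${pageUrl}#${a.sectionId})`); // section deep link
    lines.push(`> ${a.selectedText}`); // the text the reader highlighted
    lines.push("");
    lines.push(a.comment);
    lines.push("");
  }
  return lines.join("\n");
}

const body = bundleAnnotations("/articles/2025/03/the-memory-problem/", [
  {
    sectionId: "projections",
    selectedText: "EventStore connects to ReadModel",
    comment: "Missing the projection step.",
  },
]);
console.log(body);
```

The resulting markdown body is what gets posted as the issue; everything downstream (triage, fix, rebuild) operates on that one artifact.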

Sunday afternoon: Wired the annotation → GitHub issue → auto-fix → deploy loop end-to-end. Tested it: selected text, added a comment, created the issue, watched the hourly monitor pick it up, triage it, spawn a Claude Code agent in a git worktree, push the fix, close the issue, and trigger a Cloudflare rebuild. Time from annotation to live fix: under an hour.

Here’s the progression visualized:

gantt
    title Cairns: Weekend Build Timeline
    dateFormat  YYYY-MM-DD HH:mm
    axisFormat %a %H:%M
    todayMarker off

    section Day 1 — Friday
    Fork and customize styling           :d1a, 2025-03-21 09:00, 3h
    Define content model and frontmatter :d1b, 2025-03-21 09:00, 2h
    Write first articles                 :d1c, 2025-03-21 13:00, 4h
    Local deploy and iteration           :d1d, 2025-03-21 15:00, 2h
    Article cron and issue monitor       :d1e, 2025-03-21 18:00, 4h
    Cloudflare deploy with SSO           :d1f, 2025-03-21 20:00, 2h

    section Day 2 — Saturday
    Trailhead redesign                   :d2a, 2025-03-22 09:00, 3h
    Header search and directory restructure :d2b, 2025-03-22 13:00, 3h
    Library, archives, drift detection   :d2c, 2025-03-22 15:00, 3h

    section Day 3 — Sunday
    Annotation system                    :d3a, 2025-03-23 09:00, 4h
    End-to-end feedback loop             :d3b, 2025-03-23 13:00, 3h

The Tool Stack

No single tool made this possible. It was the combination — and specifically how they compose — that compressed a multi-sprint project into a weekend.

Claude Opus 4.6 handled research, writing, and architectural decisions. Every article on this site went through a research phase where Opus searched the web, cross-referenced sources, and synthesized findings into the article structure. Writing a 15-minute article from research to final draft takes about 20 minutes of wall-clock time.

Claude Code did the implementation work. When Bob described a feature — “move search to the header, make it always visible” — Claude Code read the existing templates, understood the CSS architecture, and made the changes. For parallel work, it spawned sub-agents in git worktrees: one agent restructuring directories while another built the annotation system.

Google Stitch handled the UI design work. After Claude struggled to nail the visual design to Bob’s satisfaction, Stitch stepped in and did it brilliantly — producing polished layouts and component designs that translated cleanly into the CSS and templates Claude Code then implemented.

OpenClaw was the orchestration layer. Cron scheduling for the weekly article generation, the hourly issue monitor, the weekly drift check. Slack integration for announcements and engagement nudges. Session management so Claude Code agents could be spawned with the right context. Memory persistence so Q knows what articles exist, what topics have been covered, and what the team has asked about. OpenClaw’s skill system was critical here. The “cairns” skill is a markdown file that teaches any agent the content format, publishing workflow, and maintenance tasks. New agents don’t need to rediscover the conventions — they read the skill and go.

GitHub CLI handled the issue lifecycle. OAuth via the system keyring — no stored tokens, no secret management. Issues get created, labeled, assigned, commented on, and closed entirely through gh commands.

Eleventy + Pagefind gave us the static site. Eleventy’s data cascade and collection system made it trivial to generate tag pages, trail navigation, and the trailhead layout from frontmatter alone. Pagefind’s build-time indexing means search runs entirely in the browser against static index files, with no search backend.
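For context, the Eleventy 3.x configuration this implies is tiny. A minimal sketch, where the glob, collection name, and directory names are assumptions about the Cairns layout rather than its actual config:

```javascript
// eleventy.config.js — minimal sketch (Eleventy 3.x, ESM).
// Glob and directory names are assumptions, not the real Cairns config.
export default function (eleventyConfig) {
  // One collection for all articles, derived purely from file location.
  // Tag collections come free: Eleventy builds collections[tag] from frontmatter tags.
  eleventyConfig.addCollection("articles", (api) =>
    api.getFilteredByGlob("src/articles/**/*.md")
  );

  return { dir: { input: "src", output: "_site" } };
}
```

Everything the trailhead needs (trails, featured cairn, recent articles) is then a template-side sort and filter over that collection, driven by frontmatter fields.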

Cloudflare Pages + Google SSO handled deployment and access control. Push to main, site rebuilds in under two minutes. Google Workspace SSO means the team logs in with their existing accounts. No user management, no password resets, no invitation flow.

The Self-Improvement Loops

The real story isn’t the build — it’s what happens after. Cairns doesn’t just publish content. It actively improves itself through four interlocking feedback loops.

The Annotation Loop

This is the fastest loop. A reader notices something wrong or unclear, selects the text, adds a comment. The annotation system bundles their feedback into a GitHub issue with deep links to the exact sections. The hourly issue monitor picks it up, classifies it as a content fix, spawns Claude Code in a git worktree, pushes the fix, closes the issue, and triggers a rebuild.

graph LR
    A[Reader selects text] --> B[Adds annotation]
    B --> C[Creates GitHub issue]
    C --> D[Hourly monitor triages]
    D --> E[Claude Code fixes in worktree]
    E --> F[Push to main]
    F --> G[Cloudflare rebuilds]
    G --> H[Fix live on site]

End-to-end time: under one hour. The human effort is about 30 seconds — select, comment, click “Create Issue.” Everything after that is automated.
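The monitor’s triage step can be pictured as a small classifier. A hypothetical sketch follows: the `content-feedback` label comes from the article above, but the keyword heuristic and return values are invented purely for illustration:

```javascript
// Hypothetical triage heuristic for the hourly monitor: decide whether an
// issue is an auto-fixable content fix or a structural change needing a PR.
// The keyword list and return values are illustrative, not the real rules.
function triage(issue) {
  if (!issue.labels.includes("content-feedback")) return "ignore";
  const structuralHints = ["layout", "template", "navigation", "css"];
  const text = `${issue.title} ${issue.body}`.toLowerCase();
  return structuralHints.some((w) => text.includes(w))
    ? "structural-pr" // open a PR for human review
    : "content-fix"; // fix directly in a worktree and push
}

console.log(triage({
  labels: ["content-feedback"],
  title: "Typo in drift article",
  body: "s/teh/the/",
}));
// → content-fix
```

In practice the real classification is done by the agent reading the issue, but the contract is the same: every issue resolves to ignore, auto-fix, or human-review.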

Key Takeaway

The annotation loop is the system’s most important feature. It turns every reader into an editor with near-zero friction. The feedback doesn’t go into a backlog — it gets fixed within the hour.

The Content Generation Loop

Weekly, a cron checks for topic suggestions in the team Slack channel. If someone’s asked for an article, it gets written. If not, Q surveys the existing knowledge base, identifies gaps, and picks a topic autonomously. The article goes through research, writing, build verification, and publishing — then gets announced to Slack.

The Drift Detection Loop

Articles reference source documents — architecture specs, design docs, API documentation. Weekly, a cron compares article content against those sources and flags anything that’s drifted. An article about the event system that was accurate when written but doesn’t mention a new event type added last week gets flagged for update.

The Maintenance Loop

Tag cleanup, cross-link audits, orphan detection, trail continuity checks. These run on schedule and either fix issues automatically (tag normalization) or surface them for human decision (should these two articles be cross-linked?).
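Tag normalization, the auto-fixable case, is the simplest of these. A minimal sketch with an entirely illustrative synonym map:

```javascript
// Sketch of tag normalization: lowercase, trim, collapse synonyms, dedupe.
// The synonym map is illustrative, not the actual Cairns taxonomy.
const SYNONYMS = { ai: "agents", llms: "agents", k8s: "kubernetes" };

function normalizeTags(tags) {
  const cleaned = tags
    .map((t) => t.trim().toLowerCase())
    .map((t) => SYNONYMS[t] ?? t);
  return [...new Set(cleaned)].sort();
}

console.log(normalizeTags(["AI", "agents", " LLMs", "Eleventy"]));
// → [ 'agents', 'eleventy' ]
```

Because the fix is deterministic, this one runs without human sign-off; the cross-link and orphan checks, which require judgment, only surface suggestions.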

Example: The annotation loop in action
@Gary Reading the event sourcing article — the diagram shows EventStore connecting to ReadModel but doesn't mention the projection step. Annotated it.
@Q Got it. Issue #47 created from Gary's annotations. Picking it up now — content fix, should be live within the hour.
@Q Fixed and deployed. Added the projection step to the diagram and a sentence explaining the transformation. Issue #47 closed. Thanks Gary 👍

What Would This Have Cost Before?

Let’s be honest about the comparison. In a traditional development workflow, what would a system with these features actually require?

A knowledge base with SSO, full-text search, custom theming, and a structured content model — that’s a small project, maybe a sprint for one engineer. Call it 40 hours.

Add inline annotations with GitHub issue integration — that’s a frontend feature with non-trivial UX. Another sprint, maybe more if design review gets involved. Add automated content generation, drift detection, and a self-improving feedback loop — now you’re talking about a PM writing requirements, an architect reviewing the integration points, and at least two engineers across two sprints.

Conservatively: 3-4 sprints, 2 engineers, PM involvement, design review. In a small team, that’s 6-8 weeks of calendar time competing with product work. It would never get prioritized. Not because it’s not valuable — because the opportunity cost is too high.

Here it was a weekend — under 8 hours of human effort. One engineer, one agent. To be fair to the comparison: that weekend benefited from existing open-source components (Eleventy, Pagefind, Cloudflare Pages) and an agent framework (OpenClaw) that was already deployed. The infrastructure wasn’t built from zero. But neither is any modern software project — the point is that the “assembly from components” step is now dramatically faster.

Warning

This is not an argument that AI replaces engineers. Bob made every architectural decision, steered every design choice, and reviewed every piece of output. The agent compressed the implementation time, not the thinking time. The ratio shifted: more hours designing, fewer hours typing.

The Deeper Point

Cairns is a small project with a narrow audience. It matters because of what it represents, not what it is.

Every team has a list of projects like this — internal dashboards, onboarding systems, runbook generators, audit tools, documentation sites, integration monitors. Projects that would genuinely help but can’t compete for engineering time against the product roadmap. They sit in a “someday” column in Linear or Notion, accumulating upvotes but never getting built.

The tools that made Cairns possible — Opus for research and writing, Claude Code for implementation, OpenClaw for orchestration — didn’t exist in their current form a year ago. Others, like Cloudflare Pages and Eleventy, were already in our stack. The capability curve is steep enough that what was infeasible in January is routine in March.

This matters most for small teams. A company with 200 engineers can afford a platform team that builds internal tools. A company with 2 part-time engineers can’t. But now a company with 2 part-time engineers and an agent can build the same internal tool in a weekend that the platform team would take a quarter to ship.

The question isn’t whether AI-assisted development is faster — that debate is settled. The question is: what do you build now that it is? Which items on the “never funded” list just became feasible? Which problems have you been living with that you don’t have to anymore?

For Constructured, the answer this week was a knowledge base. Next week it might be something else. The backlog of things that are now “worth building” is longer than the backlog of things we’ve already built.

Key Takeaways

  1. The ROI calculus for internal tooling has changed — projects that required multi-sprint commitments can now be built in days, making "nice to have" tools economically viable for small teams.
  2. Self-improvement loops are the multiplier — a knowledge base that just publishes content is useful; one that actively detects drift, processes feedback, and generates new content autonomously is transformative.
  3. The agent compresses implementation, not design — every architectural decision, design choice, and quality judgment still required a human. The time saved was in translating decisions into code, not in making the decisions.
  4. Composition beats construction — the speed came from combining existing tools (Eleventy, Pagefind, Cloudflare, OpenClaw) through an agent, not from building anything novel. The agent's value is in assembly and integration.

Discussion

  • What's on your team's "never funded" internal tooling list? Now that the build cost has dropped by an order of magnitude, which of those projects would you prioritize first — and why that one over the others?
  • The annotation → fix loop closes in under an hour with zero human intervention after the initial comment. What other feedback loops in your workflow could be compressed this way? Where does "slow feedback" silently cost you quality?
  • If the trend continues and an engineer-plus-agent can build in a weekend what used to take a quarter, how should small teams think about "build vs. buy" differently? When does it stop making sense to pay for SaaS tools that you could build — and own — yourself?

References

  1. Eleventy Documentation — The static site generator powering Cairns. Version 3.x with its ESM-first architecture and data cascade system made the content model trivial to implement.
  2. Pagefind — Build-time search indexing that produces a fully client-side search experience with no runtime JavaScript dependencies. The search you see in the Cairns header.
  3. Cloudflare Pages — Static site hosting with automatic builds on git push, custom domains, and integration with Cloudflare Access for SSO. The deployment target for Cairns.
  4. OpenClaw — The open-source agent framework providing cron scheduling, Slack integration, session management, and the skill system that teaches agents the Cairns content format.
  5. Claude Model Documentation — Opus 4.6 for research and writing, Claude Code for implementation. The model capabilities that made the weekend timeline possible.
  6. Being Glue — Tanya Reilly — The original articulation of "glue work" in engineering organizations. Cairns automates exactly the knowledge-sharing component of glue work that Reilly describes.