Your Box and Your Trust Model
What runs on your machine, and how much rope you give the agent — one has a project answer, the other is yours · ~19 min read · Suggested by Bob · engineer · security · operations
Two questions every contributor answers in their first week: what runs on my box, and how much rope do I give the agent? The first has a project answer — mise pins it, and most of it installs itself. The second is yours, and this trail will not tell you where to land.
mise as the source of truth
mise is the polyglot version manager — the modern replacement for nvm, pyenv, goenv, rbenv, and asdf. It reads .mise.toml and pins exact versions of Go, Node, just, air, direnv, tokei, awscli, opentofu, kubectl, and helm. Run mise install once after cloning; the toolchain materializes in a per-project sandbox and the shell cd-hook activates the right versions automatically. Upgrades go through the file, not through every contributor’s machine.
mise pins runtime; the project owns the pins. If you find yourself working around a pinned version, raise it as a project concern — do not paper over it locally. Local workarounds become tomorrow’s “works on my machine” debugging session.
What’s in mise versus what isn’t
The pattern is consistent: anything that runs your code or your build is pinned through mise; anything that runs outside your build is a manual install. Pinned: go, node, just, air, direnv, tokei, awscli, opentofu, kubectl, helm. Manual: Claude Code, Codex CLI, beads, timbers, gh, Docker, RTK. Most of the manual list is just tools without mise plugins yet.
Docker for the compose stack and integration tests
Docker is required for two things: the local just dev compose stack that brings up Strike’s services in development, and the api package’s integration suite (Strike uses testcontainers to spin up a real Postgres for just api test-integration). Without a Docker daemon the integration tests skip themselves cleanly, but just dev won’t start. Docker Desktop or Colima both work; you don’t need either for unit tests or frontend gates.
Strike does not yet support multiple docker compose stacks running on unique ports for concurrent worktree dev. Only one stack runs at a time; Working in Parallel (Mostly) covers the implications.
Where Strike’s env vars actually live
Strike layers env config across three files. A toolchain baseline (GOFLAGS=-race, NODE_ENV=development) lives in .mise.toml’s [env] block and loads on cd — these are useful even outside direnv; the trust handshake here is mise trust, once per machine. The bulk lives in .envrc — every service port, DATABASE_URL, ENCRYPTION_KEY, Render and Twilio integration, Keycloak, observability — gitignored, so on day one you run just setup (an idempotent recipe that copies .envrc.example → .envrc if absent, never overwriting) followed by direnv allow. Personal overrides (AWS profile, GH_TOKEN, machine-specific paths) go in .envrc.local, also gitignored, sourced from .envrc. Without .envrc, services will not start; just doctor is the safety net — it warns on a missing .envrc with the exact fix command. The per-developer trust handshake is direnv allow, re-prompted whenever .envrc changes; that handshake is deliberate.
Permission modes, sandbox, and how much rope you give the agent
Two layers shape what Claude Code can do without interrupting you. Permission modes — six of them, cycled mid-session with Shift+Tab — are the baseline cost/oversight trade. Sandboxing is an OS-level cage limiting what bash subprocesses can reach. They are independent; permission rules (allow, ask, deny) layer on top of either, in every mode except bypassPermissions.
The six modes: default (reads only), acceptEdits (reads, edits, common filesystem commands — good for iterating on code you’ll review via git diff), plan (proposes changes without making them), auto (runs everything but routes each action through a separate classifier model — the “fewer prompts, still safe” middle for daily work), dontAsk (only pre-approved tools, for CI and locked-down runs), and bypassPermissions (no checks — containers and VMs only, not a personal-laptop mode). In every mode except bypassPermissions, writes to protected paths (.git, .vscode, .idea, most of .claude/, shell rc files) are never auto-approved.
The mode worth understanding in detail is auto: not an allow list, but a separate classifier model evaluating each action. Reads and edits inside your working directory pass without review; everything else gets evaluated. The classifier blocks curl | bash, data exfiltration, production deploys, mass cloud deletions, IAM grants, irreversible destruction of pre-session files, and force-pushing or pushing to main. It allows local file ops, lockfile-declared dependency installs, read-only HTTP, and pushing to your starting branch. Three details: boundaries stated in chat (“don’t push until I review”) are enforced; broad allow rules like Bash(*) drop on entering auto (a guardrail against “I auto-allowed everything once and forgot”); after 3 consecutive or 20 total denials, auto pauses and prompting resumes.
Auto mode availability (May 2026): Max, Team, Enterprise, or API plans only — not Pro. Models: Sonnet 4.6, Opus 4.6, or Opus 4.7 on Team / Enterprise / API; Opus 4.7 only on Max. Anthropic API only. Requires Claude Code v2.1.83 or later.
Narrow allow entries like Bash(just check) save the classifier round-trip on routine recipes; the deny list is the hard floor. The sandbox (Seatbelt on macOS, bubblewrap on Linux/WSL2) limits subprocess reach regardless of what permissions approved — filesystem writes are confined to your working directory plus configured paths, and a proxy restricts subprocess HTTP to allowed domains by hostname only (no TLS termination, which matters for broad allowed domains like github.com). Permissions decide what runs; the sandbox decides how far what runs can reach.
BYOD: the threat-and-trust model is yours
Constructured uses BYOD — every contributor brings their own laptop, their own threat model, their own comfort with how much rope the agent gets. We teach the levers; we will not tell you where to land. Answer, in order: what runs without a prompt (mode plus allow rules); what requires explicit approval mid-flow (writes outside the repo, secrets reads, production talk, destructive ops); what is never allowed (home-directory writes, .ssh/, credentials, unrelated repos); and what changes when the machine changes. Some people ease in; some run auto from day one with a tight list. Both are correct in the sense that both are thought through. The wrong move is to pick a posture and stop revisiting it.
No setup is foolproof. An agent with broad permissions can run a malicious tool. An agent with narrow permissions can still leak data through what it writes to a log. The trust model is a moving target. The only wrong answer is to stop thinking about it.
Where to read more on the threat side
This trail points you at the threat surface and trusts you to find your own posture; The Injection Problem is the deeper read. Three cases worth knowing before you set your auto-allow list. Prompt injection through tool output: an agent reads a webpage, Slack message, or log line and, by design, does not distinguish “data to summarize” from “instructions hidden inside the data.” Codeless supply chain: a third-party skill, slash command, or plugin instruction file — natural-language artifacts with no executable code — can still steer a tool-calling agent, effectively a local RCE engine through novel means. Sensitive output in agent logs: logs end up in transcripts, transcripts end up in screenshots, screenshots end up in Slack — treat agent logs like commit messages.
- How We Build Here — The trail's opening cairn. The "your trust model is yours" framing is restated and deepened in this cairn.
- Claude Code as Daily Driver — The harness cairn. The sandbox-vs-auto-mode discussion here picks up where the harness cairn left off.
- The Injection Problem — The deep read on the threat side: prompt injection, supply-chain risks, and the failure modes of agents reading instructions and data through the same channel.
- mise — Official documentation for the polyglot version manager.
- testcontainers-go — The library Strike's api package uses to spin up a real Postgres for integration tests.
A reproducible box and a personal trust model are the two halves of the same posture. The reproducible box is what the project agrees on — versions of Go and Node, the local services, the environment variables that turn the dev experience from “fragile” into “predictable.” The trust model is what you agree on with your agent — what it can run unprompted, what it cannot, and where you have decided the boundary sits this week. Both halves matter; only one of them is something the team should pick for you.
This cairn covers both. The first half is short and prescriptive. The second half is longer and deliberately not prescriptive.
mise as the source of truth
mise is the polyglot version manager — the modern replacement for nvm, pyenv, goenv, rbenv, and asdf, all in one tool. It reads .mise.toml at the repo root and installs the exact versions of the languages and tools the project needs. Strike’s .mise.toml is short, deliberate, and the canonical answer to “which version of X should I use?”:
```toml
[tools]
go = "1.26.3"
node = "24"
"github:casey/just" = "latest"
air = "latest"
direnv = "latest"
tokei = "latest"
awscli = "2"
opentofu = "latest"
kubectl = "latest"
helm = "latest"
```
Run mise install once after cloning the repo, and the entire pinned toolchain materializes in a per-project sandbox under ~/.local/share/mise/installs/. The shell cd-hook activates the right versions automatically when you enter the directory. You will not pick versions; the project does.
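The cd-hook itself comes from mise's shell activation, wired once per machine into your shell rc. A sketch, assuming zsh (swap in `bash`, `fish`, etc. for your shell):

```shell
# ~/.zshrc (or your shell's rc file) — one-time setup, per machine
eval "$(mise activate zsh)"   # installs the cd-hook that activates pinned versions per directory
```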
Adding a new language version goes through .mise.toml. If you want a Go upgrade, the conversation happens at the file, not at every contributor’s machine. The same rule applies to bumping Node, swapping kubectl for a different version, or adding a new ops tool the team needs everyone to have.
mise pins runtime; the project owns the pins. If you find yourself working around a pinned version, raise it as a project concern — do not paper over it locally. Local workarounds become tomorrow’s “works on my machine” debugging session.
What’s in mise versus what isn’t
A short, focused recap of the Workshop. The pattern is consistent: anything that runs your code or your build is pinned through mise; anything that runs outside your build (your agent, your issue tracker, your shell-tool collection) is a manual install.
| Pinned in .mise.toml | Manual install |
|---|---|
| go, node | Claude Code, Codex CLI |
| just, air, direnv, tokei | beads, timbers |
| awscli, opentofu, kubectl, helm | gh, Docker, RTK |
The “manual install” column is mostly tools without mise plugins yet. Beads and timbers may join the catalog at some point — there is no fundamental obstacle, just nobody has done the packaging work yet. Treat the table as a snapshot of today, not a permanent boundary.
Docker for the compose stack and integration tests
Docker is required for two related-but-distinct things on Strike. First, the local just dev compose stack brings up Strike’s services for development; without a running Docker daemon, just dev won’t start. Second, the api package’s integration suite uses testcontainers to spin up a real Postgres for just api test-integration; without Docker, the integration tests skip themselves with a clear message and the rest of the suite continues. Either Docker Desktop or Colima works. Pick whichever your machine likes better.
Strike does not yet support multiple docker compose stacks running on unique ports for concurrent worktree dev. Today, only one stack runs at a time. Worktrees are still useful (Working in Parallel (Mostly) covers when), but if you are running the full just dev stack in one worktree, the second worktree cannot bring up its own copy on different ports without manual intervention. This is a known TBD.
You will not need Docker for just api test (unit tests) or just web check (frontend gates). If your work does not touch the api package or its integration boundaries, Docker can stay off until you do.
Where Strike’s env vars actually live
Strike layers env config across three files, and the split matters: each layer has a different audience and a different trust posture.
- .mise.toml's [env] block — the project-wide toolchain baseline. Today that is GOFLAGS=-race and NODE_ENV=development — values genuinely useful even outside a direnv-loaded shell, so they live with the toolchain. Loads automatically when you cd into the repo. Trust handshake: mise trust, once per machine.
- .envrc — the bulk of local dev config: every service port (STRIKE_NGINX_PORT, web/api/mock/mcp/keycloak), DATABASE_URL, ENCRYPTION_KEY, Render integration, mock-server vars, Twilio, Keycloak (KEYCLOAK_URL/realm plus frontend NEXT_PUBLIC_*), webhook auth, observability (LOG_LEVEL, OTEL_*). It is gitignored. Day-one setup is just setup — an idempotent recipe that copies .envrc.example → .envrc if absent (and never overwrites) — followed by direnv allow. After that, direnv re-prompts for direnv allow every time the file changes; that is the per-developer trust handshake, and it is deliberate. .envrc also runs mise hook-env, which is how mise's pinned tool versions reach the shell.
- .envrc.local — your personal overrides (AWS profile, GH_TOKEN=$(gh auth token ...), machine-specific paths). Sourced from .envrc if it exists. Also gitignored.
If .envrc does not exist, your services will not start — there are no ports configured, no encryption key, no Keycloak URL. just doctor is the safety net: it warns when .envrc is missing with the exact fix (just setup or just envrc && direnv allow). The reader’s job is to run setup, run direnv allow, and trust the doctor to flag drift later.
direnv handshake: the explicit direnv allow step that tells direnv you trust the .envrc in this directory. Re-prompted every time the file changes. Strike requires this on day one (after just setup) and again whenever .envrc changes upstream. just doctor is the safety net that surfaces a missing or stale handshake.
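Put together, a stripped-down .envrc follows this shape. This is an illustrative sketch, not Strike's actual file: the values are invented, and the real file comes from .envrc.example via just setup.

```shell
# .envrc — illustrative sketch only; real values come from .envrc.example
eval "$(mise hook-env)"        # surface mise's pinned tool versions in this shell

export STRIKE_NGINX_PORT=8080  # hypothetical value
export DATABASE_URL="postgres://localhost:5432/strike_dev"  # hypothetical

# personal overrides load last so they win
if [ -f .envrc.local ]; then
  source_env .envrc.local      # direnv stdlib helper for sourcing another env file
fi
```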
Permission modes, sandbox, and how much rope you give the agent
Two layers shape what Claude Code can do without interrupting you, and the picture has changed meaningfully since the older “sandbox-or-no-sandbox” framing some of us internalized a year ago. The current stack is:
- Permission modes — six of them, each making a different cost/oversight trade. You pick one as a baseline and can change mid-session.
- Sandboxing — an OS-level cage that limits what bash subprocesses can reach on the filesystem and network. Independent of the permission mode; configured separately.
Permission rules (allow, ask, deny lists) layer on top of either, in any mode except bypassPermissions. The next section is where you pick a posture; this one is the levers.
The six permission modes
Cycle modes mid-session with Shift+Tab; the active mode shows in the status bar.
| Mode | Runs without asking | When it’s the right pick |
|---|---|---|
| default | Reads only | Sensitive work, shared machines, anything where the wrong action is expensive |
| acceptEdits | Reads, file edits, common filesystem commands (mkdir, touch, rm, rmdir, mv, cp, sed) | Iterating on code you’ll review via git diff afterward |
| plan | Reads only — Claude proposes changes without making them | Exploring a codebase before touching it |
| auto | Everything, with a separate classifier checking each action for safety | Long, trusted tasks where you want fewer prompts but still want oversight |
| dontAsk | Only pre-approved tools | CI pipelines and locked-down non-interactive runs |
| bypassPermissions | Everything, no checks | Containers and VMs only — not a personal-laptop mode |
In every mode except bypassPermissions, writes to a small set of protected paths are never auto-approved. The list includes .git, .vscode, .idea, .husky, most of .claude/ (except .claude/commands, .claude/agents, .claude/skills, .claude/worktrees), shell rc files, .gitconfig / .gitmodules, .ripgreprc, .mcp.json, and .claude.json.
Auto mode is classifier-gated, not just an allow list
The mode worth understanding in detail is auto. It is not “skip permissions if the action is on a list” — it is “let a separate classifier model evaluate each action before it runs.” Read-only actions and edits inside your working directory pass without classifier review. Anything else (shell commands, network calls, edits outside the working dir) gets evaluated; if the classifier flags it as risky, the action is blocked and Claude has to find another path.
What the classifier blocks by default:
- Downloading and executing code (curl | bash and friends)
- Sending sensitive data to external endpoints
- Production deploys and migrations
- Mass deletion on cloud storage
- Granting IAM or repo permissions
- Modifying shared infrastructure
- Irreversibly destroying files that existed before the session
- Force-pushing or pushing directly to main
What the classifier allows by default:
- Local file operations in your working directory
- Installing dependencies declared in your lock files or manifests
- Reading .env and sending those credentials to the matching API
- Read-only HTTP requests
- Pushing to the branch you started on or one Claude created during the session
Three more details that matter:
- Boundaries you state in conversation are enforced. Tell Claude “don’t push until I review” and the classifier blocks matching pushes, even when its default rules would allow them. The boundary stays in force until you lift it. (Boundaries are read from the transcript on each check, so a context compaction that loses the message can lose the boundary — a deny rule is the hard guarantee.)
- Broad allow rules drop on entering auto mode. Blanket Bash(*), wildcarded interpreters (Bash(python*)), package-manager run commands, Agent allow rules — all suspended for the duration of the auto session. Narrow rules like Bash(npm test) carry over. This is a deliberate guardrail against the “I auto-allowed everything once and forgot” failure mode.
- Fallback after repeated blocks. If the classifier denies 3 actions in a row or 20 total in a session, auto mode pauses and the harness resumes prompting until you approve a prompted action.
The classifier runs on a separate, server-configured model independent of your /model selection. Each check is a round-trip and counts toward token usage.
Auto mode availability (May 2026): Max, Team, Enterprise, or API plans only — not Pro. Models supported: on Team / Enterprise / API, any of Sonnet 4.6, Opus 4.6, or Opus 4.7; on Max, only Opus 4.7. Anthropic API only — not Bedrock or Vertex. Team and Enterprise admins gate it. Requires Claude Code v2.1.83 or later.
Permission rules layer on every mode (except bypass)
Independent of which mode you pick, you can pre-approve or pre-deny specific tool patterns through permissions.allow, permissions.ask, and permissions.deny in your settings.json. A typical contributor’s settings look like:
```json
{
  "permissions": {
    "defaultMode": "auto",
    "allow": [
      "Bash(just check)",
      "Bash(just check-fast)",
      "Bash(just api fix)",
      "Bash(git status)",
      "Bash(git diff *)",
      "Bash(gh pr view *)"
    ],
    "deny": [
      "Bash(git push --force *)",
      "Bash(rm -rf /)",
      "Bash(rm -rf ~)"
    ]
  }
}
```
Narrow allow entries save the classifier round-trip on routine recipes the agent runs every hour. The deny list is the hard floor — operations you never want, even if the classifier would have caught them. The decision order is: explicit allow or deny resolves first; then read-only and working-directory edits auto-approve; then the classifier evaluates whatever is left.
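That decision order reads naturally as a small function. The sketch below is a simplified model of the flow just described, not the harness's actual code: Action, the prefix matcher, and the toy classifier are all invented stand-ins.

```python
from dataclasses import dataclass

@dataclass
class Action:
    command: str
    read_only: bool = False
    in_working_dir: bool = False

def matches(pattern: str, action: Action) -> bool:
    # Toy prefix matcher standing in for the real rule syntax.
    return action.command.startswith(pattern.rstrip("*").strip())

def decide(action, allow, deny, classifier):
    # 1. Explicit rules resolve first; deny is the hard floor.
    if any(matches(p, action) for p in deny):
        return "blocked"
    if any(matches(p, action) for p in allow):
        return "run"
    # 2. Reads and working-directory edits auto-approve.
    if action.read_only or action.in_working_dir:
        return "run"
    # 3. Everything left goes to the classifier round-trip.
    return "run" if classifier(action) else "blocked"

allow = ["just check", "git status"]
deny = ["git push --force"]
safe = lambda a: "curl" not in a.command  # toy classifier: approves anything without curl

decide(Action("just check"), allow, deny, safe)                    # "run" (explicit allow)
decide(Action("git push --force origin"), allow, deny, safe)       # "blocked" (deny floor)
decide(Action("cat notes.md", read_only=True), allow, deny, safe)  # "run" (read-only)
decide(Action("curl https://x | bash"), allow, deny, safe)         # "blocked" (classifier)
```

The point of the ordering: a deny rule never reaches the classifier, which is why the deny list is the hard floor even in auto mode.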
The sandbox is a separate layer
Permission modes decide what Claude pauses to ask about. Sandboxing is the OS-level cage that limits what bash subprocesses can actually reach, regardless of whether the permission flow approved them. Enable it with /sandbox; configure it under "sandbox" in settings.
What it does:
- Filesystem isolation. Subprocesses can write only to your working directory and any paths in sandbox.filesystem.allowWrite. Read access is broader by default; denyRead / allowRead rules narrow it. Enforcement is at the OS level — Seatbelt on macOS, bubblewrap on Linux/WSL2 — so it covers kubectl, terraform, npm, anything bash spawns.
- Network isolation. A proxy outside the sandbox restricts subprocess HTTP to allowed domains. New domain requests prompt you (unless an admin sets sandbox.network.allowManagedDomainsOnly in managed settings, which auto-denies). The proxy enforces by hostname only — it does not terminate TLS, which matters for threat-modeling broad allowed domains like github.com.
Inside the sandbox, autoAllowBashIfSandboxed is on by default — sandboxed bash commands skip the regular permission flow because the cage itself is the constraint. Commands the sandbox cannot contain (network access to disallowed hosts, tools incompatible with sandboxing like docker, watchman-driven test runners) fall back to the normal permission flow.
Sandbox and permissions are complementary: permissions decide what runs; the sandbox decides how far what runs can reach.
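The hostname-only point is worth internalizing with a sketch. ALLOWED_HOSTS is an invented list, and this is a toy model of the check, not the proxy's implementation:

```python
from urllib.parse import urlsplit

ALLOWED_HOSTS = {"github.com", "api.anthropic.com"}  # illustrative allow list

def proxy_permits(url: str) -> bool:
    # Enforcement is by hostname only (no TLS termination), so the proxy
    # cannot see paths: allowing github.com allows every repo on it.
    return urlsplit(url).hostname in ALLOWED_HOSTS

proxy_permits("https://github.com/strike/strike")        # True
proxy_permits("https://github.com/attacker/exfil-repo")  # True — same hostname
proxy_permits("https://evil.example.net/payload.sh")     # False
```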
bypassPermissions and the legacy dangerouslyDisableSandbox
Two terms that show up in older guidance and may still be in your settings:
- bypassPermissions is the permission mode that skips all checks. It is what --dangerously-skip-permissions activates. The right mode for agents inside dev containers, fresh VMs, or other isolated environments — and the wrong mode for a personal laptop.
- dangerouslyDisableSandbox is the per-tool-call escape hatch the harness uses to retry a sandboxed command outside the cage when the cage blocks something the agent legitimately needs (e.g., a docker command). Even when this fires, the permission flow still applies — the agent does not get a free pass. You can disable the escape hatch entirely with "allowUnsandboxedCommands": false in sandbox settings.
If you have year-old configurations that still treat per-call dangerouslyDisableSandbox as the daily mechanism, those are not wrong — but the layered modes above are the cleaner expression today. Auto mode in particular gives you the “fewer prompts, still safe” posture that older builds had to fake with broad allow lists.
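If you want the cage with no escape hatch at all, that is a one-line sandbox setting. A fragment, assuming the settings.json shapes shown earlier; verify the exact nesting against current docs before relying on it:

```json
{
  "sandbox": {
    "allowUnsandboxedCommands": false
  }
}
```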
BYOD: the threat-and-trust model is yours
Constructured uses BYOD — every contributor brings their own laptop, their own threat model, their own comfort with how much rope the agent gets. We can teach you the levers; we are not going to tell you where on each spectrum to land.
The questions you should be answering, in roughly this order:
- What runs without a prompt? Two parts: which permission mode you default to, and which allow rules layer on top. Be specific. Bash(just check) reaches differently than Bash(just *).
- What requires explicit approval, even mid-flow? Common answers: anything that writes outside the repo, anything that reads secrets, anything that talks to production, anything destructive (rm -rf, git push --force).
- What is never allowed, period? Common answers: writes to your home directory, reads of .ssh/, reads of credentials files, opens of unrelated repos.
- What changes when the machine changes? The trust model on a personal laptop is not the trust model on a borrowed work machine, a coworking space’s loaner, or a colleague’s setup you are pair-debugging on.
Some people ease in: start with manual approvals, expand the auto-allowed set as comfort grows, take a more conservative posture during weeks of high-stakes work. Some people run auto from day one with a tight list. Both are correct in the sense that both are thought through. The wrong move is to pick a posture and stop revisiting it.
A useful checkpoint: every quarter, re-read your auto-allowed list. Anything on it that you no longer remember why you added — remove it. Anything that surprised you in retrospect — restrict it. The list ages quietly; the discipline of re-reading keeps it honest.
No setup is foolproof. An agent with broad permissions can run a malicious tool. An agent with narrow permissions can still leak data through what it writes to a log. The trust model is a moving target. The only wrong answer is to stop thinking about it.
Where to read more on the threat side
This trail’s job is to point you at the threat surface and trust you to find your own posture. The deeper read on the threat side itself is The Injection Problem — a separate cairn that walks through prompt injection, supply-chain risks, and the specific failure modes that come from agents reading instructions and data through the same channel.
Three concrete cases worth knowing about before you set your auto-allow list:
- Prompt injection through tool output. An agent reads a webpage, a Slack message, an issue body, or a log line that contains instructions in human language. The agent does not, by design, distinguish “data the user wants me to summarize” from “instructions hidden inside the data.” If your auto-allow list includes broad shell access, an attacker who can put text in front of your agent can sometimes leverage that.
- Supply-chain risks now include things without code. Classically, supply chain meant npm packages, Go modules, MCP servers, shell utilities — code that runs in the agent’s environment. LLMs widen the surface. A third-party skill, a slash command, a plugin’s instruction file — natural-language artifacts with no executable code of their own — can still steer an agent that does have tool-calling power. An LLM with relaxed permissions plus a hostile-but-codeless skill is, effectively, a local RCE engine through entirely novel means. Your trust model has to extend to the prompts and skills you let the agent read, not just the binaries you let it execute. Understanding this is the first step toward sizing your own risk tolerance.
- Sensitive output in agent logs. Agents log a lot. Logs end up in transcripts, transcripts end up in screenshots, screenshots end up in Slack. Treat agent logs like commit messages: they will be read by people who were not in the room.
The Injection Problem is the long-form treatment of all three. Read it before you settle on an auto-allow list, and re-read it when something in the threat landscape changes.
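The log-leak case admits a cheap partial mitigation: scrub known secret shapes before log lines travel. A sketch with illustrative patterns only; nothing here is Strike tooling, and pattern-based redaction catches only the shapes you thought to list:

```python
import re

# Key names are illustrative; extend the alternation for your own env vars.
SECRET = re.compile(r"(?i)\b(token|password|api_key|encryption_key)\s*=\s*[^\s;]+")

def redact(line: str) -> str:
    # Replace the value but keep the key, so logs stay debuggable.
    return SECRET.sub(lambda m: f"{m.group(1)}=[REDACTED]", line)

redact("boot ok; api_key=sk-test-12345; port=8080")
# → "boot ok; api_key=[REDACTED]; port=8080"
```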
Summary
- mise pins the runtime; the project owns the pins. Run mise install once, and the entire toolchain materializes from .mise.toml.
- Manual installs are the small set of tools that live outside your build: Claude, Codex, beads, timbers, gh, Docker, RTK. direnv, just, and the dev-ops surface are pinned through mise.
- Docker is required for integration tests. Strike does not yet support concurrent docker compose stacks across worktrees; Working in Parallel (Mostly) covers the workaround.
- direnv asks for explicit allow. That prompt is the trust handshake. Read the .envrc, allow it, move on.
- Six permission modes, one sandbox layer. Modes range from default (asks for everything) to bypassPermissions (asks for nothing — containers only). Auto mode runs a separate classifier on each action and is the meaningful "fewer prompts, still safe" middle for daily work, when your plan and model qualify. Sandboxing is OS-level isolation that complements any mode.
- Auto mode is classifier-gated, not list-gated. A separate model evaluates each action; broad Bash(*)-style allow rules drop on entering auto. Boundaries you state in chat ("don't push") are enforced. After 3 consecutive or 20 total denials, auto pauses and prompting resumes.
- BYOD: the trust model is yours. The team will teach you the levers, not where to set them. Revisit the model quarterly; review your allow list and remove anything you no longer remember adding.
- The threat side is real. The Injection Problem is the deep read. Prompt injection, supply chain (now including codeless skills and prompts that steer tool-calling agents), and sensitive logs — all three deserve attention before you set permissions broadly.
- If you came in with a strong prior on permission modes — default, acceptEdits, or auto as your daily — what is it, and what signal would convince you to flip?
- The "review your allow list every quarter" suggestion is easy to write and easy to skip. What ritual would make it stick — pinned to a calendar, tied to a code review, attached to a release cycle?
- Codeless supply chain — a skill or slash command with hostile prompt content but no executable code — is a relatively new attack vector. What guardrail would you put on the install path for new skills and plugins to keep that risk bounded?
- BYOD on a personal laptop is one trust model. BYOD on a borrowed or shared machine is a different one. What about BYOD on a personal machine you sometimes lend to a family member? Where is the right place in our setup guidance to make those distinctions?
- How We Build Here — The trail's opening cairn. The "your trust model is yours" framing is restated and deepened in this cairn; read together for the full posture.
- The Workshop — The trail's tool map. The mise-vs-manual breakdown in this cairn is the deep read on the runtime row of that table.
- Claude Code as Daily Driver — The harness cairn. The sandbox-vs-auto-mode discussion here picks up where the harness cairn left off, and the orientation ritual it described is part of the trust posture.
- The Injection Problem — The deep read on the threat side: prompt injection, supply-chain risks, and the specific failure modes that come from agents reading instructions and data through the same channel.
- mise — Official documentation for the polyglot version manager. Install instructions, full .mise.toml reference, integration patterns. The canonical source.
- direnv — Official documentation for the per-directory environment manager. Useful when adding new variables to .envrc or troubleshooting the allow handshake.
- testcontainers-go — The library Strike's api package uses to spin up a real Postgres for integration tests. Background reading when integration tests do something surprising.
Generated by Cairns · Agent-powered with Claude