Cookbook · five recipes

Five patterns. One namespace.

How an addressable shared-memory tree turns into persistent agent memory, job queues, reactive multi-agent pipelines, self-distributing tools, and privacy-preserving synthesis — without an orchestrator, a queue server, a release pipeline, or a third party seeing your sensitive data.

Local-first reads · E2E private scope · Versioned writes · SSE heartbeat
01 Tier 1 · Persistent memory

Working memory that survives sessions

The first pattern every solo user gets the moment they finish bitpub init. Use this when an agent finishes work it might want again — a finding, a decision, an intermediate result, a follow-up — and you'd otherwise dump it in /tmp.

bitpub://private:agent_h7q2x9k4m1nz · your encrypted namespace (encrypted · 4 fresh)
└── Workspaces
    └── q3-launch · 4 slices · anchored by .bitpub/workspace.json
        ├── plan        v1  agent_h7q2  09:14
        ├── findings    v1  agent_h7q2  11:02
        ├── pr-draft    v2  agent_h7q2  15:48
        └── next-steps  v1  agent_h7q2  17:30

Coordination · solo across sessions

Day 1 saves into private encrypted memory; the next morning, reads come back zero-latency from the local cache.
09:14      agent_h7q2  push /Workspaces/q3-launch/plan v1
           "Step 1 — audit competitor landing pages…" (bitpub save plan "...")
11:02      agent_h7q2  push /Workspaces/q3-launch/findings v1
15:48      agent_h7q2  push /Workspaces/q3-launch/pr-draft v2 · bumps the existing slice
17:30      agent_h7q2  push /Workspaces/q3-launch/next-steps v1
           (overnight · agent terminates · machine sleeps)
09:02 +1d  agent_h7q2  catch-up ~/projects/q3-launch · 0 ms · local cache

All four slices restored from ~/.bitpub/cache.db; no network round-trip. Works offline.
CLI · day 1, inside ~/projects/q3-launch · bash

$ bitpub save plan "Step 1 — audit competitor landing pages..."
✓ Saved → bitpub://private:agent_h7q2x9k4m1nz/Workspaces/q3-launch/plan (v1)
$ bitpub save findings --file ./scratch/findings.md
$ bitpub save next-steps "Run the messaging test on the top three angles."
CLI · next morning, no chat memory, same folder · bash

$ cd ~/projects/q3-launch && bitpub catch-up
═══ Catch-up ═══
Workspace : q3-launch
Namespace : bitpub://private:agent_h7q2x9k4m1nz/Workspaces/q3-launch/

Your recent work (private):
  [2026-04-20 17:30] next-steps  Run the messaging test on the top three angles.
  [2026-04-20 15:48] pr-draft    ## Q3 launch positioning v2 ...
  [2026-04-20 11:02] findings
  [2026-04-20 09:14] plan

Why this works

The .bitpub/workspace.json marker dropped in the project folder is the only piece of state that survives across sessions — and that's enough. When the agent walks in tomorrow with no memory, it walks up from cwd until it finds the marker (the same way Git finds .git/), which resolves to the same namespace; the same namespace makes the same slices reachable.
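A minimal sketch of that walk-up, using only Node's standard library (findWorkspace is a hypothetical helper here, not a documented @bitpub/smp call):

import { existsSync } from "node:fs";
import { dirname, join } from "node:path";

// Hypothetical helper: walk up from cwd until .bitpub/workspace.json
// appears, the same way Git discovers .git/.
function findWorkspace(start = process.cwd()) {
  for (let dir = start; ; dir = dirname(dir)) {
    const marker = join(dir, ".bitpub", "workspace.json");
    if (existsSync(marker)) return marker; // anchors the namespace
    if (dir === dirname(dir)) return null; // reached the filesystem root: no workspace
  }
}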

The local-first read path is what makes catch-up feel instant. Reads hit ~/.bitpub/cache.db on disk, not the network. Even on a flight with no Wi-Fi, the agent's prior context is fully available; the fetch later just reconciles cloud-side updates.
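To make that path concrete, a cache-first read might look like the sketch below. The cache.db schema isn't documented here, so the slices(address, payload) table and the better-sqlite3 driver are both assumptions:

import Database from "better-sqlite3";
import { homedir } from "node:os";
import { join } from "node:path";

// Illustrative only: assumes a simple slices(address, payload) table.
const cache = new Database(join(homedir(), ".bitpub", "cache.db"), { readonly: true });

function readLocal(address) {
  // Hit: zero network round-trips, works offline.
  const row = cache.prepare("SELECT payload FROM slices WHERE address = ?").get(address);
  return row ? row.payload : null; // miss: fall through to a network fetch later
}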

Every other recipe in the cookbook layers on top of this one. Job queues (Tier 3) are the same primitive scaled to multi-agent coordination; team namespaces (Tier 2) are the same primitive widened to a domain. The skill an agent learns saving findings here is the same skill it uses pushing scrubbed insights to group:acme.com/Insights/....

02 Tier 3 · Coordination

Namespace as job queue

Use this when you have producer agents and worker agents and don't want to run a queue server. The namespace tree is the queue; --expect-version 0 turns the race into a safe claim.

bitpub://group:acme.com (LIVE · 6 slices · 2 fresh)
└── Jobs
    └── Transcripts · 3-state queue
        ├── Pending · producers append, workers watch & claim
        │   ├── meeting-2026-04-20  v1  agent_ingest  1m
        │   ├── meeting-2026-04-21  v1  agent_ingest  3m
        │   └── meeting-2026-04-22  v1  agent_ingest  5m
        ├── InProgress · first writer wins (--expect-version 0)
        │   ├── meeting-2026-04-20  v1  worker-3  23s
        │   └── meeting-2026-04-21  v1  worker-7  1m
        └── Done · outputs · downstream agents subscribe here
            ├── meeting-2026-04-19  v1  worker-1  2h
            └── meeting-2026-04-18  v1  worker-2  6h

Coordination · two workers race for one job

A producer drops a task; both workers see the heartbeat; --expect-version 0 turns the race into a safe claim.
10:00:00  agent_ingest  push /Pending/meeting-2026-04-20 v1
          creates the slot · emits SSE heartbeat to all watchers
10:00:01  worker-3  watch ← event received on /Pending/**
10:00:01  worker-7  watch ← event received on /Pending/**
10:00:02  worker-3  claim /InProgress/meeting-2026-04-20 v1 → 200 OK
          push --expect-version 0 · slot transitioned 0 → 1
10:00:02  worker-7  claim /InProgress/meeting-2026-04-20 → 409 Conflict
          lost the race · falls back to the next pending id
          (worker-3 processes for ~14 min)
10:14:08  worker-3  push /Done/meeting-2026-04-20 v1
          downstream agents (Recipe 03) wake on this event
Producer · ingest agent drops a transcript · bash

$ bitpub push \
    --address "bitpub://group:acme.com/Jobs/Transcripts/Pending/meeting-2026-04-20" \
    --file ./transcript.txt
✓ Pushed (v1)
Worker · subscribe + claim safely · bash

# 1. Watch the pending queue (SSE — no polling)
$ bitpub watch --address "bitpub://group:acme.com/Jobs/Transcripts/Pending/**"

# 2. On each event, race for the claim slot:
$ bitpub push \
    --address "bitpub://group:acme.com/Jobs/Transcripts/InProgress/meeting-2026-04-20" \
    --content "claimed-by: worker-3 at 2026-04-20T10:00:03Z" \
    --expect-version 0   # slot must not exist yet

# → 200 OK       worker-3 owns the job, processes it, then writes Done/<id>
# → 409 Conflict another worker claimed first; pick the next pending id

Why this works

The three-state pattern generalizes. Pending/ → InProgress/ → Done/ (plus an optional Failed/) is the basic shape; producers and consumers only ever interact with paths, never with each other. Add a worker by pointing a watcher at Pending/**; remove a worker by killing it. No service registry, no orchestrator config, no PR to add a queue.

--expect-version 0 is a coordination primitive, not a safety feature. The producer has just written v1 to Pending/meeting-2026-04-20. Both workers see the heartbeat. Both call push. The first one to land wins because the slot at InProgress/meeting-2026-04-20 goes from "doesn't exist" (v0) to v1. The second worker's push declares "I expect v0" — it sees v1 and the server returns 409. Both workers know unambiguously who owns the job.
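In SDK terms the claim is a single conditional push. A sketch, assuming @bitpub/smp mirrors the CLI's --expect-version flag as an expectVersion option and surfaces the HTTP status on the thrown error:

import { BitPub } from "@bitpub/smp";

const tb = new BitPub();

// Try to move the InProgress slot from "doesn't exist" (v0) to v1.
async function claim(id) {
  try {
    await tb.push(`bitpub://group:acme.com/Jobs/Transcripts/InProgress/${id}`, {
      content: `claimed-by: worker-3 at ${new Date().toISOString()}`,
      expectVersion: 0, // assumed SDK spelling of --expect-version 0
    });
    return true; // 200 OK: this worker owns the job
  } catch (err) {
    if (err.status === 409) return false; // lost the race: pick the next pending id
    throw err;
  }
}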

Work in flight survives crashes. The state of the queue lives in Postgres, not in worker memory. Restart a worker mid-job and it can detect that the stranded InProgress/<id> still carries its own written_by hash; recovery is just picking the job back up from that slice.
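A recovery pass on restart might look like this sketch. The tb.list call, the per-slice written_by field, and tb.config.keyHash are assumptions (the CLI exposes bitpub list; the SDK surface isn't shown above), and resumeJob is a hypothetical helper:

// On restart: scan InProgress/ for jobs this worker's key already claimed.
const stranded = await tb.list("bitpub://group:acme.com/Jobs/Transcripts/InProgress/**");
for (const slice of stranded) {
  if (slice.written_by === tb.config.keyHash) { // keyHash: assumed config field
    await resumeJob(slice.address); // hypothetical: pick the job back up
  }
}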

03 Tier 3 · Coordination

Reactive agents — the namespace graph is the execution graph

Use this for asynchronous multi-agent pipelines. A producer emits a slice; N subscribers fan out independently; each writes its own derived slice; some of those become inputs to the next layer. Nobody wires anything to anyone.

bitpub://group:acme.com (LIVE · 4 fresh · 3 subscribers)
├── CRM/Prospects · producer writes here · 3 subscribers fan out
│   └── AcmeCo  v1  agent_crm_sync  30s
├── Research/Prospects · researcher writes · drafter watches
│   └── AcmeCo  v1  agent_researcher  12s
├── Content/Drafts/Social · drafter writes · two hops downstream
│   └── AcmeCo  v1  agent_drafter  4s
└── Intel/Competitors · competitor monitor writes · independent branch
    └── AcmeCo  v1  agent_intel  6s

Coordination · one push, three reactions, two-hop pipeline

The producer doesn't know who's listening; the subscribers don't know about each other; the namespace graph is the execution graph.
14:32:00  agent_crm_sync    push /CRM/Prospects/AcmeCo v1
          SSE fan-out → all watchers of /CRM/Prospects/** wake
14:32:00  agent_researcher  watch ← event received · starts research
14:32:00  agent_intel       watch ← event received · starts competitor scan
14:32:00  agent_drafter     watch /Research/Prospects/** (waiting · second hop)
          (three agents work in parallel)
14:32:18  agent_researcher  push /Research/Prospects/AcmeCo v1
          → wakes agent_drafter (next layer of the pipeline)
14:32:18  agent_drafter     watch ← event received · starts drafting
14:32:24  agent_intel       push /Intel/Competitors/AcmeCo v1
14:33:02  agent_drafter     push /Content/Drafts/Social/AcmeCo v1
          62 seconds end-to-end · zero orchestrator
Producer (one push, three reactions) · bash

$ bitpub push \
    --address "bitpub://group:acme.com/CRM/Prospects/AcmeCo" \
    --file ./prospect-record.json
Subscribers (three independent processes / agents) · js · @bitpub/smp

import { BitPub } from "@bitpub/smp";

const tb = new BitPub(); // auto-provisions if needed

// Researcher
for await (const evt of tb.watch("bitpub://group:acme.com/CRM/Prospects/**")) {
  const prospect = JSON.parse(evt.payload.content); // the pushed prospect-record.json
  const brief = await researchProspect(prospect);
  await tb.push(`bitpub://group:acme.com/Research/Prospects/${brief.name}`, {
    content: brief.markdown,
  });
}

Why this works

The graph lives in addresses, not in code. No central scheduler. No DAG framework. No "step 2 fires after step 1" config. The producer writes a path; the subscribers' paths declare what they care about. Add a fourth subscriber by deploying a fourth watcher — the producer doesn't even learn that a new agent now reacts to its writes.

Two-hop pipelines compose for free. The drafter doesn't watch CRM/ — it watches Research/, one layer downstream of the researcher. Each agent only knows its own input scope and its own output scope; the pipeline shape is whatever the union of those scopes happens to be.
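The drafter's entire wiring is one watch on its input scope, in the same style as the researcher above (draftSocialPost and the evt.name field are illustrative assumptions):

// Drafter: second hop. Watches the researcher's output, never CRM/.
for await (const evt of tb.watch("bitpub://group:acme.com/Research/Prospects/**")) {
  const post = await draftSocialPost(evt.payload.content); // hypothetical helper
  await tb.push(`bitpub://group:acme.com/Content/Drafts/Social/${evt.name}`, {
    content: post,
  });
}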

Pair this with Recipe 02 when load is real. When you have multiple researchers and a single brief should only be written once, the watch fires for all of them but only one wins the --expect-version 0 race for Research/Prospects/<name>. The other watchers move on to the next event without re-doing work. Reactive + claim is the full pattern.
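A sketch of the combined shape, reusing the assumed expectVersion option and err.status field from Recipe 02: every researcher wakes on the event, one wins the claim, the rest skip without redoing the work.

for await (const evt of tb.watch("bitpub://group:acme.com/CRM/Prospects/**")) {
  try {
    // Claim before the expensive research, so losers bail out early.
    await tb.push(`bitpub://group:acme.com/Research/Prospects/${evt.name}`, {
      content: "claimed", // placeholder; the real brief lands as v2 below
      expectVersion: 0,
    });
  } catch (err) {
    if (err.status === 409) continue; // another researcher won: next event
    throw err;
  }
  const brief = await researchProspect(JSON.parse(evt.payload.content));
  await tb.push(`bitpub://group:acme.com/Research/Prospects/${evt.name}`, {
    content: brief.markdown, // bumps the claimed slice to v2
  });
}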

04 Tier 1 · Shared knowledge

Self-distributing tools — no repo, no release, no install step

Use this when an agent or a teammate writes a useful script that should run on every other agent's machine. Push the script to a stable address; everyone else pulls the current version on demand. Updates ship instantly.

bitpub://group:acme.com (6 slices · 1 fresh)
└── Tools · no repo · no release · pull-by-address
    ├── ingest-script  v3  agent_alice  2d
    ├── connectors · 3 slices
    │   ├── granola-ingest   v7  agent_bob    3h
    │   ├── salesforce-poll  v2  agent_alice  5d
    │   └── notion-export    v1  agent_carol  today
    └── runbooks · 2 slices
        ├── on-call-escalation      v12  agent_dan    1w
        └── db-migration-checklist  v4   agent_alice  3w

Distribution · push once, every reader gets the next version

No repo, no PR, no release pipeline. Update is a push; install is a read.
Mon 14:02  agent_alice  push /Tools/connectors/granola-ingest v6
           first version of the connector
           (~2 days · v6 in production)
Wed 09:18  agent_bob    push /Tools/connectors/granola-ingest v7
           fixes a parsing bug · attribution + version recorded automatically
Wed 11:30  agent_carol  read /Tools/connectors/granola-ingest → gets v7 automatically
Wed 14:52  agent_dan    read /Tools/connectors/granola-ingest → gets v7
Thu 08:40  agent_eve    read /Tools/connectors/granola-ingest → gets v7

No install. No PR. No release. v7 ships the moment agent_bob pushes.
Author publishes the agent / connector once · bash

$ bitpub push \
    --address "bitpub://group:acme.com/Tools/connectors/granola-ingest" \
    --file ./granola_ingest.py
✓ Pushed (v7)  written_by: agent_bob
Any agent / teammate fetches and runs · bash

$ bitpub read \
    --address "bitpub://group:acme.com/Tools/connectors/granola-ingest" \
    --format raw > granola_ingest.py
$ python granola_ingest.py
✓ Pulled 12 transcripts → bitpub://group:acme.com/Meetings/...

Why this works

The address is the package name. A teammate doesn't need to know what repo a tool lives in, what branch, or what release tag — they need the address. Distribution collapses to "I have the URL, I have the key for that scope, I can read it."

Versions and attribution come for free. Every push records written_by (a hash of the writing API key) and increments a version counter. Diffing v6 vs. v7 is one read each. Reverting is one push of an older payload. The audit trail is the storage primitive.
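Sketched as two reads and a push, assuming pull() can address a specific version (only latest-version reads appear in the recipes, so the { version } option is a guess):

import { diffLines } from "diff"; // any text-diff library works here

const addr = "bitpub://group:acme.com/Tools/connectors/granola-ingest";
const v6 = await tb.pull(addr, { version: 6 }); // assumed option
const v7 = await tb.pull(addr, { version: 7 });
console.log(diffLines(v6.payload.content, v7.payload.content));

// Revert = one push of the older payload; the counter still moves forward (to v8).
await tb.push(addr, { content: v6.payload.content });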

This composes with Recipe 03. Push a tool, declare it in Config/Agents/<name>.md with frontmatter pointing at tools: bitpub://group:acme.com/Tools/connectors/**, and a runtime that reads agent definitions from BitPub picks up new tools the moment they land. The runner doesn't redeploy; the agent doesn't reload. The next watch event uses the new tool.
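An illustrative agent definition: only the tools: pointer comes from the paragraph above; the other frontmatter keys are assumptions about what such a runtime might read.

---
name: researcher
watch: bitpub://group:acme.com/CRM/Prospects/**   # assumed key
tools: bitpub://group:acme.com/Tools/connectors/**
---
Research each new prospect and write a brief to Research/Prospects/<name>.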

05 Tier 4 · Boundary-crossing

Privacy-preserving synthesis — sensitive in, sanitized out

Use this when raw content can't leave your machine in plaintext but the insights from that content should be broadly useful to the team. The private: scope is encrypted client-side; only the synthesized, scrubbed output reaches group:.

bitpub://private:alice · alice-only · server stores ciphertext (encrypted · ciphertext at rest)
└── Transcripts/Granola
    ├── 2026-Q1-prospect-call  v1  agent_alice  2h
    ├── 2026-Q1-vendor-review  v1  agent_alice  2h
    └── 2026-Q2-team-1on1      v1  agent_alice  2h

bitpub://group:acme.com · team-visible · plaintext (3 fresh)
└── Insights · scrubbed · readable by anyone in the org
    ├── prospect-themes-2026-q1  v1  agent_alice  1m
    ├── vendor-coverage-2026-q1  v1  agent_alice  1m
    └── team-pulse-2026-q2       v1  agent_alice  1m

Coordination · sensitive in, sanitized out

Two enforcement layers — server-side scope ACL + client-side AES-256-GCM. The classify agent is the trust boundary.
14:00  agent_ingest  push private:alice/Transcripts/Granola/2026-Q1-call
       AES-256-GCM client-side · server stores encrypted:v1:<ciphertext>
14:01  agent_ingest  push private:alice/Transcripts/Granola/2026-Q1-vendor
14:02  agent_ingest  push private:alice/Transcripts/Granola/2026-Q2-1on1
       (classify agent runs on alice's machine)
14:30  agent_classify  pull private:alice/Transcripts/Granola/**
       decrypted in-process only · never persisted as plaintext · PII redact + customer-name scrub
14:32  agent_classify  push group:acme.com/Insights/prospect-themes-2026-q1 v1
       plaintext · sanitized · team-visible · no raw transcript ever crossed the boundary
Step 1 · ingest into private (encrypted) namespace · js · @bitpub/smp

import { BitPub } from "@bitpub/smp";

const tb = new BitPub();

for (const t of await pullGranolaTranscripts()) {
  // Content is AES-256-GCM-encrypted before it leaves this process.
  await tb.push(`bitpub://private:${tb.config.owner}/Transcripts/Granola/${t.slug}`, {
    content: t.text, // plaintext locally
    tags: "transcript,granola",
  });
}
Step 2 · classify locally, push only the sanitized synthesis · js · @bitpub/smp

const raw = await tb.pull(`bitpub://private:${tb.config.owner}/Transcripts/Granola/**`);
// raw[*].payload.content is decrypted in-process — never persisted plaintext.

const themes = await classifyAndScrub(raw); // PII removal, customer scrubbing

await tb.push("bitpub://group:acme.com/Insights/prospect-themes-2026-q1", {
  content: themes.markdown,
  tags: "insight,quarterly,sanitized",
});
// group: payloads are NOT encrypted — readable by anyone in the org.

Why this works

Two enforcement layers, one address scheme. The server-side check ensures only the API key whose owner matches can read, write, list, or delete the private slice — even with a domain key, you can't reach into someone else's private namespace. The client-side encryption ensures even a compromised database admin sees only ciphertext.
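For concreteness, a sketch of the client-side layer with Node's built-in crypto, producing the encrypted:v1:<ciphertext> shape the timeline shows the server storing. Key management and the exact byte layout are assumptions:

import { randomBytes, createCipheriv } from "node:crypto";

// Illustrative AES-256-GCM seal applied before any private: push.
function sealForPrivateScope(plaintext, key /* 32-byte Buffer from the local keychain */) {
  const iv = randomBytes(12); // standard GCM nonce size
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ct = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  const tag = cipher.getAuthTag();
  // Byte layout (iv | tag | ciphertext) is an assumption; the server only ever sees this string.
  return `encrypted:v1:${Buffer.concat([iv, tag, ct]).toString("base64")}`;
}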

The classify step is the trust boundary. One agent, on one machine, reads sensitive data, applies the org's privacy rules in code that's version-controlled and reviewable, and emits scrubbed output. No raw transcript ever lands in group:. The pipeline is statically inspectable: you can bitpub list both namespaces and confirm the shape.

This is what makes multi-agent pipelines safe to share. An ingest script in Tools/connectors/granola-ingest (Recipe 04) is safe to give to the whole team — it only touches private:. The classify script that embeds the org's privacy rules stays controlled. Mix the two into one agent and you've made the script un-distributable.

Read the rest of the cookbook on GitHub.

Nine more patterns — declarative agent registries, append logs, content workflows, addressable synthesis, working memory across agents — plus the namespace design principles that hold them all together.

$ curl -fsSL https://bitpub.io/install.sh | bash