Threadbaire

Start here

Keep your context and decisions in one place when working with AI. Start with markdown files. Scale to a database when you need it. Let AI agents integrate themselves.

The Method

A simple way to keep AI memory and context together. Two markdown files you control. No installation, no platform, works with any model.

If you’re curious why the method stays small and file-based, read the Thesis.

What you get

Three files in the GitHub repository:

Core_Template.md

Holds stable information about your project: what it is, who it's for, voice, constraints, guardrails.

Addendum_Template.md

A running log of decisions and progress. Each entry has a receipt: who, why, source, date, model.

Extensions.md

Optional extras for specific project types: dev logs, content metrics, pipeline tracking.

How it works

1. Copy the templates

Download Core_Template.md and Addendum_Template.md. Rename them for your project.

2. Fill in the Core

Add your project's identity: what it is, who it's for, voice, constraints. Keep it short—this travels with every conversation.
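If it helps to see the shape, here is a hypothetical filled-in Core; the section names mirror the list above, and the actual Core_Template.md layout may differ:

```markdown
# Core: Example Newsletter

## What it is
A weekly newsletter about self-hosted software.

## Who it's for
Hobbyists running home servers. Assume basic Linux skills.

## Voice
Plain and direct. Short sentences. No hype.

## Constraints
- Issues stay under 800 words.
- Never recommend a paid tool without a free alternative.

## Guardrails
- Don't invent product features; link to documentation.
```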

3. Log as you go

When you make a decision or learn something durable, add an entry to the Addendum with a receipt.

4. Paste into new sessions

When you start a new AI conversation, paste the Core. Add Addendum entries when relevant. Context travels with you.

Receipts

Every Addendum entry includes a receipt—a short note recording where the decision came from:

who: [you or collaborator]

why: [reason for the decision]

source: [link, document, or conversation]

date: [when it happened]

model: [which AI, which version]

Receipts make your work traceable: prove where answers came from, replay the same setup on different models, and avoid re-explaining decisions.
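For example, a hypothetical Addendum entry might look like this; every value is illustrative, and the actual template defines its own layout:

```markdown
## Entry: Switched the newsletter to a plain-text format

- who: me
- why: HTML issues kept breaking in older mail clients
- source: reader complaints thread
- date: 2025-05-12
- model: Claude (Sonnet 4)
```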

The Server

After a few months, Addendum files get long, and AI context windows have limits. Threadbaire Server moves your entries to a database with a REST API.

Query instead of scroll

Filter by project, date range, keywords. Get just what you need.

AI reads via API

Give any AI model the URL. It fetches entries as JSON. No more copy-paste.

Your data stays yours

Deploy to your own Vercel account. SQLite locally, Postgres in production. Nothing sent anywhere else.
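As a sketch of what a query can look like: the /api/entries path, the parameter names, and the bearer-token auth below are assumptions for illustration, not the server's documented API.

```typescript
// Hypothetical query against a Threadbaire Server deployment.
// Endpoint path and parameter names are illustrative only.
const BASE = "https://your-deployment.vercel.app";
const TOKEN = process.env.THREADBAIRE_TOKEN ?? "";

async function recentEntries(project: string, since: string): Promise<unknown> {
  const url = new URL("/api/entries", BASE);
  url.searchParams.set("project", project); // filter by project
  url.searchParams.set("since", since);     // e.g. "2025-01-01"

  const res = await fetch(url, {
    headers: { Authorization: `Bearer ${TOKEN}` },
  });
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  return res.json(); // entries arrive as JSON, ready for an AI to read
}
```

The same request works from any AI that can make HTTP calls, which is the point: the model fetches the slice of history it needs instead of receiving the whole file.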

RundownAPI

An endpoint that teaches AI agents how to use your API. Not just "here are the endpoints" — but "here's when to use them."

The idea: Add a /api/rundown endpoint to any API. AI reads that URL, understands the API, and integrates itself. No SDK, no MCP server, no pre-built connectors.

What makes it different

OpenAPI tells you how to call an endpoint. RundownAPI tells the AI when to care.

MCP requires running servers. RundownAPI is just a JSON endpoint.

The spec includes behavioral instructions — triggers, behaviors, constraints. The AI knows what to do, not just what's available.
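To make that concrete, here is one rough shape a rundown payload could take; the field names below are illustrative, not the published spec:

```json
{
  "name": "Threadbaire Server",
  "description": "Project memory: decision and progress entries with receipts.",
  "endpoints": [
    {
      "path": "/api/entries",
      "method": "GET",
      "when": "The user asks about past decisions or project history.",
      "behavior": "Filter by project and date range, then cite the entry's receipt in the answer.",
      "constraints": "Read-only. Never write entries without explicit user confirmation."
    }
  ]
}
```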

Tested and working

Gave Claude Code the URL and a token. It read the endpoint, built its own integration, and started querying project history. No prior knowledge of the API. No setup on my side beyond the endpoint itself.

Open spec. CC BY-SA license. Use it for your own APIs.

Questions

Do I need to install anything?

No. The method is just markdown files. Copy them, edit them, paste them into AI conversations. The server and RundownAPI are optional — use them when you need them.

Does it work with ChatGPT? Claude? Other models?

Yes. Everything is model-agnostic. The method, server, and RundownAPI work with any AI that accepts text input or can make HTTP requests.

Do I need the server to use RundownAPI?

No. RundownAPI is an open spec. You can add it to any API. Threadbaire Server is just one implementation.

What's the difference between Core and Addendum?

Core holds stable information (project identity, rules, constraints). Addendum is a running log (decisions, progress, lessons learned). Core changes rarely; Addendum grows over time.

Why markdown?

It's portable, readable, and works everywhere. No vendor lock-in. Your context lives in files you control, not in someone else's platform. If you want the deeper reasoning, the Thesis spells it out.

Is this free?

Yes. The method, server, and RundownAPI spec are all open source. Use them, adapt them, share them.

Pick where you want to start: the Method, the Server, or RundownAPI.

Questions? Get in touch.