Threadbaire

Glossary

The language we use to make memory work across models and time. One-sentence definitions; why it matters; where to see it.

Data stays in your stack. We never train on your content. Privacy

LLM-agnostic memory

In one sentence: Portable, provenance-first context that works across GPT, Claude, and local models, with no lock-in.

Why it matters: Your strategy survives tool and model changes; outputs stay comparable on the same history.

  • Portable across GPT, Claude, and local/Ollama.
  • Receipts on every recall (who/what/when/which model/source).
  • Role- and project-threaded recall; not another notes app.

See it on the demo · Methods (Pilot-only)

What “agnostic” means in practice
  • Same history produces comparable answers across models.
  • Provenance (receipts) travels with the recall.
  • No provider-specific tools are required to replay.
  • Clear pass rules so teams can decide when a swap is “safe.”
Examples
  • Switch from GPT → Claude without losing decision rationale.
  • Work offline on a local model and keep the same memory thread.
  • Hand off to a teammate: they instantly see what changed, who decided, and why.
FAQ

What does “LLM-agnostic” mean? Your memory and context work with any model (GPT, Claude, local/Ollama, and future models) without provider lock-in.

Why not just use one provider’s memory? Portability + provenance. Threads survive tool changes, and every recall carries who/what/when/which model so decisions remain auditable.

How is this different from a notes app? It’s a recall + provenance layer tied to roles and projects. It feeds the right slice back into whatever model you use next.

Memory slice

In one sentence: A compact export of the relevant thread (decisions, sources, role) you can replay on any model.

Why it matters: Enables apples-to-apples comparison (GPT ↔ Claude ↔ local) and quick handoffs.

See it on the demo
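
For illustration, a minimal Python sketch of what a slice export might look like; the field names are assumptions, not the actual schema.

```python
import json

# Hypothetical slice export; field names are illustrative, not the schema.
memory_slice = {
    "project": "pricing-page-revamp",
    "role": "PM",
    "objective": "Ship the new pricing page this quarter",
    "events": [
        {
            "when": "2025-03-04T16:20:00Z",
            "what": "Dropped annual-only pricing",
            "who": "dana@example.com",
            "why": "Trial data showed monthly converts better",
            "source": "https://example.com/notes/123",
        },
    ],
}

# The same payload can be handed to GPT, Claude, or a local model.
print(json.dumps(memory_slice, indent=2))
```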

Provenance-first

In one sentence: Memory that carries its own receipts: each recall points to source, who, when, and (if AI-assisted) which model.

Why it matters: Prevents contextless answers, makes model swaps auditable, and enables stable comparisons on the same history.

Methods (Pilot-only) · See MPR

Provenance chip (MPR)

In one sentence: A tiny JSON stamp attached to each recall showing who/what/when/which model/source (plus optional integrity hash).

Why it matters: Makes answers auditable and reproducible across LLMs.

Under the hood (gated)
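
A minimal sketch of what such a chip could look like, with illustrative field names (the real chip layout is gated):

```python
import hashlib
import json

# Hypothetical chip; fields mirror the who/what/when/model/source description
# above, but the names and layout are illustrative, not the gated spec.
chip = {
    "who": "dana@example.com",
    "what": "Recalled Q3 pricing decision",
    "when": "2025-03-04T16:21:07Z",
    "model": "claude-sonnet",  # set only when the recall was AI-assisted
    "source": "thread://pricing/decisions/42",
}

# Optional integrity hash over the canonical JSON form.
canonical = json.dumps(chip, sort_keys=True, separators=(",", ":"))
chip["hash"] = hashlib.sha256(canonical.encode()).hexdigest()

print(json.dumps(chip, indent=2))
```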

Hybrid retrieval

In one sentence: Mix lexical/regex with vectors and filters to cast a smart net before assembly.

Why it matters: Reduces junk; surfaces the right context with fewer tokens.

See it on the demo
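
A minimal sketch of the idea in Python; the blend weights and filters are illustrative assumptions, not the production retriever:

```python
import re
from dataclasses import dataclass

@dataclass
class Event:
    text: str
    role: str
    project: str
    vector: list[float]  # precomputed embedding (toy values in a real test)

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def hybrid_retrieve(events, query_text, query_vec, role, project, k=5):
    """Filter first, then blend a lexical hit with vector similarity."""
    words = [re.escape(w) for w in query_text.split()]
    pattern = re.compile("|".join(words), re.IGNORECASE) if words else None
    candidates = [e for e in events if e.role == role and e.project == project]
    scored = []
    for e in candidates:
        lexical = 1.0 if pattern and pattern.search(e.text) else 0.0
        semantic = cosine(query_vec, e.vector)
        scored.append((0.4 * lexical + 0.6 * semantic, e))  # illustrative weights
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [e for _, e in scored[:k]]
```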

Provenance

In one sentence: The “receipts” carried with each recall: who decided, what changed, when, and which model or source was used.

Why it matters: Makes AI-influenced work auditable and speeds up reviews and handoffs.

See also: Provenance-first · Provenance chip (MPR)

Re-rank

In one sentence: Order candidates after retrieval to promote, dedupe, and diversify before capping.

Why it matters: Tight, relevant context > maximal context; improves answer stability.

Signals we consider include relevance, recency, source quality, and role fit; specifics are covered in the Pilot.
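
A minimal sketch of promote → dedupe → diversify → cap, with illustrative rules (the real signal weights are Pilot-only):

```python
def rerank(candidates: list[dict], cap: int = 8) -> list[dict]:
    """Order retrieved events: promote pinned items, drop duplicates,
    diversify by source, then cap. All thresholds are illustrative."""
    ordered = sorted(
        candidates,
        key=lambda c: (c.get("pinned", False), c.get("score", 0.0)),
        reverse=True,
    )
    seen_texts: set[str] = set()
    per_source: dict[str, int] = {}
    kept: list[dict] = []
    for c in ordered:
        key = c["text"].strip().lower()
        if key in seen_texts:                      # dedupe exact repeats
            continue
        if per_source.get(c["source"], 0) >= 2:    # diversify: max 2 per source
            continue
        seen_texts.add(key)
        per_source[c["source"]] = per_source.get(c["source"], 0) + 1
        kept.append(c)
        if len(kept) == cap:                       # cap before assembly
            break
    return kept
```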

Model-swap safe

In one sentence: Memory stays stable as you change models; you can compare outputs on the same slice.

Why it matters: Prices, policies, and capabilities shift; your process shouldn’t.

See it on the demo

Comparable if… (checklist)
  • Same memory slice (unchanged).
  • Same universal prompt style and role/objective (if used).
  • Deliberation budget capped to avoid runaway chains.
  • Pass rules met (e.g., preserves key timeline events; next step aligns).

Strategic recall

In one sentence: Pull the “why, who, and what-changed” in under 60 seconds for the current role and project.

Why it matters: Kills re-explaining loops; turns memory into a working surface, not an archive.

Typical pattern
  1. Filter by project and role to surface the last relevant events.
  2. Summarize facts → rationale into a quick timeline with receipts.
  3. Propose the next step consistent with the stated objective.
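
A minimal sketch of this pattern in Python; `propose_next_step` is a placeholder for whatever model you call next:

```python
def strategic_recall(events, project, role, propose_next_step):
    """Sketch of the filter → timeline → next-step pattern above."""
    relevant = [e for e in events
                if e["project"] == project and e["role"] == role]
    relevant.sort(key=lambda e: e["when"], reverse=True)  # newest first
    timeline = [
        f"{e['when']}: {e['what']} ({e['who']}), because: {e['why']} [{e['source']}]"
        for e in relevant[:5]
    ]
    return {"timeline": timeline, "next_step": propose_next_step(timeline)}
```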

Receipts by default

In one sentence: Decisions are stored with links to sources and owners, not just summaries.

Why it matters: Explains “who approved this?” without meetings.

See receipts

Role-threaded identity

In one sentence: Keep Founder/PM/Researcher threads clean so each recall uses the right perspective, without brittle prompts.

Why it matters: Reduces noise; improves relevance without prompt gymnastics.

Implementation notes
  • Tag events with role at capture time.
  • Switch roles via UI/metadata rather than prompt text.
  • Merge conflicting roles only on explicit request.
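
A minimal sketch of role-as-metadata capture and recall; the class and field names are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class MemoryThread:
    """Sketch: role is metadata on each event, not text in the prompt."""
    events: list = field(default_factory=list)

    def capture(self, text: str, role: str, project: str) -> None:
        # Tag the role at capture time, not at recall time.
        self.events.append({"text": text, "role": role, "project": project})

    def recall(self, role: str, project: str) -> list:
        # Switching roles is a metadata filter, not a prompt rewrite.
        return [e for e in self.events
                if e["role"] == role and e["project"] == project]
```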

Write-through capture

In one sentence: Capture happens passively as you work; decisions and sources flow into the memory thread.

Why it matters: No new workflow; less copy-paste; fewer gaps.

See it in the Approach
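
One way to picture it: a decorator sketch where the decision is captured as a side effect of doing the work (names are illustrative):

```python
memory_thread: list[dict] = []

def write_through(fn):
    """Decorator sketch: the decision lands in the thread as a side effect
    of doing the work, so there is no separate logging step."""
    def inner(*args, **kwargs):
        result = fn(*args, **kwargs)
        memory_thread.append({"what": result, "source": fn.__name__})
        return result
    return inner

@write_through
def approve_change(description: str) -> str:
    return f"Approved: {description}"

approve_change("Switch checkout copy to variant B")
print(memory_thread)  # captured passively, no copy-paste
```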

Replay

In one sentence: Run the same slice on different models to compare outputs or reproduce a result later.

Why it matters: Transparency for stakeholders; confidence to change vendors.

See it on the demo

Comparison rubric (quick)
  • Preserves at least two key timeline events (what, who, when, and why).
  • Proposed next step aligns with the stated objective.
  • Receipts are present; no fabricated sources.
  • Minor wording differences are acceptable.
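
A minimal sketch of replaying one slice across models; the callables stand in for real GPT/Claude/local clients:

```python
def replay(slice_json: str, models: dict) -> dict:
    """Sketch: feed the identical slice to each model and collect outputs
    for side-by-side review against the rubric above."""
    prompt = ("Using only this memory slice, summarize the timeline "
              "and propose the next step:\n" + slice_json)
    return {name: call(prompt) for name, call in models.items()}

# The lambdas are placeholders for real model clients.
outputs = replay(
    slice_json='{"events": []}',
    models={"gpt": lambda p: "...", "claude": lambda p: "...",
            "local": lambda p: "..."},
)
```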

Decision log

In one sentence: A timeline of AI-influenced changes with owners, sources, and timestamps.

Why it matters: Answers “who approved this?” without meetings; keeps scope honest.

We log timestamp, change description, rationale, owner, source URI, and (if relevant) model and status.
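
For illustration, a single hypothetical log entry with those fields (the key names are assumptions, not the actual schema):

```python
# One hypothetical row; keys mirror the fields listed above.
decision = {
    "timestamp": "2025-03-04T16:20:00Z",
    "change": "Dropped annual-only pricing",
    "rationale": "Trial data showed monthly converts better",
    "owner": "dana@example.com",
    "source_uri": "https://example.com/notes/123",
    "model": "claude-sonnet",  # present only when AI-assisted
    "status": "approved",
}
```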

Context rot

In one sentence: Degrading answer quality as bloated context windows accumulate noise and stale detail.

Why it matters: We cap and compact to keep recall tight and comparable across models.

See it in the Methods

Deliberation budget

In one sentence: A hard cap on “thinking” to prevent overlong chains and drift; lock early once quality is met.

Why it matters: Keeps runs reproducible; controls cost and variance across LLMs.

We cap steps/tokens and lock early when pass rules are satisfied; specifics are covered in the Pilot.
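
A minimal sketch of cap-and-lock-early; the step count and pass rules here are placeholders, not the Pilot’s actual limits:

```python
def deliberate(step_fn, passes, max_steps=6):
    """Sketch of a deliberation budget: iterate a reasoning step until the
    pass rules are met, never beyond the hard cap. Limits are placeholders."""
    state = None
    for step in range(max_steps):
        state = step_fn(state)
        if passes(state):            # lock early once quality is met
            return state, step + 1
    return state, max_steps          # budget exhausted: best effort
```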

Variance check

In one sentence: Replay passes when ≥2 key timeline events are preserved and the proposed next_action is aligned; phrasing differences are OK.

See also: Model-swap safe · Replay
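
A minimal sketch of this pass rule; substring matching is a simplification, since phrasing may legitimately differ:

```python
def variance_check(output_text: str, key_events: list[str],
                   expected_action: str) -> bool:
    """Sketch of the pass rule: at least two key timeline events survive
    and the proposed next_action lines up. Substring matching is a
    simplification; real checks would tolerate rewording."""
    text = output_text.lower()
    preserved = sum(1 for event in key_events if event.lower() in text)
    return preserved >= 2 and expected_action.lower() in text
```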

Keep going

Open the demo to see a slice replayed on two models, or read the Approach for posture and fit.