A brain for your AI agents that actually learns

Give Claude Code, Cursor, and every MCP-compatible agent a persistent knowledge system that tracks confidence, records what failed, detects gaps, and gets smarter every session. Not a filing cabinet — a compounding knowledge graph.

$ pip install -U attestdb
$ attest brain install
Free & open source · Runs 100% locally · No API keys required · No cloud account
Makes these tools smarter
Claude Code / OpenClaw / Cursor / Windsurf / Codex / Gemini CLI
Coding agents
Personal agents
You're shipping a feature and hit a bug. Your agent solved this exact issue two weeks ago in a different branch — but has no memory of it.
✗ Without Brain
> The auth tests are failing after the Redis upgrade
Looking at the test output... It's a serialization
error. Let me try updating the session format...
✗ Still failing. Trying msgpack instead of JSON...
✗ Nope. Let me check the Redis docs for 7.x changes...
Ah, Redis 7 changed the default serialization.
Need to set decode_responses=True on the client.
✓ Tests pass now.
You fixed this same thing 2 weeks ago on another branch.
Tomorrow: starts from scratch again.
Session total: 40 min, 3 failed attempts
✓ With Brain
[session start] Warnings for your context:
⚠ redis client — v7 needs decode_responses=True
⚠ session serialization — msgpack doesn't work, use JSON + decode flag
> The auth tests are failing after the Redis upgrade
The brain already has this — Redis 7 changed the
default serialization. Adding decode_responses=True
to the client config. Also, I see that msgpack was
tried before and didn't work, so skipping that.
✓ Tests pass. First try.
[recorded] fix linked to previous bug automatically
Session total: 3 min, 0 failed attempts
You ask your agent to book a trip. It's the fourth time this month — but every time it asks the same questions and makes the same mistakes.
✗ Without Brain
> Book me a flight to Denver next Tuesday
What airline do you prefer?
> Delta, like I always say
Window or aisle?
> Aisle. You asked me this last week.
Found a flight with a 3h layover in Dallas...
> No layovers! I told you I hate layovers!
OK, here's a direct flight at 6am...
> I don't do flights before 9am either.
8 messages to book a simple flight.
Same Q&A every single time.
✓ With Brain
[recall] Travel preferences (confidence: 0.95):
• Delta preferred, aisle seat, no layovers
• No flights before 9am
• Negative result: Spirit Airlines — never again
> Book me a flight to Denver next Tuesday
Found Delta direct DEN, Tue 10:15am, aisle seat,
$342. Matches all your preferences. Book it?
> Yes
✓ Booked. Confirmation sent to your email.
[confidence +0.1] Delta preference reinforced
2 messages. Learns more with every booking.
The problem
Your agent forgets everything between sessions
When your coding agent discovers a bug, learns a pattern, or figures out that something doesn't work — that knowledge disappears when the session ends. Tomorrow it will make the same mistakes. Again.
| | Filing-cabinet memory | Attest Brain |
| --- | --- | --- |
| Store things | Key-value pairs, embeddings | Structured claims with SHA-256 provenance |
| Know what's still true? | No — memories are static forever | Yes — confidence decays, evidence updates scores |
| Track what failed? | No — only stores what worked | Yes — negative results are first-class |
| Find what's missing? | No — doesn't know what it doesn't know | Yes — automated gap detection and blindspot analysis |
| Handle contradictions? | Last write wins (or crashes) | Automated resolution with confidence weighting |
| Audit trail? | None — no idea where memories came from | Merkle chain — every claim traceable to source |
How it works
Five things that happen automatically
After attest brain install, your coding agent gets lifecycle hooks that fire automatically. No workflow changes needed.
01

Session starts — knowledge is recalled

The brain checks your git status (modified files, recent commits) and surfaces relevant warnings, bugs, patterns, and prior session outcomes. Your agent starts every session with context from every previous session.

02

Before edits — known issues surface

When your agent opens a file for editing, the brain checks for known bugs, warnings, and patterns. "Last time you edited this file, you hit X" appears before the agent writes a single line.

03

During work — findings are recorded

Your agent calls attest_learned to record bugs, fixes, patterns, decisions, and warnings. Each becomes a provenanced claim with a confidence score. Fixes auto-link to bugs. Dependencies auto-generate inverse relationships.

04

When tests fail — prior fixes appear

After a test failure, the brain searches for prior fixes to similar test failures. If someone (you or another agent) solved this before, the solution surfaces immediately instead of re-debugging from scratch.

05

Session ends — outcome is tracked

When the session ends, the outcome (success, partial, failure) is recorded. Claims from successful sessions gain confidence. Claims from failed sessions lose it. Over time, the brain learns which knowledge is reliable.
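The outcome-driven update can be sketched roughly like this. The function name, the delta values, and the clamping are illustrative assumptions, not Attest's actual tuning:

```python
def apply_session_outcome(claim_scores: dict[str, float], outcome: str) -> dict[str, float]:
    """Nudge every claim touched this session based on how the session ended.

    Successful sessions reinforce their claims; failed sessions penalize them.
    The deltas and 0-1 clamp here are placeholder values for illustration.
    """
    delta = {"success": 0.1, "partial": 0.0, "failure": -0.1}[outcome]
    return {cid: min(1.0, max(0.0, score + delta))
            for cid, score in claim_scores.items()}

scores = {"claim-a": 0.85, "claim-b": 0.95}
updated = apply_session_outcome(scores, "success")  # both gain, capped at 1.0
```

The clamp matters: a claim that is already near-certain cannot drift above 1.0, and repeated failures bottom out at 0.0 rather than going negative.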

What makes it different
Not just storage — epistemic governance
The entire agent-memory field treats memory as a retrieval problem. Attest Brain answers the harder question: is this still true?

Confidence scoring

Every claim has a 0–1 confidence score that updates as evidence arrives. Multiple sources confirming the same fact = higher confidence. Contradictions = lower.
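A minimal sketch of evidence-weighted updating, assuming a simple move-toward-target rule (the step size and function name are illustrative, not the documented algorithm):

```python
def update_confidence(score: float, supports: bool, step: float = 0.1) -> float:
    """Move the score toward 1.0 on corroborating evidence, toward 0.0 on contradiction."""
    target = 1.0 if supports else 0.0
    return score + step * (target - score)

score = 0.6
score = update_confidence(score, supports=True)   # first confirmation
score = update_confidence(score, supports=True)   # second source agrees
score = update_confidence(score, supports=False)  # a contradiction pulls it back down
```

Because each step is proportional to the remaining distance, confirmations of an already-confident claim move it less than confirmations of an uncertain one.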

Negative results

Record what you tried that didn't work. Next time someone asks the same question, the brain says "this was tried before and it failed" instead of re-investigating.

Gap detection

The brain knows what it doesn't know. Single-source entities, low-confidence areas, and missing expected relationships surface as blindspots to investigate.

Confidence decay

Stale knowledge loses confidence over time. A pattern discovered 6 months ago isn't as reliable as one confirmed yesterday. Configurable half-life.
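Half-life decay reduces to one line. The 90-day default below is a placeholder, since the real half-life is configurable:

```python
def decayed(score: float, age_days: float, half_life_days: float = 90.0) -> float:
    """Exponential decay: a score halves every `half_life_days` without fresh evidence."""
    return score * 0.5 ** (age_days / half_life_days)

decayed(0.9, 0)    # a fresh claim keeps its full score: 0.9
decayed(0.9, 180)  # six months old at a 90-day half-life: 0.225
```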

Contradiction resolution

When two claims conflict, the brain doesn't pick one arbitrarily. It compares provenance, recency, and corroboration to resolve with principled reasoning.
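One plausible resolution order, sketched with corroboration count first and recency as the tie-breaker (the field names and ordering are assumptions, not Attest's documented logic):

```python
from operator import itemgetter

def resolve(a: dict, b: dict) -> dict:
    """Prefer the better-corroborated claim; break ties by recency (ISO timestamps sort lexically)."""
    return max(a, b, key=itemgetter("corroborations", "timestamp"))

winner = resolve(
    {"value": "msgpack works fine", "corroborations": 1, "timestamp": "2025-05-01"},
    {"value": "use JSON + decode flag", "corroborations": 3, "timestamp": "2025-05-20"},
)
# winner is the thrice-corroborated, more recent claim
```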


Provenance chain

Every claim has a SHA-256 ID computed from its content + source + timestamp. Merkle audit chain. Full traceability from assertion back to origin.
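As a sketch, a content-addressed claim ID plus an append-only audit chain looks like this. The `|` join, the field order, and the linear chaining (a simplification of the Merkle structure) are all assumptions, not the documented wire format:

```python
import hashlib

def claim_id(content: str, source: str, timestamp: str) -> str:
    """SHA-256 over the claim's content, source, and timestamp (illustrative serialization)."""
    return hashlib.sha256(f"{content}|{source}|{timestamp}".encode()).hexdigest()

def chain(prev_hash: str, new_claim_id: str) -> str:
    """Each audit link commits to everything before it, so history can't be silently edited."""
    return hashlib.sha256((prev_hash + new_claim_id).encode()).hexdigest()

cid = claim_id("redis 7 needs decode_responses=True", "session-42", "2025-06-01T10:00:00Z")
head = chain("0" * 64, cid)  # genesis link; identical inputs always yield identical IDs
```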

In practice
What your agent sees
SessionStart hook output (automatic)
## Attest Memory
*142 claims, 67 entities*

### Continue from previous session (2h ago)
finish the payment webhook handler. stripe test mode working.

### Warnings & patterns (relevant to current work)
- **[warning]** `stripe webhooks`: must verify signature before parsing body
- **[warning]** `checkout.session.completed`: customer field is nullable in test mode
- **[pattern]** `payment_handler.py`: always use idempotency_key on charge creation
- **[bug]** `stripe api v2025`: meter.create() doesn't accept metadata param

### Recent sessions (12 total)
- [+] success (2h ago) — Stripe webhook handler, 3 event types
- [+] success (yesterday) — Auth middleware, JWT validation
- [~] partial (2d ago) — Database migration, 2 of 5 tables done
Agent records a finding (during work)
# Agent discovers a bug and records it
attest_learned("payment_handler.py", "webhook signature check was after JSON parse - must verify raw body first", "bug")

# Agent records the fix
attest_learned("payment_handler.py", "moved verify_signature() before json.loads() in handle_webhook()", "fix")
# ^^ automatically links this fix to the bug above

# Agent records what didn't work
attest_negative_result("stripe idempotency", "using request_id as idempotency key causes duplicates on retry - must use checkout session ID")
Comparison
How Attest Brain compares
We surveyed 24 agent-memory projects. Here's how the capabilities stack up.
Capability Attest Brain Mem0 Graphiti/Zep Engram Memelord
Store & recall
Knowledge graph Pro only
Confidence scoring ✓ per-claim EMA weights
Confidence decay ✓ configurable validity windows implicit
Negative results ✓ first-class contradict
Gap detection ✓ automated
Contradiction resolution ✓ principled temporal delete
Provenance (source tracking) ✓ SHA-256 chain episodes
Audit trail ✓ Merkle chain
Corroboration ✓ cross-source
Pre-edit warnings ✓ hook
No API keys required OpenAI required Neo4j + LLM
Install commands 2 5-7 (Docker) 3+ 2 2
Under the hood
Built on a real database, not a wrapper
Attest Brain isn't a thin layer over SQLite or a vector store. It's powered by a purpose-built claim-native database with a Rust storage engine.

Rust storage engine

LMDB backend via heed. 1.3M claims/sec insert, 8µs entity query. The same engine powers an 85M-claim production database.

Claims are atomic

Every fact is a (subject, predicate, object) triple with provenance, confidence, and timestamp. The graph is derived from claims, not the other way around.
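The claim shape described above can be modeled as a small immutable record; the field names here are illustrative, not the engine's actual schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Claim:
    subject: str
    predicate: str
    object: str        # the (subject, predicate, object) triple
    source: str        # provenance
    confidence: float  # 0-1 score
    timestamp: str

# The graph is derived: edges are just claims grouped by subject.
c = Claim("payment_handler.py", "has_pattern",
          "always use idempotency_key on charge creation",
          source="session-41", confidence=0.8, timestamp="2025-06-01T09:00:00Z")
```

Making the record frozen matches the "claims are atomic" framing: a claim is never mutated in place; new evidence produces new claims and new confidence scores.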

84 MCP tools

Full knowledge graph operations: ingest, query, navigate, verify, predict, analyze. The brain uses a curated subset focused on learning.

What the agent has access to
# Record knowledge
attest_learned(subject, description, type)     # bug, fix, pattern, warning, decision, tip
attest_negative_result(topic, finding)         # record what didn't work
attest_session_end(outcome, summary)           # end session with notes

# Recall knowledge
attest_get_prior_approaches(problem)           # find what worked before
attest_check_file(path)                        # warnings for a file
attest_research_context(topic)                 # full context before starting
attest_confidence_trail(entity)                # confidence evolution over time

# Analyze knowledge
attest_blindspots()                             # find gaps in knowledge
attest_ask(question)                            # natural language questions
attest_predict(entity_a, entity_b)             # causal prediction via graph
Compatibility
Works with your tools
Attest Brain auto-detects your IDE and configures itself. One install covers everything.

Claude Code / OpenClaw

Full lifecycle hooks: SessionStart recall, PreEdit warnings, PostTest prior fixes, Stop session summary. The deepest integration.

Cursor / Windsurf

MCP server auto-configured. Cursor also gets .cursorrules agent instructions for optimal tool usage.

Codex / Gemini CLI

MCP via .mcp.json or .gemini/settings.json. Full tool access, auto-detected on install.

Two commands. Zero config.

Install the brain, start coding. It gets smarter every session.

$ pip install -U attestdb
$ attest brain install