Give Claude Code, Cursor, and every MCP-compatible agent a persistent knowledge system that tracks confidence, records what failed, detects gaps, and gets smarter every session. Not a filing cabinet — a compounding knowledge graph.
$ pip install -U attestdb
$ attest brain install
After `attest brain install`, your coding agent gets lifecycle hooks
that fire automatically. No workflow changes needed.
The brain checks your git status (modified files, recent commits) and surfaces relevant warnings, bugs, patterns, and prior session outcomes. Your agent starts every session with context from every previous session.
When your agent opens a file for editing, the brain checks for known bugs, warnings, and patterns. "Last time you edited this file, you hit X" appears before the agent writes a single line.
Your agent calls attest_learned to record bugs, fixes, patterns,
decisions, and warnings. Each becomes a provenanced claim with a confidence
score. Fixes auto-link to bugs. Dependencies auto-generate inverse relationships.
After a test failure, the brain searches for prior fixes to similar test failures. If someone (you or another agent) solved this before, the solution surfaces immediately instead of re-debugging from scratch.
When the session ends, the outcome (success, partial, failure) is recorded. Claims from successful sessions gain confidence. Claims from failed sessions lose it. Over time, the brain learns which knowledge is reliable.
Every claim has a 0–1 confidence score that updates as evidence arrives. Multiple sources confirming the same fact = higher confidence. Contradictions = lower.
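One way such corroboration-driven scoring could work is a simple interpolation update toward 1 on confirmation and toward 0 on contradiction. This is an illustrative sketch, not Attest's actual algorithm; the `weight` parameter and the linear rule are assumptions:

```python
def update_confidence(confidence: float, corroborates: bool, weight: float = 0.3) -> float:
    """Move a 0-1 confidence toward 1.0 on a confirming source, toward 0.0 on a contradiction.

    The linear-interpolation rule and the 0.3 weight are illustrative assumptions.
    """
    target = 1.0 if corroborates else 0.0
    return confidence + weight * (target - confidence)

c = 0.5                                        # initial single-source claim
c = update_confidence(c, corroborates=True)    # second source confirms -> ~0.65
c = update_confidence(c, corroborates=True)    # third source confirms  -> ~0.76
c = update_confidence(c, corroborates=False)   # a contradiction pulls it back down
```

Any monotone update with these properties produces the behavior described: agreement compounds, disagreement erodes.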
Record what you tried that didn't work. Next time someone asks the same question, the brain says "this was tried before and it failed" instead of re-investigating.
The brain knows what it doesn't know. Single-source entities, low-confidence areas, and missing expected relationships surface as blindspots to investigate.
Stale knowledge loses confidence over time. A pattern discovered 6 months ago isn't as reliable as one confirmed yesterday. Configurable half-life.
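Half-life decay is the standard exponential form: confidence halves every `half_life_days` since the claim was last confirmed. A minimal sketch (the function and parameter names are assumptions, not Attest's API):

```python
def decayed_confidence(confidence: float, age_days: float, half_life_days: float = 90.0) -> float:
    """Halve a claim's confidence for every half-life elapsed since last confirmation."""
    return confidence * 0.5 ** (age_days / half_life_days)

# A 0.9-confidence pattern confirmed yesterday vs. one from 6 months ago:
fresh = decayed_confidence(0.9, age_days=1)    # ~0.893, barely touched
stale = decayed_confidence(0.9, age_days=180)  # 0.225, two half-lives gone
```

Shortening the half-life makes the brain more skeptical of old knowledge; lengthening it favors stability.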
When two claims conflict, the brain doesn't pick one arbitrarily. It compares provenance, recency, and corroboration to resolve with principled reasoning.
Every claim has a SHA-256 ID computed from its content + source + timestamp. Merkle audit chain. Full traceability from assertion back to origin.
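Content-addressed IDs of this kind can be derived in a few lines. The field separator and encoding below are assumptions about the scheme, not Attest's exact serialization:

```python
import hashlib

def claim_id(content: str, source: str, timestamp: str) -> str:
    """Derive a deterministic SHA-256 ID from a claim's content + source + timestamp."""
    # Unit-separator character keeps field boundaries unambiguous (an assumed convention).
    payload = "\x1f".join([content, source, timestamp])
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

cid = claim_id("webhook signature check was after JSON parse",
               "agent-session-12", "2025-06-01T10:00:00Z")
# Same inputs always yield the same 64-hex-char ID; any field change yields a new one.
```

Because the ID is a pure function of the claim, identical assertions from different sessions collide intentionally, which is what makes corroboration counting possible.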
## Attest Memory

*142 claims, 67 entities*

### Continue from previous session (2h ago)

Finish the payment webhook handler. Stripe test mode working.

### Warnings & patterns (relevant to current work)

- **[warning]** `stripe webhooks`: must verify signature before parsing body
- **[warning]** `checkout.session.completed`: customer field is nullable in test mode
- **[pattern]** `payment_handler.py`: always use idempotency_key on charge creation
- **[bug]** `stripe api v2025`: meter.create() doesn't accept metadata param

### Recent sessions (12 total)

- [+] success (2h ago) — Stripe webhook handler, 3 event types
- [+] success (yesterday) — Auth middleware, JWT validation
- [~] partial (2d ago) — Database migration, 2 of 5 tables done
```python
# Agent discovers a bug and records it
attest_learned("payment_handler.py",
               "webhook signature check was after JSON parse - must verify raw body first",
               "bug")

# Agent records the fix
attest_learned("payment_handler.py",
               "moved verify_signature() before json.loads() in handle_webhook()",
               "fix")
# ^^ automatically links this fix to the bug above

# Agent records what didn't work
attest_negative_result("stripe idempotency",
                       "using request_id as idempotency key causes duplicates on retry - must use checkout session ID")
```
| Capability | Attest Brain | Mem0 | Graphiti/Zep | Engram | Memelord |
|---|---|---|---|---|---|
| Store & recall | ✓ | ✓ | ✓ | ✓ | ✓ |
| Knowledge graph | ✓ | Pro only | ✓ | — | — |
| Confidence scoring | ✓ per-claim | — | — | — | EMA weights |
| Confidence decay | ✓ configurable | — | validity windows | — | implicit |
| Negative results | ✓ first-class | — | — | — | contradict |
| Gap detection | ✓ automated | — | — | — | — |
| Contradiction resolution | ✓ principled | — | temporal | — | delete |
| Provenance (source tracking) | ✓ SHA-256 chain | — | episodes | — | — |
| Audit trail | ✓ Merkle chain | — | — | — | — |
| Corroboration | ✓ cross-source | — | — | — | — |
| Pre-edit warnings | ✓ hook | — | — | — | — |
| No API keys required | ✓ | OpenAI required | Neo4j + LLM | ✓ | ✓ |
| Install commands | 2 | 5-7 (Docker) | 3+ | 2 | 2 |
LMDB backend via heed. 1.3M claims/sec insert, 8µs entity query. The same engine powers an 85M-claim production database.
Every fact is a (subject, predicate, object) triple with provenance, confidence, and timestamp. The graph is derived from claims, not the other way around.
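A claim of that shape can be modeled as a small immutable record; the graph is then just an index over these records. The field names below are illustrative, not Attest's actual schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Claim:
    """One provenanced fact: a (subject, predicate, object) triple plus metadata."""
    subject: str
    predicate: str
    object: str
    source: str
    confidence: float
    timestamp: str

c = Claim("payment_handler.py", "has_pattern",
          "always use idempotency_key on charge creation",
          source="session-12", confidence=0.8, timestamp="2025-06-01T10:00:00Z")
# Graph view is derived: nodes are subjects/objects, edges are predicates,
# and every edge carries its claim's provenance and confidence.
```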
Full knowledge graph operations: ingest, query, navigate, verify, predict, analyze. The brain uses a curated subset focused on learning.
```python
# Record knowledge
attest_learned(subject, description, type)  # bug, fix, pattern, warning, decision, tip
attest_negative_result(topic, finding)      # record what didn't work
attest_session_end(outcome, summary)        # end session with notes

# Recall knowledge
attest_get_prior_approaches(problem)  # find what worked before
attest_check_file(path)               # warnings for a file
attest_research_context(topic)        # full context before starting
attest_confidence_trail(entity)       # confidence evolution over time

# Analyze knowledge
attest_blindspots()                   # find gaps in knowledge
attest_ask(question)                  # natural language questions
attest_predict(entity_a, entity_b)    # causal prediction via graph
```
Full lifecycle hooks: SessionStart recall, PreEdit warnings, PostTest prior fixes, Stop session summary. The deepest integration.
MCP server auto-configured. Cursor also gets .cursorrules agent instructions for optimal tool usage.
MCP via .mcp.json or .gemini/settings.json. Full tool access, auto-detected on install.
Install the brain, start coding. It gets smarter every session.
$ pip install -U attestdb
$ attest brain install