Code in GitHub. Architecture in Confluence. Customer context in Salesforce. Decisions buried in Slack threads. The only system that synthesizes all of it is human memory — bandwidth-limited, context-switching impaired, gone when people leave. Attest is the persistent layer underneath: every fact sourced, every claim confidence-weighted, nothing lost when someone walks out the door.
Not a search engine. Not a chatbot. Not a long context window — long context with weak reasoning is actively harmful. Attest is a knowledge database: every claim has a source, a confidence score, and a contradiction detector. It updates when facts change. SaaS tools become data sources — Attest is the system of record for what is actually known and trusted.
Not embeddings. Not documents. Claims with subjects, predicates, objects — queryable, traversable, composable. Knowledge you can reason about.
Every fact carries its source. When a source is wrong, everything downstream is automatically flagged. No more “where did we get this number?”
Confidence changes as evidence accumulates. Contradictions coexist until resolved. Long context with weak reasoning is actively harmful — this engine reasons.
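To make the shape concrete, here is a toy model of a claim — the field names are our illustration, not Attest's actual schema: a subject-predicate-object triple that carries its source and a confidence score, where contradicting triples sit side by side until resolved.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Claim:
    # Hypothetical claim shape; Attest's real schema may differ.
    subject: str
    predicate: str
    object: str
    source: str        # provenance travels with the fact
    confidence: float  # revised as evidence accumulates

store = [
    Claim("water", "status", "contaminated", "lab-a", 0.7),
    Claim("water", "status", "safe", "lab-b", 0.4),  # contradiction coexists
]

# Triples are queryable and traversable: filter by subject and predicate.
answers = [c for c in store if c.subject == "water" and c.predicate == "status"]
```

Because claims are structured rather than embedded, both sides of a disagreement remain retrievable, each with its own source and confidence.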
Actionable knowledge is fragmented across a dozen systems that don’t talk to each other. The why behind every decision lives in Slack threads. Architecture lives in Confluence pages no one updates. Customer context is locked in Salesforce. The only system that synthesizes all of it is human memory — and it walks out the door every time someone leaves.
One claim write. The graph, vectors, documents, and audit trail all update atomically. The same fact from 10 sources creates one content group with higher confidence — not 10 duplicates across four systems that slowly drift apart.
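A toy sketch of the deduplication idea — the grouping key and the confidence rule below are illustrative stand-ins, not Attest's algorithm: identical (subject, predicate, object) triples from many sources collapse into one content group whose confidence grows with the number of independent sources.

```python
from collections import defaultdict

def group_claims(claims):
    """Collapse identical triples into content groups.

    claims: iterable of (subject, predicate, object, source) tuples.
    Returns {triple: {"sources": set, "confidence": float}}.
    The confidence rule is a placeholder: 1 - 0.5**n for n sources.
    """
    groups = defaultdict(set)
    for s, p, o, src in claims:
        groups[(s, p, o)].add(src)
    return {
        triple: {"sources": srcs, "confidence": 1 - 0.5 ** len(srcs)}
        for triple, srcs in groups.items()
    }

# The same fact reported by 10 sources...
facts = [("svc-a", "depends_on", "svc-b", f"source-{i}") for i in range(10)]
groups = group_claims(facts)
# ...yields one content group, not ten drifting duplicates.
```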
Point it at a Slack export, a ChatGPT conversation, or a folder of documents — the built-in extractor (heuristic or LLM-powered, 9 providers supported) pulls out claims with provenance tracing to the exact message.
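To give a flavor of heuristic extraction — the pattern and the message shape below are invented for illustration; the real extractor is far more capable — here is a scan for "X depends on Y" statements that keeps a pointer to the exact message:

```python
import re

PATTERN = re.compile(r"(\S+) depends on (\S+)")

def extract_claims(messages):
    """messages: list of dicts with 'id' and 'text' (hypothetical export shape)."""
    claims = []
    for msg in messages:
        for subj, obj in PATTERN.findall(msg["text"]):
            claims.append({
                "subject": subj,
                "predicate": "depends_on",
                "object": obj,
                "provenance": msg["id"],  # traces back to the exact message
            })
    return claims

export = [
    {"id": "slack:C1/1001", "text": "fyi billing depends on auth-service"},
    {"id": "slack:C1/1002", "text": "lunch?"},
]
claims = extract_claims(export)
```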
Want a visual interface? pip install streamlit pyvis and run streamlit run app.py for an interactive explorer with entity search, relationship graphs, and natural-language questions — all backed by a Rust search index.
Batch mode handles millions of claims via the Rust engine. db.at(timestamp) gives you point-in-time queries — what the agents knew last Tuesday, before the new data came in. Event hooks (db.on("claim_ingested", callback)) let you build reactive pipelines that trigger when knowledge changes.
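The hook mechanism can be pictured as a simple callback registry — this toy EventBus is our sketch, not Attest's internals: handlers registered under an event name fire on every matching emit.

```python
from collections import defaultdict

class EventBus:
    """Toy stand-in for db.on(...): maps event names to callbacks."""
    def __init__(self):
        self._handlers = defaultdict(list)

    def on(self, event, callback):
        self._handlers[event].append(callback)

    def emit(self, event, payload):
        for cb in self._handlers[event]:
            cb(payload)

bus = EventBus()
seen = []
bus.on("claim_ingested", seen.append)            # one step of a reactive pipeline
bus.emit("claim_ingested", {"subject": "svc-a"})  # fires every registered handler
```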
pip install attestdb
$ pip install attestdb requests streamlit pyvis
$ python examples/quickstart.py
Add your Slack bot token and a Gemini API key (free) to .env.
The quickstart connects to Slack, extracts claims, and opens an interactive explorer at localhost:8501.
The March load test had a bug in the test harness. In a normal system, you’d spend a week asking around to find what depended on it. In Attest, you ask one question.
cascade = db.retract_cascade(
    "github:load-test-march-2024",
    reason="Bug in test harness",
)
# removed: 1
# flagged: 2
# unaffected: 1
One call. The database already tracked the dependency chain — because provenance is structural, not metadata you hope someone filled in.
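Conceptually, the cascade is a reachability walk over provenance edges. This miniature version uses an explicit derived_from map to show the idea — Attest tracks these edges structurally, so no such map needs to be maintained by hand:

```python
def retract_cascade(derived_from, bad_source):
    """derived_from: {claim_id: set of claim/source ids it was derived from}.

    Returns (removed, flagged): claims sourced directly from bad_source,
    and claims transitively downstream of a removed claim.
    """
    removed = {c for c, deps in derived_from.items() if bad_source in deps}
    flagged, frontier = set(), set(removed)
    while frontier:
        tainted = removed | flagged
        frontier = {c for c, deps in derived_from.items()
                    if deps & tainted and c not in tainted}
        flagged |= frontier
    return removed, flagged

deps = {
    "perf-number": {"github:load-test-march-2024"},
    "capacity-plan": {"perf-number"},
    "q3-budget": {"capacity-plan"},
    "logo-choice": {"design-doc"},   # untouched by the retraction
}
removed, flagged = retract_cascade(deps, "github:load-test-march-2024")
```

With this dependency graph the walk removes one claim, flags two downstream, and leaves one unaffected — the same shape of answer the retract_cascade call above returns.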
“The East Palestine water is contaminated.” Here's how three different databases store that:
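In rough, hypothetical strokes — representations simplified, real schemas vary:

```python
# Vector store: an embedding plus the raw sentence. Provenance is whatever
# metadata someone remembered to attach.
vector_row = {
    "embedding": [0.12, -0.08],
    "text": "The East Palestine water is contaminated.",
}

# Graph store: a bare edge. No source, and a contradicting edge
# typically overwrites rather than coexists.
graph_edge = ("east-palestine-water", "has_status", "contaminated")

# Claim database: the triple plus enforced provenance and confidence,
# so a contradicting claim can coexist until resolved.
claim = {
    "subject": "east-palestine-water",
    "predicate": "has_status",
    "object": "contaminated",
    "source": "epa:report-2023-02",  # hypothetical source id
    "confidence": 0.72,
}
```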
When an LLM extracts 500 facts from your Slack channels overnight, you need every write to carry its source. Not as metadata you hope someone fills in — as a hard requirement the engine enforces. If two sources disagree, both claims coexist. When one turns out to be wrong, you retract it and the other survives.
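A miniature of that retraction behavior — list-backed store, purely illustrative: withdrawing one source's claim leaves the disagreeing claim intact.

```python
store = [
    {"s": "water", "p": "status", "o": "contaminated",
     "source": "lab-a", "retracted": False},
    {"s": "water", "p": "status", "o": "safe",
     "source": "lab-b", "retracted": False},
]

def retract(store, source):
    # Retraction is per-source: only claims from that source are withdrawn.
    for claim in store:
        if claim["source"] == source:
            claim["retracted"] = True

retract(store, "lab-b")                        # lab-b's measurement was wrong
live = [c for c in store if not c["retracted"]]  # lab-a's claim survives
```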
We were running dozens of agents across thousands of research papers. They extracted findings, made claims, contradicted each other — and we had no way to store what they collectively knew in a form anyone could use.
The vector databases lost provenance. The graph databases couldn’t handle contradictions. The document stores couldn’t be queried structurally. So we built the layer underneath — at Omic, a computational biology company where AI agents ingest thousands of research papers.
The problem isn’t unique to biology. Every organization has actionable knowledge fragmented across a dozen systems, held together only by human memory — bandwidth-limited, context-switching impaired, and gone when someone leaves. Attest is the layer that was missing: structured, sourced, persistent. Knowledge that doesn’t walk out the door.
You’ve deployed 10–50 agents. They summarize, research, extract, classify. But where does that knowledge go? Today it vanishes when the session ends. Attest gives every agent a shared, queryable memory that gets stronger over time — not a context window that forgets. Every session compounds. The database gets smarter with every run.
The MCP server lets Claude and other agents read and write claims directly. Observe sessions, record outcomes, retrieve proven approaches. Every session compounds instead of starting from zero.
Architecture decisions live in Confluence pages no one updates. Incident context is buried in Slack threads. When a senior engineer quits, years of “why we did it this way” vanish. Attest continuously ingests from all of it and maintains the real dependency map.
Use db.impact() to answer “what breaks if this system goes down?” from 18 months of sourced incident data. Use db.source_reliability() to know which documentation you can actually trust.
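What an impact query computes, in spirit — the edge data and traversal here are our stand-ins, not db.impact() internals: the transitive closure of "depends on" edges mined from incident reports.

```python
def impact(depends_on, system):
    """depends_on: {service: set of services it depends on}.

    Returns everything that transitively depends on `system` --
    the blast radius if `system` goes down.
    """
    affected, changed = set(), True
    while changed:
        changed = False
        for svc, deps in depends_on.items():
            if svc not in affected and deps & (affected | {system}):
                affected.add(svc)
                changed = True
    return affected

edges = {
    "checkout": {"payments"},
    "payments": {"auth"},
    "search": {"index"},
}
blast_radius = impact(edges, "auth")  # checkout depends on auth via payments
```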
Agents read every new paper in your domain as it becomes available and extract findings. They spot that your organization has deep knowledge on one drug target and almost nothing on the pathway that might connect to it — a gap no single researcher would see.
Use db.discover() to generate hypotheses from graph structure. Use db.close_gaps() to automatically research and validate them. Use db.blindspots() to find single-source vulnerabilities.
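The blindspot idea reduces to a grouping query — field names hypothetical, the real db.blindspots() may rank and filter further: find content groups backed by exactly one source.

```python
from collections import defaultdict

def blindspots(claims):
    """claims: (subject, predicate, object, source) tuples.

    Returns triples supported by a single source -- fragile knowledge
    that one bad citation could take down.
    """
    sources = defaultdict(set)
    for s, p, o, src in claims:
        sources[(s, p, o)].add(src)
    return {t for t, srcs in sources.items() if len(srcs) == 1}

claims = [
    ("target-x", "inhibited_by", "drug-y", "paper-1"),
    ("target-x", "inhibited_by", "drug-y", "paper-2"),  # corroborated
    ("pathway-z", "links_to", "target-x", "paper-3"),   # single source
]
fragile = blindspots(claims)
```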