Attest — the source for
knowledge that contradicts itself

Every fact has a receipt. Every claim traces to who said it, when, and how confident they were. When a source is wrong, retract it — corroborated facts survive.

$ pip install attestdb
⭐ Star on GitHub

The most valuable knowledge in any organization — incident patterns, research hunches, tribal expertise — is contradictory, multi-source, and changes every week. Traditional databases can't hold it. Attest was built for it.

What Only Attest Can Do

Ten questions other databases can't answer — because they don't track provenance structurally.

Impact Analysis

db.impact("paper_42") — If this source is retracted, what breaks? How many claims and entities depend on it?

A key journal retracts a paper your drug target evaluation depended on. In seconds, see every downstream conclusion that relied on it — and which ones survive on independent evidence.
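To make the mechanics concrete, here is a minimal Python sketch of impact analysis over a provenance graph. It illustrates the idea, not Attest's internals; the claim ids and data shapes are invented:

```python
# Each claim records which sources back it. The impact of retracting a
# source is the set of claims that cite it, split by whether independent
# evidence keeps them alive.
claims = {
    "target_y_is_druggable": {"sources": ["paper_42", "internal_assay_7"]},
    "compound_x_binds_target_y": {"sources": ["paper_42"]},
}

def impact(claims, source):
    """Return (survives, breaks) claim ids if `source` were retracted."""
    affected = {cid: c for cid, c in claims.items() if source in c["sources"]}
    survives = [cid for cid, c in affected.items() if len(c["sources"]) > 1]
    breaks = [cid for cid, c in affected.items() if len(c["sources"]) == 1]
    return survives, breaks

survives, breaks = impact(claims, "paper_42")
# "target_y_is_druggable" survives on the internal assay;
# "compound_x_binds_target_y" depended only on paper_42.
```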

Blindspot Detection

db.blindspots() — Which entities are backed by only a single source? Where are you vulnerable?

Your team has 300 claims about kinase inhibitors and 3 about the metabolic pathway that might connect them. That's not just a gap — it's a research opportunity no single researcher would have noticed.
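The underlying computation is simple to picture. A toy sketch, assuming each claim carries an entity and a source (invented field names, not Attest's schema):

```python
# Blindspot detection: count independent sources per entity and flag
# entities backed by only one.
claims = [
    {"entity": "kinase_inhibitors", "source": "paper_1"},
    {"entity": "kinase_inhibitors", "source": "paper_2"},
    {"entity": "metabolic_pathway", "source": "paper_9"},
]

def blindspots(claims):
    sources_per_entity = {}
    for c in claims:
        sources_per_entity.setdefault(c["entity"], set()).add(c["source"])
    return [e for e, srcs in sources_per_entity.items() if len(srcs) == 1]
```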

Consensus View

db.consensus("BRCA1") — How many sources agree? What's the agreement ratio across independent sources?

Before committing to a target, see exactly where the literature agrees, where it disagrees, and which disagreements are backed by stronger evidence.
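The agreement ratio itself is easy to state. A toy model of the idea, with invented stance labels (the real system's scoring is not shown here):

```python
# Consensus: of the independent sources that weighed in on an entity,
# what fraction assert the claim rather than contradict it?
claims = [
    {"entity": "BRCA1", "stance": "agree", "source": "paper_a"},
    {"entity": "BRCA1", "stance": "agree", "source": "paper_b"},
    {"entity": "BRCA1", "stance": "contradict", "source": "paper_c"},
]

def consensus(claims, entity):
    sources = {c["source"]: c["stance"] for c in claims if c["entity"] == entity}
    agree = sum(1 for s in sources.values() if s == "agree")
    return {"sources": len(sources), "agreement": agree / len(sources)}
```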

Fragile Claims

db.fragile() — Find claims backed by a single source. These are your weakest links.

These are the facts your organization treats as settled that could collapse with a single retraction. Find them before they surprise you.
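Fragility is the claim-level version of the same question. A sketch with invented claim ids:

```python
# A claim is fragile when exactly one source backs it, so a single
# retraction would erase it entirely.
claims = {
    "water_contaminated": ["ohio_dnr", "residents_x"],
    "burn_was_controlled": ["company_pr"],
}

def fragile(claims):
    return [cid for cid, sources in claims.items() if len(sources) == 1]
```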

Stale Knowledge

db.stale(days=90) — What hasn't been corroborated or updated recently? Time-aware knowledge hygiene.

Knowledge decays. A claim from 2022 with no recent corroboration is a liability, not an asset.
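One way to picture the staleness check, as a toy model with invented timestamps:

```python
# Time-aware hygiene: a claim is stale when its last corroboration
# falls outside the window.
from datetime import datetime, timedelta

claims = [
    {"id": "redis_single_point", "last_corroborated": datetime(2022, 3, 1)},
    {"id": "auth_uses_oauth", "last_corroborated": datetime(2024, 1, 10)},
]

def stale(claims, days, now):
    cutoff = now - timedelta(days=days)
    return [c["id"] for c in claims if c["last_corroborated"] < cutoff]

# Relative to early 2024, the 2022 claim is overdue for re-corroboration.
```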

Audit Trail

db.audit(claim_id) — Full provenance chain for any claim: who said it, what corroborates it, what depends on it.

When a regulator asks “how did you arrive at this conclusion?”, the answer isn't a narrative reconstruction — it's a queryable graph of every piece of evidence, who provided it, and whether any of it has been retracted since.
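The queryable graph might look like this in miniature. An illustration of the shape of an audit answer, not Attest's actual storage format; all ids are invented:

```python
# A provenance chain: walk from a claim to its asserting source, its
# corroborations, and whatever depends on it downstream.
graph = {
    "claim_17": {
        "source": "ap_news_citing_ohio_dnr",
        "corroborated_by": ["claim_12"],
        "depended_on_by": ["report_q3"],
    },
    "claim_12": {"source": "residents_x", "corroborated_by": [], "depended_on_by": []},
}

def audit(graph, claim_id):
    node = graph[claim_id]
    return {
        "asserted_by": node["source"],
        "corroborating_sources": [graph[c]["source"] for c in node["corroborated_by"]],
        "downstream": node["depended_on_by"],
    }
```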

Knowledge Drift

db.drift(days=30) — How has your knowledge changed? New claims, new entities, retracted sources, confidence trends.

See what your organization learned this month, what it unlearned, and where confidence shifted — without reading a single Slack thread.
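At its core, drift is a diff between two snapshots of the claim set. A toy model (snapshot format invented; the real report also tracks confidence trends):

```python
# Drift: what appeared since the last snapshot, and what was retracted.
last_month = {"claim_1", "claim_2", "claim_3"}
this_month = {"claim_2", "claim_3", "claim_4", "claim_5"}

def drift(before, after):
    return {"new": sorted(after - before), "retracted": sorted(before - after)}
```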

Source Reliability

db.source_reliability() — Per-source corroboration and retraction rates. Which sources can you trust?

After 6 months, the system knows which sources consistently get corroborated and which ones get contradicted. Trust becomes empirical, not reputational.
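Empirical trust reduces to a per-source ratio. A sketch over an invented outcome history:

```python
# Source reliability: how often does a source's claim end up
# corroborated rather than contradicted by independent evidence?
history = [
    {"source": "ohio_dnr", "outcome": "corroborated"},
    {"source": "ohio_dnr", "outcome": "corroborated"},
    {"source": "company_pr", "outcome": "contradicted"},
    {"source": "company_pr", "outcome": "corroborated"},
]

def source_reliability(history):
    stats = {}
    for h in history:
        total, good = stats.get(h["source"], (0, 0))
        stats[h["source"]] = (total + 1, good + (h["outcome"] == "corroborated"))
    return {s: good / total for s, (total, good) in stats.items()}
```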

What-If Analysis

db.hypothetical(claim) — Would this claim corroborate existing knowledge? Does it fill a gap?

Before running an experiment, see how the result would fit into your existing knowledge topology. Prioritize experiments that resolve the most uncertainty.
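The classification behind a what-if query can be sketched in a few lines. An illustration only; the stance labels and lookup structure are invented:

```python
# Hypothetical ingest: would this candidate claim corroborate an
# existing entity, contradict it, or fill a gap nobody has covered?
existing = {"target_y_inhibited": "agree"}

def hypothetical(existing, entity, stance):
    if entity not in existing:
        return "fills_gap"
    return "corroborates" if existing[entity] == stance else "contradicts"
```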

Reactive Knowledge Pipelines

db.on("claim_retracted", notify_downstream) — Event hooks that fire when knowledge changes.

When a claim about drug toxicity is retracted, automatically notify every agent and system that used it. Knowledge changes propagate — they don't silently go stale.
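The hook mechanism is a plain observer pattern. A minimal sketch of the dispatch (the event name mirrors the docs; the dispatcher itself is illustrative):

```python
# Register callbacks per event type; fire them when knowledge changes.
hooks = {}

def on(event, callback):
    hooks.setdefault(event, []).append(callback)

def emit(event, payload):
    for cb in hooks.get(event, []):
        cb(payload)

notified = []
on("claim_retracted", lambda claim: notified.append(claim))
emit("claim_retracted", "toxicity_claim_9")
```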

Built by a drug discovery company. Tested on real research.

Attest was built by Omic, an AI drug discovery company. We use it internally to track findings across thousands of papers, internal assays, and agent-generated research. The result: contradictions surface automatically, retracted sources are traced in seconds, and research gaps become visible before they become costly.

See It Work

Four scenarios show knowledge building up from multiple sources — then what happens when one turns out to be wrong.

Norfolk Southern (company press release)
“The situation is contained after controlled burn” · confidence 0.40

Residents on X (social media · photos)
Dead fish in Sulphur Run creek — waterway contamination · confidence 0.55

AP News / Ohio DNR (wire service · state agency)
3,500 fish killed — contamination confirmed · confidence 0.90 · CORROBORATED

RETRACT: Norfolk Southern press release
“Contained” claim removed. Contamination claims survive — independently corroborated by state agency and residents.

Result: Contamination confirmed by 2 independent sources. Company self-report retracted with full audit trail. Every claim traces to who said it.

Paper A, 2019 (peer-reviewed journal)
“Compound X inhibits Target Y at IC50 < 10nM” · confidence 0.85

Internal assay (lab results · primary data)
Confirms inhibition — IC50 = 8.3nM in HEK293 cells · confidence 0.90 · CORROBORATED

Paper B, 2023 (peer-reviewed journal · different cell line)
“No significant inhibition observed for Compound X on Target Y” · confidence 0.75 · CONTRADICTS

RETRACT: Paper A — data fabrication investigation
Paper A removed. Internal assay survives as independent primary evidence (0.90). Paper B's contradiction now stands against only the internal assay.

Result: One independent source (your own data) still supports the claim. But confidence dropped from “corroborated by literature + internal data” to “single internal source vs. one contradicting paper.” The system flags this for human review — exactly the right response.

Friend Marcus (personal recommendation)
“Sal's Pizza is excellent — best in the neighborhood” · confidence 0.85

Yelp (platform rating)
3 stars — “Average” · confidence 0.60

Coworker Dana (first-hand experience)
Got food poisoning last month · confidence 0.90 · CONTRADICTS

RETRACT: Yelp — rating manipulation
Yelp rating removed. Marcus and Dana's first-hand claims both survive — contradictory, but both are real experiences.

Result: Two real opinions remain. Marcus says great, Dana says dangerous. Both traceable. The database doesn't hide the disagreement.

Alice (team Slack channel)
“Bob knows React — he helped me with a component last year” · confidence 0.70

Bob's resume (self-reported document)
Skills: React, TypeScript, Node.js · confidence 0.80 · CORROBORATED

GitHub PRs (last 90 days · actual code)
47 PRs — all Python, zero React · confidence 0.95 · CONTRADICTS

RETRACT: resume — 3 years outdated
Resume claim removed. Alice's Slack message survives at single-source (0.70). GitHub evidence (0.95) tells the real story.

Result: Code evidence (0.95) vs. hearsay (0.70). The contradiction is visible, not hidden. You see exactly why the data disagrees.
When a source turns out to be wrong:

Row-based approach
The typical response is to UPDATE or DELETE the row. If downstream reports already used that data, there's no automatic way to trace the impact.

Edge-based approach
The typical response is to remove the edge. Unless provenance metadata was manually maintained, tracing what depended on it requires custom logic.

Embedding-based approach
The typical response is to delete and re-embed. Answers already generated from the old embedding are not automatically traceable.

Attest
Retract the source. Corroborated facts survive. Full audit trail. The knowledge base heals itself.

The Same Fact, Three Systems

"The East Palestine water is contaminated." Here's how three different databases store that:

Relational DB

INSERT INTO events (name, status)
VALUES ('EP Derailment', 'contaminated');

Who said contaminated? The EPA? A Reddit post? A company? No idea.

Knowledge Graph

(Derailment)-[:CAUSED]->(Contamination)

An edge exists. Says nothing about who established it or how certain we are.

Attest

AP News citing Ohio DNR says
Derailment caused Contamination
confidence: 0.90
+ residents corroborate (0.55)

An agent extracted this — you can see exactly what it was looking at, how confident it was, and who else agrees.

When an agent extracts 500 facts from your Slack channels overnight, you need every write to carry its source. Not as metadata you hope someone fills in — as a hard requirement the engine enforces. If two sources contradict each other, both claims coexist. When one is discredited, you retract it and the other survives.
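That invariant fits in a few lines of plain Python. A sketch of the behavior, not the actual engine; all names are invented:

```python
# Core invariant: every write must carry a source, and retracting a
# source removes only the claims with no other backing.
store = {}  # claim id -> set of sources

def assert_claim(claim_id, source):
    if not source:
        raise ValueError("provenance required: every claim needs a source")
    store.setdefault(claim_id, set()).add(source)

def retract(source):
    for cid in list(store):
        store[cid].discard(source)
        if not store[cid]:
            del store[cid]  # no independent evidence left

assert_claim("water_contaminated", "ohio_dnr")
assert_claim("water_contaminated", "residents_x")
assert_claim("burn_controlled", "company_pr")
retract("company_pr")
# "water_contaminated" survives on two sources; "burn_controlled" is gone.
```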

Knowledge That Compounds

This is the part that matters most.

Run Attest for a week and you have structured notes. Run it for six months and you have a reality model — an emergent map of everything your organization knows, where the knowledge is deep, where it's thin, and where different domains connect in ways no single person noticed.

Topics emerge automatically from the claim graph. Your agents extracted 300 claims about your auth system and 4 about data center power capacity. That's not just data — that's a map of where you're knowledgeable and where you're blind. The insight engine finds connections across domain boundaries: auth failures always follow database connection issues, but nobody documented the dependency.

Two years of accumulated evidence can't be speed-run by a competitor. The topology gets richer. Cross-domain connections surface. The organization that started earlier has an advantage that compounds daily and is nearly impossible to replicate.

Practicalities

pip install attestdb — single-file database, no server, no infrastructure. Point it at a Slack export, a ChatGPT conversation, or a folder of documents — the built-in extractor (heuristic or LLM-powered, 7 providers supported) pulls out claims with provenance tracing to the exact message. Heuristic mode needs no API keys.

Want a visual interface? pip install attest-console gives you a browser dashboard that connects to live Slack, Gmail, and Google Docs via OAuth. Ingest your company's knowledge, explore an interactive graph, and ask natural-language questions — all for ~$0.06 on Groq's free tier.

Provenance is required on every write — the engine rejects claims without a source, whether the writer is a human or an agent. Batch mode handles millions of claims via the Rust backend. db.at(timestamp) gives you point-in-time queries — what the agents knew last Tuesday, before the new data came in.
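One way to picture point-in-time queries: treat the claim log as append-only and filter by ingestion time. A toy model, not the Rust backend:

```python
# Point-in-time reads: return only claims ingested at or before the
# requested moment.
from datetime import datetime

log = [
    {"id": "c1", "ingested_at": datetime(2024, 1, 2)},
    {"id": "c2", "ingested_at": datetime(2024, 1, 9)},
]

def at(log, ts):
    return [c["id"] for c in log if c["ingested_at"] <= ts]
```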

For AI agents: the built-in MCP server lets Claude and other MCP-compatible agents read and write Attest directly. A REST API at /api/v1/ serves any HTTP client. Event hooks (db.on("claim_ingested", callback)) let you build reactive pipelines that trigger when knowledge changes.

Built For

Research

Scientific Teams

An agent reads 200 papers overnight and extracts findings. It notices your team has deep knowledge on kinase inhibition and almost nothing on the metabolic pathway that might connect to it — a gap no single researcher would see.

Use db.blindspots() to find the research gaps hiding between well-covered domains. Use db.hypothetical() to prioritize experiments that resolve the most uncertainty.

Operations

Engineering Teams

Agents ingest 18 months of Slack incident channels and postmortem docs. "What breaks if Redis goes down?" — answered from claims extracted across hundreds of incidents, each traceable to the person who figured it out.

Use db.impact() to answer “what breaks if this system goes down?” from 18 months of sourced incident data. Use db.source_reliability() to know which documentation you can actually trust.

Any Team

Anyone Building Agents

If your agents produce knowledge — from customer calls, experiments, market research, code reviews — Attest is the database layer that makes it compound instead of evaporate.

The MCP server lets Claude and other agents read and write attested claims directly. Your agents' knowledge compounds instead of evaporating between sessions.

Get Started

Sources: Slack · Teams · LLM Chat · Documents · Email · Databases · External
→ Attest: Extract · Store · Query · Correct · Discover
→ Interfaces: MCP Server · REST API · Dashboard · Python SDK · NDJSON
$ pip install attestdb attest-console
$ attest-console my_company.db

Opens a dashboard at localhost:8877. Click Connect Slack — authorize your workspace. Click Connect Google — authorize Gmail, Drive, Docs. Go to Ingest. Pick your channels. Hit go.

No API keys. No OAuth apps to create. No environment variables. Your data flows directly between your machine and Slack/Google. Full quick start guide →