Every extracted claim carries who said it, how confident we are, and what else corroborates it.
```bash
pip install -U attestdb
attest brain install
```
Done. Your coding agent gets a persistent brain that learns across sessions. No API key needed. Works with Claude Code, Cursor, Windsurf, Codex, Gemini CLI.
Connect Slack, docs, or any text. Structured claims come out.
↓ Walkthrough below.
Three things, all free. The two required items take about 5 minutes; the Google Docs token is optional.
| What | Why | Time |
|---|---|---|
| Gemini API key | Extracts relationships from text (free tier) | 1 min |
| Slack Bot Token | Reads your Slack messages | 3 min |
| Google Docs token (optional) | Reads your Google Docs | 5 min |
```bash
pip install attestdb requests streamlit pyvis
```

`attestdb` is the database; `requests` is needed for the API connectors; `streamlit` and `pyvis` power the visual explorer.
Gemini has the best extraction quality in our benchmarks and a generous free tier (1,500 requests/day — enough for most workspaces).
Get a free Gemini API key (keys start with `AIza`).

Create a `.env` file in your project directory and add the key:

```
GOOGLE_API_KEY=AIza...your-key-here
```
Create a Slack app and grant the bot these scopes: `channels:read`, `channels:history`, `channels:join`, `groups:read`, `groups:history`, `users:read`, `files:read`.

Copy the bot token (it starts with `xoxb-`) and add it to your `.env` file:

```
SLACK_BOT_TOKEN=xoxb-your-token-here
```
Your `.env` file should now look like:

```
GOOGLE_API_KEY=AIza...your-key-here
SLACK_BOT_TOKEN=xoxb-your-token-here
```
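The quickstart reads these variables from `.env` for you. If you're curious what that amounts to, here is a minimal sketch of a `.env` loader (an illustration only, not Attest's actual implementation):

```python
import os

def load_env(path=".env"):
    """Minimal .env loader: KEY=value lines become environment variables."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blanks, comments, and anything that isn't KEY=value.
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # Don't clobber variables already set in the real environment.
            os.environ.setdefault(key.strip(), value.strip())

if os.path.exists(".env"):
    load_env()
```

Real projects typically use the `python-dotenv` package for this; the sketch just shows the mechanics.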
```bash
python examples/quickstart.py
```
This reads your .env file, connects to Slack, fetches the last 90 days
of messages, extracts claims using your LLM provider, and launches the Streamlit explorer.
Common options:
```bash
# Fewer days of history (faster first run)
python examples/quickstart.py --lookback-days 30

# Only specific channels
python examples/quickstart.py --channels general,engineering,product

# Skip the Streamlit UI
python examples/quickstart.py --no-streamlit
```
On re-runs, the script resumes from where it left off — only new messages are fetched.
After ingestion, the quickstart automatically opens the Streamlit explorer at
localhost:8501. If you skipped it, launch it manually:
```bash
streamlit run app.py
```
Here’s what you’ll see — six tabs:
Sidebar stats show the shape of your knowledge base at a glance.
Pull documents from your Google Drive and extract claims from their content. No GCP project needed — use Google’s OAuth 2.0 Playground to get a token in 5 clicks.
Get an OAuth token via the Playground: authorize the scopes `https://www.googleapis.com/auth/drive.readonly` and `https://www.googleapis.com/auth/documents.readonly`, then copy the access token (it starts with `ya29.`) and add it to your `.env` file:

```
GOOGLE_DOCS_TOKEN=ya29.a0...your-token
```
Run with Docs:
```bash
python examples/quickstart.py --gdocs-token $GOOGLE_DOCS_TOKEN
```
Fetches up to 50 documents by default. Change with `--max-docs 100`.
Can be combined with Slack — both sources are ingested into the same database.
The quickstart auto-detects your domain from Slack channel names.
If your workspace has channels like #drug-discovery,
#ml-research, #customer-support, the extractor
automatically understands what your organization does and prioritizes
domain-relevant claims over routine CI/CD noise.
You can override auto-detection if needed:
```python
db.set_domain_context(
    "E-commerce platform. High-value: product catalog, customer segments, "
    "pricing strategy, vendor relationships. Lower priority: build notifications."
)
```
The extractor will still capture CI/CD events (who merged what), but will spend more effort on substantive domain discussions.
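As a toy illustration of the auto-detection idea (this is not Attest's actual logic), channel names can be turned into a domain hint roughly like this:

```python
def infer_domain_context(channel_names):
    """Heuristic sketch: turn Slack channel names into a domain hint string."""
    # Channels that are usually operational noise rather than domain signal.
    noise = {"general", "random", "ci", "builds", "deploys", "alerts"}
    topics = [name.replace("-", " ") for name in channel_names
              if name not in noise]
    return "Workspace topics: " + ", ".join(sorted(topics))
```

A workspace with `#drug-discovery` and `#ml-research` would yield a hint that steers extraction toward research claims.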
The quickstart script is a convenience wrapper. You can do everything from Python:
```python
import attestdb

db = attestdb.open("my.db")
db.ingest(
    subject=("sarah chen", "person"),
    predicate=("proposed", "proposed"),
    object=("auth migration", "project"),
    provenance={"source_type": "slack", "source_id": "slack/eng-decisions"},
    confidence=0.85,
)
```
```python
db.ingest_text(
    "Platform team approved migrating auth from JWT to sessions. "
    "Redis will handle session storage.",
    source_id="slack/eng-decisions/2024-02-15",
)
```
```python
frame = db.query("auth migration", depth=2)
print(frame.narrative)
for rel in frame.direct_relationships:
    print(f"  {rel.target.name} --[{rel.predicate}]--> conf={rel.confidence:.2f}")
```
```python
# Live Slack
conn = db.connect("slack", token="xoxb-...")
conn.run(db)

# Google Docs
conn = db.connect("gdocs", token="ya29.a0...")
conn.run(db)

# CSV file
conn = db.connect("csv", path="data.csv",
                  mapping={"subject": "from", "predicate": "relation", "object": "to"})
conn.run(db)
```
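The `mapping` argument tells the CSV connector which column feeds each slot of the claim triple. A minimal sketch of what that mapping does (illustrative only; the real connector also attaches provenance and entity types):

```python
import csv
import io

def csv_to_claims(text, mapping):
    """Apply a connector-style column mapping to CSV rows,
    yielding (subject, predicate, object) triples."""
    reader = csv.DictReader(io.StringIO(text))
    for row in reader:
        yield (row[mapping["subject"]],
               row[mapping["predicate"]],
               row[mapping["object"]])
```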
30 connectors available: `slack`, `teams`, `gmail`, `gdocs`, `gdrive`, `zoho`, `postgres`, `mysql`, `mssql`, `notion`, `confluence`, `sharepoint`, `csv`, `sqlite`, `github`, `jira`, `linear`, `hubspot`, `salesforce`, `zendesk`, `servicenow`, `pagerduty`, `http`, `airtable`, `mongodb`, `elasticsearch`, `s3`, `google_sheets`, `box`, `dsi`.
See the Connector Library for setup details.
```python
# Retract a bad source — corroborated facts survive
cascade = db.retract_cascade("k8s-manifest-v2.3", reason="Outdated config")

# Time-travel: what did we know yesterday?
import time

yesterday = time.time_ns() - 86_400 * 10**9
snapshot = db.at(yesterday)
frame = snapshot.query("api-gateway", depth=1)
```
Every answer traces back to a specific Slack message, email, or config file. Not generated — grounded in claims with provenance.
Attest auto-detects your API key and uses the best available provider.
Set one environment variable in your .env file:
| Provider | Env Variable | Free Tier? | Notes |
|---|---|---|---|
| Gemini | GOOGLE_API_KEY | Yes (1,500 req/day) | Best extraction quality. Recommended. |
| Together | TOGETHER_API_KEY | Limited | Qwen 80B — strong extraction. |
| OpenAI | OPENAI_API_KEY | No | GPT-4.1 Mini. |
| DeepSeek | DEEPSEEK_API_KEY | Yes (limited) | DeepSeek V3.2. |
| Grok | GROK_API_KEY | Yes (limited) | Grok 4.1 Fast. |
| OpenRouter | OPENROUTER_API_KEY | Some models | Routes to best available model. |
| Anthropic | ANTHROPIC_API_KEY | No | Claude Haiku 4.5. |
| GLM | GLM_API_KEY | Yes | GLM-4 Flash. |
If multiple keys are set, Attest uses the first available in the order shown above. Heuristic extraction (no API key) is always available as a fallback.
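Conceptually, the detection is a first-match scan over the table's order. A hedged sketch of that logic (not Attest's actual source):

```python
import os

# Priority order from the table above; the first key found wins.
PROVIDER_ENV_VARS = [
    ("gemini", "GOOGLE_API_KEY"),
    ("together", "TOGETHER_API_KEY"),
    ("openai", "OPENAI_API_KEY"),
    ("deepseek", "DEEPSEEK_API_KEY"),
    ("grok", "GROK_API_KEY"),
    ("openrouter", "OPENROUTER_API_KEY"),
    ("anthropic", "ANTHROPIC_API_KEY"),
    ("glm", "GLM_API_KEY"),
]

def detect_provider(env=os.environ):
    """Return the first provider whose API key is set,
    else fall back to heuristic (no-LLM) extraction."""
    for name, var in PROVIDER_ENV_VARS:
        if env.get(var):
            return name
    return "heuristic"
```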
```python
db = attestdb.open("my.db")  # Single-file Rust engine (1.3M claims/sec)
```
Attest stores everything in a single `.attest` file: an append-only claim log, maintained indexes, and CRC32 crash recovery.
Zero infrastructure, zero configuration.
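To make the format concrete, here is a toy sketch of an append-only, CRC-checked record log in Python (the real engine is Rust, and its on-disk layout will differ):

```python
import struct
import zlib

def append_record(buf, payload: bytes):
    """Append one record: length prefix + CRC32 of the payload + payload."""
    buf.extend(struct.pack("<II", len(payload), zlib.crc32(payload)))
    buf.extend(payload)

def read_records(buf):
    """Replay the log, stopping at the first torn or corrupt record.
    This is the essence of CRC-based crash recovery: a partial write
    at the tail is detected and discarded, earlier records survive."""
    records, off = [], 0
    while off + 8 <= len(buf):
        length, crc = struct.unpack_from("<II", buf, off)
        payload = bytes(buf[off + 8 : off + 8 + length])
        if len(payload) < length or zlib.crc32(payload) != crc:
            break  # corrupted tail: stop replaying here
        records.append(payload)
        off += 8 + length
    return records
```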
Full working examples: org knowledge, biomedical research, DevOps, and ML.
Every method, parameter, and return type.
What claim-native means and why provenance changes everything.
All 30 connectors: Slack, Teams, Gmail, Google Docs, Google Drive, Zoho Mail, Notion, Confluence, SharePoint, PostgreSQL, MySQL, MSSQL, CSV, SQLite, GitHub, Jira, Linear, HubSpot, Salesforce, Zendesk, ServiceNow, PagerDuty, HTTP, Airtable, MongoDB, Elasticsearch, S3, Google Sheets, Box, DSI.