Connectors

26 sources — 22 data connectors + 4 MCP integrations

Slack
Live Slack channels via Bot Token. Messages grouped by channel with heuristic or LLM extraction.
Teams
Microsoft Teams channels via Graph API. Requires Azure AD app with message read permissions.
Gmail
Fetch email threads via Gmail API. Claims are extracted from plain-text message bodies.
Google Docs
Fetch documents via Google Docs API. Extracts text from paragraphs, tables, and headings.
Notion
Fetch pages from a Notion database or workspace. Extracts text from blocks and page properties.
Confluence
Fetch pages from a Confluence space. HTML bodies are stripped and extracted as claims.
PostgreSQL
Query a Postgres database and map result rows directly to claims via column mapping.
MySQL
Query a MySQL database and map result rows to claims. Supports MariaDB-compatible servers.
CSV
Import claims from a local CSV or TSV file with column mapping. Zero external dependencies.
SQLite
Query a local SQLite database and map rows to claims. Zero external dependencies.
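The zero-dependency connectors above share one idea: a column mapping turns each result row into a (subject, predicate, object) claim. A minimal sketch of that mapping for SQLite, using only the standard library — the function name, parameters, and schema below are illustrative, not Attest's actual API:

```python
import sqlite3

def rows_to_claims(db_path, query, subject_col, predicate, object_col):
    """Map each row of a query result to one (subject, predicate, object) claim.

    Hypothetical helper: the subject comes from one column, the object from
    another, and the predicate is fixed per mapping.
    """
    conn = sqlite3.connect(db_path)
    conn.row_factory = sqlite3.Row  # access columns by name
    claims = [
        (row[subject_col], predicate, row[object_col])
        for row in conn.execute(query)
    ]
    conn.close()
    return claims
```

A row like ('Ada', 'engineer') under the mapping (name, "has_role", role) would become the claim ("Ada", "has_role", "engineer"). The CSV connector applies the same mapping to parsed rows instead of query results.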
GitHub
Import issues and pull requests. Each produces claims for author, state, labels, and assignees.
Jira
Import issues via JQL. Claims for status, assignee, reporter, labels, components, and issue links.
Zoho Mail
Fetch email messages via Zoho Mail API. HTML bodies are stripped and extracted as claims.
SQL Server
Query Microsoft SQL Server databases and map rows to claims via pymssql column mapping.
Linear
Import issues from Linear via GraphQL. Claims for creator, assignee, status, priority, labels, and team.
HubSpot
Import contacts, companies, deals, and notes from HubSpot CRM. Deal stages resolved to human-readable names.
Google Drive
List and download files from Google Drive. Exports Docs/Sheets/Slides to text, downloads other formats directly.
SharePoint
Download files from SharePoint sites or personal OneDrive via Microsoft Graph API.
ServiceNow
Import incidents and change requests from ServiceNow. Claims for state, priority, assignee, and category.
Zendesk
Import support tickets with status, priority, and tags. Text extraction from descriptions and comments.
Salesforce
Import opportunities, contacts, and cases via SOQL. OAuth username-password flow with SOQL pagination.
Box
Download files from Box folders with file type filtering. Text extraction from documents.
Claude Code
Persistent cross-session memory via MCP + autonomous hooks. Pre-edit warnings, post-test fixes, and session recall — zero agent cooperation needed.
Cursor
MCP integration with persistent knowledge graph. Use attest_check_file before edits and attest_learned to record findings.
Windsurf
MCP server integration via Codeium config. 106 tools for knowledge recording, retrieval, and session tracking.
Codex
OpenAI Codex CLI reads the same .mcp.json as Claude Code. Shares the persistent knowledge graph across tools.

Why This Matters

Without Attest’s connectors, here’s what you’d build for each data source:

  • API fetching + pagination — Slack cursors, Gmail page tokens, Jira JQL offsets, Graph API paging
  • Claim extraction — LLM prompt engineering to turn messages and documents into structured (subject, predicate, object) triples
  • Entity normalization — Unicode NFKD, Greek letter expansion, whitespace collapse, deduplication across sources
  • Provenance tracking — record which source, which page, which message produced each fact
  • Contradiction detection — when Slack says X and Confluence says not-X, flag it instead of silently overwriting
  • Embedding updates — re-compute or re-sync a separate vector store every time new data arrives
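To make the normalization bullet concrete, here is a minimal sketch of those steps in standard-library Python. The Greek-letter table and function names are illustrative only — Attest's actual rules are not shown here:

```python
import re
import unicodedata

# Illustrative expansion table; a real one would cover the full alphabet.
GREEK = {"\u03b1": "alpha", "\u03b2": "beta", "\u03b3": "gamma"}

def normalize_entity(name: str) -> str:
    # Expand Greek letters so "β-catenin" and "beta-catenin" collide.
    for letter, word in GREEK.items():
        name = name.replace(letter, word)
    # Unicode NFKD, then drop combining marks (accents).
    name = unicodedata.normalize("NFKD", name)
    name = "".join(ch for ch in name if not unicodedata.combining(ch))
    # Collapse whitespace runs; lowercase for a stable dedup key.
    return re.sub(r"\s+", " ", name).strip().lower()

def dedupe(entities):
    """Keep the first spelling of each entity, keyed on its normalized form."""
    seen, kept = set(), []
    for e in entities:
        key = normalize_entity(e)
        if key not in seen:
            seen.add(key)
            kept.append(e)
    return kept
```

Under this scheme "β-Catenin", "beta-catenin", and "Beta  Catenin " with stray whitespace all reduce to comparable keys, so cross-source duplicates collapse to one entity.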

With Attest, one line handles all of it:

db.connect("slack", token="xoxb-...", channels=["#research"]).run(db)

Each connector automatically runs the full pipeline: fetch → extract → normalize → validate (13 rules) → store with provenance → update embeddings → track corroboration.
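The last two stages — provenance-aware storage with corroboration tracking and contradiction flagging — can be sketched in a few lines. This is a hedged illustration of the concept, not Attest's storage API; the class and method names are invented:

```python
from collections import defaultdict

class ClaimStore:
    """Toy store: (subject, predicate) -> {object: set of sources}."""

    def __init__(self):
        self.by_key = defaultdict(dict)

    def add(self, subject, predicate, obj, source):
        # Record provenance: which source asserted this claim.
        self.by_key[(subject, predicate)].setdefault(obj, set()).add(source)

    def corroboration(self, subject, predicate, obj):
        # How many independent sources agree on this exact claim.
        return len(self.by_key[(subject, predicate)].get(obj, set()))

    def contradictions(self):
        # Two sources asserting different objects for the same key is
        # surfaced as a conflict, never silently overwritten.
        return {k: v for k, v in self.by_key.items() if len(v) > 1}
```

For example, if Slack and Jira both say the launch is in Q3 but Confluence says Q4, the Q3 claim carries corroboration 2 and the (launch, scheduled_for) key shows up as a contradiction to review.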