A library of grounded agents, built from how your team actually works.

You already know how to do the work. Attest watches how the work actually gets done — then builds you a library of agents you can trust.

01

Observe.

Attest ingests the places where work happens — Slack, Jira, Gmail, GitHub, tickets, docs — and turns each artifact into claims with provenance, confidence scores, and timestamps. No glue code to write; 30+ connectors ship in the box.

Real backing: the connector pipeline, already shipped. See connectors →
claim graph (sample)

Claim(
    subject="TICKET-4821",
    predicate="assigned_to",
    object="eng-platform",
    source="jira",
    confidence=0.98,
    observed_at=1744502400,
)
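The claim shape above can be sketched as a small dataclass. This is a minimal illustration with field names taken from the sample, not Attest's actual class definition:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Claim:
    subject: str       # entity the claim is about
    predicate: str     # relationship that was observed
    object: str        # value of the relationship
    source: str        # connector that produced the artifact
    confidence: float  # extractor's certainty in the claim
    observed_at: int   # unix timestamp of the observation

c = Claim(
    subject="TICKET-4821",
    predicate="assigned_to",
    object="eng-platform",
    source="jira",
    confidence=0.98,
    observed_at=1744502400,
)
```

Because claims carry provenance and a timestamp, every downstream artifact — workflow signals, specs, evals — can be traced back to a specific observation.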
02

Discover patterns.

discover_workflows() surfaces recurring flows in the graph — “triage support ticket,” “review PR against policy,” “respond to security questionnaire.” Each signal is backed by real entities and a real frequency count, with a confidence score.

Real backing: agent_factory.discover_workflows · WorkflowSignal
WorkflowSignal
  workflow_id: "wf_triage_0a3f"
  name: "triage_support_ticket"
  source_connectors: ["zendesk", "slack", "jira"]
  entity_types: ["ticket", "customer", "engineer"]
  predicate_chain: ["reported_by", "assigned_to", "resolved_by"]
  frequency: 247
  confidence: 0.91
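The frequency-and-confidence idea behind a signal like this can be illustrated in a few lines. This toy sketch counts recurring predicate chains over observed claims; Attest's discover_workflows() is far richer, and the sample chains here are invented for illustration:

```python
from collections import Counter

# Each tuple is a predicate chain observed for one ticket (toy data).
observed_chains = [
    ("reported_by", "assigned_to", "resolved_by"),
    ("reported_by", "assigned_to", "resolved_by"),
    ("reported_by", "escalated_to"),
]

# The most frequent chain becomes a candidate workflow signal,
# with confidence proportional to how often it recurs.
counts = Counter(observed_chains)
chain, frequency = counts.most_common(1)[0]
confidence = frequency / len(observed_chains)
```

A real frequency count is what separates a workflow worth automating from a one-off.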
03

Generate grounded specs + evals.

For each workflow, generate_spec() produces an agent spec grounded in the real claim graph — with a system prompt anchored to observed behavior. build_eval() builds an eval set from actual past cases, so you can measure before you ship.

Real backing: AgentSpec · EvalItem · EvalSet
AgentSpec
  name: "ticket-triage-agent"
  workflow_id: "wf_triage_0a3f"
  domains: ["support", "engineering"]
  capabilities: ["classify", "route", "cite_sla"]
  required_connectors: ["zendesk", "jira"]
  grounding_claims: 1284  # real past cases

EvalItem
  question: "Which team owns billing tickets?"
  expected_answer: "fin-platform"
  supporting_claims: ["c_a12f", "c_c09b"]
  eval_type: "workflow_chain"
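Measuring before you ship comes down to scoring an agent against eval items like the one above. This is a hedged sketch of that loop — run_agent is a stand-in lookup table, not Attest's eval runner, and the second item is invented:

```python
# Eval items shaped like the EvalItem above (second item is illustrative).
eval_set = [
    {"question": "Which team owns billing tickets?",
     "expected_answer": "fin-platform"},
    {"question": "Which connector reported TICKET-4821?",
     "expected_answer": "jira"},
]

def run_agent(question: str) -> str:
    # Stand-in agent: a fixed lookup instead of a real model call.
    answers = {
        "Which team owns billing tickets?": "fin-platform",
        "Which connector reported TICKET-4821?": "zendesk",  # wrong on purpose
    }
    return answers[question]

# Fraction of eval items the agent answers correctly.
score = sum(
    run_agent(item["question"]) == item["expected_answer"]
    for item in eval_set
) / len(eval_set)
```

Because the eval set is built from actual past cases, the score reflects the workflow as it really happened, not a synthetic benchmark.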
04

Deploy with trust.

Agents run via MCP. Attest skills govern behavior. validate_trust() runs continuously and produces a TrustReport — drift, grounding health, and recommendations — so you see degradation before your users do.

Real backing: validate_trust · TrustReport
TrustReport
  agent_id: "ticket-triage-agent"
  current_score: 0.87
  baseline_score: 0.92
  drift: -0.05
  data_freshness: 0.94
  status: "drifting"
  recommendations: [
    "rebuild eval (n=30): 3 stale cases",
    "new predicate observed: escalated_to"
  ]
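The drift check implied by this report can be sketched as a simple comparison of current score against baseline. The tolerance value here is illustrative, not Attest's default:

```python
def trust_status(current: float, baseline: float,
                 tolerance: float = 0.03) -> str:
    """Flag an agent as drifting when its score degrades past tolerance."""
    drift = current - baseline
    if drift < -tolerance:
        return "drifting"
    return "healthy"

# The report above: current 0.87 vs baseline 0.92 gives drift of -0.05.
status = trust_status(0.87, 0.92)
```

Running this continuously is what turns degradation into an alert rather than a user-reported incident.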
Ships today

Discovery, spec generation, eval construction, and trust validation are available as Python APIs and as MCP tools (factory_discover_workflows, factory_generate_spec, factory_build_eval, factory_assemble_agent, factory_validate_trust, factory_run_pipeline, and six more) — so any MCP-aware agent can drive the full pipeline. See an end-to-end walkthrough on real seed data in the auto-agents demo →
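An MCP-aware agent drives these tools with standard tools/call requests. The JSON-RPC shape below follows the MCP specification; the tool name comes from the list above, while the argument is a hypothetical example, not a documented parameter:

```python
import json

# An MCP tools/call request invoking one of the factory tools.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "factory_discover_workflows",
        "arguments": {"min_frequency": 10},  # illustrative argument
    },
}

payload = json.dumps(request)
```

Any MCP client that can send this request can drive the pipeline end to end.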

Step through it on a real seed.

The auto-agents demo walks through all four steps against a real claim graph.

Open the demo → Read the quickstart