The Notion connector is a text connector that fetches pages from Notion via the
Notion API and extracts claims from page block content using db.ingest_text().
It supports querying a specific Notion database by ID, or performing a global workspace search across all pages the integration has access to. Each page is fetched individually, its blocks are read, and the plain text is assembled and fed into the text extraction pipeline.
The connector:

- queries a specific database (`/databases/{id}/query`) or searches the workspace (`/search`)
- extracts each page's title from its `"title"` property
- feeds the assembled plain text to `db.ingest_text()` for claim extraction
- records the source ID `notion:{page_id}` for provenance tracking
- supports these block types: `paragraph`, `heading_1`, `heading_2`, `heading_3`, `bulleted_list_item`, `numbered_list_item`, `to_do`
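The Notion API returns each block as a JSON object whose text lives in a `rich_text` array keyed by the block's type. A minimal sketch of the text-assembly step under those API shapes (the helper name `blocks_to_text` is illustrative, not the connector's actual internals):

```python
# Sketch: assemble plain text from Notion block objects.
# Block shapes follow the public Notion API; the helper name is illustrative.

TEXT_BLOCK_TYPES = {
    "paragraph",
    "heading_1", "heading_2", "heading_3",
    "bulleted_list_item", "numbered_list_item",
    "to_do",
}

def blocks_to_text(blocks):
    """Join the plain_text of every supported block into one string."""
    lines = []
    for block in blocks:
        btype = block.get("type")
        if btype not in TEXT_BLOCK_TYPES:
            continue  # skip unsupported types (dividers, images, ...)
        rich_text = block.get(btype, {}).get("rich_text", [])
        line = "".join(part.get("plain_text", "") for part in rich_text)
        if line:
            lines.append(line)
    return "\n".join(lines)
```

Unsupported block types are silently skipped, so only the listed text-bearing blocks contribute to the ingested text.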
The connector is rate-limited with a 0.1-second sleep between API calls to stay within
Notion's rate limits. Results are paginated automatically up to max_pages.
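Notion paginates list endpoints with `has_more` / `next_cursor` fields. A sketch of the pagination-plus-throttling loop described above, with the HTTP call injected so the loop itself is testable (the function name and signature are illustrative, not the connector's API):

```python
import time

def fetch_all_pages(fetch_page, max_pages=100, delay=0.1):
    """Follow Notion-style cursor pagination until exhausted or max_pages.

    fetch_page(cursor) must return a dict with "results", "has_more",
    and "next_cursor" keys, mirroring Notion's paginated responses.
    """
    pages, cursor = [], None
    while len(pages) < max_pages:
        data = fetch_page(cursor)
        pages.extend(data["results"])
        if not data.get("has_more"):
            break
        cursor = data.get("next_cursor")
        time.sleep(delay)  # throttle between API calls
    return pages[:max_pages]
```

In the real connector `fetch_page` would wrap a `requests.post` to `/databases/{id}/query` or `/search`; here it is a parameter so the loop can be exercised without network access.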
Create a Notion integration and copy its token, which starts with `ntn_`.
This is the value you pass as `token`.
To give the integration access to a page, open the page's ... menu, select "Add connections", and choose your integration.
The connector can only access pages that have been explicitly shared this way.
A database URL has the form `https://www.notion.so/{workspace}/{database_id}?v=...`; the
32-character hex string before the `?` is the database ID.
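Pulling the ID out of a copied URL is a one-line regex. A sketch under the URL shape shown above (the helper name `database_id_from_url` is hypothetical, not part of the connector):

```python
import re

def database_id_from_url(url):
    """Extract the 32-char hex database ID from a Notion database URL."""
    # Match a 32-char lowercase-hex run that ends the path or precedes "?".
    match = re.search(r"([0-9a-f]{32})(?=\?|$)", url)
    if not match:
        raise ValueError(f"no database ID found in {url!r}")
    return match.group(1)
```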
The Notion connector requires the requests package:
$ pip install requests
| Parameter | Required | Default | Description |
|---|---|---|---|
| `api_key` | Yes | — | Notion integration token (`ntn_...`) |
| `database_id` | No | `None` | Specific database to query. If omitted, searches the entire workspace. |
| `max_pages` | No | `100` | Maximum number of pages to fetch |
| `save` | No | `False` | Encrypt and persist the token to the token store |

Note: In `db.connect()`, `api_key` is passed as `token`.
Search the entire workspace:

```python
conn = db.connect("notion", token="ntn_...")
result = conn.run(db)
```

Query a specific database:

```python
conn = db.connect(
    "notion",
    token="ntn_...",
    database_id="abc123def456...",
    max_pages=50,
)
result = conn.run(db)
```

Read the token from the environment and persist it to the token store:

```python
import os

conn = db.connect(
    "notion",
    token=os.environ["NOTION_TOKEN"],
    database_id="abc123...",
    save=True,
)
result = conn.run(db)
```