export async function createInvoiceHandler(req) {
  return workflows.run("sync-invoices")
}

The compiler that connects AI to your databases.
Auto-learns your schema and gives AI agents one governed query graph across your databases, APIs, source code, and filesystems. Ask about the data, the services that enrich it, the code that touches it, and the files around it from one MCP-ready binary.
npx graphjin serve

Works with all your databases. And more.
Point GraphJin at as many systems as you need — Postgres for users, MySQL for orders, Snowflake for analytics, MongoDB for events, HTTP APIs for remote services, object storage for files, and CodeSQL for source trees — and query them through a single GraphQL endpoint. Joins, remote joins, subscriptions, search, and mutations compose across systems in one request, so an AI assistant can reason across the data, APIs, files, and code without learning every backend.
- PostgreSQL
- MySQL
- MariaDB
- MongoDB
- SQLite
- SQL Server
- Oracle
- CockroachDB
- YugabyteDB
- Snowflake
- AWS Aurora
- Cloud SQL
- HTTP APIs
- S3 / GCS / Files
- Code
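A cross-system request like the one described above might look like the following. The schema is hypothetical (a `users` table in Postgres, `orders` in MySQL, `events` in MongoDB); the table and field names are illustrative only.

```graphql
query {
  users(limit: 5) {
    id
    email
    # joined from a MySQL "orders" table (hypothetical schema)
    orders {
      id
      total
    }
    # joined from a MongoDB "events" collection (hypothetical schema)
    events(limit: 3) {
      kind
      created_at
    }
  }
}
```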
One compiler. Any system. Any client.
Point GraphJin at databases, object storage, source trees, and remote APIs. It learns the shape, compiles one GraphQL surface, enforces RBAC, and gives AI assistants, REST clients, and federated routers the same production-safe engine.
GraphQL in. Optimized queries, API calls, file ops, and code search out.
Built for the AI era, hardened for production.
A compiler — not a query parser, not a resolver framework. It learns your schema, plans the query, and emits one SQL statement. The result is calmer code, fewer round-trips, and a single integration point for every AI assistant.
Auto-discovery
Introspects tables, columns and relationships on boot. Add a column and the GraphQL schema updates without a redeploy.
One SQL per query
Every nested GraphQL request compiles to a single optimized SQL statement. No N+1, no resolver code, no ORM tax.
Production-grade
RBAC, JWT, row-level rules, query allow-lists, Redis caching, audit logs. Ready for the AI era and for production.
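To make "one SQL per query" concrete, here is a rough sketch of the kind of single statement a nested request such as `{ products(limit: 2) { name owner { email } } }` could compile to. The table names are hypothetical and the SQL is a Postgres-flavored illustration, not GraphJin's exact output:

```sql
-- Illustrative only: one statement, JSON assembled in the database,
-- no per-row round-trips. Assumes hypothetical products/users tables.
SELECT json_agg(
         json_build_object(
           'name',  p.name,
           'owner', json_build_object('email', o.email)
         )
       ) AS products
FROM (SELECT * FROM products LIMIT 2) AS p
LEFT JOIN LATERAL (
  SELECT u.email
  FROM users u
  WHERE u.id = p.owner_id
) AS o ON true;
```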
A native MCP server for any AI assistant.
GraphJin ships a Model Context Protocol server with the tools an assistant actually needs: schema discovery, saved queries, where-clause validation, fragments, query execution, audit logs, and health checks. Same engine, same RBAC, same allow-lists — everything that protects your HTTP API protects the AI.
One install command wires GraphJin into Claude Desktop, Codex, or any MCP host. Tools are discoverable, narrow, and audited: no shell access, no raw SQL by default, no surprise mutations. Mutation support is opt-in per role; raw queries can be locked out entirely.
For development, graphjin mcp runs over stdio. For team access, run it as a long-lived HTTP+SSE endpoint, gated by the same JWT or OIDC flow as the main API.
# install for Claude Code (or codex / cursor / custom)
graphjin mcp install --client claude --scope global --yes
# or set up against a hosted GraphJin
graphjin mcp setup https://api.example.com

# claude_desktop_config.json
{
  "mcpServers": {
    "graphjin": {
      "command": "graphjin",
      "args": ["mcp", "--config", "/etc/graphjin"],
      "env": {
        "GRAPHJIN_USER_ID": "system",
        "GRAPHJIN_USER_ROLE": "user"
      }
    }
  }
}

Ask in plain English. Get real data back.
Claude Desktop, Codex, or any MCP client talks to GraphJin — GraphJin compiles the query, hits your database, and the assistant answers with rows it can reason over.
who's the top customer?
{ customers { id full_name email purchases { quantity product { price } } } }

Based on the purchase data, here are the top customers ranked by total spend:
| Rank | Customer | Orders | Items | Total Spent |
|---|---|---|---|---|
| 🥇 | Antwan Friesen | 20 | 124 | $928.45 |
| 🥈 | Lon Cruickshank | 20 | 94 | $586.50 |
| 🥉 | Susana Schaefer | 20 | 91 | $580.72 |
Antwan Friesen is the top customer with almost $1,000 in purchases — about 60% more than the runner-up.
CodeSQL: query your code as well
GraphJin turns databases, HTTP APIs, discovered metadata, source code, and filesystems into one governed graph for AI agents. CodeSQL keeps dev indexes live as files change, refreshes production indexes on restart, and lets agents ask where a column exists, which code references it, and which symbol owns that reference without shell access.
gj_columns { code_db_refs { file { path } symbol { name } } }

Files as queryable tables. Local, S3, or GCS.
GraphJin streams multipart uploads straight to local disk, S3, Cloudflare R2, or Google Cloud Storage. Each backend exposes a virtual table — list, stat, get, put, delete, presign — and joins seamlessly with the rest of your schema.
Uploads follow the graphql-multipart-request-spec: send a single request, GraphJin parses, validates, signs, and persists. Returned rows include the storage URL and metadata, ready for the next mutation or a presigned download.
Bring your own bucket: GCS uses Application Default Credentials, S3 respects the standard AWS chain, local writes go to a configured volume. Backends are pluggable behind one interface.
# config/prod.yml
filesystems:
  - name: "media"
    type: s3  # local | s3 | gcs
    bucket: "graphjin-media"
    region: "us-east-1"
    prefix: "uploads/"
uploads:
  enabled: true
  storage: "media"
  storage_key_prefix: "avatars/{date}/"
  allowed_mime: ["image/*", "application/pdf"]
  max_size: 25_000_000
# graphql-multipart-request-spec
mutation ($file: Upload!) {
  avatars(insert: { file: $file, user_id: $auth.user_id }) {
    id
    file_url
    file_size
    content_type
  }
}

OpenAPI specs become first-class fields in your graph.
Drop a Stripe, GitHub, or internal-service OpenAPI 3 spec into the config directory. GraphJin parses it, classifies the operations, and exposes them alongside your tables — joinable on any column → parameter mapping. One GraphQL query, one response, even when half the data lives behind REST.
Auth is configured once per spec — bearer, basic, API key, OAuth2 client-credentials, or token-exchange — and tokens are cached transparently. Concurrency caps per-spec keep upstream rate limits respected.
Joins are declarative: tell GraphJin which column feeds which parameter and the result is a nested field, RBAC-aware, with the same compiler that generates your SQL planning the calls.
# config/openapi/stripe.yml
base_url: "https://api.stripe.com"
auth:
  scheme: bearer
  token_url: "https://api.stripe.com/v1/oauth/token"
  cache_ttl: "55m"
# Map a DB column onto a REST path/query param,
# so a join is just GraphQL.
joins:
  - table: customers
    operation: listInvoices
    params:
      - column: stripe_customer_id
        param: customer

query ($id: ID!) {
  customers(id: $id) {
    full_name
    email
    # joined live from Stripe via OpenAPI spec
    invoices {
      id
      total
      status
      created
    }
  }
}

A CLI that fits the developer loop.
One binary covers everything: a dev server with auto schema discovery, a database toolchain (setup, diff, migrate, seed), a remote client that authenticates over OIDC device-code, and an MCP server. No tokens to copy, no frameworks to learn.
graphjin serve --demo starts a working example in seconds. graphjin cli setup opens the device-code login URL in your browser and persists a refreshable JWT for every subsequent command. Workflows can be invoked by name from the CLI, MCP, REST, or another workflow.

Every subcommand respects the same config, the same RBAC, and the same allow-list. What runs in CI matches what runs in production.
# spin up against a demo database
graphjin serve --demo

# scaffold and migrate a real schema
graphjin db setup
graphjin db migrate
graphjin db seed

# authenticate via OIDC device-code flow
graphjin cli setup https://api.example.com

# run a saved query against prod
graphjin cli query top_customers --limit 5

# exec a workflow (chained queries + JS)
graphjin cli workflow customer_report

# tail audit logs
graphjin cli audit --since 1h

OAuth, JWT, OIDC — and row-level rules.
JWT from Auth0, Firebase, Okta, or any JWKS endpoint. Header- or cookie-based sessions for legacy stacks. OIDC device-code login for the CLI and MCP. Whatever the source, every request lands in the same context — and RBAC + row-level filters do the rest.
Configure once. Every transport — HTTP, WebSocket, SSE, MCP — runs the same auth pipeline. Roles + row-level filters are authored in YAML and enforced inside the compiler, so even a workflow cannot read or write outside its lane.
The CLI and MCP authenticate via OIDC device-code: open a URL, approve, done. Tokens refresh automatically — no copy-pasting bearer strings into shell history.
# config/prod.yml
auth:
  type: jwt
  jwt:
    provider: "auth0"  # or firebase, okta, custom
    audience: "https://api.example.com"
    jwks_url: "https://example.auth0.com/.well-known/jwks.json"
  cookie: "gj_session"
auth_login:
  enabled: true  # OIDC device-code login for CLI / MCP
  provider: "https://login.example.com"
  client_id: "graphjin-cli"
  scopes: ["openid", "email", "offline_access"]

# config/roles.yml
roles:
  - name: anon
    tables:
      products: { query: { columns: [id, name, price] } }
  - name: user
    tables:
      orders:
        query: { filters: ["{ user_id: { eq: $user_id } }"] }
        insert: { columns: [product_id, quantity] }
        update: { filters: ["{ status: { eq: 'draft' } }"] }
Live queries with cursors that survive reconnects.
Subscribe with the same GraphQL you'd use for a query. GraphJin streams deltas over SSE or WebSockets, batches database polls into one statement, and emits cursors so clients can resume after a network hiccup without missing rows.
The subscription API is just queries with a cursor — no new schema, no resolver tree, no pub/sub bus to operate. Cursor-based pagination keeps feeds and chat-style UIs deterministic; the adaptive poll sizer keeps load predictable as subscriber count grows.
Multiple subscriptions can share a single WebSocket. Per-message timeouts and JWT expiry are enforced at the transport layer, not by hand-rolled middleware.
subscription LiveOrders($since: Cursor) {
  orders(
    where: { status: { eq: "open" } }
    after: $since
    first: 50
    order_by: { id: asc }
  ) {
    id
    total
    customer { id full_name }
    cursor
  }
}

# config/prod.yml
subs_poll_duration: "2s"  # adaptive batched polling
subs_max_clients: 10000

# both transports active at once
http:
  sse: true
  websocket: true
Everything a real backend needs.
One binary, one config file — federation, MCP, uploads, auth, subscriptions, and a CLI. No plugins, no add-ons, no surprise billing tiers.
- 1 SQL statement per query, regardless of nesting depth.
- 12 databases supported with the same GraphQL surface.
- 0 lines of resolver code. The compiler does the work.
Auto schema discovery
Tables, columns, foreign keys, views — introspected on boot and refreshed live.
Cross-database joins
Compose data across multiple databases in a single GraphQL request.
Production security
RBAC, JWT, row-level rules, allow-lists, audit logs — out of the box.
Live subscriptions
SSE and WebSocket transports with cursor-based resume.
Workflows
Chain queries with JS — invoke from REST, MCP, or another workflow.
Read-only replicas
Lock a database to query-only with a single config flag.
Remote API joins
Stitch in REST and GraphQL endpoints alongside your tables.
Recursive queries
Walk graphs and hierarchies natively — no custom resolvers.
Redis cache
Response caching with mutation-driven invalidation.
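As an example of the "Recursive queries" feature above, GraphJin can walk a self-referencing table with its `find` argument. This sketch assumes a hypothetical `comments` table with a parent-comment foreign key; check the GraphJin docs for the exact syntax your version supports:

```graphql
query {
  comments(id: 95) {
    id
    body
    # walk this comment's parent chain recursively
    comments(find: "parents") {
      id
      body
    }
  }
}
```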
Run it in under a minute.
Pick your platform, copy the command, and you're querying. The demo flag ships a real schema and example queries so there's something to point an AI client at on the very first run.
npx graphjin serve --demo

Wire it into your AI client in one command:

graphjin mcp install --client claude --scope global --yes
graphjin mcp install --client codex --scope global --yes

Prefer interactive setup? Run graphjin mcp install with no flags.
Two paths. Both end with queries running.
1. Point to your database
   Configure the connection — PostgreSQL, MySQL, SQLite, MongoDB, Oracle, MSSQL.
2. Auto-discover schema
   GraphJin introspects tables, columns, and relationships on boot.
3. Start querying
   Joins, mutations, subscriptions, federation, MCP — all out of the box.
Drop GraphJin into a federated supergraph.
Already running Apollo Router, Cosmo, or Hive? Flip one config flag and every primary-keyed table becomes a federation v2 subgraph — SDL with @key, @shareable, and @inaccessible directives, plus a working _service entry point. No resolver code.
# config/prod.yml
federation:
  enabled: true
  version: v2.5
  keys:
    users: "id"
    products: "sku"

- Generated SDL refreshes on schema change
- Per-table key overrides; field-level @shareable/@inaccessible
- Multiple GraphJin processes compose into one supergraph
- Same RBAC + allow-lists apply to entity references
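On the router side, a GraphJin instance is then listed like any other subgraph. For example, with Apollo's `rover supergraph compose`, a config might look like this (the service URL and path are placeholders; check where your GraphJin instance serves GraphQL):

```yaml
# supergraph.yaml, consumed by `rover supergraph compose`
federation_version: =2.5.0
subgraphs:
  graphjin:
    routing_url: http://graphjin:8080/api/v1/graphql
    schema:
      # fetch SDL from the running subgraph via introspection
      subgraph_url: http://graphjin:8080/api/v1/graphql
```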