New: Supaflow Claude Code Plugin -- let AI agents create, edit, and monitor your data pipelines. See how the plugin works

Open source · Free

Claude Code Plugin for Supaflow

Create datasources, build pipelines, run syncs, and manage schedules from chat. Let your AI agent handle the repetitive pipeline work so your team can focus on making decisions from the data instead of wrangling it.

Generic AI agents can sketch a pipeline in seconds — but sketches don't survive real sources, schema drift, or 2am incidents. This plugin puts Claude on top of the real Supaflow CLI, so the pipeline it builds is a pipeline that actually runs in production.

Create and update real Supaflow resources from Claude Code
Keep every command deterministic — no free-form shell guesses
Open source and easy to inspect, fork, or adapt for your own tooling
Screenshot: Supaflow plugin commands in Claude Code, showing autocomplete for datasources, pipelines, schedules, and jobs

What You Can Do With It

The plugin is built for practical, day-to-day pipeline work: it keeps Claude useful when you need to create, inspect, run, or schedule real Supaflow resources.

Create datasources

Connect a source or destination from Claude Code using the same Supaflow CLI workflow you would run manually.

Build pipelines

Ask Claude to set up a new pipeline, choose objects, and confirm the final configuration before anything is created.

Run and monitor syncs

Start a sync, check job status, and inspect failures without bouncing between docs, terminal history, and dashboards.
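Under the hood, this maps to the same CLI calls shown in the session excerpts further down. A minimal sketch (the pipeline name and job ID below are placeholders):

```shell
# Trigger a sync, then check on the resulting job.
# "postgres_to_snowflake" and <job-id> are placeholders;
# substitute your own pipeline name and the job_id returned by the sync.
$ supaflow pipelines sync postgres_to_snowflake --json
$ supaflow jobs get <job-id> --json
```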

Schedule recurring loads

Add or update a schedule from chat once the pipeline is working and you are ready to automate it.

Run known commands, not invented ones

Every Claude action routes through an approved Supaflow command. No invented flags. No guessed field names. No one-off shell scripts.

Fork the plugin

The project is open source, so you can inspect how it works or reuse the same pattern for your own CLI-based workflows.

Why this plugin exists

Generic AI agents are good at producing a plausible pipeline. They are much worse at producing one that survives real-world edge cases. This plugin stops Claude from inventing pipelines from scratch and routes it through real Supaflow workflows, so the result is faster to build and far more reliable to operate.

Supaflow handles authentication, execution, retries, and state. Claude drives; the platform operates.

What a real session looks like

Excerpts from a real Claude Code session (March 2026) — building a Postgres to Snowflake pipeline end to end. The plugin is invoked conversationally; real commands run in the terminal; real output flows back to Claude.

Create a datasource

session excerpt
claude code — supaflow plugin
> build a supaflow pipeline from postgres to snowflake

$ supaflow datasources create --from postgres_source.env --json
{"error":{"code":"API_ERROR","message":"Environment variable
POSTGRES_PASSWORD is not set."}}

The env variable was set in your shell but isn't available in
my shell session. Let me put the password directly in the file.

$ supaflow datasources create --from postgres_source.env --json
{
"id": "ab534705-27e8-4efa-8bdc-f1dc2963f980",
"name": "Postgres Source",
"connector": "POSTGRES",
"state": "active"
}

Postgres source created and connection verified. The password
was also auto-encrypted in the env file for security.

Claude recovers from a failed connection and the plugin auto-encrypts credentials before they land on disk.

Create a pipeline

session excerpt
claude code — supaflow plugin
Which schemas should be included in the sync?
- public 29 tables — main application data
- bw_internal 33 tables — internal data
- auth 23 tables — authentication data
- airtable_* (all) 30 tables across 7 Airtable schemas

> just the public schema

Got it — just the public schema (29 tables). Let me update
objects.json to deselect everything except public.*
Selected 29 tables (public schema only)

$ supaflow pipelines create \
    --name "Postgres to Snowflake" \
    --source postgres_source \
    --project snowflake_pipeline_project \
    --objects objects.json --json
{
"id": "e1329144-efa7-429c-bd3b-577d5f100ddb",
"name": "Postgres to Snowflake",
"state": "active"
}

The plugin asks before acting. Claude presents structured options, respects the narrower answer, and filters the config down before running the create command.

Execute a pipeline

session excerpt
claude code — supaflow plugin
Pipeline "Postgres to Snowflake" created. Running the first sync.

$ supaflow pipelines sync postgres_to_snowflake --json
{
"job_id": "d4c6a9b0-36d8-4583-a05d-048db5833a18",
"pipeline_id": "e1329144-efa7-429c-bd3b-577d5f100ddb",
"status": "queued"
}

Sync is queued. Let me monitor the job.

Claude uses the plugin's real command syntax (pipelines sync) rather than a guessed shell invocation, then hands off cleanly into monitoring.

Check job status

session excerpt
claude code — supaflow plugin
$ supaflow jobs get d4c6a9b0-36d8-4583-a05d-048db5833a18 --json
{ "job_status": "picked", "status_message": null, ... }

Job is picked up and starting. Checking again in a moment.

$ sleep 10 && supaflow jobs get d4c6a9b0-... --json
{ "job_status": "running",
"status_message": "Initializing pipeline execution (V2)",
"started_at": "2026-03-26T19:04:15Z" }

The sync is running — initializing. Checking once more.

$ sleep 15 && supaflow jobs get d4c6a9b0-... --json
{ "job_status": "completed",
"status_message": "Pipeline execution completed successfully:
All 29 objects processed",
"execution_duration_ms": 25078 }

First sync completed. 29/29 tables, ~25 seconds, zero errors.

Claude polls the job, narrates state transitions in plain English (queued → picked → running → completed), and confirms success with real metrics.
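The same polling pattern is easy to script by hand. A minimal sketch, assuming the `--json` output carries a top-level `job_status` field as in the excerpt above (the job ID is a placeholder, and `jq` is used for parsing):

```shell
#!/usr/bin/env sh
# Poll a Supaflow job until it reaches a terminal state.
# <job-id> is a placeholder; jq parses the --json output, assuming a
# top-level "job_status" field as shown in the session excerpt.
JOB_ID="<job-id>"
while :; do
  STATUS=$(supaflow jobs get "$JOB_ID" --json | jq -r '.job_status')
  echo "job status: $STATUS"
  case "$STATUS" in
    completed|failed) break ;;
  esac
  sleep 10
done
```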

Typical Workflows

These are the kinds of tasks where the plugin is most useful: enough structure to be safe, but still conversational enough to move faster than typing everything by hand.

Set up a new warehouse sync

Ask Claude to connect a source and destination, build the pipeline, and walk you through the confirmation steps.

Reconnect a broken datasource

Use the plugin to inspect the existing setup, update credentials or settings, and retry without hunting through old terminal commands.

Check why a job failed

Ask Claude to check recent runs, inspect job status, and narrow down whether the issue is configuration, schema drift, or a source-side problem.

Schedule a pipeline after validation

Once the initial sync is healthy, have Claude add a recurring schedule instead of manually composing another CLI command.
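If Supaflow's schedule commands follow the same pattern as the pipeline commands above, the request Claude composes might look roughly like this. The subcommand and flags here are assumptions, not documented syntax:

```shell
# Hypothetical sketch -- the "schedules create" subcommand and its flags
# are assumptions based on the pattern of the other commands;
# check `supaflow --help` for the real schedule syntax.
$ supaflow schedules create \
    --pipeline postgres_to_snowflake \
    --cron "0 6 * * *" --json
```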

Why It Works Better Than Free-Form AI CLI Usage

1. Claude helps with reasoning, Supaflow handles execution

The plugin is built for teams who want conversational help during setup and operations, but do not want the model inventing the operational layer.

2. Real commands are safer than generated shell snippets

Deterministic command routing, explicit confirmation gates, and setup validation reduce the common failure modes in raw AI-CLI usage: guessed flags, wrong command routing, and skipped prerequisites.

3. Learn Supaflow while using it

Each plugin command maps to a real Supaflow workflow. By the end of your first session you know how datasources, pipelines, syncs, and schedules actually connect — not just how to prompt Claude for them.

Get Started

Add the plugin to Claude Code and start a session. The plugin checks for CLI installation, authentication, and workspace selection automatically on first run.

1. Clone the plugin

2. Add it to Claude Code

3. Start Claude Code

The plugin loads automatically. If the Supaflow CLI is not installed, authenticated, or pointed at the right workspace, the plugin walks you through setup before you start making changes. Try a command like /create-pipeline or ask Claude to build a pipeline from one system to another.
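Assuming the plugin follows Claude Code's standard marketplace flow, the three steps look roughly like this. The repository path, plugin name, and marketplace name below are placeholders; substitute the real ones from the plugin's README:

```shell
# 1. Clone the plugin (repository path is a placeholder)
$ git clone https://github.com/<org>/<supaflow-plugin-repo>.git

# 2-3. Start Claude Code, then add the plugin from inside the session:
$ claude
# /plugin marketplace add <org>/<supaflow-plugin-repo>
# /plugin install supaflow@<marketplace-name>
```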

View CLI docs for how to create an API key and authenticate from the CLI.

Frequently Asked Questions

A few practical questions that usually come up before teams start using the plugin in real workflows.

Do I need the Supaflow CLI installed first?
Yes. The plugin is built on top of the Supaflow CLI. On first run it checks whether the CLI is installed, authenticated, and pointed at the right workspace.
Is the Claude Code plugin open source?
Yes. The plugin repository is public on GitHub, so you can review the command structure, see how deterministic command routing is enforced, and fork it for your own internal tooling.
What happens if Claude misses a required field?
The plugin is designed to reduce that risk by routing Claude through explicit command workflows instead of open-ended shell generation. In practice, that means fewer guessed arguments and clearer confirmation steps before changes are made.
Can I use it with pipelines I already created in Supaflow?
Yes. The plugin is useful for both new setup and ongoing operations such as checking jobs, triggering syncs, and updating schedules for existing pipelines.
Does the plugin execute scripts or call Supaflow directly?
It drives Supaflow through the CLI and the platform workflows behind it. The goal is to keep execution anchored in the real product surface rather than in ad-hoc code generated by the model.