March 16, 2026 · 8 min read

Connectors: From Pasted Context to Real Execution

The boring infrastructure that makes AI-built apps actually work. How Workshop handles AI models, databases, secrets, and integrations so you don't have to.

Product · Guides

TL;DR

The hardest part of AI-built apps isn't the generation — it's the wiring. API keys, database credentials, secrets management, rate limits, billing per provider. Workshop handles this with connectors: secure, built-in integrations to AI models, databases, warehouses, and tools. One setup, centralized permissions, real execution — not pasted context.

The Integration Tax

Here's the dirty secret of AI-generated software: the code is the easy part.

You ask an AI to build an app that queries your database and summarizes the results with GPT. It generates the code. It looks right. Then you try to run it.

  • Where do the database credentials go?
  • You need an OpenAI API key — where does that live?
  • The key can't be in the frontend code, so now you need a backend.
  • The backend needs environment variables, so now you need a deployment config.
  • You want your teammate to use it too, so now you need shared secrets management.
  • Your OpenAI bill is climbing, so now you need rate limiting.

This is the integration tax. For every AI-generated app that "works on my screen," there are hours of wiring that stand between it and something your team can actually use.
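The tax shows up the moment the generated code starts up. As a minimal sketch (the variable names `DATABASE_URL` and `OPENAI_API_KEY` are typical examples, not anything Workshop-specific), here is the kind of config check an AI-generated app quietly assumes someone else has already satisfied:

```python
import os

def load_config() -> dict:
    """Gather the secrets an AI-generated app assumes exist.

    Each missing variable is a setup step the code generator skipped:
    provisioning the key, deciding where it lives, wiring it into the
    deployment. That gap is the integration tax.
    """
    required = ["DATABASE_URL", "OPENAI_API_KEY"]
    missing = [name for name in required if name not in os.environ]
    if missing:
        raise RuntimeError(f"Unset: {', '.join(missing)}")
    return {name: os.environ[name] for name in required}
```

The code was never the hard part; getting `load_config` to succeed on every teammate's machine and in every deployment is.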

Most AI tools ignore this problem. They generate code and leave the plumbing to you.

Workshop treats it as a first-class concern.

How Workshop Connectors Work

Connectors are secure bridges between Workshop and external services. You configure them once — in the Hub — and every conversation and every published app can use them. Your credentials are encrypted, never shared with AI models, and you control access.

There are two categories: AI model connectors and data connectors.

AI Model Connectors

The most common integration problem for AI apps is the model itself. Workshop gives you two paths:

Managed Connectors (Recommended)

Toggle on Anthropic (Claude), OpenAI (GPT), or Google (Gemini) — and you're done. No API key. No billing configuration. No rate limit management.

Usage goes through your Workshop credits. One bill, one place to manage it. Workshop handles the infrastructure: key rotation, rate limiting, error handling, and model routing.

This is the fastest way to build an AI-powered app. Your app calls the model through Workshop's managed layer — credentials never touch your code or your browser.
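Rate limiting is a good example of a chore the managed layer absorbs. Without it, you end up maintaining something like this yourself — a minimal token-bucket limiter, sketched here only to show what "no rate limit management" is saving you:

```python
import time

class TokenBucket:
    """Minimal client-side rate limiter: allows `rate` requests per
    second, with bursts up to `capacity`. One of several pieces of
    plumbing (alongside key rotation and retries) a managed
    connector handles for you."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill tokens based on elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Multiply this by retries, backoff, and per-provider quirks, and "toggle it on" starts to look like the right trade.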

Bring Your Own Key (BYOK)

If you have your own API keys — maybe a negotiated enterprise rate, or you need a specific model version — plug them in. Usage is billed directly by the provider. You manage the key; Workshop handles the secure storage and server-side execution.

Both options work the same way from your app's perspective. The difference is who holds the key and who gets the bill.
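The distinction can be sketched in a few lines. Everything here is illustrative — there is no real SDK call shown in this post — but it captures the point: one call shape, two answers to "whose key, whose bill":

```python
def call_model(connector: dict, prompt: str) -> dict:
    """Hypothetical sketch: the app-facing call is identical for
    managed and BYOK connectors; only key custody and billing differ."""
    if connector["mode"] == "managed":
        key_source, billed_to = "<held by Workshop>", "workshop_credits"
    else:  # "byok"
        key_source, billed_to = connector["api_key"], "provider"
    # A real implementation would invoke the provider server-side here;
    # this sketch just reports which path was taken.
    return {"prompt": prompt, "key_source": key_source, "billed_to": billed_to}
```

Swapping a connector from managed to BYOK changes the accounting, not the app.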

Data Connectors

AI apps that do real work need real data. Workshop supports direct connections to:

  • Databases
  • Data Warehouses
  • Cloud Storage & Tools

When you connect a data source, Workshop can query it contextually in any conversation. Say "Using my Production Database, show me all users who signed up last week" — and Workshop writes the query, executes it, and works with the results. No copy-pasting CSVs. No manually describing your schema.

And critically: your data stays in your source. Workshop queries on demand — it doesn't copy or cache your data separately.
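To make the "signed up last week" example concrete, here's the kind of query that request translates to — run here against a throwaway in-memory SQLite table purely for illustration (your real connector targets your own schema, and the actual SQL Workshop generates will depend on it):

```python
import sqlite3
from datetime import datetime, timedelta

# Throwaway table standing in for a real users table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, signed_up TEXT)")
now = datetime(2026, 3, 16)
conn.executemany(
    "INSERT INTO users VALUES (?, ?)",
    [("ada", (now - timedelta(days=3)).isoformat()),   # last week
     ("bob", (now - timedelta(days=30)).isoformat())], # last month
)

# "Show me all users who signed up last week" becomes roughly:
cutoff = (now - timedelta(days=7)).isoformat()
recent = [row[0] for row in conn.execute(
    "SELECT name FROM users WHERE signed_up >= ? ORDER BY signed_up DESC",
    (cutoff,),
)]
```

The point isn't the SQL — it's that the question, the query, and the results all live in one place instead of being shuttled between a chat window and a database client.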

What "Real Execution" Means

This is the key difference between Workshop connectors and pasting credentials into a chat.

When you paste a database URL into ChatGPT, the AI generates SQL — but it can't run it. You copy the SQL, go to your database client, run it, copy the output, paste it back. Every step is manual. Every step is a chance for error.

When you connect a database to Workshop, the AI generates SQL and executes it. Against your real database. With real results. In the same conversation where you're building the app.

The same applies to AI models. Workshop doesn't generate code that would call GPT if you set up the key and the backend and the deployment. It calls GPT — right now, in the conversation — because the connector is already configured and the execution environment is real.

This is what we mean by "real execution, not pasted context."

Security and Permissions

Connectors handle sensitive credentials, so the security model matters:

  • Credentials are encrypted at rest and in transit
  • Connections use SSL/TLS for all data transmission
  • Credentials are never shared with AI models — the model sees results, not keys
  • Data is queried on demand — no separate storage or caching
  • You control access — delete connections anytime to revoke access
  • Published apps use the same connectors — be thoughtful about what you attach to public apps

Centralized secrets management means you configure once and use everywhere, rather than scattering API keys across environment variables, config files, and deployment scripts.

Walkthrough: An AI App With Real Data

Let's make this tangible. You want a tool that pulls customer data from your database, analyzes it with an AI model, and presents the results.

Step 1 — Connect Your Data

Open the Hub → Connectors. Select PostgreSQL (or your database). Enter your connection credentials. Workshop tests the connection and saves it.

Step 2 — Enable an AI Model

In the same Connectors tab, toggle on the Anthropic managed connector. Done — no key needed.

Step 3 — Build the App

"Build an app that shows a list of customers from my Production Database, sorted by last activity date. When I click a customer, use Claude to generate a health score summary based on their usage data, support tickets, and billing history."

Workshop queries your database, calls Claude server-side, and renders the results in a real app — all in one step. No manual wiring.
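One step worth peeking inside is the server-side hand-off from query results to the model. A sketch of that shape — the field names here are invented for illustration and will differ from whatever your actual schema contains:

```python
def build_health_prompt(customer: dict) -> str:
    """Hypothetical sketch: fold one customer's records (already
    fetched via the database connector) into a prompt for the model.
    All field names are illustrative, not a real schema."""
    return (
        f"Summarize the health of customer {customer['name']}.\n"
        f"Usage events, last 30 days: {customer['usage_events']}\n"
        f"Open support tickets: {customer['open_tickets']}\n"
        f"Billing status: {customer['billing_status']}\n"
        "Return a one-paragraph health score summary."
    )
```

Because both connectors are already configured, this composition happens inside Workshop rather than in glue code you write and deploy.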

Step 4 — Publish

Hit Publish. The app goes live. Your connectors travel with it — the published app uses the same secure connections. Your team opens the URL, signs in, and starts using it.

The integration tax was zero.

Getting Started With Connectors

  1. Open the Hub from the Workshop sidebar
  2. Go to Connectors — you'll see available connector types
  3. Add a connection — select the type, enter credentials, click Add
  4. Reference it in conversation — mention the connection by name or type
  5. Build and publish — connectors carry through to published apps

Full setup guides for every connector are available at docs.workshop.ai/connectors.


FAQ

What happens to my credentials? Credentials are encrypted at rest and in transit. They're stored securely in Workshop and never shared with AI models. The AI sees query results, not connection strings or API keys.

Can I use connectors in published apps? Yes. Published apps use the same connector credentials you configured during development. This is convenient, but be mindful — if you publish an app with your production database connector, users of that app will be able to trigger queries against it.

Do managed AI connectors cost extra? Managed connector usage is billed through your Workshop credits — the same credits you use for everything else. There's no additional per-provider fee. If you use BYOK, usage is billed directly by the provider.

Can I connect to a local database from Workshop Cloud? Workshop Cloud connects to databases that are accessible over the internet (with SSL). For local or on-premise databases that aren't publicly accessible, use Workshop Desktop — it can connect to anything on your local network.

How many connectors can I have? There's no hard limit on the number of connectors. Add as many data sources and AI providers as you need.