April 9, 2026 · 4 min read

New in Workshop: More Model Choice, Subagents, and More

Shipping our most requested features: more models to power Workshop, subagents that delegate work for better speed and quality, voice dictation, and more.

Product · Release

TL;DR

You can now power Workshop with OpenAI, Gemini, GLM 5.1, and more. We've launched subagents that can delegate work for better speed, quality, and cost. You can now dictate your prompts. Try them out at workshop.ai.

More Models to Power Workshop

You now have a much bigger set of models to choose from in Workshop.

In addition to Claude and open-source options, you can now use models from the OpenAI and Google Gemini rosters, and we've also added support for GLM 5.1.

That means more flexibility to choose the right model for the job, whether you care most about quality, speed, cost, or a specific provider.

Model selector showing OpenAI, Gemini, GLM 5.1 and other model options

You can switch models anytime from the model selector.

Subagents Are Here

We've launched subagents, which let the main agent delegate work to specialized child agents.

This improves things in a few ways:

  • Better quality: specialized agents can focus on specific tasks
  • Lower cost: simpler tasks can be handed off to cheaper models
  • Better speed: faster models can take care of straightforward tasks

Subagents are also multi-provider and multi-environment, so you can use Gemini to orchestrate and have it delegate different subtasks to different models: Claude for one step, GLM 5.1 for another, a GPT model from OpenAI for review, or local models like Gemma 4 for work that should stay on-device.
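To make the delegation pattern concrete, here's a minimal sketch in Python of an orchestrator routing subtasks to different agents by task type. This is an illustration only, not Workshop's actual API; every name below (the agents, the `Task` type, the routing table) is hypothetical.

```python
# Conceptual sketch of subagent delegation. All names are hypothetical;
# this is not Workshop's API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    description: str
    kind: str  # e.g. "code", "review", "sensitive"

# Each "subagent" is just a callable here; in practice each would wrap
# a call to a different model or provider.
def cloud_code_agent(task: Task) -> str:
    return f"[cloud-code] handled: {task.description}"

def cloud_review_agent(task: Task) -> str:
    return f"[cloud-review] handled: {task.description}"

def local_agent(task: Task) -> str:
    # Runs on-device, so sensitive inputs never leave the machine;
    # only this return value goes back to the orchestrator.
    return f"[local] handled: {task.description}"

ROUTES: dict[str, Callable[[Task], str]] = {
    "code": cloud_code_agent,
    "review": cloud_review_agent,
    "sensitive": local_agent,
}

def orchestrate(tasks: list[Task]) -> list[str]:
    # The orchestrator delegates each subtask to the agent suited for it.
    return [ROUTES[t.kind](t) for t in tasks]

results = orchestrate([
    Task("implement the parser", "code"),
    Task("review the diff", "review"),
    Task("summarize payroll data", "sensitive"),
])
```

The key design point the sketch captures: routing happens per subtask, so cheap or fast models take the simple work, and sensitive work stays local while only its output returns to the orchestrator.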

Subagents delegating work across multiple models and providers

This is especially exciting for privacy-sensitive workflows: sensitive steps can run locally on your own machine with local models, and only the relevant outputs are shared back to the orchestrator.

This is just the beginning, and it's an area we're investing in heavily. Expect it to improve quickly from here, and we'd love your feedback as you use it.

You Can Now Dictate Your Prompts

Tired of typing your messages to Workshop? You can now speak them into reality. Click on the mic icon and let your voice do the talking (no pun intended).

Voice dictation — mic icon in the prompt bar

Other Improvements

A few more updates we shipped:

  • Easier connectors: some AI connectors are now enabled by default in the flows where they're needed, reducing setup friction
  • Better Plan Mode: now more helpful and easier to use
  • Stability improvements: a range of bug fixes and quality improvements

FAQ

Which models are now available in Workshop? In addition to Claude (Anthropic) and open-source models, you can now use OpenAI (GPT) and Google Gemini models, plus GLM 5.1. You can switch between them anytime from the model selector in the toolbar. See Managed AI Connectors for the full list.

What are subagents and how do they work? Subagents are specialized child agents that the main agent can delegate work to. When the main agent encounters a task that benefits from a different model — faster, cheaper, or more specialized — it hands that task off automatically. You don't need to configure anything; Workshop handles the delegation.

Can subagents run on local models? Yes. Privacy-sensitive steps can be routed to local models running on your machine via Workshop Desktop, with only the relevant outputs returned to the orchestrator. This keeps sensitive data on-device.

Is voice dictation available on both Cloud and Desktop? Yes. The mic icon appears in the prompt bar on both Workshop Cloud and Workshop Desktop.

Where can I give feedback on subagents? Join the conversation on Discord — we're actively iterating and your feedback shapes what comes next.