
Figma Hands AI Agents the Keys to Its Design Canvas

Figma's new use_figma tool lets AI agents create and modify designs directly, with skills to encode team conventions.

Oliver Senti, Senior AI Editor
March 24, 2026 · 5 min read
[Image: Terminal window overlaying a Figma design canvas showing an AI agent generating button component variants]

Figma opened its canvas to AI agents today. Through a new use_figma tool on its MCP server, coding agents like Claude Code, Codex, and Cursor can now generate and modify native Figma design assets (frames, components, variables, auto layout), all wired to your existing design system. The beta is free. Pricing comes later.

That last part deserves attention. Figma is calling this a "usage-based paid feature" once the beta wraps, but hasn't said what that means in dollars. Given how aggressively AI coding tools burn through API calls, the eventual bill could sting teams that lean heavily on agentic workflows. For now, though, it's open season.

What use_figma actually does

The MCP server has been around since June 2025, when it launched as a read-only bridge between Figma files and AI coding tools. Agents could pull design context (variables, component trees, layout data) but couldn't touch the canvas itself. The generate_figma_design tool, which arrived in February, added the ability to capture live UI from a browser and push it into Figma as editable layers. But that's still a one-directional translation from code to canvas.

use_figma is different. It lets agents write to Figma files using your design system as the source of truth. An agent in Claude Code can create a full component set, 72 variants deep (Figma's own demo showed exactly this), by reading your existing library first and building with what's already there. The output inherits your tokens, your naming conventions, your spacing scale.
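Under the hood, MCP tools are invoked over JSON-RPC with a `tools/call` request. The envelope below follows the MCP specification; the tool name comes from Figma's announcement, but the argument fields shown are an assumption for illustration, not Figma's documented schema:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "use_figma",
    "arguments": {
      "prompt": "Create a Button component set using our existing color and spacing variables",
      "fileKey": "ABC123"
    }
  }
}
```

In practice the agent constructs this call itself; the point is that the canvas write goes through the same MCP plumbing as the read-only tools that came before it.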

Or at least that's the pitch.

Skills: markdown files with real consequences

The more interesting piece here, and the one I suspect will matter more long-term, is the skills system. A skill is a markdown file that tells an agent how to work in Figma: which steps to follow, which conventions to respect, which tools to call in what order. Anyone who understands Figma can write one. No plugin code required.

Figma launched nine example skills today, built by a mix of internal teams and community practitioners. They range from generating component libraries from a codebase (/figma-generate-library) to syncing design tokens between code and Figma variables with drift detection (from Firebender) to generating screen reader specs from UI specifications (from a designer at Uber). A foundational skill called /use-figma sits underneath everything, giving agents a baseline understanding of how Figma works.
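Since a skill is just a markdown file of instructions, a minimal one is easy to picture. The sketch below is hypothetical (the slash-command name and the exact format are assumptions, not Figma's documented schema), but it captures the shape of the idea:

```markdown
# /team-button-variants

When asked to create button components in Figma:

1. Read the existing library first; reuse published color and spacing variables.
2. Name variants with the `Component/State/Size` convention (e.g. `Button/Hover/Large`).
3. Apply auto layout to every frame; never hard-code padding values.
4. After generating, screenshot the output and fix any mismatches before finishing.
```

No plugin API, no build step: the skill is team convention written down where an agent can read it.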

"The best products come from teams who care deeply about the details," says Cat Wu, head of product for Claude Code at Anthropic. "Skills teach Claude Code how to work directly in the design canvas, so you can build in a way that stays true to your team's intent and judgment." That's a careful framing, and probably the right one. The value proposition isn't "AI designs for you." It is "AI designs the way your team already designs."

The self-healing loop

There is a claim in Figma's announcement that caught my eye. When an agent generates a screen, it can take a screenshot of its own output, compare it against what was intended, and iterate on mismatches. Because the agent is working with real Figma structure (components, variables, auto layout) rather than pixel-pushing, those corrections ripple through the system properly.
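In outline, that loop is simple. Here is a minimal sketch in Python; every helper name is a placeholder of mine standing in for agent behavior, not anything from Figma's API:

```python
# Hypothetical sketch of the generate -> screenshot -> compare -> fix loop.
# All helper functions are stand-ins, not real Figma or MCP calls.

def generate_screen(spec, fixes=()):
    """Stand-in for the agent writing frames/components to the canvas."""
    return {"spec": spec, "applied_fixes": list(fixes)}

def screenshot(design):
    """Stand-in for capturing the rendered output (really an image)."""
    return design

def find_mismatches(rendered, spec):
    """Stand-in for the agent visually diffing output against intent."""
    return [r for r in spec["requirements"] if r not in rendered["applied_fixes"]]

def self_healing_loop(spec, max_iterations=3):
    fixes = []
    design = generate_screen(spec, fixes)
    for _ in range(max_iterations):
        mismatches = find_mismatches(screenshot(design), spec)
        if not mismatches:
            break
        # Corrections target structure (components, variables, auto layout),
        # so applying a fix propagates through the design system.
        fixes.extend(mismatches)
        design = generate_screen(spec, fixes)
    return design

spec = {"requirements": ["use spacing tokens", "apply auto layout"]}
result = self_healing_loop(spec)
```

The interesting part is the `find_mismatches` step: because the comparison happens against real Figma structure rather than raw pixels, a fix lands once and ripples everywhere the component or variable is used.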

I haven't tested this myself, and Figma didn't share metrics on how well it works in practice. The concept is sound, but "self-healing" is one of those terms that sounds better in a blog post than it performs in production. AI models are non-deterministic, as Figma's own post acknowledges, so the same prompt can produce different results. Skills are supposed to make that behavior more predictable. Whether they actually do at scale is an open question.

Who's on board

The list of supported MCP clients is long: Augment, Claude Code, Codex, Copilot CLI, Copilot in VS Code, Cursor, Factory, Firebender, and Warp. Ed Bayes, design lead at Codex (OpenAI), offered the expected endorsement, saying Codex can now "find and use all the important design context in Figma" to build products more efficiently. OpenAI's partnership with Figma has been deepening steadily; this is another step in that direction.

But here's what the announcement doesn't address: accuracy. Independent testers have reported 85-90% styling inaccuracy when translating Figma's SVG-based node tree into web code through the MCP server. That number comes from SFAI Labs, which compared Figma's MCP approach against paper.design, a newer tool that stores designs as actual HTML and CSS with no translation step. Figma's response would probably be that use_figma isn't doing design-to-code translation; it is working natively within Figma's own format. Fair enough. But the accuracy question will follow this feature.

What comes next

Figma says it is working toward parity with the Plugin API, starting with image support and custom fonts. The Code Connect integration, which maps design system components to actual codebase components, is meant to tighten the loop further. And skills will get easier to share through the community.

The pricing question looms largest. Figma already charges per seat, and adding usage-based API fees on top could create friction, especially for teams where agents are making hundreds of calls per session. Starter plan users are already capped at six MCP tool calls per month. The developer docs mention rate limits that mirror Figma's REST API tiers, but the write-to-canvas tools are currently exempt. That exemption probably won't last.

For now, the beta is the bet. Figma is gambling that if enough teams build skills and weave use_figma into their workflows during the free period, they won't want to leave when the meter starts running. It is the same playbook every platform runs. Whether the output quality justifies the eventual cost is something teams will have to evaluate for themselves, preferably before they're locked in.

Tags: Figma, AI agents, MCP, design tools, Claude Code, Codex, design systems, developer tools
Oliver Senti
Senior AI Editor

Former software engineer turned tech writer, Oliver has spent the last five years tracking the AI landscape. He brings a practitioner's eye to the hype cycles and genuine innovations defining the field, helping readers separate signal from noise.


