Building with Claude MCP: An Experiment in Design-to-Code Automation

PROTOTYPE EXPLORATION

I built an experimental design-to-code pipeline with Claude’s API to understand where AI tooling genuinely helps versus where it creates friction. The value wasn’t in the tool—it was in the learning.

ROLE: Design Technologist
TIMELINE: 1 week (nights & weekends, ~15-20 hours)
TOOLS: Figma, Claude Desktop (MCP), shadcn/ui, Cursor
OUTCOME: Learned critical lessons about tool adoption, team scalability, and the difference between technical feasibility and product viability

The Vision: Closing the Design-to-Code Loop

In the transition from deterministic to probabilistic UI, static mockups in Figma fail to capture the “feel” of an AI’s latency, reasoning, or uncertainty. I wanted to build a unified prototyping pipeline using Claude’s Model Context Protocol (MCP) that would:

  1. Sync with my Figma library – Pull live components into a functional environment
  2. Enable high-fidelity testing – Test real tool calls and agentic responses in minutes, not days

The hypothesis: If I could connect Figma → MCP → Claude → Cursor, I could prototype AI interactions faster than our current Figma → engineer handoff cycle.

Figure 1: The MCP pipeline architecture—from Figma design system to functional React prototype in ~30 minutes.

What I Built

The Flow:

  1. Figma Design System → Components with MCP-compatible naming conventions
  2. MCP Server → Reads the Figma file via its REST API, exposes component metadata to Claude
  3. Claude (via Cursor) → Generates TypeScript React using shadcn/ui
  4. Local Development → Functional prototype with real API calls
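
To make step 2 concrete, here is a minimal sketch of what the MCP server looked like, assuming the official TypeScript SDK (@modelcontextprotocol/sdk) and Figma’s REST components endpoint; the tool name and the trimmed-down metadata shape are my simplifications, not the exact implementation:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Sketch: expose a Figma file's published components to Claude as one MCP tool.
const server = new McpServer({ name: "figma-components", version: "0.1.0" });

server.tool(
  "list_components",
  { fileKey: z.string().describe("Figma file key from the file URL") },
  async ({ fileKey }) => {
    // Figma REST API: published components for a file.
    const res = await fetch(
      `https://api.figma.com/v1/files/${fileKey}/components`,
      { headers: { "X-Figma-Token": process.env.FIGMA_TOKEN ?? "" } }
    );
    const data = await res.json();
    // Pass Claude only what it needs to map Figma names onto shadcn/ui.
    const components = data.meta.components.map((c: any) => ({
      name: c.name,
      description: c.description,
    }));
    return { content: [{ type: "text", text: JSON.stringify(components) }] };
  }
);

await server.connect(new StdioServerTransport());
```

Claude calls list_components, gets back component names and descriptions, and uses those to pick matching shadcn/ui components when generating code.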

What Actually Worked (For Me):

  • Spun up working prototypes in 30 minutes instead of 2 days
  • Kept components consistent with the design system
  • Tested real AI behaviors (streaming, latency) immediately
  • Iterated fast – asked Claude for a change, saw the result instantly

The magic moment: “Create a chat interface with streaming responses” → working code that matched our design system.
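
To show what that moment produced, here is a simplified sketch in the spirit of the generated output – not the verbatim code; the /api/chat endpoint is a placeholder, and the imports follow shadcn/ui’s conventions:

```tsx
import { useState } from "react";
import { Button } from "@/components/ui/button";
import { Input } from "@/components/ui/input";

// Streams a reply from a placeholder /api/chat endpoint and renders
// tokens as they arrive, instead of waiting for the full response.
export function ChatInterface() {
  const [input, setInput] = useState("");
  const [reply, setReply] = useState("");

  async function send() {
    setReply("");
    const res = await fetch("/api/chat", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ message: input }),
    });
    if (!res.body) return;

    // Read the response body chunk by chunk.
    const reader = res.body.getReader();
    const decoder = new TextDecoder();
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      setReply((prev) => prev + decoder.decode(value, { stream: true }));
    }
  }

  return (
    <div className="flex flex-col gap-2">
      <p className="whitespace-pre-wrap">{reply}</p>
      <Input value={input} onChange={(e) => setInput(e.target.value)} />
      <Button onClick={send}>Send</Button>
    </div>
  );
}
```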

Figure 2: Figma design library using shadcn/ui component naming conventions for MCP compatibility

Figure 3: Claude Desktop (MCP) reading Figma component metadata to generate matching React code

The Hard Truth: Two Major Roadblocks

1. The Scaling & Setup “Tax”

The Problem: The pipeline required specific local environment configuration.

To use it, teammates had to:

  • Install the MCP SDK and configure a local server (a sample of the required config follows this list)
  • Manage Figma API keys and Claude API access
  • Set up Cursor and learn the component naming conventions
  • Be comfortable with terminal commands
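
For a sense of that friction: before anything worked, each teammate’s claude_desktop_config.json had to point at a locally built server with their own credentials. A sample, with the server name, script path, and token as illustrative placeholders:

```json
{
  "mcpServers": {
    "figma-components": {
      "command": "node",
      "args": ["/path/to/figma-mcp-server/dist/index.js"],
      "env": {
        "FIGMA_TOKEN": "<your Figma personal access token>"
      }
    }
  }
}
```

And that was only Claude Desktop – Cursor and the naming conventions were separate steps on top.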

The team’s reaction: “This is cool for you, but I’m not setting all that up.”

The Realization: I built a “bespoke cockpit” for myself, not a “utility” for the team.

The Insight: For design tools to succeed in enterprise environments, they must be browser-first and zero-config. Setup friction killed adoption.

2. The “One-Way Street” Limitation

The Problem: Sync was unidirectional – I could pull from Figma but couldn’t push changes back.

The workflow breakdown:

  1. Pull Figma components → Generate code → Test
  2. Discover button needs different state
  3. Manually update Figma
  4. Re-pull to see changes
  5. Repeat

The Technical Gap: MCP can read context brilliantly, but writing back to design tools isn’t part of the paradigm yet.

Why this matters: AI-assisted design should enable bidirectional learning – the AI suggests improvements and updates the design file automatically.

Figure 4: The one-way sync problem—MCP can read from Figma but can’t write design changes back

What I Learned About AI-Powered Workflows

1. LLMs Lower Barriers, But Don’t Remove Them

Claude made generating code incredibly easy. But the gap between “working for me” and “working for the team” remained enormous. AI tools excel at individual productivity but fail at collaboration without intentional design for sharing.

2. AI Tools Are “Agreeable” – Sometimes Too Agreeable

Claude never pushed back on my assumptions. It happily built whatever I described, even when there were fundamental scaling problems. You still need critical thinking – AI amplifies your direction, good or bad.

3. The Best AI Tools Solve Clear, Specific Problems

What worked: “Generate a chat component that streams responses”
What didn’t: “Build a design-to-code pipeline that solves all handoff problems”

The most successful moments came from clear, bounded problems. The failure was trying to solve a vague, systemic issue.

4. Hands-On Building Reveals Hidden Truths

No planning would have revealed the setup tax or one-way sync limitation. I had to build it, use it, and try to share it to understand why it wouldn’t scale. Even “failed” prototypes teach what won’t work.

Key Lessons for Future AI Infrastructure

This “failed” experiment identified the frontier of design tooling:

Protocols > Plugins: MCP is the right direction, but for designers, protocols need to be as seamless as a “Share Link.” Any friction beyond clicking a button kills adoption.

Bi-Directional Requirement: AI needs to “write” to the canvas as easily as it “reads.” Read-only context isn’t enough.

The UX of Onboarding: If setup friction exceeds value provided, the tool dies. This isn’t a technical problem – it’s a product problem.

Reflection: Technical Feasibility vs. Product Viability

This project taught me the difference between building something that works and building something that scales.

As a designer who prototypes with code, it’s easy to fall in love with technical elegance. But my job isn’t building the most advanced pipeline – it’s building tools that empower the team to move faster together.

What I’m not using today: The MCP pipeline for team prototyping
What I am using today: The lessons about AI tool adoption

When we built hila’s SQL Debug feature, I insisted on zero setup – it just works in the browser. When we designed the Reasoning tab, we made it accessible with a single click, not a configuration file.

The meta-lesson: Sometimes the best thing an AI experiment can teach you is what not to build next time.

Connecting to Broader Work

This experiment informed how I think about AI product design:

My evaluation criterion now: “Can my least technical teammate use this without my help?” If not, it’s not ready.

The frontier of AI product design isn’t just making AI more powerful – it’s making powerful AI tools accessible enough that everyone can benefit, not just people who can configure local servers.