In our last post, we talked about how design at Metronome evolved from polish to power. But even with sharper insights and a strategic seat at the table, there was a limit to how fast we could move. We could see the work and frame it. But actually getting it out the door?
We kept running into the same roadblock: our inability to ship small, high-impact fixes without interrupting priority workflows or stretching engineering focus.
When this started, it wasn’t just about big feature launches. It was about the hundreds of small, high-friction UX gaps we knew would improve trust, clarity, and usability. The design team, working alongside product, had already spent months gathering insights and surfacing themes from across the user journey.
Exploring AI was about moving on those opportunities faster. But over time, it’s become more than just a workflow for design-led improvements: it’s changed how we collaborate, how we frame work across teams, and how we make room for more people to contribute directly to the product. We wanted to democratize progress throughout the company and tighten the loop between idea and impact.
Now let’s get to the part you’re probably here to read: our version of vibe-coding.
What is vibe-coding, really?
On the surface, vibe-coding is a cheeky name. But under the hood, it’s a workflow: a lightweight way for all contributors (designers, PMs, and other non-engineers) to ship thoughtful, high-impact improvements with the help of AI.
We viewed it as the bridge between insight and implementation.
The intention wasn't to use AI to replace designers or engineers. We wanted to use it to remove the wait time between identifying a problem and seeing a fix in the product. That’s the core of the Metronome Design Rapid-AI (MDR) program we’ve been building this year.
This article outlines the start of that journey: how we built the program, what’s live today, and what we're learning from the front lines.
Where we started: Strategy before speed
Before we ever opened Figma Make, Cursor, or Claude Code, we had to get clear on what success would look like. That meant setting intentional guardrails, and the MDR program is where we laid that groundwork.
The name itself started as a tongue-in-cheek reference to the show Severance, where the MDR (Macrodata Refinement) team literally vibes out feelings from numbers and sorts them into labeled boxes. Funny enough, that became a fitting metaphor for what we were trying to do: translate signals into structure, and insight into action.
The strategy wasn’t about automation. It was about unlocking velocity where design historically hits walls: turning UX problems into resolved experiences without always waiting in the engineering queue. We framed MDR around a few non-negotiables:
- Speed only matters if you’re solving the right problems.
- Scale only works if it doesn’t break the quality bar.
- AI should not only make outputs faster; it should unlock better decision-making.
From there, we built four program goals to anchor the work:
- Accelerate time-to-market for design-led improvements.
- Empower each function (Design, PM, Eng) to act independently.
- Shift from handoffs to iteration by embedding AI in collaborative workflows.
- Create a fast lane for client-impacting improvements that normally get stuck in backlog.
Again, we weren’t trying to automate product design. We were trying to operationalize momentum.
Hard stop: Is this a shared priority, or just your personal mission?
Before we jumped into implementation, we paused. Why? Because spinning up something like vibe-coding can go one of two ways: you unlock company-wide momentum as more people contribute to product quality, or you end up with engineering overloaded, the codebase at risk, and trust eroded across teams.
To avoid that second path, we grounded our work in something design is uniquely suited for: cross-functional orchestration. Designers sit at the intersection of user insight, business goals, and product reality. We're used to navigating constraints, facilitating alignment, and working across departments to bring ideas to life. That made us the right team to prototype how this new kind of AI-augmented workflow could scale.
That said, it only worked because we had strong foundations. Ask yourself: do you have…
- ...strategic commitment from leadership to make this a priority?
- ...an engineering org open to experimentation and partnership?
- ...a design team willing to stretch outside traditional role boundaries?
- ...a product team eager to ship faster and reduce feedback loops?
- ...a broader company culture that favors curiosity and action over perfect process?
We were fortunate enough to have all of the above, but even then, it was messy. The first few months were heavy on alignment and expectation-setting. We had to call out early risks, clarify cross-functional outcomes, and continually check in on what success should look like. This wasn’t a one-and-done setup. We’re still meeting with other departments regularly to calibrate, make adjustments, and smooth over areas where friction pops up.
Then came our discussions of early north stars for each group:
- Product
  - AI is a defined operator in the product design lifecycle.
  - Product creates model-consumable Linear tickets.
  - Product can take their product requirement documents (PRDs) and create their own working prototypes.
- Design
  - Design can commit code for customer quick-win tickets.
  - Design fully owns the design system.
  - Design has an experimental staging environment to test larger, iterative, holistic changes.
- Engineering
  - Engineering can operate efficiently with a single front-end (FE) owner.
  - The design-to-dev pipeline runs from Figma mock → MCP Code Connect → page generation (see the sketch below).
  - FE components are bug-free, easily leveraged across teams, and help engineering move faster.
- GTM
  - Solution architects can build demo-able experiences for customers.
  - Everyone can make small but impactful changes to the product.
And these shared goals needed to be backed by actual commitments before moving forward. It wasn’t enough to agree on paper; teams had to carve out real time, learn new tools, and shift how they worked day to day. Design committed to leading the rebuild of the design system in Figma and maintaining component documentation. Engineering partnered with us to create a new workflow from Model Context Protocol (MCP) to production, collaborated closely to rebuild frontend components, and committed time to upholding quality in PRs. Product got involved early by joining prototyping workshops and learning the new tools and rhythms needed to move faster. And leadership stepped up to allocate budget, protect time for experimentation, and champion the strategic value of what we were doing across the org.
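To make that Figma-to-code north star concrete, here’s a minimal sketch of a Code Connect mapping for a single component. The `Button` component, prop names, and Figma node URL are placeholders, not our actual setup:

```tsx
// Button.figma.tsx: maps a Figma component to real React code.
// Everything here (component, props, URL) is a hypothetical example.
import React from 'react'
import figma from '@figma/code-connect'
import { Button } from './Button'

figma.connect(Button, 'https://www.figma.com/design/<file>?node-id=<id>', {
  props: {
    // Tie Figma component properties to the React component's props.
    label: figma.string('Label'),
    variant: figma.enum('Variant', {
      Primary: 'primary',
      Secondary: 'secondary',
    }),
    disabled: figma.boolean('Disabled'),
  },
  // What downstream tooling surfaces as the "correct" code for this mock.
  example: ({ label, variant, disabled }) => (
    <Button variant={variant} disabled={disabled}>
      {label}
    </Button>
  ),
})
```

With mappings like this in place, generated pages can reuse real components instead of inventing one-off markup, which is what keeps the mock-to-code step trustworthy.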
In short: this only works when there’s alignment. We know alignment isn’t something you check off once. It’s something you have to keep earning as things change. Teams have to check in often and stay honest about what’s working and what’s not. You will absolutely create friction and step on toes early on. It’s part of the cost of learning fast.
What we did first: Tackle the friction
Once the cross-functional buy-in was there, we shifted from strategy to execution. The big question was: where do we start?
We weren’t looking for flashy demos or big splash features. We wanted areas where friction was high, risk was low, and the value of faster iteration was obvious. So we focused early vibe-coding efforts on three core tracks:
- Design System Migration: We used Cursor to accelerate the transition from legacy styles to a modern, semantic design system: refactoring tokens and naming patterns and tightening UI consistency faster than we ever could manually (see the sketch after this list).
- Design Quality of Life Wins: With Claude and Cursor, designers began shipping scoped UI improvements directly: empty state cleanup, button alignment issues, tooltip clarity, things that would’ve otherwise waited behind larger roadmap priorities.
- Customer Quick Improvements: We piloted a repeatable process for transforming raw client feedback into shippable enhancements. Designers learned to prototype in local environments, test safely, and deliver high-signal improvements with little overhead. (More on this process in a separate article.)
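For a sense of what the token migration looked like in practice, here’s a minimal sketch of the before/after shape of that refactor. The token names are illustrative, not Metronome’s actual system:

```ts
// Before: legacy styles with hard-coded values that carry no meaning.
const legacyCardStyle = {
  backgroundColor: '#ffffff',
  border: '1px solid #e2e2e2',
  color: '#1a1a1a',
}

// After: a semantic token layer names the intent behind each value,
// so a theme change is a token change, not a find-and-replace.
const tokens = {
  color: {
    surface: { default: '#ffffff' },
    border: { subtle: '#e2e2e2' },
    text: { primary: '#1a1a1a' },
  },
} as const

const cardStyle = {
  backgroundColor: tokens.color.surface.default,
  border: `1px solid ${tokens.color.border.subtle}`,
  color: tokens.color.text.primary,
}
```

This is the mechanical work Cursor helped accelerate: finding hard-coded values and rewriting them against the token layer.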
This wasn’t just about speed. It was about proving that this new model—AI-assisted, design-led, engineering-aligned—could hold up under real product constraints. So we started with what we knew: design systems, common UI pain points, and quick wins tied closely to feedback. That let us learn fast, fail safely, and build a rhythm we could scale. The goal was to take those early learnings, including more failed experiments than we’d like to admit, and shape them into a repeatable process we could refine and scale across the company.
Building the muscle: From scrappy to repeatable
The early wins gave us confidence, but one-off experiments don’t scale. Our next phase was about turning vibe-coding into something teams could rely on: habits, tools, and rituals people could actually depend on, supported by real documentation, templates, and infrastructure. Cursor and Claude were core to the process, but the muscle wasn’t the tech; it was the habits.
We built prompt libraries around earlier vibe-coded tasks (like spacing refactors, tooltip edits, empty state scaffolding), created onboarding guides for new contributors, and established clear QA practices around testing, accessibility, and design system compliance.
Everyone soon had access to:
- Prompt libraries tied to common design tasks (sketched after this list)
- Onboarding guides for Cursor, Claude, and Figma
- Idea → Design Spec → Working prototype pipelines
- Best practices for planning, QA, and shipping
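Here’s a minimal sketch of what one prompt-library entry can look like. The interface and template text are hypothetical; the shape, a named task plus a parameterized prompt, reflects how entries are tied to common design tasks:

```ts
// A hypothetical prompt-library entry: a reusable template for a common
// design task, so contributors start from a known-good prompt.
interface PromptTemplate {
  task: string
  prompt: (args: { page: string; action: string }) => string
}

const emptyStateScaffold: PromptTemplate = {
  task: 'empty-state-scaffolding',
  prompt: ({ page, action }) =>
    [
      `Add an empty state to the ${page} page using our EmptyState component.`,
      'Use existing design-system tokens only; do not introduce new styles.',
      `The primary call to action is "${action}".`,
      'Keep the diff scoped to this page and include a screenshot in the PR.',
    ].join('\n'),
}

// Example: generate the prompt a designer would paste into Cursor or Claude.
console.log(emptyStateScaffold.prompt({ page: 'Invoices', action: 'Create invoice' }))
```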
We also stood up async pairing sessions, weekly vibe-coding office hours, and biweekly group AI commits, making it easier for contributors to get unblocked without needing to escalate to engineering.
It wasn’t flawless, and it’s still evolving, but the structure gave us just enough friction to scale without chaos.
What's next
Our vibe-coding isn’t a finished system; it’s a new muscle we’re still learning to flex. But even in its imperfect form, it’s already helped us shorten the distance between insight and impact, and shown us what’s possible when design leads from the front.
Below are several active exploration tracks we’re either building, piloting, or expanding: each will get its own write-up in this series as we continue to test, refine, and ship.
- AI Tool Exploration: How we tested tools like Cursor, Claude Code, and Figma Make to enable faster prototyping and AI-assisted iteration across functions
- Client Quick Wins: The behind-the-scenes of our “fast lane” workflow that turns raw feedback into shippable design-led fixes, authored directly by non-engineering contributors
- Design System Migration: A deep dive into rebuilding our frontend foundations using semantic tokens, Cursor-powered refactors, and AI-supported structure audits
- MCP for Internal Docs, Linear & Figma: How we’re creating integrations to both power design-to-code workflows and enrich tickets with AI (via Claude), connecting it all to visual design experiments
- Quick Win Automation Pipeline: We’re building a semi-automated pipeline to turn client insights from our research repository into shippable improvements with minimal friction
- Prompt Libraries + Workflow Automation: Building scalable prompt systems for specs, refactors, flows, and PRD conversion and the infrastructure behind making those reliable
- AI Personas: Using real research and transcripts to create conversational stand-ins for users: tools that help us test tone, UI clarity, and overall usability.
- Prototyping with PMs: How we’ve used Claude and Figma Make to turn specs into shareable prototypes, reducing overhead and enabling faster decision-making
- Design System Validation Agent: How AI can deeply understand component usage, properties, and best practices, enabling it to validate design system adherence during code reviews
- Big Rocks: Our ambitious bets, where we stop waiting for backend perfection and start testing major shifts in how billing should work, including experiments in pricing simulation, rethinking rate card UX, and visualizing complex object-model relationships
Each of these tracks has already taught us something worth sharing, and over the coming weeks, we’ll break them down into focused stories. No fluff, just the real ways we’re applying AI to improve velocity and craft across our product.
We’re kicking things off with tool exploration: how we tested Claude, Cursor, and Figma Make across teams. From there, we’ll dive into the anatomy of a real vibe-coded fix: the prompts we use, the workflows that support it, and how we’re scaling contributor impact without sacrificing quality.