
Making experimentation work for product managers

LaunchDarkly Experimentation is the missing puzzle piece in the PM workflow.



Throughout my career, I’ve worked closely with product managers—initially as a designer collaborating alongside them, and later as someone building tools to support them. One thing has always stood out: the complexity of the product management (PM) role. Product managers are responsible for setting the vision, building the roadmap, understanding customer needs, and navigating the trade-offs between ambition and iteration.

They are expected to lead with confidence. However, they often lack direct visibility into what’s working and why. In theory, experimentation should help resolve that issue. In practice, it often adds complexity instead of clarity.

We wanted to change that, not by offering another analytics dashboard, but by designing an experience that aligns with how PMs work and think. Our goal was to create a system that integrates naturally into their workflow and mental model, enabling them to deliver real outcomes.

Centering experimentation around product work

Many experimentation platforms treat PMs as peripheral users. They may be involved in shaping hypotheses or reviewing results, but the tools themselves are geared toward technical users. This model doesn’t reflect how product teams operate today.

Product managers don’t just observe the results of experiments; they drive the success of experimentation programs. That’s why LaunchDarkly reimagined experimentation capabilities as a first-class product experience. It’s not hidden in a developer toolset. It’s designed to be clear, collaborative, and credible.

A user experience built with intention

I’ve spent time with PMs during design sprints, product reviews, and on-call escalations. I’ve seen how often their voice is stretched across strategy and execution. Our new experimentation workflow is a direct response to those realities. It’s not just about usability; it’s about advocacy—creating tools that support PMs in making informed, confident decisions about the parts of software that they help to build.

We restructured the experimentation workflow with product managers in mind, particularly through the following features:

Event Explorer

Gives PMs visibility into metric events. No more waiting on others or working in the dark to find out if the events that power metrics are available and ready for use. Search for events, confirm they’re firing correctly, and create custom metrics with confidence. The interface is designed to bring visibility to a traditionally engineering-focused task and promote self-sufficiency.

For example, imagine a product manager at a SaaS company who has rolled out a new onboarding flow behind a feature flag for 50% of new users. The goal is to increase first-week activation, defined as users completing three key setup steps. This product manager might use Event Explorer to:

  • Discover metric events. They can use Event Explorer to confirm that LaunchDarkly is ingesting the events necessary for creating conversion metrics and that those events are in an active and healthy state.
  • Define custom metrics. Because their team uses LaunchDarkly custom metrics, they identify and use events, instrumented in code by their engineering partners, to define the metrics they’ll use in their experiments.
  • Validate event activity. One of the most important parts of measuring flags with experiments is confirming that the metrics being used are healthy and collecting data before the experiment runs.
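The validation step above boils down to two questions: has the event ever been ingested, and has it fired recently? Here is a minimal sketch of that check in plain Python. The function, payload fields, and freshness window are hypothetical illustrations; Event Explorer answers these questions in the UI without any code.

```python
from datetime import datetime, timedelta, timezone

def event_health(events, name, max_age=timedelta(hours=24)):
    """Return (seen, fresh): was the event ever ingested, and recently?"""
    matching = [e for e in events if e["name"] == name]
    if not matching:
        return (False, False)
    latest = max(e["timestamp"] for e in matching)
    return (True, datetime.now(timezone.utc) - latest <= max_age)

# Hypothetical ingested events for the onboarding example
now = datetime.now(timezone.utc)
events = [
    {"name": "setup_step_completed", "timestamp": now - timedelta(minutes=5)},
    {"name": "setup_step_completed", "timestamp": now - timedelta(hours=30)},
]

print(event_health(events, "setup_step_completed"))  # (True, True)
print(event_health(events, "trial_started"))         # (False, False)
```

A "seen but stale" result is the situation this workflow is meant to catch: the instrumentation shipped once, but the event is no longer firing, so a metric built on it would silently collect nothing.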

Experiment Builder

Provides a visual, step-by-step interface to define hypotheses, select metrics, configure targeting, and manage assignment: everything necessary to design a valid experiment. It supports asynchronous collaboration on a cross-functional activity that may span many days, enabling PMs to iterate with their teams without relying on technical intermediaries meeting in real time.

For example, consider a PM who’s working with design, engineering, and data science to test a new pricing page layout aimed at improving plan upgrade rates. However, there is confusion around what exactly constitutes a "success"—is it page clicks, trial starts, or paid conversions? Using Experiment Builder, the PM can:

  • Define a clear hypothesis. Inside the LaunchDarkly Experiment Builder, the PM writes a simple, shared hypothesis, such as: “We believe the new layout will increase the percentage of users who start a paid trial within 24 hours of visiting the pricing page by 2%.”
  • Choose predefined metrics. Instead of creating new metrics for every experiment they run, the PM selects the team’s existing trial_start and page_view metrics (that have already been vetted by their data science peers) directly from the LaunchDarkly metric library to confirm that everyone aligns on definitions and that they’re using the established source of truth.
  • Preview variations + audience rules. The PM configures which user sample (e.g., returning users from EMEA) sees which treatment (e.g., control or new layout) and shares the draft config with stakeholders via a link. No screenshots or docs are needed; everything can happen directly in LaunchDarkly before the experiment begins.
  • Align without meetings. Stakeholders and collaborators can leave comments directly in the experiment design, flag concerns (e.g., "should we exclude mobile?"), and sign off on the design asynchronously.
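Under the hood, experiment assignment is deterministic: the same user always lands in the same treatment across sessions. The SDK handles this for you; as a plain-Python illustration of the idea (the function name and hashing scheme here are hypothetical, not LaunchDarkly's actual bucketing algorithm):

```python
import hashlib

def assign_variation(experiment_key, user_key, variations=("control", "new-layout")):
    """Deterministically bucket a user so repeat visits see the same treatment."""
    digest = hashlib.sha256(f"{experiment_key}.{user_key}".encode()).hexdigest()
    return variations[int(digest, 16) % len(variations)]

# Same user, same experiment -> same treatment, on every session and device.
assert assign_variation("pricing-page-test", "user-42") == \
       assign_variation("pricing-page-test", "user-42")
```

Determinism matters for the PM's audience rules: a returning EMEA user who saw the new layout yesterday must see it again today, or the measured upgrade rate would mix treatments within a single user.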

Results View

Translates statistical estimates of what would happen if the experiment were shipped to all users into actionable insights. It’s easy for PMs to understand what’s happening and communicate it clearly to stakeholders, thanks to research-backed visualizations for communicating uncertainty, progressively disclosed details about significance and likelihood, "Ship It" indicators, and data slicing on audience attributes. The goal is to support confidence in decision-making, not just data interpretation, and to ship the treatment the evidence supports rather than the one with the most anecdotal buy-in.

In the Results View, product teams can track both predefined and custom metrics, including:

  • Conversion events (e.g., trial starts, purchases, completions)
  • Engagement metrics (e.g., click-throughs, time on page)
  • Custom metrics built from any event data you’re already sending to LaunchDarkly

Each metric is automatically analyzed across all experiment treatments (based on flag variations) and can be broken down by audience segments (e.g., user type, region, device) to give you deeper behavioral insights.
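To make the per-treatment and per-segment breakdown concrete, here is a minimal sketch in plain Python. The field names and data shapes are hypothetical; Results View computes these rates, plus the statistical analysis on top of them, automatically.

```python
from collections import defaultdict

def conversion_rates(exposures, converted_users, segment_attr=None):
    """Conversion rate per treatment, optionally sliced by a segment attribute.

    exposures       -- list of {"user": ..., "treatment": ..., <attributes>}
    converted_users -- set of user keys that fired the conversion event
    """
    totals, hits = defaultdict(int), defaultdict(int)
    for e in exposures:
        key = (e["treatment"], e.get(segment_attr)) if segment_attr else e["treatment"]
        totals[key] += 1
        if e["user"] in converted_users:
            hits[key] += 1
    return {k: hits[k] / totals[k] for k in totals}

# Hypothetical exposure log for the pricing-page example
exposures = [
    {"user": "u1", "treatment": "control",    "region": "EMEA"},
    {"user": "u2", "treatment": "control",    "region": "NA"},
    {"user": "u3", "treatment": "new-layout", "region": "EMEA"},
    {"user": "u4", "treatment": "new-layout", "region": "EMEA"},
]
converted = {"u3", "u4"}  # users who fired trial_start

print(conversion_rates(exposures, converted))
# {'control': 0.0, 'new-layout': 1.0}
print(conversion_rates(exposures, converted, segment_attr="region"))
# {('control', 'EMEA'): 0.0, ('control', 'NA'): 0.0, ('new-layout', 'EMEA'): 1.0}
```

Raw rate differences like these are only the starting point; deciding whether a lift is real (and large enough to ship) is what the significance and likelihood details in Results View are for.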

Fitting easily into existing workflows

PMs already juggle multiple tools (like Jira, Figma, Confluence, and Slack) and don’t have time to parse complex queries. We designed LaunchDarkly Experimentation to be accessible, confidence-building, and aligned with the tools PMs use every day.

Good design requires creating systems that save time, reduce friction, and support meaningful work. We focused on giving product managers a sense of control over their own experiments rather than leaving them dependent on others for the actionable insights they need.

Try it for yourself

If you’ve ever felt like experimentation was something happening around you, not with you, we encourage you to try this new flow. Begin by defining a metric, selecting your target audience, and launching a test. Review results that are clear and actionable. Every part of the LaunchDarkly Experimentation workflow is designed to help you make progress without guesswork.
