Run experiments with AI Configs

Overview

This topic explains how to run experiments with AI Configs. LaunchDarkly's Experimentation feature lets you measure the effect of features on end users by tracking metrics your team cares about. By connecting metrics you create to AI Configs in your LaunchDarkly environment, you can measure changes in your customers' behavior based on the different AI Config variations your application serves. This helps you make more informed decisions, so the features your development team ships align with your business objectives.

Monitoring and Experimentation

Each AI Config that you create has a Monitoring tab in the LaunchDarkly user interface (UI). If you track AI metrics in your SDK, this tab displays data about the performance of your AI Config variations, such as the number of input and output tokens used and the total duration of calls to your LLM provider. To learn more, read Monitor AI Configs.

In contrast, Experimentation lets you measure changes in your customers' behavior within your application, based on events such as page views and clicks. For example, you may use the Monitoring tab of an AI Config to determine which AI Config variation uses the fewest output tokens. However, you need to run an experiment to determine which AI Config variation leads to the most clicks in your chatbot app.
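To make the distinction concrete, the sketch below simulates what an experiment measures: each user is consistently served one variation, exposures and clicks are tracked per variation, and the resulting click-through rates are compared. This is a conceptual illustration only, not the LaunchDarkly SDK API; the variation keys and user keys are hypothetical.

```python
import hashlib
from collections import Counter

# Conceptual sketch of an experiment -- NOT the LaunchDarkly SDK API.
# Two hypothetical AI Config variation keys:
VARIATIONS = ["concise-prompt", "detailed-prompt"]

def assign_variation(user_key: str) -> str:
    """Hash the user key so the same user always sees the same variation."""
    bucket = int(hashlib.sha256(user_key.encode()).hexdigest(), 16) % len(VARIATIONS)
    return VARIATIONS[bucket]

exposures = Counter()  # how many users saw each variation
clicks = Counter()     # how many clicks each variation produced

def record_exposure(user_key: str) -> str:
    variation = assign_variation(user_key)
    exposures[variation] += 1
    return variation

def record_click(user_key: str) -> None:
    clicks[assign_variation(user_key)] += 1

# Simulate traffic: every user is exposed; some users click.
for i in range(100):
    user = f"user-{i}"
    record_exposure(user)
    if i % 3 == 0:  # pretend roughly a third of users click
        record_click(user)

# Click-through rate per variation -- the comparison an experiment makes.
rates = {v: clicks[v] / exposures[v] for v in VARIATIONS}
```

In a real experiment, LaunchDarkly's SDKs serve the variation and record the metric events, and the Experimentation results view performs the statistical comparison; the sketch only shows the shape of the data being compared.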

Additional resources

To learn more about how to use Experimentation with AI Configs, explore the following resources: