Tracking AI metrics
Overview
This topic explains how to record metrics from your AI model generation, including duration, time to first token, generation success and errors, output satisfaction, and token usage.
Use LaunchDarkly AI SDKs to record these metrics by wrapping your AI model provider call with a tracking method so metrics are captured as part of generation. LaunchDarkly displays recorded metrics on the AI Config Monitoring tab.
This feature is available for AI SDKs only. LaunchDarkly AI SDKs are designed for use with LaunchDarkly AI Configs and are currently in a pre-1.0 release under active development.
All AI SDKs include track* methods to record:
- duration
- token usage
- generation success
- generation error
- time to first token
- output satisfaction
Each AI SDK also includes a method to retrieve a summary of the metrics that have been recorded.
Some AI SDKs also include provider-specific track_[model]_metrics methods for AI Configs in completion mode. These methods take the result of a provider call and record:
- duration
- token usage
- generation success
- generation error
You can use provider-specific methods as a shorthand, or call track* methods directly to record additional metrics.
Both track* and track_[model]_metrics methods are called from a tracker. The tracker is returned by an AI Config customization call and is available for AI Configs in both completion mode and agent mode.
Record delayed feedback events
The SDK expects you to use the tracker within the same request lifecycle that generates AI content. Do not reuse a tracker across separate requests.
If user feedback arrives later, you must persist the tracking metadata to ensure the feedback is attributed to the same AI Config variation that generated the content. The tracker exposes getTrackData() so you can capture this metadata at generation time and reuse it when feedback arrives.
Use the following pattern to record delayed feedback:
- Use tracker.getTrackData() to capture the tracker metadata at generation time.
- Store the metadata alongside the context where generation results are stored.
- Retrieve the stored metadata when feedback arrives.
- Send the tracking event with the original metadata.
Here is an example using ldClient.track:
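This sketch uses the Node.js (server-side) AI SDK naming. The db store and requestId correlation key are placeholders for your own persistence layer, and the feedback event key follows the $ld:ai:feedback pattern the AI SDKs use internally; verify it against your SDK version.

```typescript
// At generation time: capture the tracker metadata and persist it
// alongside the generated content.
const trackData = tracker.getTrackData();
await db.save(requestId, trackData); // db and requestId are placeholders

// Later, in a separate request, when the user's feedback arrives:
const storedTrackData = await db.load(requestId);
ldClient.track(
  '$ld:ai:feedback:user:positive', // event key; confirm for your SDK version
  context,
  storedTrackData, // original metadata, so the event is attributed to the right variation
  1,
);
```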
AI SDKs
This feature is available for the following AI SDKs:
.NET AI
Use the TrackRequest function to wrap your AI model provider call and record metrics when the model generates content.
The tracker is returned from your call to customize the AI Config and is specific to that AI Config. Make sure to call Config again each time you generate content from your AI model so that metrics are correctly associated with the customized AI Config variation.
Here’s how to call an AI model provider and record metrics from generation:
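The following is a minimal sketch. The CallModelAsync helper stands in for your AI model provider client, and the Response and Usage member names are assumptions against the pre-1.0 API; verify them in the reference linked below.

```csharp
// The tracker comes from customizing the AI Config; call Config again
// for each generation.
var tracker = aiClient.Config("my-ai-config", context, LdAiConfig.Disabled);

var response = await tracker.TrackRequest(Task.Run(async () =>
{
    // CallModelAsync is a placeholder for your AI model provider call.
    var completion = await CallModelAsync(tracker.Config);

    // Return token counts so they are recorded alongside the
    // automatically measured duration. Member names are assumptions.
    return new Response
    {
        Usage = new Usage
        {
            Total = completion.TotalTokens,
            Input = completion.InputTokens,
            Output = completion.OutputTokens,
        },
    };
}));
```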
If you want to perform additional tracking beyond what LaunchDarkly provides automatically, populate the Response object with the metrics you want to record.
You can use the SDK’s other Track* functions to record metrics manually. The TrackRequest function expects a response, so manual tracking may be required for streaming use cases.
Each of the Track* functions sends data back to LaunchDarkly. The Monitoring tab of the AI Config in the LaunchDarkly UI aggregates metrics from all variations of the AI Config.
Here’s how to record metrics manually:
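This sketch continues the example above; the Track* method names match this section's description, but the argument shapes are assumptions to check against the reference.

```csharp
var stopwatch = System.Diagnostics.Stopwatch.StartNew();
try
{
    var completion = await CallModelAsync(tracker.Config); // placeholder provider call
    stopwatch.Stop();

    tracker.TrackDuration(stopwatch.ElapsedMilliseconds); // milliseconds
    tracker.TrackTokens(new Usage { Total = 780, Input = 500, Output = 280 }); // use real counts from the response
    tracker.TrackSuccess();
}
catch
{
    tracker.TrackError();
    throw;
}
```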
The SDK automatically flushes pending analytics events to LaunchDarkly at regular intervals. If you have a short-lived application, you may need to explicitly request that the underlying LaunchDarkly client deliver any pending analytics events to LaunchDarkly, using flush() or close().
Here’s how:
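In the .NET SDK, the corresponding operations on the underlying client are Flush and Dispose:

```csharp
// Deliver any pending analytics events before a short-lived process exits.
ldClient.Flush();

// Or, on shutdown, dispose the client, which also delivers pending events.
ldClient.Dispose();
```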
To learn more, read LDAIConfigTracker.
Go AI
Use the TrackRequest() function to wrap your AI model provider call and record metrics from generation.
The tracker is returned from your call to customize the AI Config and is specific to that AI Config. Make sure to call Config() again each time you use the tracker and generate content from your AI model so that your metrics are correctly associated with the customized AI Config variation.
Here’s how to call an AI model provider and record metrics from generation:
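A minimal sketch, assuming Config returns the customized config along with its tracker and that TrackRequest wraps a function returning a provider response. The callModel helper and the response field names are placeholders to verify against the Go AI SDK reference.

```go
// Customize the AI Config; call Config again for each generation.
cfg, tracker := aiClient.Config("my-ai-config", ldContext, ldai.Disabled())

result, err := tracker.TrackRequest(func() (ldai.ProviderResponse, error) {
	// callModel is a placeholder for your AI model provider call.
	out, err := callModel(cfg)
	if err != nil {
		return ldai.ProviderResponse{}, err
	}
	// Return token counts so they are recorded alongside the
	// automatically measured duration. Field names are assumptions.
	return ldai.ProviderResponse{
		Usage: &ldai.TokenUsage{Total: out.TotalTokens, Input: out.InputTokens, Output: out.OutputTokens},
	}, nil
})
```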
You can use the SDK’s other Track* functions to record metrics manually. The TrackRequest function expects a response, so manual tracking may be required for streaming use cases.
Each Track* function sends data back to LaunchDarkly. To review the metrics that have been recorded, use GetSummary:
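For example, continuing the sketch above; the summary's field names vary by version, so this snippet just logs the whole value:

```go
// GetSummary returns a MetricSummary of what this tracker has recorded.
summary := tracker.GetSummary()
log.Printf("recorded AI metrics: %+v", summary)
```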
To learn more, read GetSummary and MetricSummary.
The Monitoring tab of the AI Config in the LaunchDarkly UI aggregates data from the Track* functions across all variations of the AI Config.
Here’s how to record metrics manually:
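A sketch continuing the example above; the Track* method names match this section's description, but the argument types are assumptions.

```go
start := time.Now()
out, err := callModel(cfg) // placeholder provider call
tracker.TrackDuration(time.Since(start)) // argument type is an assumption; may be milliseconds
if err != nil {
	tracker.TrackError()
} else {
	tracker.TrackSuccess()
	tracker.TrackTokens(ldai.TokenUsage{Total: out.TotalTokens, Input: out.InputTokens, Output: out.OutputTokens})
}
```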
The SDK automatically flushes pending analytics events to LaunchDarkly at regular intervals. If you have a short-lived application, you may need to explicitly request that the underlying LaunchDarkly client deliver any pending analytics events to LaunchDarkly, using flush() or close().
Here’s how:
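```go
// Deliver any pending analytics events before a short-lived process exits.
ldClient.Flush()

// Or, on shutdown, Close also delivers pending events.
ldClient.Close()
```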
To learn more, read Tracker.
Node.js (server-side) AI
If your AI Config uses completion mode, the Node.js (server-side) AI SDK provides several options for making a request to your generative AI provider and recording metrics from your AI model generation. You can use any of the following options:
- If you are working with OpenAI or Bedrock Converse, use the trackOpenAIMetrics or trackBedrockConverseMetrics functions, respectively, to record metrics. These functions take the result of your generative operation as a parameter.
- If you are working with Vercel, use either of the trackVercelAISDKGenerateTextMetrics or trackVercelAISDKStreamTextMetrics functions to record metrics. These functions take the result of a generative operation from any provider supported by the Vercel AI SDK as a parameter.
- If you are using a generative AI provider or framework for which the SDK does not provide a convenience function, use the SDK’s other track* functions to record metrics manually.
If your AI Config uses agent mode, you can access the instructions returned from the agent() call to send to your AI model. Use the tracker returned in this call to record metrics.
In the following examples, the tracker is from your call to customize the AI Config, and is specific to that AI Config. Make sure to call config again each time you use the tracker and generate content from your AI model, so that your metrics are correctly associated with the customized AI Config variation.
Here’s how:
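Below is a minimal sketch of the OpenAI and Bedrock Converse variants. The package names, the defaultValue, and the option shapes are assumptions against the pre-1.0 SDK; verify them against your version.

```typescript
import { init } from '@launchdarkly/node-server-sdk';
import { initAi } from '@launchdarkly/server-sdk-ai';
import OpenAI from 'openai';

const ldClient = init(sdkKey);
const aiClient = initAi(ldClient);
const openai = new OpenAI();

// Customize the AI Config; the returned value includes the tracker.
const aiConfig = await aiClient.config(aiConfigKey, context, defaultValue);
const { tracker } = aiConfig;

// OpenAI: wrap the provider call so duration, token usage, and
// success or error are recorded automatically.
const completion = await tracker.trackOpenAIMetrics(async () =>
  openai.chat.completions.create({
    model: aiConfig.model?.name ?? 'gpt-4o-mini',
    messages: aiConfig.messages ?? [],
  }),
);

// Bedrock Converse: pass the result of the call instead, for example:
// const response = tracker.trackBedrockConverseMetrics(
//   await bedrockClient.send(new ConverseCommand(request)),
// );
```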
Here’s how to make a request using the Vercel AI SDK’s generateText or streamText, and record the metrics:
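A sketch of both variants, continuing from the example above. Whether these functions take a callback or the finished result has varied across pre-1.0 releases, so treat the callback form here as an assumption and check the references linked below.

```typescript
import { generateText, streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

// toVercelAISDK maps the customized AI Config into the options object
// (model, messages, and so on) that the Vercel AI SDK expects.
const options = aiConfig.toVercelAISDK(openai);

// generateText: record metrics around a single completion.
const result = await tracker.trackVercelAISDKGenerateTextMetrics(() =>
  generateText(options),
);

// streamText: metrics are recorded as the stream finishes.
const stream = tracker.trackVercelAISDKStreamTextMetrics(() =>
  streamText(options),
);
```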
To learn more, read trackVercelAISDKGenerateTextMetrics, trackVercelAISDKStreamTextMetrics, and toVercelAISDK.
You can use the SDK’s other track* functions to record these metrics manually. You may need to do this if you are using a model for which the SDK does not provide a convenience track[Model]Metrics function and you are not using the Vercel AI SDK. The track[Model]Metrics functions expect a response, so you may also need to record metrics manually if your application requires streaming.
Each of the track* functions sends data back to LaunchDarkly. To review the metrics that have been recorded, use getSummary:
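For example, continuing the sketch above:

```typescript
// getSummary reflects the metrics this tracker has recorded so far.
const summary = tracker.getSummary();
console.log('AI metrics summary:', summary);
```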
To learn more, read getSummary.
The Monitoring tab of the AI Config in the LaunchDarkly UI aggregates data from the track* functions across all variations of the AI Config.
Here’s how to record metrics manually:
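A sketch continuing the example above; callModel is a placeholder provider call, and the token-usage shape is an assumption.

```typescript
const start = Date.now();
try {
  const output = await callModel(aiConfig); // placeholder provider call
  tracker.trackDuration(Date.now() - start); // milliseconds
  tracker.trackTokens({ total: 780, input: 500, output: 280 }); // use real counts from the response
  tracker.trackSuccess();
} catch (err) {
  tracker.trackError();
  throw err;
}
```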
In completion mode, the tracker is returned from the config() call, so you can access it directly, as in the examples above. Make sure to call config() again each time you use the tracker and generate content from your AI model.
In agent mode, the tracker is part of the agent returned from the agent() or agents() call. If you are working in agent mode, replace tracker with agent.tracker in the examples above.
The SDK automatically flushes pending analytics events to LaunchDarkly at regular intervals. If you have a short-lived application, you may need to explicitly request that the underlying LaunchDarkly client deliver any pending analytics events to LaunchDarkly, using flush() or close().
Here’s how:
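```typescript
// Deliver any pending analytics events before a short-lived process exits.
await ldClient.flush();

// Or, on shutdown, close the client, which also delivers pending events.
ldClient.close();
```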
To learn more, read LDAIConfigTracker.
Python AI
If your AI Config uses completion mode, use one of the track_[model]_metrics functions to record metrics from your AI model generation. The SDK provides separate track_[model]_metrics functions for several of the models that you can select when you set up your AI Config variations in the LaunchDarkly user interface.
If your AI Config uses agent mode, you can access the instructions returned from the customized AI Config to send to your AI model. Use the tracker returned as part of the agent() or agents() functions to record metrics.
The tracker is returned from your call to customize the AI Config, and is specific to that AI Config. Make sure to call config again each time you use the tracker and generate content from your AI model, so that your metrics are correctly associated with the customized AI Config variation.
Here’s how:
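A minimal sketch, assuming the config call returns the customized value together with its tracker, as in pre-1.0 releases; verify the import paths and return shape against your SDK version.

```python
from ldai.client import AIConfig, LDAIClient, LDMessage, ModelConfig
from openai import OpenAI

ai_client = LDAIClient(ld_client)
openai_client = OpenAI()

default_value = AIConfig(
    enabled=True,
    model=ModelConfig(name='gpt-4o-mini'),
    messages=[LDMessage(role='system', content='You are a helpful assistant.')],
)

# Customize the AI Config; the call also returns the tracker.
config, tracker = ai_client.config('my-ai-config', context, default_value)

# track_openai_metrics wraps the provider call and records duration,
# token usage, and success or error automatically.
completion = tracker.track_openai_metrics(
    lambda: openai_client.chat.completions.create(
        model=config.model.name,
        messages=[{'role': m.role, 'content': m.content} for m in config.messages],
    )
)
```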
You can use the SDK’s other track* functions to record metrics manually. This is useful when the SDK does not provide a convenience track_[model]_metrics function for your model, or when your application requires streaming, because track_[model]_metrics functions expect a response.
Each track* function sends data back to LaunchDarkly. To review the metrics that have been recorded, use get_summary:
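For example, continuing the sketch above; the summary's attribute names vary by version, so this snippet just prints the whole value:

```python
# get_summary returns an LDAIMetricSummary of what this tracker recorded.
summary = tracker.get_summary()
print(summary)
```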
To learn more, read get_summary and LDAIMetricSummary.
The Monitoring tab of the AI Config in the LaunchDarkly UI aggregates data from the track* functions across all variations of the AI Config.
Here’s how to record metrics manually:
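A sketch continuing the example above; call_model is a placeholder provider call, and the TokenUsage import path and shape are assumptions.

```python
import time

from ldai.tracker import TokenUsage  # import path is an assumption

start = time.monotonic()
try:
    output = call_model(config)  # placeholder provider call
    tracker.track_duration(int((time.monotonic() - start) * 1000))  # milliseconds
    tracker.track_tokens(TokenUsage(total=780, input=500, output=280))  # use real counts from the response
    tracker.track_success()
except Exception:
    tracker.track_error()
    raise
```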
In completion mode, the tracker is returned from the config() call, so you can access it directly, as in the examples above. Call config() each time you generate content from your AI model so your metrics are associated with the correct AI Config variation.
In agent mode, the tracker is part of the agent returned from agent() or agents(). In the examples above, replace tracker with agent.tracker.
The SDK automatically flushes pending analytics events to LaunchDarkly at regular intervals. If you have a short-lived application, you may need to explicitly request that the underlying LaunchDarkly client deliver any pending analytics events to LaunchDarkly, using flush() or close().
Here’s how:
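```python
# Deliver any pending analytics events before a short-lived process exits.
ld_client.flush()

# Or, on shutdown, close the client, which also delivers pending events.
ld_client.close()
```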
To learn more, read LDAIConfigTracker.
Ruby AI
Use one of the track_[model]_metrics functions to wrap your AI model provider call and record metrics from generation. The SDK provides separate track_[model]_metrics functions for several models that you can select when you set up your AI Config variations in the LaunchDarkly UI.
The tracker is returned as part of your call to customize the AI Config and is specific to that AI Config. Make sure to call config again each time you use the tracker and generate content from your AI model so your metrics are correctly associated with the customized AI Config variation.
Here’s how to use a provider-specific function to call OpenAI or Bedrock providers and record metrics from your AI model generation:
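A minimal sketch; the client class name, the config return shape, and the block-style wrapper are assumptions against the pre-1.0 Ruby AI SDK, so verify them in the reference linked below.

```ruby
ai_client = LaunchDarkly::Server::AI::Client.new(ld_client)

# Customize the AI Config; the returned value includes the tracker.
config = ai_client.config('my-ai-config', context, default_value)
tracker = config.tracker

# track_openai_metrics wraps the provider call and records duration,
# token usage, and success or error automatically.
completion = tracker.track_openai_metrics do
  # openai_client is a placeholder for your OpenAI provider client.
  openai_client.chat.completions.create(
    model: config.model.name,
    messages: config.messages.map { |m| { role: m.role, content: m.content } }
  )
end
```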
You can use the SDK’s other track* functions to record these metrics manually. You may need to do this if you are using a model for which the SDK does not provide a convenience track_[model]_metrics function. The track_[model]_metrics functions expect a response, so you may also need to record metrics manually if your application requires streaming.
Each of the track* functions sends data back to LaunchDarkly. To review the metrics that have been recorded, use the summary property in the tracker:
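For example, continuing the sketch above:

```ruby
# The summary property reflects the metrics this tracker has recorded.
summary = tracker.summary
puts summary.inspect
```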
To learn more, read summary and MetricSummary.
The Monitoring tab of the AI Config in the LaunchDarkly UI aggregates data from the track* functions across all variations of the AI Config.
Here’s how to record metrics manually:
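A sketch continuing the example above; call_model is a placeholder provider call, and the track_tokens argument shape is an assumption.

```ruby
started = Process.clock_gettime(Process::CLOCK_MONOTONIC)
begin
  output = call_model(config) # placeholder provider call
  elapsed_ms = ((Process.clock_gettime(Process::CLOCK_MONOTONIC) - started) * 1000).round
  tracker.track_duration(elapsed_ms)
  tracker.track_tokens(total: 780, input: 500, output: 280) # use real counts from the response
  tracker.track_success
rescue StandardError
  tracker.track_error
  raise
end
```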
Make sure to call config again each time you use the tracker and generate content from your AI model.
The SDK automatically flushes pending analytics events to LaunchDarkly at regular intervals. If you have a short-lived application, you may need to explicitly request that the underlying LaunchDarkly client deliver any pending analytics events to LaunchDarkly, using flush or close.
Here’s how:
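```ruby
# Deliver any pending analytics events before a short-lived process exits.
ld_client.flush

# Or, on shutdown, close the client, which also delivers pending events.
ld_client.close
```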
To learn more, read ConfigTracker.