AI Configs

Overview

The topics in this category explain how to use LaunchDarkly to manage your AI Configs. You can use AI Configs to customize, test, and roll out new large language models (LLMs) in your generative AI applications.

An AI Config is a single resource that you create in LaunchDarkly to control how your application uses large language models. It lets teams manage prompts, instructions, and model settings outside of application code so they can iterate, experiment, and release changes more safely without redeploying. To learn how to create one, read Create AI Configs.

Choose a configuration mode

When you create an AI Config, you select a configuration mode that defines how the model behaves in your application.

AI Configs support two modes:

  • Completion mode: Configure prompts using messages and roles for single-step model responses. You can attach judges to completion-mode AI Config variations in the LaunchDarkly UI. To learn more, read Create and manage AI Config variations.
  • Agent mode: Configure multi-step workflows using structured instructions. For agent-based variations, invoke a judge programmatically using the AI SDK. Agent mode does not create a separate resource. To learn more, read Agents in AI Configs.

Both modes use the same AI Config resource and support variations, targeting rules, monitoring, experimentation, and lifecycle management.

Both completion mode and agent mode can integrate with external tools or APIs. Tool usage depends on how your application and SDK are implemented, not on the configuration mode you select; agent mode only adds support for structured, multi-step workflows.

With AI Configs, you can:

  • Manage model configuration outside of your application code so you can update prompts and settings at runtime without deploying changes.
  • Upgrade to new model versions and roll out changes gradually and safely.
  • Add new model providers and progressively shift production traffic between them.
  • Compare variations to determine which performs better based on cost, latency, satisfaction, or other metrics.
  • Run experiments to measure the impact of generative AI features on end user behavior.

AI Configs support advanced use cases such as retrieval-augmented generation, integration with external tools or APIs, and evaluation in production. You can:

  • Track which knowledge base or vector index is active for a given model or audience.
  • Experiment with different chunking strategies, retrieval sources, or prompt and instruction structures.
  • Evaluate outputs using side-by-side comparisons or online evaluations with judges in completion mode, or invoke a judge programmatically using the AI SDK for other variations.
  • Build guardrails into runtime configuration using targeting rules to block risky generations or switch to fallback behavior.
  • Apply different safety filters by user type, geography, or application context.
  • Use live metrics, including satisfaction and quality signals you define, to guide rollouts.

These capabilities let you evaluate model behavior in production, run targeted experiments, and adopt new models safely without being locked into a single provider or manual workflow.

If you use an AI agent to create and manage AI Configs, you can use LaunchDarkly agent skills to help AI coding agents execute common tasks safely and consistently.

Availability

AI Configs is an add-on feature. Access depends on your organization’s LaunchDarkly plan. If AI Configs does not appear in your project, your organization may not have access to it.

To enable AI Configs for your organization, contact your LaunchDarkly account team. They can confirm eligibility and assist with activation.

For information about pricing, visit the LaunchDarkly pricing page or contact your LaunchDarkly account team.

How AI Configs work

Every AI Config contains one or more variations. Each variation defines model settings with messages for completion mode or instructions for agent mode. You define targeting rules to control which variation LaunchDarkly serves to a given context.
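The flow from targeting rules to a served variation can be pictured with a minimal sketch. The `Variation`, `Rule`, and `evaluate` names below are hypothetical stand-ins for illustration only; the real LaunchDarkly data model and rule semantics are considerably richer.

```python
from dataclasses import dataclass

# Hypothetical stand-ins for an AI Config's variations and targeting rules.
# This is a sketch of the idea, not the LaunchDarkly evaluation engine.

@dataclass
class Variation:
    key: str
    model: str
    messages: list  # completion mode: [{"role": ..., "content": ...}]

@dataclass
class Rule:
    attribute: str  # context attribute the rule inspects
    values: set     # attribute values that match
    serve: str      # variation key to serve on a match

def evaluate(rules, variations, context, fallback):
    """Return the variation whose rule matches the context, else the fallback."""
    by_key = {v.key: v for v in variations}
    for rule in rules:
        if context.get(rule.attribute) in rule.values:
            return by_key[rule.serve]
    return by_key[fallback]

variations = [
    Variation("gpt4-concise", "gpt-4", [{"role": "system", "content": "Be concise."}]),
    Variation("claude-default", "claude-3", [{"role": "system", "content": "You are helpful."}]),
]
rules = [Rule(attribute="plan", values={"enterprise"}, serve="gpt4-concise")]

chosen = evaluate(rules, variations, {"kind": "user", "key": "u1", "plan": "enterprise"},
                  fallback="claude-default")
print(chosen.key)  # enterprise contexts get the gpt-4 variation
```

Because rules key off context attributes, the same AI Config can serve different models or prompts to different audiences without any application change.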

In your application, you use one of LaunchDarkly’s AI SDKs to evaluate an AI Config for a given context. The LaunchDarkly SDK evaluates targeting rules and selects a variation. The AI SDK then returns the resolved configuration for that variation, including model settings and messages or instructions.

As part of this evaluation, the AI SDK resolves any variables in your prompts using context attributes and additional variables you provide. This enables you to tailor prompts and model settings for each context at runtime. When you update prompts, instructions, or model configuration in LaunchDarkly, those changes take effect immediately without requiring you to redeploy your application.
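The variable-resolution step can be sketched as follows. The `render_messages` helper and the `${...}` placeholder syntax are assumptions for this illustration; the AI SDK performs the actual interpolation for you, using its own template syntax, when it returns the resolved configuration.

```python
from string import Template

# Hypothetical sketch of prompt-variable resolution: context attributes and
# caller-supplied variables are merged, then substituted into each message.
def render_messages(messages, context, variables):
    data = {**context, **variables}  # caller-supplied variables win on conflict
    return [
        {"role": m["role"], "content": Template(m["content"]).safe_substitute(data)}
        for m in messages
    ]

messages = [
    {"role": "system", "content": "You help ${name}, who is on the ${plan} plan."},
    {"role": "user", "content": "Summarize ticket ${ticket_id}."},
]
context = {"name": "Ana", "plan": "enterprise"}
rendered = render_messages(messages, context, {"ticket_id": "T-42"})
print(rendered[0]["content"])  # You help Ana, who is on the enterprise plan.
```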

LaunchDarkly does not invoke or proxy model providers on your behalf. Your application is responsible for calling the model provider directly, using its own credentials and the configuration returned by the AI SDK.
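This division of responsibility can be sketched as follows. Here `resolved` mimics the shape of a configuration returned by the AI SDK, and `provider_call` is any callable your application supplies, such as a thin wrapper around a provider SDK; both names, and the fake provider, are assumptions made for this self-contained example.

```python
# Sketch: your application, not LaunchDarkly, invokes the model provider.
# `resolved` mimics a resolved configuration; `provider_call` is any callable
# you supply (e.g. a thin wrapper around the OpenAI or Anthropic SDK).

def generate(resolved, provider_call):
    """Invoke the provider with the resolved model settings and messages."""
    return provider_call(
        model=resolved["model"]["name"],
        messages=resolved["messages"],
        temperature=resolved["model"].get("parameters", {}).get("temperature", 1.0),
    )

# A fake provider, used here only so the sketch runs without credentials.
def fake_provider(model, messages, temperature):
    return {"model": model, "output": f"echo: {messages[-1]['content']}"}

resolved = {
    "model": {"name": "gpt-4", "parameters": {"temperature": 0.2}},
    "messages": [{"role": "user", "content": "Hello"}],
}
result = generate(resolved, fake_provider)
print(result["output"])  # echo: Hello
```

Because the provider call lives entirely in your code, swapping providers or model versions is a configuration change in LaunchDarkly plus whatever client wrapper your application already uses.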

After your application calls the model provider, use the AI SDK to track AI metrics such as generation count, token usage, latency, errors, and evaluation scores. LaunchDarkly aggregates these metrics and displays them on the Monitoring tab.
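To make the tracking step concrete, here is a minimal sketch of recording per-generation metrics and rolling them up, the kind of aggregation the Monitoring tab displays. The `MetricsTracker` class is illustrative only and is not the AI SDK's tracker API.

```python
from dataclasses import dataclass, field

# Illustrative metrics roll-up; the real AI SDK exposes its own tracker that
# reports these measurements to LaunchDarkly for display on the Monitoring tab.
@dataclass
class MetricsTracker:
    generations: int = 0
    errors: int = 0
    total_tokens: int = 0
    latencies_ms: list = field(default_factory=list)

    def track(self, tokens, latency_ms, success=True):
        """Record one model generation."""
        self.generations += 1
        self.total_tokens += tokens
        self.latencies_ms.append(latency_ms)
        if not success:
            self.errors += 1

    def summary(self):
        avg = sum(self.latencies_ms) / len(self.latencies_ms) if self.latencies_ms else 0
        return {"generations": self.generations, "errors": self.errors,
                "tokens": self.total_tokens, "avg_latency_ms": avg}

tracker = MetricsTracker()
tracker.track(tokens=120, latency_ms=350)
tracker.track(tokens=80, latency_ms=250, success=False)
print(tracker.summary())
# {'generations': 2, 'errors': 1, 'tokens': 200, 'avg_latency_ms': 300.0}
```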

The topics in this category explain how to create AI Configs and variations, update targeting rules, monitor related metrics, and incorporate AI Configs into your application.

Additional resources

In this section:

In our guides:

In our SDK documentation: