Build AI Configs with Agent Skills in Claude Code, Cursor, or Windsurf

Published February 13, 2026

by Scarlett Attensil

LaunchDarkly Agent Skills let you build AI Configs by describing what you want. Tell your coding assistant to create an agent, and it handles the API calls, targeting rules, and tool definitions for you.

In this quickstart, you’ll create AI Configs using natural language, then run a sample LangGraph app that consumes them. You’ll build a “Side Project Launcher”—a three-agent pipeline that validates ideas, writes landing pages, and recommends tech stacks.

Watch the video

Prefer video? Watch Build a multi-agent system with LaunchDarkly Agent Skills for a walkthrough of this tutorial.

What you’ll build

A three-agent pipeline called “Side Project Launcher”:

  • Idea Validator: researches competitors, analyzes market gaps, scores viability
  • Landing Page Writer: generates headlines, copy, and CTAs based on your value prop
  • Tech Stack Advisor: recommends frameworks, databases, and hosting based on your requirements

By the end, you’ll have working AI Configs in LaunchDarkly and a sample app that fetches them at runtime.

Prerequisites

  • LaunchDarkly account (free trial works)
  • Claude Code, Cursor, or Windsurf installed
  • LaunchDarkly API access token (for creating configs)
  • Anthropic API key (for running the sample app)

You need three different credentials:
  • LaunchDarkly API access token (LD_API_KEY): Used by Agent Skills to create projects and AI Configs. Get it from Authorization settings. Requires writer role or custom role with createProject and createAIConfig permissions.
  • LaunchDarkly SDK key (LAUNCHDARKLY_SDK_KEY): Used by your app at runtime to fetch AI Configs. Found in your project’s SDK settings after creation.
  • Model provider API key (e.g., ANTHROPIC_API_KEY): Used to call the model. Get it from your provider (Anthropic, OpenAI, etc.).

Store all keys in .env and never commit them to version control.
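
A minimal .env layout might look like this (placeholder values; the variable names match the ones used later in this tutorial):

```
# LaunchDarkly REST API token (used by Agent Skills to create configs)
LD_API_KEY=api-xxxxx

# LaunchDarkly SDK key (used by your app at runtime)
LAUNCHDARKLY_SDK_KEY=sdk-xxxxx

# Model provider key (used to call Claude)
ANTHROPIC_API_KEY=sk-ant-xxxxx
```

Add .env to your .gitignore so the keys stay out of version control.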

Start your free trial

Want to follow along? Start your 14-day free trial of LaunchDarkly. No credit card required.

30-second quickstart

If you just want to get started, here’s the fastest path:

1. Install skills:

$ npx skills add launchdarkly/agent-skills

Or ask your editor: “Download and install skills from https://github.com/launchdarkly/agent-skills”

Restart your editor after installing.

2. Set your token:

$ export LD_API_KEY="api-xxxxx"

3. Build something:

Use the prompt in Build a multi-agent project below, or describe your own agents. The assistant creates everything and gives you links to view them in LaunchDarkly.

Install Agent Skills in Claude Code, Cursor, or Windsurf

Agent Skills work with any editor that supports the Agent Skills specification.

Step 1: Install the skills

You have two options:

Option A: Use skills.sh (recommended)

skills.sh is an open directory for agent skills. Install LaunchDarkly skills with one command:

$ npx skills add launchdarkly/agent-skills

Option B: Ask your AI assistant

Open your editor and ask:

Download and install skills from https://github.com/launchdarkly/agent-skills

Both methods install the same skills.

Step 2: Restart your editor

Close and reopen your editor. The skills load on startup.

How to verify: Type /aiconfig in Claude Code. You should see autocomplete suggestions. In Cursor, ask “what LaunchDarkly skills do you have?” and the assistant should list them.

Step 3: Set your API token

$ export LD_API_KEY="api-xxxxx"

Get your token from LaunchDarkly Authorization settings. The writer role works, or use a custom role with createProject and createAIConfig permissions.

Build a multi-agent project

Now let’s build something real: a Side Project Launcher that helps you validate ideas, write landing pages, and pick the right tech stack. Tell the assistant:

Create AI Configs for a "Side Project Launcher" with three configs.
Use Anthropic Claude models for all configs.
1. idea-validator: Analyzes startup ideas by researching competitors, estimating
market size, and scoring viability. Use variables for {{idea}}, {{target_audience}},
and {{problem_statement}}. Give it tools for web search and competitor analysis.
2. landing-page-writer: Generates compelling headlines, value props, and CTAs
based on {{idea}}, {{target_audience}}, and {{unique_value_prop}}.
Give it tools for copy generation and A/B test suggestions.
3. tech-stack-advisor: Recommends frameworks, databases, and hosting based on
{{expected_users}}, {{budget}}, and {{team_expertise}}. Give it a tool for
stack recommendations.
Put them in a new project called side-project-launcher.

What the assistant creates

The assistant uses several skills automatically:

  1. aiconfig-projects: creates the LaunchDarkly project
  2. aiconfig-create: builds each agent configuration with variables
  3. aiconfig-tools: defines tools for function calling

Expected output:

Creating project: side-project-launcher
Creating AI Config: idea-validator
- Model: anthropic.claude-sonnet-4-20250514
- Variables: idea, target_audience, problem_statement
- Instructions: "Validate the idea: {{idea}}. Research competitors targeting
  {{target_audience}} who have {{problem_statement}}..."
- Tools: web_search, competitor_analysis
Creating AI Config: landing-page-writer
- Model: anthropic.claude-sonnet-4-20250514
- Variables: idea, target_audience, unique_value_prop
- Instructions: "Write landing page copy for {{idea}}. The target audience is
  {{target_audience}}. Lead with: {{unique_value_prop}}..."
- Tools: generate_copy, suggest_ab_tests
Creating AI Config: tech-stack-advisor
- Model: anthropic.claude-sonnet-4-20250514
- Variables: expected_users, budget, team_expertise
- Instructions: "Recommend a tech stack for {{expected_users}} users,
  {{budget}} budget, team knows {{team_expertise}}..."
- Tools: recommend_stack
Done! View your project:
https://app.launchdarkly.com/side-project-launcher/production/ai-configs

Claude Code creates the configs and provides SDK keys

The variables ({{idea}}, {{target_audience}}, etc.) get filled in at runtime when you call the SDK. That’s how each user gets personalized output.
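
To make the templating concrete, here is an illustration of the substitution concept only — the LaunchDarkly AI SDK performs this for you when you pass a variables dict to agent_config(), so the `render` helper below is purely hypothetical:

```python
import re

def render(template: str, variables: dict) -> str:
    """Replace {{name}} placeholders with runtime values.

    Unknown placeholders are left untouched so missing
    variables are easy to spot in the output.
    """
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(variables.get(m.group(1), m.group(0))),
        template,
    )

instructions = "Validate the idea: {{idea}} for {{target_audience}}."
print(render(instructions, {"idea": "habit tracker", "target_audience": "students"}))
# → Validate the idea: habit tracker for students.
```

In the real app, the dict you pass as the fourth argument to `agent_config` plays the role of `variables` here, and the rendered instructions come back on `config.instructions`.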

What it looks like in LaunchDarkly

AI Configs list showing the three agents created by Agent Skills

After creation, your LaunchDarkly project contains:

  • 3 AI Configs with instructions, model settings, and variables
  • 3 tools with parameter definitions ready for function calling
  • Default targeting serving the configuration to all users

Default targeting serves the configuration to all users

Each agent has its own configuration with instructions, variables, and tools. Here’s the idea-validator:

Idea validator config with instructions, variables, and tools

The landing-page-writer and tech-stack-advisor follow the same pattern with their own instructions and tools.

Run the Side Project Launcher

The full working code is available on GitHub: launchdarkly-labs/side-project-researcher

Clone it and run:

$ git clone https://github.com/launchdarkly-labs/side-project-researcher.git
$ cd side-project-researcher
$ pip install -r requirements.txt
$ cp .env.example .env
# Edit .env with your SDK key and Anthropic API key
$ python side_project_launcher_langgraph.py

You’ll need both the LaunchDarkly SDK key (from your project’s SDK settings) and your Anthropic API key in the .env file. The assistant can surface the SDK key from your project details, but store it in .env rather than hardcoding it.

The app prompts you for your idea details:

The app prompts you for your side project details

Then each agent runs in sequence, fetching its config from LaunchDarkly and generating output:

Idea validator output with market analysis

Tech stack advisor recommending frameworks and infrastructure

Connect to your framework

The AI Config stores your model, instructions, and tools. The SDK fetches the config and handles variable substitution automatically.

Code snippets show the pattern

The snippets below show the integration pattern. They omit imports, error handling, and tool wiring for brevity. For complete, runnable code, use the sample repo.

Initialize the SDK

import os

import ldclient
from ldclient import Context
from ldclient.config import Config
from ldai.client import LDAIClient, AIAgentConfigDefault

# Initialize once at startup
SDK_KEY = os.environ.get('LAUNCHDARKLY_SDK_KEY')
ldclient.set_config(Config(SDK_KEY))
ld_client = ldclient.get()
ai_client = LDAIClient(ld_client)

Fetch agent configs

def build_context(user_id: str, **attributes):
    """Build LaunchDarkly context for targeting."""
    builder = Context.builder(user_id)
    for key, value in attributes.items():
        builder.set(key, value)
    return builder.build()

def get_agent_config(config_key: str, context: Context, variables: dict = None):
    """Get an agent-mode AI Config from LaunchDarkly."""
    fallback = AIAgentConfigDefault(enabled=False)
    return ai_client.agent_config(config_key, context, fallback, variables or {})

Wire it to LangGraph

LangGraph orchestrates multi-agent workflows as a graph of nodes, but you can use any orchestrator—CrewAI, LlamaIndex, Bedrock AgentCore, or custom code. To compare options, read Compare AI orchestrators.

By wiring AI Configs to each node, your agents fetch their model, instructions, and tools dynamically from LaunchDarkly. This lets you swap models within a provider (e.g., Sonnet to Haiku), update prompts, or disable agents without redeploying.

Tools require runtime handlers

The AI Config defines tool schemas, but your code must implement the actual tool handlers. The sample repo shows how to bind config.tools to LangChain tool functions. For this tutorial, the tools are defined but not wired—the agents respond based on their instructions alone.
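
The core of that wiring is a name-to-function map. A minimal sketch, with hypothetical handler bodies (the real sample repo binds these to LangChain tool functions instead):

```python
# Hypothetical handlers for the tool schemas defined in the AI Config.
# The config only describes names and parameters; your code supplies behavior.
def web_search(query: str) -> str:
    # A real app would call a search API here.
    return f"results for: {query}"

def competitor_analysis(competitors: list) -> str:
    return f"analyzed {len(competitors)} competitors"

# Map tool names (as defined in the config) to their runtime handlers.
TOOL_HANDLERS = {
    "web_search": web_search,
    "competitor_analysis": competitor_analysis,
}

def run_tool_call(name: str, arguments: dict) -> str:
    """Execute a tool call requested by the model."""
    handler = TOOL_HANDLERS.get(name)
    if handler is None:
        return f"unknown tool: {name}"
    return handler(**arguments)

print(run_tool_call("web_search", {"query": "habit tracker apps"}))
# → results for: habit tracker apps
```

When the model returns a tool call, you look up the name, invoke the handler with the model-supplied arguments, and feed the result back into the conversation.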

Each agent becomes a node in your graph:

from langchain_anthropic import ChatAnthropic
from langchain_core.messages import HumanMessage, SystemMessage
from langgraph.graph import StateGraph, END

def idea_validator_node(state: SideProjectState) -> SideProjectState:
    context = build_context(state["user_id"])
    config = get_agent_config("idea-validator", context, {
        "idea": state["idea"],
        "target_audience": state["target_audience"],
        "problem_statement": state["problem_statement"]
    })

    if config.enabled:
        llm = ChatAnthropic(model=config.model.name)
        messages = [
            SystemMessage(content=config.instructions),
            HumanMessage(content="Please validate this idea and provide your analysis.")
        ]
        response = llm.invoke(messages)
        state["idea_validation"] = response.content
        config.tracker.track_success()  # Track metrics

    return state

# Build the graph
workflow = StateGraph(SideProjectState)
workflow.add_node("validate_idea", idea_validator_node)
workflow.add_node("write_landing_page", landing_page_writer_node)
workflow.add_node("recommend_stack", tech_stack_advisor_node)

workflow.set_entry_point("validate_idea")
workflow.add_edge("validate_idea", "write_landing_page")
workflow.add_edge("write_landing_page", "recommend_stack")
workflow.add_edge("recommend_stack", END)

app = workflow.compile()

# Don't forget to flush before exiting
ld_client.flush()

To see a full example running across LangGraph, Strands, and OpenAI Swarm, read Compare AI orchestrators.

What you can do next

Once your agents are in LaunchDarkly:

  • A/B test variations: split traffic between prompt variations or model sizes (e.g., Sonnet vs Haiku) to see which performs better
  • Target by segment: premium users get one variation, free users get another
  • Kill switch: disable a misbehaving agent instantly from the UI
  • Track costs: monitor tokens and latency per variation
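
The kill switch works through the same enabled flag checked in the node code earlier. A minimal fallback pattern, using a stand-in dataclass in place of the SDK's config object (which exposes the same flag):

```python
from dataclasses import dataclass

# Stand-in for the object returned by agent_config(); illustrative only.
@dataclass
class AgentConfig:
    enabled: bool
    instructions: str = ""

def validate_idea(config: AgentConfig, idea: str) -> str:
    if not config.enabled:
        # Agent disabled from the LaunchDarkly UI:
        # degrade gracefully instead of failing the pipeline.
        return f"Validation skipped for '{idea}' (agent disabled)."
    return f"Validating '{idea}' with: {config.instructions}"

print(validate_idea(AgentConfig(enabled=False), "habit tracker"))
# → Validation skipped for 'habit tracker' (agent disabled).
```

Flipping the flag in the UI takes effect on the next config fetch — no deploy required.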

To learn more about targeting and experimentation, read AI Configs Best Practices.

Troubleshooting

Skills installed but not working: Restart your editor after installing skills. They load on startup.

“Permission denied” errors: Check that your API token has createProject and createAIConfig permissions. The writer role includes both.

Config comes back disabled: Your targeting rules may not match the context you’re passing. Check that default targeting is enabled, or that your context attributes match your rules.

Tools defined but not executing: The AI Config defines tool schemas, but your code must implement handlers. See the sample repo for tool binding examples.

Can’t find SDK key: After Agent Skills creates your project, find the SDK key in your project’s Settings > Environments > SDK key. Copy it to your .env file.

FAQ

Do I need Claude Code, or does this work in Cursor/Windsurf?

Agent Skills work in any editor that supports the Agent Skills specification. This includes Claude Code, Cursor, and Windsurf. The installation process is the same.

What’s the difference between Agent Skills and the MCP server?

Both give your AI assistant access to LaunchDarkly. Agent Skills are text-based playbooks that teach the assistant workflows. The MCP server exposes LaunchDarkly’s API as tools. You can use either or both.

What permissions does my API token need?

The writer role works, or use a custom role with createProject and createAIConfig permissions.

Where do I see the created AI Configs?

In the LaunchDarkly UI: go to your project, then AI Configs in the left sidebar. Each config shows its instructions, model, tools, and targeting rules.

How do I delete or reset generated configs?

In the LaunchDarkly UI, open the AI Config and click Archive (or Delete if available). Or ask the assistant: “Delete the AI Config called researcher-agent in project valentines-day.”

Can I use this with frameworks other than LangGraph?

Yes. The SDK returns model name, instructions, and tools as data. You wire that into whatever framework you use: CrewAI, LlamaIndex, Bedrock AgentCore, or custom code.

Does this work for completion mode (chat) or just agent mode?

Both. Use ai_client.completion_config() for completion mode (chat with message arrays) or ai_client.agent_config() for agent mode (instructions for multi-step workflows). To learn more, read Agent mode vs completion mode.

Next steps