Multi-armed bandits
Overview
This section contains documentation on multi-armed bandits (MABs), a type of experiment that uses a decision-making algorithm to dynamically allocate traffic to the best-performing variation of a flag, based on a metric you choose.
Unlike traditional A/B experiments, which split traffic between variations and wait for performance results, MABs continuously evaluate variation performance and automatically shift traffic toward the best-performing variation. MABs are useful when fast feedback loops are important, such as when optimizing calls to action, pricing strategies, or onboarding flows.
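To make the allocation idea concrete, here is a minimal sketch of one common bandit strategy, Thompson sampling, applied to conversion counts. This is an illustrative example only, not the algorithm the product necessarily uses; the variation names and counts are hypothetical.

```python
import random

def thompson_sample(variations):
    """Pick a variation by Thompson sampling: draw a conversion-rate
    estimate from each variation's Beta posterior and serve the
    variation with the highest draw."""
    # variations: {name: (conversions, non_conversions)}
    draws = {
        name: random.betavariate(successes + 1, failures + 1)
        for name, (successes, failures) in variations.items()
    }
    return max(draws, key=draws.get)

# Hypothetical conversion counts for three call-to-action variations
counts = {
    "control":   (10, 90),
    "variant_a": (25, 75),
    "variant_b": (18, 82),
}
choice = thompson_sample(counts)
```

Because each request draws fresh samples, traffic shifts toward the variation with the strongest evidence while weaker variations still receive occasional exposure, which is how a bandit keeps exploring as it exploits.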
To learn how to create MABs and interpret their results, read Creating multi-armed bandits and Multi-armed bandit results.