Maximize lift with multi-armed bandit optimizations

THIS ARTICLE WILL HELP YOU:
  • Understand what Optimizely’s multi-armed bandit optimization is and how it works
  • Decide when to choose a multi-armed bandit optimization instead of an A/B experiment

If you're an Optimizely user, you probably have a good understanding of how to interpret the results of a traditional A/B test. Those interpretations won't work for multi-armed bandits, for two important reasons:

  • Multi-armed bandits don't generate statistical significance, and

  • Multi-armed bandits don't use a control or a baseline experience

Instead of statistical significance, the MAB results page focuses on improvement over the original as its primary summary of your optimization's performance. This article breaks down the key differences between multi-armed bandits and traditional A/B tests, culminating in a demonstration of how each approach would unfold in identical situations.

You can use multi-armed bandit optimizations in Full Stack; however, you can't use them for feature rollouts in Feature Management.

Why MABs do not show statistical significance

With a traditional A/B test, the goal is exploration: collecting data to discover if a variation performs better or worse than the control. This is expressed through the concept of statistical significance.

Statistical significance tells you how likely it is that an observed difference reflects a real effect rather than random chance. You can use those lessons to make your variations better each time. Fixed traffic allocation strategies are usually the best way to reduce the time it takes to reach a statistically significant result.

On the other hand, Optimizely’s multi-armed bandit algorithms are designed for exploitation: MABs will aggressively push traffic to whichever variations are performing best at any given moment, because the MAB doesn’t consider the reason for that superior performance to be all that important.

Since multi-armed bandits essentially ignore statistical significance, Optimizely will do the same. This is why statistical significance does not appear on the results page for MABs: It avoids confusion about the purpose and meaning of multi-armed bandit optimizations.

Why MABs do not use a baseline

In a traditional A/B test, statistical significance is calculated relative to the performance of one baseline experience. But MABs don’t do this. They’re intended to explicitly evaluate the tradeoffs between all variations at once, which means there is no control or baseline experience to compare to.

What’s more, MABs are "set-and-forget" optimizations. In an A/B test, you follow up an experiment with a decision: do you deploy a winning variation, or stick with the control? But since MABs continuously make these decisions throughout the experiment’s lifetime, there’s never any need for a baseline reference point for that decision, because you'll never need to make it yourself.

Improvement over original

Improvement over original is an estimate of the gain in total conversions compared to simply delivering all traffic to the original variation.

To calculate it, Optimizely tracks the cumulative conversions per visitor for each variation. It multiplies the original variation's conversion rate by the total number of visitors in the test, which estimates how many conversions an original-only experience would have produced. Finally, it compares that estimate to the number of conversions actually observed in the test.
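
For intuition, here is a minimal Python sketch of that arithmetic. The function name and the sample numbers are hypothetical, not Optimizely's implementation:

```python
# Hypothetical sketch of the "improvement over original" estimate,
# given cumulative per-variation visitor and conversion counts.
def improvement_over_original(visitors, conversions, original="original"):
    """visitors, conversions: dicts mapping variation name -> count."""
    total_visitors = sum(visitors.values())
    original_rate = conversions[original] / visitors[original]
    # Conversions we'd expect if every visitor had seen the original.
    expected = original_rate * total_visitors
    observed = sum(conversions.values())
    return observed - expected

visitors = {"original": 2000, "variation_1": 8000}
conversions = {"original": 1000, "variation_1": 4400}
print(improvement_over_original(visitors, conversions))  # 400.0
```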

There are no statistical significance measures associated with this calculation. It does not predict or guarantee any reproducibility in future tests or campaigns. Also, the original variation in this context is the first variation in the list, and may not be named "original" if you've changed it.

MAB optimization vs. A/B testing: a demonstration

In this head-to-head comparison, simulated data is sent to both an A/B test with fixed traffic distribution and a multi-armed bandit optimization. Traffic distribution over time and the cumulative count of conversions for each mode are both observed. The true conversion rates driving the simulated data are:

  • Original: 50%

  • Variation 1: 50%

  • Variation 2: 45%

  • Variation 3: 55%

[Animation: traffic distribution over time and cumulative conversions for the multi-armed bandit versus the fixed-allocation A/B test]

The multi-armed bandit algorithm senses that Variation 3 is higher-performing from the start. Even without any statistical significance information for this signal (remember, the multi-armed bandit does not show statistical significance), it still begins to push traffic to Variation 3 in order to exploit the perceived advantage and gain more conversions.

For the ordinary A/B experiment, the traffic distribution remains fixed in order to more quickly arrive at a statistically significant result. Because fixed traffic allocations are optimal for reaching statistical significance, MAB-driven experiments generally take longer to find winners and losers than A/B tests.

By the end of the simulation, the multi-armed bandit has optimized the experiment to achieve roughly 700 more conversions than if traffic had been held constant.
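
A toy version of this comparison can be sketched in a few lines of Python. The allocation rule, round counts, and visitor volumes below are hypothetical, so the output will not reproduce the article's figures:

```python
# Toy head-to-head simulation: a fixed 25/25/25/25 split versus a
# crude exploit-heavy allocation. For intuition only.
import random

TRUE_RATES = {"original": 0.50, "variation_1": 0.50,
              "variation_2": 0.45, "variation_3": 0.55}

def run(adaptive, rounds=50, visitors=400):
    seen = {v: 0 for v in TRUE_RATES}   # visitors per variation
    conv = {v: 0 for v in TRUE_RATES}   # conversions per variation
    total = 0
    for _ in range(rounds):
        if adaptive and sum(seen.values()) > 0:
            # Crude exploit step: favor the best observed rate so far.
            best = max(TRUE_RATES, key=lambda v: conv[v] / max(seen[v], 1))
            shares = {v: 0.7 if v == best else 0.1 for v in TRUE_RATES}
        else:
            shares = {v: 0.25 for v in TRUE_RATES}  # fixed A/B split
        for v, share in shares.items():
            n = int(share * visitors)
            c = sum(random.random() < TRUE_RATES[v] for _ in range(n))
            seen[v] += n
            conv[v] += c
            total += c
    return total

print("fixed A/B:", run(False), "bandit-style:", run(True))
```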

Because traffic distribution will be updated frequently, Full Stack customers should implement sticky bucketing to avoid exposing the same visitor to multiple variations. To do this, implement the user profile service. See our developer documentation for more detail.
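
For the Python SDK, a minimal in-memory sketch might look like the following. The SDK's user profile service interface expects lookup and save methods; a production implementation would use a persistent store rather than a dict:

```python
# Minimal in-memory user profile service sketch for the Full Stack
# Python SDK. A real implementation should persist profiles (e.g. in
# Redis or a database) so bucketing survives across processes.
from optimizely import optimizely

class InMemoryUserProfileService:
    def __init__(self):
        self.profiles = {}

    def lookup(self, user_id):
        # Return the stored profile dict for this user, or None.
        return self.profiles.get(user_id)

    def save(self, user_profile):
        # user_profile is a dict that includes the user's ID and
        # their saved variation assignments.
        self.profiles[user_profile["user_id"]] = user_profile

with open("datafile.json") as f:  # hypothetical datafile path
    datafile = f.read()

optimizely_client = optimizely.Optimizely(
    datafile, user_profile_service=InMemoryUserProfileService()
)
```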

FAQs

What algorithms or frameworks does the multi-armed bandit support?
For binary metrics, Optimizely uses a procedure inspired by Thompson Sampling (Russo, Van Roy 2013). Optimizely characterizes each variation as a Beta distribution, where its parameters are the variation’s observed number of conversions and visitors. These distributions are sampled several times, and Optimizely allocates traffic to the variations according to their win ratio.
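
As a rough illustration of the idea (not Optimizely's exact procedure), a Thompson-style allocation over Beta distributions can be sketched in Python:

```python
# Illustrative Thompson-style traffic allocation for a binary metric.
# Each variation's conversion rate is modeled as a Beta distribution;
# traffic shares reflect how often each variation "wins" when the
# distributions are repeatedly sampled. The prior and sample count
# here are arbitrary choices, not Optimizely's.
import random

def thompson_allocation(stats, num_samples=10_000):
    """stats: dict of variation -> (conversions, visitors)."""
    wins = {name: 0 for name in stats}
    for _ in range(num_samples):
        draws = {
            name: random.betavariate(conv + 1, visitors - conv + 1)
            for name, (conv, visitors) in stats.items()
        }
        wins[max(draws, key=draws.get)] += 1
    return {name: w / num_samples for name, w in wins.items()}

print(thompson_allocation({"original": (500, 1000), "variation_3": (550, 1000)}))
```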

For numeric metrics, Optimizely uses a form of Epsilon Greedy, where a small fraction of traffic is uniformly allocated to all variations and the remainder is allocated to the variation with the highest observed mean.
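
Again as a sketch rather than Optimizely's implementation, an epsilon-greedy split with a hypothetical exploration fraction looks like this:

```python
# Illustrative epsilon-greedy allocation for a numeric metric. The
# exploration fraction (epsilon) is a made-up value for the example.
def epsilon_greedy_allocation(means, epsilon=0.1):
    """means: dict of variation -> observed mean of the numeric metric."""
    best = max(means, key=means.get)
    # Spread epsilon uniformly across all variations for exploration...
    allocation = {name: epsilon / len(means) for name in means}
    # ...and give the remaining traffic to the current best performer.
    allocation[best] += 1 - epsilon
    return allocation

print(epsilon_greedy_allocation({"original": 4.2, "variation_1": 5.1}))
```
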
Does the multi-armed bandit algorithm work with MVT and Personalization?
Yes. To use multi-armed bandit in MVT, select Partial Factorial. In the Traffic Mode dropdown, select Multi-Armed Bandit.

In Personalization, multi-armed bandit can be applied at the experience level; this works best when you have two variations aside from the holdback.
How often does the multi-armed bandit make a decision?
The multi-armed bandit model is updated hourly. If you need a different frequency for model updates, please let us know.
Why is a baseline variation listed on the Results page for my multi-armed bandit campaign?
In MVT and Personalization, your Results page will still designate one variation as a baseline. However, this designation doesn't actually mean anything, since MABs do not measure success relative to a baseline variation. It's just a label that will have no effect on your experiment or campaign.

You should not see a baseline variation when using MAB with a Web or Full Stack experiment.
What happens if I change my primary metric?
If you change the primary metric mid-experiment in MVT or Personalization, the multi-armed bandit will begin optimizing for the new primary metric, instead of the one you originally selected. For this reason, we suggest you do not change the primary metric once you begin the experiment or campaign.

It is not possible to change your primary metric in Optimizely Web or Full Stack once your experiment has begun.
What happens when I stop or pause a variation?
If you pause or stop a variation, Optimizely's multi-armed bandit ignores data from those variations when it adjusts traffic distribution among the remaining live variations.
How do multi-armed bandits handle conversion rates that change over time, and Simpson's Paradox?
Optimizely uses an exponential decay function that weights recent visitor behavior more heavily, so the model adapts quickly when conversion rates shift over time. This approach gives progressively less weight to earlier observations and more weight to recent ones.

On top of that, Optimizely reserves a portion of traffic for pure exploration, so that time variation is easier to detect.
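
To see how exponential decay shifts the estimate toward recent behavior, consider this sketch; the decay factor and the data are hypothetical:

```python
# Illustrative exponentially decayed conversion-rate estimate. Each
# older period's counts shrink geometrically, so recent periods
# dominate when the true rate drifts over time.
def decayed_rate(periods, decay=0.5):
    """periods: list of (visitors, conversions) tuples, oldest first."""
    w_visitors = w_conversions = 0.0
    for age, (visitors, conversions) in enumerate(reversed(periods)):
        weight = decay ** age  # newest period has age 0, full weight
        w_visitors += weight * visitors
        w_conversions += weight * conversions
    return w_conversions / w_visitors

# The rate drifted from 0.40 to 0.55; the decayed estimate leans recent.
periods = [(1000, 400), (1000, 450), (1000, 550)]
print(decayed_rate(periods))  # 0.5, versus an unweighted average of ~0.467
```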