RELEVANT PRODUCTS:
  • Optimizely X Web Experimentation
  • Optimizely X Web Personalization
  • Optimizely X Full Stack

THIS ARTICLE WILL HELP YOU:
  • Understand Optimizely's Stats Accelerator, its algorithms, and how it affects your results
  • Distinguish between the two Stats Accelerator algorithms
  • Determine whether to use Stats Accelerator for your experiments, as well as which algorithm to use
  • Enable Stats Accelerator (beta) for your account

If you run a lot of experiments, you face two challenges. First, data collection is costly, and time spent experimenting means you have less time to exploit the value of the eventual winner. Second, creating more than one or two variations can delay statistical significance longer than you might like.

Stats Accelerator helps you algorithmically capture more value from your experiments, either by reducing the time to statistical significance or by increasing the number of conversions collected. It does this by monitoring ongoing experiments and using machine learning to adjust traffic distribution among variations.

You may hear Stats Accelerator concepts described in terms of the “multi-armed bandit” or “multi-armed bandit algorithms.” See the Glossary of Optimizely Terminology for definitions of important phrases and concepts.

How Stats Accelerator works

Stats Accelerator applies one of two machine learning algorithms (or optimization strategies) for the primary metric: Accelerate Impact or Accelerate Learnings. Think of these algorithms as two distinct strategies for optimization, each with its own advantages and use cases:

  • Accelerate Impact is a regret minimization strategy. Use it when you want to weight visitor experiences toward the leading variation during the experiment lifecycle.

  • Accelerate Learnings is a time minimization strategy. Use it when you want to create more variations (at least three) but still reach statistical significance quickly.

The Accelerate Impact algorithm

The Accelerate Impact algorithm is not intended to produce statistical significance. Instead, it works to maximize the payoff of the experiment by showing more visitors the leading variations. For example, if you are trying to increase revenue, Accelerate Impact will figure out which variation does that best and then send more traffic to it. The usual measurements and statistics generated by an A/B test may not be valid for Accelerate Impact.

This may not be what you need, so before switching Accelerate Impact on, be sure you understand the differences between the Stats Accelerator algorithms.

The Accelerate Impact algorithm automatically optimizes your primary metric by dynamically reallocating traffic to whichever variation is performing the best. This will help you extract as much value as possible from the leading variation during the experiment lifecycle, so you avoid the opportunity cost of showing sub-optimal experiences.

Here are a couple of cases where Accelerate Impact may be a better fit:

  • Promotions and offers: users who sell consumer goods on their site often focus on driving higher conversion rates. One effective way to do this is to offer special promotions that run for a limited time. Using the Accelerate Impact algorithm (instead of running a standard A/B/n test) will send more traffic to the over-performing variations and less traffic to the underperforming variations.

  • Long-running campaigns: some Optimizely Personalization users have long-running campaigns to which they continually add variations for each experience. For example, an airline may deliver destination-specific experiences on the homepage based on past searches. Over time, they might add different images and messaging. For long-running Personalization campaigns, the goal is often to drive as many conversions as possible, making it a perfect fit for Accelerate Impact.

To use the Accelerate Impact algorithm, you'll need a primary metric and at least two variations, including the original or holdback (baseline) variation.

Metrics are often correlated, so optimizing one optimizes another (for example, revenue and conversion rate). However, if metrics are independent of each other, optimizing the allocation for the primary metric may come at the expense of the secondary metric.

The Accelerate Learnings algorithm

By contrast, the Accelerate Learnings algorithm isn't aimed at any specific business case. It's designed to get an actionable result as quickly as possible, for experiments with a single primary metric tracking unique conversions and at least three variations. Read our Stats Accelerator technical FAQ to learn more.

Accelerate Learnings shortens experiment duration by showing more visitors the variations that have a better chance of reaching statistical significance. Accelerate Learnings attempts to discover as many significant variations as possible.

The advantage of this algorithm is that it will help maximize the number of insights from experiments in a given time frame, so you spend less time waiting for results.

To use the Accelerate Learnings algorithm, you'll need a unique conversion primary metric and at least three variations, including the original or holdback (baseline) variation.

If you're trying to measure more than one metric and have any reason to suspect your secondary metric might not move in the same direction your primary metric does, your experiment is not a good fit for Accelerate Learnings.

To launch Stats Accelerator and implement the best algorithm for your experiment or personalization experience, navigate to the Traffic Allocation tab and select the algorithm you want to use from the Distribution Mode dropdown list.

[Image: the Traffic Allocation tab, with the Distribution Mode dropdown]

Stats Accelerator only works in partial factorial mode. Once Stats Accelerator is enabled, you cannot switch directly from partial factorial to full factorial mode. If you want to use full factorial mode, you will have to set your distribution mode to Manual.

Weighted improvement

Stats Accelerator relies on dynamic traffic allocation to achieve its results. Anytime you allocate traffic dynamically over time, you run the risk of introducing bias into your results. Left uncorrected, this bias can have a significant impact on your reported results. This is known as Simpson's Paradox.

To illustrate this, let's look at the charts below. The first chart shows conversion rates for two variations when traffic allocation is kept static. In this example, conversion rates for both variations begin to decline after each has been seen by 5,000 visitors. And while we see plenty of fluctuation in conversion rates, the gap between the winning and losing variations never strays far from the true lift.

[Chart: conversion rates over time for two variations under static traffic allocation]

The steady decline in the observed conversion rates shown above is caused by the sudden, one-time shift in the true conversion rates at the time when the experiment has 10,000 visitors. 

In the next chart, we see what happens when traffic is dynamically allocated instead, with 90 percent of all traffic directed to the winning variation after each variation has been seen by 5,000 visitors. Here, the winning variation shows the same decline in conversion rates as it did in the previous example. However, because the losing variation has been seen by far fewer visitors, its conversion rates are slower to change.

[Chart: conversion rates over time for two variations under dynamic traffic allocation]

This gives the impression that the difference between the two variations is much less than it actually is.

Simpson's Paradox is especially dangerous when the true lift is relatively small. In those cases, it can even cause the sign on your results to flip, essentially reporting winning variations as losers and vice versa: 

[Charts: examples where dynamic allocation flips the sign of the reported improvement]

Stats Accelerator neutralizes this bias through a technique we call weighted improvement. 

Weighted improvement is designed to estimate the true lift as accurately as possible by breaking down the duration of an experiment into much shorter segments called epochs. These epochs cover periods of constant allocation: in other words, traffic allocation between variations does not change for the duration of each epoch.

Results are calculated for each epoch, which has the effect of minimizing the bias in each individual epoch. At the end of the experiment, these results are all used to calculate the estimated true lift, filtering out the bias that would have otherwise been present.
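
To make the correction concrete, here is a minimal sketch (in Python, not Optimizely's actual code) contrasting a naive pooled lift estimate with an epoch-weighted estimate. The visitor counts and conversion rates are invented for illustration: conversion rates drop partway through the experiment, and 90 percent of traffic shifts to the winning variation at the same point.

```python
# Illustrative sketch of epoch-weighted improvement (not Optimizely's implementation).
# Each epoch has constant traffic allocation; true conversion rates shift between epochs.

epochs = [
    # (baseline_visitors, baseline_conversions, variation_visitors, variation_conversions)
    (5000, 500, 5000, 600),    # epoch 1: 10% vs 12% -> true lift +2 pp
    (5000, 250, 45000, 2700),  # epoch 2: rates drop, variation gets 90% of traffic: 5% vs 6% -> +1 pp
]

# Naive pooled estimate: aggregate all visitors, ignoring the allocation change.
b_vis = sum(e[0] for e in epochs)
b_conv = sum(e[1] for e in epochs)
v_vis = sum(e[2] for e in epochs)
v_conv = sum(e[3] for e in epochs)
pooled_lift = v_conv / v_vis - b_conv / b_vis

# Epoch-weighted estimate: compute the lift within each epoch, then average the
# per-epoch lifts weighted by the number of visitors in each epoch.
total_visitors = sum(e[0] + e[2] for e in epochs)
weighted_lift = sum(
    (e[3] / e[2] - e[1] / e[0]) * (e[0] + e[2]) / total_visitors
    for e in epochs
)

print(f"pooled lift:   {pooled_lift * 100:+.2f} pp")    # -0.90 pp: the winner looks like a loser
print(f"weighted lift: {weighted_lift * 100:+.2f} pp")  # +1.17 pp: close to the true per-epoch lifts
```

The pooled estimate flips the sign because the winning variation collects most of its traffic after conversion rates drop, while the baseline's data is weighted toward the earlier high-rate period. That is exactly the bias described above; the epoch-weighted estimate never mixes data across allocation changes, so the bias drops out.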

Impact on reporting and results

When Stats Accelerator is enabled, the experiment's results will differ from other experiments in four visible ways:

  • Stats Accelerator adjusts the percentage of visitors who see each variation. This means visitor counts will reflect the distribution decisions of the Stats Accelerator.

  • Stats Accelerator experiments use a different calculation to measure the difference in conversion rates between variations: weighted improvement. Weighted improvement is an estimate of the true difference in conversion rates, derived from inspecting the individual time intervals between allocation adjustments. See the last question in the Technical FAQ for details ("How does Stats Accelerator handle conversion rates that change over time and Simpson's Paradox?").

  • Stats Accelerator experiments and campaigns use absolute improvement instead of relative improvement in results to avoid statistical bias and to reduce time to significance. 

    Relative improvement is computed as:
         (variation conversion rate - baseline conversion rate) / baseline conversion rate

    Absolute improvement is computed as:
         variation conversion rate - baseline conversion rate

    For example, if the baseline converts at 10% and a variation converts at 12%, the absolute improvement is 2 percentage points, while the relative improvement is 20%.

  • Stats Accelerator reports absolute improvements in percentage points, denoted by the "pp" unit:
    [Image: Results page showing an absolute improvement expressed in "pp" units]
    Additionally, the winning variation also displays its results as an approximate relative improvement, shown just below the absolute improvement (in this example, -12.15%). This is provided for continuity, so customers who are accustomed to relative improvement can develop a sense of how the two measures compare.

Because traffic distribution will be updated frequently, Full Stack customers should implement sticky bucketing to avoid exposing the same visitor to multiple variations. To do this, implement the user profile service. See our developer documentation for more detail.
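
Here is a minimal sketch of what that might look like with the Python SDK. The in-memory store, datafile path, and experiment key are placeholders, and the exact lookup/save contract for the user profile service should be confirmed against the developer documentation for your SDK version.

```python
# Sketch only: sticky bucketing via a user profile service with the Optimizely
# Python SDK. In production, back this with a persistent datastore, not a dict.
from optimizely import optimizely


class InMemoryUserProfileService(object):
    """Remembers each visitor's bucketing decision so repeat visits keep seeing
    the same variation even after Stats Accelerator shifts traffic."""

    def __init__(self):
        self._profiles = {}

    def lookup(self, user_id):
        # Return the saved profile dict for this visitor, or None if unseen.
        return self._profiles.get(user_id)

    def save(self, user_profile):
        # user_profile is a dict that includes the visitor's 'user_id'.
        self._profiles[user_profile['user_id']] = user_profile


datafile = open('datafile.json').read()  # placeholder path to your project's datafile
client = optimizely.Optimizely(
    datafile=datafile,
    user_profile_service=InMemoryUserProfileService(),
)

# 'my_experiment' is a placeholder experiment key.
variation = client.activate('my_experiment', 'visitor-123')
```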

Modify an experiment when Stats Accelerator is enabled

It is possible to modify an experiment if you have Stats Accelerator enabled. However, there are some limitations you should be aware of.

Prior to starting your experiment, you can add or delete variations for Web, Personalization and Full Stack experiments as long as you still have the minimum number of variations required by the algorithm you’ve selected. For Accelerate Impact, this number is two, while for Accelerate Learnings, it’s three.

You can also add or delete sections or section variations for multivariate tests, provided that you still have the minimum number of variations required by the algorithm you’re using.

Once you’ve started your experiment, you can add, stop, or pause variations in Web, Personalization, and Full Stack experiments. However, for a multivariate test, you can only add or delete sections. You cannot add or delete section variations once the experiment has begun.

Technical FAQ

How does Stats Accelerator work with Stats Engine?
 
Stats Engine will continue to decide when a variation has a statistically significant difference from the control, just as it always has. We would never compromise statistical validity by introducing a new feature. But because some differences are easier to spot than others, each variation will require a different number of samples allocated to it in order to reach significance.
For the Accelerate Learnings approach, Stats Accelerator decides in real time how many samples each variation should be allocated to get the same statistically significant results as standard A/B/n testing, but in less time. These algorithms are only compatible with always-valid p-values, such as those used in Stats Engine, which hold for all sample sizes and support continuous peeking and monitoring. This means that you may use the Results page for Stats Accelerator-enabled experiments just like any other experiment.
What algorithms or frameworks does Stats Accelerator support?
Stats Accelerator is not a single algorithm, but a suite of algorithms, each of which adapts its allocation toward a different goal.
For tasks that balance exploration-versus-exploitation (Accelerate Impact), Optimizely uses a procedure inspired by Thompson Sampling, which is known to be optimal in this regime (Russo, Van Roy 2013).
Optimizely also draws from the research area of multi-armed bandits. Specifically, for pure-exploration tasks (Accelerate Learnings), such as discovering all variants that have statistically significant differences from the control, algorithms in use are based on the popular upper confidence bound heuristic known to be optimal for pure-exploration tasks (Jamieson, Malloy, Nowak, Bubeck 2014).
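
As a rough illustration of the explore-versus-exploit idea (not Optimizely's production algorithm), a Thompson Sampling style allocation for a binary metric can be sketched as follows; the variation names and counts are invented for the example.

```python
import random

# Illustrative Thompson Sampling allocation for a binary metric. Each variation
# keeps a Beta posterior over its conversion rate; its traffic share is the
# estimated probability that its sampled rate is the highest.

observed = {
    'original':    {'visitors': 1000, 'conversions': 100},
    'variation_1': {'visitors': 1000, 'conversions': 120},
    'variation_2': {'visitors': 1000, 'conversions':  95},
}

def next_allocation(observed, draws=5000):
    wins = {name: 0 for name in observed}
    for _ in range(draws):
        # Draw a plausible conversion rate for each arm from Beta(1 + conversions, 1 + non-conversions).
        samples = {
            name: random.betavariate(1 + d['conversions'],
                                     1 + d['visitors'] - d['conversions'])
            for name, d in observed.items()
        }
        wins[max(samples, key=samples.get)] += 1
    # Allocate traffic in proportion to each arm's chance of being the best.
    return {name: wins[name] / draws for name in observed}

print(next_allocation(observed))  # variation_1, the current leader, receives most of the next period's traffic
```
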
Can I use my own algorithm?
Using the REST API, you can programmatically adjust Traffic Allocation weights as needed. Optimizely's out-of-the-box Stats Accelerator feature was finely tuned on millions of historical data points and state-of-the-art work in the field of bandits and adaptive sampling.
How much time will I save with the Accelerate Learnings algorithm?
This depends on the variations you are exploring. Users typically achieve statistical significance two to three times faster than standard A/B/n testing with Accelerate Learnings. This means with the same amount of traffic, you can reach significance using two to three times as many variants at a time as was possible with standard A/B/n testing.
How often does Stats Accelerator make a decision?
The model that drives Stats Accelerator is updated hourly. Even for Optimizely users with extremely high traffic, this is more than sufficient to get the maximum benefit of a dynamic, adaptive allocation. If you require a higher or lower frequency of model updates, please let us know.
What happens if I change the baseline on the Results page?
There is no adverse impact to selecting another baseline, but the numbers may be difficult to interpret. We suggest keeping the original baseline when you interpret Results data.
What happens if I change my primary metric?
The Stats Accelerator scheme reacts and adapts to the primary metric. If you change the primary metric mid-experiment, the Stats Accelerator scheme will change its policy to optimize that metric. For this reason, we suggest you do not change the primary metric once you begin the experiment or campaign.
What happens when I pause or stop a variation?
If you pause or stop a variation, Stats Accelerator will ignore that variation's results data when adjusting traffic distribution among the remaining live variations.
How does Stats Accelerator handle revenue and numeric metrics?
For numeric metrics like revenue, the number of parameters needed to fully describe the distribution may be unbounded. In practice, we use robust estimators for the first few moments (for example, the mean, variance, and skew) to construct confidence bounds, which are then used just like those of binary metrics.
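
As a simplified illustration of a moment-based bound (Optimizely's actual bounds are sequential and use robust estimators, so this is not the production calculation), one can form a fixed-sample confidence interval for revenue per visitor from the sample mean and variance:

```python
import math

def confidence_interval(values, z=1.96):
    """Normal-approximation interval for the mean of a numeric metric."""
    n = len(values)
    mean = sum(values) / n
    variance = sum((v - mean) ** 2 for v in values) / (n - 1)  # sample variance
    half_width = z * math.sqrt(variance / n)
    return mean - half_width, mean + half_width

# Hypothetical revenue-per-visitor observations (most visitors spend nothing).
revenue = [0.0, 0.0, 49.99, 0.0, 19.99, 0.0, 0.0, 89.99, 0.0, 0.0]
print(confidence_interval(revenue))
```
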
How does Stats Accelerator work with Personalization?
Stats Accelerator will automatically adjust traffic distribution between variations within campaign experiences. This will not affect the holdback. To maximize the benefit of Accelerate Learnings, you should increase your holdback to a level that would normally represent uniform distribution. For example, if you have 3 variations and a holdback, consider a 25% holdback.
What is the mathematical difference between Accelerate Learnings (pure exploration) and Accelerate Impact (explore-vs.-exploit)?
In simple terms, if your goal is to learn whether any variations are better or worse than the baseline and take actions that have longer-term impact to your business based on this information, use Accelerate Learnings. If, on the other hand, you just want to maximize conversions among these variations, choose Accelerate Impact.
In traditional A/B/n testing, a control is defined alongside a number of variants that are to be determined better or worse than the control. Typically, such an experiment is done on a fraction of web traffic to determine the potential benefit or detriment of using a particular variant instead of the control. If the absolute difference between a variant and control is large, only a small number of impressions of this variant are necessary to confidently declare the variant as different (and by how much). On the other hand, when the difference is small, more impressions of the variant are necessary to spot this small difference. The goal of Accelerate Learnings is to spot the big differences quickly and divert more traffic to those variants that require more impressions to attain statistical significance. Although nothing can ever be said with 100% certainty in statistical testing, we guarantee that the false discovery rate (FDR) is controlled, which bounds the expected proportion of variants falsely claimed as having a statistically significant difference when there is no true difference (users commonly control the FDR at 5%).
In a nutshell, use Accelerate Learnings when you have a control or default and you’re investigating optional variants before committing to one and replacing the control. In Accelerate Impact, the variants and control (if it exists) are on equal footing. Instead of trying to reach statistical significance on the hypotheses that each variant is either different or the same as the control, Accelerate Impact attempts to adapt the allocation to the variant that has the best performance.
How does Stats Accelerator handle conversion rates that change over time and Simpson's Paradox?
Time variation is defined as a dependence of the underlying distribution of the metric value on time. More simply, time variation occurs when a metric's conversion rate changes over time. Stats Engine assumes the metric's values are identically distributed over time.
Time variation is caused by a change in the underlying conditions that affect visitor behavior. Examples include more purchasing visitors on weekends; an aggressive new discount that yields more customer purchases; or a marketing campaign in a new market that brings in a large number of visitors with different interaction behavior than existing visitors.
We assume identically distributed data because this assumption enables us to support continuous monitoring and faster learning (see the Stats Engine article for details). However, Stats Engine has a built-in mechanism to detect violations of this assumption. When a violation is detected, Stats Engine updates the statistical significance calculations. We call this a “stats reset.”
Time variation affects experiments using Stats Accelerator because the algorithms adjust the percentage of traffic exposed to each variation during the experiment. This can introduce bias in the estimated improvement, known as Simpson's Paradox. The result is that stats resets may be much more likely to occur.
The solution is to change the way the Improvement number is calculated. Specifically, we compare the conversion rates of the baseline and variation(s) within each interval between traffic allocation changes. Then, we compute statistics using weighted averages across these time intervals. For example, the difference of observed conversion rates is scaled by the number of visitors in each interval to generate an estimate of the true difference in conversion rates. This estimate is represented as weighted improvement.
Furthermore, time variation has less of an effect on the Accelerate Impact approach because it does not seek to reduce the time to statistical significance. The Accelerate Impact approach seeks to exploit the best-performing variation, weighting recent data more heavily to account for uncertainty. Therefore, the business impact of a stats reset is lower than that of a stats reset on an experiment that is trying to reach statistical significance.
To mitigate the effects of time variation even further for Accelerate Impact, we have implemented an exponential decay function that weighs more recent visitor behavior more strongly, so the algorithm adapts to time variation more quickly. Exponential decay is a smooth mathematical function that gives less weight to earlier observations and more weight to recent ones; it is broadly used to model early observations gradually becoming less relevant as trends change over time. For both the Accelerate Learnings and Accelerate Impact algorithms, we also reserve a portion of the traffic for pure exploration so that we can detect when time variation happens.
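
As a rough sketch (the decay factor here is an assumption, not Optimizely's actual schedule), an exponentially decayed conversion-rate estimate gives newer epochs more weight:

```python
def decayed_conversion_rate(epochs, decay=0.5):
    """epochs: oldest-to-newest list of (visitors, conversions) per epoch.
    Each epoch's data is down-weighted by decay for every epoch of age."""
    weighted_conversions = 0.0
    weighted_visitors = 0.0
    for age, (visitors, conversions) in enumerate(reversed(epochs)):
        weight = decay ** age  # newest epoch has age 0 and full weight
        weighted_conversions += weight * conversions
        weighted_visitors += weight * visitors
    return weighted_conversions / weighted_visitors

# The true conversion rate drifts from ~10% down to ~5% over four epochs.
epochs = [(1000, 100), (1000, 90), (1000, 70), (1000, 50)]
print(decayed_conversion_rate(epochs))                         # ~6.4%, pulled toward the recent rate
print(sum(c for _, c in epochs) / sum(v for v, _ in epochs))   # 7.75% unweighted average
```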
 

Additional resources