Optimizely Knowledge Base
Why Stats Engine controls for false discovery instead of false positives

THIS ARTICLE WILL HELP YOU:
  • Understand the difference between the concepts of false positive rate and false discovery rate, and why Optimizely's Stats Engine uses one over the other.

Optimizely’s Stats Engine focuses on false discovery rate instead of false positive rate, which helps you make business decisions based on reliable results.

You may already be familiar with the idea of a false positive rate. It can be calculated as the ratio between:

  • the number of negative events incorrectly categorized as positive, and

  • the overall number of actual negative events.

In every experiment, there is a risk of getting a false positive result. This happens when the experiment reports a conclusive winner (or loser, depending on the direction of the observed difference) even though there is in fact no real difference in visitor behavior between your variations.
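Here is a minimal sketch of that calculation in Python, using hypothetical counts rather than data from any real experiment:

    # False positive rate = negatives incorrectly flagged as positive,
    # divided by all actual negatives. The counts here are illustrative only.
    false_positives = 5    # comparisons reported conclusive despite no real difference
    true_negatives = 95    # comparisons correctly reported inconclusive

    false_positive_rate = false_positives / (false_positives + true_negatives)
    print(f"False positive rate: {false_positive_rate:.0%}")  # 5%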


With traditional statistics, the risk of generating at least one false positive result increases as you add more metrics and variations to your experiment. This is true even though the false positive rate stays the same for each individual metric or variation.

This may sound like a theoretical problem, and it is. But it also has significant real-world impact. When an experiment is run with more than one variation or more than one metric, these are collectively referred to as multiple hypotheses. It is important to correct the statistical significance calculations when you test multiple hypotheses at the same time, because running several hypothesis tests simultaneously creates the multiple comparisons problem (also known as multiplicity or the look-elsewhere effect): the probability of basing a critical business decision on a false positive result grows rapidly with the number of hypothesis tests you are running at once.
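To see why, assume (purely for illustration) that each comparison is independent and carries a 5% false positive rate. The chance of at least one false positive across the whole experiment then grows quickly with the number of comparisons:

    # Illustrative only: independent comparisons, each with a 5% false positive rate.
    alpha = 0.05

    for num_tests in (1, 2, 5, 10, 20):
        p_at_least_one = 1 - (1 - alpha) ** num_tests
        print(f"{num_tests:>2} tests -> chance of at least one false positive: {p_at_least_one:.1%}")

    # Roughly 5%, 10%, 23%, 40%, and 64% respectively.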

False discovery rate

Optimizely helps you avoid this by taking a more stringent approach to controlling errors. Instead of focusing on the false positive rate, Optimizely uses procedures that manage the false discovery rate, which we define like this:

False Discovery Rate = (average number of incorrect winning and losing declarations) / (total number of winning and losing declarations)

These procedures are designed to control the expected proportion of conclusive results that are incorrect.

In statistical language, an incorrect conclusive result is an incorrect rejection of the null hypothesis (the null hypothesis being the claim that there was no change to visitor behavior as a result of a particular change to your website), and the false discovery rate is the expected proportion of rejections that turn out to be incorrect.
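The contrast with the false positive rate can be made concrete with a small sketch. It assumes we know the ground truth for each comparison, which is only possible in a simulation or an A/A setup, and the field names are illustrative rather than part of Optimizely's API:

    # Hypothetical outcomes for four variation/metric comparisons.
    results = [
        {"conclusive": True,  "real_difference": True},   # true winner
        {"conclusive": True,  "real_difference": False},  # false winner
        {"conclusive": False, "real_difference": False},  # correctly inconclusive
        {"conclusive": False, "real_difference": False},  # correctly inconclusive
    ]

    # False positive rate: flagged comparisons / comparisons with no real difference.
    no_difference = [r for r in results if not r["real_difference"]]
    fpr = sum(r["conclusive"] for r in no_difference) / len(no_difference)

    # False discovery rate: incorrect declarations / all conclusive declarations.
    declarations = [r for r in results if r["conclusive"]]
    fdr = sum(not r["real_difference"] for r in declarations) / len(declarations)

    print(f"False positive rate:  {fpr:.0%}")   # 33%
    print(f"False discovery rate: {fdr:.0%}")   # 50%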

False discovery rate control is a statistical procedure for correcting multiplicity caused by running multiple hypothesis tests at once. Optimizely's Stats Engine incorporates a tiered version of the Benjamini-Hochberg procedure for false discovery rate control to correct statistical significance across multiple metrics and variations. The procedure offers a way to increase the power to detect statistically significant differences between variations, all while maintaining a principled bound on error.
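The exact tiered variant used by Stats Engine is not reproduced here, but the standard Benjamini-Hochberg step-up procedure it builds on can be sketched in a few lines of Python:

    def benjamini_hochberg(p_values, q=0.10):
        """Return indices of hypotheses rejected at false discovery rate q
        (standard step-up procedure, not Optimizely's tiered variant)."""
        m = len(p_values)
        order = sorted(range(m), key=lambda i: p_values[i])
        # Find the largest k (1-based) such that p_(k) <= (k / m) * q.
        max_k = 0
        for rank, idx in enumerate(order, start=1):
            if p_values[idx] <= (rank / m) * q:
                max_k = rank
        # Reject the hypotheses with the max_k smallest p-values.
        return sorted(order[:max_k])

    # Example: four comparisons controlled at a 10% false discovery rate.
    print(benjamini_hochberg([0.003, 0.020, 0.090, 0.400], q=0.10))  # [0, 1]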

You can go in-depth into how Optimizely incorporates false discovery rate control by reading the article "Peeking at A/B Tests: Why it Matters, and what to do about it" (see pp. 1523-1525). [LINK]

Citations for the specific false discovery rate control algorithms incorporated into Optimizely's Stats Engine are:

Benjamini, Y., & Hochberg, Y. (1995). Controlling the false discovery rate: a practical and powerful approach to multiple testing. Journal of the Royal Statistical Society: Series B (Methodological), 57(1), 289-300. [LINK]

Benjamini, Y., & Yekutieli, D. (2001). The control of the false discovery rate in multiple testing under dependency. Annals of Statistics, 29(4), 1165-1188. [LINK]

Here is an example of how false discovery rate control delivers better results in an experiment using multiple variations and metrics. Imagine a hypothetical experiment with five variations and two distinct metrics.

In this experiment, there are ten different opportunities for a conclusive result. There are two winners reported; however, one of them (the one labeled “false winner”) is actually inconclusive.

If we were to (incorrectly) use the false positive rate as our metric, we would think the likelihood of choosing the false winner is ten percent, because only one of the ten potential results is incorrect. We would likely consider this to be an acceptable rate of risk.

But looking at the false discovery rate, we see that our chances of selecting a false winner are actually fifty percent. That’s because the false discovery rate only looks at actual conclusive results, instead of merely all opportunities for results.
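The arithmetic behind this hypothetical example is simple:

    # Ten variation/metric combinations, two declared winners, one of them false.
    total_comparisons = 10
    conclusive_declarations = 2
    incorrect_declarations = 1

    share_of_all_comparisons = incorrect_declarations / total_comparisons    # 10%
    false_discovery_rate = incorrect_declarations / conclusive_declarations  # 50%

    print(f"Incorrect results as a share of all comparisons: {share_of_all_comparisons:.0%}")
    print(f"False discovery rate among conclusive results:   {false_discovery_rate:.0%}")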

If you were running this experiment, the first thing you would probably do is discard all the inconclusive variation/metric combinations. You would then have to decide which of the two winning variations to implement. In doing so, you would have no better than a 50-50 chance of selecting the variation that would actually help drive the visitor behavior you wanted to encourage.

A false discovery rate of fifty percent would definitely be alarming. But because Optimizely uses techniques that work to keep the false discovery rate low—approximately ten percent—your chances of selecting a true winning variation to implement are much higher than if you were using a tool that relied on more traditional statistical methods.

To learn how to capture more value from your experiments, either by reducing the time to statistical significance or by increasing the number of conversions collected, see our article on Stats Accelerator.

Rank your metrics to minimize risk

We updated false discovery rates in Optimizely to better match customers' diverse approaches to running experiments. We explained above how your chance of making an incorrect business decision increases as you add more metrics and variations (the “multiple comparisons problem”). This is true, but it's not the whole story.

Consider an experiment with seven events: one headline metric that determines success of your experiment; four secondary metrics tracking supplemental information; and two diagnostic metrics used for debugging. These metrics aren't all equally important. Also, statistical significance isn't as meaningful for some (the diagnostic metrics) as it is for others (the headline metric).

Optimizely solves this problem by allowing you to rank your metrics. The first ranked metric is still your primary metric. Metrics ranked 2 through 5 are considered secondary. Secondary metrics take longer to reach significance as you add more of them, but they don't impact the primary metric's speed to significance. Finally, any metrics ranked beyond the first five are monitoring metrics. Monitoring metrics take longer to reach significance if there are more of them, but have minimal impact on secondary metrics and no impact on the primary metric.
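As a rough sketch of how those tiers line up with metric rank (the function below is illustrative, not part of Optimizely's API):

    def metric_tier(rank):
        """Classify a metric by its position in the experiment's ranked metric list."""
        if rank == 1:
            return "primary"      # determines success; unaffected by the other metrics
        if rank <= 5:
            return "secondary"    # slower to reach significance as more are added
        return "monitoring"       # minimal impact on secondary metrics, none on primary

    for rank in range(1, 8):
        print(rank, metric_tier(rank))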

The result is that your chance of making a mistake on your primary metric is controlled. The false discovery rate of all other metrics is also controlled, all while prioritizing reaching statistical significance quickly on the metrics that matter most.

Stats Engine, like any other statistical test, does not completely shield the user from the effects of randomness. Even in cases where there is no underlying difference between the control and the treatment, differences between characteristics of users bucketed to different variations will generate slight differences in performance between the variants.

Occasionally, these fluke differences may be large enough to cause Stats Engine to declare statistical significance on a variation, but this should not happen very frequently. The rate at which this happens is controlled according to the confidence threshold set by the user. With a confidence level (statistical significance threshold) of 90%, for example, the user can expect an A/A comparison to result in statistical significance no more than 10% of the time.
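As an illustration, the following simulation runs repeated A/A comparisons with a simple fixed-horizon z-test at a 90% significance threshold. Stats Engine's sequential test works differently, but the bound described above is analogous: roughly 10% of comparisons reach "significance" by chance alone.

    import math
    import random

    random.seed(0)
    runs, visitors, base_rate, alpha = 1000, 2000, 0.10, 0.10
    significant = 0

    for _ in range(runs):
        # Both "variations" share the same true conversion rate (an A/A test).
        a = sum(random.random() < base_rate for _ in range(visitors))
        b = sum(random.random() < base_rate for _ in range(visitors))
        p_a, p_b = a / visitors, b / visitors
        pooled = (a + b) / (2 * visitors)
        se = math.sqrt(2 * pooled * (1 - pooled) / visitors)
        z = abs(p_a - p_b) / se if se > 0 else 0.0
        p_value = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))  # two-sided
        significant += p_value < alpha

    print(f"A/A comparisons declared significant: {significant / runs:.1%}")  # around 10%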

But notably, this still allows for the occasional false discovery. If the user observes a false discovery only rarely, then there is no reason to suspect any issue. If it happens consistently, then there may be just cause for a deeper dive.