Why Stats Engine controls for false discovery instead of false positives

THIS ARTICLE WILL HELP YOU:
  • Understand the difference between the false positive rate and the false discovery rate, and why Optimizely's Stats Engine controls the latter.

Optimizely’s Stats Engine focuses on false discovery rate instead of false positive rate, which helps you make business decisions based on reliable results.

You may already be familiar with the idea of a false positive rate. It can be calculated as the ratio between:

  • the number of negative events incorrectly categorized as positive, and

  • the overall number of actual negative events.
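As a quick illustration of that ratio, here is a minimal Python sketch with made-up counts (not anything Stats Engine computes for you):

```python
def false_positive_rate(false_positives, true_negatives):
    """Negatives incorrectly called positive, divided by all actual negatives."""
    return false_positives / (false_positives + true_negatives)

# Hypothetical counts: 5 of 100 truly-unchanged comparisons were flagged as winners.
print(false_positive_rate(false_positives=5, true_negatives=95))  # 0.05, i.e. 5%
```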

On every experiment, there is a risk of getting a false positive result. This happens when an experiment reports a conclusive winner but there is in fact no real difference in visitor behavior between your variations.

With traditional statistics, the risk of generating at least one false positive result increases as you add more metrics and variations to your experiment. This is true even though the false positive rate stays the same for each individual metric or variation.

This may sound like a theoretical problem, and it is (it’s known as the multiple comparisons problem). But it can also have significant real-world impact. The reason is that even if the false positive rate for each individual metric or variation stays the same, the chances that you will make a critical business decision based on a false positive result grow very quickly.
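To see how quickly that risk compounds, here is a small sketch. It assumes a conventional 5% per-comparison false positive rate and independent comparisons (assumptions of this illustration, not figures from Stats Engine):

```python
# Chance of at least one false positive across m independent comparisons,
# each run at a 5% per-comparison false positive rate.
alpha = 0.05
for m in (1, 5, 10, 20):
    at_least_one_false_positive = 1 - (1 - alpha) ** m
    print(f"{m} comparison(s): {at_least_one_false_positive:.0%}")
# Prints roughly 5%, 23%, 40%, and 64%.
```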

False discovery rate

Optimizely helps you avoid this by taking a more stringent approach to controlling errors. Instead of focusing on the false positive rate, Optimizely uses procedures that manage the false discovery rate, which we define like this:

False Discovery Rate = (average number of incorrect winning and losing declarations) / (total number of winning and losing declarations)

These procedures are designed to control the expected proportion of conclusive results that are incorrect.

In statistical language, this is the expected proportion of rejected null hypotheses that were rejected incorrectly (the null hypothesis being the claim that a particular change to your website caused no change in visitor behavior).
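Written as a tiny sketch, using counts you would only know if you could see the ground truth:

```python
def false_discovery_rate(incorrect_declarations, total_declarations):
    """Incorrect winning/losing declarations divided by all such declarations."""
    if total_declarations == 0:
        return 0.0  # Nothing was declared, so nothing was discovered falsely.
    return incorrect_declarations / total_declarations
```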

If you’re not familiar with the concept of false discovery rate control, it’s a modern statistical procedure that has been demonstrated to be more accurate for testing multiple hypotheses at once, which is exactly what you’re doing when you run an experiment with more than one variation or more than one metric. You can learn more about how false discovery rate control works by reading this article or scrolling through this slide deck (they’re both pretty technical reads).
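For a flavor of what a false-discovery-rate-controlling procedure looks like, here is a sketch of the classic Benjamini-Hochberg procedure. Stats Engine's sequential approach is different and more involved; this is only the textbook version, shown with hypothetical p-values:

```python
def benjamini_hochberg(p_values, fdr_target=0.10):
    """Indices of results declared conclusive while controlling the false
    discovery rate at fdr_target, using the Benjamini-Hochberg step-up rule."""
    m = len(p_values)
    # Sort p-values ascending, remembering their original positions.
    order = sorted(range(m), key=lambda i: p_values[i])
    # Find the largest rank k with p_(k) <= (k / m) * fdr_target.
    cutoff_rank = 0
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= rank / m * fdr_target:
            cutoff_rank = rank
    # Declare everything at or below that rank conclusive.
    return sorted(order[:cutoff_rank])

# Ten hypothetical variation/metric combinations.
p_values = [0.001, 0.008, 0.04, 0.06, 0.20, 0.35, 0.51, 0.62, 0.74, 0.90]
print(benjamini_hochberg(p_values))  # [0, 1]: only the first two are declared winners
```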

Here is an example of how false discovery rate control delivers better results in an experiment using multiple variations and metrics. Imagine a hypothetical experiment with five variations and two distinct metrics.

In this experiment, there are ten different opportunities for a conclusive result (five variations times two metrics). Two winners are reported; however, one of them is a false winner: in reality there is no difference in visitor behavior behind it.

If we were to (incorrectly) reason in terms of the false positive rate, we would think the likelihood of choosing the false winner is ten percent, because only one of the ten potential results is incorrect. We would likely consider this an acceptable level of risk.

But looking at the false discovery rate, we see that our chances of selecting a false winner are actually fifty percent. That’s because the false discovery rate only looks at actual conclusive results, instead of merely all opportunities for results.

If you were running this experiment, the first thing you probably would do is discard all the inconclusive variation / metric combinations. You would then have to decide which of the two winning variations to implement. In doing so, you would have no better than a 50-50 chance of selecting the variation that would actually help drive the visitor behavior you wanted to encourage.
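Plugging the numbers from this example into both definitions makes the gap concrete (a sketch; the counts come straight from the scenario above):

```python
total_comparisons = 10  # 5 variations x 2 metrics
declared_winners = 2    # conclusive results reported by the experiment
false_winners = 1       # winners with no real difference behind them

# False-positive framing: incorrect results out of all opportunities for a result.
print(false_winners / total_comparisons)  # 0.1 -> looks like a 10% risk

# False-discovery framing: incorrect results out of results actually declared.
print(false_winners / declared_winners)   # 0.5 -> a 50% chance of picking the false winner
```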

A false discovery rate of fifty percent would definitely be alarming. But because Optimizely uses techniques that work to keep the false discovery rate low—approximately ten percent—your chances of selecting a true winning variation to implement are much higher than if you were using a tool that relied on more traditional statistical methods.

To learn how to capture more value from your experiments, either by reducing the time to statistical significance or by increasing the number of conversions collected, see our article on Stats Accelerator.

Rank your metrics to minimize risk

We updated false discovery rates in Optimizely X to better match customers' diverse approaches to running experiments. We explained above how your chance of making an incorrect business decision increases as you add more metrics and variations (the “multiple comparisons problem”). This is true, but it's not the whole story.

Consider an experiment with seven events: one headline metric that determines success of your experiment; four secondary metrics tracking supplemental information; and two diagnostic metrics used for debugging. These metrics aren't all equally important. Also, statistical significance isn't as meaningful for some (the diagnostic metrics) as it is for others (the headline metric).

Optimizely X solves this problem by allowing you to rank your metrics. The first ranked metric is still your primary metric. Metrics ranked 2 through 5 are considered secondary. Secondary metrics take longer to reach significance as you add more of them, but they don't impact the primary metric's speed to significance. Finally, any metrics ranked beyond the first five are monitoring metrics. Monitoring metrics take longer to reach significance if there are more of them, but have minimal impact on secondary metrics and no impact on the primary metric.
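Purely as an illustration of those tiers (the metric names below are made up and this is not Optimizely's API; the rank cutoffs simply restate the rules in this paragraph):

```python
def metric_tier(rank):
    """Map a metric's rank to its tier, following the rules described above."""
    if rank == 1:
        return "primary"
    if rank <= 5:
        return "secondary"
    return "monitoring"

# The seven-event example from earlier: one headline metric, four secondary, two diagnostic.
metrics = ["headline_conversion", "signups", "add_to_cart", "pageviews",
           "scroll_depth", "js_errors", "page_load_time"]
for rank, name in enumerate(metrics, start=1):
    print(f"{rank}. {name}: {metric_tier(rank)}")
```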

The result is that your chance of making a mistake on your primary metric is controlled, and the false discovery rate of all other metrics is controlled as well, while the metrics that matter most still reach statistical significance quickly.