
Create mutually exclusive experiments in Optimizely X Full Stack

This article will help you:

  • Ensure that a single user doesn't see overlapping experiments that test the same feature
  • Create an experiment group to keep experiments mutually exclusive and prevent interaction effects
  • Decide when to use experiment groups and when to allow experiment overlap

Mutually exclusive experiments help you ensure that a single user doesn't see two overlapping A/B tests. In other words, they ensure that users who are exposed to Experiment #1 never see Experiment #2, and vice versa.

Use mutually exclusive experiments when you'd like to control for interaction effects.

For example, imagine that you're running two experiments that test the same algorithm. You don't want users to see both, because the variations overlap heavily, and users who see both experiments may behave differently from users who see each experiment in isolation. To clearly understand the impact of each set of changes, you make the two experiments mutually exclusive.

In Full Stack, you can make two or more experiments mutually exclusive by adding them to an experiment group.

Create an experiment group

In Full Stack, use experiment groups to make multiple experiments mutually exclusive.

  1. Navigate to the Experiments dashboard and select the Groups tab.

  2. Click New Group.

  3. Name and describe the group.
    For example, you might name the group "Search Algorithm Group." All experiments on the search algorithm that are added to this group will be mutually exclusive: no user will see more than one experiment in the group.

  4. Click Create Group.

Congratulations! You've set up a group of mutually exclusive experiments.
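From the SDK's perspective, a grouped experiment is activated like any other; Optimizely handles the mutually exclusive bucketing behind the scenes. Here's a minimal sketch, assuming the Full Stack Python SDK, with a hypothetical datafile, experiment key, and variation name:

    from optimizely import optimizely

    datafile = '{"version": "4", ...}'  # placeholder: your project's JSON datafile
    user_id = 'user_123'

    optimizely_client = optimizely.Optimizely(datafile)

    # 'search_algorithm_v1' is a hypothetical experiment inside the group.
    # If this user is bucketed into a different experiment in the group
    # (or out of the group entirely), activate() returns None.
    variation = optimizely_client.activate('search_algorithm_v1', user_id)

    if variation == 'treatment':
        pass  # serve the new search algorithm
    else:
        pass  # default experience, or user excluded from this experiment

Because activate() returns None for users bucketed elsewhere, your code should always fall back to the default experience.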

How groups work

For experiments that are not mutually exclusive, Optimizely buckets users with a unique random seed for each experiment; the seed determines whether a given user enters that experiment. Because each experiment's seed is independent of the others, the bucketing decisions are independent too, and some users enter multiple experiments. For example, imagine two experiments, A and B, each with a 20% traffic allocation (the percentage of total traffic that is eligible for the experiment). Here's the expected traffic distribution:

  • 16% of traffic falls in experiment A only
  • 16% of traffic falls in experiment B only
  • 4% of traffic falls in both experiment A and experiment B
  • 64% of traffic is not in any experiment
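
To see where these numbers come from, here's a short Python simulation of independent bucketing. MD5 stands in for the SDK's internal hash; the seeds and user IDs are made up:

    import hashlib

    def in_experiment(seed, user_id, allocation=0.20):
        """Hash (seed, user_id) to a point in [0, 1) and admit the user
        if it falls below the experiment's traffic allocation."""
        digest = hashlib.md5(f"{seed}:{user_id}".encode()).hexdigest()
        return int(digest, 16) / 16**32 < allocation

    users = [f"user_{i}" for i in range(100_000)]
    a = {u for u in users if in_experiment("seed_A", u)}
    b = {u for u in users if in_experiment("seed_B", u)}

    print(f"A only:  {len(a - b) / len(users):.1%}")     # ~16%
    print(f"B only:  {len(b - a) / len(users):.1%}")     # ~16%
    print(f"both:    {len(a & b) / len(users):.1%}")     # ~4%
    print(f"neither: {1 - len(a | b) / len(users):.1%}") # ~64%

Because each experiment hashes with its own seed, the two admission decisions are independent, and the expected overlap is simply 20% of 20%, or 4%.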

In the example above, if users who see both A and B behave differently from users who see just A or just B, then the 4% overlap skews the results of both experiments. This is called an interaction effect.

If experiments A and B are mutually exclusive, Optimizely buckets users in both experiments with a single random seed that is unique to the experiment group, and distributes each user to at most one experiment. Because the seed is shared, the experiments can't overlap for the same user. If experiments A and B are mutually exclusive, the traffic allocation looks like this:

  • 20% of traffic falls in experiment A only
  • 20% of traffic falls in experiment B only
  • 60% of traffic is not in any experiment
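
Here's the same simulation adjusted for a shared group seed, a minimal sketch rather than the SDK's actual implementation. One hash assigns each user a stable bucket, and disjoint bucket ranges carve up the group's traffic:

    import hashlib

    GROUP_SEED = "search_algorithm_group"  # one seed for the whole group

    def bucket(user_id, n_buckets=10_000):
        """Map a user to a stable bucket using the group's single seed
        (MD5 stands in for the SDK's internal hash)."""
        digest = hashlib.md5(f"{GROUP_SEED}:{user_id}".encode()).hexdigest()
        return int(digest, 16) % n_buckets

    def assign(user_id):
        b = bucket(user_id)
        if b < 2_000:   # buckets 0-1,999: experiment A (20%)
            return "A"
        if b < 4_000:   # buckets 2,000-3,999: experiment B (20%)
            return "B"
        return None     # remaining 60% sees neither experiment

Because the ranges are disjoint and every experiment in the group reads the same bucket, no user can be assigned to more than one experiment.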

Optimizely also ensures mutual exclusivity between experiments in a group that run at different times. How? By assigning bucket ranges to experiments using a stratified sample of available buckets, with strata that consist of all current and previously allocated bucket ranges.
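
Here's a rough sketch of that idea; the strata and the sampling rule are illustrative assumptions, not the SDK's actual algorithm:

    import random

    def allocate(n_needed, freed_ranges):
        """Draw a new experiment's buckets proportionally from each freed
        range (stratum) instead of taking one contiguous freed block, so
        the new experiment doesn't inherit all of one stopped
        experiment's users."""
        total_free = sum(len(r) for r in freed_ranges)
        chosen = []
        for stratum in freed_ranges:
            share = round(n_needed * len(stratum) / total_free)
            chosen.extend(random.sample(stratum, min(share, len(stratum))))
        return sorted(chosen)[:n_needed]

    # Hypothetical: two earlier experiments have ended, freeing two ranges
    # of the group's 10,000 buckets.
    freed = [list(range(2_000, 4_000)), list(range(7_500, 10_000))]
    new_buckets = allocate(2_000, freed)  # ~44% from one stratum, ~56% from the other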

Best practices

To guard against any possibility of interaction effects, you might consider making all your experiments mutually exclusive. But sometimes making all experiments in the project mutually exclusive requires more traffic than is available. In this case, some experiments should overlap and some experiments should be mutually exclusive, depending on the traffic levels you need to reach significance and which parts of your code base are being tested.

Below are a few tips for when to create mutually exclusive experiments.

You're more likely to see interaction effects if:

  • You're running two experiments on the same area of an application
  • You're running two experiments on the same flow where there's likely going to be strong overlap
  • You're running two experiments that would be better run together (for instance, as a multivariate experiment) so you can measure the potential interaction

If the above points don't apply, then it's usually unnecessary to create mutually exclusive experiments, because each variation of one experiment is exposed to the other experiment proportionally. Here's a breakdown of how two overlapping experiments, with variations A/B and C/D, would interact:

  • Each user in variation (version) A or B has an equal chance of seeing variation C or D
  • Each user in variation C or D has an equal chance of seeing variation A or B
  • So if there are any (slight) interaction biases, they would even out, as long as both audiences are eligible to see both experiments.
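
A quick simulation makes this concrete; the 50/50 splits and experiment names are assumptions:

    import hashlib

    def coin(seed, user_id):
        """Independent 50/50 assignment per experiment (stand-in hash)."""
        digest = hashlib.md5(f"{seed}:{user_id}".encode()).hexdigest()
        return int(digest, 16) % 2

    users = [f"user_{i}" for i in range(100_000)]
    counts = {}
    for u in users:
        first = "AB"[coin("experiment_1", u)]
        second = "CD"[coin("experiment_2", u)]
        counts[first + second] = counts.get(first + second, 0) + 1

    # Each combination lands near 25%: users in A and users in B see C and D
    # in equal proportion, so a mild interaction effect biases both sides of
    # each experiment equally.
    for combo in sorted(counts):
        print(combo, f"{counts[combo] / len(users):.1%}")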

In other words, your experiments are less likely to have overlapping effects if they:

  • Run on different areas of your application
  • Track different goals

However, there are a few scenarios in which creating mutually exclusive experiments, or running sequential tests (waiting for one experiment to end before the next starts), is recommended. If you're concerned about interaction effects between experiments running at different times, finish all experiments in a group before creating new experiments.

For example, suppose you created a group with four experiments (A, B, C, and D), each running at 25% traffic allocation. If you stop experiment D and start a new experiment E, the results of E could be biased, because every user in E was previously given the treatment from D. Wait for experiments A, B, and C to finish before starting experiment E, so that E's traffic is evenly sampled across all previous experiments.

When making important decisions for your business, evaluate your risk tolerance for experiment overlap. Evaluate your prioritized roadmap to ensure that you are planning your variation designs, goals, and execution schedule to best meet your business needs.