- Decide whether to run mutually exclusive experiments, or to run experiments simultaneously or sequentially
- Run multiple experiments on the same page or flow simultaneously, keeping the audiences mutually exclusive
- Evaluate the effect of single or controlled changes on your conversion rates
- Avoid skewing the results of one experiment by running another experiment simultaneously
- Establish groups of experiments that are mutually exclusive among themselves, but not with the rest of your experiments
When running multiple experiments on your site, you may want to set them to be mutually exclusive -- in other words, so that visitors who are exposed to Experiment #1 never see Experiment #2, and vice versa.
Usually, the motivation is to eliminate noise or bias from your results.
Decide whether to run simultaneous experiments
To decide whether you should create mutually exclusive experiments, ask yourself how likely you are to see interaction effects between the experiments. You're more likely to see these effects if:
- You're running two experiments on the same page.
- You're running two experiments on the same flow where there's likely going to be strong overlap from one page to the next (for example, a checkout funnel or multi-step form).
- You're running two experiments that would be better run together, for instance as a multivariate experiment, to see the potential interaction.
If the above don't apply to you, it's usually unnecessary to create mutually exclusive experiments, because visitors in each variation of one test are exposed proportionally to the variations of the other. For example, here is a breakdown of how two tests (one on your homepage and one on your search results page) would interact:
- Each visitor in variation (version) A or B has an equal chance of seeing variation C or D
- Each visitor in variation C or D has an equal chance of seeing variation A or B
- So if there are any (slight) interaction biases, they would even out, as long as both audiences are eligible to see both experiments.
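To see why independent assignment washes these biases out, here is a small simulation. This is illustrative only, not Optimizely code; the 50/50 splits and the A/B and C/D labels are assumptions taken from the breakdown above:

```javascript
// Simulate two independent A/B tests: a homepage test (A vs. B) and a
// search results test (C vs. D). Each visitor is assigned to each test
// independently, so every combination (A+C, A+D, B+C, B+D) receives
// roughly 25% of visitors and no combination is over-represented.
function simulateOverlap(visitors) {
  const counts = { AC: 0, AD: 0, BC: 0, BD: 0 };
  for (let i = 0; i < visitors; i++) {
    const first = Math.random() < 0.5 ? "A" : "B";   // homepage test
    const second = Math.random() < 0.5 ? "C" : "D";  // search results test
    counts[first + second]++;
  }
  return counts;
}

// Each of the four combinations lands near 25% of total traffic.
console.log(simulateOverlap(100000));
```

Because the assignments are independent, any interaction effect is diluted equally across both experiments' variations rather than concentrated in one.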
In other words, your tests are less likely to have overlapping effects if they:
- Run on different areas of your site
- Track different goals
However, there are a few scenarios where creating mutually exclusive experiments or running sequential tests (waiting for one experiment to end before the next starts) is recommended.
These are scenarios where overlap can cause confusion when it comes time to evaluate and act on your experiment results. They include:
- Testing elements on the same page with the same goal. Consider whether to run these experiments simultaneously, sequentially, or to combine all the element changes into a single multivariate experiment to see individual and combined performance.
- Testing subsequent pages of the same funnel with the same goal. Consider whether to run these experiments simultaneously, sequentially (starting from the bottom of the funnel and moving up), or as a single multi-page experiment.
Some organizations choose to run simultaneous, mutually exclusive experiments, intending to preserve data integrity. If that’s the route you’d like to take, you can accomplish this in Optimizely by using the code we’ve provided below, but bear in mind that keeping experiments mutually exclusive increases the effort it takes to implement experiments and can significantly slow down your testing cycle.
When making important decisions for your business, evaluate your risk tolerance for experiment overlap. Evaluate your prioritized roadmap to ensure that you are planning your variation designs, goals, and execution schedule to best meet your business needs.
Mutually exclusive experiments in Optimizely
By default, Optimizely will evaluate whether to run each experiment independently. If more than one experiment matches the targeting conditions on a given page then they will all execute serially, one after another. However, you cannot assume a certain order of execution. You'll need to ensure that the experiments do not collide with each other, with respect to the changes they make to your site.
If you don't want multiple experiments to run on the same page at the same time, there are two options you can pursue to ensure your experiments are mutually exclusive:
This option requires that the Privacy setting labeled Mask descriptive names in project code and third-party integrations be unchecked. If you would like to keep this privacy setting checked, see Option 2.
There is no limit to how many experiments can be tagged as mutually exclusive.
In the example above, Experiment 1 and 2 are mutually exclusive to each other, which means a visitor will not be able to enter both. However, a visitor could enter Experiment 1 and 3; alternatively, a visitor could enter Experiment 2 and 3. This is because Experiment 3 is not tagged as mutually exclusive.
If you run many mutually exclusive experiments, you may want to prevent a small portion of visitors from entering any of them, giving you a baseline to measure your goals against. This is known as a holdout, and it can be accomplished by tagging an experiment name with '[ME] holdout'. If created, there is a 5% chance that a visitor will enter the holdout experiment.
In the example above, a visitor has a 47.5% (95% divided by 2) chance of entering Experiment 1 or 2, and a 5% chance of entering Experiment 4.
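The arithmetic above generalizes: a holdout reserves 5% of traffic, and the remainder is split evenly among the mutually exclusive experiments. A quick illustrative helper (hypothetical, not part of Optimizely) makes the calculation explicit:

```javascript
// Hypothetical helper (not Optimizely code): the chance that a visitor
// enters any one of N mutually exclusive experiments, with an optional
// 5% holdout reserved as a baseline.
function entryChance(numExperiments, hasHoldout) {
  const available = hasHoldout ? 0.95 : 1.0; // holdout keeps back 5%
  return available / numExperiments;
}

console.log(entryChance(2, true)); // 0.475 -- matches the example above
```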
If you have a large testing team or many experiments running concurrently, you may want to group experiments that share the same targeting conditions. This will help you modify their exclusivity relative to one another, and ensure that other experiments don't interfere with the group. Experiments using the '[Group_X]' tag (where X can be replaced with a character of your choice) can be tagged as mutually exclusive, as a holdout, or neither.
In the example above, if a visitor enters Experiment 5 or 6, they will not enter any other experiment in the group. The holdout, Experiment 7, is also mutually exclusive; when a holdout is used in a group, 5% of the group's traffic is held back. Finally, a visitor will be exposed to both Experiment 8 and 9 if they are not exposed to either a mutually exclusive experiment or the holdout.
- Change the expArray parameter to list the experiment IDs of all involved experiments.
- Change the curExperiment variable to the ID of the experiment that you are creating the Audience for. For example: curExperiment = 1111111111;
- Repeat this process for each mutually exclusive experiment.
If you have distinct groups of experiments that you wish to exclude from one another, then you must use the optional groupName parameter to specify a name unique to each expArray.
Once this is set up, one experiment within the group will be chosen at random for each visitor, and that visitor will be excluded from all other experiments in the group. Once bucketed, the visitor will remain in that experiment unless the audience conditions change. However, if the selected experiment's traffic allocation settings exclude a visitor from that experiment, they remain eligible for the others in the group.
This mutual exclusivity does not extend across groups; for example, if you specify a groupA and a groupB of experiments, a visitor can be bucketed into one experiment from each group. A visitor may also be bucketed into experiments outside of your groups.
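The snippet referenced above is not reproduced in this section, so here is a minimal sketch of the bucketing logic it describes. Everything in it is an assumption for illustration: the function name is invented, the expArray / curExperiment / groupName parameters simply mirror the names used in the steps above, and a plain object stands in for the per-visitor cookie a real implementation would use:

```javascript
// Illustrative sketch only -- not the code Optimizely provides.
// For each visitor, one experiment ID from expArray is chosen at random
// and remembered per group (a real implementation would persist this in
// a cookie or localStorage). The audience condition passes only when
// the remembered choice equals curExperiment, so exactly one experiment
// in the group can activate for a given visitor.
const mutexStore = {}; // stand-in for per-visitor persistent storage

function isInMutuallyExclusiveAudience(expArray, curExperiment, groupName) {
  const key = groupName || "default"; // separate groups bucket independently
  if (!(key in mutexStore) || expArray.indexOf(mutexStore[key]) === -1) {
    // First evaluation (or the remembered experiment left the group):
    // pick one experiment uniformly at random and remember the choice.
    mutexStore[key] = expArray[Math.floor(Math.random() * expArray.length)];
  }
  return mutexStore[key] === curExperiment;
}
```

Evaluated with the same groupName, this returns true for exactly one of the IDs in expArray and false for all the others, and the choice is sticky across repeated evaluations, matching the behavior described above.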
If your business requirements are not satisfied by either of these approaches, please contact support or let us know in the Optiverse Community.