This article will help you:
  • Decide whether to run mutually exclusive, simultaneous, or sequential experiments
  • Run multiple experiments on the same page or flow simultaneously, keeping the audiences mutually exclusive
  • Evaluate the effect of single or controlled changes on your conversion rates
  • Avoid skewing the results of one experiment by running another experiment simultaneously
  • Establish groups of experiments that are mutually exclusive among themselves but not with the rest of your experiments

When running multiple experiments on your site, you may want to set them up so they are mutually exclusive: visitors who are exposed to Experiment #1 never see Experiment #2, and vice versa.

Typically, the motivation is to keep one experiment from introducing noise or bias into the results of another.

Decide whether to run simultaneous experiments

To decide whether you should create mutually exclusive experiments, ask yourself how likely you are to see interaction effects between the experiments. You're more likely to see these effects if:

  • You're running two experiments on the same page.
  • You're running two experiments on the same flow where there's likely going to be strong overlap from one page to the next (for example, a checkout funnel or multi-step form).
  • You're running two experiments that are actually better run together, for instance as a multivariate experiment, to measure the potential interaction.

If none of the above applies, it's usually unnecessary to create mutually exclusive experiments, because each test's variations are exposed to the other test proportionally. For example, here is how two tests (one on your homepage, with variations A and B, and one on your search results page, with variations C and D) would interact:

  • Each visitor in variation (version) A or B has an equal chance of seeing variation C or D
  • Each visitor in variation C or D has an equal chance of seeing variation A or B
  • So if there are any (slight) interaction biases, they would even out, as long as both audiences are eligible to see both experiments (the sketch below simulates this).
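
For illustration, here is a minimal sketch (plain JavaScript, not Optimizely code; it assumes a hypothetical 50/50 bucketing in each experiment) simulating two independent experiments. Each combination of variations receives a proportional share of visitors:

// Simulate 100,000 visitors bucketed independently into two experiments.
var counts = { AC: 0, AD: 0, BC: 0, BD: 0 };
for (var i = 0; i < 100000; i++) {
  var homepage = Math.random() < 0.5 ? "A" : "B";   // homepage experiment
  var search = Math.random() < 0.5 ? "C" : "D";     // search results experiment
  counts[homepage + search]++;
}
console.log(counts); // each of the four combinations lands near 25,000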

In other words, your tests are less likely to have overlapping effects if they:

  • Run on different areas of your site
  • Track different goals

However, there are a few scenarios where creating mutually exclusive experiments, or running sequential tests (waiting for one to end before the next starts), is recommended.

Below are scenarios where overlap can cause confusion when it comes time to evaluate and act on your experiment results:

  • Testing elements on the same page with the same goal. Consider whether to run these experiments simultaneously, sequentially, or to test all the element changes in the same multivariate experiment to see individual and combined performance.
  • Testing subsequent pages of the funnel with the same goal. Consider whether to run these experiments simultaneously, sequentially (starting from the bottom of the funnel and moving up), or as a multi-page experiment.

Some organizations choose to run simultaneous, mutually exclusive experiments, intending to preserve data integrity. If that’s the route you’d like to take, you can accomplish this in Optimizely by using the code we’ve provided below, but bear in mind that keeping experiments mutually exclusive increases the effort it takes to implement experiments and can significantly slow down your testing cycle.

When making important decisions for your business, evaluate your risk tolerance for experiment overlap. Evaluate your prioritized roadmap to ensure that you are planning your variation designs, goals, and execution schedule to best meet your business needs. 

Mutually exclusive experiments in Optimizely

By default, Optimizely evaluates whether to run each experiment independently. If more than one experiment matches the targeting conditions on a given page, they will all execute serially, one after another; however, you cannot assume a particular order of execution. You'll need to ensure that the experiments do not collide with each other with respect to the changes they make to your site.

If you don't want multiple experiments to run on the same page at the same time, there are two options you can pursue to ensure your experiments are mutually exclusive:


Option 1: Project JavaScript

Project JavaScript allows you to evaluate all of your experiments’ targeting conditions before the main portion of the snippet runs. It's available to customers on select Enterprise plans.

This option requires that the Privacy setting labeled Mask descriptive names in project code and third-party integrations be unchecked. If you would like to keep this privacy setting checked, see Option 2.

Once you implement this JavaScript code from our Developer Docs, you will be able to use specific name tags to automatically modify the exclusivity of experiments relative to one another. These name tags are case-sensitive and must be placed before the experiment name. The first tag, [ME], should be used on all experiments that you would like to keep mutually exclusive.


There is no limit to how many experiments can be tagged as mutually exclusive.
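
For instance, suppose you have three experiments named as follows (hypothetical names):

  • [ME] Experiment 1
  • [ME] Experiment 2
  • Experiment 3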

In the example above, Experiments 1 and 2 are mutually exclusive to each other, which means a visitor cannot enter both. However, a visitor could enter Experiments 1 and 3, or Experiments 2 and 3, because Experiment 3 is not tagged as mutually exclusive.


Important: Mutually exclusive experiments should share the same URL Targeting and Audience conditions, and each should have Traffic Allocation set to 100%.

Advanced Users:

If you run many mutually exclusive experiments, you may want to prevent a small portion of visitors from entering any of them, giving you a baseline to measure your goals against. This is known as a holdout, and it can be accomplished by tagging an experiment name with '[ME] holdout'. If created, there is a 5% chance that a visitor will enter the holdout experiment.
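
For instance, suppose you add a holdout to the earlier example (hypothetical names):

  • [ME] Experiment 1
  • [ME] Experiment 2
  • Experiment 3
  • [ME] holdout Experiment 4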

In the example above, a visitor has a 47.5% chance (95% divided by 2) of entering Experiment 1 or Experiment 2, and a 5% chance of entering the holdout, Experiment 4.

If you have a large testing team or many experiments running concurrently, you may want to group experiments that share the same targeting conditions. This lets you modify their exclusivity relative to one another and ensures that other experiments don't interfere with the group. Experiments using the '[Group_X]' tag (where X can be replaced with a character of your choice) can additionally be tagged as mutually exclusive, as a holdout, or neither.
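
For instance, a group might contain five experiments named as follows (hypothetical names, assuming both tags are placed before the experiment name):

  • [Group_A] [ME] Experiment 5
  • [Group_A] [ME] Experiment 6
  • [Group_A] [ME] holdout Experiment 7
  • [Group_A] Experiment 8
  • [Group_A] Experiment 9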

In the example above, if a visitor enters Experiment 5 or 6, they will not enter any other experiment in the group. The holdout, Experiment 7, is also mutually exclusive; when a holdout is used in a group, 5% of the group's traffic is held back. Finally, a visitor will be exposed to both Experiments 8 and 9 if they are not exposed to either of the [ME] experiments or the holdout.

Option 2: Use custom JavaScript audiences for mutually exclusive experiments

You can use a custom JavaScript audience condition, available on certain Optimizely plans, to ensure that only one experiment within a particular group will execute at a time.

To ensure mutual exclusivity between experiments, you must create an Audience for each experiment using a Custom JavaScript Audience Condition. Paste this code into the custom JavaScript code box:

/* Mutually exclusive experiments via a custom JavaScript audience.
 * Traffic allocation weighting is confined to the context of each
 * individual experiment. In other words, each experiment has an
 * equal chance of bucketing the visitor, regardless of whether the
 * chosen experiment has a large exclusion percentage due to
 * traffic allocation. */

expArray = [experimentId1, experimentId2];
/* [expArray] - List the IDs of all mutually exclusive experiments.
 * If left empty by changing this line to "expArray = []", all active
 * experiments from the project will be used - this will update every
 * time the snippet runs & experiments are started & paused. */

curExperiment = experimentId1;
/* [curExperiment] - Set the id for the current experiment being
 * evaluated */

groupName = "groupA";
/* [groupName] - Optional. Needed if excluding multiple groupings */

chooseRandom = 1;
/* 1 - YES - if no match is found pick at random from expArray
 * 0 - NO - if no match is found, pick this experiment */

logging = 0;
/* 1 - YES   0 - NO */

/*-- Do not modify below this line --*/

/* Iterate over current bucket mappings and set the global variable
 * to the experiment the user is already included in. */

groupName = window.groupName || "groupA";

// Safety: namespace the flag stored on the optimizely object
groupName = "__" + groupName;

/* If expArray is empty, fill it with every experiment in the project */
if (expArray.length == 0) {
  var allExp = window.optimizely.allExperiments || {};
  for (var expId in allExp) {
    if (allExp.hasOwnProperty(expId) && allExp[expId].hasOwnProperty("enabled")) {
      expArray.push(expId);
    }
  }
}

var cookieMatch = document.cookie.match("optimizelyBuckets=([^;]*)");
if (cookieMatch) {
  for (var i = 0; i < expArray.length; i++) {
    /* Check 3 things to find a match and break the loop:
     * 1 - The experiment ID is in the cookie
     * 2 - The experiment ID is set to enabled
     * 3 - The value of the experiment's variation ID is not zero */
    if (cookieMatch[1].indexOf(expArray[i]) > -1 &&
        window.optimizely.allExperiments[expArray[i]] &&
        window.optimizely.allExperiments[expArray[i]].hasOwnProperty("enabled") &&
        decodeURIComponent(cookieMatch[1]).indexOf('"' + expArray[i] + '":"0"') < 0) {
      optimizely[groupName] = expArray[i];
      break; /* we found what we're looking for, so end the loop */
    }
  }
}

if (logging) {
  if (optimizely[groupName]) {
    console.log("Experiment " + optimizely[groupName] + " is active - No others in the array will be distributed.");
  } else {
    console.log("No active experiments. " + curExperiment + " is eligible for distribution.");
  }
}

/* If the global variable hasn't been set, set it now to curExperiment
 * or a random choice from expArray. To prioritize a certain experiment
 * over another instead of random choice, change the Math.floor function. */
optimizely[groupName] = optimizely[groupName] || (chooseRandom ? expArray[Math.floor(Math.random() * expArray.length)] : curExperiment);

/* Check if the current experiment matches the global experiment. Return boolean */
optimizely[groupName] == curExperiment;

After implementing the above code in a custom JavaScript audience condition, make the following changes for each experiment:

  1. Change the expArray parameter to list the experiment IDs of all involved experiments (see the example after these steps).
  2. Change the curExperiment variable to the ID of the experiment you are creating the Audience for, for example:
    curExperiment = 1111111111;
  3. Repeat this process for each mutually exclusive experiment.
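
For instance, with two experiments whose IDs are 1111111111 and 2222222222 (hypothetical IDs), each experiment's audience would share the same expArray but set its own curExperiment:

/* Audience condition for the first experiment */
expArray = [1111111111, 2222222222];
curExperiment = 1111111111;

/* Audience condition for the second experiment */
expArray = [1111111111, 2222222222];
curExperiment = 2222222222;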

Optional Changes:

If you have distinct groups of experiments that you wish to exclude from one another, then you must use the optional groupName parameter to specify a name unique to each expArray.
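
For instance, two independent groups might be configured like this (hypothetical IDs):

/* Audiences for the experiments in the first group */
expArray = [1111111111, 2222222222];
groupName = "groupA";

/* Audiences for the experiments in the second group */
expArray = [3333333333, 4444444444];
groupName = "groupB";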

Once this is all set up, one experiment within the group will be chosen randomly to execute for each visitor, and the visitor will be forcibly excluded from all other experiments in the group. Once included in an experiment, the visitor will continue to be included in it unless changes are made to the audience conditions. However, if the traffic allocation settings of the selected experiment exclude someone from that experiment, they remain eligible for the others in the group.

This mutual exclusivity does not extend across groups: if you specify a groupA and a groupB of experiments, a visitor can be bucketed into one experiment from each group. A visitor may also be bucketed into experiments outside of your groups.


If your business requirements are not satisfied by either of these approaches, please contact support or let us know in the Optiverse Community.