This article will help you:
  • Evaluate whether to change an experiment that is already running
  • Understand the impact of changing a running experiment
  • Pause experiments and start new ones, instead of modifying running experiments

Though it is not forbidden by the Optimizely Editor, we strongly recommend that you do not make changes to an active experiment. What types of changes do we recommend against?

  • Editing active variations after an experiment has started

  • Adding new variations to a running experiment, or removing variations

  • Adding, removing, or changing Audiences, URL Targeting, or Traffic Allocation for a running experiment

Why not change a running experiment?

While an experiment is running, we collect conversion data for each variation over the entire run and compare it to the control group's conversion data for the same span of time. If a change is made midway through the experiment, the effects of that change can only be measured from that point forward.
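To make this concrete, here is a minimal sketch with hypothetical numbers (it does not reflect Optimizely's actual statistics engine). If the variation converts at one rate before a midway edit and a different rate afterwards, the rate reported for the whole run is a blend of the two, and it describes neither period accurately.

```python
def pooled_rate(rate_before, rate_after, visitors_before, visitors_after):
    """Conversion rate reported for the whole run, blending both periods."""
    conversions = rate_before * visitors_before + rate_after * visitors_after
    return conversions / (visitors_before + visitors_after)

control_rate = 0.10       # control converts at 10% throughout
variation_before = 0.105  # original change: a 5% relative lift over the control
variation_after = 0.098   # hypothetical: after the midway edit, the variation underperforms

# 10,000 visitors in each half of the run
blended = pooled_rate(variation_before, variation_after, 10_000, 10_000)
print(f"Reported variation rate: {blended:.4f}")                        # 0.1015
print(f"Reported lift vs. control: {blended / control_rate - 1:+.1%}")  # +1.5%

# The single reported number (+1.5%) describes neither the original change (+5%)
# nor the edited variation (-2%), so you cannot tell which change drove the result.
```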

Why is this a problem? Suppose a change you made improves conversions by 5%. You take note of that change and then decide to add another change to the same variation that you think will have a similar effect.

All of a sudden, your conversion rate drops back to the level of the control group. Now you don't know whether the drop happened because you made the new change or because the original change actually performed worse than the numbers initially indicated.

Now let's say the conversion rate didn't drop all the way down to the control group rate, but dipped to a 2% improvement. Again, you can't be sure what caused that dip. You might read it as an overall improvement, but it is possible that the second change actually has a negative effect on conversions and that decrease is being masked by the positive effect of the first change.
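As a rough back-of-the-envelope sketch (hypothetical numbers, and assuming the two effects combine roughly additively), the masking works like this:

```python
# All lifts are relative to the control conversion rate.
lift_first_change = 0.05         # original change: +5%, observed before the edit
lift_observed_after_edit = 0.02  # what you see once the second change is added: +2%

# If the two changes combine roughly additively, the second change's own effect is:
implied_lift_second_change = lift_observed_after_edit - lift_first_change
print(f"Implied effect of the second change: {implied_lift_second_change:+.0%}")  # -3%

# The variation still looks like a winner (+2% overall), yet the second change is
# quietly costing you conversions; its harm is hidden by the first change's gain.
```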

Best practices

We recommend that you pause the existing experiment and start a new one whenever you want to make a change. This way, you do not contaminate the data from either change. If you want to run a new experiment similar to an existing one, you can also duplicate the experiment; the results are not duplicated with it.

We also recommend that if you intend to pause a variation, you treat that decision as final and do not later use its results to compare against the variations that are still running. If any event impacts all variations after the pause, its effect will not be reflected in the paused variation's data.

These best practices apply to Optimizely and to all A/B testing tools.