
Plan experiments that make an impact

This article will help you:
  • Design and scope an effective experiment
  • Create a basic or advanced experiment plan for an individual test
  • Communicate your plan to external stakeholders, including designers and developers
  • Create a rigorous QA checklist 

So, now you have a prioritized list of ideas for optimization. Great! Your next step is to implement the top ideas in that list. For many teams, the challenge at this stage is planning the scope, design, and implementation of a test with multiple stakeholders.

In Optimizely Testing, an experiment plan can help you scope and launch individual experiments. You’ll estimate the cost of a test in terms of traffic and time -- and weigh that cost against the potential value it will yield.

To turn a line item on your roadmap into a launchable experiment or campaign, create a single deliverable that covers the 5Ws + How:

  • Why are you running this experiment? (hypothesis)
  • When and where will your variations run?
  • Who do you want to see this experiment?
  • What changes does your variation make?
  • How are you measuring success?

Share it with the strategists, designers, developers and approvers involved in your optimization efforts to secure approval and improve visibility.

This article discusses the process of scoping an experiment and creating a basic or advanced test plan. Use it to design and implement individual ideas and communicate with stakeholders.

What you need to get started:
Materials to prepare
    • Experiment hypothesis
    • Business goals
    • Variation descriptions (wireframes or screenshots)
    • Summary of all technical and design assets needed for the experiment
    • Parameters for significance and lift that indicate the change should be implemented permanently

People and resources
Actions you'll perform 
    • Create a test plan document
    • Create a rigorous QA checklist
    • Review and update plan with stakeholders
    • Confirm scope of test
    • Define primary, secondary, and monitoring goals
    • Confirm stakeholders who will create required resources
    • Document responsibilities and deadlines (in Kanban, Gantt chart, or another internal method)
    • Finalize test plan


Deliverables


What to watch out for

    • Ill-defined scope
    • Lack of true hypothesis or goals
    • Lack of executive buy-in
    • Missing screenshots
    • Poor understanding of resource needs
    • Inaccurate effort estimates
    • Inadequate documentation for QA
    • Plan not shared with the proper stakeholders
    • Lack of adherence to experiment plan when building the test
 
Tip:

To get started, download this template to create your experiment plan.

This Test Idea Worksheet can also help you visualize and design your experiment. For more downloadable resources, check out the Optimizely Testing Toolkit.

Read on to learn how to scope your experiment for the test plan. This article focuses on experiment design in Testing, but many of the same principles apply to campaigns in Personalization.

Create a basic experiment plan

A basic plan helps you implement a line item in your roadmap. If you have a lean team that makes most strategic and design decisions about testing on its own, use this basic plan to manage the project and set expectations for external stakeholders.

The 5Ws + How establish the fundamental intentions of the proposed test for developers and designers who help you execute the plan. Provide all details needed to execute the six steps to build an experiment.

The plan also standardizes your testing practice and streamlines the approval process with stakeholders outside of your team. Create a list of all QA visitor use cases -- all the ways a visitor can arrive at or navigate through the experiment -- along with the expected result for each. Your QA team will use this list to evaluate the experiment before launch.

Create this plan as a shareable document that multiple stakeholders can reference: a presentation slide, an email template, or a wiki page that covers all basic information about the test. Strategic planners use this document to communicate the concept to the designers and developers responsible for implementing the experiment.

Click here to download a template for a basic test plan. For maximum visibility, link your individual test plans to your prioritized list of ideas.
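
For illustration, here is a minimal sketch of what a basic plan might capture, expressed as a Python dictionary. The field names and example values are hypothetical, not a required Optimizely format; adapt them to whatever shareable format your team uses (slide, wiki page, or email).

```python
# Hypothetical skeleton for a basic experiment plan covering the 5Ws + How.
basic_plan = {
    "why (hypothesis)": "Shortening the checkout form will increase completed orders",
    "when_and_where": "Checkout page (/checkout), starting next sprint, run to significance",
    "who": "All visitors who reach the checkout page",
    "what_changes": "Variation 1 removes optional fields; see attached wireframe",
    "how_measured": {
        "primary_goal": "Order confirmation pageview",
        "secondary_goals": ["Checkout form submission clicks"],
        "decision_rule": "Implement permanently if lift >= 5% at >= 95% significance",
    },
    "qa_use_cases": ["Visitor arrives from cart", "Visitor reloads mid-checkout"],
}
```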

Use MDE to scope your experiment

Once you decide on a hypothesis and identify where you want to run your test, you’ll need to scope your experiment. How many variations should you create? What kind of test - A/B, multivariate, or multi-page - should you run?

For most organizations, a key part of the cost calculation is how long it takes an experiment to reach significance. When you evaluate this factor, you’re implicitly considering opportunity cost: what other (potentially more impactful) ideas you might be testing at that time. One tool that helps you answer these questions is a statistical calculation called the minimum detectable effect (MDE). MDE helps you connect the cost calculation to your experiment design.

The MDE is an estimate of the smallest improvement you want to be able to detect; in other words, the MDE determines how “sensitive” a test is. It helps you estimate how long a test will take, given a certain baseline conversion rate, statistical significance, and traffic allocation. So, let’s say that your baseline conversion rate is 15% and you’d like to detect a 10% lift (this is your MDE) with 95% statistical significance. According to Optimizely’s Sample Size Calculator, you’d likely need 8,000 visitors per variation (including the original) to detect that lift with 95% significance.

Of course, you won’t know the actual improvement in advance -- if you did, you wouldn’t be running the test, right? But estimating the minimum lift you’d like to detect to a given level of certainty helps you establish boundaries for how much traffic or time you’ll invest. This calculation enables you to plan and scope your test more accurately.

Let’s take the example one step further. Suppose you design an experiment with four variations and your site averages 10,000 unique visitors per week. If you show this experiment to 100% of visitors, it would likely take about 3.2 weeks to reach significance.

8,000 visitors per variation × 4 variations = 32,000 visitors
32,000 visitors ÷ 10,000 visitors per week = 3.2 weeks
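
If you want to run this arithmetic yourself, here is a minimal sketch of the calculation in Python. It uses the classical two-proportion z-test approximation with a hypothetical 80% power, so its numbers will not exactly match Optimizely’s Sample Size Calculator or Stats Engine; treat it only as a rough scoping aid.

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variation(baseline, mde, significance=0.95, power=0.80):
    """Rough visitors needed per variation to detect a relative lift (MDE)
    over a baseline conversion rate, via a two-proportion z-test approximation.
    This is not Optimizely's Stats Engine; use it only for scoping."""
    p1 = baseline
    p2 = baseline * (1 + mde)                  # conversion rate implied by the MDE
    z_alpha = NormalDist().inv_cdf(1 - (1 - significance) / 2)
    z_beta = NormalDist().inv_cdf(power)
    pooled = (p1 + p2) / 2
    numerator = (z_alpha * (2 * pooled * (1 - pooled)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

def weeks_to_run(per_variation, variations, weekly_visitors, allocation=1.0):
    """Estimated weeks to fill every variation, given weekly traffic and the
    share of traffic allocated to the experiment."""
    return (per_variation * variations) / (weekly_visitors * allocation)

# The article's example: 15% baseline, 10% MDE, 4 variations, 10,000 visitors/week.
n = sample_size_per_variation(baseline=0.15, mde=0.10)
print(n, "visitors per variation")
print(round(weeks_to_run(n, variations=4, weekly_visitors=10_000), 1), "weeks")
```

With different assumptions about power and the underlying statistical model, the visitor count will differ from the 8,000 figure above, but the scaling behavior is the same: more variations or a smaller MDE means more traffic and more time.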

At this point you might ask yourself:

  • Are the results of this experiment likely valuable enough to justify the amount of traffic or time?
  • Should I reduce the number of variations to speed up my test? If so, how would I re-design this experiment with fewer variations?
  • Should I increase the drama - or degree of difference - between the variation and original to reach statistical significance sooner and speed up my test?
  • How can I create variations that are focused on maximizing lift in my primary goal?

BEST PRACTICES

Make decisions about the sensitivity of your experiment based on its potential impact on your business goals. You need less traffic to detect a large improvement (change in lift over the baseline) and more traffic to detect a small one. Consider, for example, the impact of a large hero banner at the top of your page compared to the small “next” button in the corner. The baseline conversion rates of these two features are likely very different. If you make a dramatic change - to the hero, for instance, or if you re-structure a flow on your site - you might expect a 10-15% lift. If you alter the color of the button, you might expect a 5% lift or less. The expected impact can help you decide your MDE: how dramatic is the change, and how much traffic are you willing to invest?

Many programs accept a less sensitive test (a higher MDE) in exchange for speed. But your appetite for a lower MDE may increase when a conversion event is directly connected to revenue: a low-MDE experiment requires more traffic, but even small lifts in revenue-generating goals can make a big impact.
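
To make that trade-off concrete, you can reuse the hypothetical sample_size_per_variation sketch from the scoping section above. The exact counts depend on the statistical model, but the scaling holds:

```python
# Halving the MDE roughly quadruples the traffic needed per variation.
n_high_mde = sample_size_per_variation(baseline=0.15, mde=0.10)  # detect a 10% relative lift
n_low_mde = sample_size_per_variation(baseline=0.15, mde=0.05)   # detect a 5% relative lift
print(n_high_mde, n_low_mde, round(n_low_mde / n_high_mde, 1))   # ratio is close to 4x
```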

Use the estimated MDE as a guide rather than an exact prediction. The whole concept of testing is based on the fact that you don’t know what effect a given change will generate. Instead of trying to pinpoint the MDE, use the calculation as a guide: to set boundaries on the time you’re willing to invest and the value you expect to generate.  

Design impactful variations. If traffic is a concern, consider limiting your variation scope to changes that directly influence the primary conversion event.

 
Note:

To learn more, check out this article on using MDE to prioritize tests.

The roadmap template in this article also helps you scope your test. It calculates a “minimum number of days” to run your experiment based on the baseline conversion rate, MDE, and statistical significance.

Define primary, secondary, and monitoring goals

When you create your experiment plan, decide how you'll measure success. In Optimizely, you'll set a primary goal that measures how your changes affect your visitors' behaviors. Consider setting secondary and monitoring goals as well to gain greater visibility into your customers' behaviors and make sure the lift you see sets your program up for long-term success. 

To learn more, read this article on primary, secondary, and monitoring goals.

Decide what type of experiment to run

The type of experiment you run depends on how you expect your changes to impact your primary conversion event.

Run an A/B test when improvement in your primary goal can be attributed to a single change in your user experience.

If it’s important for your business to precisely measure how multiple changes are interacting with each other and influencing conversions, create a multivariate test that evaluates each combination of variables against all others.

If you’re measuring success in conversions across a series of pages, design a multi-page test to measure how changes affect the way visitors move through each stage of the funnel.

Finally, if you want to test multiple changes but don’t want to run a full multivariate test - which can be costly in terms of time and traffic - consider a blended A/B/n approach. Test multiple versions of your page (A, B, and n additional versions) without comparing all possible combinations of variations against each other. This test type is more economical, while still allowing you to attribute lift to certain changes.
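
As a quick illustration of the traffic difference, consider a hypothetical test of two page elements; the element counts below are made up, but the arithmetic is the point:

```python
# Hypothetical: a hero image with 3 options and a CTA label with 2 options.
hero_options = 3
cta_options = 2

# Full multivariate test: every combination of the two elements gets its own bucket.
mvt_buckets = hero_options * cta_options                    # 3 * 2 = 6

# A/B/n approach: the original plus each individual change tested on its own.
abn_buckets = 1 + (hero_options - 1) + (cta_options - 1)    # 1 + 2 + 1 = 4

# Each bucket needs the full per-variation sample size, so traffic cost
# scales directly with the number of buckets.
print(mvt_buckets, abn_buckets)
```

The gap widens quickly as you add elements, which is why the A/B/n approach is attractive when time and traffic are constrained.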

Now that you’ve scoped your experiment based on impact and MDE, you’re ready to create an experiment plan.

Create an advanced experiment plan

As an optimization organization matures, the experiment plan evolves into a more collaborative document. Companies with a strong optimization culture are likely to draw resources from all over the company; stakeholders on design and development teams are also focused on testing and can provide feedback on how to execute an idea more efficiently and effectively.

The advanced test plan gathers decisions from all these invested parties into a single deliverable. To expand the scope of your planning document, use the 5Ws + How described above and also consider including:

  • The actual code used for implementation
  • Alignment with sprint planning
  • The experiment ID
  • Roles and responsibilities (Gantt chart)
  • Explicit links between goals and business value (for example: “we’re tracking this goal because it directly influences the metric our team measures”)

Your advanced plan provides a detailed summary of the motivations and mechanics of your experiment or campaign. When shared, it helps build visibility across the organization and scale your A/B testing strategy among a broader set of stakeholders by establishing a standardized process.

Make a QA Checklist

When you plan your experiment, include your QA team so they can create a detailed QA checklist that lists all the components of the experiment and the use cases they'll evaluate. By looping in the QA team early, you'll build a QA document that covers all the cases to check against -- and eliminates those that don't need to be checked, based on the way the experiment is built. Use the checklist to build rigor and efficiency into your QA process from the beginning.

 
Tip:

Download this QA Checklist template to outline your own rigorous QA process. Modify it as needed to fit the processes used at your company.


Your QA checklist should include:

  • All goals that were added, as well as how each is triggered
  • All functionality that’s been added (for example, a new button)
  • Common use cases including expected user flows to and from the page
  • Audiences that are eligible and ineligible
  • URLs where the experiment should run
  • Sample workflow to fire a goal (especially for custom events)

Your QA team will grade each use case pass or fail and set the experiment live only after every item on the list has passed the test.
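
If your team tracks QA status in a script or spreadsheet export, a structure like the following Python sketch captures the same idea; the class and field names are hypothetical and not part of any Optimizely API.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    description: str        # e.g. "Returning visitor lands on /pricing from email"
    expected_result: str    # what the visitor should see or which goal should fire
    passed: bool = False    # QA flips this to True once the case is verified

def ready_to_launch(checklist: list[UseCase]) -> bool:
    """The experiment goes live only when every use case has passed."""
    return all(case.passed for case in checklist)

checklist = [
    UseCase("New visitor arrives on the test URL", "Variation renders; pageview goal fires"),
    UseCase("Visitor outside the target audience arrives", "Original experience; no goal fires"),
    UseCase("Visitor clicks the new button", "Custom click event fires exactly once"),
]
print(ready_to_launch(checklist))  # False until QA marks every case passed
```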

If you’re part of a more sophisticated team with separate development and production environments, notify the development team before running the experiment on the site.

Maintaining separate environments can help you mitigate the risk of accidentally deploying an unfinished experiment to your live site. If you have separate environments, we recommend you build experiments and QA by the following process:

  1. Make sure each environment has its own Optimizely snippet in the head tag
  2. Build your experiment in the development environment
  3. QA the development environment
  4. Push the experiment live in the QA environment to confirm goal firing on the Results page and all analytics data collection
  5. Duplicate the experiment into your production environment
  6. Set a test cookie so only you can see the experiment on the live site
  7. Make the experiment live in the production environment and QA

Your QA team will use this document to review the experiment before you publish it live to your visitors.