Optimizely Knowledge Base

Create an advanced experiment plan and QA checklist

THIS ARTICLE WILL HELP YOU:
  • Create a full experiment plan to manage design and implementation
  • Create a QA checklist to build rigor and efficiency into your QA process

A full experiment plan gathers decisions from stakeholders into a single, collaborative document. It provides a detailed summary of the motivations and mechanics of your experiment or campaign. When shared, it helps build visibility across the organization and scale your experimentation strategy by establishing a standardized process.

A QA checklist helps you build rigor and efficiency into your experiment QA process, so your experiments and campaigns work the way you intend.

Requirements
Materials to prepare
    • Experiment hypothesis
    • Business goals
    • Variation descriptions (wireframes or screenshots)
    • Summary of all technical and design assets needed for the experiment
    • Significance and lift thresholds that, once reached, indicate the change should be implemented permanently

People and resources
Actions you'll perform 
    • Create a test plan document
    • Create a rigorous QA checklist
    • Review and update plan with stakeholders
    • Confirm scope of test
    • Define primary, secondary, and monitoring goals
    • Confirm stakeholders who will create required resources
    • Document responsibilities and deadlines (in a Kanban board, Gantt chart, or other internal tool)
    • Finalize test plan
Deliverables
    • Test plan document containing:
      • All details for building an experiment
      • Technical requirements
      • Scope of the experiment
      • Creative assets or wireframes
      • Screenshots of variations
    • QA checklist
What to watch out for
    • Ill-defined scope
    • Lack of true hypothesis or goals
    • Lack of executive buy-in
    • Missing screenshots
    • Poor understanding of resource needs
    • Inaccurate effort estimates
    • Inadequate documentation for QA
    • Plan not shared with the proper stakeholders
    • Lack of adherence to experiment plan when building the test

If your team is just starting to run its first few tests, check out this basic experiment plan.

Template: Experiment design document

Download this experiment design template to keep track of the pieces for implementing your experiment.

Your experiment design document should include:

  • The actual code used for implementation

  • Alignment with sprint planning

  • Experiment ID

  • Roles and responsibilities (Gantt chart)

  • Primary, secondary, and monitoring goals for the experiment

  • An explicit link between each goal and business value (for example: we track this goal because it directly influences the metric our team measures)

You may also want to use minimum detectable effect to choose what type of test to run.
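To make minimum detectable effect concrete, here is a rough sketch of a per-variation sample-size estimate for a two-proportion test. It uses the standard z-values for 95% confidence and 80% power; the baseline conversion rate and relative MDE in the example are illustrative assumptions, not recommendations.

```javascript
// Sketch: estimate visitors needed per variation before an experiment
// can detect a given relative lift. Assumes a two-sided test at 95%
// confidence with 80% power (hence the fixed z-values).
function sampleSizePerVariation(baselineRate, relativeMde) {
  var zAlpha = 1.96; // two-sided 95% confidence
  var zBeta = 0.84;  // 80% power
  var p1 = baselineRate;
  var p2 = baselineRate * (1 + relativeMde); // expected variation rate
  var variance = p1 * (1 - p1) + p2 * (1 - p2);
  var delta = p2 - p1;
  return Math.ceil(Math.pow(zAlpha + zBeta, 2) * variance / (delta * delta));
}

// A 5% baseline rate and a 20% relative MDE (5% → 6%) needs roughly
// 8,000+ visitors per variation.
var needed = sampleSizePerVariation(0.05, 0.20);
```

A small MDE drives the required sample size up quickly, which is why teams with lower traffic often choose bolder variations or longer-running tests.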

Template: QA checklist

Download this QA Checklist template to outline your team's QA process. Your team will use this document to review the experiment before you publish it live to your visitors.

When you plan an experiment, include the QA team so they can create a QA checklist. By looping in the QA team early, you'll build a rigorous, efficient process that prepares for every case worth checking and eliminates those that don't need to be checked.

Your QA checklist should include:

  • All goals or events in the experiment, and how each is triggered

  • All functionality that’s been added (for example, a new button)

  • Common use cases including expected user flows to and from the page

  • Audiences that are included or excluded

  • URLs where the experiment should run

  • Sample workflow to fire a goal (especially for custom events)
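For the last item above, a sample workflow for a custom event can be as simple as pushing to the Optimizely queue from the page. This is a self-contained sketch: in a real page the Optimizely snippet drains the queue, and here `window` is stubbed so the example runs anywhere; the event name "cta_click" is a hypothetical choice for illustration.

```javascript
// Stub window so the sketch is runnable outside a browser; in a real
// page this line is unnecessary.
var window = (typeof globalThis !== "undefined" && globalThis.window) || {};

// Standard queue pattern: safe to call before or after the snippet loads.
window.optimizely = window.optimizely || [];

// Fire a custom event; "cta_click" is a hypothetical event name that
// would need to match an event configured in your project.
window.optimizely.push({ type: "event", eventName: "cta_click" });
```

During QA, triggering this workflow and then confirming the event on the Results page verifies both the page code and the goal configuration.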

Your QA team should grade each use case as pass or fail. They'll push the experiment live once everything on the list has passed.
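The pass/fail gate described above can be sketched as a simple data structure and check; the checklist items below are hypothetical examples, not required entries.

```javascript
// Hypothetical QA checklist entries graded by the QA team.
var qaChecklist = [
  { item: "Primary goal fires on button click", status: "pass" },
  { item: "Excluded audience never sees the variation", status: "pass" },
  { item: "Variation renders correctly on mobile Safari", status: "fail" }
];

// The experiment is ready to publish only when every item has passed.
function readyToPublish(items) {
  return items.every(function (entry) { return entry.status === "pass"; });
}
```

Keeping the checklist as explicit records like this makes it easy to see at a glance which item is blocking launch.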

QA for separate development and production environments

If you have access to separate development and production environments, notify the development team before running the experiment on the site.

Maintaining separate environments helps you mitigate the risk of accidentally deploying an unfinished experiment to your live site. If you have separate environments, we recommend building and QAing experiments with the following process:

  1. First, make sure each environment has its own Optimizely snippet in the head tag.

  2. Build your experiment in the development environment.

  3. QA the development environment.

  4. Push the experiment live in the QA environment. Confirm that goals fire on the Results page and that all analytics data is collected.

  5. Duplicate the experiment into your production environment.

  6. Set a test cookie so only you can see the experiment on the live site.

  7. Push the experiment live in the production environment and QA.
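The test cookie in step 6 can be set with a one-line script, and an Optimizely audience condition can then target only visitors carrying it. The cookie name "qa_preview" and the one-day lifetime below are illustrative assumptions, not Optimizely requirements.

```javascript
// Build a cookie string for a short-lived QA preview cookie. In a
// browser you would assign it with:
//   document.cookie = buildQaCookie("qa_preview", "true", 1);
// The name and lifetime are arbitrary illustrative choices.
function buildQaCookie(name, value, days) {
  var expires = new Date(Date.now() + days * 864e5).toUTCString();
  return name + "=" + value + "; expires=" + expires + "; path=/";
}
```

Pairing this with an audience condition that requires the cookie keeps the production experiment invisible to real visitors while your team completes the final QA pass.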