Relevant products:
  • Optimizely X Web Experimentation
  • Optimizely X Full Stack
  • Optimizely X Mobile
  • Optimizely X OTT

This article will help you:
  • Learn how Optimizely X Web Experimentation calculates results to enable business decisions
  • Understand the definitions and formulas behind result metrics

The Results page at a glance

Optimizely's Results page helps you measure success in an experiment. Dive into each metric and variation to see how visitors respond to changes you made to your site.

What to watch out for

  • If you add more than five metrics to an experiment, the additional metrics may take longer to reach statistical significance
  • Visitors are bucketed into variations by chance according to the traffic distribution percentage you set, so you may not see the exact same number of visitors in each variation
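
To picture why the visitor counts differ, here is a minimal TypeScript sketch of weighted random assignment. This is an illustration only, not Optimizely's actual bucketing code (which, among other things, keeps returning visitors in the same variation):

    // Illustration only (not Optimizely's real implementation): assign each
    // visitor to a variation at random, weighted by the traffic distribution.
    type Split = { variation: string; weight: number }; // weights sum to 1

    function bucket(splits: Split[]): string {
      let roll = Math.random();
      for (const { variation, weight } of splits) {
        roll -= weight;
        if (roll < 0) return variation;
      }
      return splits[splits.length - 1].variation; // guard against rounding error
    }

    // Even a 50/50 split over 15,000 visitors rarely lands at exactly 7,500 each.
    const splits: Split[] = [
      { variation: "original", weight: 0.5 },
      { variation: "variation_1", weight: 0.5 },
    ];
    const counts: Record<string, number> = { original: 0, variation_1: 0 };
    for (let i = 0; i < 15000; i++) counts[bucket(splits)]++;
    console.log(counts); // e.g. { original: 7431, variation_1: 7569 }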

Optimizely's Results page is powered by Stats Engine. It provides a data-rich picture of how your visitors interact with your site. Use it to measure success in an experiment and learn about visitors to your site.

This article walks through the Results page for Optimizely X. If you're using Optimizely X Web Personalization, there's a slightly different Results page.

Here's what you'll see in the left-hand navigation: 

  • Options to pause, preview, or archive the experiment

  • The date when changes were last published

  • Number of days running

  • Audience(s) targeted in the experiment

  • Pages included in the experiment

  • Number of visitors

The summary and metric modules provide an in-depth view of your visitors' behavior on your site. We'll discuss those below. You can also learn how to interpret the results you see in Optimizely.

All results in Optimizely X are in local time, according to the time zone set on your machine.

Find the Results page

Here are two ways to find the Results page:

  • Experiments dashboard > Results

  • Manage Experiment dashboard > View Results

Modules

The Results page provides a high-level summary and a module for each metric attached to your experiment. We'll walk through the summary and modules below. You'll use them to check which variations are winning, losing, or inconclusive.

Summary

The summary provides a high-level overview of the experiment. It lets you compare how each variation performs on the primary metric relative to the original.

Here's what you see once visitors enter your experiment:

  • Visitors: Optimizely shows the number of unique visitors who've been bucketed into each variation.

    In this example, 7,406 visitors (or 32.88% of the visitors in this experiment) have seen the original variation and 7,542 visitors have seen the "CTA Changed" variation.

  • Improvement: The summary also shows the improvement to the primary metric (in this example, Sample Size Calculator CTA Click) for each variation, compared to the baseline.

    In this example, clicks to the Sample Size calculator fell by 38.63% in the "CTA Changed" variation and rose by 126.12% in the "CTA Higher Up" variation.
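
    The improvement figure is a relative change, not a difference in percentage points. As a purely hypothetical illustration (these conversion rates are invented, not taken from the experiment above): if the original converts at 10.0% and a variation converts at 6.14%, the improvement is (0.0614 − 0.10) / 0.10 ≈ −38.6%.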

Metrics

Below the summary, you'll see results for each metric that you added to your experiment. The primary metric is always at the top.

  • Unique Conversions (or Total Conversions): By default, Optimizely shows the number of unique visitors who triggered the event. Optimizely deduplicates conversions, so a single visitor who triggers the same event multiple times is counted just once.

    To see total conversions (not deduplicated), click the Uniques dropdown and select Totals.

  • Conversion Rate (or Conversions per Visitor): By default, Optimizely shows the conversion rate: the percentage of unique visitors in the variation who triggered the event.

    In the Totals view, you'll see Conversions per Visitor: the average number of conversions per visitor in the variation.

  • Improvement: Optimizely displays the relative improvement in conversion rate for the variation over the baseline, as a percentage. For example, if the baseline conversion rate is 5% and the variation conversion rate is 10%, the improvement for that variation is 100% (see the sketch after this list).

  • Confidence interval: The confidence interval measures uncertainty around the improvement. Stats Engine provides a range of values where the improvement for a particular variation actually lies. It starts out wide; as Stats Engine collects more data, the interval narrows to show that certainty is increasing.

    Once a variation reaches statistical significance, the confidence interval always lies entirely above or below 0.

    A confidence interval can be statistically significant and positive, statistically significant and negative, or not yet conclusive.

  • Statistical significance: Optimizely shows you the statistical likelihood that the improvement is due to changes you made on the page, not chance. Until Stats Engine has enough data to declare statistical significance, the Results page will state that more visitors are needed and show you an estimated wait time based on the current conversion rate.
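
To make these definitions concrete, here is a minimal TypeScript sketch (with invented event data; this is not Optimizely's internal code) showing how unique conversions, total conversions, conversion rate, conversions per visitor, and improvement relate:

    // One record per time a visitor fired the metric's event (invented data).
    interface ConversionEvent { visitorId: string }

    function summarize(events: ConversionEvent[], visitorCount: number) {
      const totalConversions = events.length; // "Totals" view
      const uniqueConversions = new Set(events.map(e => e.visitorId)).size; // deduplicated
      return {
        uniqueConversions,
        totalConversions,
        conversionRate: uniqueConversions / visitorCount,       // "Uniques" view
        conversionsPerVisitor: totalConversions / visitorCount, // "Totals" view
      };
    }

    // Relative improvement of a variation over the baseline, for either view.
    function improvement(baselineRate: number, variationRate: number): number {
      return (variationRate - baselineRate) / baselineRate;
    }

    // The article's example: a 5% baseline and a 10% variation is a 100% improvement.
    console.log(improvement(0.05, 0.10)); // 1.0, i.e. +100%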

Filter results

Use graphs, date ranges, segments, and the baseline to dig into your results. We'll show you how below.

Graphs

You can toggle between different graphs for each metric. To see or hide graphs, click Hide graph or Show graph.

  • Improvement over Time (the default): Improvement in this metric for each variation, compared to the baseline

  • Conversions over Time: Conversions per day in this metric for each variation, including the original

  • Conversion Rate over Time: The cumulative conversion rate for each variation, including the original (see the sketch after this list)

  • Statistical Significance over Time: Cumulative statistical significance for each variation, compared to the baseline
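
"Cumulative" here means each point on the graph reflects all data from the start of the experiment through that day, not just that day's traffic. Here's a minimal sketch with invented daily numbers for one variation:

    // Invented daily visitor and unique-conversion counts for one variation.
    const days = [
      { visitors: 900, conversions: 45 },
      { visitors: 1100, conversions: 66 },
      { visitors: 1000, conversions: 52 },
    ];

    let visitorsToDate = 0;
    let conversionsToDate = 0;
    const cumulativeRates = days.map(day => {
      visitorsToDate += day.visitors;
      conversionsToDate += day.conversions;
      return conversionsToDate / visitorsToDate; // all data up to this day
    });
    console.log(cumulativeRates); // [0.05, 0.0555, 0.05433...]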

Filter by date range

Use the Date Range dropdown to select start and end dates for your Results page view. Then, click Apply. The results shown reflect your computer's time zone.

Segment results

By default, Optimizely shows results for all visitors who enter your experiment. However, not all visitors behave like your average visitor. Segmenting your results is a powerful way to gain deeper insights about your customers and to design data-driven experiments and personalization campaigns.

Use the Segment dropdown to drill down into a segment of your visitors.

For Optimizely X Web Experimentation, the default segments are:

  • Browser: Firefox, Google Chrome, Internet Explorer, Opera, Safari, Unknown

  • Source: Campaign, Direct, Referral, Search

  • Campaign

  • Referrer

  • Device

You can also segment by up to 100 custom attributes in Optimizely X Web (including custom dimensions, if you're using these from Optimizely Classic) or Optimizely X Mobile.
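
For a custom attribute to be usable as a segment, its value has to be set on the page while the visitor is there. As a minimal sketch, the snippet below uses the Optimizely X Web JavaScript API's "user" push; the attribute name plan_type is hypothetical and must already be created in your project:

    // Tag the current visitor with a custom attribute for later segmentation.
    // "plan_type" is a hypothetical attribute; create it in your project first.
    const w = window as any;
    w.optimizely = w.optimizely || [];
    w.optimizely.push({
      type: "user",
      attributes: { plan_type: "enterprise" },
    });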

Change the baseline

Sometimes, you may want to see how all your variations compare to one variation in particular, and that variation may not be the original. Use the Baseline dropdown to select a different variation as the baseline.

Share results

Use the Share feature to send your Results page to key stakeholders. Click Share and copy the URL provided.

The Share link provides access to the Results page for that specific experiment. Users can segment data, view charts, filter by date range, and more. However, they can't navigate out of the specific experiment or campaign.

If you'd like to reset the link that you shared, click Reset Link. Users with the previous link will no longer have access to the Results page.

Manage metrics

Click Manage Metrics to add or remove metrics, or set a new primary metric. 

Remember, if you add more than five metrics to an experiment, the additional metrics will take longer to reach statistical significance. This is because Optimizely's Stats Engine controls the False Discovery Rate of your experiment, or the "chance of making an incorrect business decision." 

However, additional metrics don't slow down your overall test. Stats Engine ensures that the primary metric (which signals whether a variation "wins" or "loses") always reaches significance as quickly as possible.
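
For intuition, the false discovery rate is the expected share of "winning" or "losing" calls that are actually false positives. As a hypothetical worked example (the numbers are invented for illustration): if Stats Engine declares ten metric-and-variation combinations conclusive and, on average, one of those is a false positive, the false discovery rate is 1/10 = 10%. Controlling this rate means that, no matter how many metrics you attach, only a small, known fraction of your conclusive results should be spurious.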

Edit experiment

Click Edit Experiment to make changes to your experiment. Use this option to pause a variation or adjust your traffic distribution.

Reset results

This feature is only available in Optimizely X Web Experimentation projects. The Reset Results button allows you to reset your entire experiment. You might use this if you need to adjust your experiment setup and clear previous results.

Take care when you reset results, as the cleared results data is not retrievable.

A safer way to reset an experiment: pause and clone it. Then, archive the old experiment and publish the new one.