This article will help you:
  • Define testing goals for quick wins and long-term gains
  • Set primary, secondary, or monitoring goals
  • Create experiment and campaign goals that support your broader program metrics

Choosing the right metrics helps you validate (or disprove) your hypothesis and ensures that you're making progress toward your overall business goals.

In experience optimization, metrics play different roles depending on where you set them and what you want them to tell you. Below, we walk you through each role: primary, secondary, and monitoring goals.

The primary metric determines whether the test "wins" or "loses"; it tracks how your changes impact your visitors’ behaviors. Secondary goals and monitoring goals provide additional information about your visitors’ behavior in the vicinity of your change and across your site.

Below are a few general tips for setting goals:

  • Focus on a direct visitor action that is on the same page as the changes you made.

  • Consider how your changes affect other parts of your site. Set goals to measure potential interaction effects, so you know whether your test truly moves customers in the right direction.

  • Place different types of goals at different points in your funnel to gather timely data about your visitors’ behavior.

Primary metric

Optimizely allows you to set a primary metric for each experiment to determine its success. It’s the most important goal of the experiment and decides whether your hypothesis is proven or disproven.

In Optimizely, the primary metric will always achieve statistical significance at full speed, regardless of any other goals added. Stats Engine treats this goal separately, because it's the most important and tells you whether your hypothesis is supported.

Here's how to set up a primary metric in Optimizely X. In Optimizely Classic, there's no formal concept of "primary metric" in the goal module, but here's how to set up a goal.

Roughly speaking, the more goals and variations you include in an experiment, the longer each will take to reach significance. For this reason, it's important to be mindful when distinguishing your primary goal from secondary and monitoring goals. Stats Engine corrects for false discovery rate to help you make better business decisions.

When choosing a primary goal, ask yourself these questions:

What visitor action indicates that this variation is a success?

Often, the best path is to measure the action that visitors take as a direct result of this test.

Does this event directly measure the behavior you’re trying to influence?

Many optimization teams automatically track revenue per visitor as the primary goal, but this isn't the best way to design a test. Top-level metrics like revenue and conversion rate are important, but the events involved are often far away from the changes made. If this is the case, your test may take a long time to reach statistical significance or end up inconclusive.

Use an action that's directly affected by your change to decide whether your test helped or hurt.

Does the event fully capture the behavior you’re trying to influence?


Consider whether your primary goal fully captures the behavior you’re trying to influence. What's the best way to capture the change?

Imagine you're testing the design and placement of an Add-to-Cart button. Your business cares about revenue, but it's measured five pages down the funnel. You're likely to devote a large amount of traffic to this test, and you still risk an inconclusive result.

So, you measure clicks to the Add-to-Cart on product pages instead. It's a primary goal that's directly impacted by the changes you made. And with a goal tree, you know that this metric rolls directly up to company goals.

Imagine that you're testing bulk discounts on your site. Your primary goal might be conversion rate; or, it might be average order value (AOV). Neither metric fully accounts for the behavior you're trying to impact.

The conversion rate could rise as customers are incentivized or decrease as customers wait to create large, discounted orders. AOV could rise as customers buy more in bulk or decrease as discounts take the place of full-price orders. 

From this perspective, revenue-per-visitor is the best metric. It equals the conversion rate (how often customers purchase) multiplied by the AOV (how much they spend). It's the best overarching goal in this test, where smaller goals may provide conflicting information.
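The decomposition above is simple arithmetic, and a quick sketch makes it concrete. The numbers below are made up for illustration only:

```python
# Revenue per visitor (RPV) decomposes into conversion rate x AOV.
# All figures here are hypothetical.
visitors = 10_000
orders = 300
revenue = 18_000.0

conversion_rate = orders / visitors  # 300 / 10,000 = 0.03
aov = revenue / orders               # $18,000 / 300 = $60.00
rpv = conversion_rate * aov          # 0.03 * 60 = $1.80 per visitor

# RPV captures the net effect: a bulk discount might lower AOV but
# raise conversion rate (or vice versa), and RPV reports the combined
# result where either metric alone could mislead.
print(f"CR={conversion_rate:.2%}  AOV=${aov:.2f}  RPV=${rpv:.2f}")
```

Because RPV multiplies the two component metrics, a variation that trades one against the other still shows its true net impact.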

Speed and impact

Think of your primary goal in terms of distance. In a funnel, the most immediate effects are directly downstream from the changes you made. The closer an event is to the change, the louder the signal and the bigger the measurable impact. As you move downstream, the signal starts to fade as visitors from different paths and motivations enter the stream. At the end of the funnel, the effect may be too faint to measure.

Remember, all other things being equal, goals with a lower conversion rate require more visitors to reach statistical significance. Events further from the page you're testing will show a smaller improvement in conversion rate from your variation, because visitors enter from different paths, leave the site before they convert, and so on. If this is the case, your test will take longer to reach significance.
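You can see this relationship with a standard fixed-horizon sample-size approximation. This is not Stats Engine's sequential math, just the classical two-proportion formula, but it illustrates the trend: the same relative lift needs far more visitors at a low baseline conversion rate:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variation(baseline, relative_mde, alpha=0.1, power=0.8):
    """Classical two-sided, two-proportion sample-size approximation.

    Illustrative only -- not Optimizely's Stats Engine calculation.
    baseline:     control conversion rate (e.g. 0.20 for 20%)
    relative_mde: minimum detectable effect, relative (0.10 = 10% lift)
    """
    p1 = baseline
    p2 = baseline * (1 + relative_mde)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # significance quantile
    z_beta = NormalDist().inv_cdf(power)           # power quantile
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# A 10% relative lift on a 20% baseline (a click near your change)
# versus the same lift on a 2% baseline (an end-of-funnel purchase):
near_change = sample_size_per_variation(0.20, 0.10)
end_of_funnel = sample_size_per_variation(0.02, 0.10)
print(near_change, end_of_funnel)  # the deep-funnel goal needs ~12x more
```

The exact numbers depend on the thresholds you pick, but the low-baseline, end-of-funnel goal always demands many times more visitors.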

Instead, consider setting a primary goal on the same page as your change. The impact of your change will be picked up immediately, so you find a winning variation quickly. Quick wins help generate credibility and interest in your testing program. They also provide fast, reliable insights on how your visitors behave. By focusing on small, grounded wins, you build a testing program that is data-rich and able to quickly iterate on the insights it generates.

Ambitious, program-level goals like revenue and conversion rate make excellent secondary goals, and help keep your program focused on long-term success.

Secondary and monitoring goals

Secondary and monitoring goals are any goals you add beyond the primary goal. They help you gather insights that are key to long-term success. In Optimizely X, the metrics you rank 2 through 5 are secondary metrics. All metrics after the fifth are diagnostic; they have minimal impact on the speed of secondary goals and no impact on the speed of the primary goal.

Secondary goals track longer-distance, more ambitious metrics. End-of-funnel goals like order value and order confirmation make excellent secondary goals because they provide valuable information but are generally slower to reach significance. If you don't make these long-term wins your primary goal, you don't have to wait for them before making a decision.

Secondary goals are also useful for gaining visibility across the different steps of your funnel. For example, if you make a change to your product page and display shipping costs, your secondary goal might measure the change in drop-offs from the shipping page in your funnel. In general, use secondary goals to learn when visitors drop off, when they navigate back to the home page, and how these patterns compare between the original and the variations.

Below is a list of common secondary goals:

  • Searches submitted: See how many searches are submitted
  • Category pageview: See whether visitors navigate the site via category pages
  • Subcategory pageview: See whether subcategory pages are reached
  • Product pageview: Know the percentage of visitors who do or don't view a product during a visit
  • Add-to-Cart: Understand what percentage of visitors add to cart, per test, category, or product type
  • Shopping cart pageview: See how many visitors progress to the shopping cart
  • Checkout pageview: Understand how many visitors continue from shopping cart to checkout
  • Payment pageview: See what percentage of visitors continue from checkout to payment
  • Conversion rate: Know what percentage of visitors ultimately convert or complete payment

To learn more about interpreting results across a funnel, check out this article.

Monitoring goals track whether your experiment is truly moving visitors in the right direction. Every time you create an experiment, you’re trying to optimize the user experience to improve a business outcome. But your change might also create adverse effects in another metric. Monitoring goals help you answer the question: where am I optimizing this experience, and where (if anywhere) am I worsening it? These goals form a warning system that alerts you when you’re cannibalizing another revenue path.

For example, imagine that you show visitors more products on the product category page and find with your primary goal that people view more products. You might also wonder:

  • Are people more price conservative when initially presented with more products?
    Monitoring goal: average order value

  • Are people actually buying more products?
    Monitoring goal: conversion rate

  • Are people frustrated and unable to find what they’re looking for?
    Monitoring goal: subcategory filters

Below is a list of common monitoring goals:

  • Search bar opened: See what percentage of search bar interactions do not lead to submissions
  • Top menu CTR: See how often visitors navigate via the top menu, per page or step in the funnel
  • Home page CTR: See how often visitors exit to the home page from any given page
  • Category page filter usage: Understand the frequency of filter usage
  • Product page quantity selection: Understand the percentage of visitors who interact with quantity selection
  • Product page more info: Understand how many visitors seek more information on a product
  • Product page tabs: See how often each tab is interacted with
  • Payment type chosen: See which payment type users prefer, per experiment
  • Return / Back button CTR: See how often visitors exit a page via a particular button

Set fewer, high-signal goals

As we discussed previously, Optimizely's Stats Engine reacts to the number of goals and variations in your experiment to align Statistical Significance with your risk in making business decisions from experiments. We also mentioned that significance takes longer to achieve when there are more goals and variations in an experiment, and that your primary metric is exempt from this slowdown. Here we'll add to the story.

Adding more goals and variations to your experiment increases your chance of implementing a falsely significant result with traditional statistics; this is what Stats Engine corrects (a detailed explanation is here). But not all goals are equally guilty. High-signal goals, that is, goals you believe will be impacted by your variations, are less likely to contribute to false discoveries. This is because high-signal goals are usually less noisy, so it is easier to tell whether your variation is having an impact on them.

One helpful analogy: think of experimentation with multiple goals and variations as picking needles of true differences out of a haystack of noise. It is easier to find a large (high-signal) needle in the haystack than a small (low-signal) one.

Similarly, with Stats Engine, the more high-signal goals in your experiment, the faster all your secondary and monitoring goals will tend to reach significance. So if speed to significance is a primary concern for your organization, consider limiting the number of non-primary goals in your experiment and focusing on goals you believe are related to your variations.

Of course, you are free to add many goals to your experiments. The strength of Stats Engine is that you will not be exposed to higher error rates, but be aware that the cost of broad, undirected exploration is a longer time to significance.

Want to estimate how much longer it will take for multiple secondary goals to reach statistical significance? Here's an easy, back-of-the-envelope way.

In Optimizely's Sample Size Calculator, fill out your baseline conversion rate and MDE as usual. But for the Statistical Significance threshold, enter 100 - (100 - S)/N, where S is your desired threshold (the default is 90) and N is the number of goals times the number of variations other than the baseline.

For example, if you were running an experiment with 2 goals and 2 variations plus a baseline, at 90 Significance, you would estimate your secondary goal to require the number of visitors it takes to reach 100 - (100 - 90)/(2*2) = 97.5 Significance with 1 goal and 1 variation.
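The adjustment above is a one-line calculation, sketched here for convenience (the function name is ours, not part of any Optimizely tool):

```python
def adjusted_threshold(desired_significance, goals, variations):
    """Back-of-the-envelope threshold to enter in the Sample Size Calculator.

    desired_significance: your target threshold S (e.g. 90)
    goals, variations:    counts excluding the baseline; N = goals * variations
    """
    n = goals * variations
    return 100 - (100 - desired_significance) / n

# 2 goals x 2 variations (plus a baseline) at 90 significance:
print(adjusted_threshold(90, 2, 2))  # 97.5

# With a single goal and single variation, nothing changes:
print(adjusted_threshold(90, 1, 1))  # 90.0
```

Enter the returned value as the significance threshold in the calculator, along with your usual baseline conversion rate and MDE, to estimate the visitors a secondary goal will need.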

Note this is an upper bound on the number of visitors you’ll need on average, which means you’ll likely see significance sooner.