Optimizely Knowledge Base

Study guide for Experimentation Evangelist certification

This article will help you prepare for your Optimizely Experimentation Evangelist certification exam. Good luck!

Align experimentation to your business 

Align experiments with business goals 
  • Business goals are as important to experimentation as a destination is to a road trip: without one, you can't tell whether you're making progress.

  • Experiments must align with metrics your business most cares about.

  • With each experiment, you should be able to express and predict how changes will:

    • Affect user behavior

    • Directly benefit a business goal

  • Using an experiment just to settle an argument about which design is better, or to answer the question “I wonder what happens if I do xyz…”, is a poor use of your program: always make your business goals the priority.

  • Consider a hierarchy of goals to help you iterate on your experiments so that each one ultimately influences the company's highest-level goal:

    • Company goal: increase total revenue

    • Business Unit goal: increase revenue per visit

    • Optimization goal

    • Experiment goal: a more granular, concrete action (like clicks on the Add to Cart button)

  • Doing this creates your goal tree, where you can see how each experiment goal works its way up toward the ultimate company goal.

Building a goal tree 
  • The purpose of creating a goal tree is to organize the metrics that feed into the company goal. It can also help you decide which goal to pursue first.

    • It’s the foundation of your ideation strategy.

To go deeper on this subject, check out our article on primary and secondary metrics and monitoring goals.
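
To make the hierarchy concrete, here is a minimal sketch of a goal tree as a nested structure in Python. The goal names are the examples from this guide plus one hypothetical optimization goal; nothing here is an Optimizely feature.

```python
# A minimal sketch of a goal tree, using the hypothetical goals from this guide.
# This is only an illustration of the hierarchy, not anything built into Optimizely.

goal_tree = {
    "company_goal": "Increase total revenue",
    "children": [
        {
            "business_unit_goal": "Increase revenue per visit",
            "children": [
                {
                    "optimization_goal": "Improve the product detail page",  # hypothetical example
                    "children": [
                        {"experiment_goal": "More clicks on the Add to Cart button"},
                    ],
                },
            ],
        },
    ],
}

def print_tree(node: dict, depth: int = 0) -> None:
    """Print each goal indented under the goal it feeds into."""
    label = next(value for key, value in node.items() if key != "children")
    print("  " * depth + label)
    for child in node.get("children", []):
        print_tree(child, depth + 1)

print_tree(goal_tree)
```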

Build your optimization team

Decide whom to involve:
  • Planning your optimization program involves five core functions:

    • Ideation

    • Planning

    • Implementation

    • Interpretation

    • Communication

To go deeper, check out our article on best practices for building an effective optimization team and this blog post on improving testing and optimization.

Nurturing an optimization culture
  • Challenge: coordinating with permanent team members and intermittent team members (like developers, QA and UX designers) is tough!

  • Communication is key to building an optimization culture.

  • How can you do this effectively?

    • Schedule a standing team meeting and consider opening it to others, as a way to glean ideas.

    • Consider creating an email listserv: being open to ideas from across the company matters not just for the experiments themselves (you never know who'll come up with a great idea), but also for building a company culture that focuses on optimization.

    • Include executives: while not very involved at the day-to-day level, including them in your messaging helps them understand the impact you’re making with your experiments.

Read about the five traits of best-in-class optimization teams on the Optimizely blog!

Optimization statistics 

What is Statistical Significance? 
  • Statistical significance is a mathematical demonstration that your results are reliable.

  • The higher you set statistical significance, the higher the probability that those results are accurate and not due to random chance.

  • There is no such thing as complete certainty in statistics.

  • The significance level you choose is a reflection of how comfortable you are with risk. The lower the number, the more comfortable you are with uncertainty in your results.

  • Optimizely’s default statistical significance setting is 90%.

  • You need a certain number of data points before any measurement of statistical significance is reliable; in Optimizely, that means at least 100 visitors and 25 conversions on each variation.

  • With a high significance level and low traffic, statistical significance will take much longer to reach.

  • The more visitors you get, the more accurately you can make data-driven decisions that will impact your business goals.

To go deeper on this subject, check out our article on statistical significance in Optimizely.
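
As a rough sketch of that minimum-data rule, the check below gates any significance reading on the thresholds mentioned above (100 visitors and 25 conversions per variation). The function and data are hypothetical, not part of Optimizely's product.

```python
# Hypothetical sketch: only start interpreting statistical significance once every
# variation has collected the minimum data described above
# (assumed thresholds: 100 visitors and 25 conversions per variation).

MIN_VISITORS = 100
MIN_CONVERSIONS = 25

def has_minimum_data(variations: list[dict]) -> bool:
    """Return True if every variation meets the minimum visitor and conversion counts."""
    return all(
        v["visitors"] >= MIN_VISITORS and v["conversions"] >= MIN_CONVERSIONS
        for v in variations
    )

variations = [
    {"name": "baseline",  "visitors": 180, "conversions": 31},
    {"name": "variation", "visitors": 175, "conversions": 24},  # not enough conversions yet
]

if has_minimum_data(variations):
    print("Enough data to start interpreting statistical significance.")
else:
    print("Keep the experiment running; not enough data yet.")
```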

What are Statistics, Anyway? 
  • In Optimizely, statistics provide insight into your results and give you confidence in your winning variations.

  • The p-value is the probability of seeing results at least as extreme as yours if there were actually no difference between the variations. When that probability falls below your chosen threshold (for example, 5% for a 95% significance level), the results are statistically significant.

  • Never end an experiment the moment it first reaches statistical significance, especially if that happens before the required sample size has been collected. Stopping that early could give you bad data.
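
To make the p-value idea concrete, here is a classical fixed-horizon two-proportion z-test with made-up numbers. This is textbook statistics, not Optimizely's Stats Engine, which evaluates results differently (see the next section).

```python
# Classical two-proportion z-test: the p-value is the probability of seeing a
# difference at least this large if the variations truly performed the same.
# This is textbook, fixed-horizon statistics, NOT Optimizely's Stats Engine.
from math import erf, sqrt

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)              # conversion rate assuming "no difference"
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Made-up example: 200/1000 baseline conversions vs. 250/1000 variation conversions.
p = two_proportion_p_value(200, 1000, 250, 1000)
print(f"p-value: {p:.4f}")  # roughly 0.007 with these numbers
print("significant at the 5% level" if p < 0.05 else "not significant at the 5% level")
```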

What is the Stats Engine? 
  • It's a proprietary system that combines innovative statistical methods to give you trustworthy data faster.

  • Sequential testing: a technique designed to evaluate experiment data as it is collected. This is different from traditional statistics that assume you will only evaluate your experiment data at one point in time, at a set sample size.

  • Sequential testing calculates an average likelihood ratio—the relative likelihood that the variation is different from the baseline—every time a new visitor triggers an event.

  • False discovery rate control allows you to run many goals and variations while keeping accuracy guarantees. A reported false discovery rate of 10% means that, at most, 10% of declared winners and losers actually have no difference between variation and baseline; in other words, at most a 10% chance of making an incorrect business decision.

  • With the Stats Engine, winners and losers are reported with a low false discovery rate instead of a low false positive rate. As goals and variations are added to the experiment, Optimizely corrects more for false discoveries and becomes more conservative in calling winners and losers.

  • This control gives you a transparent assessment of the risk of making an incorrect decision.

  • It also lets you continuously monitor your data in real time as it comes in.
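
As a sketch of the general idea behind sequential testing, the snippet below runs a classical sequential probability ratio test (SPRT) on a stream of simulated conversions, updating a log-likelihood ratio after each visitor. Stats Engine uses its own average-likelihood-ratio formulation, so treat the parameters and decision boundaries here as illustrative assumptions only.

```python
# A classical sequential probability ratio test (SPRT) for a conversion rate.
# It updates a log-likelihood ratio after every visitor and stops as soon as the
# evidence crosses a decision boundary. Optimizely's Stats Engine uses a different
# (average likelihood ratio) formulation; this only sketches sequential testing.
from math import log
import random

P0, P1 = 0.10, 0.13        # assumed baseline rate vs. the lifted rate we want to detect
ALPHA, BETA = 0.05, 0.20   # assumed false-positive and false-negative tolerances

upper = log((1 - BETA) / ALPHA)    # conclude "variation differs" above this
lower = log(BETA / (1 - ALPHA))    # conclude "no detectable difference" below this

def sprt(conversions):
    """Walk through visitor outcomes one at a time and stop when a boundary is hit."""
    llr = 0.0
    for i, converted in enumerate(conversions, start=1):
        llr += log(P1 / P0) if converted else log((1 - P1) / (1 - P0))
        if llr >= upper:
            return f"variation looks different after {i} visitors"
        if llr <= lower:
            return f"no detectable difference after {i} visitors"
    return "no decision yet; keep collecting data"

random.seed(42)
simulated = [random.random() < 0.13 for _ in range(20_000)]  # simulated visitor stream
print(sprt(simulated))
```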

For more detailed information on Stats Engine, check out this video and our article about how Optimizely's Stats Engine calculates results.
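
False discovery rate control in general can be illustrated with the classical Benjamini-Hochberg procedure below, applied to made-up p-values for several metric/variation combinations. Stats Engine's correction is proprietary and differs from this, but the sketch shows why more comparisons demand stronger evidence before a winner is declared.

```python
# Benjamini-Hochberg procedure: a classical way to control the false discovery
# rate across many simultaneous metric/variation comparisons. Optimizely's Stats
# Engine applies its own proprietary correction; this sketch only shows the general
# idea that more comparisons require stronger evidence before declaring winners.

def benjamini_hochberg(p_values, fdr=0.10):
    """Return the indices of comparisons declared significant at the given FDR."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])   # indices by ascending p-value
    cutoff_rank = 0
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= rank / m * fdr:
            cutoff_rank = rank                             # largest rank passing the test
    return sorted(order[:cutoff_rank])

# Made-up p-values for five metric/variation combinations in one experiment.
p_values = [0.003, 0.04, 0.19, 0.65, 0.011]
winners = benjamini_hochberg(p_values, fdr=0.10)
print("declared significant:", winners)   # with these numbers: indices 0, 1, and 4
```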

Understanding the Results page 

Interpreting the Results page 
  • Example experiment: image on product detail page

    • Lifestyle imagery vs product imagery

    • Access results by clicking on your experiment and clicking Results.

      • The Summary gives a high-level overview of the experiment: it shows the improvement in the primary metric for each variation compared to the baseline.

      • Your primary metric is always at the top; conversions are shown there too, counted both as uniques and as totals.

        • A unique conversion occurs only the first time a visitor triggers a goal.

        • Total conversions count each time a visitor triggers a goal.

      • Why are they useful?

        • Unique conversions help you control for outliers and track conversions such as clicking the Add To Cart button.

        • Total conversions help you track repeat behavior: for example, social media sharing or viewing multiple articles on a media site.

      • Can we be confident in the value of the change?

        • The confidence interval measures uncertainty around the improvement. It is a range within which the conversion rate for a particular experience is expected to lie. It starts out wide, but as more data is collected the interval narrows, showing that certainty is increasing (a sketch after this list illustrates the idea).

          • Once statistical significance is reached for a variation, the interval is either all above or all below 0.

          • The confidence interval is set at the same level you chose for the project's statistical significance threshold.

          • Your experiment must collect enough data to declare statistical significance. If you don't have enough, Optimizely will give you an approximation of how long that might take.
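
To ground the unique/total distinction and the confidence interval bullets above, here is a small sketch that counts unique and total conversions from a made-up event log and builds a classical normal-approximation interval for the lift. Optimizely's Results page computes its numbers with Stats Engine, so this is only an approximation of the concept.

```python
# Sketch only: counts unique vs. total conversions from a made-up conversion log,
# then builds a classical normal-approximation confidence interval for the lift.
# Optimizely's Results page uses Stats Engine for its intervals, so real numbers differ.
from collections import defaultdict
from math import sqrt

Z_90 = 1.645  # two-sided critical value matching a 90% significance setting

# Hypothetical conversion log: (visitor_id, variation) recorded each time the
# Add to Cart goal fires. Visitor totals per variation are assumed separately.
conversion_log = [
    ("v1", "baseline"), ("v1", "baseline"), ("v2", "baseline"),
    ("v3", "variation"), ("v4", "variation"), ("v4", "variation"), ("v5", "variation"),
]
visitors = {"baseline": 120, "variation": 118}

unique, total = defaultdict(set), defaultdict(int)
for visitor_id, variation in conversion_log:
    unique[variation].add(visitor_id)   # counted once per visitor
    total[variation] += 1               # counted every time the goal fires

for variation in visitors:
    print(variation, "unique:", len(unique[variation]), "total:", total[variation])

# Confidence interval for the improvement in unique conversion rate (variation - baseline).
p_b = len(unique["baseline"]) / visitors["baseline"]
p_v = len(unique["variation"]) / visitors["variation"]
se = sqrt(p_b * (1 - p_b) / visitors["baseline"] + p_v * (1 - p_v) / visitors["variation"])
low, high = (p_v - p_b) - Z_90 * se, (p_v - p_b) + Z_90 * se
print(f"improvement: {p_v - p_b:+.3f}, 90% interval: [{low:+.3f}, {high:+.3f}]")
print("significant" if low > 0 or high < 0 else "not yet significant: interval still spans 0")
```

With this little data the interval still spans 0, which mirrors the point above: the interval only sits entirely above or below 0 once the variation has reached statistical significance.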

Filter your results and segmentation 
  • One example of segmentation might be filtering results based on a certain time period. This is commonly done for holidays, to learn if that holiday affected visitor behavior.

  • Filters include: date range, type of user (desktop vs. mobile), gender, location.

    • Some filters are already in the system, while others are custom and can be added by you (filters are called attributes in the system).

    • Default filters: Browser, Source (Campaign, direct, referral, search), Campaign, Referrer, Device.

    • Filters are useful if you have two variations that beat your baseline, and you want to compare them to each other.

    • The results page can be shared with stakeholders by sharing the URL. You can reset this link, but if you do the original link will no longer work.

To learn more, read our article about the experiment results page for Optimizely X.