Optimizely Knowledge Base

Study guide for Maestro certification

This article will help you prepare for your Optimizely Maestro certification exam. Good luck!

Implementing the snippet and Optimizely setup  

Projects  
  • These can help you organize your account into sections for different areas of your website, for multiple sites, and for staging vs. production environments.

Managing Collaborators  
  • Doing this enables you to assign people permissions at different levels, and to share projects without giving others direct access to the software.

  • Remember, in Optimizely terminology, collaborators and users are the same.

Snippet  

The snippet is a line of code added to your pages; it runs your experiments and determines which pages data is pulled from. This is the critical code that drives your experiments.

  • What’s in the snippet?

  • How often is the snippet added?  

  • Can snippets be run across projects?

  • Where should the snippet be added, and why?

  • Page flashing

  • Origin

To read more on these topics, see our article on how the Optimizely snippet works.
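
The snippet is added once per page, typically as a synchronous script tag placed high in the <head> so it loads before the page renders (this is what keeps page flashing to a minimum). As a rough, unofficial illustration, the browser-console sketch below assumes the standard CDN-hosted snippet URL and simply confirms the snippet is present and loaded from the <head>:

```ts
// Quick console sketch for checking snippet placement. Assumes the standard
// CDN-hosted snippet URL (cdn.optimizely.com); a self-hosted or proxied
// snippet would need a different selector.
const snippetTag = document.querySelector<HTMLScriptElement>(
  'script[src*="cdn.optimizely.com"]'
);

if (!snippetTag) {
  console.warn("No Optimizely snippet found on this page.");
} else {
  // The snippet should sit in the <head> and load synchronously to minimize flashing.
  const inHead = snippetTag.closest("head") !== null;
  console.log(`Snippet found. In <head>: ${inHead}, async: ${snippetTag.async}`);
}
```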

How to create pages  
  • When you set up a page, you choose the settings for that page (or for a group of pages) and then save them.

How to create events  
  • Events are used to track behaviors and measure impact.
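
Events for clicks and pageviews are typically created in the Optimizely UI, but behaviors the UI can't see can be sent from your own code as custom events. The sketch below is a hedged example of the Optimizely Web JavaScript API's event call; the event name is hypothetical and would need to match a custom event configured in your project:

```ts
// Hedged sketch: sending a custom event through the Optimizely Web API.
// "signup_form_submitted" is a hypothetical event name; it must match an
// event set up in your Optimizely project before results are recorded.
type OptimizelyQueue = { push: (call: Record<string, unknown>) => void };

const optimizely: OptimizelyQueue =
  ((window as any).optimizely = (window as any).optimizely || []);

optimizely.push({
  type: "event",
  eventName: "signup_form_submitted", // hypothetical event name
  tags: { category: "lead" },         // optional metadata
});
```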

How to create audiences  
  • Not all visitors are the same, so experiences should be different too. Creating audiences helps target groups of visitors with a common characteristic (mobile vs desktop, languages, etc.) for your experiments.
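
Audiences are usually defined in the Optimizely UI from built-in conditions (device, browser, location, and so on), but you can also target visitors on custom attributes supplied by your own page. The sketch below is a hedged example of the Web API's user call; the attribute names are hypothetical and would need to be registered as custom attributes in your project:

```ts
// Hedged sketch: tagging a visitor with custom attributes that an audience
// can target. "plan_tier" is a hypothetical attribute name.
type OptimizelyQueue = { push: (call: Record<string, unknown>) => void };

const optimizely: OptimizelyQueue =
  ((window as any).optimizely = (window as any).optimizely || []);

optimizely.push({
  type: "user",
  attributes: {
    plan_tier: "free",
    locale: navigator.language, // e.g. target a language-specific experience
  },
});
```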

Build an Experiment in Optimizely X 

How to build an experiment 
  • Experimentation allows you to make data-driven decisions to help improve your business.

  • It's usually better to set up your pages, events, and audiences before building an experiment, because you can reuse them across multiple experiments.

What are integrations? 
  • If you use an analytics platform or other tech tools (like heat-mapping tools), you can link Optimizely to them.

Create a Variation in Optimizely X 

How to create a variation 
  • A variation is what Optimizely calls a version of your site. Compare the results of different variations to determine whether your changes had any effect on visitor behavior. 

How to set up a redirect experiment 
  • You would do this if you wanted to test a dramatic change that requires two separate versions of a page to compare (for example, two homepages with two different URLs).

    • Rather than making significant edits to the original (this can affect load times), create a new page and redirect visitors to that page instead (a conceptual sketch follows this list).

  • Set up pages ahead of time to save time later.
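
In Optimizely, the redirect itself is configured in the editor rather than hand-coded, but conceptually a redirect variation behaves like the sketch below: matched visitors are sent to the alternate URL, with query parameters carried over so campaign tracking isn't lost. The destination URL is hypothetical.

```ts
// Conceptual sketch of what a redirect variation does; in practice you set
// the destination URL in Optimizely's editor. "/homepage-b" is hypothetical.
function redirectToVariation(destination: string): void {
  const current = new URL(window.location.href);
  const target = new URL(destination, current.origin);

  // Carry existing query parameters over (e.g. utm_* tags) so campaign
  // tracking survives the redirect.
  current.searchParams.forEach((value, key) => {
    if (!target.searchParams.has(key)) {
      target.searchParams.set(key, value);
    }
  });

  window.location.replace(target.toString());
}

redirectToVariation("/homepage-b");
```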

Experimentation Strategy

Build your optimization team 

Decide whom to involve
  • Planning your optimization program involves five core functions:

    • Ideation

    • Planning

    • Implementation

    • Interpretation

    • Communication

To go deeper, check out our article on best practices for building an effective optimization team and this blog post on improving testing and optimization.

Nurturing an optimization culture 
  • Challenge: coordinating with permanent team members and intermittent team members (like developers, QA and UX designers) is tough!

  • Communication is key to building an optimization culture.

  • How can you do this effectively?

    • Schedule a standing team meeting and consider opening it to others, as a way to glean ideas.

    • Consider creating an email listserv: being open to ideas from within the company is important, not just for the experiments (though you never know who'll come up with a great idea), but to build a company culture that focuses on optimization.

    • Include executives: while not very involved at the day-to-day level, including them in your messaging helps them understand the impact you’re making with your experiments.

Read about the five traits of best-in-class optimization teams on the Optimizely blog!

Building an experimentation culture 

Leverage your team to build hypotheses 
  • Data > opinions!

  • The best experimentation starts with questions. Turn these into hypotheses, and then run experiments based on them.

  • Data should be generative, not conclusive.

  • It's important to digest and analyze your data before coming up with your questions:

    • Top-trafficked pages

    • Conversion CTAs on your top pages

    • Top 3-5 user flows

    • Drop-offs at each stage of those user flows

    • Bounce rate on each page

  • Build an idea submission form you can use to share ideas on experimentation. These are often used to gather different perspectives, socialize data-driven optimization company-wide, and create a pool of ideas for your team to choose from.

For more context, check out Sara Critchfield's HBR article on how to push your teams to take risks and experiment.

Brainstorming tips and ideas 
  • Build a broad team: diverse perspectives can help identify new experimentation opportunities that provide fresh takes on familiar parts of the site or app. Specifically, consider including:

    • Stakeholders

    • Analysts

    • Developers

    • Designers

    • People who see your visitors at different stages of the customer journey (especially your customer service associates)

  • Explore the “why” of your experiments. This can help focus your team on the problems at hand, and generate some surprising insights.

Check out our article on fostering a culture of experimentation at your organization.

Align experimentation to your business  

Align experiments with business goals
  • Business goals are to an experimentation program what a destination is to a road trip.

  • Experiments must align with metrics your business most cares about.

  • With each experiment, you should be able to express and predict how changes will:

    • Affect user behavior

    • Directly benefit a business goal

  • Using an experiment to settle an argument about which design is better, or to answer "I wonder what happens if I do xyz…", isn't the best use of your resources: always make your business goals the priority.

  • Consider a hierarchy of goals to help you iterate your experiments so they influence the company's highest-level goal:

    • Company goal: increase total revenue

    • Business Unit goal: increase revenue per visit

    • Optimization goal

    • Experiment goal: a more granular, concrete action (like clicks on the add-to-cart button)

  • Doing this creates your goal tree, which shows how your experiment goals ladder up to the ultimate company goal.

Building a goal tree  
  • The purpose of creating a goal tree is to organize the metrics that feed into the company goal. It can also help you decide which goal to pursue first.

    • It’s the foundation of your ideation strategy.
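
To make this concrete, here is a purely illustrative sketch of the example hierarchy above written as a small tree, so each experiment goal can be traced back up to the company goal (the optimization-level goal is a hypothetical placeholder, since the guide leaves it open):

```ts
// Purely illustrative goal tree based on the example hierarchy above.
interface GoalNode {
  goal: string;
  children?: GoalNode[];
}

const goalTree: GoalNode = {
  goal: "Company: increase total revenue",
  children: [
    {
      goal: "Business unit: increase revenue per visit",
      children: [
        {
          goal: "Optimization: improve the purchase funnel", // hypothetical placeholder
          children: [
            { goal: "Experiment: more clicks on the add-to-cart button" },
          ],
        },
      ],
    },
  ],
};

// Walk the tree and print each goal with its full path back to the company goal.
function printPaths(node: GoalNode, path: string[] = []): void {
  const current = [...path, node.goal];
  console.log(current.join(" -> "));
  (node.children ?? []).forEach((child) => printPaths(child, current));
}

printPaths(goalTree);
```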

To go deeper on this subject, check out our article on primary and secondary metrics and monitoring goals.

Leverage data to drive experiments

Use analytics to generate hypotheses
  • Combining Optimizely with the analytics you already have increases the likelihood that you'll see statistically significant winners in your experiments by 32%.

  • Looking at data you already have helps with hypothesis generation.

  • Analytics tell you:

    • Who your customers are

    • How long visitors view a page

    • What visitors click

    • When and where visitors enter and leave

  • Spend some time identifying the best pages for experiments (landing pages, home pages, etc.).

  • Pay attention to the paths your visitors take (conversion flow, learning flow, and funnel reports).

  • If you are unsure how to look at analytics, developing that expertise can be a good opportunity to grow your optimization team and learn from the data you already have.

Learning from heat maps and other tools
  • These tools can give you insight into which page components may not be working, because they are a window into your visitors' experiences.

  • Warmer colors indicate areas where users tend to focus the most, while cool colors indicate areas of minimal to no interaction.

  • They're a good way to figure out what might be causing unexpected dropoff on a given page.

Leveraging other data
  • Another way to collect information about your website is directly from your visitors, by way of:

    • Surveys

    • E-mails from customers

    • Beta testing

    • Customer support experiences, as recalled by your support staff

    • Other direct visitor feedback

These tools will help you identify your customers' goals, and where they’re having problems achieving them.

  • Though it is tempting, it's not usually a good idea to compare your website to another company's (even one with a similar business model) and assume your experiment will turn out the same as theirs.

    • Even if your company is similar to another, don't base your hypotheses on another company's experiments.

  • Above all, TRUST YOUR DATA.

Solving problems that matter

From analysis to hypothesis
  • Problem: develop a problem statement by defining the problem you want to solve (who? when? what?)

  • Solution: describe the proposed solution

  • Result: suggest metrics to measure results

Use solutions maps
  • Testing multiple potential solutions (variations) makes you more likely to achieve a statistically significant uplift, which is why a solutions map is helpful.

    • Exploring different solutions will give you a better sense of how to solve your visitors’ problems.

  • Brainstorm at least ten different options from multiple sources (e.g. customer feedback, an idea from a boss, something on a competitor’s website).

 

  • Don't forget to consider options that might not be visible on a heat map or in your other data. For example: are people leaving the product page because the product isn't quite what they're looking for, and there's nothing on the page to tell them you have a similar product that might be exactly what they want?

  • Having all solutions and strategies in one place helps you figure out how to optimize your visitors’ experience and reach your ultimate goal.

Check out our article on best practices in hypothesis creation.

Write an effective hypothesis

Design data-driven hypotheses
  • Experiments can be surprising: people are often unpredictable, and they can send your experiment in the opposite direction from what you expected.

  • Your hypothesis should state your current problem, present a solution, and predict a result.

    • Draw from your experience, prior experiments, and data.

    • Good hypothesis design sets you up for long-term gains because:

      • It establishes a mechanism for constant inquiry

      • It encourages you to see your website from a visitor’s perspective

      • It helps you discover and prioritize the potential benefits of your experiments

      • It establishes a common language for ideation and research

      • It connects experiments directly to your company’s goals

  • Unfocused experimentation can lead to a waste of resources, or worse.

Focusing on high-impact changes
  • Local Maximum vs Global Maximum:

    • Focusing on hitting a local maximum means taking a refinement approach: you get better results than before, but it can lead to endless rounds of refining without ever arriving at the best possible solution.

    • Focusing instead on a global maximum will allow you to explore multiple paths, and is more likely to get you to the best solution.

  • Experimenting on many possibilities treats each component as its own variation; each possibility is a road that could lead you to the best solution.

    • Most successful optimizers are those that reach statistical significance on multiple variations, often four or more.

    • The refinement approach isn’t always bad, especially when you’ve done some exploration experiments and feel like you're close to finding a solution that works.

  • After you form a hypothesis and begin to experiment, measure macro-conversions (primary conversion goals like purchases, revenue per visitor, or leads created), but keep track of micro-conversions too (like pageviews for each page in the conversion funnel, video views, or newsletter sign-ups). Micro-conversions often precede macro-conversions, so it can be helpful to follow that information and find out whether the experiment produced an uplift in attention, even if it didn't result in a full-fledged conversion.

Creating a Testing Roadmap 

Creating an Experimentation Roadmap 
  • There are four questions to ask yourself:

    • What are we experimenting on?

    • When are we experimenting?

    • Who is involved in experimentation? (Establish timeline for reviews, etc., to ensure people are available when you need them.)

    • How will our experiments align with company-wide strategic objectives? (This is a good question for executive-level folks. Experiments must align with KPIs to make sure they're relevant, and to increase likelihood that they will yield a real impact.)

To go deeper on this subject, check out our article on creating an experimentation roadmap.

Understanding Primary and Secondary Metrics 
  • Consider primary and secondary metrics for your experiments: the primary metric is your goal, while secondary metrics are monitoring goals.

    • Primary metric: the yardstick by which you can tell if your experiment was a success.

    • Secondary metrics: supporting events that provide more insight and a connection to the overall business goal; they also give you visibility across different steps of the funnel.

    • Experimenting further down the funnel can make statistical significance harder to achieve.

  • Your business goal is not necessarily going to match your primary experiment goal.

Experiment Duration and Sample Size 
  • You can’t properly plan or roadmap until you know how long your experiments will take to reach statistical significance.

    • In order to determine how long an experiment will run, you'll need an estimate of your sample size (the number of people who will be exposed to the experiment).

    • Optimizely's Stats Engine uses a process called sequential testing, which collects evidence as your test runs and flags when your experiment reaches statistical significance, so you can see winners and losers as quickly and accurately as possible.

    • The sample size calculator can help you determine the projected stopping point for an experiment.

    • If you don’t have an analytics platform to tell you your baseline conversion rate, you can use Optimizely and just run an experiment without a variation for a predetermined amount of time.

    • Minimum detectable effect (MDE): the percentage lift you want to be able to detect. Together with your baseline conversion rate, it drives the sample size you need (a rough estimate is sketched after this list), and it helps you clarify the likely relationship between impact and effort.
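
Optimizely's Stats Engine uses sequential testing, so its own sample size calculator is the source of truth for projected stopping points. For rough roadmap planning, though, the classic fixed-horizon two-proportion estimate sketched below gives a useful ballpark; it assumes a 95% significance level and 80% power, and the baseline rate and MDE values are illustrative:

```ts
// Rough, fixed-horizon sample-size estimate for roadmap planning only.
// Optimizely's sequential Stats Engine will not match this exactly; treat
// the result as a ballpark for "how long will this experiment take?".
function estimateVisitorsPerVariation(
  baselineRate: number, // e.g. 0.05 for a 5% baseline conversion rate
  relativeMde: number,  // e.g. 0.10 to detect a 10% relative lift
  zAlpha = 1.96,        // 95% significance, two-sided
  zBeta = 0.84          // 80% power
): number {
  const p1 = baselineRate;
  const p2 = baselineRate * (1 + relativeMde);
  const delta = p2 - p1;
  const variance = p1 * (1 - p1) + p2 * (1 - p2);
  return Math.ceil(((zAlpha + zBeta) ** 2 * variance) / delta ** 2);
}

// Example: a 5% baseline and a 10% relative MDE needs roughly 31,000 visitors
// per variation before the effect is reliably detectable.
console.log(estimateVisitorsPerVariation(0.05, 0.1));
```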

Prioritizing your experiments

Prioritization frameworks
  • Impartially evaluate your experimentation ideas.

  • This phase should be about dispassionate evaluation of the experiment, rather than justifying what you want to experiment on first.

  • Look at impact and effort: higher priority experiments should have greater impact and require less effort.

  • Consider other factors as you weigh your experiments (for example, having a strong tech team but not a lot of resources).

  • Hard vs. soft impact: hard impact is quantifiable; soft impact is meaningful but not easily quantified.

    • Examples of hard impact: checkouts, pageviews, cost savings.

    • Examples of soft impact: ability to generate excitement, internal buy-in.

  • Be sure to consider both your technical and staffing needs (QA, graphic designers, etc.).

  • Measure both impact and effort on a matrix of high, medium, and low.

  • Remember: low effort and high impact is most desirable!

  • Use rubrics to determine both impact and effort (a simple scoring sketch follows this list).

  • You may want to consider running “quick win” experiments for incremental learnings while you're running a longer experiment.
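
One lightweight way to apply the high/medium/low matrix is a simple scoring rubric like the hypothetical sketch below. The numeric weights are arbitrary; the only requirement is that high-impact, low-effort ideas rise to the top:

```ts
// Hedged sketch of a high/medium/low prioritization rubric. The weights are
// hypothetical; low effort and high impact should score best.
type Level = "high" | "medium" | "low";

const impactScore: Record<Level, number> = { high: 3, medium: 2, low: 1 };
const effortScore: Record<Level, number> = { high: 1, medium: 2, low: 3 };

interface ExperimentIdea {
  name: string;
  impact: Level;
  effort: Level;
}

function prioritize(ideas: ExperimentIdea[]): ExperimentIdea[] {
  return [...ideas].sort(
    (a, b) =>
      impactScore[b.impact] + effortScore[b.effort] -
      (impactScore[a.impact] + effortScore[a.effort])
  );
}

// Hypothetical backlog: the low-effort, high-impact idea should rank first.
console.log(
  prioritize([
    { name: "Redesign checkout flow", impact: "high", effort: "high" },
    { name: "Rewrite hero headline", impact: "high", effort: "low" },
    { name: "Move footer links", impact: "low", effort: "low" },
  ])
);
```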

The experimentation cycle

Building an optimization methodology
  • This includes five major stages: implementation, ideation, planning, development, and analysis.

    • Implementation: when you first establish experimentation. This should happen only once, and the setup will be used and referenced throughout the life of your experimentation program.

    • Ideation: research analytics and customer feedback, decide what to test, brainstorm experimentation ideas, and create your hypothesis.

    • Planning: organize and prioritize your ideas, especially if there is more than one experiment.

    • Development: create your experiment in Optimizely, QA, and launch it.

    • Analysis: after statistical significance is reached, results can be analyzed and actions can be taken on findings.

Building individual experiments
  • Now for the fun part: how do you coordinate and scope your experiment?

  • We have an experiment design template that helps you manage experiments and communicate effectively with stakeholders; completed templates can also be handed off to team members.

  • Create a shareable doc about your experiment. This can help build excitement from other employees.

  • To choose the best experiment, consider how you expect your changes to affect the primary conversion event you’re testing for.

  • Types of tests:

    • A/B tests: These are the most common. They test the original version against a single variation.

    • Multivariate tests: These precisely measure how multiple changes interact and influence each other. Every combination of changes becomes its own variation (for example, three headlines and two hero images produce 3 × 2 = 6 combinations), so these tests need more traffic.

    • Multi-page tests: These measure success in conversions across a series of pages (like a purchase funnel). Be sure to define primary and secondary metrics, and keep in mind that your business goal and your primary goal may not necessarily be the same.

    • A/B/n tests: These are good when you want to experiment on multiple changes but don't want to wait for the traffic that testing every combination would require to reach statistical significance. Instead, you test entirely different variations of a page against one another (not as thorough an approach, but quicker).

QA your Optimizely experiment 

How to QA an experiment
  • Five things to check: Does it look right? Is it consistent? Does the variation show when it should, and without flashing? Does the experiment appear on the right pages and for the intended audiences? Are your events firing when they should and capturing accurate data?

  • Cross-browser testing is a quick way to identify any visual issues that affect specific devices or browsers.

  • Sharelink allows you to share a preview with anyone, including non-Optimizely users.

  • The QA ball shows you which campaign you're running, on which pages, and more.

  • Clicking Override shows you different variations of your experiment, so you can ensure variations are working properly. This is useful if you are running an A/B/C test or have multiple variations.

  • You can also see what each audience will see by choosing which audience you want to emulate.

  • The feed shows you everything Optimizely is tracking on a given page.

  • To check whether an element's function is working correctly, right-click the element and open it in a new tab; you'll be able to see the function fire in the feed.

  • You can navigate through your conversion funnel to make sure each page looks the way it should, and to make sure the key metrics are tracking as they should.
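
Beyond the preview tools above, a quick browser-console check can confirm which experiments and variations are active on the page you're QA-ing. This hedged sketch assumes the Optimizely Web client exposes its state API (window.optimizely.get("state")) on the page:

```ts
// Hedged console sketch for QA: list active experiments and the variations
// this visitor has been bucketed into, assuming the Web client's state API.
const client = (window as any).optimizely;

if (client && typeof client.get === "function") {
  const state = client.get("state");
  console.log("Active experiments:", state.getActiveExperimentIds());
  console.log("Variation map:", state.getVariationMap());
} else {
  console.warn("Optimizely client not found; check snippet installation.");
}
```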