Study guide for Idea Contributor Level 1 certification

This article will help you prepare for the Optimizely Idea Contributor Level 1 exam, where you can earn the Idea Guru Level 1 certification. Good luck!

Align Experimentation to Your Business 

Align Experiments with Business Goals 
  • Business goals matter to experimentation the way a destination matters on a road trip: they tell you where you’re headed.

  • Experiments must align with the metrics your business cares about most.

  • With each experiment, you should be able to express and predict how changes will:

    • Affect user behavior

    • Directly benefit a business goal

  • Using an experiment to settle an argument about which design is better, or to answer the question “I wonder what happens if I do xyz…”, isn’t the best use of experimentation: always make your business goals the priority.

  • Consider a hierarchy of goals to help you iterate your experiments so that they influence the company’s highest-level goal:

    • Company goal: increase total revenue

    • Business Unit goal: increase revenue per visit

    • Optimization goal

    • Experiment goal: a concrete, more granular action (like clicks on the add-to-cart button)

  • Doing this creates your goal tree, where you can see how your experiment goals roll up toward the ultimate company goal.

Building a Goal Tree 
  • The purpose of creating a goal tree is to organize the metrics that feed into the company goal. It can also help you decide which goal to pursue first (a minimal sketch follows this list).

    • It’s the foundation of your ideation strategy.
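
As a rough illustration of the hierarchy above, a goal tree can be sketched as a nested structure. The goal names and shape below are assumptions for illustration, not an Optimizely feature:

```python
# A minimal, illustrative goal tree: each level rolls up to the one above it.
goal_tree = {
    "company_goal": "Increase total revenue",
    "children": [
        {
            "business_unit_goal": "Increase revenue per visit",
            "children": [
                {
                    "optimization_goal": "Increase add-to-cart rate",  # assumed example
                    "experiment_goals": [
                        "Clicks on the add-to-cart button",
                        "Product detail page engagement",
                    ],
                }
            ],
        }
    ],
}
```

Reading the tree from the bottom up shows how a granular experiment goal supports the optimization goal, the business unit goal, and ultimately the company goal.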

To go deeper on this subject, check out our article on primary and secondary metrics and monitoring goals.

Creating a Testing Roadmap 

Creating an Experimentation Roadmap 
  • There are four questions to ask yourself:

    • What are we experimenting on?

    • When are we experimenting?

    • Who is involved in experimentation? (Establish a timeline for reviews, etc., to ensure people are available when you need them.)

    • How will our experiments align with company-wide strategic objectives? (This is a good question for executive-level folks. Experiments must align with KPIs to make sure they're relevant, and to increase likelihood that they will yield a real impact.)

To go deeper on this subject, check out our article on creating an experimentation roadmap.

Understanding Primary and Secondary Metrics 
  • Consider primary and secondary metrics for your experiments: the primary metric is your goal, while secondary metrics are monitoring goals.

    • Primary metric: the yardstick by which you can tell if your experiment was a success.

    • Secondary metrics: supporting events that provide more insight and connection to the overall business goal; they also give visibility across different steps of the funnel.

    • Experimenting further down the funnel can make statistical significance harder to achieve, because fewer visitors reach those steps.

  • Your business goal is not necessarily going to match your primary experiment goal.

Experiment Duration and Sample Size 
  • You can’t properly plan or roadmap until you know how long your experiments will take to reach statistical significance.

    • In order to determine how long an experiment will run, you'll need an estimate of your sample size (the number of people who will be exposed to the experiment).

    • Optimizely’s Stats Engine uses a process called sequential testing, which collects evidence as your test runs and flags when your experiment reaches statistical significance, so you can see winners and losers as quickly and accurately as possible.

    • The sample size calculator can help you determine the projected stopping point for an experiment.

    • If you don’t have an analytics platform to tell you your baseline conversion rate, you can use Optimizely to run an experiment without a variation for a predetermined amount of time.

    • Minimum detectable effect (MDE): the smallest lift you want to be able to detect. Smaller MDEs require larger sample sizes, which helps you weigh likely impact against effort (see the sketch below).
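
As a rough illustration of how baseline conversion rate and minimum detectable effect drive sample size, here is a classic fixed-horizon estimate for a two-sided test of proportions. Treat it as a planning ballpark only: Optimizely's Stats Engine uses sequential testing, so its sample size calculator is the authoritative source for projected stopping points, and the function and numbers below are assumptions for illustration.

```python
# Rough planning estimate only -- a classic fixed-horizon formula,
# not Optimizely's sequential Stats Engine calculation.
from math import ceil
from statistics import NormalDist

def estimate_sample_size(baseline_rate, relative_mde, alpha=0.05, power=0.80):
    """Visitors needed per variation for a two-sided test of proportions."""
    p1 = baseline_rate                       # e.g. 0.04 = 4% baseline conversion
    p2 = baseline_rate * (1 + relative_mde)  # the rate you hope to detect
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # significance threshold
    z_beta = NormalDist().inv_cdf(power)           # statistical power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# Example: 4% baseline conversion rate, 10% relative lift to detect.
print(estimate_sample_size(0.04, 0.10))  # roughly 39,000+ visitors per variation
```

Notice how halving the MDE roughly quadruples the required sample, which is one reason down-funnel, low-traffic pages take much longer to reach significance.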

Prioritizing your experiments 

Prioritization frameworks 
  • Impartially evaluate your experimentation ideas.

  • This phase should be about dispassionate evaluation of the experiment, rather than justifying what you want to experiment on first.

  • Look at impact and effort: higher priority experiments should have greater impact and require less effort.

  • Consider your constraints as you weigh your experiments (for example, having a strong tech team but not a lot of resources).

  • Hard vs. soft impact: hard impact is quantifiable; soft impact is meaningful but harder to quantify.

    • Examples of hard impact: checkouts, pageviews, cost savings.

    • Examples of soft impact: ability to generate excitement, internal buy-in.

  • Be sure to consider both your technical and staffing needs (QA, graphic designers, etc.).

  • Measure both impact and effort on a matrix of high, medium, and low.

  • Remember: low effort and high impact is most desirable!

  • Use rubrics to score both impact and effort (a minimal scoring sketch follows this list).

  • You may want to consider running “quick win” experiments for incremental learnings while you're running a longer experiment.
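
To make the rubric idea concrete, here is a minimal scoring sketch. The high/medium/low point values and the example ideas are assumptions for illustration, not an Optimizely-prescribed framework:

```python
# Illustrative impact/effort rubric: high impact with low effort ranks first.
SCORES = {"low": 1, "medium": 2, "high": 3}

def priority(impact, effort):
    """Simple priority score: reward impact, penalize effort."""
    return SCORES[impact] - SCORES[effort]

ideas = [
    {"name": "Shorten the checkout form", "impact": "high", "effort": "low"},
    {"name": "Redesign the homepage hero", "impact": "medium", "effort": "high"},
    {"name": "Headline test on article pages", "impact": "medium", "effort": "low"},
]

for idea in sorted(ideas, key=lambda i: priority(i["impact"], i["effort"]), reverse=True):
    print(f'{idea["name"]}: {priority(idea["impact"], idea["effort"])}')
```

A weighted sum works the same way; what matters is scoring every idea against the same dispassionate criteria rather than justifying a favorite.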

Leverage data to drive experiments 

Use analytics to generate hypotheses 
  • Combining Optimizely with the analytics you already have increases the likelihood that you'll see statistically significant winners in your experiments by 32%.

  • Looking at the data you already have helps with hypothesis generation.

  • Analytics tell you:

    • Who your customers are

    • How long visitors view a page

    • What visitors click

    • When and where visitors enter and leave

  • Spend some time identifying the best pages for experiments (landing pages, home pages, etc.).

  • Pay attention to the paths your visitors take (conversion flow, learning flow, and funnel reports).

  • If you are unsure how to look at analytics, developing that expertise can be a good opportunity to grow your optimization team and learn from the data you already have.

Learning from heat maps and other tools 
  • These tools can give you insight into which page components may not be working, because they are a window into your visitors’ experiences.

  • Warmer colors indicate areas where users tend to focus the most, while cool colors indicate areas of minimal to no interaction.

  • They're a good way to figure out what might be causing unexpected dropoff on a given page.

Leveraging other data 
  • Another way to collect information about your website is directly from your visitors, by way of:

    • Surveys

    • E-mails from customers

    • Beta testing

    • Customer support experiences, as recalled by your support staff

    • Other direct visitor feedback

These tools will help you identify your customers' goals, and where they’re having problems achieving them.

  • Though it is tempting, it's not usually a good idea to compare your website to another company’s with a similar business model and assume an experiment will turn out the same for you as it did for them.

    • Even if your company is similar to another, don’t test your hypotheses based on another company’s experiment.

  • Above all, TRUST YOUR DATA.

Solving problems that matter 

From analysis to hypothesis 
  • Problem: develop a problem statement by defining the problem you want to solve (who? when? what?).

  • Solution: describe the proposed solution.

  • Result: suggest metrics to measure results.

Use solutions maps 
  • Testing multiple potential solutions (variations) makes it more likely that you’ll achieve a statistically significant uplift, which is why a solutions map is helpful.

    • Exploring different solutions will give you a better sense of how to solve your visitors’ problems.

  • Brainstorm at least ten different options from multiple sources (e.g. customer feedback, an idea from a boss, something on a competitor’s website).

  • Don’t forget to consider options that might not be externally visible on a heat map or in your other data. For example: are people leaving the product page because they selected this product, but it’s not quite what they’re looking for, and there’s nothing on the page to tell them you have a similar product that might be exactly what they want?

  • Having all solutions and strategies in one place helps you figure out how to optimize your visitors’ experience and reach your ultimate goal.

Check out our article on best practices in hypothesis creation.

Write an effective hypothesis 

Design data-driven hypotheses 
  • Experiments can be surprising: people are often unpredictable, and they can send your experiment in the opposite direction from the one you expected.

  • Your hypothesis should state your current problem, present a solution, and predict a result.

    • Draw from your experience, prior experiments, and data.

    • Good hypothesis design sets you up for long term gains because:

      • It establishes a mechanism for constant inquiry

      • It encourages you to see your website from a visitor’s perspective

      • It helps you discover and prioritize the potential benefits of your experiments

      • It establishes a common language for ideation and research

      • It connects experiments directly to your company’s goals

  • Unfocused experimentation can lead to a waste of resources, or worse.

Focusing on high-impact changes 
  • Local Maximum vs Global Maximum:

    • Focusing on hitting a local maximum is working with a refinement approach: you’re getting better results than before, but it could lead to endless rounds of refining, potentially not arriving at the best solution.

    • Focusing instead on a global maximum will allow you to explore multiple paths, and is more likely to get you to the best solution.

  • Experimenting on many possibilities treats each component as its own variation; each possibility is a road that could lead you to the best solution.

    • Most successful optimizers are those that reach statistical significance on multiple variations, often four or more.

    • The refinement approach isn’t always bad, especially when you’ve done some exploration experiments and feel like you're close to finding a solution that works.

  • After you form a hypothesis and begin to experiment, measure macro-conversions (primary conversion goals like purchases, revenue per visitor, or leads created), but keep track of micro-conversions too (like pageviews for each page in the conversion funnel, video views, or newsletter sign-ups). Micro-conversions often precede macro-conversions, so it can be helpful to follow that information and find out whether the experiment produced an uplift in attention, even if it didn’t result in a full-fledged conversion (see the sketch below).
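
To illustrate the macro- vs micro-conversion distinction, here is a small sketch over a made-up event log. The event names, users, and numbers are assumptions for illustration, not an Optimizely API:

```python
# Count how many unique visitors completed each step; micro-conversions
# (add_to_cart) can show an uplift even when the macro-conversion (purchase)
# hasn't moved yet.
events = [
    {"user": "a", "event": "product_pageview"},
    {"user": "a", "event": "add_to_cart"},
    {"user": "a", "event": "purchase"},        # macro-conversion
    {"user": "b", "event": "product_pageview"},
    {"user": "b", "event": "add_to_cart"},     # micro-conversion only
    {"user": "c", "event": "product_pageview"},
]

def conversion_rate(events, event_name):
    visitors = {e["user"] for e in events}
    converted = {e["user"] for e in events if e["event"] == event_name}
    return len(converted) / len(visitors)

print("macro (purchase):   ", conversion_rate(events, "purchase"))     # 0.33
print("micro (add_to_cart):", conversion_rate(events, "add_to_cart"))  # 0.67
```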

Where should you experiment?

Where should you experiment?
  • Find ways to narrow down your options. Consider experimenting first on the pages that get the most visitors and traffic.

  • Which pages those actually are will depend on your industry and your audience, but here are a few ideas:

    • Homepage: what’s the main action you want visitors to take?

    • Product/Category page: keep your CTA very prominent on this page.

    • Article page (especially for media sites): headline experimentation is a great approach; also look into dropoff and the performance of above-the-fold changes.

    • Form pages

    • Landing page: you're probably already measuring entry and exit rates here, and you may have a clear objective that’s targeted to specific audiences.

    • Search results page: are users getting strong, relevant results?

    • Subscription/pricing page: what prices are you highlighting and recommending here?

    • Checkout pages: these are often lower-traffic pages, but you can try making changes on higher-traffic pages and measure how they affect checkout.

Which elements should be part of your experiment?
  • Think about which page elements you’re interested in. Your data suggests there’s an issue on a certain page, and you need to decide what action to take: is the corresponding page element prominent? Easy to engage with? Is there clutter on the page that would distract visitors from taking the action you want?

    • Think about breadcrumb content: are the category links too broad or narrow?

    • Call To Action: This is the most important element on the page. Is it placed too low? Is it prominent enough? Is it obscured by other elements? Is it easily identifiable?

    • Rail (the narrower column on the left or right side of a page that’s not part of the body content): is this distracting visitors from the CTA?

    • Headlines: Are they effective? Is the message supporting the CTA?

    • Hero/Carousel (rotating image or images at top of page): can this be used to inspire visitors to explore more?

    • Navigation (nav) bar: could it be adjusted to include important elements? Are there too many options here?

    • Modal (pop-up window that appears and requires the visitor to take an action, like a “sign up for the newsletter” prompt): these should be used sparingly (visitors find them annoying), so if you see significant dropoff on a page with a modal window, consider an experiment that replaces the modal with something else.

    • Forms: lead-gen and B2B marketing sites frequently use these. Long forms might discourage visitors from completing them, so consider shortening them. Is there a way to fill in some fields automatically, based on what you already know about the visitor?