This article will help you:
  • Form your own data-driven hypothesis
  • Create strong hypotheses using "If [variable], then [result], because [rationale]"
  • Learn from “winning” and “losing” hypotheses
  • Identify the components of an effective hypothesis with a hypothesis activity 

Many optimization programs focus on gathering data about their site experience, but neglect to turn those data-driven insights into hypotheses when testing or personalizing. Instead, experiments and campaigns are created based on intuition, or even the raw data itself.

Companies that run tests and campaigns without first formulating hypotheses miss an opportunity to connect data and human creativity to their optimization process. A strong hypothesis is the heart of data-driven optimization. Hypothesis-driven testing sets your company up for long-term gains:

  • Build a mechanism for constant inquiry and learning, leading to improved brainstorming processes.
  • Construct a reliable, consistent, and informed (based on data) understanding of your unique online business.
  • Size up the potential (and proven) benefits of your tests and prioritize accordingly.
  • Drive the direction of your marketing and product offerings.

A hypothesis helps you synthesize your data into a proposal about your visitors’ behavior so you can test and learn from it. Every confirmed or rejected hypothesis generates new insights for future rounds of research and hypothesis generation.

So, if you hypothesize that the links in your funnel are distracting visitors from completing the purchase, your results tell you more than whether removing those links creates lift. Those results also tell you how accurate your current understanding of your site experience, as captured by your hypothesis, really is. This process helps you decide what to optimize, why you’re optimizing, and how you'll measure results.

Many teams also use their hypotheses to connect individual experiments to the company’s major challenges. A hypothesis can also help you tell a single coherent story about a series of tests designed around a central theme.

 
Tip:

Ready to get started? Download this hypothesis worksheet to start drafting your hypothesis!

 

What is a hypothesis?

A hypothesis is a prediction you create prior to running an experiment. It states clearly what is being changed, what you believe the outcome will be, and why you think that’s the case. The outcome of the experiment will either confirm or reject your hypothesis.

The nuts and bolts

A complete hypothesis consists of three parts: a variable that you modify, a quantifiable result that measures the outcome of that change, and a rationale that connects that outcome to a theory about your customer experience.

If your hypothesis has all three components, you should be able to write it out in the following form:

"If ____, then ____, because ____."

In the sections below, we discuss each component in greater detail.

The Variable (or, what to test)

When you create a hypothesis, focus on ideas that tell a story about your key metrics rather than on minor aspects of site design. A business intelligence report can help you synthesize your data into insights that drive your hypothesis creation.

Identify your leading indicators, high-traffic pages, and most valuable flows to pinpoint where you can make the greatest business impact. During the ideation stage, use these insights to ask productive questions about your visitors and their experiences on your site.

Then, identify the variables you'll use to evaluate those questions, and consider: How might a change in this variable alter your customers’ experience and behavior? What changes would bring you one step closer to solving your company’s most pressing challenges?

The Result

The result you predict ties your hypothesis back to your key business metrics. We recommend that you predict a range for lift or loss (for instance, a 5-15% reduction in bounce rate or a 20-25% increase in click-throughs). 

A proposed range keeps your hypothesis specific and encourages you to estimate how much traffic and time your experiment will cost, based on a minimum detectable effect (MDE) calculation, and the impact you expect it to make. This practice can also help you prioritize the idea in your roadmap.
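If you want to sanity-check that traffic estimate yourself, a standard sample-size calculation answers the question: given your baseline conversion rate and the smallest lift you care about (the MDE), how many visitors does each variation need? Below is a minimal sketch in Python (standard library only; the rates and lift are placeholder numbers, not benchmarks, and most testing platforms provide an equivalent calculator) using a common approximation for a two-proportion test at 95% confidence and 80% power.

  from statistics import NormalDist

  def visitors_per_variation(baseline_rate: float, relative_mde: float,
                             alpha: float = 0.05, power: float = 0.80) -> int:
      """Approximate visitors needed per variation for a two-proportion test."""
      p1 = baseline_rate
      p2 = baseline_rate * (1 + relative_mde)          # the rate you hope to detect
      z_alpha = NormalDist().inv_cdf(1 - alpha / 2)    # about 1.96 for 95% confidence
      z_power = NormalDist().inv_cdf(power)            # about 0.84 for 80% power
      variance = p1 * (1 - p1) + p2 * (1 - p2)
      return int(((z_alpha + z_power) ** 2) * variance / (p2 - p1) ** 2) + 1

  # Placeholder inputs: 4% baseline conversion rate, 20% relative lift as the MDE
  n = visitors_per_variation(baseline_rate=0.04, relative_mde=0.20)
  print(f"Roughly {n} visitors per variation")
  # Divide daily traffic on the tested page by this figure for a rough test duration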

Once you know what metric you'll use to measure success, identify goals that you'll use to track those metrics.

The Rationale

Numbers are compelling, but so are stories. The rationale is the heart of the hypothesis. It's the why -- your interpretation of the cause-and-effect relationship in your hypothesis. The rationale proposes a theory about your customer experience, and ties changes to that experience to a proposed outcome.

This is your stake in the ground. Is the change in experience going to produce an incremental or large-scale effect? Why do you think so? Use qualitative and quantitative data to support your theory. (Good luck, and fingers crossed!)

Best practices

Testable

A hypothesis is only testable if you can measure both the change you make and the effect of that change. For example, you may hypothesize that removing breadcrumb navigation from the checkout page will help visitors stay in the funnel and increase conversions. The difference between the original and variation is the presence or absence of breadcrumbs. The effect of that change can be measured in the number of conversions.
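To make "measured in the number of conversions" concrete, here is a minimal sketch in Python (standard library only; the visitor and conversion counts are invented for illustration) of a simple two-proportion z-test, one common way to check whether the difference between the original and the variation is larger than chance alone would explain. Your testing platform typically runs an equivalent or more sophisticated calculation for you.

  from math import sqrt
  from statistics import NormalDist

  def two_proportion_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
      """Two-sided p-value for a difference in conversion rates (z-test)."""
      p_a, p_b = conv_a / n_a, conv_b / n_b
      p_pool = (conv_a + conv_b) / (n_a + n_b)                 # pooled rate under the null
      se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))   # standard error of the difference
      z = (p_b - p_a) / se
      return 2 * (1 - NormalDist().cdf(abs(z)))

  # Invented counts: the original keeps the breadcrumbs, the variation removes them
  p = two_proportion_p_value(conv_a=310, n_a=10_000, conv_b=368, n_b=10_000)
  print(f"p-value: {p:.3f}")  # a small p-value suggests the lift is unlikely to be chance alone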

A Learning Opportunity

A hypothesis helps you design tests that provide more insight into your customers’ behaviors. These insights can help generate additional questions about your visitors and site experience and drive an iterative learning process.

Ideally, the cycle follows this general pattern:

  • Gather data about your visitors’ behaviors and industry and use insights from that data to ask questions about your site experience
  • Formulate a hypothesis based on insights from your data
  • Based on your hypothesis, design and implement an Experiment or Campaign
  • Analyze your results to decide whether your hypothesis is confirmed or rejected -- for which segments of your visitors, and based on what information?
  • Synthesize your conclusions and document those learnings
  • Use those learnings to ask new, data-driven questions

Connected to your company’s main challenges

The most impactful hypotheses are those that are aligned to your company metrics. They help connect the process of learning about customers’ behaviors with higher-level goals to help grow the business. What are your KPIs and company-wide goals? By designing tests and experiences that impact these metrics, you focus your efforts on questions that matter.

 
Note:

As with all experiments, you must be prepared for the possibility that your hypothesis could be disproven. Your results may contradict your initial expectations. While this result doesn’t lead to a short-term win, it gives you fuel for better understanding the customer! Share this information with your team. To learn more, see our article on documenting and sharing test results.

 
Important:

The most powerful experiments isolate one variable and eliminate extraneous factors. For this reason, we suggest that you limit the number of variables tested in each experiment and that you DO NOT introduce any changes once the experiment has started. To learn more, see our article on changing an experiment while it is running.

Examples of strong and weak hypotheses

Example 1 (Strong)

If we personalize the Call to Action to users who clicked on our poker ad campaign, then we will see a 20% lift in click goals, because our heatmap data shows that users who focus on poker-related copy on the page click through our links 20% of the time.

Why this is strong

  • Tests changes to one element
  • Predicts a specific, measurable result
  • Drives learning about one segment of the user base: customers who click on the poker ad and their affinity for personalized poker messaging
  • Strong rationale
  • Connected to key business interest

Example 1 (Weak)

If we personalize the Call to Action to users who clicked on our poker ad and remove distracting images around the page, we’ll see an increase in revenue on our site.

Why this is weak

  • Two unrelated elements (the CTA and the images) are being tested simultaneously
  • The result (increase in revenue) is not specific in relation to the variables
  • The rationale is not obvious

Example 2 (Strong)

Removing the second page of our lead generation form will increase completion rates (conversions) by 10%. Our drop-off rate is higher than the industry average, and we are adjusting the form based on recommendations from our user experience expert, informed by user research.

Why this is strong

  • Proposes specific, testable changes
  • Informed by data: industry standards and user research
  • Takes a strong stance: predicts a 10% increase

Example 2 (Weak)

Removing elements from our sign-up form will increase completion of the form. Our VP of Marketing suggested that we try this for one week.

Why this is weak

  • Proposed changes aren’t specific
  • Changes are not clearly tied to a business objective
  • Rationale not obvious
  • The restricted time limit might not produce a winning variation and is not tied to requirements for statistical significance


With hypothesis-driven optimization, you consistently generate data that feeds back into your iterative process, regardless of whether a specific hypothesis is confirmed or rejected. The goal of hypothesis-driven optimization is not just immediate gains in terms of conversions, but also a deeper understanding of your customer and how you can improve their experience with your business. Hypothesis-driven optimization will often lead you to ask more questions and look deeper for answers.

 
Note:

What happens when you get it wrong?

Your team has actively engaged with customer data before and after your test is executed, whether your hypothesis is ultimately confirmed or not. This culture of continuously turning data about your customer into action will be the strategic edge that catapults both short-term lift and long-term customer engagement. Document and share your results, bring your insights to your next hypothesis brainstorm, and test on!

Start with what you know

Ready to begin? Before proposing a hypothesis, engage in discovery. Consider building a business intelligence report. One key difference between a well-formulated hypothesis and a guess is data. A hypothesis should be informed by what you know already. Dive into your existing data sources such as analytics and carefully observe and consider your customer’s journey through your site.

Use data that is strongly linked to your company goals to ensure that you’re focusing on areas of significant impact rather than making UX changes in isolation. How does your company define success on the web? 

Direct Data

  • Visitor path and popular pages
  • Brand vs. non-brand search terms
  • Voice of the customer / surveys
  • Interviews and feedback forms
  • Previous results / other analytics
  • User testing

Indirect Data

  • Competitive overview
  • Shared industry data
  • Industry leadership and academic work
  • Indirect competitors
  • Eye-tracking and heat maps
  • Open questions/“phenomena”
  • Your unique perspective

Ask questions that will lead you toward forming a stronger theory about your visitors’ experiences. If there is a conversion funnel, where are visitors falling out? What pain points or usability issues might be caused by your website design?

Hypothesis activity

Here's an activity to help you practice designing a hypothesis. 

Use direct and indirect data to identify the most important pages involved in your visitors' path to conversion and the places where there are bottlenecks. For each page, let’s create a hypothesis with the components we listed above.

Look at the page through the eyes of a visitor. Then ask yourself these questions, and write down the answers.

  • Why do visitors come to this page?
  • What are visitors trying to do on this page?
  • How easy is it to find the thing they’re trying to do? 
  • What are the biggest potential distractions? 

You should be able to combine the answers to these questions into theories that follow this format: “On [PAGE], visitors are trying to [ACTION], but [SUMMARY OF OBSTACLE].” For example:

  • On the checkout page, visitors are trying to complete their checkout, but they’re distracted by the coupon code box and bouncing from the site.
  • On the lead submission form page, visitors are trying to finish the form, but they’re overwhelmed by the number of fields to fill out.

See if you can come up with two of these theories for the major pages on your site.

Next, it’s time to match your theories with a quantifiable metric (or several). How would that metric change if you eliminated the obstacle for your visitors?

It’s best if you can use one of your primary conversion goals as the metric. Sometimes, you’ll want to use a metric that is a “leading indicator” -- in other words, a metric you think will lead to conversion down the line. 

Once you have your theory and your metric in place, you can test different solutions for your variations. For example, if your theory is that visitors are trying to find your checkout button, but it’s buried on the page, you could try three variations:

  • Making the checkout button bigger
  • Moving the checkout button to the top of the page
  • Removing other content that distracts people from the checkout button

Any of these three might confirm your hypothesis -- your test will help you find that out. After you’ve completed this activity, you’ll have fleshed-out hypotheses that serve as your test ideas.

Create your hypothesis before implementing your experiment or campaign. Use strategic, data-driven hypothesis creation to keep your program focused on the goals that matter most for your business.