
Prioritize optimization ideas and build a strong roadmap

This article will help you:
  • Identify the right prioritization framework for your testing team and download a template
  • Organize your list of test ideas from first-to-run to last
  • Create a meaningful timeline for running your experiments
  • Implement a process that prioritizes your test ideas automatically

How do you decide what to optimize and when? To figure out which experiments and campaigns to run first and which to place into your backlog, use a prioritization framework to evaluate your ideas.

A basic prioritization framework uses consistent criteria to order the experiments and campaigns you’ll run, from first to last. The more advanced version also includes a scoring rubric and an execution timeline. You’ll use your prioritization framework to manage your backlog and optimization cycles.

Use a prioritization framework to ensure that your most impactful tests run first. This article walks you through basic and advanced options, and shows you how to automate your prioritization process.

A concept called minimum detectable effect (MDE) can also help you prioritize tests based on expected ROI.
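For a rough sense of how MDE interacts with your available traffic, here is a minimal sketch in Python (the 95% confidence and 80% power defaults, and the example numbers, are assumptions for illustration). It estimates how many visitors, and therefore how many weeks, a test needs to detect a given relative lift, which you can feed into your impact-versus-effort scoring:

```python
from statistics import NormalDist

def required_sample_size(baseline_rate, mde_relative, alpha=0.05, power=0.80):
    """Approximate visitors needed per variation to detect a relative lift
    of `mde_relative` over `baseline_rate` (two-sided z-test)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_beta = z.inv_cdf(power)
    p1 = baseline_rate
    p2 = baseline_rate * (1 + mde_relative)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2

def weeks_to_significance(sample_per_variation, weekly_traffic, variations=2):
    """Convert the sample size into calendar time given page traffic."""
    return sample_per_variation * variations / weekly_traffic

# Example: 4% baseline conversion, aiming to detect a 10% relative lift,
# on a page receiving 20,000 visitors per week split across 2 variations.
n = required_sample_size(0.04, 0.10)
print(f"~{n:,.0f} visitors per variation")
print(f"~{weeks_to_significance(n, 20_000):.1f} weeks to significance")
```

All else being equal, a test that needs months to reach significance may score lower on expected ROI than a test on a higher-traffic page.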

What you need to get started:
Materials to prepare
    • List of test ideas including:
      • Dependencies
      • Effort estimates
      • Impact estimates
      • Time to significance and available traffic
      • Likelihood of implementation
    • Criteria for prioritization
    • Resourcing constraints
    • Insights from a technical audit

People and resources
    • Program Manager (responsible for final scoring)
    • Developer (responsible for estimating effort)
    • Designer (responsible for estimating effort)
    • Executive sponsor (review, approve, and provide strategic alignment)

Actions you'll perform 
    • Score impact versus effort
    • Use MDE to estimate impact versus effort
    • Add experiment tags or category labels
    • Rank experiments
    • Create a balanced approach that optimizes for different goals
    • Schedule tests and campaigns in the roadmap
    • Document the prioritization process
    • Socialize an optimization culture at the company

Deliverables
    • A prioritized list of experiments and campaigns, or an advanced roadmap
What to watch out for
    • It can be difficult to quantify the impact of experiments
    • If you don't use a prioritization scheme, you may end up prioritizing ideas according to dominant trends in the company, top-down
    • Without outlining dependencies in advance, you may slow down testing or be unable to run a test at all
    • A lack of documentation can slow down a team with a large roadmap
    • An unbalanced roadmap can over-index certain parts of the site and leave other opportunities on the table; balance by UX theme, location, tactics, and goals pursued
    • Roadmaps that are entirely agile or entirely waterfall each present difficulties in planning

This article is part of the Optimization Methodology series.

Tip:

For more downloadable resources to help you get started, check out the Optimizely Testing Toolkit.

Prioritize your test ideas

Below, we walk through the main factors in prioritizing test ideas.

1. Define your prioritization criteria

We suggest you evaluate ideas based on two factors: impact and effort. What counts as high impact, or low effort? This depends on your company's business goals and your team's access to resources.

Impact: What metrics will you use to measure the success of your optimization program? Which events in Optimizely directly influence these metrics?

Effort: What is easy or difficult to do? Which resources are dedicated to testing and which are shared or borrowed from other teams?

Team members to consult:

  • The program manager, who is responsible for the overall framework and final scoring
  • The developer, who is responsible for estimating effort
  • The design team, who is responsible for estimating effort
  • The executive sponsor, who will review and approve the prioritized list and provide strategic alignment

Ultimately, the criteria you use to prioritize your ideas will depend on your particular program’s goals and resources.

For example, your team may be technically savvy but low on design resources, so you set up tests easily but have trouble getting mockups. Or if you have executive buy-in but find it difficult to get time with your developers, you may find that advanced test ideas are quickly greenlit but slow to be implemented.

These types of factors are important to consider when deciding on the criteria for effort and impact.

2. Prioritize your list

Assign effort scores and impact scores to every optimization idea and prioritize accordingly.

You can use broad categories like high, medium, and low when evaluating impact and effort.

High-impact, low-effort tests and campaigns should run first.

Or, you can assign numerical scores; scores can help provide a more granular view of the relative ROI of each experiment. This is known as a blended model.

Assign numerical values for effort and impact. Then, simply sum the impact and effort scores for each test and campaign to generate a single prioritization score that combines both sets of criteria.
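Here's a minimal sketch of the blended model in Python (the idea names and 1-to-5 scores are hypothetical): high impact earns a high score, low effort earns a high score, and the sum ranks the list.

```python
# Hypothetical test ideas scored on a 1-5 scale.
# Impact: 5 = high impact. Effort: 5 = LOW effort (high effort scores low),
# so that summing the two pushes high-impact, low-effort ideas to the top.
ideas = [
    {"name": "Simplify checkout form",  "impact": 5, "effort": 2},
    {"name": "New homepage hero image", "impact": 3, "effort": 5},
    {"name": "Rebuild search results",  "impact": 4, "effort": 1},
]

for idea in ideas:
    idea["priority"] = idea["impact"] + idea["effort"]

# Highest blended score runs first.
for idea in sorted(ideas, key=lambda i: i["priority"], reverse=True):
    print(f'{idea["priority"]:>2}  {idea["name"]}')
```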

 
Note:

In the example above, high-impact tests and campaigns are given high numerical scores. However, high-effort ideas are given low numerical scores.

When we sum the two scores, high-impact, low-effort ideas rise to the top of the prioritized list. These tests and campaigns should run first.

You can further enforce consistency in your prioritization process by building out a detailed rubric. Customize the weights of your effort scores to the strengths of your team. Adjust the weights of your impact scores according to the goals that are most important to your business.

With a rubric, you can consistently and objectively prioritize all your tests and campaigns.
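As a sketch, a rubric like this might be encoded with weighted criteria (the criteria names and weights below are hypothetical; tune them to your own team's strengths and business goals):

```python
# Hypothetical rubric: each criterion is scored 1-5, then weighted.
# Weights reflect this team's priorities; adjust them to yours.
IMPACT_WEIGHTS = {"revenue_lift": 0.5, "traffic_reached": 0.3, "learning_value": 0.2}
EFFORT_WEIGHTS = {"dev_hours": 0.6, "design_hours": 0.4}  # 5 = low effort

def rubric_score(impact_scores, effort_scores):
    """Combine weighted impact and effort criteria into one priority score."""
    impact = sum(IMPACT_WEIGHTS[k] * v for k, v in impact_scores.items())
    effort = sum(EFFORT_WEIGHTS[k] * v for k, v in effort_scores.items())
    return impact + effort

score = rubric_score(
    impact_scores={"revenue_lift": 5, "traffic_reached": 3, "learning_value": 4},
    effort_scores={"dev_hours": 2, "design_hours": 4},
)
print(f"Blended rubric score: {score:.1f}")  # 4.2 impact + 2.8 effort = 7.0
```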

Tip:

At this stage, you might also evaluate additional attributes beyond effort and impact. To learn more, read Hotwire’s post on how they added additional criteria that are important to their business.

If you have a relatively mature optimization program with dedicated developer resources, you may be able to prioritize tests based on impact alone, without weighing effort. For inspiration, check out Hotwire’s binary scoring matrix to learn how they run over 120 tests a year.

3. Review your process

Once you’ve prioritized your ideas and run a few, set time aside to review how well your process works for your team. Two questions to consider when you evaluate your process are:

  • Whether you should prioritize ideas into a roadmap or a backlog
  • How deeply you should prioritize your list

Backlog or Roadmap

When you prioritize your list of ideas, you can put them into a backlog or a prioritized roadmap. The first option offers more flexibility. The second provides a stable cadence so you can organize a complex workflow.

A prioritized backlog is simply a queue: the order in which you run your tests and campaigns. Once you’re finished with one idea, you reach for the next. If you chance upon an idea with potential and you’d like to focus your optimization efforts there for a while, you can.

With a full, prioritized roadmap, you commit to a timeline based on how long you think a test or campaign should run. Some mature optimization programs prefer this approach, as it allows them to coordinate many stakeholders and schedule a complex workflow.

With a full roadmap you plan more of your work in advance. You also build a regular cadence for incorporating insights and trends from completed tests and campaigns into your new round of prioritization. If you’d like to return to an idea you can re-prioritize and add a second iteration further down the line.

No matter which method you choose, roadmap or backlog, insights from completed tests and campaigns will help you re-prioritize in the next round.

Under or over-prioritization

As you get to know the cadence of your optimization program’s work cycle, evaluate whether you’re over-prioritizing (putting too many ducks in a row) or under-prioritizing your list.

Maybe you prioritize 25 ideas, but find yourself executing only four or five before a planning phase kicks off another round of ideation. Or, maybe you feel that you never have enough high-impact ideas.

Teams that under-prioritize will run out of ideas before the end of the cycle and find themselves returning to the ideation phase. If this is the case, consider focusing your efforts on generating more ideas for your backlog. An idea submission form and a rich business intelligence report can help you increase the number of high-quality hypotheses.

Teams that over-prioritize consistently fail to implement ideas lower in the prioritized roadmap. Evaluate whether low-priority hypotheses are worth prioritizing again and again. If fewer ideas would help you align your roadmap to your team's cadence, consider capping the list at a lower number.

Automate idea submissions and ranking

Once you’ve chosen your framework, consider automating an idea submission process at your company. By automatically scoring the ideas submitted to your team, you’ll be able to evaluate and prioritize them more easily.

Create an idea submission form that asks questions about the resources and skill sets required for a given idea. The responses to this form populate a spreadsheet whose built-in formulas add or subtract points based on each answer.

Voila! As soon as an idea is submitted, a score is generated based on your prioritization framework. Your ideas can be automatically sorted by those scores!
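As an illustration of the spreadsheet logic, here is a minimal scoring sketch in Python (the form fields and point values are hypothetical; substitute your own criteria):

```python
# Hypothetical form responses mapped to points, mirroring the
# add-or-subtract spreadsheet formulas described above.
def score_submission(response):
    """Auto-score an idea submitted through the intake form."""
    score = 0
    # Impact signals add points.
    score += {"high": 30, "medium": 15, "low": 5}[response["expected_impact"]]
    if response["page"] in ("checkout", "pricing"):  # high-traffic funnel pages
        score += 20
    # Resource requirements subtract points.
    if response["needs_developer"]:
        score -= 10
    if response["needs_design_mockup"]:
        score -= 5
    return score

submission = {
    "expected_impact": "high",
    "page": "checkout",
    "needs_developer": True,
    "needs_design_mockup": False,
}
print(score_submission(submission))  # 30 + 20 - 10 = 40
```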

Ultimately, a formalized process of collecting and prioritizing ideas focuses your team on running strong tests and campaigns and pursuing huge wins that generate excitement for optimization. Publishing your automated submission form company-wide can also help to democratize prioritization and spread awareness about optimization goals.

By equipping your team with a mechanism for outputting a well-prioritized list of tests and campaigns, you take a critical step towards building a sustainable and effective testing program.

Advanced: Build a testing roadmap

Advanced prioritization moves beyond a basic framework by including an implementation schedule and a breakdown of the experiment workflow. You can create this framework with a simple spreadsheet or with a project management tool.

Here's a template.

With a testing roadmap, you clearly define the key features of your testing program. Use it to track: 

  • What you’re testing
    • Which experiments will be implemented, in prioritized order
    • Do we have a balanced testing schedule (UX theme, location, tactics, goals being pursued)?
  • When you’re testing
    • What’s our timeline or schedule for executing these tests?
    • Will any of these tests interfere with one another or with other planned activity, like sprint releases or promotional campaigns? (See the scheduling sketch after this list.)
  • Who is involved in testing
    • How do workflow dependencies fit into the schedule? (resource allocation, reviews, approvals)
    • How do we maximize the availability of our resources based on other company timelines and strategic campaigns?
  • How your tests align with company-wide strategic objectives
    • Are we testing throughout the funnel / important user flows?
    • Do these test ideas have a common strategic theme in line with our KPIs?

A full roadmap can incorporate resource management, plan for dependencies, and accommodate business priorities. It may reveal opportunities to embed A/B testing and personalization into other major initiatives to increase the visibility of your program across the company.

It can also help you coordinate optimization efforts with development sprints, marketing initiatives and campaign launches, and user research or focus groups at scale.

Testing programs are sometimes susceptible to delays when other company initiatives take precedence. For example, if a scheduled code release is rolled back or the developer or UX resource you work with is unavailable, your program may stall.

To prepare for these moments, keep a few tests in your backlog that can be launched with minimal effort, like simple headline or messaging changes. When a delay hits, take the opportunity to launch these lower-effort tests (which may still be highly impactful)!

Ultimately, a well-structured schedule clearly communicates the intent, value, and timing of your optimization program. It will allow you to make clear statements such as:

During the next month, we'll be running these two tests on the product page, these two tests on the shopping cart page, and this test on the payment page. We’ve secured buy-in/resource support from merchandising and relevant Product Managers. Our tests are planned into the next two sprint cycles.

Optimizely Workshop Video: Prioritization Roadmap

Learn more about roadmap prioritization in this 24-minute video. Click here if you'd like to check out more of Optimizely's workshop series.