Some optimization programs choose to test small changes, or tweaks, to their sites -- but when it comes to a redesign, they put their testing programs on hold until the redesign is done.
For these organizations, the process can be summed up this way: “We’re doing a redesign; we’ll get to the A/B testing afterwards. No point in testing now because everything’s going to change anyway.”
When companies do this, they actually miss out on one of the most valuable opportunities to test, learn, and iterate. Testing through a redesign offers a unique opportunity to build the best experience for higher conversions.
Our 50-page ebook on redesigns offers practical examples and strategies for experimenting through your next redesign!
For more examples of how experienced testing organizations test through their redesigns, check out these case studies:
- How to Redesign Your Redesign: an interview with Jeff Blettner at Formstack
- Spreadshirt: How A Data-Driven Site Redesign Lifted Clicks 606% and Purchases 11%
- Soccerloco: an e-commerce redesign that literally paid for itself
Aim for the Global Maximum, Not a Local Maximum
Why test through a major redesign instead of waiting until after the redesign is done? Because testing through the redesign lets you search for the global maximum instead of settling for a local maximum.
A/B Testing: The Most Powerful Way to Turn Clicks Into Customers, written by Optimizely co-founders Dan Siroker and Pete Koomen with Cara Harshman, describes the global and local maximum this way:
Imagine you’re climbing a mountain: if your goal is to get to the top of the tallest mountain, and you don’t have a map of the range, it’s probably not a good idea just to start walking up the nearest slope. You’ll climb, and climb, and then ultimately reach some peak—and then what? Where do you move next if this peak doesn’t turn out to be the highest one?
In optimization, the term for the nearby, uphill peak is the local maximum, whereas the distant, largest peak is the global maximum.
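The mountain metaphor can be made concrete with a toy hill-climbing sketch (the landscape function, starting point, and step size here are invented for illustration). A climber who only ever takes small uphill steps stops at the nearest peak, which may not be the tallest one:

```python
from math import exp

def height(x):
    """Toy 'mountain range': a small peak near x=1 and a taller peak near x=4."""
    return exp(-(x - 1) ** 2) + 2 * exp(-(x - 4) ** 2)

def hill_climb(x, step=0.1):
    """Greedy local search: keep taking small uphill steps; stop when neither
    direction improves. This is the 'small tweaks only' strategy."""
    while True:
        if height(x + step) > height(x):
            x += step
        elif height(x - step) > height(x):
            x -= step
        else:
            return x

# Starting near x=0, small steps climb to the local peak near x=1 and stop there,
# even though the peak near x=4 is twice as high.
local_peak = hill_climb(0.0)
```

The only way this climber reaches the taller peak is by making a big jump across the valley, which is exactly what a bold redesign hypothesis does.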
By thinking about -- and testing -- big changes, you’ll see statistically significant results more quickly, and you’ll be more likely to find optimal experiences. Site-wide redesigns are perhaps the best time to test global, big-picture changes. For example, test out completely different navigation bars or product categories.
Making bigger changes helps you explore potentially winning possibilities, rather than just refine a decision that has already been made.
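The claim that bigger changes reach significance faster can be checked with the standard two-proportion sample-size approximation (this sketch uses fixed z-values for roughly 95% confidence and 80% power; the baseline and lift figures are illustrative, not drawn from the text):

```python
def sample_size_per_variation(p_base, p_variant, z_alpha=1.96, z_beta=0.84):
    """Approximate visitors needed per variation to detect the difference
    between two conversion rates at ~95% confidence and ~80% power."""
    variance = p_base * (1 - p_base) + p_variant * (1 - p_variant)
    effect = p_variant - p_base
    return (z_alpha + z_beta) ** 2 * variance / effect ** 2

# A small tweak (5.0% -> 5.25% conversion) vs. a bold change (5.0% -> 6.0%):
n_tweak = sample_size_per_variation(0.05, 0.0525)   # ~120,000 visitors per variation
n_bold = sample_size_per_variation(0.05, 0.06)      # ~8,000 visitors per variation
```

Because required sample size scales with the inverse square of the effect, the bold change here needs roughly 15x fewer visitors to reach a conclusive result.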
For some businesses, this is not an easy change to make; however, businesses that continually refine and tweak will most likely see diminishing returns on their optimization program compared to those with a willingness to explore.
For many organizations, sitewide redesigns are based on opinions: from executives, from design firms, and from other internal stakeholders.
When opinions and gut feelings drive the redesign process, you’re unlikely to discover the solution that actually provides the best user experience or drives the most conversions.
Why? It’s simple: opinion-driven redesigns solve the problems of the people giving the opinions. You’re solving your stakeholders’ problems, not your users’. They also encourage big changes that don’t actually address user experience issues -- or that make the user experience worse.
As you redesign, get qualitative feedback from your users through user tests, focus groups, and beta testing. But don’t stop there -- pair the qualitative feedback with quantitative data generated through A/B tests. These techniques will give you a sense of the real issues affecting customers, and which changes will address those issues.
For each test you run, the qualitative feedback that you collect should drive the hypotheses that you test. Using feedback-based hypotheses will help you explore ideas systematically, then develop the solutions that work -- not just the ones that sound cool.
For more information on data-driven redesigns, read our solutions partner Clearhead’s blog post on the topic.
Test Component by Component
So you’ve decided to test through your redesign, instead of rolling out your new experience based on pure opinions. Companies often wonder whether to roll out two dramatically different experiences and test them against each other, or test component-by-component changes until they have a new experience.
Testing entire site designs at once helps you explore dramatically different concepts, but keep in mind that this approach can be affected by visitor bias. Visitors typically don’t like change, so rolling out a completely different experience all at once can be shocking and uncomfortable, and they’ll likely react more negatively than they would to incremental changes.
What role should your design team play? Testing through a redesign empowers your design team to apply human insights, creativity, and knowledge about user experience to the problems that surface in your data. You can then use insights from testing to learn more about what your customers respond well to, and what they don’t. Those insights should form the basis of the hypotheses that you then test.
Just like user testing, A/B testing provides a data-driven way to explore different concepts and treatments, and gather real data on their impact.
Testing big-picture ideas in phases, rather than all at once, can help isolate the impact of each change and minimize the perceived change to your visitors. A more advanced testing program may then test combinations of these changes with a multivariate experiment to see the interaction effects. Finally, you can A/B test entirely different experiences against one another to see the total effect.
Because you’ll have data on each component, you can use the insights from these smaller tests to drive your overall redesign.
In Optimizely, your redesign process might follow this pattern:
- Prioritize the areas of your site that should be redesigned, based on the areas where you see the most user feedback or friction. Remember, not everything needs to be redesigned.
- Start with simpler A/B tests on components of your site. If you’re working with the global navigation or other changes that affect multiple pages, use substring match URL targeting to apply the change across all of them.
- Move to multivariate tests to analyze combinations of changes, or multi-page tests to analyze the effect of different changes along a multi-page flow or funnel.
- Finally, run redirect experiments that present completely different experiences to visitors.
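Whatever tool you use, each phase above relies on the same underlying mechanic: consistently assigning each visitor to one variation. Here is a minimal, hypothetical sketch of hash-based bucketing (this is an illustration of the concept, not Optimizely’s actual API; the function name and IDs are invented):

```python
import hashlib

def assign_variation(visitor_id, experiment_id, variations):
    """Deterministically bucket a visitor: the same visitor always sees the
    same variation for a given experiment, so results stay consistent."""
    key = f"{experiment_id}:{visitor_id}".encode()
    bucket = int(hashlib.md5(key).hexdigest(), 16) % len(variations)
    return variations[bucket]

# Early phase: a simple A/B test on one component.
nav = assign_variation("visitor-123", "nav-redesign", ["control", "new-nav"])

# Later phase: a redirect-style experiment between whole experiences.
page = assign_variation("visitor-123", "full-redesign", ["/home", "/home-v2"])
```

Keying the hash on both the experiment and the visitor means a visitor’s bucket in one experiment doesn’t determine their bucket in the next, which matters when you layer component tests into multivariate and redirect experiments.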
Set Interaction-Level and Program-Level Goals
When you test components of your redesign, you have two goals:
- Interaction-Level Goals: Identify whether the new component provides a better user experience (UX)
- Program-Level Goals: Identify whether the new component, paired with the other new components, provides overall value to your program-level business metrics
Typically, these should be in alignment. For example, if your goal is to increase video plays, which lead to higher ad revenue, then testing a layout that yields more video-play conversions is ideal.
But take another example: say your goal is to increase average order value (AOV) because that will lead to higher revenue per visitor (RPV). You may experiment with a site design that makes it easier to select a product and check out, but that lowers the AOV of those orders. In other words, you may be increasing your checkout button clicks but sacrificing your AOV in the process.
These insights, which may be unintuitive, help you discover which changes are actually best for your online business’s health. In areas where you provide a better or more efficient UX, but at the expense of your program goals, you’ll notice the difference between “local-maximum” decisions and “global-maximum” decisions.
Setting a combination of interaction-level and program-level goals will help you learn more from each piece of the redesign, and keep the overall redesign in alignment with your business goals.
Your business will be less likely to discover these types of insights by testing incremental changes after a redesign, vs. testing through a redesign.