Applies to: Optimizely Classic
This article will help you:
- Use a thorough QA process to make sure your experiments work correctly before they go live
- Make sure that your changes appear correctly and on time
- Make sure that visitors are included or excluded as you expect
- Make sure that goals are tracking correctly
- QA an Optimizely web experiment (or learn about QA for mobile experiments instead)
Quality assurance, or QA, is the last of six steps in creating an experiment. A rigorous QA process ensures that your experiment looks and works the way you want before you show it to visitors. Once you’ve finished building your experiment, take a little extra time to verify your setup so you can trust your results.
Many mature optimization teams have access to separate staging and production environments. Separate environments can save you a headache; they reduce the risk of accidentally launching an unfinished or unverified experiment to the live site.
If you have separate environments, use a staging environment to set up the experiment and perform initial QA without exposing the experiment to visitors. Then, duplicate the experiment into your production environment project, and perform the same QA process there. When possible, ensure that your production and staging environments match so experiments function the same way in both.
If you don’t have a dedicated staging environment, don’t worry. You can still QA your experiment. Perform all the steps outlined below, but take special care when you use the test cookie to QA in a live environment.
Ultimately, the most effective QA process depends on your organization's existing practices. Use the framework in this article to build rigor into your QA steps and integrate your testing process with company practices.
Materials to prepare
- QA use case checklist that you prepared during the experiment planning stage, identifying:
  - All new functionality to QA
  - All goals and analytics to be captured
  - All user flows into the experiment
  - Places for regression testing to verify that the new experiment doesn't break existing functionality
People to involve
- Developer or experiment implementer
- Power user
- QA team
Steps
- Verify that components in the production environment match the staging environment
- Verify that the experiment functions correctly in staging
- Verify all experiment changes for the purposes of regression testing
- Duplicate the experiment to the production environment project
- QA in the production environment
Deliverables
- Fully built experiment in the staging environment
- Duplicated, fully built experiment in the production environment
- A completed QA and use case doc with all cases marked "pass"
Issues this process can help you catch
- Flashing of original content
- Variation code that doesn't run at all
- Goals that don't fire when you expect them to
- An analytics integration that doesn't capture any data
- A data discrepancy in your analytics integration
- A failure in audience or URL targeting
Context before you QA
When you QA, you'll test four functional areas in every experiment. In the next section, we'll break down the workflow so you know how to QA each one, step by step.
Take a moment to learn about each below. They should look familiar to you, based on steps you used to create your experiment.
Does your variation look the way it should?
Does your variation show up when it should, without page flashing?
Timing refers to the order in which scripts fire on the page. Flashing means you may see original content load in the browser before the variation content appears. Make sure your variation code executes fast enough to avoid any flashing.
If you're having problems with this, see our article on troubleshooting flashing issues.
Does your experiment appear on the right pages, for the right audience?
If you're having problems with this, see our article on troubleshooting activation issues.
Do your goals fire when they should and capture accurate data?
Goals should fire the proper tracking calls when triggered. In other words, when you trigger the goal, it should report correctly to Optimizely. If you don’t QA your goals properly, you may end up needing to throw out the results of your experiment because it wasn’t set up to capture accurate data.
If you're having problems with this, see our article on troubleshooting goals.
An overview of your QA testing tools
Use the following four methods to QA every experiment:
- Preview mode
- Cross-browser test
- Force parameter
- Test cookie (or another heavy audience restriction, such as IP targeting or a specific URL parameter)
The QA checklist that you created as part of your experiment plan will help you perform a thorough check.
As a reminder, your checklist should include:
- All goals that were added, as well as how each is triggered
- All functionality that’s been added (for example, a new button)
- Every visitor use case, including all expected user flows to and from the page
- Audiences that are eligible and ineligible to see the experiment
- URLs where the experiment should and should not run
- Sample workflow to fire a goal (especially for custom events)
Imagine, for example, that you’re targeting the experiment to visitors who come to your site from an organic search traffic source. Your QA list might include the following user paths:
| Audience visitor path | Eligible for experiment? | Pass / Fail |
| --- | --- | --- |
| A visitor from Bing | Eligible | |
| A visitor from Yahoo! | Eligible | |
| A visitor from Google | Eligible | |
| A visitor who clicks a paid ad | Not eligible | |
| A visitor who clicks an email link | Not eligible | |
The actual checklist that you create will be more rigorous. Your QA team will grade each use case pass or fail. Set the experiment live only after every item on the list has passed the test.
1. Preview mode in Optimizely
You may have used Optimizely's Preview mode to check the visual layout of your experiment when you built it. During QA, use Preview mode primarily to check your variation code, timing, and experiment overlap in Optimizely. (You’ll use other, more rigorous methods to evaluate goals and audiences.)
QA method: Preview Mode
Place to look: Optimizely
Navigate through your experiment as if you are a visitor. Check that each variation appears as intended and pages load without flashing.
Then, use the cross-browser test to verify that your variations display correctly in different browsers.
2. Variation code and timing
For each page of your experiment, verify that the variation appears as intended and pages load without flashing. Use the force parameter method to QA each variation in the browser, page by page. The force parameter enables you to force the variation to display without launching the experiment for all visitors. It also helps you check the responsive design of your variations.
QA method: Force parameter
Place to look: Your website
At a glance:
If your variation isn't appearing at all, your snippet may not be implemented correctly. Learn about the basics of snippet implementation. If your variation code doesn't work when the experiment runs but it functions properly when you load it in the console, consider manual or conditional activation.
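If you switch to manual activation, you activate the experiment yourself once the page content it depends on is ready. A minimal sketch of that call, using Optimizely Classic's API queue (the experiment ID below is a placeholder; in the browser the queue lives on `window['optimizely']`):

```javascript
// Optimizely Classic queues API calls on an array that the snippet
// processes once it loads. With the experiment set to Manual activation,
// push an 'activate' call when the page is ready.
// The experiment ID here is a placeholder, not a real experiment.
var optimizely = optimizely || [];
optimizely.push(['activate', 1234567890]);
```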
If a variation appears but doesn’t look quite right, check out this variation code troubleshooting guide.
Not sure if you're seeing a variation code issue or a timing issue? Use your Developer Console to identify the problem. Copy your variation code from Optimizely’s Edit Code box and paste it directly into the console.
If a variation looks good in the browser console but it doesn't work in the Editor, you may have a scoping issue. Check your jQuery scope or ask your developer.
An easy way to share your variation designs with stakeholders is to send the URL with force parameters attached. However, visitors who enter through the force variation only see variation you selected on the specified page; they won’t be able to navigate through a multi-page experiment unless they manually add the parameter on every page.
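Building those shareable links can be sketched as follows. The parameter format shown (`optimizely_xEXPERIMENTID=VARIATIONINDEX`) is Optimizely Classic's force parameter; the experiment ID and URLs are made-up examples:

```javascript
// Build a QA link that forces a specific variation to display.
// Assumed format (Optimizely Classic): ?optimizely_x<EXPERIMENT_ID>=<INDEX>,
// where index 0 is the original and 1 is the first variation.
function forceVariationUrl(pageUrl, experimentId, variationIndex) {
  var separator = pageUrl.indexOf('?') === -1 ? '?' : '&';
  return pageUrl + separator + 'optimizely_x' + experimentId + '=' + variationIndex;
}

// Example with a hypothetical experiment ID:
forceVariationUrl('http://www.example.com/testpage', '1234567890', 1);
// → "http://www.example.com/testpage?optimizely_x1234567890=1"
```

Remember that the parameter applies only to the page it's on, so share one link per page of a multi-page experiment.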
3. Targeting conditions
Next, use Optimizely's data object to check your experiment's targeting. You'll set a test cookie and target it to create strong, persistent audience restrictions. When you set the experiment live, only visitors with the test cookie can see the experiment.
The test cookie method is the best way to rigorously QA your experiment end-to-end. We strongly recommend that you build this method into your standard QA process.
QA method: Test cookie
Place to look: Data object in console tab
By typing commands into your browser console, you'll discover how Optimizely buckets you into experiments. Use this to check whether URL and audience targeting are working correctly.
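A sketch of that check, using property names from Optimizely Classic's client-side data object (`activeExperiments`, `variationNamesMap`); in the browser console you'd pass `window['optimizely']` rather than the mock used here, and the IDs and names are made-up examples:

```javascript
// List the experiments you're bucketed into, with the variation shown.
// Property names follow Optimizely Classic's data object; confirm them
// against your snippet version.
function summarizeBuckets(optly) {
  return (optly.activeExperiments || []).map(function (id) {
    return id + ' -> ' + (optly.variationNamesMap[id] || 'unknown variation');
  });
}

// Mock of the data object, standing in for window['optimizely']:
var mockOptimizely = {
  activeExperiments: ['1234567890'],
  variationNamesMap: { '1234567890': 'Variation #1' }
};
summarizeBuckets(mockOptimizely); // → ["1234567890 -> Variation #1"]
```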
Sometimes the test cookie can be useful in a staging environment as well as production. If large internal teams have access to the staging environment, launching an experiment can cause confusion. Use the test cookie to keep your experiment private for stakeholders involved in testing. If you have a large amount of control over your staging environment, you may decide to launch the experiment directly in staging and QA it live before copying it to production.
If your company doesn't allow adding cookies to staging or production environments, you can still create heavy audience restrictions so you don’t show the experiment to visitors yet. Use the test query parameter or IP targeting instead.
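Setting and verifying the test cookie can be sketched as below. The cookie name and value (`optimizely_qa=true`) are arbitrary examples you'd match in your audience condition; in the browser you'd pass the real `document.cookie` string to the helper:

```javascript
// In the browser console, set the QA cookie your audience targets:
//   document.cookie = 'optimizely_qa=true; path=/';
// Then confirm it's present before you start the experiment.
// (Pass document.cookie as the argument in the browser.)
function hasQaCookie(cookieString) {
  return cookieString.split('; ').some(function (pair) {
    return pair === 'optimizely_qa=true';
  });
}

hasQaCookie('_ga=GA1.2.1; optimizely_qa=true'); // → true
hasQaCookie('_ga=GA1.2.1');                     // → false
```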
At a glance:
If the targeting conditions fail when you expected them to pass, or vice versa, go back and check your URL Targeting and Audience conditions. Use the URL Match Validator to make sure that the URL you're testing passes your targeting conditions. Then check whether any audience conditions you set fail to match the conditions you used to view the page.
If you're still having problems, check the Optimizely log and filter by your experiment ID for greater detail.
4. Goals and data capture
Finally, check that your goals fire properly and verify that data is captured on your Results page and in your analytics integrations.
QA method: Test cookie
Place to look: Network traffic and your analytics dashboard
Network traffic is a log within your browser of all events that occur as you interact with a page. When you trigger a goal in Optimizely, it fires a tracking call, which will be picked up in the network traffic.
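To QA a custom event goal, you can trigger it from the console and watch for the resulting tracking call in the Network tab (filter for "optimizely"). A minimal sketch using Optimizely Classic's `trackEvent` API call; the event name is a hypothetical example, and in the browser the queue is `window['optimizely']`:

```javascript
// Fire a custom event goal manually. Once the Optimizely Classic snippet
// processes this queued call, a tracking request should appear in your
// network traffic. 'signup-button-click' is a made-up event name.
var optimizely = optimizely || [];
optimizely.push(['trackEvent', 'signup-button-click']);
```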
At a glance:
If you want to test goals on a specific variation, use both the force variation and force tracking parameters.
Because the force variation parameter disables goal tracking by default, you should add another parameter to re-enable tracking:
Your URL would look something like: http://www.example.com/testpage?opti..._tracking=true
Once your QA team verifies every item on your QA checklist, you're ready to launch the test to visitors. Remove the test cookie audience condition. Then, select your experiment on the Optimizely Home page and click Start Experiment.
Congratulations! Your experiment is now live for visitors to your site.