QA feature tests and A/B tests in Optimizely X Full Stack
Before you publish an experiment or roll out a feature live to your visitors, it's important to make sure it works the way you expect.
Optimizely X Full Stack provides tools to help you QA your experiments. Use them to test your experiment end to end, checking that each variation of your app looks, feels, and operates correctly. It's also important to verify that any metrics being tracked report correctly on the Results page.
Here's a QA checklist:
1. Start your experiment in a non-production environment.
2. Force yourself into each experience using forced bucketing or whitelisted users.
3. Trigger each conversion event being tracked by the experiment.
4. If you don't see the variation or event tracking you expect, increase the log level and consult your error handler to debug.
5. Repeat steps 2 and 3 for each variation of your experiment.
6. Confirm your results. The Results page updates on a five-minute interval, so you may have to wait up to five minutes for it to refresh; the time of the last update is shown on the page.
7. Launch in production.
Below, we provide more detail about certain steps.
We recommend using a feature called Environments to QA Optimizely Full Stack features, feature tests, and A/B tests. Many mature optimization teams have access to separate staging and production environments. Separate environments reduce the risk of accidentally launching an unfinished or unverified experiment to the live site.
Optimizely gives each Project a default Environment called “Production.” Add a staging Environment to your Project to use for QA testing. Each environment has a different datafile with which you will initialize our SDKs.
With Production and Staging Environments, you'll be able to choose which Environment an experiment runs in.
You don't necessarily need separate deployment environments to make use of Optimizely's Environments feature. You can, for example, create an Environment for users with a test cookie. You just need to implement logic that initializes Optimizely with the test-cookie Environment's datafile when the user has your test cookie.
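As a sketch of that logic, assuming a Python web app: the SDK keys and cookie name below are hypothetical placeholders, and the URL follows the typical Optimizely CDN datafile pattern of `https://cdn.optimizely.com/datafiles/<SDK_KEY>.json`, where each Environment has its own SDK key.

```python
# Hypothetical SDK keys for each Environment -- substitute your own.
PRODUCTION_SDK_KEY = "prod_sdk_key"
TEST_COOKIE_SDK_KEY = "staging_sdk_key"

def datafile_url_for_request(cookies):
    """Pick the datafile URL for the Environment this request should use.

    `cookies` is a dict of the request's cookies; "opty_qa" is a
    hypothetical test-cookie name.
    """
    if cookies.get("opty_qa") == "true":
        sdk_key = TEST_COOKIE_SDK_KEY
    else:
        sdk_key = PRODUCTION_SDK_KEY
    return "https://cdn.optimizely.com/datafiles/%s.json" % sdk_key
```

You would then fetch that datafile and use it to initialize the SDK for the request, so users with the test cookie see the QA Environment while everyone else sees Production.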
For more information on Environments, see our article on how to use environments to test your experiment code.
When activating an experiment, you must provide a userId. Normally, Optimizely uses the userId to randomly bucket the user into a variation. When QA testing, you'll want to check the behavior of each variation, so there are two options you can use to tell Optimizely to skip randomized bucketing and return a specific variation.
Both of these options require the experiment to be running, so go ahead and start the experiment in your staging Environment. If for some reason you are unable to use Environments, you can start your experiment with 0% of traffic allocated to it and use these options to test the experiment.
Use the forceVariation APIs to set a specific variation for a given userId within your code.
If your experiment is running on a web server, you may want to use a query parameter to set the variation and make testing easier. To do this, detect a parameter in the URL such as ?force-opty-<experiment_key>=<variation_key>, for example ?force-opty-my_optimizely_experiment=variation. Parse this parameter and use the forceVariation API to force that specific variation of the experiment for the current user before activating the experiment. Once that functionality is set up, forcing yourself into each variation is as simple as adding the parameter to your URL.
Use whitelisting to specify the variation for certain userIds within your experiment configuration rather than in your code.
If you have the ability to set your userId when QA testing, you can tell Optimizely which variation you should get based on the userId. This allows you to set the variation within your Optimizely configuration rather than setting the variation within your code.
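The query-parameter approach described above can be sketched in Python. The force-opty- prefix follows the convention from the text; the commented-out call at the bottom shows how the parsed result would feed the Python SDK's set_forced_variation method, assuming an optimizely_client and user_id exist in your request handler.

```python
from urllib.parse import urlparse, parse_qs

FORCE_PREFIX = "force-opty-"  # parameter-naming convention from the text

def forced_variations_from_url(url):
    """Parse ?force-opty-<experiment_key>=<variation_key> parameters.

    Returns a dict mapping experiment keys to the variation keys the
    tester asked to be forced into.
    """
    params = parse_qs(urlparse(url).query)
    forced = {}
    for name, values in params.items():
        if name.startswith(FORCE_PREFIX) and values:
            forced[name[len(FORCE_PREFIX):]] = values[0]
    return forced

# In a request handler, apply each forced variation before activating:
# for experiment_key, variation_key in forced_variations_from_url(url).items():
#     optimizely_client.set_forced_variation(experiment_key, user_id, variation_key)
```

Because forcing happens per request, a tester can move between variations just by editing the URL, without redeploying or changing the experiment configuration.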
Here are some tools to help you identify issues if results aren't showing as you expect.
Logger – Increase the log level to see when events fire, when activations occur, and more.
Error Handler – Handle cases when unknown experiment keys or event keys are referenced.
Notification Listeners – Use notification listeners to bind callbacks to experiment activations and event tracking.
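To illustrate how notification listeners work, here is a minimal self-contained stand-in for a notification center. This is not the SDK's implementation, and exact method names vary by SDK (the Python SDK, for example, exposes notification_center.add_notification_listener on the client); it only shows the pattern of binding a callback that fires on activation so you can log what happened during QA.

```python
class NotificationCenter:
    """Simplified stand-in: registers callbacks per notification type
    and invokes them when a notification is sent."""

    def __init__(self):
        self._listeners = {"ACTIVATE": [], "TRACK": []}

    def add_notification_listener(self, notification_type, callback):
        self._listeners[notification_type].append(callback)

    def send_notifications(self, notification_type, *args):
        for callback in self._listeners[notification_type]:
            callback(*args)

# Bind a callback that records each activation for inspection during QA.
activations = []
center = NotificationCenter()
center.add_notification_listener(
    "ACTIVATE",
    lambda experiment, user_id, variation: activations.append(
        (experiment, user_id, variation)
    ),
)

# In the real SDK the client sends this when activate() buckets a user;
# here we simulate one activation.
center.send_notifications("ACTIVATE", "my_experiment", "qa_user", "variation_a")
```

During QA, a listener like this lets you confirm that each activation and tracked event actually fired with the experiment, user, and variation you expected, independently of the Results page delay.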