

Optimizely Knowledge Base

QA in Optimizely X Full Stack


Before you publish a test or roll out a feature live to your visitors, it's important to make sure it works the way you expect. 

Optimizely X Full Stack provides tools to help you QA your tests. Use them to QA each test end-to-end: check that every variation of your app looks, feels, and operates correctly, and that any metrics the test tracks report correctly on the Results page. 

Here's a QA checklist:

  1. Start your test in a non-production environment.

  2. Force yourself into each experience using forced bucketing or whitelisted users.

  3. Trigger each conversion event being tracked by the test.

  4. If you don’t see the variation or event tracking you’d expect, increase the log level and consult your error handler to debug.

  5. Repeat steps 2 and 3 for each variation of your test.

  6. Confirm your results. The Results page updates on a 5-minute interval, so you may need to wait up to 5 minutes for new data to appear. The time of the last update is shown on the Results page.

  7. Launch in Production.

Below, we provide more detail about certain steps.


Environments

We recommend using a feature called Environments to QA Optimizely X Full Stack features, feature tests, and A/B tests. Many mature optimization teams have access to separate staging and production environments. Separate environments reduce the risk of accidentally launching an unfinished or unverified test to the live site.

Optimizely gives each project a default environment called “Production.” Add a staging environment to your project to use for QA testing. Each environment has a different datafile, with which you will initialize our SDKs. 
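Because each environment has its own datafile, your application needs to pick the right one before initializing the SDK. Here is a minimal sketch of that selection logic, assuming an `APP_ENV` environment variable and placeholder SDK keys (both are assumptions, not part of the Optimizely configuration itself):

```python
import os

# Hypothetical SDK keys -- each Optimizely environment serves its own datafile.
DATAFILE_URLS = {
    "production": "https://cdn.optimizely.com/datafiles/<production-sdk-key>.json",
    "staging": "https://cdn.optimizely.com/datafiles/<staging-sdk-key>.json",
}

def datafile_url(env=None):
    """Return the datafile URL for the current environment.

    Falls back to the APP_ENV environment variable (an assumed convention),
    defaulting to production.
    """
    env = env or os.environ.get("APP_ENV", "production")
    return DATAFILE_URLS[env]

# Fetch the datafile at this URL and pass its JSON contents to the SDK when
# constructing the Optimizely client.
```

With this in place, deploying the same code to staging and production automatically initializes the SDK against the matching environment's datafile.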



When you set up the experiment, you can choose the environment where you want to run it.


You don’t necessarily need separate staging and production infrastructure to make use of Optimizely’s Environments feature. You can, for example, create an environment for users with a test cookie. You just need to implement logic that initializes Optimizely with the test cookie environment when the user has your test cookie.
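That cookie check can be a one-line decision before you pick which datafile to load. A minimal sketch, assuming a hypothetical cookie named `opty_qa` and environment names of your choosing:

```python
def environment_for_request(cookies, test_cookie="opty_qa", default="production"):
    """Choose which Optimizely environment's datafile to initialize with.

    `cookies` is a dict of cookie names to values; `opty_qa` is a
    hypothetical cookie name used to opt testers into the QA environment.
    """
    return "qa" if test_cookie in cookies else default
```

Testers set the cookie in their browser (or HTTP client) and transparently get the QA environment, while all other visitors stay on production.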

For more information on environments, see our article on how to use environments to QA your test code.

Force variation

When activating a test, you must provide a userId. Normally, Optimizely uses the userId to randomly bucket the user into a variation. When QA testing, you should check the behavior of each variation, so there are two options you can use to tell Optimizely to skip the randomized bucketing and return a specific variation. 

Both of these options require the test to be running, so go ahead and start the test in your staging environment. If for some reason you are unable to use Environments, you can start your test with 0% of traffic allocated to it and use these options to QA the test. 

Forced bucketing

Use the forceVariation APIs to set a specific variation for a given userId within your code. 

If your test runs on a web server, you may want to use a query parameter to set the bucket and make it easy to test your variations. To do this, detect a parameter in the URL such as ?force-opty-<experiment_key>=<variation_key>, for example ?force-opty-my_optimizely_experiment=variation. Parse this parameter and use the forceVariation API to force that specific variation of the test for the current user before activating the test. Once that is set up, forcing yourself into each variation is as simple as adding the parameter to your URL.
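The parameter-parsing step can be sketched with the standard library alone. The `force-opty-` prefix follows the convention described above; the forcing call in the comment assumes the Python SDK's forced-variation API:

```python
from urllib.parse import urlparse, parse_qs

FORCE_PREFIX = "force-opty-"

def forced_variations(url):
    """Extract {experiment_key: variation_key} pairs from a QA URL.

    Looks for query parameters of the form
    ?force-opty-<experiment_key>=<variation_key>.
    """
    query = parse_qs(urlparse(url).query)
    forced = {}
    for param, values in query.items():
        if param.startswith(FORCE_PREFIX) and values:
            forced[param[len(FORCE_PREFIX):]] = values[0]
    return forced

# Before activating the test, apply each forced pair via the SDK, e.g.
# (Python SDK): set_forced_variation(experiment_key, user_id, variation_key)
```

Since only requests carrying the parameter are affected, the logic is safe to leave in place while you QA in a non-production environment.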


Whitelisting

Use whitelisting to specify the variation for certain userIds within your test configuration rather than in your code. 

If you have the ability to set your userId when QA testing, you can tell Optimizely which variation you should get based on the userId. This allows you to set the variation within your Optimizely configuration rather than setting the variation within your code.


Debugging

Here are some tools to help you identify issues if results aren't showing as you expect.

Logger – Increase the log level to see when events fire, when activations occur, and more.

Error handler – Handle cases when unknown experiment keys or event keys are referenced.

Notification listeners – Use notification listeners to bind callbacks to test activations and event tracking.
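To illustrate what notification listeners buy you during QA, here is a stdlib-only sketch of the callback pattern. This `NotificationCenter` class is a stand-in for illustration, not the SDK's actual API; the real SDKs expose their own registration methods for activation and tracking callbacks:

```python
class NotificationCenter:
    """Minimal stand-in for an SDK notification center (illustration only)."""

    def __init__(self):
        self._listeners = {"activate": [], "track": []}

    def add_listener(self, kind, callback):
        """Register a callback for 'activate' or 'track' notifications."""
        self._listeners[kind].append(callback)

    def notify(self, kind, payload):
        """Invoke every callback registered for this notification kind."""
        for callback in self._listeners[kind]:
            callback(payload)

# During QA, record every activation and tracked event so you can confirm
# that each variation and conversion fires as expected.
seen = []
center = NotificationCenter()
center.add_listener("activate", lambda p: seen.append(("activate", p)))
center.add_listener("track", lambda p: seen.append(("track", p)))

center.notify("activate", {"experiment": "my_experiment", "variation": "variation_a"})
center.notify("track", {"event": "purchase"})
```

In a real integration, the listeners would log or assert on the payloads the SDK delivers, giving you a live trace of activations and conversions while you step through each variation.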