
Optimizely Knowledge Base

Create tests in a Full Stack project

This article covers Optimizely X Full Stack. If you're using Optimizely Classic Mobile, check this article out instead.
Relevant products:
  • Optimizely X Full Stack

This article will help you:

  • Create a test in a Full Stack project
  • Set up test parameters, including the experiment key, traffic allocation, variations, audiences, and metrics
  • Find the experiment code block that Optimizely generates for your primary language

If you're using Optimizely X Full Stack, you'll create tests inside of Full Stack projects. Once you've set up a Full Stack project in the primary language that you'll use to split traffic in your test, you can create your first test.

See additional resources
Here are all our articles about the Optimizely dashboard, which you'll use to create projects, add collaborators, manage privacy settings, and more.

Here's our developer documentation, where you'll find code samples, full references for our SDKs, and getting started guides.

This article explains how to create a test using the Optimizely X Full Stack interface, but not how to actually set up the test with the SDK (which is the majority of the effort). For information about using the SDK, check out our developer documentation.

To get started, navigate to the Experiments dashboard and click Create New... > A/B Test.


1. Set an experiment key

The experiment key is a unique identifier for your test. You can use it as a reference in your code.

Specify an experiment key. For example, "NEW_SEARCH_ALGORITHM."


Your experiment key must contain only alphanumeric characters, hyphens, and underscores. The key must also be unique for your Optimizely project so you can correctly disambiguate tests in your application.

Don’t change the experiment key without making the corresponding change in your code. If you want to learn more, read about test activation in our developer documentation.
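As a quick sanity check, the key-format rule above can be expressed as a simple regular expression. This is a sketch for illustration only; Optimizely validates the key for you in the interface:

```python
import re

# Experiment keys may contain only alphanumeric characters,
# hyphens, and underscores.
KEY_PATTERN = re.compile(r'^[A-Za-z0-9_-]+$')

def is_valid_experiment_key(key):
    """Return True if the key uses only the allowed characters."""
    return bool(KEY_PATTERN.match(key))
```

For example, is_valid_experiment_key('NEW_SEARCH_ALGORITHM') passes, while a key containing spaces does not.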

2. Set experiment traffic allocation

The traffic allocation is the fraction of your total traffic to include in the test, specified as a percentage. In this example, we allocated 50% of traffic to the test:


The traffic allocation is determined at the point where you call activate() in the SDK.

In the example above, the test is triggered when a visitor performs a search, but not for every visitor: 50% of users who search will be in the test, and the other 50% won't. Users who never perform a search won't be in the test at all. In other words, the traffic allocation percentage applies only to traffic that reaches your activate() call, not to all traffic for your application.

You can also add your test to an exclusion group at this point.
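To build intuition for how a 50% allocation behaves at the activate() call, here is a self-contained sketch of deterministic bucketing. This is not Optimizely's actual algorithm (the SDK uses its own internal hashing); it only illustrates that each user gets a stable in/out decision that works out to roughly the configured percentage across many users:

```python
import hashlib

TRAFFIC_ALLOCATION = 0.5  # 50% of traffic, as configured in step 2

def in_experiment(user_id, experiment_key, allocation=TRAFFIC_ALLOCATION):
    """Deterministically decide whether a user falls inside the allocation.

    Hashing (experiment_key, user_id) gives every user a stable bucket in
    [0, 10000), so the same user always gets the same decision.
    """
    digest = hashlib.md5(f'{experiment_key}:{user_id}'.encode()).hexdigest()
    bucket = int(digest, 16) % 10000
    return bucket < allocation * 10000
```

Calling in_experiment() twice for the same user always returns the same answer, and across a large number of users roughly half land inside the allocation.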

3. Set variation keys and traffic distribution

Variations are the different code paths you want to test.

Each variation requires a unique variation key to identify the variation in the test. In this example, we added two variations, Var1 and Var2:


You must specify at least one variation. There’s no limit to how many variations you can create.

A short, human-readable description for each variation will help make reports clear.

You can specify any traffic distribution you’d like. By default, variations are given equal traffic distribution.

You can also use one variation to gradually roll out a feature without A/B testing the impact. Make sure you execute the correct code paths both when users are bucketed into the control variation and in the default case, when visitors are not allocated to the test.
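The branching described above can be sketched as a simple dispatch on the variation key returned by activate(). The variation keys here match the Var1/Var2 example; the function and return values are hypothetical stand-ins for your real code paths:

```python
def handle_search(variation):
    """Route a request to the code path for the bucketed variation.

    `variation` is the key returned by activate(); None (or any
    unknown key) falls through to the default, non-experiment path.
    """
    if variation == 'Var1':
        return 'original search algorithm'   # control code path
    elif variation == 'Var2':
        return 'new search algorithm'        # treatment code path
    else:
        return 'original search algorithm'   # default: user not in the test
```

Note that the default case runs the same code as the control: visitors outside the test should see your existing behavior.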

To learn more about activating tests, check out experiment activation in our developer documentation. You can also learn about distribution modes and Stats Accelerator settings.

4. Add an audience

Use audiences if you want to show your test only to certain groups of users. You don’t have to set up audiences if you don’t need them.

Click the add (+) button to add an existing audience. Or, click Create new audience to define a new audience.

Optimizely takes the union of audiences as the eligible traffic for the test: a user browsing on mobile web qualifies for the test, and so does a user browsing on an iPhone.

Learn more about defining audiences. Or, read about attributes for Full Stack projects, including passing audience data correctly in your application code.

Audience evaluation may affect the exact traffic allocation you've specified for the test. For example, imagine that mobile users (iOS, Android, or mobile web) constitute 40% of total traffic. In step 2, we set the experiment traffic allocation to 50% of total traffic. So in this example, the expected fraction of total traffic in the test is 40% of the 50% traffic allocation, or 20% of total traffic. The actual fraction could be higher or lower; it depends on how many mobile users there actually are during the test.
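The arithmetic above, as a worked check using the numbers from this example:

```python
mobile_share = 0.40        # mobile users: 40% of total traffic
traffic_allocation = 0.50  # experiment traffic allocation from step 2

# Expected fraction of *total* traffic that enters the test:
expected_in_test = mobile_share * traffic_allocation
print(expected_in_test)  # 0.2, i.e. 20% of total traffic
```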

5. Add a metric

Next, add events that you’re tracking with the Optimizely SDKs as metrics to measure impact. Add at least one metric to a test.

Events help you track the actions visitors take on your site, like clicks, pageviews, and form submissions. When you add an event to a test to measure success, it's called a metric. You have to create events before you can use them as metrics. Currently, only one type of metric is available: a binary conversion rate on an event. Check out this article for details about events and metrics.

Click the add (+) button to add existing events as metrics to your test.


To re-order the metrics, click and drag them into place. 

The top metric in a test is the primary metric. Stats Engine uses the primary metric to determine whether an A/B test wins or loses, overall. Learn about the strategy behind primary and secondary metrics.

Learn more about tracking events with an Optimizely SDK in our developer documentation.
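As a sketch of what a "binary conversion rate on an event" means, here is a self-contained example that tallies conversions per variation. The event name and data are made up for illustration; in a real application you would call the SDK's track() and let Optimizely's Stats Engine compute this for you:

```python
# (user_id, variation, converted) tuples: did the user fire the
# hypothetical 'search_result_clicked' event at least once?
observations = [
    ('u1', 'Var1', True),
    ('u2', 'Var1', False),
    ('u3', 'Var2', True),
    ('u4', 'Var2', True),
]

def conversion_rate(observations, variation):
    """Fraction of users in `variation` who converted at least once
    (binary conversion: each user counts 0 or 1, however many events
    they fired)."""
    users = [conv for _, var, conv in observations if var == variation]
    return sum(users) / len(users)
```

Here conversion_rate(observations, 'Var1') is 0.5 and conversion_rate(observations, 'Var2') is 1.0; Stats Engine compares such per-variation rates to call a winner.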

6. Add experiment code

Once you enter the experiment key and variation keys, Optimizely creates a code block in your primary language at the bottom of the page.

  1. Copy and paste the code block directly into your application code.

    For example, Optimizely created this Python code block for a Python project:

    The sample code block shows how to call activate() with your experiment key and a user ID that you provide, and how to branch on the different variations. The code block distinguishes between bucketing the user in the control variation of the test (variation_a) and the default case, where the user doesn't enter the test.

  2. Click Create Experiment to complete your test setup.

Here's some example code for passing attributes to activate() so that Optimizely can evaluate the audience in the SDK:

# attributes of the user
attributes = {'device': user.device}
# activate user in the experiment
variation = optimizely.activate('SEARCH_RESULTS_ALGORITHM', user_id, attributes)
if variation == 'variation_a':
  # User is in the control variation.
  # Roughly 10% of total traffic.
  pass
elif variation == 'variation_b':
  # User is in the treatment variation.
  # Roughly 10% of total traffic.
  pass
else:
  # User is not in the experiment.
  # Roughly 80% of total traffic (the 50% of mobile traffic not
  # allocated to the test, plus the 60% of traffic that isn't mobile).
  pass

To learn more about correctly passing audience data in your application code, check out user attributes in our developer documentation.