Create experiments in a Full Stack project

Relevant products:
  • Optimizely X Full Stack

THIS ARTICLE WILL HELP YOU:
  • Create an experiment in a Full Stack project
  • Set up experiment parameters, including the experiment key, traffic allocation, variations, audiences, and metrics
  • Find the experiment code block that Optimizely generates for your primary language

If you're using Optimizely X Full Stack, you'll create experiments inside Full Stack projects. Once you've set up a Full Stack project in the primary language you'll use to split traffic in your experiment, you can create your first experiment.

This article explains how to create an experiment using the Optimizely X Full Stack interface. For information about using the SDK, check out our developer documentation.

To get started, navigate to the Experiments dashboard and click Create New... > A/B Test.

[Screenshot: creating a new A/B test from the Experiments dashboard]

Does this interface look different from the one you see? If so, check out this article instead.

1. Set an experiment key

The experiment key is a unique identifier for your experiment. You can use it as a reference in your code.

Specify an experiment key. For example, "NEW_SEARCH_ALGORITHM".

[Screenshot: setting the experiment key for a new A/B test]

Your experiment key must contain only alphanumeric characters, hyphens, and underscores. The key must also be unique within your Optimizely project so you can correctly disambiguate experiments in your application.

Don’t change the experiment key without making the corresponding change in your code. If you want to learn more, read about experiment activation in our developer documentation.
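For instance, here's a minimal sketch of referencing that key with the Python SDK (the client setup is abbreviated, and the user ID is a placeholder):

from optimizely import optimizely

# Build the SDK client from your project's datafile (fetching the
# datafile is omitted here for brevity).
optimizely_client = optimizely.Optimizely(datafile)

# The first argument must match the experiment key exactly as entered above.
variation = optimizely_client.activate('NEW_SEARCH_ALGORITHM', 'user_123')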

2. Set experiment traffic allocation

The traffic allocation is the fraction of your total traffic to include in the experiment, specified as a percentage. In this example, we allocated 50% of traffic to the experiment:

[Screenshot: experiment traffic allocation set to 50%]

The traffic allocation is determined at the point where you call activate() in the SDK.

In the example above, the experiment is triggered when a visitor performs a search, but it won't be triggered for every user: 50% of users who search will be in the experiment, and the other 50% won't. Users who never perform a search won't be in the experiment at all. In other words, the traffic allocation percentage applies only to the traffic that reaches your activate() call, not to all traffic for your application.
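As a sketch of how this plays out in code (the search handler and run_search helper are hypothetical; optimizely_client is an instantiated SDK client, as in the sketch in step 1):

def handle_search(user, query):
    # Only visitors who perform a search reach this call, so only they
    # are considered for the experiment's 50% traffic allocation.
    variation = optimizely_client.activate('NEW_SEARCH_ALGORITHM', user.id)

    if variation is not None:
        # The user fell inside the allocation and was bucketed into a variation.
        return run_search(query, algorithm=variation)

    # The user searched but fell outside the allocation: use the default path.
    return run_search(query, algorithm='default')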

You can also add your experiment to an exclusion group at this point.

3. Set variation keys and traffic distribution

Variations are the different code paths you want to experiment on.

Each variation requires a unique variation key to identify the variation in the experiment. In this example, we added two variations, var1 and var2:

[Screenshot: variation keys var1 and var2 with traffic distribution]

You must specify at least one variation. There’s no limit to how many variations you can create.

A short, human-readable description for each variation will help make reports clear.

You can specify any traffic distribution you’d like. By default, variations are given equal traffic distribution.

You can also use a single variation to gradually roll out a feature without A/B testing its impact. Make sure you execute the correct code path both when users are bucketed into the control variation and in the default case, when visitors aren't allocated to the experiment (see the sketch below).
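For example, a gradual rollout might look like this sketch (the experiment key, variation key, and helper functions are hypothetical):

# Roll out a feature behind a single variation, increasing the
# experiment's traffic allocation over time.
variation = optimizely_client.activate('NEW_CHECKOUT_FLOW', user_id)

if variation == 'enabled':
    # The user was bucketed into the rollout variation.
    show_new_checkout()
else:
    # Covers both a control bucket and users outside the traffic
    # allocation; both should see the existing behavior.
    show_existing_checkout()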

To learn more about activating experiments, check out experiment activation in our developer documentation. You can also learn about distribution modes and Stats Accelerator settings.

4. Add an audience

Use audiences if you want to show your experiment only to certain groups of users. You don’t have to set up audiences if you don’t need them.

Click an existing audience to add it. Or, click Create new audience to define a new audience.

[Screenshot: adding audiences to the experiment]

Optimizely takes the union of the audiences you add as the eligible traffic for the experiment. So in this example, a user browsing on the mobile web qualifies for the experiment, and a user browsing on an iPhone also qualifies.

Learn more about defining audiences. Or, read about attributes for Full Stack projects, including passing audience data correctly in your application code.
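As a brief sketch, the SDK evaluates audience conditions from the attributes you pass to activate(); the 'device' key below assumes an attribute with that name is defined in your project:

# The attribute key and value must line up with how the audience's
# conditions are defined in your Optimizely project.
attributes = {'device': 'iphone'}

variation = optimizely_client.activate('NEW_SEARCH_ALGORITHM', 'user_123', attributes)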

Audience evaluation may affect the exact traffic allocation you’ve specified for the experiment. For example, imagine that mobile users (iOS, Android, or mobile web) constitute 40% of total traffic. In step 2, we set the experiment traffic allocation to 50%, so the expected fraction of traffic in the experiment is 40% of that 50% allocation, or 20% of total traffic. The actual fraction could be higher or lower; it depends on how many mobile users there actually are during the experiment.

5. Add a metric

Next, add events that you’re tracking with the Optimizely SDKs as metrics to measure impact. Add at least one metric to an experiment.

Events help you track the actions visitors take on your site, like clicks, pageviews, and form submissions. When you add an event to an experiment to measure success, it's called a metric. You have to create events before you can use them as metrics. Currently, only one type of metric is available: a binary conversion rate on an event. Check out this article for details about events and metrics.

Click existing events to add them as metrics to your experiment.

[Screenshot: adding events as metrics]

To re-order the metrics, click and drag them into place. 

The top metric in an experiment is the primary metric. Stats Engine uses the primary metric to determine whether an A/B test wins or loses, overall. Learn about the strategy behind primary and secondary metrics.

Learn more about tracking events with an Optimizely SDK in our developer documentation.
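For reference, here's a minimal sketch of recording a conversion with the Python SDK (the event key is hypothetical):

# 'search_conversion' must match an event key defined in your project.
# Use the same user ID you passed to activate() so the conversion is
# attributed to the correct variation.
optimizely_client.track('search_conversion', 'user_123')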

6. Add experiment code

After you enter the experiment key and variation keys, Optimizely generates a code block in your primary language at the bottom of the page.

  1. In your language of choice, copy and paste the experiment code block into your application code.
    [Screenshot: the generated experiment code block]

    The sample code block shows how to call activate() for your experiment key, a user ID that you provide, and the different variations. The code block distinguishes between bucketing the user in the control variation of the experiment (variation_a) and the default case, where the user doesn't enter the experiment.

  2. Click Create Experiment to complete your experiment setup.

Here's some example code that passes attributes to activate() so that Optimizely can evaluate the audience in the SDK:

# Attributes of the user; the 'device' key must match an attribute
# defined in your Optimizely project.
attributes = {'device': user.device}

# Activate the user in the experiment.
variation = optimizely.activate('SEARCH_RESULTS_ALGORITHM', user.id, attributes)

if variation == 'variation_a':
    # User is in the control variation.
    # Roughly 10% of total traffic, assuming the default 50/50 distribution.
    execute_default_code()
elif variation == 'variation_b':
    # User is in the treatment variation.
    # Roughly 10% of total traffic.
    execute_treatment_code()
else:
    # User is not in the experiment.
    # Roughly 80% of total traffic (the 60% who aren't mobile users, plus
    # the 20% of mobile traffic outside the 50% allocation).
    execute_default_code()

To learn more about correctly passing audience data in your application code, check out user attributes in our developer documentation.