
Create experiments in a Full Stack project

THIS ARTICLE WILL HELP YOU:
  • Create an experiment in a Full Stack project
  • Set up experiment parameters like the experiment key, traffic allocation, variations, audiences, and metrics
  • Find the experiment code block that Optimizely generates for your primary language

If you're using Optimizely X Full Stack, you'll create experiments inside Full Stack projects. Once you've set up a Full Stack project in the primary language you'll use to split traffic in your experiment, you can create your first experiment.

This article explains how to create an experiment using the Optimizely X Full Stack interface. For information about using the SDK, check out our developer documentation.

To get started, navigate to the Experiments dashboard and click Create New... > A/B Test or Create New... > Feature Test.

create-test.png

Get started with Optimizely X Full Stack describes the differences between A/B tests and feature tests in Full Stack.

Create an experiment

1. Set an experiment key

The experiment key is a unique identifier for your experiment. You can use it as a reference in your code.

Your experiment key must contain only alphanumeric characters, hyphens, and underscores. The key must also be unique for your Optimizely project so you can correctly disambiguate experiments in your application.

If you are setting up a feature test, select the feature you want to use. After you select a feature, Optimizely automatically generates an experiment key by appending "_test" to the feature key you selected; for example, selecting a feature with the key search_ranking would generate the experiment key search_ranking_test. You can edit the experiment key if you like, as long as the key remains unique.

feature-test-exp-name.png

If you are setting up an A/B test, you'll specify your own experiment key, for example, "NEW_SEARCH_ALGORITHM".

If you need to change the experiment key after you save your experiment, click Settings in the Manage Experiment (or Manage Feature Test) menu.

settings.png

Don’t change the experiment key without making the corresponding change in your code. To learn more, read about experiment activation in our developer documentation.
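
For example, here's a minimal sketch of how the experiment key ties your code to your Optimizely configuration. The key "NEW_SEARCH_ALGORITHM" comes from the example above; the initialized optimizely client and user object are assumptions:

# The string passed to activate() must match the experiment key in the
# Optimizely UI exactly; changing one without the other breaks activation.
EXPERIMENT_KEY = 'NEW_SEARCH_ALGORITHM'
variation = optimizely.activate(EXPERIMENT_KEY, user.id)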

2. Create variations

Variations are the different code paths you want to experiment on.

Each variation requires a unique variation key to identify the variation in the experiment. In the examples below, we are using two variations, var1 and var2.

A/B test:

ab-vars.png

Feature test:

feature-vars.png

You must specify at least one variation. There’s no limit to how many variations you can create.

By default, Optimizely provides two variations with the keys "variation_1" and "variation_2". Like experiment keys, variation keys are editable. If you add variations, Optimizely suggests keys based on the variation number: "variation_3", "variation_4", and so on. Deleting a variation does not reset this automatic numbering.

A short, human-readable description for each variation will help make reports clear.

You can specify any traffic distribution you’d like. By default, variations are given equal traffic distribution.

You can also use a single variation to gradually roll out a feature without A/B testing its impact. Make sure you execute the correct code path both when users are bucketed into the variation and in the default case, when visitors are not allocated to the experiment.
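
Here's a minimal sketch of that pattern, assuming an initialized Python SDK client and hypothetical keys (new_checkout_rollout, variation_1):

# Single-variation experiment used as a gradual rollout.
variation = optimizely.activate('new_checkout_rollout', user.id)
if variation == 'variation_1':
  # User was allocated to the rollout.
  show_new_checkout()
else:
  # Default case: user was not allocated to the experiment.
  show_current_checkout()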

To learn more about activating experiments, check out experiment activation in our developer documentation. You can also learn about distribution modes and Stats Accelerator settings.

Use feature toggles and configurations

Feature test variations include a feature toggle and the feature configuration (if one exists). By default, the toggle will be set to ON and the configuration default values will load.

A common feature test includes a feature with no configuration, with one variation set to test “toggle=ON” and another variation set to test “toggle=OFF.” This allows you to experiment on the performance of your application in its current form vs. its performance with your new feature enabled.

If the feature includes a feature configuration and you set a variation to “toggle=OFF,” Optimizely will disable the option to modify variable values and revert to the default variable values.

To create variations using feature configurations, update the variable values under each variation.

feature-config.gif

When this feature test is live, the getFeatureVariable APIs will return the values specified for the variation assigned to a visitor. Experimenting using a feature configuration allows you to iterate on a feature in between code deploys. Run a sequence of experiments with different combinations of variable values to determine the optimal experience for your users.
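
As a sketch, reading a feature configuration with the Python SDK might look like the following. The feature key search_ranking and its variables are hypothetical; is_feature_enabled and the get_feature_variable_* methods are the Python names for the APIs mentioned above:

# Check whether the feature is on for this user, then read its variables.
if optimizely.is_feature_enabled('search_ranking', user.id):
  algorithm = optimizely.get_feature_variable_string('search_ranking', 'algorithm', user.id)
  page_size = optimizely.get_feature_variable_integer('search_ranking', 'page_size', user.id)
  show_search_results(algorithm, page_size)
else:
  # Toggle is OFF for this user's variation, or the user isn't in the test.
  show_default_search_results()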

If a feature test is running on a feature that uses a feature configuration, the feature configuration is locked until you pause the test.

3. Add experiment code

After you enter the unique experiment keys and variation keys, Optimizely creates a code block in your primary language at the bottom of the page.

  1. In your language of choice, copy and paste the experiment code block into your application code.
    experiment_code_ui.png
    The sample code block shows how to call activate() with your experiment key and a user ID that you provide, and how to branch on the variation that is returned. The code block distinguishes between bucketing the user in the control variation of the experiment (variation_a) and the default case, where the user doesn't enter the experiment.

  2. Click Create Experiment to complete your experiment setup.

Here's some example code for passing attributes to activate() so that the SDK can evaluate audience conditions:

# Attributes of the user
attributes = {'device': user.device}
# Activate the user in the experiment
variation = optimizely.activate('SEARCH_RESULTS_ALGORITHM', user.id, attributes)
if variation == 'variation_a':
  # User is in the control variation.
  # Roughly 1% of traffic.
  execute_default_code()
elif variation == 'variation_b':
  # User is in the treatment variation.
  # Roughly 1% of traffic.
  execute_treatment_code()
else:
  # User is not in the experiment.
  # Roughly 98% of traffic: 95% non-mobile traffic, plus 3% of mobile
  # traffic not allocated to the experiment.
  execute_default_code()

To learn more about correctly passing audience data in your application code, check out user attributes in our developer documentation.

4. Add an audience

Use audiences if you want to show your experiment only to certain groups of users. You don’t have to set up audiences if you don’t need them.

Click an existing audience to add it. Or, click Create new audience to define a new audience.

audiences.png

Optimizely takes the union of the audiences you add as the eligible traffic for the experiment. In the example above, a user browsing on mobile web qualifies for the experiment, and a user browsing on an iPhone also qualifies.

Learn more about defining audiences. Or, read about attributes for Full Stack projects, including passing audience data correctly in your application code.

Audience evaluation may affect the exact traffic allocation you’ve specified for the experiment. For example, imagine that mobile users (iOS, Android, or mobile web) constitute 40% of total traffic, and that you set the experiment's traffic allocation (step 6) to 50% of total traffic. The expected fraction of traffic in the experiment is then 40% of that 50% allocation, or 20% of total traffic. The actual fraction could be higher or lower; it depends on how many mobile users there actually are during the experiment.

5. Add a metric

Next, add the events you're tracking with the Optimizely SDKs as metrics to measure impact. Add at least one metric to an experiment.

Events help you track the actions visitors take on your site, like clicks, pageviews, and form submissions. When you add an event to an experiment to measure success, it's called a metric. You have to create events before you can use them as metrics. Currently, only one type of metric is available: a binary conversion rate on an event. Check out this article for details about events and metrics.

Click existing events to add them as metrics to your experiment.

metrics.png

To re-order the metrics, click and drag them into place. 

The top metric in an experiment is the primary metric. Stats Engine uses the primary metric to determine whether an experiment wins or loses, overall. Learn about the strategy behind primary and secondary metrics.

Learn more about tracking events with an Optimizely SDK in our developer documentation.
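
As a minimal sketch, tracking a conversion event with the Python SDK looks like this (the event key search_conversion is hypothetical; track() is the SDK's event-tracking call):

# Record a conversion for this user. Passing the same attributes used at
# activation keeps audience evaluation consistent for the event.
optimizely.track('search_conversion', user.id, attributes)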

6. Set traffic allocation

The traffic allocation is the fraction of your total traffic to include in the experiment, specified as a percentage. For example, you might allocate 50% of traffic to the experiment.

The traffic allocation is determined at the point where you call activate() in the SDK.

In the code example above, the experiment is triggered when a visitor does a search, but it won't be triggered for all users. With a 50% allocation, half of the users who do a search will be in the experiment and half won't; users who never do a search won't be in the experiment at all. In other words, the traffic allocation percentage applies only to the traffic that reaches your activate() call, not to all traffic for your application.
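
Here's a sketch of that point, assuming a hypothetical search handler and the experiment key from the sample above:

def handle_search(user, query):
  # Only users who perform a search reach this call, so only they are
  # considered for bucketing; 50% of them enter under a 50% allocation.
  variation = optimizely.activate('SEARCH_RESULTS_ALGORITHM', user.id)
  if variation is None:
    # User is not in the experiment.
    return default_search(query)
  return search_with_variation(query, variation)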

You can also add your experiment to an exclusion group at this point.

7. Set up whitelisting

Use whitelisting to force specific userIds into a particular variation from within your test configuration rather than in your code.

If you can set your userId when QA testing, whitelisting lets you tell Optimizely which variation that userId should receive, so you can verify each code path without changing your application code.

whitelist.png
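
For example, during QA you might force a known test userId through the experiment. This sketch assumes a user ID qa_user_1 has been whitelisted into variation_b in the Optimizely UI (both IDs are hypothetical):

# qa_user_1 is whitelisted into variation_b in the test configuration, so
# activate() should return that variation regardless of traffic allocation.
variation = optimizely.activate('SEARCH_RESULTS_ALGORITHM', 'qa_user_1')
assert variation == 'variation_b'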

More information

After saving your changes, use environments to QA your workflow.

In our developer documentation, you'll also find code samples, full references for our SDKs, and guides for getting started.