Create experiments in an SDK project

This article is about Optimizely X. If you're using Optimizely Classic Mobile, check this article out instead.
 
Relevant products:
  • Optimizely X Full Stack
  • Optimizely X Mobile
  • Optimizely X OTT

THIS ARTICLE WILL HELP YOU:
  • Create an experiment in an SDK project
  • Set up experiment parameters, including the experiment key, traffic allocation, variations, audiences, and metrics
  • Find the experiment code block that Optimizely generates for your primary language

If you're using Optimizely X Full Stack, Mobile, or OTT, you'll create experiments inside of SDK projects. Once you've set up an SDK project in the primary language that you'll use to split traffic in your experiment, you can create your first experiment.

See additional resources
Here are all our articles about the Optimizely dashboard, which you'll use to create projects, add collaborators, manage privacy settings, and more.

Here's our developer documentation, where you'll find code samples, full references for our SDKs, and getting started guides.

This article explains how to create an experiment using the Optimizely X Full Stack interface, but not how to actually set up the experiment with the SDK (which is the majority of the effort). For information about using the SDK, check out our developer documentation.

To get started, navigate to the Experiments dashboard and click New Experiment.

1. Set an experiment key

The experiment key is a unique identifier for your experiment. You can use it as a reference in your code.

Specify an experiment key. For example, "SEARCH_RESULTS_ALGORITHM".

Your experiment key must contain only alphanumeric characters, hyphens, and underscores. The key must also be unique within your Optimizely project so you can correctly disambiguate experiments in your application.

Don’t change the experiment key without making the corresponding change in your code. If you want to learn more, read about experiment activation in our developer documentation.
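For example, here’s a minimal sketch of how the key is referenced at activation time (assuming a Python SDK client instance named optimizely, as in the full example later in this article):

# The string passed to activate() must match the experiment key exactly.
variation = optimizely.activate('SEARCH_RESULTS_ALGORITHM', user.id)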

2. Set experiment traffic allocation

The traffic allocation is the fraction of your total traffic to include in the experiment, specified as a percentage. In this example, we allocated 5% of traffic to the experiment.

The traffic allocation is determined at the point where you call activate() in the SDK.

In this example, the experiment is triggered when a visitor does a search, but it won’t be triggered for all users: 5% of users who do a search will be in the experiment, and the other 95% won’t. Users who don’t do a search won’t be in the experiment at all. In other words, the traffic allocation percentage applies only to the traffic that reaches your activate() call, not necessarily to all traffic for your application.
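
For example, here’s a sketch of activating the experiment only at the point of a search (the handle_search function and the two result helpers are illustrative, not part of the SDK):

# `optimizely` is an initialized Python SDK client; `user` is your own user object.
def handle_search(user, query):
  # The 5% traffic allocation is applied here, when activate() is called,
  # so only users who actually perform a search can enter the experiment.
  variation = optimizely.activate('SEARCH_RESULTS_ALGORITHM', user.id)
  if variation is None:
    # The other 95% of searchers (and anyone excluded by audiences)
    # get the default experience.
    return default_search_results(query)
  return experimental_search_results(query, variation)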

3. Set variation keys and traffic distribution

Variations are the different code paths you want to test.

Each variation requires a unique variation key to identify the variation in the experiment. In this example, we added two variations, variation_a and variation_b.

You must specify at least one variation. There’s no limit to how many variations you can create.

A short, human-readable description for each variation will help make reports clear.

You can specify any traffic distribution you’d like. By default, variations are given equal traffic distribution.

You can also use one variation to gradually roll out a feature without A/B testing the impact. Make sure that you execute the correct code paths both when users are bucketed into the control variation and in the default case, when visitors are not allocated to the experiment at all. See the sketch below.
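
For instance, a gradual rollout with a single variation might look like this sketch (the experiment key and helper functions are hypothetical):

# Roll out a feature to a fraction of traffic using one variation.
variation = optimizely.activate('NEW_SEARCH_UI_ROLLOUT', user.id)  # hypothetical key
if variation == 'enabled':
  show_new_search_ui()        # illustrative helper for the new code path
else:
  # Covers both the default case (user not allocated) and any control variation.
  show_current_search_ui()    # illustrative helper for the existing code path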

To learn more about activating experiments, check out experiment activation in our developer documentation.

4. Add an audience

Use audiences if you want to show your experiment only to certain groups of users. You don’t have to set up audiences if you don’t need them.

Click the add button to add an existing audience. Or, click Create new audience to define a new one.

Optimizely takes the union of the audiences you add as the eligible traffic for the experiment. In this example, a user browsing with Mobile web qualifies for the audience, and a user browsing with an iPhone also qualifies.

Learn more about defining audiences. Or, read about attributes for SDK projects, including passing audience data correctly in your application code.

Audience evaluation may affect the exact traffic allocation you’ve specified for the experiment. For example, imagine that mobile users (iOS, Android, or mobile web) constitute 40% of total traffic. In step 2, we set the experiment traffic allocation to 5% of total traffic. So in this example, our expected total fraction of traffic in the experiment is 2% (5% of the 40% of traffic that qualifies). The actual total fraction of traffic in the experiment could be higher or lower than 2%, depending on how many mobile users there actually are during the experiment.
 

5. Add a metric

Next, add the events that you’re tracking with the Optimizely SDKs as metrics, so you can measure impact. Every experiment needs at least one metric.

Events help you track the actions visitors take on your site, like clicks, pageviews, and form submissions. When you add an event to an experiment to measure success, it's called a metric. You have to create events before you can use them as metrics. Currently, only one type of metric is available: a binary conversion rate on an event. Check out this article for details about events and metrics.

Click the add button to add existing events as metrics to your experiment.

To re-order the metrics, click and drag them into place. 

The top metric in an experiment is the primary metric. Stats Engine uses the primary metric to determine whether an A/B test wins or loses, overall. Learn about the strategy behind primary and secondary metrics.

Learn more about tracking events with an Optimizely SDK in our developer documentation.
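
As a sketch, firing an event that backs a metric might look like this (the event key search_conversion is illustrative):

# The event key must match an event you've already created in Optimizely.
optimizely.track('search_conversion', user.id)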

6. Add experiment code

Once you enter the experiment key and variation keys, Optimizely creates a code block in your primary language at the bottom of the page.

  1. Copy and paste the code block directly into your application code.

    For example, Optimizely created this Python code block for a Python project:
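
    Since the exact block depends on your keys, here’s a minimal sketch of what it might look like for this article’s example (user_id is a value you provide):

    variation = optimizely.activate('SEARCH_RESULTS_ALGORITHM', user_id)
    if variation == 'variation_a':
      execute_default_code()    # control variation
    elif variation == 'variation_b':
      execute_treatment_code()  # treatment variation
    else:
      execute_default_code()    # user is not in the experiment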


    The sample code block shows how to call activate() with your experiment key and a user ID that you provide, and how to handle the different variations. The code block distinguishes between bucketing the user in the control variation of the experiment (variation_a) and the default case, where the user doesn't enter the experiment.

  2. Then, click Create Experiment to complete your experiment setup.

Here's some example code for passing attributes to activate() so that Optimizely can evaluate that audience in the SDK:

# attributes of the user
attributes = {'device': user.device}
# activate user in the experiment
variation = optimizely.activate('SEARCH_RESULTS_ALGORITHM', user.id, attributes)
if variation == 'variation_a':
  # User is in the control variation.
  # Roughly 1% of traffic (half of the 2% in the experiment).
  execute_default_code()
elif variation == 'variation_b':
  # User is in the treatment variation.
  # Roughly 1% of traffic.
  execute_treatment_code()
else:
  # User is not in the experiment.
  # Roughly 98% of traffic: 60% non-mobile, plus the 38% of mobile
  # traffic (95% of the 40%) not allocated to the experiment.
  execute_default_code()

To learn more about correctly passing audience data in your application code, check out user attributes in our developer documentation.

Pause a variation

Once you've started an SDK experiment, you cannot delete a variation. However, you can pause variations instead. When you pause a variation, traffic from that variation will be redistributed among the experiment's other variations, but the results will still be accessible for the paused variation.

Clicking Confirm tells Optimizely to stop sending traffic to the paused variation. However, the variation can be unpaused later by clicking Resume.

All previous results will still be available for any resumed variations.

User profiles and sticky bucketing

Pausing a variation is only necessary if you are using Optimizely's user profiles feature. User profiles let you ensure that variation assignments are sticky in any SDK.

If you are working with user profiles and want to ensure that a variation no longer receives any new traffic, you have two options:

  • Pause the variation, which ensures that it will no longer receive any traffic, or

  • Set the variation's traffic distribution percentage to zero, which ensures that the variation no longer receives new traffic while users who have previously been exposed to the experiment remain in their assigned variation.

If you haven't implemented user profiles in the SDK, pausing a variation is no different than setting that variation's traffic distribution percentage to zero.