
Implementation checklist: Prepare Full Stack for production

RELEVANT PRODUCTS:
  • Optimizely X Full Stack

THIS ARTICLE WILL HELP YOU:
  • Set up your Full Stack implementation for a production environment
  • Optimize your Full Stack configuration
  • Understand best practices of Full Stack implementation

When preparing to implement Optimizely Full Stack in a production environment, it's a good idea to thoroughly familiarize yourself with the configuration details and best practices that will streamline the entire process.

If you haven’t done so already, check out our Getting Started guide, which runs through the basics of using a Full Stack SDK. 

Datafile management

You should begin by configuring the method used by the SDK for retrieving the datafile. Once that’s done, make sure the datafile itself is up to date.

Aside from the iOS and Android SDKs, datafile management is not provided out of the box. Instead, encapsulate it in a datafile synchronization service that is responsible for datafile storage, refresh frequency, and fetch method.

You can fetch the datafile from Optimizely's CDN or REST API; requests to the API must be authenticated with a token. Once you can access the datafile, decide on a strategy for synchronizing it with your servers. In production, we recommend polling at 5-minute intervals, though you can also use a “push” model based on webhooks and configured to point at a synchronization service. You can read more about these tradeoffs in our article on best practices for datafile management.
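
For example, a minimal polling loop in Python might look like the sketch below. The CDN URL is a placeholder for your project's actual datafile location, and the interval follows the 5-minute recommendation above.

  import time
  import requests
  from optimizely import optimizely

  DATAFILE_URL = 'https://cdn.optimizely.com/json/12345.json'  # placeholder project URL
  POLL_INTERVAL_SECONDS = 300  # the 5-minute interval recommended above

  optimizely_client = None

  def poll_datafile():
      """Fetch the datafile and rebuild the Optimizely client when it changes."""
      global optimizely_client
      last_datafile = None
      while True:
          response = requests.get(DATAFILE_URL, timeout=10)
          if response.status_code == 200 and response.text != last_datafile:
              last_datafile = response.text
              optimizely_client = optimizely.Optimizely(datafile=last_datafile)
          time.sleep(POLL_INTERVAL_SECONDS)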

To ensure webhook requests originate from Optimizely, secure them with a token in the request header. See our developer documentation to learn more about securing webhooks.
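
As a sketch, verification might look like the following, assuming (per the developer documentation) that the signature arrives as an HMAC-SHA1 hex digest of the raw request body in an X-Hub-Signature header:

  import hashlib
  import hmac

  WEBHOOK_SECRET = b'your-webhook-secret'  # placeholder; issued when the webhook is created

  def is_valid_signature(request_body, signature_header):
      """Compare the header signature against one computed from the raw body."""
      expected = 'sha1=' + hmac.new(WEBHOOK_SECRET, request_body, hashlib.sha1).hexdigest()
      return hmac.compare_digest(expected, signature_header)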

The synchronization service should expose an endpoint responsible for pulling down the datafile and re-instantiating the Optimizely object. Set it up as a standalone service (i.e., a microservice), and store the object in this service. The service should also propagate the update to other servers, pushing or notifying subscribers to re-instantiate the object; otherwise, a synchronizer behind a load balancer could cause servers in the fleet to fall out of sync.
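
A minimal sketch of such an endpoint using Flask, reusing the fetch and verification helpers sketched above (refresh_datafile and notify_subscribers are hypothetical names for your own fetch and fan-out logic):

  from flask import Flask, abort, request

  app = Flask(__name__)

  @app.route('/webhooks/optimizely', methods=['POST'])
  def handle_datafile_update():
      # Reject requests that do not carry a valid Optimizely signature.
      if not is_valid_signature(request.get_data(), request.headers.get('X-Hub-Signature', '')):
          abort(403)
      refresh_datafile()    # hypothetical: re-fetch the datafile and rebuild the client
      notify_subscribers()  # hypothetical: tell the rest of the fleet to re-instantiate
      return '', 200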

To find out more about accessing the Optimizely datafile, check out our Knowledge Base article. Or for a real in-depth look at the subject, see our developer documentation instead.

SDK configuration

The SDKs are highly configurable and can meet the needs of any production environment, but adequate scaling may require overriding some default behavior to best meet the needs of your application.

Event dispatcher

By default, each SDK ships with an out-of-the-box dispatcher that supports either synchronous or asynchronous event dispatching. This means every event captured by the SDK is sent as a separate network request to Optimizely's event-logging servers, which can hurt performance at scale because each event carries the overhead of an HTTP request. For that reason, you should use a custom dispatcher that batches events and retries failed requests. You can either build a dispatcher from scratch or start from the provided default implementation.
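
As a starting point, here is a sketch of a custom dispatcher with simple retry logic, using the Python SDK (the event object's url, params, headers, and http_verb fields follow that SDK's dispatcher interface; datafile is assumed to be loaded already):

  import json
  import time
  import requests
  from optimizely import optimizely

  class RetryingEventDispatcher(object):
      """Send each event to Optimizely, retrying failures with backoff."""
      MAX_RETRIES = 3

      def dispatch_event(self, event):
          for attempt in range(self.MAX_RETRIES):
              try:
                  requests.request(event.http_verb, event.url,
                                   data=json.dumps(event.params),
                                   headers=event.headers, timeout=5)
                  return
              except requests.RequestException:
                  time.sleep(2 ** attempt)  # simple exponential backoff before retrying

  optimizely_client = optimizely.Optimizely(datafile, event_dispatcher=RetryingEventDispatcher())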

Consider creating a separate service that works within your networking requirements and is responsible for queuing and flushing events. In this scenario, the SDK acts as the producer and writes all events to a datastore (e.g., a queue). The microservice, now acting as the consumer, then builds a single event object containing all items in the datastore and dispatches it with a single request to Optimizely. The dispatch frequency can be based on the number of events in the queue or on a time interval, whichever threshold is reached first. Once the request has been successfully received by the logging servers, it's safe to flush the events.
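
A sketch of that producer/consumer pattern, with an in-memory queue standing in for a real datastore; send_batch is a hypothetical helper that combines the queued events into one request (see the bulk dispatcher reference implementation for a complete version):

  import queue
  import threading
  import time

  event_queue = queue.Queue()
  MAX_BATCH_SIZE = 50          # flush when this many events are queued...
  FLUSH_INTERVAL_SECONDS = 30  # ...or when this much time has passed

  class QueueingEventDispatcher(object):
      """Producer: the SDK hands events here, and we only enqueue them."""
      def dispatch_event(self, event):
          event_queue.put(event)

  def flush_loop():
      """Consumer: drain the queue on size or time, whichever comes first."""
      batch, last_flush = [], time.time()
      while True:
          try:
              batch.append(event_queue.get(timeout=1))
          except queue.Empty:
              pass
          overdue = batch and time.time() - last_flush >= FLUSH_INTERVAL_SECONDS
          if len(batch) >= MAX_BATCH_SIZE or overdue:
              send_batch(batch)  # hypothetical: one combined request to Optimizely
              batch, last_flush = [], time.time()

  threading.Thread(target=flush_loop, daemon=True).start()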

For more information, see our documentation on the bulk dispatcher reference implementation and the event dispatcher.

Logger

Verbose logs are critical. By default, a no-op SDK logger is provided, giving you the scaffolding to create a custom logger. It's fully customizable and can support use cases like writing logs to an internal logging service or vendor, but it is intentionally not functional out of the box. You should create a logger that suits your needs and pass it to the Optimizely client.
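
For example, here is a sketch of a custom logger that forwards SDK messages to Python's standard logging module (the log(level, message) interface matches the Python SDK's logger scaffolding; datafile is assumed to be loaded already):

  import logging
  from optimizely import optimizely

  class ServiceLogger(object):
      """Route SDK log lines into the application's logging setup."""
      def __init__(self):
          self.logger = logging.getLogger('optimizely')

      def log(self, log_level, message):
          # log_level is a standard library logging level (e.g., logging.INFO).
          self.logger.log(log_level, message)

  optimizely_client = optimizely.Optimizely(datafile, logger=ServiceLogger())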

See our developer docs on the logger, or the SimpleLogger reference implementation, to find out more.

Error handler

In a production environment, errors must be handled consistently across the application. The Optimizely SDK allows you to provide a custom handler to catch configuration issues, like an unknown experiment key or unknown event key. This handler should cause the application to fail gracefully in order to deliver a normal user experience. It should also ping an external service, like Sentry, to alert the team of an issue.

If you don’t provide a handler, the errors will not surface in your application.
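
A sketch of such a handler, with a hypothetical report_to_monitoring helper standing in for a real Sentry (or similar) client:

  from optimizely import optimizely
  from optimizely.error_handler import BaseErrorHandler

  class AlertingErrorHandler(BaseErrorHandler):
      """Fail gracefully, but alert the team that something is misconfigured."""
      def handle_error(self, error):
          # Do not re-raise: the application should degrade to the default experience.
          report_to_monitoring(error)  # hypothetical wrapper around an alerting client

  optimizely_client = optimizely.Optimizely(datafile, error_handler=AlertingErrorHandler())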

To find out more about the error handler, see our developer docs.

User profile service

Building a user profile service (UPS) helps maintain consistent variation assignments for users when experiment configuration settings change.

The Optimizely SDK buckets users via a deterministic hashing function, so as long as the datafile and userID are consistent, a given user will always evaluate to the same variation. But when experiment configuration settings change (for example, adding a new variation or changing traffic allocation), a user's variation can change, altering the user experience.

Learn more about bucketing behavior in Full Stack.

A UPS solves this by persisting information about the user in a datastore. At a minimum, it should create a mapping of userID to variation assignment. Implementing a UPS requires exposing a lookup and save function that either returns or persists a user profile dictionary. The JSON schema for this dictionary can be found in our developer documentation. This service also assumes all userIDs are consistent across all use cases and sessions.

We recommend caching user information after first lookup to speed future lookups.

Let's walk through an example. Using Redis or Cassandra as the cache, we can store user profiles as key-value pairs, with a hashed email address mapping to a variation assignment. We want sticky bucketing to persist for six hours at a time, so we'll set a TTL on each record. As Optimizely buckets each user, the UPS interfaces with this cache, reading and writing records to check assignment before bucketing normally.
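
Here's what that might look like in Python with Redis; the six-hour TTL and hashed-email keys follow the example above, and the profile dictionary uses the user_id/experiment_bucket_map schema from our developer documentation:

  import json
  import redis
  from optimizely import optimizely

  SIX_HOURS_IN_SECONDS = 6 * 60 * 60

  class RedisUserProfileService(object):
      """The lookup/save interface the SDK expects, backed by a Redis cache."""
      def __init__(self):
          self.client = redis.StrictRedis(host='localhost', port=6379)

      def lookup(self, user_id):
          # user_id is already a hashed email address in this example.
          record = self.client.get(user_id)
          return json.loads(record) if record else None

      def save(self, user_profile):
          # Keep sticky bucketing for six hours, then let the assignment expire.
          self.client.setex(user_profile['user_id'], SIX_HOURS_IN_SECONDS,
                            json.dumps(user_profile))

  optimizely_client = optimizely.Optimizely(datafile, user_profile_service=RedisUserProfileService())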

Check out our developer docs for more information on user profiles.

Build an SDK wrapper

Many developers prefer to use wrappers as a way to both encapsulate the functionality of an SDK and simplify maintenance. This can be done for all the configuration options described above. You can see a few examples in our developer documentation under SDK Wrappers.  
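
As a sketch, a wrapper might tie together the example components from the sections above behind one small interface (the class and method names here are illustrative):

  from optimizely import optimizely

  class ExperimentClient(object):
      """Thin wrapper so application code never touches the SDK directly."""
      def __init__(self, datafile):
          self._client = optimizely.Optimizely(
              datafile,
              event_dispatcher=QueueingEventDispatcher(),
              logger=ServiceLogger(),
              error_handler=AlertingErrorHandler(),
              user_profile_service=RedisUserProfileService())

      def variation_for(self, experiment_key, user_id, attributes=None):
          return self._client.activate(experiment_key, user_id, attributes)

      def track(self, event_key, user_id, attributes=None):
          self._client.track(event_key, user_id, attributes)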

Environments

The environments feature allows you to confirm behavior and run tests in siloed environments, like development or staging. This in turn makes it easier to safely deploy experiments in production. Environments are customizable and should mimic your team’s workflow (most customers use two environments: development and production). This allows dev and QA teams to safely inspect experiments in an isolated setting, while site visitors are exposed to experiments running in the production environment.

You should view production as your real-world workload. A staging environment should mimic all aspects of production so that you can test and run sanity checks before deployment. In these environments, all aspects of the SDK (dispatcher, logger, etc.) should be production-grade. In local environments like test or development, it's okay to use the out-of-the-box implementations instead.

By default, each Optimizely project contains a production environment. We recommend that you create a secondary environment to expose experiments to internal teams before users ever see them. Environments are kept separate and isolated from each other by using their own datafile.
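
Because each environment has its own datafile, selecting an environment can be as simple as keying the datafile location off a deploy-time variable, as in this sketch (the URLs are placeholders):

  import os

  DATAFILE_URLS = {
      'development': 'https://cdn.optimizely.com/json/11111.json',  # placeholder
      'production': 'https://cdn.optimizely.com/json/22222.json',   # placeholder
  }

  # Each deployment fetches only its own environment's datafile.
  datafile_url = DATAFILE_URLS[os.environ.get('APP_ENV', 'development')]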

For a deep dive on environments, see our Knowledge Base article.

User IDs and attributes

User IDs identify the unique users in your experiments. In a production setting, it's especially important both to choose the type of user ID carefully and to set a broader strategy for maintaining consistent IDs across channels. Our developer documentation explores several different approaches and best practices for choosing a user ID.

Attributes allow you to target users based on specific properties. In Optimizely, you can define which attributes should be included in an experiment. Then, in the code itself, you can pass an attribute dictionary on a per-user basis to the SDK, which will determine which variation a user sees.
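
For example, a sketch of passing attributes when activating an experiment (the experiment key and attribute names are illustrative and must match what's defined in your Optimizely project):

  attributes = {
      'plan_type': 'enterprise',
      'device': 'ios',
  }

  # The SDK uses the attributes both for audience targeting and for reporting.
  variation = optimizely_client.activate('checkout_flow_test', 'user_123', attributes)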

Attribute fields and user IDs are always sent to Optimizely’s backend through impression and conversion events. It is up to the customer to responsibly handle fields (for example, email addresses) that may contain personally identifiable information (PII). Many customers use standard hash functions to obfuscate PII.  
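
For instance, a standard hash applied before a value ever reaches the SDK:

  import hashlib

  def hashed_id(email):
      """Obfuscate an email address before using it as a user ID or attribute."""
      return hashlib.sha256(email.strip().lower().encode('utf-8')).hexdigest()

  variation = optimizely_client.activate('checkout_flow_test', hashed_id('jane@example.com'))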

Integrations

You can build custom integrations with Full Stack using the notification listeners feature. Developers can use it to programmatically observe and act on various events that occur within the SDK. This opens the door to countless integrations by passing data to external services.

Learn more about notification listeners in our developer documentation.

Let’s walk through a few examples:

  • Send data to an analytics service and report that user_123 was assigned to variation A

  • Send alerts to data monitoring tools like New Relic and Datadog with SDK events to better visualize and understand how experiments can affect service-level metrics

  • Pass all events to an external data tier, like a data warehouse, for additional processing and the ability to leverage business intelligence tools
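
As a sketch of the first example, here is an activate listener that forwards assignments to an analytics service. The callback signature follows the Python SDK's ACTIVATE notification type, and analytics.track is a hypothetical stand-in for your analytics client:

  from optimizely.helpers import enums

  def on_activate(experiment, user_id, attributes, variation, event):
      # Report the assignment downstream, e.g., "user_123 was assigned variation A".
      analytics.track(user_id, 'experiment_viewed', {
          'experiment_key': experiment.key,
          'variation_key': variation.key,
      })

  optimizely_client.notification_center.add_notification_listener(
      enums.NotificationTypes.ACTIVATE, on_activate)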

QA and testing

Before you go live with your experiment, we have a few final tips:

  • Consider your QA options. To manually test different experiences, you can force yourself into a variation using whitelisting, or you can set a forced variation in code (see the sketch after this list).

  • Ensure everything is working smoothly in a test or staging environment, paired with the corresponding datafile generated from a test environment within Optimizely. This confirms the datafile is accurate, which you can verify by checking your SDK logs.

  • Run an A/A test to double-check data is being captured correctly. This is a great sanity check to ensure there are no differences in conversions between the control and variation treatments. Read more about A/A testing here.
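
For the first tip, a forced-variation sketch (assuming your SDK version exposes set_forced_variation; the keys are illustrative):

  # Force a QA user into a specific variation before activating the experiment.
  optimizely_client.set_forced_variation('checkout_flow_test', 'qa_user_1', 'variation_a')

  # Subsequent activate calls for this user now return the forced variation.
  variation = optimizely_client.activate('checkout_flow_test', 'qa_user_1')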

Now, you should be ready to go! If you run into any issues, please contact our support team. And if you think you’ve found a bug, please file an issue in the SDK’s GitHub repo and we’ll check it out as soon as possible.