- Incorporate Optimizely Full Stack into your product development process
- Base product decisions on data generated from the experimentation process
- Create, test, and roll out features with Optimizely Feature Management
For product teams, one of the pitfalls of the feature development lifecycle is the risk of investing time, effort, and money in building features that your customers won't use. A new feature may not deliver real business value, or it may not be what your customers actually need. Integrating experimentation into your product development process gives you several advantages when building and iterating on new features:
- Use real-world data to find out how your customers engage with new features.
- Deliver functional prototypes of new features early to learn about their potential value.
- Roll out new features gradually to mitigate risk.
- Optimize for growth and performance through conversion rate optimization and growth experiment techniques.
You can use Full Stack's Feature Management capabilities to drive adoption, engagement, and conversion for new features, at any stage of the product development process.
Painted door tests are a commonly used way to validate demand and gather data before committing resources to specific projects. Companies with tighter integration between experimentation and product development go further: they create minimum viable product experiments and run controlled feature rollouts, building data and fine-grained controls into the entire product development process.
In this article, we'll give you an overview of how to use Full Stack's Feature Management capabilities to bring a culture of experimentation to your company's product development process.
Discover what your customers want
If you're just getting started with integrating experimentation into your product development lifecycle, you should probably start with the discovery phase. What do your users want? Many development teams think they know the answer. But those answers aren't always based on data.
One way to find out is to see which new feature offerings your users will actually interact with, through a painted door test. Instead of building out an entire feature, you simply create the suggestion of one. This shows you how many users might want the feature before you invest significant resources in its development. Painted door tests are also useful for assessing discoverability.
In most cases, you’d build a painted door test in Optimizely Web, and not Full Stack. However, the test itself would usually be intended to validate investment in a higher-cost, higher-fidelity Full Stack feature.
For example, let’s say Optimizely believed some customers might be interested in an automated personalization feature. We could run a painted door test comparing different approaches for naming and describing this feature:
When a user clicks on either of those highlighted menu options, they'll see this message:
In the process, Optimizely learns several important things:
- Is there enough interest in the feature to justify its development?
- Which customers have use cases that this feature can address?
- Which approach to naming and description drives engagement better?
Painted door tests are a great, low-investment way to gather data from users and validate the features you decide to build.
Design features and test them
Sometimes you may want to deploy several quick, efficient experiments to evaluate a feature idea without allocating scarce resources to developing a comprehensive version in advance. A minimum viable product (MVP) test, in which you build a basic, bare-bones version of a feature you're considering, is a good way to do this. The MVP version of the feature is functional, but only just. Conceptually, it's similar to a feature rollout in Full Stack; the difference is that in an MVP test, the feature isn't expected to be finished.
For example, a media company might consider a feature that will allow readers to easily share the site’s content with their own contacts. With an MVP test, they can start by building it out for whichever platform would be easiest, whether that's mobile, web-only, or even email. The idea is to get users engaging with the new feature in a way that validates the idea that people will use it, but also delivers insights on how they’re most likely to use it, and how often.
At this point, you've already collected data on your users' needs and preferences, and you've got a design in mind. The next step is to build your feature in Full Stack. That means setting up feature flags that let you roll out a new feature—or switch it off if something goes wrong—at the exact moment you're ready, as well as a set of variables that define how the feature looks and acts.
For more detailed information on feature flags, see our Developer Docs.
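To make the flag idea concrete, here is a minimal sketch of the pattern in Python. It is not the Optimizely SDK; the `FeatureFlags` class and the `automated_personalization` key are hypothetical stand-ins for a real flag service, used only to show how a flag gates a code path and doubles as a kill switch.

```python
class FeatureFlags:
    """Hypothetical in-memory flag store standing in for a real flag service."""

    def __init__(self):
        self._flags = {}

    def set(self, key, enabled):
        self._flags[key] = bool(enabled)

    def is_enabled(self, key):
        # Unknown flags default to off, so unfinished code paths stay dark.
        return self._flags.get(key, False)


flags = FeatureFlags()
flags.set("automated_personalization", True)

# The flag gates the new code path; the old experience is the fallback.
if flags.is_enabled("automated_personalization"):
    experience = "personalized"
else:
    experience = "default"

# Flipping the flag off acts as a kill switch -- no code deploy needed.
flags.set("automated_personalization", False)
```

In a real Full Stack integration, the flag check would go through the SDK instead of an in-memory dictionary, but the shape of the calling code is the same: one conditional per gated feature, with the old experience as the default.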
Run experiments on new features
But what if you haven't settled on one design for your new feature? Maybe your data didn't point to a single likely solution to your users' problems. Maybe you'd like to run an experiment that pits two or three potential designs against each other, and then roll out the winner.
This is where you'd use a feature test. Feature tests are very similar to A/B tests in Optimizely, but include a few components specific to Full Stack. Use them to experiment with different versions of your new feature to see which one performs best, or to measure the new feature's performance against the current experience. You'll get a better understanding of the feature's potential impact on your key metrics, without needing to deploy any new code. You can also use feature tests to iterate on failed features until they meet your organization's standards for launch.
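The core mechanic of a feature test is that each user is consistently assigned to one variation. Here is an illustrative sketch of that idea, assuming hypothetical experiment and variation names; real Full Stack SDKs use their own bucketing and traffic-allocation logic, so this only demonstrates the concept of deterministic, sticky assignment.

```python
import hashlib


def assign_variation(user_id, experiment_key, variations):
    """Deterministically bucket a user into one of several variations.

    Hashing (experiment, user) means the same user always lands in the
    same variation, and different experiments bucket independently.
    """
    digest = hashlib.md5(f"{experiment_key}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variations)
    return variations[bucket]


# Hypothetical feature test with a control and two candidate designs.
variations = ["control", "share_button_v1", "share_button_v2"]
chosen = assign_variation("user-42", "share_feature_test", variations)
```

Because assignment is a pure function of the user and experiment keys, no per-user state needs to be stored for users to get a consistent experience across sessions.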
Launch and roll out new features
Once your feature is built and you've selected the variation you want to use, it's time to roll it out to your customers.
If you're concerned that your new feature might actually degrade the visitor experience instead of improving it, you can start small by launching the feature to only a small percentage of visitors, and then observe the results. Feature rollouts in Full Stack use feature targeting to control which users get access to new features, and when.
Use targeted rollouts to provide beta access to new features, or to experiment with features internally before a public rollout. A slower, controlled rollout mitigates risk by limiting the impact of any newly surfaced bugs that slipped through an internal QA process.
You can also use a rollout to launch winning variations that you identify through experimentation. Just gradually expose a winning feature variation to your visitors in a controlled fashion. You can set feature visibility based on a specified environment, user characteristics like subscription tier, authentication state, region, or language, and manage the process on your own terms.
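The two levers described above, audience targeting and a traffic percentage, can be sketched as follows. This is an illustrative model only, with hypothetical flag keys and attribute names; it is not how Full Stack evaluates rollouts internally, but it shows how the same user ID deterministically falls inside or outside a given rollout percentage.

```python
import hashlib


def in_rollout(user_id, flag_key, traffic_pct, attributes=None, audience=None):
    """Sketch of a targeted, gradual rollout.

    traffic_pct is 0-100; audience is a dict of attribute values the
    user must match before traffic allocation even applies.
    """
    attributes = attributes or {}
    audience = audience or {}

    # Audience check: every targeting condition must match.
    if any(attributes.get(k) != v for k, v in audience.items()):
        return False

    # Deterministic bucketing into 0..9999, compared against the
    # rollout percentage (scaled to basis points).
    digest = hashlib.md5(f"{flag_key}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 10000 < traffic_pct * 100


# Hypothetical beta: 10% of German-language premium subscribers.
enabled = in_rollout(
    "user-7", "easy_share", 10,
    attributes={"language": "de", "tier": "premium"},
    audience={"language": "de"},
)
```

Ramping the rollout is then just raising `traffic_pct`: users already inside the bucket stay inside, so nobody loses the feature as exposure grows.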
Iterate and improve
Now you've rolled out your new feature to your entire customer base. And so far, they seem to like it. Adoption is high and engagement is up. Congratulations!
But that doesn't mean you're done. Optimizing for greater and greater success is a never-ending process. For example, after your feature launches, you might notice something unexpected about the way your customers use it. Maybe you want to iterate on your new feature, so you can improve conversions or engagement or whichever metric you use to gauge success, and roll those changes out quickly—without waiting for the next code deploy.
Use feature configurations to make changes and create variations through Optimizely without deploying code. Usually, this involves running feature tests to determine the optimal combination of variable values, then setting those values as your default feature configuration and launching with a rollout.
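The idea behind feature variables can be sketched like this: configuration values live alongside the flag and are read at runtime, so changing them in Optimizely changes the live feature without a deploy. The flag key, variable names, and dictionary shape below are all hypothetical illustrations, not the SDK's data format.

```python
# Hypothetical remote configuration attached to a feature flag.
feature_config = {
    "easy_share": {
        "enabled": True,
        "variables": {
            "button_text": "Share this story",
            "button_color": "#0037ff",
            "max_contacts": 25,
        },
    },
}


def get_variable(config, flag_key, var_key, default=None):
    """Read a feature variable, falling back to a code default."""
    flag = config.get(flag_key, {})
    if not flag.get("enabled"):
        # Disabled or unknown flags fall back to the default baked
        # into the code, so the old experience is never broken.
        return default
    return flag.get("variables", {}).get(var_key, default)


text = get_variable(feature_config, "easy_share", "button_text", "Share")
```

To iterate after launch, you would change the values in `feature_config` (in Optimizely, the feature's variable values) rather than the calling code, which is what lets product teams tune a live feature between deploys.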
See our Developer Docs on the subject for even more in-depth info.