How and why to use feature flags

Posted on Friday, April 12, 2019 by RICHARD HARRIS, Executive Editor

Some of the world’s most successful and best-known tech companies, including Netflix, Instagram and Facebook, use product experimentation and feature flagging to capture accurate data about customers and product ideas and gain intelligence on market trends.

Split CEO and co-founder Adil Aijaz says organizations that embrace feature flagging and experimentation are able to learn more about their customers, make data-driven decisions, and increase their rate of innovation. We had the chance to catch up with Adil to discuss these technologies in more detail and how organizations can use them to continuously develop and deliver innovation and measure business outcomes.

ADM: What do feature flags and experimentation help companies accomplish?

Aijaz: Feature flags and experimentation are both critical for companies to innovate faster and learn from their customers.

Experimentation provides a critical feedback loop for feature iteration and continuous improvement.  Engineering and product teams define any number of feature variations and randomly assign users to each treatment. Then the impact of these variations is measured against key business metrics such as conversions, engagement, growth, subscription rates, or revenue. The idea is to facilitate hypothesis-driven development and data-driven product decisions to help companies iterate and improve features based on statistical analysis.
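
To make that loop concrete, here is a minimal sketch of random-but-sticky treatment assignment. The function name, treatment list, and hashing scheme are illustrative assumptions, not a description of any particular platform's implementation; real systems layer salting, exposure logging, and targeting on top of this idea.

```typescript
// Sketch: assign each user to a treatment deterministically, so the same
// user always sees the same variation of a given experiment.
import { createHash } from "crypto";

type Treatment = "control" | "variant_a" | "variant_b";
const TREATMENTS: Treatment[] = ["control", "variant_a", "variant_b"];

function assignTreatment(userId: string, experiment: string): Treatment {
  // Hashing userId together with the experiment name keeps assignment
  // stable within an experiment while shuffling users across experiments.
  const digest = createHash("sha256").update(`${experiment}:${userId}`).digest();
  return TREATMENTS[digest.readUInt32BE(0) % TREATMENTS.length];
}

// Each exposure would be logged alongside business metrics (conversion,
// engagement, revenue) so the variations can be compared statistically.
console.log(assignTreatment("user-42", "new-checkout-flow"));
```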

Feature flags provide the foundation to run experiments. Feature flags are if/else statements in code that create multiple execution paths for an application. Which feature a given user sees is determined by targeting rules in a feature flag management console. Metrics are assigned to each flag to tie positive or negative movement in the business back to the specific changes causing it. Feature flags have a wide range of use cases, including allowing teams to perform functional and performance tests, safely roll out new functionality, and take a more measured approach to product development.
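
Here is a minimal sketch of that if/else pattern with a toy targeting rule; the isEnabled helper and the "beta-testers" segment are hypothetical stand-ins for rules a flag management console would serve, not any vendor's API.

```typescript
// Sketch: a feature flag creating two execution paths, gated by a
// targeting rule. In practice the rules are fetched from a flag
// management console rather than hard-coded.
interface User {
  id: string;
  segment: string; // e.g. "beta-testers", "everyone"
}

function isEnabled(flag: string, user: User): boolean {
  if (flag === "new-search-ui") {
    return user.segment === "beta-testers"; // targeting rule
  }
  return false; // unknown flags default to off
}

function renderSearch(user: User): string {
  if (isEnabled("new-search-ui", user)) {
    return "new search experience"; // new execution path
  }
  return "legacy search"; // existing execution path
}

console.log(renderSearch({ id: "u1", segment: "beta-testers" }));
```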


ADM: How does experimentation differ from other development processes such as A/B testing, canary rollouts, and blue-green deployments?

Aijaz: A/B testing is also a form of experimentation. In the marketplace, though, the term typically refers to tests of online advertising campaigns run by digital marketers who want to optimize conversion rates. Academically, there is no difference between experimentation and A/B testing. Practically, however, they are very different. A/B testing is often visual, in that it tests changes like colors, fonts, text, and placement. It’s also narrow, as it is concerned with improving click-through rates (CTR). And it’s episodic, since it runs out of things to test once CTR is optimized. Experimentation, by contrast, is full-stack: it tests changes everywhere, whether visual or deep in the backend. It’s comprehensive, because it’s concerned with improving, or at least not degrading, cross-functional KPIs. And it’s continuous: every feature is an experiment, and you never run out of features to build.

Canary releases are simple experiments run at release time. A software team releases a canary -- a new version of an application -- to a small subset of production equipment to get an idea of how the new version will perform: how it integrates with other apps, and how it affects CPU, memory, and disk usage. A load balancer routes users to one or the other set of servers.
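
As an illustration of that routing step, here is a tiny sketch of weighted canary selection; the 5 percent weight and host names are invented for the example.

```typescript
// Sketch: a load balancer sending a small fraction of traffic to the
// canary servers and the rest to the stable fleet.
const CANARY_WEIGHT = 0.05; // 5% of requests hit the new version

function pickBackend(): string {
  return Math.random() < CANARY_WEIGHT
    ? "app-canary.internal"  // new version, small subset of equipment
    : "app-stable.internal"; // current version
}

console.log(pickBackend());
```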

A blue-green deployment is also a hardware-based approach, where you have two production environments, as identical as possible. Only one of them is live -- in other words, exposed to users -- at any given time. Let’s call that the blue environment. You conduct your final stage of testing in the non-live, or green, environment. Once the software is working in the test/green environment, you switch the router so that all incoming requests go to this environment, and the blue environment is now idle.
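
In code terms, the cutover can be as simple as flipping a single pointer. This sketch assumes a router you control and invented host names; it is an illustration of the idea, not a production router.

```typescript
// Sketch: blue-green cutover as an atomic pointer flip. Rolling back is
// just flipping the pointer again.
type Environment = "blue" | "green";

let live: Environment = "blue"; // environment currently exposed to users

function routeRequest(): string {
  return live === "blue" ? "blue.internal" : "green.internal";
}

// Once final testing passes in the idle environment, promote it:
function cutOver(): void {
  live = live === "blue" ? "green" : "blue";
}

cutOver();
console.log(routeRequest()); // now serving from green.internal
```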

ADM: In your experience, how much expertise in feature flags and experimentation do development teams typically possess?

Aijaz: Most development teams are typically aware of, or already using, feature flags in some capacity. In fact, for many product delivery organizations, it has become the norm for any change to be managed by feature flags. Product changes are no longer “launched” or “released”—they are incrementally rolled out.

Experimentation is growing in adoption as both development and product teams begin to understand the benefits. I expect it will become more mainstream as more organizations improve their ability to tie feature releases to business metrics, closing and tightening the feedback loop that powers product delivery.


ADM: Many development teams use their own homegrown approach to making product decisions, sometimes with tools that have been cobbled together. What are the drawbacks of that approach?

Aijaz: From development-driven to business-driven use cases, the challenges of developing an in-house feature flagging system expand as fast as, if not faster than, the list of requirements. As the scope of the in-house application grows, so does the list of challenges. Common ones include dealing with manual config changes, manually compiling target segments, providing technical support, and keeping application performance from taking a hit.

Experimentation is built on top of the feature flagging system, and the challenges continue to grow there as well. Building an experimentation platform requires a statistician and data engineers to create a sound statistical engine and interpret the results.
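
To give a flavor of what that statistical engine does, here is a sketch of one common building block, a two-proportion z-test on conversion rates. This is an illustrative choice, not necessarily what any given platform implements, and it ignores real-world concerns like sequential testing and multiple comparisons.

```typescript
// Sketch: compare conversion rates between control (A) and treatment (B)
// with a two-proportion z-test.
function twoProportionZ(
  convA: number, usersA: number,
  convB: number, usersB: number,
): number {
  const pA = convA / usersA;
  const pB = convB / usersB;
  const pooled = (convA + convB) / (usersA + usersB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / usersA + 1 / usersB));
  return (pB - pA) / se; // |z| > 1.96 is roughly significant at the 5% level
}

console.log(twoProportionZ(120, 1000, 150, 1000).toFixed(2)); // ~1.96
```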

ADM: What problems are customers facing that experimentation can help them solve?

Aijaz: Experimentation shortens time-to-value for development teams by providing a safe way to test new ideas in production, with real users and real data. This allows teams to keep up with changing user preferences and stiff competition. Development teams can push innovative ideas to market faster by letting users determine winning functionality, avoiding internal arguments or inefficient guesswork.

Research has shown that 80 to 90 percent of new application features have a negative or neutral impact on the metrics they were designed to improve. How much does it cost to develop all that poor functionality? How do you know whether the features you are building have the expected impact on business metrics? You can’t improve what you don’t measure, and measurement is what lets you iterate and refine functionality quickly. Through experimentation, product and engineering teams can test a minimum viable feature, then iterate and improve it to meet customer requirements faster.

ADM: What sort of barriers exist that are holding companies back from embracing feature experimentation?

Aijaz: Delivering what customers want at the speed they demand requires a cultural shift to hypothesis-driven development and data-driven decisions, where development and product management work hand-in-hand on definition and analysis to make outcome-based product decisions.

Embracing experimentation is an evolution. For example, one of the first cultural shifts for all teams to understand is that, even armed with reliable statistical data, experiments often won’t succeed as expected. Making all feature experimentation visible across the organization -- and getting the whole company, not just engineers but customer service, product and marketing involved in early product testing -- means that the whole company can quickly learn which of their ideas work and which don’t. Everyone in the organization needs to understand that even a failed idea is an important step on the road to a product’s ultimate growth and success. Experimentation provides the backdrop for a data-driven organization to make sound product decisions based on measurement and metrics, facilitating a culture of continuous improvement. Data is a competitive advantage when constant learning and sharing of knowledge achieve a common goal.


ADM: What are some of the steps organizations need to take to get on the path to continuous development and delivery?

Aijaz: In today’s software-driven climate, many of the biggest and best-known tech companies — Facebook, Amazon, Netflix, Google — release software updates thousands of times a day. Most organizations strive to get closer to the cadence of these bleeding-edge companies, and there is plenty of advice out there about how to do it, but it is often easier said than done. Moving from releasing once every hundred days to releasing hundreds of times a day requires a breadth of changes.

My first recommendation is to adopt continuous integration (CI). This means that every engineer regularly checks in code to a central repository with an automated build verifying each check-in. The aim of CI is to catch problems early in development through regular integration. Without CI, almost every step an organization takes to increase its rate of deployment will be met with obstacles and bottlenecks.

Next, you should implement feature flags. Feature flags provide a foundation for minimizing risk at speed. They enable organizations to move from release trains to canary releases by allowing engineers and product managers to incrementally roll out a feature to subsets of users without doing a code deployment. Imagine a dial in the cloud that can be turned up to release a feature to more users and turned down in an emergency to roll it back. The magic of flags is that they let engineers test in production with actual users while controlling the blast radius of any mistakes, significantly reducing risk.
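
A minimal sketch of that dial, assuming the percentage is polled from a flag service at runtime rather than hard-coded; the flag name and rolloutPercent variable are illustrative. Hashing keeps each user's experience stable as the dial moves.

```typescript
// Sketch: percentage rollout without a code deployment. rolloutPercent
// stands in for a value fetched from a flag management service.
import { createHash } from "crypto";

let rolloutPercent = 0; // turn up gradually; turn back to 0 in an emergency

function inRollout(userId: string, flag: string): boolean {
  // Bucket each user into 0-99; a user enabled at 10% stays enabled at 25%.
  const digest = createHash("sha256").update(`${flag}:${userId}`).digest();
  return digest.readUInt32BE(0) % 100 < rolloutPercent;
}

rolloutPercent = 10; // dial up: 1% -> 10% -> 50% -> 100%
console.log(inRollout("user-42", "new-billing"));
```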

Finally, I’d recommend adopting trunk-based development (TBD). TBD is a branching model in which all engineers commit to one shared branch. Since everyone works off the same branch, problems are caught early, saving the time usually wasted on late integration. CI is a prerequisite of TBD. While TBD can be done without feature flags, it is best to use them together, because flags make it easier to do long-running development on the trunk: a feature that is flagged off can have code committed directly to the trunk without risk of the code becoming accidentally visible to customers.
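
For example, a half-finished feature can live safely on the trunk behind a flag that defaults to off; the flag name and functions here are hypothetical.

```typescript
// Sketch: unfinished work merged to trunk but dark in production because
// its flag defaults to off.
const FLAGS: Record<string, boolean> = {
  "redesigned-dashboard": false, // in progress on trunk, invisible to users
};

function dashboard(): string {
  return FLAGS["redesigned-dashboard"]
    ? "redesigned dashboard (work in progress)" // safely dark until enabled
    : "current dashboard";
}

console.log(dashboard()); // current dashboard
```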

Although new ways to speed up development will keep emerging, integrating these three techniques – continuous integration, feature flags, and trunk-based development – will ensure a solid foundation and go a long way toward increasing release frequency. These changes alone can help you move to daily releases, and may even allow multiple releases a day.


Adil Aijaz is CEO and co-founder at Split Software. Adil brings over ten years of engineering and technical experience, having worked as a software engineer and technical specialist at some of the most innovative enterprise companies, including LinkedIn, Yahoo!, and most recently RelateIQ (acquired by Salesforce). His tenure at these companies before founding Split in 2015 gave him the experience in solving data-driven challenges and delivering data infrastructure that laid the foundation for the startup. Adil holds a Bachelor of Science in Computer Science & Engineering from UCLA and a Master of Engineering in Computer Science from Cornell University.
