How to Accelerate Your DevOps Application Workflow
|Val Bercovici in DevOps Thursday, April 21, 2016|
In today’s digital economy, cloud platforms enable startups to disrupt any business, no matter how established. The cloud is also the key technology driving innovation within enterprises.
Innovation means solving new problems that companies don’t yet know they have, and DevOps is an increasingly popular organizational model that lets you optimize both innovation and operational efficiency in the cloud.
However, there’s a common misconception that all clouds are created equal, which results in a one-size-fits-all approach to how organizations use the cloud from a development perspective. To truly succeed in a DevOps context, organizations should plan to develop and deploy apps across multiple clouds simultaneously, depending on business needs.
This is where the Lean Cloud comes in. The concept rests on the idea that you will often need to optimally relocate your app modules over their natural life cycles.
Application Lifecycle Milestones
There are three major milestones in the lifecycle of most modern applications. Interestingly, these categories and percentages map to three tech spend ratios reported by CXOs and heads of IT, according to research by Gartner. The three application milestones and their corresponding spend ratios are:
- Transform (14 percent): often spent on application development and testing.
- Grow (20 percent): often spent on production deployment and scaling of new and existing applications.
- Run (66 percent): maintenance, usually of legacy application infrastructure.
The first phase, Transform, is development and testing. That's where you run fail-fast experiments, discovering new business value. This exercise quickly and economically separates the workflows that actually have value from those that do not.
Hyperscale cloud platforms are ideal for this phase. Not doing this on hyperscale public cloud platforms places you at a steep disadvantage relative to your competitors and therefore borders on professional malpractice.
The second phase, Grow, is where you start to deploy and scale your application in production. It’s also where developers often realize they’re incurring too many costs.
Modern ‘composite’ applications are composed of several different modules, each of which can be individually examined for quality of service and operational efficiency. DevOps can then focus on the inefficient, expensive modules. Many offending modules are backing services that aren’t directly visible to the user and don’t change the top-line value of the overall application or service to the business.
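As an illustration of examining modules individually, the sketch below ranks per-module metrics by cost per unit of user-facing work to surface the "offending" modules. All module names and figures here are invented for the example; a real pipeline would pull these numbers from billing and monitoring data.

```python
# Hypothetical per-module metrics: monthly cost and requests served.
# All names and numbers are invented for illustration.
modules = [
    {"name": "web-frontend",    "monthly_cost": 1200.0, "requests": 9_000_000},
    {"name": "search-index",    "monthly_cost": 4500.0, "requests": 1_500_000},
    {"name": "media-transcode", "monthly_cost": 6200.0, "requests":   300_000},
]

def cost_per_1k_requests(m):
    """Cost of serving 1,000 requests for a module."""
    return m["monthly_cost"] / (m["requests"] / 1000)

# Rank modules from most to least expensive per unit of work,
# surfacing candidates for refactoring or relocation.
ranked = sorted(modules, key=cost_per_1k_requests, reverse=True)
for m in ranked:
    print(f'{m["name"]}: ${cost_per_1k_requests(m):.2f} per 1k requests')
```

A ranking like this makes the conversation concrete: the backing service at the top of the list is the one worth refactoring or relocating first.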
Experienced application architects will usually separate workload affinity from the data in backing services. NetApp Data Fabric, our approach to data management, mirrors the agility of workload mobility with data mobility, moving just the right data from one cloud to another at just the right time.
As a result, DevOps can refactor the offending modules and move the resulting services to different cloud providers as appropriate, based on price, security, compliance, quality of service, vertical industry integrations or regional latency.
Refactoring in this phase is non-trivial. You have to take the successful functional prototypes from the Transform phase, originally developer experiments with bold new ideas, and figure out how to operate them over the long term. This often means weaning offending application modules off dependencies on proprietary cloud services, then re-coding those modules using equivalent open source projects hosted on alternate clouds.
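One common way to wean a module off a proprietary service is to hide the dependency behind a small interface, so the backing implementation can be swapped per cloud without touching the module's logic. The sketch below is a minimal, hypothetical example: the class and function names are invented, and a real port would add implementations wrapping a proprietary cloud SDK or an open-source equivalent behind the same interface.

```python
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """Minimal storage interface the application module codes against."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(ObjectStore):
    """Stand-in backend for the sketch; a real deployment would add
    classes wrapping a proprietary object store or an open-source,
    S3-compatible server behind this same interface."""

    def __init__(self):
        self._blobs = {}

    def put(self, key, data):
        self._blobs[key] = data

    def get(self, key):
        return self._blobs[key]

def archive_report(store: ObjectStore, report_id: str, body: bytes) -> None:
    # The module only sees ObjectStore, so relocating it to another
    # cloud means supplying a different implementation, not re-coding.
    store.put(f"reports/{report_id}", body)

store = InMemoryStore()
archive_report(store, "2016-04", b"quarterly numbers")
```

The design choice here is that the interface, not the vendor SDK, becomes the contract the module depends on, which is what makes the cross-cloud moves described above practical.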
The third phase, Run, is typically associated with age and maturity of the application (module). After a few years all deployed applications evolve into legacy code. Based on hard lessons learned during the Grow phase, legacy code is extremely well-understood from a reliability, semantic and performance perspective. Such code often runs best on dedicated infrastructure where the value of elasticity is low.
Data and the Application Lifecycle
There's a very popular term in the industry known as data gravity: once a data set reaches a certain size, for all intents and purposes it can't move. You have to re-architect your entire application and cost model around your data’s location. The resulting loss of control, flexibility and choice is a dilemma savvy businesses plan ahead to avoid.
An advanced mode of the Data Fabric is NetApp Private Storage, which lets you stage your data centrally between clouds. This mode opens up maximum cloud freedom and choice, delivering composite application flexibility for modules to run on their optimal clouds. Compute stays in the cloud, but your data is never locked in. You now strategically own your data, yet make it instantly available to the right cloud at the right time during each phase of the lean cloud lifecycle.
All clouds are not created equal
Netflix, for example, realized the bulk of its content is relatively static: movie files are added and removed on roughly a weekly basis. As a result, Netflix operationalized its infrastructure expense by moving all of its applications entirely to the cloud.
Dropbox is a perfect example at the opposite end of the cloud spectrum. Having gone through all three lifecycle phases and realized the scale at which it manages constantly changing data, Dropbox moved off the public cloud entirely and built its own private cloud.
Fortunately, most companies don't operate anywhere near these two polar extremes. There is now a well-paved middle ground covering the three phases of the lean cloud, in which organizations develop, scale and operate successful applications in support of digital transformation and innovation.
Developers should rarely think about infrastructure. Their mentality should be NoOps. Imagination, creativity and a focus on abundance inspire the best invention and innovation. In this regard, hyperscale public clouds offer an irresistible wealth of developer resources with practically limitless elasticity.
For DevOps, the lean cloud is the ability to seize on that creativity and business agility in order to make it real. The lean cloud captures the economics of elasticity that hyperscale clouds like Amazon, Azure, GCE and SoftLayer offer, while maintaining visibility and mobility for your most critical lasting asset: your data.
Applications, or individual modules, can move to other clouds or back on premises as elasticity diminishes in value and better cost, quality of service or compliance options become available. As features or modules in an application mature with instrumentation and logging over time, the workload becomes highly predictable.
Any service provider or professional IT organization can build a relatively static infrastructure for a predictable workload, operating with much greater control and at a much lower overall cost than a hyperscale cloud alternative.
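One rough way to quantify "highly predictable" from that instrumentation is the coefficient of variation of the logged request rate: the lower it is, the safer it is to size fixed, dedicated capacity for the workload. The sample data and the cutoff below are invented for illustration; real decisions would use longer histories and account for seasonal peaks.

```python
import statistics

def coefficient_of_variation(samples):
    """Standard deviation divided by mean; a dimensionless spread measure."""
    return statistics.stdev(samples) / statistics.mean(samples)

# Hypothetical hourly request counts pulled from a module's logs.
hourly_requests = [980, 1010, 995, 1005, 990, 1020, 1000, 985]

cv = coefficient_of_variation(hourly_requests)

# Illustrative cutoff only: a steady workload like this one is a
# candidate for fixed-size, dedicated infrastructure; a bursty one
# still benefits from hyperscale elasticity.
PREDICTABLE_CV = 0.25
plan = "static capacity" if cv < PREDICTABLE_CV else "elastic capacity"
print(f"CV = {cv:.3f} -> {plan}")
```

A steady signal like this is what lets a service provider or IT organization commit to static infrastructure with confidence.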
Being lean and staying lean results in much higher gross profit margin on your feature or service compared to keeping everything in one cloud. As we can see, hyperscale clouds have a significant role to play in sparking innovation.
For a savvy CIO or CTO, the lean cloud offers an ideal framework to leverage rapid development and agility, then bring it in house to minimize operational costs when the time is right. Innovate like a startup, deploy like an enterprise!
Read more: http://net2vault.com
This content is made possible by a guest author, or sponsor; it is not written by and does not necessarily reflect the views of App Developer Magazine's editorial staff.