Open Integration Principles – The Antidote for Lock-In
|Karen Tegan Padir in Programming | Tuesday, April 22, 2014|
Are developers cynical? I’ve heard the charge made more than once in my life. And it might just be true. Maybe that’s because developers are the people who get to roll their eyes when they hear what marketers have promised to deliver. And they are the ones who know what the reality is behind the day’s buzzwords that everyone else tosses off with confidence. They know the score.
You can’t have a discussion anymore, even at home with family, without someone referencing the cloud. And I suspect that’s another one of the buzzwords that developers understand more clearly than most do. And most of them are appropriately excited about what it can mean for development and for fielding amazing functionality faster than ever. A few are probably a bit cynical – and one of the reasons is probably cloud lock-in.
It has become so easy to move functionality to the cloud – whether for small, tactical things or for things that are more strategic – that many people have stopped thinking critically about what’s at stake. But with the cloud, it’s still important to be wary and make conscious, well-thought-out decisions.
The cloud – at least the public cloud embodied in services provided by the likes of Amazon, Rackspace, GoGrid, and HP – is an amazing opportunity that offers transformative possibilities for industries, institutions, and even individuals. That’s because it offers instant access and agility, effectively unlimited capacity and scalability, moderate costs, and the ability to ‘switch it on and switch it off.’
Still, with cloud providers competing for low-cost-provider bragging rights, is it any surprise that they have built terms and practices into their agreements with the rest of us that will at least help to keep us as hooked as customers – loyal or otherwise?
It is important to remember, for example, that cloud providers have no incentive to help you keep your costs low. Switching on instances is easy, but switching them off is often a manual, and sometimes complicated, process. Not only do you have to remember to stop machine instances that are not being used, but you also have to manually remove unnecessary snapshots and release unused elastic IP addresses, among other things. Who knows how many rogue instances are out there in the cloud, conceived for a specific project and then more or less abandoned, but still steadily incurring charges?
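One practical control is a periodic audit script that flags resources still incurring charges. The sketch below is provider-agnostic: the record shapes and field names ('state', 'attached', 'created') are illustrative assumptions, not any vendor’s real API schema, but the logic mirrors the cleanup steps above.

```python
from datetime import date, timedelta

def flag_leftovers(instances, addresses, snapshots, today, max_snapshot_age_days=90):
    """Return warnings for resources that keep billing after a project winds down.

    Record shapes mimic what an IaaS inventory API might return; the field
    names here are illustrative assumptions, not any vendor's real schema.
    """
    warnings = []
    for inst in instances:
        # A stopped instance stops compute charges, but its attached storage
        # volumes usually keep billing until they are explicitly deleted.
        if inst["state"] == "stopped":
            warnings.append(f"stopped instance {inst['id']}: check attached volumes")
    for addr in addresses:
        # Elastic IPs that are reserved but not attached typically bill hourly.
        if not addr["attached"]:
            warnings.append(f"unattached elastic IP {addr['ip']}")
    cutoff = today - timedelta(days=max_snapshot_age_days)
    for snap in snapshots:
        # Old snapshots accumulate storage charges indefinitely.
        if snap["created"] < cutoff:
            warnings.append(f"stale snapshot {snap['id']}")
    return warnings
```

Run on a schedule and routed to the team that owns the budget, even a simple report like this catches the ‘someone forgot’ cases before they compound.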
That’s a problem in and of itself, but the even bigger problem with the cloud is lock-in, just as with the traditional proprietary operating systems offered by IBM, Novell, or Unisys. These days, once you are seduced onto the platform of any given cloud provider, your days of freedom are numbered. Spinning up systems and applications may provide lots of benefits, and you may even love the arrangement, but should you ever decide to go elsewhere – to another cloud, or to bring your data and applications “back home” to your own premises – the move could prove costly and disruptive.
Lock-in is a particularly serious problem when applications are developed within, and specifically for, a particular public cloud environment. Unless you deliberately avoided using vendor-specific features – no small task – the chances of moving your application elsewhere become practically nil. Sure, it’s possible, but it’s not very practical.
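One common way to avoid baking vendor-specific features into application code is to program against a small interface you own, and confine each provider’s details to a single adapter. The names below (`BlobStore`, `InMemoryStore`, `save_report`) are hypothetical, a sketch of the pattern rather than any vendor’s SDK.

```python
from abc import ABC, abstractmethod

class BlobStore(ABC):
    """Tiny storage interface owned by your application, not by any cloud vendor."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(BlobStore):
    """Stand-in adapter for local development and tests. A real deployment
    would add one adapter per provider behind this same interface, keeping
    vendor-specific calls out of application code."""
    def __init__(self):
        self._blobs = {}
    def put(self, key, data):
        self._blobs[key] = data
    def get(self, key):
        return self._blobs[key]

def save_report(store: BlobStore, name: str, body: bytes) -> None:
    # Application code sees only BlobStore; switching providers means
    # swapping the adapter, not rewriting every call site.
    store.put(f"reports/{name}", body)
```

The trade-off is that you forgo some of a provider’s convenience features, which is exactly the “no small task” the paragraph above describes.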
What’s the solution? In an ideal world, cloud providers would make it easier to stay portable. That’s not likely, however: not only is there no incentive for them to do so, but to some extent lock-in is simply a by-product of the way each cloud service has been built and optimized.
As you build an architecture for your organization – or even consider setting up a development environment in a public cloud – it is critical to avoid one-way streets that lead to lock-in. Although the public cloud often incorporates standards, each Infrastructure-as-a-Service (IaaS) vendor brings specific elements to the table, such as ease-of-use features that end up contributing to those layers of lock-in. Many of these features are common to multiple IaaS providers’ offerings, but the implementation details are platform-specific and non-portable.
While it’s safe to say that cloud vendor lock-in isn’t as absolute as hardware and operating system lock-in was in days gone by, it is still a very important consideration.
One option to help avoid lock-in is to choose a platform stack that can be deployed anywhere: on-premises, or on a private or public cloud. This provides better control and tooling at every stage of a move to the cloud and also reduces the leverage that a cloud provider has over you. It is even better if that platform stack is a Platform-as-a-Service (PaaS).
I think 2014 is shaping up to be the year of PaaS, based on what our customers tell us and with all the major software vendors getting in on the game – including IBM’s recent $1B investment in its BlueMix PaaS offering.
PaaS helps because it facilitates the deployment of applications without the cost and complexity of buying and managing hardware and software, or of endless provisioning cycles. It is today’s version of middleware, all bundled together, and can be a critical “next step” for organizations as they move from on-premises to cloud.
In fact, many PaaS providers offer a much-needed private cloud option, which can serve as an insulating layer that prevents cloud-vendor lock-in and preserves your freedom of action.
In the PaaS model, you can build functionality using the tools or libraries that are built into the PaaS, as well as whatever is familiar from your own environment. And, critically, PaaS gives you the tools to control configuration and manage deployment, obviating dependence on most of the cloud-vendor specifics that lead to lock-in.
There are many different PaaS vendors offering variations on this theme; advice on selecting among them can wait for another article. However, if you decide to adopt PaaS as a way to avoid cloud-vendor lock-in, make sure your PaaS investment specifically provides the ability to create, connect, and integrate applications and data across a wide range of environments – another key way to prevent lock-in. In addition, a PaaS should provide high-productivity tools that are easy to use, so that you can develop and deploy your applications quickly.
So, in considering a move to the cloud, make sure you are moving toward best practices and building or maintaining an architecture that is flexible. Consider these “rules of thumb.”
- Make sure you have controls in place so that you don’t consume more cloud resources than you need, especially if it is just because ‘someone forgot’ to throttle back a cloud activity.
- Cloud is often, but not always, less expensive than on-premises. Be sure to have metrics in place and consider how a given function is evolving. It might turn out that building your own capacity on-premises or in a private cloud actually makes more sense financially.
- Look for approaches that are modular in nature and allow you to mix and match functionality to meet your needs, while only paying for what you use.
- Look at open source options. Open source can reduce your price of admission to mission-critical infrastructure. It also offers direct access to the source code and to those who wrote it – a positive if you have sufficient in-house capability to make use of that access.
- Look at PaaS as a potential steppingstone that can help you chart a path to hybrid and cloud deployments.
Then, as you look to the cloud to add capacity, cut costs, streamline dev/test or add new functionality, be sure to keep the blinders off.
And, try not to be cynical! The cloud IS amazing. And lock-in is something that tools like PaaS can keep at bay.
This content is made possible by a guest author, or sponsor; it is not written by and does not necessarily reflect the views of App Developer Magazine's editorial staff.