Posted Wednesday, March 09, 2016 by RICHARD HARRIS, Executive Editor
READ MORE: https://www.loggly.com/...
We visited with Sven Dummer, senior director of product marketing at Loggly, to discuss Docker’s recent announcement of the Docker Datacenter (DDC), an integrated, end-to-end platform for agile application development and management. Companies can use DDC to deploy a Containers-as-a-Service (CaaS) solution on-premises or in a virtual private cloud. (CaaS is an IT-managed and secured application environment where developers can build and deploy applications in a self-service manner.)
Whew! That’s a lot to ingest! However, not to worry, in this article Sven provides a nice, easily digestible approach to understanding how companies can take advantage of the Docker Datacenter.
ADM: How do Docker Containers and Docker’s recent Datacenter announcement impact the DevOps industry?
Dummer: Containers offer a whole lot of very compelling features. They are similar in nature to virtualization technologies like Xen, KVM or VMware, but they differ in that they don't virtualize an entire operating system, so they require fewer system resources and can be launched faster.
You can think of software containers as a standardized storage unit (just like the containers on a ship or truck) that you can put your application into. You can then run your application on any operating system (or cloud environment) that supports these containers (just like a shipping container will fit in any ship, truck or crane designed to support that standardized type of metal box).
That means application developers and operations teams no longer have to worry about things like operating system versions, vendors or patch levels. This used to be a major headache - for example, when an OS update or a particular bug fix would break your application. Most DevOps teams love containers for that reason alone.
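To make the shipping-container analogy concrete, here is a minimal, hypothetical Dockerfile (the file names and base image are illustrative, not from the interview) that packages a small Python application together with its dependencies, so the resulting image runs the same way on any Docker-capable host:

```dockerfile
# Pin a public base image so the OS layer travels with the application
FROM python:3-slim

WORKDIR /app

# Install dependencies inside the image, independent of the host OS
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Add the application code and declare how to start it
COPY app.py .
EXPOSE 8000
CMD ["python", "app.py"]
```

Building this image (`docker build -t myapp .`) produces an artifact that no longer depends on the host's OS version, vendor or patch level - the decoupling described above.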
But there's more. Containers allow users to modularize an application stack by breaking it into small building blocks, each running in its own container. You can easily re-use these building blocks when creating a new application, share them with others and benefit from components others have already created. These building blocks are typically referred to as microservices.
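As a sketch of what such modularization might look like in practice (the service and image names here are hypothetical), a Docker Compose file can wire several single-purpose containers into one application:

```yaml
# docker-compose.yml - each building block runs in its own container
version: "2"
services:
  web:
    image: example/web-frontend   # hypothetical front-end image
    ports:
      - "80:8000"
    depends_on:
      - api
  api:
    image: example/order-api      # hypothetical business-logic service
    depends_on:
      - db
  db:
    image: postgres:9.5           # an off-the-shelf, reusable building block
```

The `db` service illustrates the re-use Sven describes: it is a stock image created by others, dropped in as-is.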
Docker is the name of a specific open-source implementation of containers on Linux, and Docker Datacenter is a product of the company Docker. DDC lets you run and manage Docker containers and comes with commercial support. It also has the potential to make cloud migration significantly easier.
ADM: How does DDC make cloud migration easier?
Dummer: Think of DDC as a virtual datacenter on your hard drive containing your entire application stack packaged into containers. These containers will run on any Linux system and in any cloud environment that supports Docker containers. So it doesn't really matter if this environment is public cloud provider A today and public cloud provider B tomorrow or if the user decides to move the stack to their own private cloud in their own datacenter.
ADM: But, isn't Docker referring to DDC as an “on-premise” solution -- how does that relate to clouds?
Dummer: Support for any infrastructure - "from on-premises datacenters to public cloud, across a vast array of network and storage providers" - is one of the advantages that DDC's "Containers as a Service" model offers over the established Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) models.
ADM: How do you see technologies like Docker DDC affecting cloud providers, specifically?
Dummer: Cloud migration and supporting multiple clouds is typically extremely costly and painful, and the business model of many cloud providers is based on locking the customers into their specific technologies. DDC has the potential to remove this vendor lock-in and opens up the floor for more competition.
ADM: And what about OS vendors? Will it still matter which Linux distribution I use?
Dummer: Much less so than in the past. Just like shipping containers fit on any ship, train or truck that supports certain standardized dimensions and stacking mechanisms, applications packaged in Docker containers will run on any Linux operating system that supports that particular container format.
Note in this case, "Docker" refers to an open-source technology and not a product by the company of the same name. It uses capabilities of the Linux operating system present in every major Linux distribution.
So Docker (the open-source technology) removes a lot of painful OS and OS vendor dependencies. Now Docker (the company) sets sail to do the same with cloud provider lock-ins.
ADM: So, in other words, the next step after making containers somewhat OS-independent is to create containerized datacenters that are cloud-independent?
Dummer: I think the answer is yes. The move to the cloud removed a lot of dependencies, such as those on local datacenter resources and on-premise software solutions with often overpriced licensing models.
ADM: How is the vendor lock-in we've seen with legacy on-premise software different from what we're seeing from cloud providers today?
Dummer: Cloud providers were able to replace these old models with dynamic solutions where customers only use what they need and only pay for what they use. If your servers need to handle more traffic temporarily, you only pay for additional compute power for the time needed. Modern cloud environments will even handle this automatically for you and launch additional server instances as needed. When the traffic goes down, they will shut them down again.
However, public cloud providers also want to retain their customers just like the legacy software vendors did. We're seeing similar lock-in mechanisms come into play, maybe just a tad more subtle than before. In particular, the large cloud providers offer a growing variety of services that are easy and convenient to use once you're inside of their environment.
Once you become dependent on those, moving to a different cloud provider that offers better prices or better service will be significantly harder. Such services include anything from system management to monitoring, log management and many, many more.
ADM: In terms of log management, monitoring, system management - what will DevOps need to be aware of, and where?
Dummer: In a nutshell – the more you rely on your cloud provider's specific solutions, the harder it will be to move to somewhere else or to operate a hybrid environment. For example, it might be extremely convenient if your cloud provider has a ready-to-use monitoring or log management solution.
But, more often than not, it will only work with systems inside of your provider's cloud. What if you ever want to move all – or some – of your systems to a different provider or to your own colo?
I would caution DevOps organizations and their decision makers to carefully review their cloud architecture specifically from the perspective of potential vendor lock-in. It might be worth going with solutions that live and function outside of a specific vendor's cloud. It might not seem urgent or even necessary today, but it will likely save your company a lot of money and headache a few years down the road.
Editor's Note: Sven discusses the subject further in this blog post. He also notes that, as Loggly is a Docker Ecosystem Technology Partner (ETP), Loggly's multiple integrations with Docker containers work just fine with DDC. Check it out.