The Challenges of Legacy Storage Containers
Tuesday, March 8, 2016
We recently visited with Gou Rao, Co-Founder and CTO of Portworx, to discuss the challenges legacy storage poses for containers and how container-defined storage addresses them.
ADM: What challenges does legacy storage pose for containers?
Rao: Most scale-out clouds run an intelligent storage fabric on top of commodity storage; building the same with legacy storage raises issues of scaling, cost and complexity. Containers make similar demands: they spin up quickly and consume large amounts of storage on demand. Enterprises want both containers and storage to scale out without wasting resources on legacy storage.
The complexity and cost of managing stateful containers with legacy storage are very restrictive. Today, developers can fluidly deploy containers in a development environment, but they face challenges when managing their state. Legacy storage architectures do not support container scale or speed – they're simply not designed for container-level density.
What's more, to get accessible and efficient container storage, DevOps must fall back on outdated processes such as filing tickets for additional capacity, because legacy storage must be manually reconfigured.
ADM: What is container-defined storage?
Rao: Containers bring new requirements for data persistence and management to the enterprise. For containers to be portable, data must be persistent across multiple nodes. Storage performance isn't tuned for containers, and storage features such as snapshots and replication are not container-specific.
Conversely, container-defined storage is purpose-built for containers and supports Docker volume plug-ins and schedulers. It ensures data persistence across nodes, lets storage policies such as class of service, IOPS and availability be set at the container level, and provides container-level snapshots.
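As a concrete sketch of the Docker volume plug-in workflow Rao describes, the commands below create a named volume through a container-defined storage driver and attach it to a container. The driver name `pxd` is Portworx's Docker volume driver, but the specific option names shown (`size`, `repl`) are assumptions based on typical driver conventions – check your driver's documentation.

```shell
# Create a volume through a container-defined storage driver.
# "pxd" is Portworx's Docker volume driver; the option names
# (size in GiB, repl for the replication factor) are assumptions --
# consult the driver's documentation for the exact keys.
docker volume create --driver pxd \
    --opt size=10 --opt repl=3 \
    pgdata

# Attach the volume to a container. Because the driver replicates
# the data across nodes, the volume stays available if the
# container is rescheduled onto another host.
docker run -d --name pg -v pgdata:/var/lib/postgresql/data postgres
```

The key point is that the storage policy travels with the volume definition, not with any particular piece of hardware.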
ADM: How is it different from legacy storage?
Rao: The biggest benefit of container-defined storage is that DevOps is free to deploy and scale applications, rather than having to manage hardware-centric issues such as sizing capacity to ensure high availability. With container-defined storage, storage can be spun up instantaneously, simplifying work in and out of production.
Because container-defined storage is deployed in a container, it runs natively on premises, in the cloud and in Linux environments. Container-defined storage runs in a converged storage environment and utilizes the bare-metal performance of x86 servers to avoid the unnecessary overhead of VMs.
ADM: Why is container-defined storage needed?
Rao: Current technology does not offer an easy way to get up and running with stateful containers. Container-defined storage, by contrast, provides clustered, highly available storage by simply running a container on each host. It saves time, money and IT resources, while allowing applications to run more smoothly and safely.
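The "run a container on each host" deployment Rao mentions can be sketched as below. The image name and flags here are hypothetical placeholders, not Portworx's actual invocation; the pattern is simply that each node runs one storage container that joins the cluster.

```shell
# Run once on every host in the cluster. The image name is a
# hypothetical placeholder. A storage container typically needs
# privileged access to the host's block devices, and a restart
# policy so the storage layer survives daemon restarts and reboots.
docker run -d --restart=always --privileged \
    -v /dev:/dev \
    example/container-defined-storage:latest
```

Once each host's storage container is up, the cluster's pooled capacity is available to any container scheduled on any node.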
ADM: How will app developers use Portworx’s new PX-Lite container storage solution?
Rao: Portworx's PX-Lite solution makes it easy for DevOps to deploy and scale storage as they move containers from DevTest into production, freeing users from reliance on SANs and NAS. DevOps can deploy and scale applications rather than managing hardware-centric issues such as sizing capacity or ensuring high availability. Portworx provides container-granular SAN functionality on commodity servers, delivering higher performance at much lower cost.
Portworx simplifies application distribution and deployment. As with virtualization, containers remove an application's physical association with hardware resources. Portworx allows provisioning of late-binding storage resources to a Dockerized application on any hardware. Applications are virtualized from the underlying storage yet run with bare-metal performance, and they are deployed and started instantaneously, on any hardware, just like a Docker container.
ADM: How will IT ops benefit from Portworx PX-Lite?
Rao: Portworx enables self-service IT for SAN-like functionality. End users such as DevOps teams and app owners can self-provision storage for their containerized apps using the same orchestration tools they use for deploying containers, while IT ops can go back to managing hardware and servers without worrying about individual apps.
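Self-provisioning "with the same orchestration tools" might look like the Compose file below, where the app owner declares the volume and its policy alongside the service itself. The driver name and option keys are assumptions carried over from the volume-driver sketch above; the point is that no storage ticket is filed – the volume definition lives with the app.

```yaml
# docker-compose.yml -- hypothetical sketch of self-service storage.
# The driver name (pxd) and option keys (size, repl) are assumptions;
# substitute your storage driver's documented values.
version: "2"
services:
  db:
    image: postgres
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
    driver: pxd
    driver_opts:
      size: "10"
      repl: "3"
```

A single `docker-compose up` then provisions both the container and its replicated storage, with no separate request to a storage team.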
ADM: Where do you see the future of container storage in five years?
Rao: Much can happen in five years in this rapidly growing space. We’re already seeing the introduction of cutting-edge infrastructure solutions alongside the Portworx storage solution. We see containers going mainstream and moving from DevOps into enterprise production. The benefits in production are just too large for CIOs to ignore.
The next step is enabling widespread container deployment, which will signal the maturity of core container technology. Then we will see a slew of new container-focused infrastructure technologies come to market in storage, networking, monitoring, orchestration and security. These new technologies will help drive container adoption in production. In the next five years, we believe we will see a new, super-fast thin datacenter enabled by containers.
Read more: http://portworx.com/