Posted 6/22/2016 4:02:37 PM by RICHARD HARRIS, Executive Editor
Michele Casey, Senior Director of Product Management for Oracle Linux, reached out to provide insight into Oracle Linux and the platform’s place in the evolution of containers for next-generation application development.
ADM: To set the stage, what are some notable container use cases where people are pursuing Oracle Linux as a solution?
Casey: Oracle Linux supports a broad portfolio of applications from many vendors, including, of course, those provided by Oracle itself. With both the Oracle Public Cloud and environments maintained on-premises, our customers are interested in the availability of Oracle Applications and tools as certified container-based solutions.
Today, DevOps is driving container momentum, as developers seek tools that aim to bridge the gap between development and operations by providing flexible, agile and scalable environments for delivering applications. With tools like Docker, developers can easily provide the libraries, tools and dependencies needed to run an application as a single executable.
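That packaging model can be sketched with a minimal Dockerfile; the image name, base image choice, and application file here are hypothetical placeholders, not from the interview:

```shell
# Hypothetical sketch: "myapp" and app.py are placeholders. The Dockerfile
# declares the base image, libraries, and dependencies the application needs.
cat > Dockerfile <<'EOF'
FROM oraclelinux:7
RUN yum -y install python && yum clean all
COPY app.py /opt/myapp/app.py
CMD ["python", "/opt/myapp/app.py"]
EOF

# Build the image and run it as a single, self-contained unit.
docker build -t myapp:1.0 .
docker run --rm myapp:1.0
```

Everything the application needs to run travels inside the image, which is what lets the same artifact move unchanged from a developer laptop to production.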
While many people associate containers with next-generation application development, there are also a number of customers who are interested in using containers to consolidate workloads, increase application density per server, provide efficiencies in resource management and eliminate overhead associated with traditional hypervisors.
The total transition from virtualization to containers will take time, as many of the applications used in enterprise environments today were not developed for containers. Oracle Linux mitigates this by offering a choice of container tooling, supporting both system-based (LXC) and application-based (Docker) containers.
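The difference between the two models can be sketched in a few commands, assuming the LXC and Docker packages are installed on an Oracle Linux host (the container names are illustrative):

```shell
# System container (LXC): a full Oracle Linux userspace that you start,
# log into, and manage much like a lightweight virtual machine.
lxc-create -n ol7-sys -t oracle -- --release 7.latest
lxc-start -n ol7-sys

# Application container (Docker): a single process packaged with its
# dependencies, started and discarded on demand.
docker run --rm oraclelinux:7 cat /etc/oracle-release
```

Legacy applications that expect a full operating system environment tend to fit the LXC model, while newly developed services fit the single-process Docker model.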
ADM: What kinds of container uses, environments, and industry verticals are we talking about here?
Casey: We are still in the early stages of container adoption. There are notable examples of organizations using containers, such as Facebook, Google and Twitter, but those types of workloads are not necessarily found in typical enterprise data centers. Also, the basis for containers in Linux, specifically cgroups and namespaces, has been available in the kernel for a number of years, but the tools and features supporting containers are still evolving.
Customers are looking for ways to integrate containers into their roadmaps but, as mentioned earlier, not all applications are suited for containers. The emerging DevOps model, in its purest form, advocates immutable infrastructure: separating configuration and other metadata from the application so that systems can be replaced rather than updated.
For many enterprise customers, this level of transformation will take time to plan and implement. There are also areas of concern to address before an organization begins such a change; for example, enterprise IT leaders remain bound by corporate governance and compliance requirements. As a result, containers are often still in the evaluation and testing stages as organizations look for ways to introduce this new technology within the guidelines of their businesses.
ADM: Please share at least three detailed examples.
Casey: At Oracle, our product teams are actively investigating and certifying solutions on containers today. Oracle Linux supports initiatives across Oracle for containerized delivery of applications, including the Oracle Cloud and Oracle Cloud Marketplace. In 2015, we began delivering services in Oracle OpenStack as containers. In addition, several applications have certified their solutions for containers. These include:
- Oracle Database 12c (LXC)
- Oracle Fusion Middleware 11g and 12c (LXC)
- Oracle E-Business Suite 12.2 (LXC)
- Oracle’s PeopleSoft Tools 8.54 and later (LXC)
- Oracle WebLogic Server 12.2.1 (Docker)
ADM: Explain things as you would to someone with only a moderate understanding of containers so it is crystal clear.
Casey: Containers provide workload isolation without the overhead of a fully virtualized environment. Containers depend on a shared host environment, specifically the Linux kernel, and deliver a dedicated workspace for running applications or services. This environment allows users to consolidate applications with higher density than traditional virtualized environments, with smaller footprints, faster startup and increased portability.
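One quick way to see the shared-kernel model in practice, assuming Docker is running on the host:

```shell
# Both commands report the same kernel release: the container has no kernel
# of its own, only isolated namespaces and cgroups on top of the host kernel.
uname -r
docker run --rm oraclelinux:7 uname -r
```

This is also why container startup is so fast: there is no guest kernel to boot, only a new set of namespaces to create.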
ADM: What is Ksplice? Please go into some detail and explain it as you would to someone with a moderate IT background.
Casey: Ksplice is a service provided with Oracle Linux Premier Support subscriptions that enables administrators to apply critical kernel and userspace updates without system reboots or application restarts. (Userspace coverage is limited to certain libraries and packages critical to application availability; the current userspace capabilities include glibc and openssl.)
ADM: How does Ksplice enable enterprises to patch the Oracle Linux host kernel with zero downtime?
Casey: Ksplice enables administrators to dynamically apply critical patches to running, production systems without downtime. Since Ksplice patches are applied within microseconds, there is no noticeable delay to production applications.
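In practice, the day-to-day workflow is a handful of commands, assuming the Ksplice Uptrack client is installed and registered on the system:

```shell
# Apply all available Ksplice updates to the running kernel, in place.
uptrack-upgrade -y

# List the updates now active in memory on this system.
uptrack-show

# Report the effective (patched) kernel version, which may be newer than
# the version the machine originally booted with.
uptrack-uname -r
```

Note the absence of any reboot or service restart in that sequence; the running applications never see the patch being applied.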
ADM: How is the traditional maintenance of applications deployed in containers done such that it impacts applications, containers, and enterprises?
Casey: In a traditional model, when there is a critical security event, IT/service providers need to notify users, plan for application downtime or failover, schedule a patch window, bring applications offline, patch the system, restart and then restore the workloads hosted by the system.
As mentioned earlier, in a true DevOps environment the overhead of system maintenance would be eliminated by simply replacing the existing container host with a new node and migrating the instances. However, for many enterprise customers this type of infrastructure does not exist today. As a result, the impact of downtime is higher because the containers share a host kernel. If that operating system needs to be patched, all the containers must be taken offline.
With Ksplice, you can patch the host kernel without taking containers offline. The actual patch can be applied in microseconds without the need to notify users and without interrupting a single application.
ADM: What kinds of time does all this save?
Casey: When you consider the time and resources involved, the time savings can be considerable. In this whitepaper, there is an example of a large enterprise customer (multiple data centers, 22,000 servers, 400 applications) that estimates Ksplice saves their organization over 500 man-hours per month.
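A back-of-envelope sketch shows how quickly those hours accumulate; the per-server figures below are hypothetical illustrations, not numbers from the whitepaper:

```shell
# Hypothetical estimate: assume a monthly patch cycle touches a subset of the
# estate, and each server consumes a fixed slice of admin time (scheduling,
# draining workloads, patching, restoring service).
servers_patched_per_month=2000
minutes_admin_time_per_server=15
total_hours=$(( servers_patched_per_month * minutes_admin_time_per_server / 60 ))
echo "Estimated man-hours per month: ${total_hours}"
```

Even modest per-server effort multiplies into hundreds of hours at data-center scale, which is the class of cost Ksplice removes.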
ADM: What are typical examples of expenses this saves on?
Casey: A large portion of the savings is found in the man-hours required to maintain systems. Patching is not the only source of expense: the entire production-maintenance process, and the number of parties involved when a system goes offline, can be quite costly.
In addition, Oracle uses Ksplice as a tool in our support organization for debugging kernel events. Historically, when a customer experienced a problem involving the kernel, they had to enable a debug kernel to capture more details during an event. These details would be used to evaluate the problem and create a final hot fix or patch. Once again, this required the administrator to plan for system downtime to apply the debug kernel, creating the same set of events described previously.
With Ksplice, a debug kernel is applied to a production system without reboot. This allows us to gather the additional information needed to identify the root cause, create a patch and apply the fix with zero downtime.
Ksplice provides a critical tool for system management and maintenance.
ADM: What is the start to finish process for most enterprises when using Ksplice to do a single kernel update in service? Where does the process start? What is the chronological order of events and the path for getting the updates?
Casey: For kernel patching, administrators can use Ksplice immediately. There is no requirement for a system update or restart to enable the service.
Ksplice provides multiple options, leaving the choice to the customer. Most enterprise customers prefer to keep systems restricted to their local networks. We provide options for mirroring the Ksplice patches to a local repository where the administrator can easily integrate updates into their existing patch processes. More information can be found in the Ksplice User Guide.
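For the local-mirror path, one pattern described in the Ksplice documentation uses the offline client against an internal yum repository; this sketch assumes that repository has already been configured to mirror the Ksplice channel for your Oracle Linux release:

```shell
# Assumes a local yum repository already mirrors the Ksplice channel
# (see the Ksplice User Guide for the mirroring steps).
yum install -y uptrack-offline                  # offline Ksplice client
yum install -y "uptrack-updates-$(uname -r)"    # updates for the running kernel
uptrack-show                                    # confirm which updates are active
```

Because the updates arrive as ordinary packages, they slot directly into an existing yum-based patch process without opening systems to the public network.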
ADM: What are other benefits to using Ksplice?
Casey: One of the main advantages of Ksplice is the value it brings to emerging Cloud technologies, like containers. As more customers move to container-based deployments, the services offered by Ksplice are essential. If you have multiple hosts with hundreds of application containers running on each host, the last thing you want is a system reboot.
The foundation of container-based technologies is the shared kernel on the host system. The ability to patch the system kernel dynamically, without a restart, is a powerful tool. The ability to troubleshoot a resource or performance issue involving the system kernel, without a restart or impact to running applications, is an incredibly valuable tool in this new model of delivery. There are many exciting possibilities for this technology going forward.
Read More https://www.oracle.com/linux/...