What’s on tap for Linux container technology in 2016

Some debates remain, including what operating system will run under container applications


This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.

White-hot interest in containers has been driven by cloud computing’s demand to simplify deployment, streamline time to production, and automatically deliver the resources an application needs. Linux containers provide all of that in a neat package: a simple tool for developing, testing, and delivering an application to the end user.

Containers are designed to make it easier and quicker for developers to create complete application operating environments. Gone is the painful validation process of traditional application deployments, in which developers had to identify the minimum system requirements needed to run the application.

There are other important benefits. Linux containers can package just about any type of server application to run everywhere – on your desktop, in a cloud, or anywhere Linux is available – regardless of the underlying Linux distribution, because containers share the host kernel rather than bundling their own. Containers can also have a considerably smaller footprint than VMs, which means your systems can achieve higher densities and run more cost effectively with containers than with VMs on the same host.
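As an illustration of this packaging model, a minimal Dockerfile bundles an application together with everything it needs to run (the image and application names below are hypothetical, not taken from this article):

```dockerfile
# Start from a small base image rather than a full OS install
FROM debian:8

# Copy the application into the image; its dependencies travel with it
COPY myapp /usr/local/bin/myapp

# The resulting image runs unchanged on a laptop, a server, or a cloud host
ENTRYPOINT ["/usr/local/bin/myapp"]
```

Building once with `docker build -t myapp .` and running with `docker run myapp` yields the same application environment everywhere the image is deployed, which is precisely the portability benefit described above.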

While containers are now established, debates remain. For example, enterprises will have to decide what private cloud infrastructure (and most importantly, what operating system) will run under container applications. In 2015 many proclaimed a thin operating system would win out, but a popular thin technology has yet to emerge. We predict that adoption of thin operating systems will take longer than expected to ramp up in the data center, making only modest headway in 2016.

Another important hurdle to broad adoption of containers is portability. Linux containers’ “write-once, run anywhere” philosophy is essential for simplifying application development and deployment across a multi-cloud environment.

In 2015 the Open Container Initiative, of which Oracle is a sponsor, gained broad industry support. Both LXC and Docker have become popular among all major Linux distributions for packaging and distributing cloud applications in Linux containers.

Now we’re seeing the Open Container Initiative make strides towards common specifications for container portability across operating systems, hardware and clouds without dependency on any particular commercial vendor or project.

Containers may be the answer for simplifying application deployment, but as this technology moves into wide enterprise use, customers will raise the bar further. Which enterprise workloads are currently suited for containers? Is it necessary for an application vendor to refactor a solution for container deployments, or will containers evolve to encompass more than next-generation applications developed specifically for them? Or is it a combination of both?

How will containers be managed? If there was ever concern about VM sprawl, the situation could be exponentially more challenging with containers. VM monitoring is a mature practice: the tools have been around for years and people know how to use them. But most enterprises have not yet established processes for monitoring a system built on containers, where memory, CPU cycles, and even disk space may all be shared.
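To illustrate where that monitoring data comes from, here is a minimal sketch of reading per-container memory usage from the kernel’s cgroup accounting. It assumes the cgroup v1 layout Docker used on a typical 2016-era Linux host (the `/sys/fs/cgroup/memory/docker` path is an assumption and varies by distribution); real monitoring tools build on exactly these counters.

```python
import os

# Assumed path: cgroup v1 memory hierarchy as mounted by Docker on a
# typical 2016-era Linux host. Adjust for your distribution.
MEMORY_CGROUP_ROOT = "/sys/fs/cgroup/memory/docker"

def parse_bytes(raw):
    """Parse a cgroup counter file's contents (a decimal byte count) into an int."""
    return int(raw.strip())

def container_memory_usage(container_id, root=MEMORY_CGROUP_ROOT):
    """Return current memory usage in bytes for one container, or None if absent."""
    path = os.path.join(root, container_id, "memory.usage_in_bytes")
    if not os.path.exists(path):
        return None
    with open(path) as f:
        return parse_bytes(f.read())

if __name__ == "__main__":
    # Report usage for every container cgroup present on this host, if any.
    if os.path.isdir(MEMORY_CGROUP_ROOT):
        for cid in os.listdir(MEMORY_CGROUP_ROOT):
            usage = container_memory_usage(cid)
            if usage is not None:
                print("%s: %.1f MiB" % (cid[:12], usage / (1024.0 * 1024.0)))
```

The catch the paragraph above points at: these counters describe one container’s view, but the underlying memory, CPU, and disk are shared with every other container on the host, so raw per-container numbers alone do not tell you who is crowding whom.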

How do you do metering and billing in such a world? How can IT management or a line-of-business organization gain visibility into what DevOps groups are delivering in containers? How do you know the sources used to build a container are trusted? How can you protect against security vulnerabilities hidden within the container?

How do identity management and access control work within the scope of a large container deployment? Are there expectations for auditing and compliance that meet the standard security deliverables we see with workloads deployed in VMs and on bare metal? How far can containers be scaled for large workloads?

Google’s Kubernetes has been a popular choice for deploying containers in clusters. In 2015 Docker released Swarm, its own native clustering software, which the company says has been tested at up to 1,000 nodes and 50,000 containers. In 2016 container clustering will continue to make significant progress, and we will begin to see whether containers can scale as an enterprise customer would expect.
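Cluster scaling in these systems is declarative: you state a desired container count and the cluster converges on it. As a sketch, a Kubernetes replication controller manifest of that era looked like the following (the names and image are hypothetical):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: web-frontend
spec:
  replicas: 50                # desired number of identical containers
  selector:
    app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
      - name: web
        image: example/web-frontend:1.0   # hypothetical image
        ports:
        - containerPort: 80
```

Scaling to thousands of containers then becomes a matter of raising `replicas` and letting the scheduler place them across nodes – which is exactly the dimension along which enterprise-grade scalability will be tested.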

High availability will also be a significant challenge for containers. Container HA is fairly rudimentary today, consisting mostly of basic failover, and enterprises need more. For instance, rolling patches are key to maintaining uptime in the cloud: a single kernel patch can take a container farm with hundreds of containers offline for a considerable time.

When it comes to cloud computing, containers will be part of the story. Delivering a large and diverse portfolio of commercial enterprise software in containers will require more enterprise features, including management tools for automating on-demand scaling, auditing content, verifying and enforcing compliance, ensuring high availability, and providing reporting and administrative visibility across form factors (physical servers, VMs, containers). Enterprise customers will need all of these elements, and software providers do not widely address them today.

Ultimately, containers are a part of an IT solution, not separate islands of resources. And the world is not going to switch to containers overnight. An enterprise might have a multi-tier application consisting of a few Docker or LXC front ends, a few middle-tier VMs, and a few back-end physical database servers, along with a mix of physical and virtual appliances. Enterprises need to be able to run applications with networks, storage, and management and monitoring tools that span bare metal, VMs, and LXC and Docker containers.

And of course, containers may not be the only answer to cloud application deployment. New technologies such as hypervisor unikernels are being discussed as a potential deployment tool for microservices-based applications. This model achieves a much smaller footprint and very rapid boot times by eliminating the traditional operating system. These attributes can be valuable in highly distributed application environments.

No doubt, containers are here to stay. Addressing enterprise needs will be key to rapid growth. 2016 looks to be a very interesting year indeed.

This story, "What’s on tap for Linux container technology in 2016" was originally published by Network World.
