Over the last 18 months, especially since the general availability of Red Hat Enterprise Linux 7, “containers” have emerged as a hot topic. With the more recent introduction of Red Hat Enterprise Linux Atomic Host, an operating system optimized for running the next generation of applications with Linux containers, one might wonder... what about virtualization? Given that the benefits of containerization seem to overlap with those of traditional virtualization, how do organizations know when to pick one approach over the other?

Many Red Hat customers who are working with containers are deploying them on virtual machines. This is especially true of those who already have an established virtualization infrastructure. Without a doubt, containers and hypervisors each have their own pros and cons, and choosing when to use which depends heavily on the applications and workloads you’re seeking to deploy. However, at the end of the day, virtualization and containerization are, in fact, complementary technologies. In effect, it's likely not a question of "which to use when" but one of "how might I best leverage both".

Virtualization technology increases efficiency in your data center by enabling today's x86 servers to run multiple operating systems and applications. Server consolidation has been the primary focus of virtualization, and it requires hardware abstraction to create an environment that can run multiple operating systems. Applications run inside virtual machines, abstracted away from the underlying hardware.

Containerization, on the other hand, allows a customer to run multiple copies of the same application and provides greater density. Containers run on a common underlying kernel and are abstracted away into logical partitions. Linux containers with the docker packaging format allow a user to bundle application code with its runtime dependencies and deploy it as a container. In fact, it is this application packaging and deployment capability that is revolutionizing DevOps -- providing the means for developers and operations to work side by side.
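To make the packaging idea concrete, here is a minimal sketch of the docker packaging format at work. The registry path, image tag, and index.html file are illustrative assumptions, and the yum step assumes access to the appropriate package repositories. First, a Dockerfile that bundles the application with its runtime:

    # Start from a RHEL 7 base image (any certified base image would do)
    FROM registry.access.redhat.com/rhel7
    # Install the application's runtime dependency
    RUN yum install -y httpd && yum clean all
    # Bundle the application content into the image
    COPY index.html /var/www/html/index.html
    # Define how the container starts
    CMD ["httpd", "-DFOREGROUND"]

Then build the image and launch it as an isolated container:

    docker build -t myorg/webapp:1.0 .
    docker run -d --name webapp -p 8080:80 myorg/webapp:1.0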

While similar in concept, virtualization and containerization differ significantly in how they enable multi-tenancy and server consolidation. Virtualization provides a virtualized hardware environment in which a guest OS is able to run one or more applications. A container host, by contrast, merely provides a logically isolated runtime environment for the application within the same OS instance. Containers, unlike virtual machines, do not require the overhead of booting, managing, and maintaining a guest OS environment. While Linux containers and virtualization differ in how they work, both have use cases for which they are best suited.
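One way to see the absence of guest-OS overhead is to time a trivial command in a fresh container; starting a container is a process launch, not an operating system boot. This is only a sketch, and it assumes the docker CLI and access to Red Hat's public registry:

    # Starting a container takes roughly as long as launching a process,
    # because no guest OS has to boot
    time docker run --rm registry.access.redhat.com/rhel7 true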

Virtualization provides flexibility by abstracting from hardware, while containers provide speed and agility with lightweight application isolation. When considering which to use, consider the type of workloads that you’re planning to run. For example, the services that make up modern web and Linux applications, such as MongoDB and the Apache HTTP Server, may be better suited to Linux containers because you will often want to run multiple copies of the same application side by side.
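For instance, scaling out a web tier can be as simple as starting several containers from the same image, each mapped to its own host port. The image name below is the hypothetical one from the earlier packaging sketch:

    # Three isolated copies of the same application image on one host
    docker run -d --name web1 -p 8081:80 myorg/webapp:1.0
    docker run -d --name web2 -p 8082:80 myorg/webapp:1.0
    docker run -d --name web3 -p 8083:80 myorg/webapp:1.0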

Some requirements to consider are:

  • start-up speed
  • lightweight deployments
  • application-centric packaging and DevOps
  • workload deployments
  • Linux vs. Windows applications
  • security required for the workload

Containers provide an attractive means of application packaging and delivery because of their low overhead and greater portability. Instead of virtualizing the hardware and carrying forward multiple full stacks of software from the application down to the operating system (resulting in considerable replication of core system services and the maintenance effort required to keep each stack secure), Linux containers rest on top of a single Linux operating system instance. Each container is fenced off from the other containers but shares the same core OS kernel underneath. Because these lightweight containers are portable among certified container hosts, applications and their dependencies can be packaged together, making the containers self-sufficient enough to run across any certified host environment.
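The shared kernel is easy to observe: every container reports the host's kernel version, yet each sees its own isolated process table. A quick sketch, reusing the hypothetical image from above and assuming ps is present inside it:

    uname -r                                    # kernel version on the host
    docker run --rm myorg/webapp:1.0 uname -r   # the very same kernel inside a container
    docker run --rm myorg/webapp:1.0 ps -e      # a near-empty process table, thanks to PID namespaces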

That said, hypervisors will remain critical to the virtualization footprint and to many forms of cloud computing. Hypervisor virtualization is, for the most part, agnostic about the guest operating system, which makes it very effective for consolidating existing workloads that run on multiple operating systems into a single virtualized environment. In addition, hypervisors offer full control of the guest operating system and its parameters, as well as resources (CPU, RAM, and disk) dedicated to the virtual machine.

Virtualization has matured to include resiliency capabilities such as live migration, high availability, software-defined networking (SDN), and storage integration, which, to date, are not as mature in the container world. Virtualization also provides a higher level of security by running the workload inside a guest operating system that is completely isolated from the host operating system. The virtual machine can be confined further using security technology such as sVirt, as implemented in the Red Hat Enterprise Virtualization hypervisor.

It is likely that many will conclude that both containers and hypervisors deserve first-class status, as they solve different problems. As mentioned above, Red Hat sees virtualization and containerization as complementary technologies, with virtualization abstracting physical compute, network, and storage resources, and containerization providing superior application packaging and delivery capabilities. Using the two together, for example by running Red Hat Enterprise Linux Atomic Host instances in virtual machines provided by a Red Hat Enterprise Linux OpenStack Platform deployment, combines the best of both worlds.
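As a closing sketch of that combination, an Atomic Host guest can be launched on an OpenStack deployment and then used as a container host. The image and flavor names here are illustrative assumptions:

    # Boot a (hypothetical) Atomic Host image as an OpenStack guest...
    nova boot --image rhel-atomic-host --flavor m1.medium atomic-node-1
    # ...then run containers on it exactly as on any other container host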