We often compare the security of containers to that of virtual machines and ask ourselves "...which is more secure?"  I have argued for a while now that comparing containers to virtual machines is a false premise - we should instead be comparing containers to processes.
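To make the "containers are processes" point concrete, here is a minimal sketch - assuming a Linux host with /proc mounted - that prints the namespaces the current process belongs to. Run it on the host and again inside a container: it is the same kind of process either way, and only the namespace memberships differ.

    import os

    # Each entry in /proc/self/ns is a symlink such as "pid:[4026531836]"; the
    # number identifies the kernel namespace this process is a member of. A
    # containerized process shows up the same way - it just points at
    # different namespaces than the host's processes do.
    NS_DIR = "/proc/self/ns"
    for ns in sorted(os.listdir(NS_DIR)):
        print(ns, "->", os.readlink(os.path.join(NS_DIR, ns)))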

We aren’t forced to get rid of virtual machines when we run containers.  Containers can be run in conjunction with virtual machines in three ways - so the either/or comparison is a straw man.

  1. Containers inside of virtual machines.
  2. Containers in some places, virtual machines in others (the comparison).
  3. Virtual machines in containers (yes, you can do this).

[Figure: OpenStack and Containers - Container Patterns]

The Premise

We can run workloads using any of the three techniques listed above... so forcing a security comparison isn’t exactly "natural". I would argue that it's more "natural" to think about the tenancy requirements of the workloads and the "amount" of isolation required.

The Tenancy Scale

What is the Tenancy Scale?  It's the result of a brainstorming session with Josh Bressers, the leader of Red Hat's product security team.

[Figure: Container Defense in Depth - The Tenancy Scale]

I'm not sure everyone remembers now, but when I started college (back in 1997), multi-user Unix systems were still "all the rage".  Individual users would telnet - yes, telnet - into a Unix server, and each user would run their own processes. Some users would run research batch jobs, while others would run their own web servers or use the system’s shared web server.  When you logged into the system, you could run a process listing and view everybody’s processes. In fact, if a given user had the permissions on their home directory set wrong, you could even get into their personal files. Crazy times.
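To make that concrete, here is a minimal sketch - assuming a Linux host with /proc mounted - of the kind of process listing any unprivileged user could (and still can) run to see every other user's processes:

    import os
    import pwd

    # Walk /proc and print the owner and command name of every process on the
    # host. No special privileges are needed - this is ordinary multi-user
    # visibility.
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            uid = os.stat(f"/proc/{pid}").st_uid
            with open(f"/proc/{pid}/comm") as f:
                comm = f.read().strip()
            print(pid, pwd.getpwuid(uid).pw_name, comm)
        except (FileNotFoundError, KeyError):
            # Processes can exit mid-walk; some uids may have no passwd entry.
            continue

Plain processes still have that visibility today; containers layer PID and mount namespaces on top so that, by default, tenants no longer see each other.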

In 2016, not many systems administrators would consider regular Linux process isolation enough to allow multiple users to log into a system - especially if those tenants worked for different organizations or were private individuals.

But, hypothetically, let’s say that I am a systems administrator for a university and I have different research teams that want to run jobs. Let’s say I have one group running biology computations and another group running geology computations - would containers be enough isolation? I would argue, yes.
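As a rough sketch of what that could look like - assuming the Docker SDK for Python, a local Docker daemon, and hypothetical image names and resource limits - each group's batch job gets its own mount, PID, and network namespaces plus cgroup limits, without dedicating a virtual machine to each group:

    import docker

    client = docker.from_env()

    # Each research group's job runs in its own container: separate namespaces
    # for isolation, cgroup limits so one tenant can't starve the other.
    client.containers.run("biology-pipeline:latest", "run-batch.sh",
                          detach=True, cpuset_cpus="0-3", mem_limit="8g")
    client.containers.run("geology-pipeline:latest", "run-batch.sh",
                          detach=True, cpuset_cpus="4-7", mem_limit="8g")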

In another hypothetical scenario, I am a systems administrator working for a public cloud provider and I have users from different companies, government organizations, and research facilities all wanting to share physical resources. Containers probably wouldn’t provide enough isolation by themselves. I would argue that we should slide up the scale to virtual machines for isolation.

With those two hypothetical situations out of the way, what are some of the next questions a security-conscious end user will ask?

  1. Can you add anti-affinity rules to make sure that my workloads run in different virtual machines on different physical machines? (See the sketch after this list.)
  2. Can you make sure that those different physical machines are in different racks, so that they use different power distribution units (PDUs) and different rack switches?
  3. Can you make sure that two copies of my workload run in two different data centers that are affected by different weather and earthquake patterns (note: I worked at a data center and customers really did ask this question)?
  4. Can you have one workload run on the moon in case the earth gets blown up?

OK, I made the last one up, but I think you get the point!  ;-)
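For the first question, here is a hedged sketch of what a pod anti-affinity rule can look like in Kubernetes, written as a Python dict that mirrors the usual YAML manifest fragment (the "app: my-workload" label is a hypothetical name):

    # Ask the scheduler to keep replicas of the same workload on different
    # nodes.
    anti_affinity = {
        "podAntiAffinity": {
            "requiredDuringSchedulingIgnoredDuringExecution": [
                {
                    "labelSelector": {"matchLabels": {"app": "my-workload"}},
                    # Spread across nodes; swapping in a zone or rack label as
                    # the topologyKey extends the same idea to racks and sites.
                    "topologyKey": "kubernetes.io/hostname",
                }
            ]
        }
    }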

Conclusion

Isolation and tenancy are granular needs. Typically, a workload needs “enough” isolation. What is enough? Well, due diligence is different for every application.

I would argue, let’s stop comparing virtual machines and containers and start thinking about how we can use them together to achieve enough isolation to meet a given workload's integrity requirements.

Questions?  Feedback?  Reach out using the comments section (below).