Container Tidbits: Understanding the docker-latest Package

Does your team want to move as quickly as possible? Are you and your development team looking for the latest features and not necessarily optimizing for stability? Are you just beginning with the docker runtime and not quite ready for container orchestration? Well, we have the answer, and it’s called the docker-latest package.

Background

About six months ago, Red Hat added a package called docker-latest. The idea is to offer two packages in Red Hat Enterprise Linux and Red Hat Enterprise Linux Atomic Host: a fast-moving docker-latest package, and a slower but more stable package called, well of course, docker.

The reasoning is that as your container infrastructure becomes larger and more sophisticated, a more stable version is often what people want – but when split into small, agile teams, or when just starting out, many teams will optimize for the latest features in a piece of software. Either way, we have you covered with Red Hat Enterprise Linux and Red Hat Enterprise Linux Atomic Host.
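Switching between the two packages is a package-and-service operation. The sketch below assumes a RHEL 7 or Atomic Host system and the `DOCKERBINARY` variable in `/etc/sysconfig/docker` described in Red Hat's documentation – treat it as a guide and check the release notes for your version before running it.

```shell
# Install the fast-moving package alongside the stable one.
sudo yum install docker-latest

# Stop and disable the stable docker service.
sudo systemctl stop docker
sudo systemctl disable docker

# Point the docker client at the docker-latest binary
# (sets DOCKERBINARY in /etc/sysconfig/docker).
sudo sed -i 's|^#\?DOCKERBINARY.*|DOCKERBINARY=/usr/bin/docker-latest|' /etc/sysconfig/docker

# Enable and start the docker-latest daemon.
sudo systemctl enable docker-latest
sudo systemctl start docker-latest

# Verify which version you are now running.
docker version
```

Switching back is the same dance in reverse: stop docker-latest, re-comment the `DOCKERBINARY` line, and re-enable the stable docker service.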

Continue reading “Container Tidbits: Understanding the docker-latest Package”

Evolution of Containers: Lessons Learned at ContainerCon Europe

Linux containers, and their use in the enterprise, are evolving rapidly. If I didn’t know this already, what I’m seeing at conferences like ContainerCon would confirm it. We’ve moved on from “what are containers, anyway?” to “let’s hunker down and get it right.”

Recently, I attended and spoke at LinuxCon/ContainerCon Europe. Like LinuxCon/ContainerCon North America, many of the keynotes touched on Linux container work going on in the community. At the European edition there was a particularly strong focus on Linux container security and networking. At least six sessions were focused on kernel security, orchestration security, and general container security. Four talks focused on container networking. Along with container security and networking, there were a lot of sessions about cloud native and containerized applications. 

Continue reading “Evolution of Containers: Lessons Learned at ContainerCon Europe”

Announcing Red Hat Enterprise Linux Atomic Host 7.2.6

Red Hat Enterprise Linux Atomic Host is a small footprint, purpose-built version of Red Hat Enterprise Linux that is designed to run containerized workloads. Building on the success of our last release, Red Hat’s Atomic-OpenShift team is excited to announce the general availability of Red Hat Enterprise Linux Atomic Host 7.2.6. This release features improvements in rpm-ostree, cockpit, skopeo, docker, and the atomic CLI. The full release notes can be found here. This post is going to explore a major new feature

Continue reading “Announcing Red Hat Enterprise Linux Atomic Host 7.2.6”

Architecting Containers Part 5: Building a Secure and Manageable Container Software Supply Chain

Background

In Architecting Containers Part 4: Workload Characteristics and Candidates for Containerization we investigated the level of effort necessary to containerize different types of workloads. In this article I am going to address several challenges facing organizations that are deploying containers – how to patch containers and how to determine which teams are responsible for the container images. Should they be controlled by development or operations?

In addition, we are going to take a look at

Continue reading “Architecting Containers Part 5: Building a Secure and Manageable Container Software Supply Chain”

Architecting Containers Part 4: Workload Characteristics and Candidates for Containerization

Many development and operations teams are looking for guidelines to help them determine what applications can be containerized and how difficult it may be. In Architecting Containers Part 3: How the User Space Affects Your Applications we took an in depth look at how the user space affects applications for both developers and operations. In this article we are going to take a look at workload characteristics and the level of effort required to containerize different types of applications.

The goal of this article is to provide guidance based on current capabilities and best practices within

Continue reading “Architecting Containers Part 4: Workload Characteristics and Candidates for Containerization”

Container Tidbits: When Should I Break My Application into Multiple Containers?

There is a lot of confusion around which pieces of your application you should break into multiple containers and why. I recently responded to this thread on the Docker user mailing list, which led me to write today’s post. In this post I plan to examine an imaginary Java application that historically ran on a single Tomcat server and to explain why I would break it apart into separate containers. In an attempt to make things interesting – I will also aim to

Continue reading “Container Tidbits: When Should I Break My Application into Multiple Containers?”

Container Tidbits: Can Good Supply Chain Hygiene Mitigate Base Image Sizes?

With Docker moving all of their official images to Alpine, base image size is a hot topic. Sure, having sane and minimal base images is important, but software supply chain hygiene is equally (if not more) important. Interested in understanding why?

Among other things, it’s important in a production container environment to have provenance (i.e. knowledge of where your container images came from). Using
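The excerpt cuts off here, but one concrete, lightweight form of provenance checking can be sketched with plain docker commands. This assumes a local docker daemon and a publisher (such as Red Hat) that sets identifying labels on its images; the image name is just an example.

```shell
# Pull a base image from a trusted registry.
docker pull registry.access.redhat.com/rhel7:latest

# Show who built the image, if the publisher sets a vendor label.
docker inspect --format '{{ index .Config.Labels "vendor" }}' \
    registry.access.redhat.com/rhel7:latest

# Record the content digest so future pulls can be pinned
# to exactly this image, not just its floating tag.
docker inspect --format '{{ index .RepoDigests 0 }}' \
    registry.access.redhat.com/rhel7:latest
```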

Continue reading “Container Tidbits: Can Good Supply Chain Hygiene Mitigate Base Image Sizes?”

Container Tidbits: Does The Pets vs. Cattle Analogy Still Apply?

Background

So, most of us have heard the pets vs. cattle analogy. The saying goes that in a cloud environment, you don’t waste time fixing individual virtual machines or containers – instead – you just delete them and re-provision. But does this apply to the entire cloud environment? The analogy is that you don’t take cattle to the vet, you just send them to slaughter. But is this really true? Cattle are worth a lot of money. I have never really liked the pets vs. cattle analogy. I think it lacks sensitivity and may not be appropriate when talking to a CIO. The real problem, however, is that the analogy fails to fully capture the changes in IT that are happening.

I propose that pets vs. cattle is not really about how or when we kill animals – instead it’s about the simplicity of consuming animals, and the complexity of maintaining the environment in which they live.

Pets

At the end of the day – in small quantities, pets are actually quite easy to take care of. When they are young, you take them to the vet for their shots. As they grow, you provide them with food, water, and a clean litter box (or take them outside once in a while) and they are pretty much “good to go”.

Like pets, you give traditional virtual machines their “shots” when they are first created (via Puppet, Chef, Ansible, or manual updates) and they are pretty much “good to go”. Of course, if they get “sick”, you take virtual machines to “the vet” – you log into them, troubleshoot problems, fix problems, or run update scripts. Usually by hand, or driven by some automation, but managed individually.

The problem is, raising pets in a house doesn’t scale. I don’t want 2000 cats and dogs at my house (and, let’s be honest, neither do you).

Cattle

Raising cattle is quite different from raising a household pet. It’s actually quite a bit more complex. Cows, sheep, and chickens are raised on farms because it’s more efficient, and farms are set up to handle the scale. This requires large amounts of land, tractors, fences, silos for grain and feed, specialized trailers for your truck, specialized train cars, and specialized processing plants. In addition, farms have to keep rotating which fields are used for grazing so that they don’t become unusable over time. If you really think about it – I’m only just skimming the surface. Farms are more efficient, but quite a bit more expensive than a house to run day to day.

Clouds (e.g. OpenStack, OpenShift) are more akin to farms than houses. Firing up a cloud is like setting up a farm from scratch: it requires a lot of planning and execution. After firing up your cloud, there is constant technical care and maintenance – adding and removing storage, fixing hung instances, adding and removing VLANs, fixing pods stuck in a pending state, returning highly available services (Cinder, API nodes, OSE/Kube Master, Hawkular Metrics) to production, upgrading the cloud platform, and so on. There is a lot of farm work with a cloud.

Farms are quite efficient at raising thousands of animals. I do not think, however, that you just tear down an entire farm when it is no longer running in an optimal state – instead – you fix it. Clouds are quite similar. Clouds are more work for operators, but less work for developers. Just like farms are a lot of work for farmers, but much less work for shoppers at the store. Raising large amounts of chicken is harder for farmers and easier for consumers. The farmers hide the complexity from consumers.

Conclusion

I propose that it’s not really about pets vs. cattle, but really about houses vs. farms. It’s far easier to buy chicken breast at the store than it is to raise hundreds of chickens in your backyard. I propose this as an improved analogy. Farms require quite a bit of work, are sophisticated and more expensive than a house, but quite efficient at supporting a lot more animals. At scale, I would take a farm any day over raising thousands of animals at my house. The same is true with a cloud environment. At scale, a cloud wins every time.

On a side note, people often conflate the notion of scale up and scale out with pets vs. cattle. In my mind, bigger and smaller bulls (scale up/down) or a greater number of smaller bulls (scale out) is a separate question – and a constant challenge in terms of both pets and cattle….

Finally, for those who still don’t like pets vs. cattle or houses vs. farms – let’s try a beer analogy: bottles vs. home brew. While it’s easy to drop by the store and buy a bottle of beer, it’s way more fun to brew it. Let’s brew some beer together – leave a comment below!

Schrodinger’s Container: How Red Hat is Building a Better Linux Container Scanner

The rapid rise of Linux containers as an enterprise-ready technology in 2015, thanks in no small part to the technology provided by the Docker project, should come as no surprise: Linux containers offer a broad array of benefits to the enterprise, from greater application portability and scalability to the ability to fully leverage the benefits of composite applications.

But these benefits aside, Linux containers can, if IT security procedures are not followed, also cause serious harm to mission-critical operations. As Red Hat’s Lars Herrmann has pointed out, containers aren’t exactly transparent when it comes to seeing and understanding all of their internal code. This means that tools and technologies to actually see inside a container are critical to enterprises that want to deploy Linux containers in mission-critical scenarios.
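The full post describes Red Hat's scanner in detail; as a minimal sketch of what "seeing inside a container" looks like from the command line – assuming a RHEL or Fedora host with the `atomic` CLI and its default OpenSCAP-based scanner installed:

```shell
# List the scanners registered on this host.
sudo atomic scan --list

# Scan an image for known CVEs using the default scanner.
sudo atomic scan registry.access.redhat.com/rhel7:latest
```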

Continue reading “Schrodinger’s Container: How Red Hat is Building a Better Linux Container Scanner”
