Red Hat Virtualization: Bridging the Gap with the Cloud and Hyperconverged Infrastructure

Red Hat Virtualization offers a flexible platform for performance-intensive and secure workloads. Red Hat Virtualization 4.0 introduced new features that enable customers to further extend the use cases of traditional virtualization in hybrid cloud environments. The platform now easily incorporates third-party network providers into the existing environment, along with other technologies found in next-generation cloud platforms such as Red Hat OpenStack Platform and Red Hat Enterprise Linux Atomic Host. Additionally, new infrastructure models are now supported, including select support for hyperconverged infrastructure: the native integration of compute and storage across a cluster of hosts in a Red Hat Virtualization environment.

Container Tidbits: Understanding the docker-latest Package

Does your team want to move as quickly as possible? Are you and your development team looking for the latest features and not necessarily optimizing for stability? Are you just beginning with the docker runtime and not quite ready for container orchestration? Well, we have the answer, and it's called the docker-latest package.

Background

About 6 months ago, Red Hat added a package called docker-latest. The idea is to offer two packages in Red Hat Enterprise Linux and Red Hat Enterprise Linux Atomic Host: a fast-moving docker-latest package and a slower but more stable package called, well of course, docker.

The reasoning is that as your container infrastructure becomes larger and more sophisticated, a more stable version is often what people want – but small, agile teams, or teams that are just starting out, will often optimize for the latest features in a piece of software. Either way, we have you covered with Red Hat Enterprise Linux and Red Hat Enterprise Linux Atomic Host. A rough sketch of how switching to the faster-moving runtime might look is shown below; the exact service and configuration file names can vary by release, so treat these commands as illustrative rather than definitive.
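    # Install the fast-moving runtime alongside the stable docker package
    sudo yum install docker-latest

    # Stop the stable daemon and switch over to docker-latest
    sudo systemctl stop docker
    sudo systemctl disable docker
    sudo systemctl enable docker-latest
    sudo systemctl start docker-latest

    # On some releases you may also need to point the docker client at the
    # newer binary, e.g. by setting DOCKERBINARY=/usr/bin/docker-latest
    # in /etc/sysconfig/docker (check the documentation for your version)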

Arm in Arm: Explore Enterprise Server Options at ARM’s Annual Technical Conference

If you have ever wanted to learn about Red Hat's involvement in the ARM server ecosystem, and are in the San Francisco Bay Area, this week may be a perfect opportunity. Red Hat will be exhibiting at ARM TechCon, ARM Holdings' premier yearly show at the Santa Clara Convention Center. Attendees will be presented with a variety of great technical sessions and training topics, along with expert keynotes, solutions-based Expo Theater sessions and an expo floor filled with new and emerging technologies for the datacenter. Note that the expo floor can be accessed with the free

Red Hat Virtualization and Security

The usage of open source technologies has grown significantly in the public sector. In fact, according to a published memo, open source technologies allow the Department of Defense to “develop and update its software-based capabilities faster than ever, to anticipate new threats and respond to continuously changing requirements”. Cybersecurity threats are on the rise and organizations need to ensure that the software they use in their environments is safe. IT teams need the ability to quickly identify and mitigate breaches. They also need to deploy preventative measures and ensure that all stakeholders are protected.

Evolution of Containers: Lessons Learned at ContainerCon Europe

Linux containers, and their use in the enterprise, are evolving rapidly. If I didn’t know this already, what I’m seeing at conferences like ContainerCon would confirm it. We’ve moved on from “what are containers, anyway?” to “let’s hunker down and get it right.”

Recently, I attended and spoke at LinuxCon/ContainerCon Europe. Like LinuxCon/ContainerCon North America, many of the keynotes touched on Linux container work going on in the community. At the European edition there was a particularly strong focus on Linux container security and networking. At least six sessions were focused on kernel security, orchestration security, and general container security. Four talks focused on container networking. Along with container security and networking, there were a lot of sessions about cloud native and containerized applications. 

Secure Your Containers with this One Weird Trick

Did you know there is an option to drop Linux capabilities in Docker? Using the docker run --cap-drop option, you can lock down root in a container so that it has only limited privileges. Sadly, almost no one ever tightens the security on a container or anywhere else.
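For example (a minimal sketch; the image name and the capabilities a real workload needs added back are purely illustrative):

    # Drop every capability, then add back only what the workload needs
    docker run --rm -it --cap-drop ALL --cap-add NET_BIND_SERVICE fedora bash

    # Or drop a single risky capability; ping needs CAP_NET_RAW, so this
    # will typically fail with "Operation not permitted"
    docker run --rm -it --cap-drop NET_RAW fedora ping -c1 8.8.8.8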

The Day After is Too Late

There’s an unfortunate tendency in IT to think about security too late. People only buy a security system the day after they have been broken into.

Dropping capabilities can be low hanging fruit when it comes to improving container security.

What are Linux Capabilities?

According to the capabilities man page, capabilities are distinct units of privilege that can be independently enabled or disabled.

The way I describe it is that most people think of root as being all powerful. This isn't the whole picture: the root user with all capabilities is all powerful. Capabilities were added to the kernel roughly 15 years ago to try to divide up the power of root.
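If you want to see this division of power on your own system, the libcap tools can show and decode the capability sets of any process. A quick sketch (the hex bitmask below is illustrative; yours will depend on your kernel and how the shell was started):

    # Show the raw capability sets of the current shell
    grep Cap /proc/$$/status

    # Decode a capability bitmask into human-readable names
    capsh --decode=0000003fffffffff

    # Or print the full capability state of the current process
    capsh --print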

ARMing IoT with Linaro LITE

Linaro has announced a new project focused on IoT – LITE, or Linaro IoT and Embedded. This project will focus on developing core technology to be used in IoT devices and gateways.

Linaro is a consortium focused on the Linux ecosystem for ARM-based systems — see www.linaro.org for details. Much of their work to date has been focused on Android phones and tablets. Active development efforts include server and networking as well as Digital Home. The Digital Home project focuses on set-top boxes and home gateways. Linaro's goal is to avoid fragmentation of the ARM ecosystem by providing a common foundation that can be used to build a wide range of value-added applications.

LITE extends existing Linaro projects by addressing both

Notes from the Field – A Summary of the KVM Forum

I attended the KVM Forum in August of this year, and as a new Red Hatter with a lot of VMware experience, it was eye opening. I am seeing a lot of interest in Red Hat Virtualization from my customers, and so I wanted to understand the platform at a much deeper level. KVM is the technology that underpins the Red Hat Virtualization platform. A number of themes emerged for me as I attended sessions and enjoyed the hallway track. This was a forum for developers by developers, so infrastructure types like myself were few and far between, but that did not impact my enjoyment of the conference. In fact, as a technical guy coming from a server background, I learned a lot more than I would have learned from a typical infrastructure-focused conference. Below, I will highlight some topics that stood out to me.

Peace in Our Time: Bringing Ops and Dev Together

Frustrated by long delays getting new code into production? Worried that developers are adopting unapproved technologies? In an increasingly automated, containerized world it’s time to adapt your processes and policies so that developers can utilize the latest and most appropriate technology — and operations have full awareness of everything running in their environment.

The Problem and How to Solve It

IT processes, driven by business reliance on Mode 1 applications, have not been designed for, nor are they equipped to handle, rapid change. This creates friction between management and operations on one side, and developers on the other. For example, tensions often flare when developer teams want to employ tools other than the standards accepted by operations. It doesn't have to be this way, though.

In this post, we are going to take a deeper look at the collaboration that happens between development and operations when building and working with the “latest and greatest” technology stack.

From Checkpoint/Restore to Container Migration

The concept of saving (i.e. checkpointing / dumping) the state of a process at a certain point in time, so that it may later be used to restore / restart the process to the exact same state, has existed for many years. One of the most prominent motivations to develop and support checkpoint/restore functionality was to provide improved fault tolerance. For example, checkpoint/restore allows processes to be restored from previously created checkpoints if, for one reason or another, those processes had been aborted.
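One way to try this on a modern Linux system is the CRIU (Checkpoint/Restore In Userspace) tool, mentioned here only as an illustration; the PID and image directory below are made up:

    # Checkpoint a running process tree into an image directory
    # (--shell-job is needed for processes started from an interactive shell)
    mkdir -p /tmp/checkpoint
    sudo criu dump -t 1234 -D /tmp/checkpoint --shell-job

    # Later, restore the process tree to the exact same state
    sudo criu restore -D /tmp/checkpoint --shell-job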

Over the years there have been several different implementations of checkpoint/restore for Linux. Existing implementations differ in the level (of the operating system) at which they operate; the lowest-level approaches implement checkpoint/restore directly in the kernel, while other, higher-level approaches implement it completely in user space. While it would be difficult to unearth each and every approach / implementation – it is likely fair to