Notes from the Field – A Summary of the KVM Forum

I attended the KVM Forum in August of this year, and as a new Red Hatter with a lot of VMware experience, I found it eye-opening. I am seeing a lot of interest in Red Hat Virtualization from my customers, so I wanted to understand the platform at a much deeper level. KVM is the technology that underpins the Red Hat Virtualization platform. A number of themes emerged for me as I attended sessions and enjoyed the hallway track. This was a forum by developers, for developers, so infrastructure types like myself were few and far between, but that did not impact my enjoyment of the conference. In fact, as a technical guy coming from a server background, I learned a lot more than I would have at a typical infrastructure-focused conference. Below, I will highlight some topics that stood out to me.

Continue reading “Notes from the Field – A Summary of the KVM Forum”

Peace in Our Time: Bringing Ops and Dev Together

Frustrated by long delays getting new code into production? Worried that developers are adopting unapproved technologies? In an increasingly automated, containerized world it’s time to adapt your processes and policies so that developers can utilize the latest and most appropriate technology — and operations have full awareness of everything running in their environment.

The Problem and How to Solve It

IT processes, driven by business reliance on Mode 1 applications, were not designed for rapid change, nor are they equipped to handle it. This creates friction between management and operations on one side and developers on the other: for example, tension arises when development teams want to employ new or different tools than the standards accepted by operations. It doesn’t have to be this way, though.

In this post, we are going to take a deeper look at the collaboration that happens between development and operations when building and working with the “latest and greatest” technology stack.

Continue reading “Peace in Our Time: Bringing Ops and Dev Together”

From Checkpoint/Restore to Container Migration

The concept of saving (i.e. checkpointing / dumping) the state of a process at a certain point in time, so that it may later be used to restore / restart the process to exactly the same state, has existed for many years. One of the most prominent motivations for developing and supporting checkpoint/restore functionality was to provide improved fault tolerance. For example, checkpoint/restore allows processes to be restored from previously created checkpoints if, for one reason or another, those processes have been aborted.
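
To make the concept concrete, one well-known user-space implementation of checkpoint/restore on Linux is CRIU (Checkpoint/Restore In Userspace). Below is a minimal sketch of driving CRIU from Python, assuming the criu binary is installed, the target process was started from a shell session, and the script runs with the privileges CRIU requires (typically root); it illustrates the general idea rather than any specific implementation discussed in the full post.

    #!/usr/bin/env python3
    """Minimal sketch: checkpoint and restore a process with CRIU.

    Illustrative only; assumes the `criu` binary is installed and that the
    script runs with the privileges CRIU requires (typically root).
    """

    import subprocess
    import sys


    def checkpoint(pid: int, images_dir: str) -> None:
        # Dump the process tree rooted at `pid` into `images_dir`.
        # --shell-job handles processes started from an interactive shell;
        # --leave-running keeps the original process alive after the dump.
        subprocess.run(
            ["criu", "dump", "-t", str(pid), "-D", images_dir,
             "--shell-job", "--leave-running"],
            check=True,
        )


    def restore(images_dir: str) -> None:
        # Recreate the process, with the same state, from the saved images.
        subprocess.run(
            ["criu", "restore", "-D", images_dir, "--shell-job"],
            check=True,
        )


    if __name__ == "__main__":
        # Usage (hypothetical): checkpoint_restore.py <pid> <images-dir>
        target_pid, images = int(sys.argv[1]), sys.argv[2]
        checkpoint(target_pid, images)
        # Later, possibly after the original process has gone away:
        # restore(images)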

Over the years there have been several different implementations of checkpoint/restore for Linux. Existing implementations differ in terms of “what level” (of the operating system) they operate at; the lowest-level approaches implement checkpoint/restore directly in the kernel, while other, “higher-level” approaches implement it completely in user space. While it would be difficult to unearth each and every approach / implementation, it is likely fair to

Continue reading “From Checkpoint/Restore to Container Migration”

PCI Series: Requirement 6 – Develop and Maintain Secure Systems and Applications

This post is the fifth installment in my PCI DSS series – a series dedicated to the use of Identity Management (IdM) and related technologies to address the Payment Card Industry Data Security Standard (PCI DSS). This specific post is related to requirement six (i.e. the requirement to develop and maintain secure systems and applications). The outline and mapping of individual articles to requirements can be found in the overarching post that started the series.

Section six of the PCI DSS standard covers guidelines related to secure application development and testing. IdM and its ecosystem can help address the requirements in this part of the PCI DSS standard in multiple ways. First of all, IdM includes a set of Apache modules for

Continue reading “PCI Series: Requirement 6 – Develop and Maintain Secure Systems and Applications”

In Defense of the Pet Container, Part 3: Puppies, Kittens and… Containers

In our third and final installment (see: part one & part two), let’s take a look at some high-level use cases for Linux containers and then finally (finally) defend what I like to call “pet” containers. From a general perspective, we see three recurring high-level use cases for containerizing applications:

  1. The fully orchestrated, multi-container application as you would create in OpenShift via the Red Hat Container Development Kit;
  2. Loosely orchestrated containers that don’t use advanced features like application templates and Kubernetes; and
  3. Pet containers.

Continue reading “In Defense of the Pet Container, Part 3: Puppies, Kittens and… Containers”

Announcing Red Hat Enterprise Linux Atomic Host 7.2.6

Red Hat Enterprise Linux Atomic Host is a small-footprint, purpose-built version of Red Hat Enterprise Linux that is designed to run containerized workloads. Building on the success of our last release, Red Hat’s Atomic-OpenShift team is excited to announce the general availability of Red Hat Enterprise Linux Atomic Host 7.2.6. This release features improvements in rpm-ostree, cockpit, skopeo, docker, and the atomic CLI. The full release notes can be found here. This post is going to explore a major new feature

Continue reading “Announcing Red Hat Enterprise Linux Atomic Host 7.2.6”

Bringing Intelligence to the Edge

In my last post, we discussed how the needs of an enterprise-grade Internet of Things (IoT) solution require a more diligent approach than what’s involved when putting together a Proof of Concept (PoC). In this post, we’ll explore how businesses can leverage their existing infrastructure to create scalable IoT deployments.

While my previous post reviewed a “list of ingredients” needed to build out an industrial-grade IoT solution, the massive scale and reach of IoT solutions for businesses require some additional considerations, namely

Continue reading “Bringing Intelligence to the Edge”

Red Hat Hyperconverged Solutions

Hyperconvergence is a key topic in IT planning across industries today. As customers look to lower costs and simplify day-to-day management of their IT operations, the hyperconverged model emerges as a fit for a number of operational use cases.

Convergence began at the hardware level, with compute, network, and storage appearing in consolidated platforms, but it’s now accelerating as hyperconvergence goes “software-defined”. As a leading software infrastructure stack provider, Red Hat recognizes that reducing the overall number of moving parts in your infrastructure and simplifying the procurement and deployment processes are core requirements of the next-generation elastic datacenter.

Applying a solutions-aligned lens, Red Hat is innovating software-defined compute-storage solutions across the portfolio, designed to meet the needs of a broad customer base with diverse requirements. As a vendor-partner in this journey, we recognize the value of bringing storage close to your compute and eliminating the need for a discrete storage tier. By doing so across traditional virtualization and cloud as well as containers, and by leveraging our industry-proven software-defined storage assets, Red Hat Gluster Storage and Red Hat Ceph Storage, we’ve defined a robust set of efficient, solution-aligned hyperconverged offerings.

This post provides a short overview of several areas where we see hyperconverged, software-defined architectures aligning with use cases, with a focus on

Continue reading “Red Hat Hyperconverged Solutions”