Viewing the Horizon from the Cockpit

One of my favorite things about technology is seeing what’s next. I often find myself asking, “…what’s on the horizon?” Or, better yet, “…what’s beyond the horizon?” In the case of Red Hat Enterprise Virtualization (RHEV), and specifically the hypervisor, the “next generation node” is hovering in the distance. I anticipate that this advance will be significant for both Red Hat partners and customers.

Continue reading “Viewing the Horizon from the Cockpit”

Virtual Machine Migration Best Practices

Lately, there has been an increase in IT organizations migrating their traditional virtualization workloads to open source platforms such as Red Hat Enterprise Virtualization (RHEV). Although there are many reasons for migrating (e.g., cost, features), one key advantage stands out for the open source alternatives: organizations are now seeing the viability of building on the same platform to integrate open source cloud solutions with traditional applications. No single platform is optimized for every workload type or tier. Not only do organizations get to take advantage of the fast pace of open source innovation, but they also realize significant cost savings.

Continue reading “Virtual Machine Migration Best Practices”

Why Use SSSD Instead of a Direct LDAP Configuration for Applications?

In my Identity Management and Application Integration blog post, I talk about how applications can make the most of the identity ecosystem. For example, a number of applications have integrated Apache modules and SSSD to provide a more flexible authentication experience. Despite this progress, some remain unconvinced. They wonder why they should use Apache modules and SSSD in conjunction with, for example, Active Directory instead of using a simple LDAP configuration… essentially asking: why bother?
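As a rough sketch of what that “simple LDAP configuration” means from the application’s side, the following python-ldap snippet authenticates a user directly against a single Active Directory domain controller (the hostname, domain, and base DN are invented for illustration). Everything the snippet does not handle, such as server discovery and failover, credential caching, and offline operation, is exactly what SSSD takes care of on the application’s behalf.

```python
# Minimal direct-LDAP authentication sketch using python-ldap.
# The server dc1.example.com, the example.com UPN suffix, and the base DN
# are hypothetical values, used only to illustrate the approach.
import ldap

LDAP_URI = "ldap://dc1.example.com"   # one hard-coded server, no failover
BASE_DN = "dc=example,dc=com"

def authenticate(username, password):
    """Bind as the user; a successful simple bind means the password is valid."""
    conn = ldap.initialize(LDAP_URI)
    conn.set_option(ldap.OPT_REFERRALS, 0)   # AD referrals confuse many clients
    try:
        conn.simple_bind_s("%s@example.com" % username, password)
        # The application must look up group membership itself --
        # no caching, no offline mode, no site awareness.
        return conn.search_s(BASE_DN, ldap.SCOPE_SUBTREE,
                             "(sAMAccountName=%s)" % username, ["memberOf"])
    except ldap.INVALID_CREDENTIALS:
        return None
    finally:
        conn.unbind_s()
```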

Let’s look at this scenario in greater detail. If an application supports

Continue reading “Why Use SSSD Instead of a Direct LDAP Configuration for Applications?”

Architecting Containers Part 4: Workload Characteristics and Candidates for Containerization

Many development and operations teams are looking for guidelines to help them determine which applications can be containerized and how difficult it may be. In Architecting Containers Part 3: How the User Space Affects Your Applications, we took an in-depth look at how the user space affects applications for both developers and operations. In this article we are going to take a look at workload characteristics and the level of effort required to containerize different types of applications.

The goal of this article is to provide guidance based on current capabilities and best practices within

Continue reading “Architecting Containers Part 4: Workload Characteristics and Candidates for Containerization”

Scaling Up On Demand with Red Hat Enterprise Virtualization

One of the most compelling features of Red Hat Enterprise Virtualization 3.6 is the ability to hot plug memory. Red Hat Enterprise Virtualization 3.5 provided the ability to hot plug vCPUs into running virtual machines. Red Hat Enterprise Virtualization 3.6 completes this vision of hot plugging resources on demand.
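In RHEV these operations are driven from the management engine (via the Administration Portal or REST API), but the underlying mechanism on a KVM host is ordinary device hot plug. As a hedged, lower-level illustration using libvirt-python directly, with an invented guest name, hot adding vCPUs and a memory DIMM to a running virtual machine looks roughly like this:

```python
# Lower-level sketch of resource hot plug on a KVM host via libvirt-python.
# The guest name "webserver01" is hypothetical; RHEV performs the equivalent
# steps for you when you edit a running VM from the engine.
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("webserver01")

# Hot plug vCPUs: raise the active vCPU count of the running guest
# (it must stay at or below the guest's configured maximum).
dom.setVcpusFlags(4, libvirt.VIR_DOMAIN_AFFECT_LIVE)

# Hot plug memory: attach a 1 GiB DIMM to the running guest; the guest
# must have been defined with <maxMemory> headroom and a NUMA topology.
dimm_xml = """
<memory model='dimm'>
  <target>
    <size unit='MiB'>1024</size>
    <node>0</node>
  </target>
</memory>
"""
dom.attachDeviceFlags(dimm_xml, libvirt.VIR_DOMAIN_AFFECT_LIVE)
conn.close()
```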

Why do resource hot plugging capabilities matter to an enterprise IT organization? The two

Continue reading “Scaling Up On Demand with Red Hat Enterprise Virtualization”

How Badlock Was Discovered and Fixed

Severity analysis of vulnerabilities by experts in the information security industry is rarely based on real code review. In the ‘Badlock’ case, most read our CVE descriptions and built up a score representing the risk each CVE poses to a user. There is nothing wrong with this approach if it is done correctly: CVEs are analyzed in isolation, as if no other issue exists. In the case of ‘Badlock’, however, there were eight CVEs. The difference is that one of them was in a foundational component used by most of the code affected by the remaining seven CVEs. That very specific CVE was

Continue reading “How Badlock Was Discovered and Fixed”

Continuous Delivery / Deployment with OpenShift Enterprise

In our previous posts, we’ve explored the Red Hat container ecosystem, the Red Hat Container Development Kit (CDK), OpenShift as a local deployment and OpenShift in production. In this final post of the series, we’re going to take a look at how a team can take advantage of the advanced features of OpenShift in order to automatically move new versions of applications from development to production — a process known as Continuous Delivery (or Continuous Deployment, depending on the level of automation).
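To make the promotion step concrete: one common pattern in OpenShift 3.x is to retag a tested image from a development project into a production project, and let a DeploymentConfig with an ImageChange trigger roll out the new version automatically. The sketch below simply wraps the oc tag CLI call in Python; the project and image stream names are invented for illustration and are not taken from the posts in this series.

```python
# Illustrative image promotion between projects with "oc tag".
# Project and image stream names (myapp-dev, myapp-prod, myapp) are hypothetical.
import subprocess

def promote(source="myapp-dev/myapp:latest", dest="myapp-prod/myapp:prod"):
    """Tag a tested image into the production project.

    A DeploymentConfig in myapp-prod with an ImageChange trigger on
    myapp:prod will start a new rollout when this tag moves.
    """
    subprocess.run(["oc", "tag", source, dest], check=True)

if __name__ == "__main__":
    promote()
```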

OpenShift supports

Continue reading “Continuous Delivery / Deployment with OpenShift Enterprise”

OpenShift Enterprise in Production

In a previous blog post we took a look at the Red Hat Container Development Kit (CDK) and how it can be used to build and deploy applications within a development environment that closely mimics a production OpenShift cluster. In this post, we’ll take an in-depth look at what a production OpenShift cluster looks like — the individual components, their functions, and how they relate to each other. We’ll also check out how OpenShift supports scaling up and scaling out applications in a production environment.

Continue reading “OpenShift Enterprise in Production”

Steps to Optimize Network Quality of Service in Your Data Center

Virtualization technologies have evolved such that support for multiple networks on a single host is a must-have feature. For example, Red Hat Enterprise Virtualization allows administrators to configure multiple NICs, using bonding where needed, to carry several networks for high throughput or high availability. In this configuration, different networks can be used for connecting virtual machines (using layer 2 Linux bridges) or for other purposes such as host storage access (iSCSI, NFS), migration, display (SPICE, VNC), or virtual machine management. While it is possible to consolidate all of these networks into a single network, separating them into multiple networks enables simplified management, improved security, and an easier way to track errors and downtime.

The aforementioned configuration works well but leaves us with a network bottleneck at the host level. All networks compete for the same queue on the NIC (or bond), and Linux enforces only a trivial quality-of-service queuing discipline, pfifo_fast, which maintains side-by-side bands into which packets are enqueued based on their Type of Service bits or assigned priority. One can easily imagine a case where a single network hogs the outgoing link (e.g., during a migration storm, when many virtual machines are being migrated off the host simultaneously, or when an attacker controls a VM). The consequences of such cases can include lost connectivity to the management engine or loss of storage access for the host.
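The kind of shaping that addresses this can be sketched with standard Linux tooling: replace pfifo_fast with a classful queuing discipline such as HTB and give each network a guaranteed rate plus a ceiling it may borrow up to. The snippet below drives the tc command from Python purely for illustration; the bond device name, link speed, and per-network rates are invented, and in a real Red Hat Enterprise Virtualization deployment this is configured through host network QoS rather than by hand.

```python
# Illustration only: build a simple HTB hierarchy so the management,
# migration, and VM networks each get a guaranteed share of the link.
# Device name and rates are hypothetical; requires root privileges.
import subprocess

DEV = "bond0"

def tc(*args):
    subprocess.run(["tc", *args], check=True)

# Replace the default pfifo_fast root qdisc with HTB; unclassified
# traffic falls into class 1:30.
tc("qdisc", "replace", "dev", DEV, "root", "handle", "1:", "htb", "default", "30")

# Parent class capped at the physical link speed.
tc("class", "add", "dev", DEV, "parent", "1:", "classid", "1:1",
   "htb", "rate", "10gbit")

# Per-network classes: a guaranteed rate, with permission to borrow up to the ceiling.
tc("class", "add", "dev", DEV, "parent", "1:1", "classid", "1:10",
   "htb", "rate", "1gbit", "ceil", "10gbit")    # management network
tc("class", "add", "dev", DEV, "parent", "1:1", "classid", "1:20",
   "htb", "rate", "2gbit", "ceil", "10gbit")    # migration network
tc("class", "add", "dev", DEV, "parent", "1:1", "classid", "1:30",
   "htb", "rate", "7gbit", "ceil", "10gbit")    # VM traffic (default)
```

With a hierarchy like this, a migration storm can still use the full link when it is otherwise idle, but it can no longer starve the management or VM networks.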

A simple solution is to configure

Continue reading “Steps to Optimize Network Quality of Service in Your Data Center”

No Joking: No-cost Red Hat Enterprise Linux is Now Available for Developers

No, last night’s news wasn’t an early April Fool’s Day joke: Red Hat Enterprise Linux is now available through a no-cost developer subscription as part of the Red Hat Developers Program. All that’s needed is an email address to register for the program, and developers then have access not only to Red Hat Enterprise Linux (as part of the Red Hat Enterprise Linux Developer Suite) but also to the entire Red Hat JBoss Middleware portfolio and the Red Hat Container Development Kit (CDK).

Continue reading “No Joking: No-cost Red Hat Enterprise Linux is Now Available for Developers”