Scaling Up On Demand with Red Hat Enterprise Virtualization

One of the most compelling features of Red Hat Enterprise Virtualization 3.6 is the ability to hot plug memory. Red Hat Enterprise Virtualization 3.5 provided the ability to hot plug vCPUs into running virtual machines. Red Hat Enterprise Virtualization 3.6 completes this vision of hot plugging resources on demand.
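
As a taste of how this looks in practice, here is a minimal sketch that requests a memory hot plug through the RHEV Manager (oVirt) REST API. The manager URL, credentials, and VM ID below are placeholders, and the same operation is available from the Administration Portal:

```python
# Minimal sketch: hot plugging memory into a running VM via the
# RHEV Manager (oVirt) REST API. The manager URL, credentials, and
# VM ID are placeholders; adapt them to your environment.
import requests

ENGINE = "https://rhevm.example.com/api"          # assumed manager URL
VM_ID = "00000000-0000-0000-0000-000000000000"    # placeholder VM ID

# Raising the memory element of a running VM requests a hot plug.
new_memory_bytes = 4 * 1024 ** 3  # grow the guest to 4 GiB
body = "<vm><memory>%d</memory></vm>" % new_memory_bytes

resp = requests.put(
    "%s/vms/%s" % (ENGINE, VM_ID),
    data=body,
    headers={"Content-Type": "application/xml"},
    auth=("admin@internal", "password"),
    verify=False,  # point this at the engine CA certificate in real use
)
resp.raise_for_status()
```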

Why do resource hot plugging capabilities matter to an enterprise IT organization? The two

Continue reading “Scaling Up On Demand with Red Hat Enterprise Virtualization”

How Badlock Was Discovered and Fixed

Severity analysis of vulnerabilities by experts from the information security industry is rarely based on real code review. In the ‘Badlock’ case, most read our CVE descriptions and built up a score representing the risk each CVE poses to a user. There is nothing wrong with this approach if it is done correctly: CVEs are analyzed in isolation, as if no other issue exists. In the case of ‘Badlock’ there were eight CVEs. The difference is that one of them was in a foundational component used by most of the code affected by the remaining seven CVEs. That very specific CVE was

Continue reading “How Badlock Was Discovered and Fixed”

Continuous Delivery / Deployment with OpenShift Enterprise

In our previous posts, we’ve explored the Red Hat container ecosystem, the Red Hat Container Development Kit (CDK), OpenShift as a local deployment and OpenShift in production. In this final post of the series, we’re going to take a look at how a team can take advantage of the advanced features of OpenShift in order to automatically move new versions of applications from development to production — a process known as Continuous Delivery (or Continuous Deployment, depending on the level of automation).
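
As a preview of one common pattern, here is a minimal sketch of an image promotion step, assuming hypothetical dev and prod projects and an image stream named myapp. Retagging the tested image is enough to trigger a redeployment when the production deployment config watches that tag:

```python
# Minimal sketch of a Continuous Delivery promotion step: copy the
# image tag that passed testing into the production project. The
# project and image stream names are illustrative placeholders.
import subprocess

def promote(image="myapp", dev_project="dev", prod_project="prod"):
    """Retag the tested dev image so production deploys it."""
    subprocess.check_call([
        "oc", "tag",
        "%s/%s:latest" % (dev_project, image),  # source: tested image
        "%s/%s:prod" % (prod_project, image),   # destination: prod trigger
    ])

if __name__ == "__main__":
    promote()
```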

OpenShift supports

Continue reading “Continuous Delivery / Deployment with OpenShift Enterprise”

OpenShift Enterprise in Production

In a previous blog post we took a look at the Red Hat Container Development Kit (CDK) and how it can be used to build and deploy applications within a development environment that closely mimics a production OpenShift cluster. In this post, we’ll take an in-depth look at what a production OpenShift cluster looks like — the individual components, their functions, and how they relate to each other. We’ll also check out how OpenShift supports scaling up and scaling out applications in a production environment.
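
As a preview, the sketch below shows the scale-out case from the command line, assuming a hypothetical deployment config named myapp; OpenShift layers autoscaling on top of this same replica mechanism:

```python
# Minimal sketch: scaling an application out by adjusting the replica
# count of its deployment config. "myapp" is a placeholder name.
import subprocess

def scale_out(dc="myapp", replicas=5):
    """Ask OpenShift to run the given number of pod replicas."""
    subprocess.check_call([
        "oc", "scale", "dc/%s" % dc,
        "--replicas=%d" % replicas,
    ])

scale_out(replicas=5)
```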

Continue reading “OpenShift Enterprise in Production”

Steps to Optimize Network Quality of Service in Your Data Center

Virtualization technologies have evolved such that support for multiple networks on a single host is a must-have feature. For example, Red Hat Enterprise Virtualization allows administrators to configure multiple NICs, bonded together, to serve several networks with high throughput or high availability. In this configuration, different networks can be used for connecting virtual machines (using layer 2 Linux bridges) or for other purposes such as host storage access (iSCSI, NFS), migration, display (SPICE, VNC), or virtual machine management.  While it is possible to consolidate all of these networks into a single network, separating them into multiple networks enables simplified management, improved security, and an easier way to track errors and downtime.

The aforementioned configuration works great but leaves us with a network bottleneck at the host level. All networks compete for the same transmit queue on the NIC (or on the bond), and Linux enforces only a trivial quality of service queuing discipline, pfifo_fast, which maintains three FIFO bands side by side and enqueues packets based on their Type of Service bits or assigned priority. One can easily imagine a case where a single network hogs the outgoing link (e.g. during a migration storm, where many virtual machines are being migrated off the host simultaneously, or when a malicious VM floods the link). The consequences of such cases can include lost connectivity to the management engine or loss of storage access for the host.
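
To make the problem concrete, here is a minimal sketch of host-level traffic shaping with HTB, assuming a hypothetical bond0 uplink and made-up rate limits. It illustrates the general technique of capping a migration class so it cannot starve management traffic, not necessarily the exact solution the post goes on to describe:

```python
# Minimal illustration of host-level QoS with HTB: cap migration
# traffic so it cannot starve the management network. The device
# name, class IDs, and rates are all placeholders.
import subprocess

DEV = "bond0"  # assumed bonded uplink carrying all logical networks

def tc(*args):
    subprocess.check_call(["tc"] + list(args))

# Replace the default pfifo_fast root qdisc with HTB.
tc("qdisc", "add", "dev", DEV, "root", "handle", "1:", "htb", "default", "30")

# Parent class representing the full capacity of the link.
tc("class", "add", "dev", DEV, "parent", "1:", "classid", "1:1",
   "htb", "rate", "10gbit")

# Management traffic: a guaranteed share that may borrow up to the link rate.
tc("class", "add", "dev", DEV, "parent", "1:1", "classid", "1:10",
   "htb", "rate", "1gbit", "ceil", "10gbit")

# Migration traffic: a generous share, hard-capped below the link rate
# so a migration storm can never consume the entire uplink.
tc("class", "add", "dev", DEV, "parent", "1:1", "classid", "1:20",
   "htb", "rate", "4gbit", "ceil", "8gbit")

# Filters that classify packets into these classes (e.g. by VLAN or
# source address) are omitted for brevity.
```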

A simple solution is to configure

Continue reading “Steps to Optimize Network Quality of Service in Your Data Center”

Red Hat Federation Story: Ipsilon & Keycloak… a “Clash of the Titans”

Some time ago, two different projects were started in the open source community: Ipsilon and Keycloak. These projects were started by groups with different backgrounds and different perspectives. In the beginning, it seemed like these two projects would have very little in common… though both aimed to include

Continue reading “Red Hat Federation Story: Ipsilon & Keycloak… a ‘Clash of the Titans’”

Identity Management and Application Integration

Identity management solutions integrate systems, services, and applications into a single ecosystem that provides authentication, access control, enterprise SSO, identity information, and the policies related to those identities. While I have dedicated time to explaining how to provide these capabilities to Linux systems, it is now time to broaden the scope and talk a little bit about services and applications.

In some ways, services and applications are very similar. They are both usually

Continue reading “Identity Management and Application Integration”

Why Now is the Perfect Time to Adopt Red Hat Enterprise Virtualization

IT decision makers seem to be abuzz with discussions of “next generation” technologies.  In the past three months it has been nearly impossible to hold a conversation where the terms cloud, OpenStack, or (Linux) containers don’t surface.  Hot topics and buzzwords aside, it has become clear (to me) that the right mix of market conditions is causing organizations to express a renewed interest in enterprise virtualization.

Many organizations are now ready to adopt the next generation of server hardware.  The popular Ivy Bridge and Sandy Bridge processor families from Intel are four to five years old, and those who purchased such hardware tend to refresh their equipment every four to five years.  In addition, Intel’s Haswell technology is approaching its third anniversary.  Organizations that lease hardware on a three-year cycle will also be looking at what the next generation of hardware has to offer.

What does a potential wave of hardware refreshes have to do with a renewed interest in enterprise virtualization?  To no one’s surprise

Continue reading “Why Now is the Perfect Time to Adopt Red Hat Enterprise Virtualization”

Connecting the Dots at Linaro Connect

Red Hat has long advocated for the importance of cross-industry IT standards, with the intention of enabling ecosystems with broad industry participation and providing a common basis for innovation. Perhaps even more importantly, these standards can help drive adoption of new technologies within enterprises, pushing the cycle of innovation even further along.

With ARM being one of these emerging ecosystems, we wanted to provide a snapshot of a recent event that highlights some of the standards-based work happening in this growing community: last week’s Linaro Connect conference in Bangkok, Thailand.

Continue reading “Connecting the Dots at Linaro Connect”

Why is Indirect Integration Better?

In last year’s blog series, I covered both direct and indirect Active Directory integration options, but I never explained which approach we actually recommend. Some customers looking at indirect integration saw only the overhead of providing an interim server and the costs related to managing it. To be clear, these costs are real and the overhead does exist. But we still recommend

Continue reading “Why is Indirect Integration Better?”