.NET Core on Red Hat Enterprise Linux

In November 2015, I blogged about the announcement that .NET would be coming to RHEL from the upstream .NET Core project, delivered to enterprise customers and developers both as an RPM and as a Linux container. That was quite a moment for the industry and, quite frankly, for me as well, having participated in the discussions with Microsoft that led to this significant announcement. Since then, we have collaborated closely to make sure this day would actually arrive. Despite the usual challenges that come with a relatively new open source effort, the project was…

Continue reading “.NET Core on Red Hat Enterprise Linux”

Red Hat at DockerCon 16 in Seattle

If you’re heading to DockerCon 16 next week in Seattle, connect with us to see why Fortune 500 organizations trust Red Hat for enterprise deployments. Red Hat subject matter experts will be onsite to walk you through real-world use cases for securely developing, deploying, and managing container-based applications.

Attend the State of Container Security Session

Join two of Red Hat’s Docker contributors as they discuss the state of container security today. Senior Software Engineer Mrunal Patel and Thomas Cameron, Global Evangelist of Emerging Technology, will present on how you can secure your containerized microservices without slowing down development.

Continue reading “Red Hat at DockerCon 16 in Seattle”

In Defense of the Pet Container, Part 1: Prelude – The Only Constant is Complexity

It’s been just over three years since Solomon Hykes presented the world with the (so far) most creative use of the tar command: the Docker project. Not only does the project combine existing container technologies and make them easier to use, but its well-timed introduction drove an unprecedented rate of adoption for a new technology.
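
That quip about tar is only half a joke: at rest, a container image really is a stack of tarballs. As a minimal sketch (the file name is hypothetical), here is one way to peek inside the archive that docker save emits:

    import tarfile

    # `docker save myimage > myimage.tar` writes the image as a tar archive
    # containing one tarball per filesystem layer plus JSON metadata.
    # "myimage.tar" is a hypothetical file name for illustration.
    with tarfile.open("myimage.tar") as image:
        for member in image.getmembers():
            print(member.name)  # layer directories, layer.tar files, manifests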

Did people run containers before the Docker project? Yes, but it was harder to do so. The broader community favored LXC, and Red Hat was working on a libvirt-based model for Red Hat Enterprise Linux. With OpenShift 2, Red Hat had already been running containers in production for several years, both in an online PaaS and on-premises for enterprise customers. The pre-Docker model, however, was fundamentally different from what we are seeing today: rather than enabling completely independent runtimes inside the containers, the approach in…

Continue reading “In Defense of the Pet Container, Part 1: Prelude – The Only Constant is Complexity”

Red Hat Delivers High Performance on Critical Enterprise Workloads with the Latest Intel Xeon E7 v4 Processor Family

Not long ago, Intel introduced a new Xeon processor platform to enable faster computing for the enterprise world. Codenamed Broadwell, this architecture brought additional cores to the chip along with many other improvements, from faster memory support to various security enhancements. As with the three generations of Intel Xeon processors before this one, these benefits extend beyond simple increases in transistor counts or the number of cores within each processor.

Today, Intel launched the Intel Xeon E7 v4 processor family, a high-end, enterprise-focused class of processors based on the Broadwell architecture and targeted at large systems with four or more CPUs. Accompanying the launch are several new world records on industry-standard benchmarks; this is where features like increased memory capacity and larger on-chip caches benefit overall system performance, resulting in the highest reported scores to date. Launches like this one typically send a ripple of innovation throughout Red Hat’s partner ecosystem in the form of new and improved performance results. Supporting these partners is of paramount importance to Red Hat and, as a result, Red Hat Enterprise Linux is often selected for these ongoing benchmarking efforts.

Here is how Red Hat Enterprise Linux scored this time:

Continue reading “Red Hat Delivers High Performance on Critical Enterprise Workloads with the Latest Intel Xeon E7 v4 Processor Family”

Architecting Containers Part 5: Building a Secure and Manageable Container Software Supply Chain

Background

In Architecting Containers Part 4: Workload Characteristics and Candidates for Containerization we investigated the level of effort necessary to containerize different types of workloads. In this article I am going to address several challenges facing organizations that are deploying containers: how to patch container images, and how to determine which teams are responsible for them. Should container images be controlled by development or by operations?
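
To make the patching question concrete, here is a minimal sketch of one way to detect that an application image was built from an outdated base image. It assumes Docker 1.10+ image metadata, the registry and image names are hypothetical, and it is an illustration rather than the process this article prescribes:

    import json
    import subprocess

    def image_layers(image):
        """Return the layer digests of a local image via `docker inspect`."""
        out = subprocess.check_output(["docker", "inspect", image])
        return json.loads(out)[0]["RootFS"]["Layers"]

    # Hypothetical image names. If the application image no longer starts
    # with the same layers as the latest base image, the base has been
    # patched since the application was built, so a rebuild is due.
    base = image_layers("registry.example.com/rhel7:latest")
    app = image_layers("registry.example.com/myapp:latest")
    if app[:len(base)] != base:
        print("myapp predates the current base image -- rebuild and redeploy")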

In addition, we are going to take a look at…

Continue reading “Architecting Containers Part 5: Building a Secure and Manageable Container Software Supply Chain”

Architecting Containers Part 4: Workload Characteristics and Candidates for Containerization

Many development and operations teams are looking for guidelines to help them determine which applications can be containerized and how difficult the effort may be. In Architecting Containers Part 3: How the User Space Affects Your Applications we took an in-depth look at how the user space affects applications for both developers and operations. In this article we are going to take a look at workload characteristics and the level of effort required to containerize different types of applications.

The goal of this article is to provide guidance based on current capabilities and best practices within…

Continue reading “Architecting Containers Part 4: Workload Characteristics and Candidates for Containerization”

How Badlock Was Discovered and Fixed

Severity analysis of vulnerabilities by experts in the information security industry is rarely based on real code review. In the ‘Badlock’ case, most read our CVE descriptions and built up a score representing the risk each CVE poses to a user. There is nothing wrong with this approach if it is done correctly: CVEs are analyzed in isolation, as if no other issue exists. In the case of Badlock, there were eight CVEs. The difference is that one of them was in a foundational component used by most of the code affected by the remaining seven. That very specific CVE was…

Continue reading “How Badlock Was Discovered and Fixed”

Continuous Delivery / Deployment with OpenShift Enterprise

In our previous posts, we’ve explored the Red Hat container ecosystem, the Red Hat Container Development Kit (CDK), OpenShift as a local deployment, and OpenShift in production. In this final post of the series, we’re going to take a look at how a team can take advantage of the advanced features of OpenShift in order to automatically move new versions of applications from development to production — a process known as Continuous Delivery (or Continuous Deployment, depending on the level of automation).
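
As a rough sketch of what an automated promotion step can look like, the snippet below re-tags a tested image so that an ImageChange trigger rolls it out. The project and tag names are hypothetical, and this is just one of the mechanisms OpenShift offers:

    import subprocess

    def oc(*args):
        """Run an `oc` CLI command against the currently logged-in cluster."""
        subprocess.run(["oc", *args], check=True)

    # Promote the image that passed testing in the dev project by re-tagging
    # it into prod. A deployment config in prod with an ImageChange trigger
    # on this tag will then roll out the new version automatically.
    oc("tag", "dev/myapp:latest", "prod/myapp:promoted")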

OpenShift supports…

Continue reading “Continuous Delivery / Deployment with OpenShift Enterprise”

OpenShift Enterprise in Production

In a previous blog post we took a look at the Red Hat Container Development Kit (CDK) and how it can be used to build and deploy applications within a development environment that closely mimics a production OpenShift cluster. In this post, we’ll take an in-depth look at what a production OpenShift cluster looks like — the individual components, their functions, and how they relate to each other. We’ll also check out how OpenShift supports scaling up and scaling out applications in a production environment.
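
As a small example of the scale-out side (the deployment config name is hypothetical), scaling an application out is a one-line request to the cluster:

    import subprocess

    # Scale out: ask OpenShift to run five replicas of the application pod;
    # the platform schedules them across nodes and the service load-balances
    # traffic among them. "dc/myapp" is a hypothetical deployment config.
    subprocess.run(["oc", "scale", "dc/myapp", "--replicas=5"], check=True)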

Continue reading “OpenShift Enterprise in Production”

Steps to Optimize Network Quality of Service in Your Data Center

Virtualization technologies have evolved to the point where support for multiple networks on a single host is a must-have feature. For example, Red Hat Enterprise Virtualization allows administrators to configure multiple NICs with bonding across several networks to provide high throughput or high availability. In this configuration, different networks can be used for connecting virtual machines (via layer 2 Linux bridges) or for purposes such as host storage access (iSCSI, NFS), migration, display (SPICE, VNC), or virtual machine management. While it is possible to consolidate all of this traffic onto a single network, separating it into multiple networks enables simplified management, improved security, and an easier way to track errors and downtime.

The aforementioned configuration works great but leaves us with a network bottleneck at the host level. All networks compete for the same queue on the NIC (or on the bond), and Linux enforces only a trivial quality-of-service queuing discipline, pfifo_fast, which maintains three bands side by side and enqueues packets into them based on their Type of Service bits or assigned priority. One can easily imagine a single network hogging the outgoing link (e.g., during a migration storm, when many virtual machines are being migrated off the host simultaneously, or when an attacker VM is present). The consequences of such cases can include lost connectivity to the management engine or lost storage access for the host.
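
Before reading on, it may help to see the general shape of a fix. One common approach, sketched below under assumptions (the interface name, link speed, and rate split are all hypothetical, and the full post describes the recommended configuration), is to replace pfifo_fast with a classful discipline such as HTB so that each network gets a guaranteed share of the link:

    import subprocess

    DEV = "bond0"  # hypothetical bonded interface carrying all networks

    def tc(*args):
        """Run a tc(8) command (requires root) and fail loudly if rejected."""
        subprocess.run(["tc", *args], check=True)

    # Replace the default pfifo_fast root qdisc with HTB on a 10 Gbit link.
    tc("qdisc", "replace", "dev", DEV, "root", "handle", "1:", "htb", "default", "30")
    tc("class", "add", "dev", DEV, "parent", "1:", "classid", "1:1", "htb", "rate", "10gbit")
    # Management traffic: a small guaranteed slice that may borrow up to line rate.
    tc("class", "add", "dev", DEV, "parent", "1:1", "classid", "1:10", "htb", "rate", "1gbit", "ceil", "10gbit")
    # Storage traffic: a larger guarantee so migrations cannot starve it.
    tc("class", "add", "dev", DEV, "parent", "1:1", "classid", "1:20", "htb", "rate", "4gbit", "ceil", "10gbit")
    # Everything else (VM and migration traffic) lands in the default class.
    tc("class", "add", "dev", DEV, "parent", "1:1", "classid", "1:30", "htb", "rate", "5gbit", "ceil", "10gbit")
    # Filters (omitted here) would classify packets into 1:10 and 1:20,
    # for example by VLAN or firewall mark.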

A simple solution is to configure…

Continue reading “Steps to Optimize Network Quality of Service in Your Data Center”