Not long ago, Intel introduced a new Xeon processor platform to enable faster computing for the enterprise world. Codenamed Broadwell, this architecture brought additional cores to the chip along with many other improvements, from faster memory support to various security enhancements. As with the three generations of Intel Xeon processors before this one, the benefits extend beyond simple increases in transistor counts or the number of cores within each processor.
Today, Intel launched the Intel Xeon E7 v4 processor family, a high-end, enterprise-focused class of processors based on the Broadwell architecture and targeted at large systems with four or more CPUs. Accompanying the launch are several new world-record results on industry-standard benchmarks; this is where things like increased memory capacity and larger on-chip caches benefit overall system performance, yielding the highest reported scores to date. Launches like the Xeon E7 v4, along with other announcements like it, typically send a ripple of innovation throughout Red Hat’s partner ecosystem in the form of new and improved performance results. The ability to support these partners is of paramount importance to Red Hat and, as a result, Red Hat Enterprise Linux is often selected for these ongoing benchmarking efforts.
Here is how Red Hat Enterprise Linux scored this time:
Continue reading “Red Hat Delivers High Performance on Critical Enterprise Workloads with the Latest Intel Xeon E7 v4 Processor Family”
The recent release of Red Hat Cloud Suite marked a new milestone for Red Hat and our customers. For one, it is the first in what will become a family of suites. Second, it enables enterprise IT to transform their application development and operations into an agile innovation center based on hybrid cloud and DevOps technologies. Curating a broad set of open source technologies, Red Hat Cloud Suite offers a turnkey cloud solution with a container-based app-development platform, private-cloud infrastructure, and a common management framework. Specifically, Red Hat Cloud Suite includes
Continue reading “Cloud Solutions Made Simple”
This year’s SAPPHIRE NOW + ASUG Annual Conference in Orlando, Florida, from May 17-19, 2016, is packed with Red Hat events – happening at our booth, the SAP Mini Theater, and the SAP Demo Theater. We’re also presenting elsewhere at the show, at the Intel, Hitachi Data Systems, HP Enterprise, and Lenovo booths.
We look forward to meeting our community, showcasing our solutions, and highlighting the top companies we’ve been working with. We have excellent customers and partners and are eager to tell you their stories. Plus we’re giving away
Continue reading “Put Your ‘Red Hat’ on at SAPPHIRE 2016!”
Many development and operations teams are looking for guidelines to help them determine which applications can be containerized and how difficult it may be. In Architecting Containers Part 3: How the User Space Affects Your Applications, we took an in-depth look at how the user space affects applications for both developers and operations. In this article, we are going to look at workload characteristics and the level of effort required to containerize different types of applications.
The goal of this article is to provide guidance based on current capabilities and best practices within
Continue reading “Architecting Containers Part 4: Workload Characteristics and Candidates for Containerization”
One of the most compelling features of Red Hat Enterprise Virtualization 3.6 is the ability to hot plug memory. Red Hat Enterprise Virtualization 3.5 provided the ability to hot plug vCPUs into running virtual machines. Red Hat Enterprise Virtualization 3.6 completes this vision of hot plugging resources on demand.
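To make the mechanism concrete, here is a minimal sketch of what memory and vCPU hot plug look like at the hypervisor level, using the libvirt Python bindings. It is an illustration under assumptions, not the Red Hat Enterprise Virtualization workflow itself (which drives this through the Administration Portal and REST API); the guest name "demo-vm" and the sizes are hypothetical.

```python
#!/usr/bin/env python3
"""Illustrative sketch only: hot plug memory and vCPUs into a running KVM
guest via the libvirt Python bindings. Domain name and sizes are hypothetical
examples; RHEV exposes the same capability through its own management layer."""

import libvirt

# A memory DIMM device to attach. Note: the guest's domain XML must already
# define <maxMemory> and a NUMA topology for memory hot plug to be accepted.
DIMM_XML = """
<memory model='dimm'>
  <target>
    <size unit='MiB'>1024</size>
    <node>0</node>
  </target>
</memory>
"""

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("demo-vm")  # hypothetical running guest

# Hot plug 1 GiB of memory into the live guest.
dom.attachDeviceFlags(DIMM_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)

# Hot plug vCPUs the same way: raise the online vCPU count while running.
dom.setVcpusFlags(4, libvirt.VIR_DOMAIN_AFFECT_LIVE)

conn.close()
```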
Why do resource hot plugging capabilities matter to an enterprise IT organization? The two
Continue reading “Scaling Up On Demand with Red Hat Enterprise Virtualization”
Severity analysis of vulnerabilities by experts from the information security industry is rarely based on real code review. In the ‘Badlock’ case, most analysts read our CVE descriptions and built up a score representing the risk each CVE poses to a user. There is nothing wrong with this approach if it is done correctly: CVEs are analyzed in isolation, as if no other issue exists. In the case of ‘Badlock’, there were eight CVEs. The difference is that one of them was in a foundational component used by most of the code affected by the remaining seven CVEs. That very specific CVE was
Continue reading “How Badlock Was Discovered and Fixed”
In our previous posts, we’ve explored the Red Hat container ecosystem, the Red Hat Container Development Kit (CDK), OpenShift as a local deployment, and OpenShift in production. In this final post of the series, we’re going to take a look at how a team can take advantage of the advanced features of OpenShift in order to automatically move new versions of applications from development to production — a process known as Continuous Delivery (or Continuous Deployment, depending on the level of automation).
Continue reading “Continuous Delivery / Deployment with OpenShift Enterprise”
In a previous blog post we took a look at the Red Hat Container Development Kit (CDK) and how it can be used to build and deploy applications within a development environment that closely mimics a production OpenShift cluster. In this post, we’ll take an in-depth look at what a production OpenShift cluster looks like — the individual components, their functions, and how they relate to each other. We’ll also check out how OpenShift supports scaling up and scaling out applications in a production environment.
Continue reading “OpenShift Enterprise in Production”
Virtualization technologies have evolved such that support for multiple networks on a single host is a must-have feature. For example, Red Hat Enterprise Virtualization allows administrators to configure multiple NICs, bonded together, to carry several networks for high throughput or high availability. In this configuration, different networks can be used for connecting virtual machines (using layer 2 Linux bridges) or for other uses such as host storage access (iSCSI, NFS), migration, display (SPICE, VNC), or virtual machine management. While it is possible to consolidate all of these networks into a single network, separating them into multiple networks simplifies management, improves security, and makes it easier to track errors and downtime.
The aforementioned configuration works great but leaves us with a network bottleneck at the host level. All networks compete for the same queue on the NIC (or bond), and Linux enforces only a trivial quality-of-service queuing discipline, pfifo_fast, which maintains a few bands side by side and enqueues packets into them based on their Type of Service bits or assigned priority. One can easily imagine a case where a single network hogs the outgoing link (e.g. during a migration storm, when many virtual machines are being migrated off the host simultaneously, or when an attacker VM is present). The consequences of such cases can include lost connectivity to the management engine or lost storage access for the host.
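As a concrete illustration of the kind of traffic shaping involved, the sketch below replaces the pfifo_fast default with an HTB hierarchy so that no single class of traffic can monopolize the link. It shells out to tc(8) from Python; the interface name, rates, and subnet are hypothetical placeholders, not the configuration the article itself goes on to describe.

```python
#!/usr/bin/env python3
"""Illustrative sketch only: replace the default pfifo_fast qdisc with an HTB
hierarchy so one traffic class (e.g. migration) cannot starve the others.
The device name, rates, and subnet below are hypothetical examples."""

import subprocess

DEV = "bond0"  # hypothetical bonded interface carrying all host networks


def tc(*args):
    """Run a single tc(8) command and fail loudly if it errors."""
    subprocess.run(["tc", *args], check=True)


# Start from a clean slate (ignore the error if no qdisc is installed yet).
subprocess.run(["tc", "qdisc", "del", "dev", DEV, "root"], check=False)

# Root HTB qdisc; unclassified traffic falls into class 1:30.
tc("qdisc", "add", "dev", DEV, "root", "handle", "1:", "htb", "default", "30")

# Parent class capped at the link rate, with children that can borrow from it.
tc("class", "add", "dev", DEV, "parent", "1:", "classid", "1:1",
   "htb", "rate", "10gbit")
tc("class", "add", "dev", DEV, "parent", "1:1", "classid", "1:10",
   "htb", "rate", "1gbit", "ceil", "10gbit")   # management traffic
tc("class", "add", "dev", DEV, "parent", "1:1", "classid", "1:20",
   "htb", "rate", "4gbit", "ceil", "10gbit")   # storage / migration traffic
tc("class", "add", "dev", DEV, "parent", "1:1", "classid", "1:30",
   "htb", "rate", "5gbit", "ceil", "10gbit")   # VM traffic (default class)

# Example filter: steer management traffic (hypothetical subnet) into 1:10.
tc("filter", "add", "dev", DEV, "parent", "1:", "protocol", "ip", "prio", "1",
   "u32", "match", "ip", "dst", "192.0.2.0/24", "flowid", "1:10")
```

With classes like these in place, a migration storm can still borrow idle bandwidth, but it can no longer starve the management or storage networks.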
A simple solution is to configure
Continue reading “Steps to Optimize Network Quality of Service in Your Data Center”
Some time ago, two different projects were started in the open source community: Ipsilon and Keycloak. These projects were started by groups with different backgrounds and different perspectives. In the beginning, it seemed like these two projects would have very little in common… though both aimed to include
Continue reading “Red Hat Federation Story: Ipsilon & Keycloak… a “Clash of the Titans””