Supercomputing & Red Hat: What’s Happening at ISC 2017?

Twice a year, the most prominent supercomputing sites in the world get to showcase their capabilities and compete for a Top500 spot. With Linux dominating the list, Red Hat is paying close attention to the latest changes that will be announced at the International Supercomputing (ISC) show in Frankfurt, Germany, from June 18 to June 22, 2017.

While supercomputers of the past were often proprietary, the trend of building them out of commodity components has dominated the landscape over the past two decades. But recently the definition of “commodity” in HPC has been morphing. Traditional solutions are routinely augmented by various acceleration technologies; cache-coherent interconnects are becoming mainstream; and boutique hardware and software technologies previously reserved for highly specialized solutions are being adopted by major HPC sites at scale.

Developing new highly scalable applications, and adapting existing ones, to take advantage of these technological advances across multiple deployment domains is the greatest challenge facing HPC sites. This is where the operating system can provide…

Continue reading “Supercomputing & Red Hat: What’s Happening at ISC 2017?”

Red Hat Virtualization Reporting Evolution: Transitioning to Metrics Store

As Red Hat engineers, we are always looking to incorporate features that empower administrators and decision makers. Our goal is to enable them to be proactive and efficient, and to help them maximize the value of their infrastructure.

To this end, we are currently working on significantly improving the reporting and metrics API in Red Hat Virtualization Manager, our management platform for virtualized resources. Until recently, Red Hat Virtualization relied on native reports and data warehouse engines to provide…

Continue reading “Red Hat Virtualization Reporting Evolution: Transitioning to Metrics Store”

Microsoft, Red Hat, and HPE Collaboration Delivers Choice & Value to Enterprise Customers

In the world of heterogeneous data centers, having multiple operating systems running on different hardware platforms (and architectures) is the norm. Even traditional applications and databases are being migrated or abstracted using Java and other interpreted languages to minimize the impact on end users should they decide to run on a different platform.

Consider the common scenario where you have both Windows and Linux running in the data center and you need your Linux application to talk to Microsoft SQL Server and get some existing data from it. Your application would need to connect to the Windows server that is running the SQL Server database using one of many available APIs and request information.
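
To make that scenario concrete, here is a minimal sketch of such a connection from a Python application using the pyodbc module. The driver name is the common Microsoft ODBC driver, and the server name, database, credentials, and query are hypothetical placeholders rather than details from the original post.

```python
import pyodbc

# Hypothetical connection details -- replace with your own environment.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=winsql01.example.com;"  # host running SQL Server (Windows or Linux)
    "DATABASE=inventory;"
    "UID=appuser;PWD=secret"
)

# Request some existing data and hand it back to the application.
cursor = conn.cursor()
cursor.execute("SELECT item, quantity FROM stock WHERE quantity > ?", 0)
for row in cursor.fetchall():
    print(row.item, row.quantity)

conn.close()
```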

While that may sound trivial, in reality you need to know where that system is located, authenticate your application against it, and pay the penalty of traversing one or more networks to get the data back – all while the user is waiting. This, in fact, was “the way of the world” before Microsoft announced their intent to port Microsoft SQL Server to Linux in March of 2016. Today, however, you have a choice of having your applications connect to a Microsoft SQL Server that runs on either Windows or Linux…

Continue reading “Microsoft, Red Hat, and HPE Collaboration Delivers Choice & Value to Enterprise Customers”

Red Hat Enterprise Linux Across Architectures: Everything Works Out of the Box

Since the Red Hat Enterprise Linux Server for ARM Development Preview 7.3 became available, I’ve been wanting to try it out to see how the existing code for x86_64 systems works on the 64-bit ARM architecture (a.k.a. aarch64).
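
As a trivial first check (a sketch of my own, not code from the preview), a few lines of Python report the details that most often differ across such ports; on the ARM preview this prints aarch64 where an x86_64 box prints x86_64:

```python
import platform
import struct
import sys

# Machine architecture string, e.g. "x86_64" or "aarch64".
print("machine:", platform.machine())
# Pointer width and byte order -- common sources of porting surprises.
print("pointer bits:", struct.calcsize("P") * 8)
print("byte order:", sys.byteorder)
```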

Going in, I was a bit apprehensive that some kind of heavy lifting would be needed to get things working on the ARM platform. My experience with cross-architecture ports with other distros (before I joined Red Hat) indicated…

Continue reading “Red Hat Enterprise Linux Across Architectures: Everything Works Out of the Box”

PCI Series: Requirement 10 – Track and Monitor All Access to Network Resources and Cardholder Data

This is my last post dedicated to the use of Identity Management (IdM) and related technologies to address the Payment Card Industry Data Security Standard (PCI DSS). This specific post is related to requirement ten (i.e. the requirement to track and monitor all access to network resources and cardholder data). The outline and mapping of individual articles to the requirements can be found in the overarching post that started the series.

Requirement ten focuses on audit and monitoring. Many components of an IdM-based solution, including client components like…
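
As a purely illustrative aside (not taken from the post), the “track and monitor” theme can be seen in something as simple as scanning the Linux audit log for authentication events. The log path is the auditd default and the record types are standard audit message types:

```python
from collections import Counter

# Minimal sketch: count authentication-related records in the audit log.
# Assumes the default auditd log location; run with sufficient privileges.
AUDIT_LOG = "/var/log/audit/audit.log"
INTERESTING = ("type=USER_AUTH", "type=USER_LOGIN", "type=USER_LOGOUT")

counts = Counter()
with open(AUDIT_LOG) as log:
    for line in log:
        for record_type in INTERESTING:
            if record_type in line:
                counts[record_type] += 1

for record_type, count in counts.most_common():
    print(record_type, count)
```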

Continue reading “PCI Series: Requirement 10 – Track and Monitor All Access to Network Resources and Cardholder Data”

Digital Foundations – Challenges CIOs Must Embrace

When building anything substantial, such as a house or a bridge, you start by laying down a solid foundation. This aspect of building brick by brick does not change when you move from traditional construction to application development and architecting your supporting infrastructure. Throw in cloud terminology and you might think that the principles of a solid foundation become a bit flighty, but nothing could be further from the truth.

When looking to manage an organization’s journey into its digital future, CIOs are dealing with a lot of challenges. The challenges they face on the road to digital transformation can be daunting at first glance, but they must be embraced to properly navigate the road to success.

Digital Foundations

In this first article, let’s take a look at the challenges CIOs must embrace before diving into how to…

Continue reading “Digital Foundations – Challenges CIOs Must Embrace”

PCI Series: Requirement 8 – Identify and Authenticate Access to System Components

This post continues my series dedicated to the use of Identity Management (IdM) and related technologies to address the Payment Card Industry Data Security Standard (PCI DSS). This specific post is related to requirement eight (i.e. the requirement to identify and authenticate access to system components). The outline and mapping of individual articles to requirements can be found in the overarching post that started the series.

Requirement eight is directly related to IdM, which can be used to address most of the requirements in this section. IdM stores user accounts, provides user account life-cycle management…
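
To give a flavor of that life-cycle management, here is a hypothetical sketch using the Python API of FreeIPA, the upstream project behind IdM; the user and attribute values are invented, and it assumes a configured IdM client with a valid Kerberos ticket:

```python
from ipalib import api

# Connect to the IdM (FreeIPA) server as the current Kerberos identity.
api.bootstrap(context="cli")
api.finalize()
api.Backend.rpcclient.connect()

# Create a user account, then disable it at the end of its life cycle.
api.Command.user_add("jdoe", givenname="John", sn="Doe")
api.Command.user_disable("jdoe")
```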

Continue reading “PCI Series: Requirement 8 – Identify and Authenticate Access to System Components”

PCI Series: Requirement 7 – Restrict Access to Cardholder Data by Business Need to Know

This is my sixth post dedicated to the use of Identity Management (IdM) and related technologies to address the Payment Card Industry Data Security Standard (PCI DSS). This specific post is related to requirement seven (i.e. the requirement to restrict access to cardholder data by business need to know). The outline and mapping of individual articles to the requirements can be found in the overarching post that started the series.

Section 7 of the PCI DSS standard talks about access control and limiting the privileges of administrative accounts. IdM can play a big role in addressing these requirements. IdM provides several key features that are related to access control and privileged account management. The first one is…
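
As one hedged illustration (a sketch against FreeIPA’s Python API, with invented rule, group, and host-group names), IdM’s host-based access control can limit a group of administrators to exactly the hosts and services their role requires:

```python
from ipalib import api

# Connect to the IdM (FreeIPA) server as the current Kerberos identity.
api.bootstrap(context="cli")
api.finalize()
api.Backend.rpcclient.connect()

# Allow only cardholder-data admins to reach the payment hosts over SSH.
api.Command.hbacrule_add("chd_admins_payment")
api.Command.hbacrule_add_user("chd_admins_payment", group="chd-admins")
api.Command.hbacrule_add_host("chd_admins_payment", hostgroup="payment-servers")
api.Command.hbacrule_add_service("chd_admins_payment", hbacsvc="sshd")
```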

Continue reading “PCI Series: Requirement 7 – Restrict Access to Cardholder Data by Business Need to Know”

Now Available: Red Hat Certificate System 9.1 & Red Hat Directory Server 10.1

Today we are pleased to announce the release of Red Hat Certificate System 9.1 and Red Hat Directory Server 10.1, both supported on Red Hat Enterprise Linux 7.3.

Red Hat Certificate System, based on the open source PKI capabilities of the Dogtag Certificate System, is designed to provide Certificate Life Cycle Management (i.e. to issue, renew, suspend, revoke, archive/recover, and manage the single and dual-key X.509v3 certificates needed to handle strong authentication, single sign-on, and secure communications).
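
For a sense of where that life cycle begins, here is a minimal, hypothetical sketch that generates a key pair and a certificate signing request a CA such as Certificate System could then issue against. It uses the general-purpose Python cryptography package rather than a Certificate System API, and the host name is a placeholder:

```python
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# Generate an RSA key pair and a CSR for a hypothetical host name.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, "app01.example.com"),
    ]))
    .sign(key, hashes.SHA256())
)

# The PEM-encoded request is what gets submitted to the CA for issuance.
print(csr.public_bytes(serialization.Encoding.PEM).decode())
```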

Red Hat Directory Server, based on the 389 Directory Server project, is an open source LDAP-compliant server that centralizes application settings, user profiles, group data, policies, and access control information in a network-based registry. Red Hat Directory Server simplifies user management by eliminating data redundancy and automating data maintenance. It also improves security, enabling administrators to store policies and access control information in the directory for a single authentication source across enterprise and extranet applications.
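
As a minimal illustration of that registry in action (a hypothetical sketch; the server name, bind credentials, and search base are placeholders), an application can look up a user profile over LDAP with the Python ldap3 module:

```python
from ldap3 import Server, Connection

# Hypothetical directory server and credentials.
server = Server("ldap://ds.example.com")
conn = Connection(
    server,
    user="cn=Directory Manager",
    password="secret",
    auto_bind=True,
)

# Fetch one user's profile attributes from the central registry.
conn.search(
    "ou=People,dc=example,dc=com",
    "(uid=jdoe)",
    attributes=["cn", "mail", "memberOf"],
)
for entry in conn.entries:
    print(entry.entry_dn, entry.cn, entry.mail)

conn.unbind()
```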

What’s New in Red Hat Certificate System 9.1

Certificate System 9.1 has introduced…

Continue reading “Now Available: Red Hat Certificate System 9.1 & Red Hat Directory Server 10.1”