Supercomputing & Red Hat: What’s Happening at ISC 2017?

Twice a year, the most prominent supercomputing sites in the world get to showcase their capabilities and compete for a spot on the Top500 list. With Linux dominating the list, Red Hat is paying close attention to the latest changes that will be announced at the International Supercomputing (ISC) show in Frankfurt, Germany, from June 18 to June 22, 2017.

While supercomputers of the past were often proprietary, the trend of building them out of commodity components has dominated the landscape over the past two decades. But recently the definition of “commodity” in HPC has been morphing. Traditional solutions are routinely augmented by various acceleration technologies, cache-coherent interconnects are becoming mainstream, and boutique hardware and software technologies previously reserved for highly specialized solutions are being adopted by major HPC sites at scale.

Developing new highly scalable applications, and adapting existing ones, to take advantage of these technological advances across multiple deployment domains is the greatest challenge facing HPC sites. This is where the operating system can provide

Continue reading “Supercomputing & Red Hat: What’s Happening at ISC 2017?”

Microsoft, Red Hat, and HPE Collaboration Delivers Choice & Value to Enterprise Customers

In the world of heterogeneous data centers, having multiple operating systems running on different hardware platforms (and architectures) is the norm. Even traditional applications and databases are being migrated or abstracted using Java and other interpreted languages to minimize the impact on the end user if they decide to run on a different platform.

Consider the common scenario where you have both Windows and Linux running in the data center and you need your Linux application to talk to Microsoft SQL Server and get some existing data from it. Your application would need to connect to the Windows server that is running the SQL Server database using one of many available APIs and request information.

While that may sound trivial, in reality you need to: know where that system is located, authenticate your application against it, and pay the penalty of traversing one or more networks to get the data back – all while the user is waiting. This, in fact, was “the way of the world” before Microsoft announced their intent to port MS SQL Server to Linux in March of 2016. Today, however, you have a choice of having your applications connect to a Microsoft SQL Server that runs on either Windows or Linux
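To make the scenario concrete: with SQL Server available on Linux, the application-side code is the same regardless of which operating system hosts the database. Below is a minimal sketch of such a connection using Python with pyodbc and Microsoft’s ODBC driver; the server name, database, credentials, and query are hypothetical placeholders, not details from the post.

```python
# Minimal sketch: a Linux application querying Microsoft SQL Server via ODBC.
# Assumes the Microsoft ODBC driver and the pyodbc package are installed.
# Server, database, credentials, and table names are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sqlserver.example.com;"  # could be a Windows or a Linux host
    "DATABASE=SalesDB;"
    "UID=appuser;PWD=secret"
)

cursor = conn.cursor()
cursor.execute("SELECT TOP 5 id, name FROM customers")
for row in cursor.fetchall():
    print(row.id, row.name)

conn.close()
```

The same connection string works whether SQL Server runs on Windows or on Linux; only the SERVER value changes.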

Continue reading “Microsoft, Red Hat, and HPE Collaboration Delivers Choice & Value to Enterprise Customers”

Red Hat Enterprise Linux Across Architectures: Everything Works Out of the Box

Since the Red Hat Enterprise Linux Server for ARM Development Preview 7.3 became available, I’ve been wanting to try it out to see how the existing code for x86_64 systems works on the 64-bit ARM architecture (a.k.a. aarch64).

Going in, I was a bit apprehensive that some kind of heavy lifting would be needed to get things working on the ARM platform. My experience with cross-architecture ports on other distros (before I joined Red Hat) indicated

Continue reading “Red Hat Enterprise Linux Across Architectures: Everything Works Out of the Box”

PCI Series: Requirement 10 – Track and Monitor All Access to Network Resources and Cardholder Data

This is my last post dedicated to the use of Identity Management (IdM) and related technologies to address the Payment Card Industry Data Security Standard (PCI DSS). This specific post is related to requirement ten (i.e. the requirement to track and monitor all access to network resources and cardholder data). The outline and mapping of individual articles to the requirements can be found in the overarching post that started the series.

Requirement ten focuses on audit and monitoring. Many components of an IdM-based solution, including client components like

Continue reading “PCI Series: Requirement 10 – Track and Monitor All Access to Network Resources and Cardholder Data”

Arm in Arm: Explore Enterprise Server Options at ARM’s Annual Technical Conference

If you have ever wanted to learn about Red Hat’s involvement in the ARM server ecosystem, and are in the San Francisco Bay Area, this week may be a perfect opportunity. Red Hat will be exhibiting at ARM TechCon, ARM Holdings’ premier yearly show at the Santa Clara Convention Center. Attendees will be presented with a variety of great technical sessions and training topics, along with expert keynotes, solutions-based Expo Theater sessions, and an expo floor filled with new and emerging technologies for the datacenter. Note that the expo floor can be accessed with the free

Continue reading “Arm in Arm: Explore Enterprise Server Options at ARM’s Annual Technical Conference”

ARMing IoT with Linaro LITE

Linaro has announced a new project focused on IoT – LITE, or Linaro IoT and Embedded. This project will focus on developing core technology to be used in IoT devices and gateways.

Linaro is a consortium focused on the Linux ecosystem for ARM-based systems (see www.linaro.org for details). Much of their work to date has been focused on Android phones and tablets. Active development efforts include server and networking as well as Digital Home; the Digital Home project focuses on set-top boxes and home gateways. Linaro’s goal is to avoid fragmentation of the ARM ecosystem by providing a common foundation that can be used to build a wide range of value-added applications.

LITE extends existing Linaro projects by addressing both

Continue reading “ARMing IoT with Linaro LITE”

From Checkpoint/Restore to Container Migration

The concept of saving (i.e. checkpointing/dumping) the state of a process at a certain point in time, so that it may later be used to restore/restart the process to exactly that state, has existed for many years. One of the most prominent motivations to develop and support checkpoint/restore functionality was to provide improved fault tolerance. For example, checkpoint/restore allows processes to be restored from previously created checkpoints if, for one reason or another, they had been aborted.
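As a concrete (if simplified) illustration, CRIU (Checkpoint/Restore In Userspace) exposes this functionality through a command-line tool. The Python sketch below wraps its dump and restore commands; the PID and image directory are hypothetical, and it assumes the criu tool is installed and run with sufficient privileges.

```python
# Sketch: checkpointing a running process with CRIU and restoring it later.
# Assumes the criu tool is installed and that this runs with root privileges.
# The PID and image directory below are placeholders.
import subprocess

def checkpoint(pid: int, image_dir: str) -> None:
    # Dump the process tree rooted at `pid` into `image_dir`.
    # --shell-job allows dumping a process attached to a terminal.
    subprocess.run(
        ["criu", "dump", "-t", str(pid), "-D", image_dir, "--shell-job"],
        check=True,
    )

def restore(image_dir: str) -> None:
    # Recreate the process, in its saved state, from the dumped images.
    subprocess.run(
        ["criu", "restore", "-D", image_dir, "--shell-job"],
        check=True,
    )

checkpoint(1234, "/tmp/ckpt")  # hypothetical PID
restore("/tmp/ckpt")
```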

Over the years there have been several different implementations of checkpoint/restore for Linux. Existing implementations differ in terms of the level (of the operating system) at which they operate; the lowest-level approaches focus on implementing checkpoint/restore directly in the kernel, while other, higher-level approaches implement checkpoint/restore completely in user space. While it would be difficult to unearth each and every approach and implementation, it is likely fair to

Continue reading “From Checkpoint/Restore to Container Migration”

PCI Series: Requirement 3 – Protect Stored Cardholder Data

Welcome to another post dedicated to the use of Identity Management (IdM) and related technologies in addressing the Payment Card Industry Data Security Standard (PCI DSS). This specific post is related to requirement three (i.e. the requirement to protect stored cardholder data). In case you’re new to the series, the outline and mapping of individual articles to the requirements can be found in the overarching post that started the series.

Section three of the PCI DSS standard talks about storing cardholder data in a secure way. One of the technologies that can be used for secure storage of cardholder data is

Continue reading “PCI Series: Requirement 3 – Protect Stored Cardholder Data”

PCI Series: Requirement 2 – Do Not Use Vendor-Supplied Defaults for System Passwords and Other Security Parameters

This article is the third in a series dedicated to the use of Identity Management (IdM) and related technologies to address the Payment Card Industry Data Security Standard (PCI DSS). This specific post covers the PCI DSS requirement related to not using vendor-supplied defaults for system passwords and other security parameters. The outline and mapping of individual articles to the requirements can be found in the overarching post that started the series.

The second section of the PCI DSS standard applies to defaults – especially passwords and other security parameters. The standard calls for the reset of passwords (and similar settings) for any new system before placing it on the network. IdM can help here: leveraging IdM for centralized accounts and policy information allows for simple automated provisioning of new systems with

Continue reading “PCI Series: Requirement 2 – Do Not Use Vendor-Supplied Defaults for System Passwords and Other Security Parameters”
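As a rough illustration of what automating that provisioning step can look like, the sketch below enrolls a newly built host into an IdM domain by driving ipa-client-install from Python. The domain, server, and credentials are hypothetical placeholders, and this is one possible approach rather than anything prescribed by the standard or the post above.

```python
# Sketch: enrolling a freshly provisioned system into an IdM domain so that
# centralized account and policy settings apply before it joins the network.
# Assumes ipa-client-install is available and this runs as root.
# The domain, server, and credentials are placeholders.
import subprocess

subprocess.run(
    [
        "ipa-client-install",
        "--domain=example.com",
        "--server=idm.example.com",
        "--principal=admin",       # enrollment credential (placeholder)
        "--password=ChangeMe123",  # placeholder; prefer a one-time password
        "--mkhomedir",
        "--unattended",
    ],
    check=True,
)
```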

In Defense of the Pet Container, Part 3: Puppies, Kittens and… Containers

In our third and final installment (see: part one & part two), let’s take a look at some high-level use cases for Linux containers and, finally (finally), defend what I like to call “pet” containers. From a general perspective, we see three repeated high-level use cases for containerizing applications:

  1. The fully orchestrated, multi-container application as you would create in OpenShift via the Red Hat Container Development Kit;
  2. Loosely orchestrated containers that don’t use advanced features like application templates and Kubernetes; and
  3. Pet containers (sketched below).
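To give a feel for the third case, here is a minimal sketch using the Docker SDK for Python: a single long-lived, named container that you treat more like a traditional server, entering it and maintaining it by hand. The image and container names are placeholders, not anything from the original post.

```python
# Sketch: creating a long-lived, named "pet" container with the Docker SDK
# for Python (pip install docker). Image and names are placeholders, and a
# running Docker daemon is assumed.
import docker

client = docker.from_env()

pet = client.containers.run(
    "registry.access.redhat.com/rhel7",  # placeholder base image
    name="dev-sandbox",                  # a pet gets a name, not a random ID
    command="sleep infinity",            # keep it alive between sessions
    detach=True,
)

# Unlike "cattle," a pet is maintained in place rather than replaced:
# you would exec into it, install tools, and keep it around.
print(pet.name, pet.status)
```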

Continue reading “In Defense of the Pet Container, Part 3: Puppies, Kittens and… Containers”