Exploring Summit: The newest intelligent supercomputer

Summit, the world’s fastest supercomputer running at Oak Ridge National Laboratory (ORNL), was designed from the ground up to be flexible and to support a wide range of scientific and engineering workloads. In addition to traditional simulation workloads, Summit is well suited to analysis and AI/ML workloads – it has been described as “the world’s first AI supercomputer”. The use of standard components and software makes it easy to port existing applications to Summit as well as to develop new ones. As pointed out by Buddy Bland, Project Director for the ORNL Leadership Computing Facility, Summit lets users bring their codes to the machine quickly, thanks to the standard software environment provided by Red Hat Enterprise Linux (RHEL).

Summit is built using a “fat node” building block concept, where each identically configured node is a powerful IBM Power System AC922 server interconnected with the others via a high-bandwidth, dual-rail Mellanox InfiniBand fabric, for a combined cluster of roughly 4,600 nodes. Each node in the system has:

[Figure: Summit supercomputer node composition]

The result is a system with excellent CPU compute capabilities, plenty of memory to hold data, high-performance local storage, and massive communications bandwidth. Additionally, prominent use of graphics processing units (GPUs) from Nvidia at the node architecture level provides a robust acceleration platform for artificial intelligence (AI) and other workloads. All of this is achieved using standard hardware components, standard software components, and standard interfaces.

So why is workload acceleration so important? In the past, hardware accelerators such as vector processors and array processors were exotic technologies used for esoteric applications. In today’s systems, hardware accelerators are mainstream in the form of GPUs. GPUs can be used for everything from visualization to number crunching to database acceleration, and are omnipresent across the hardware landscape, existing in desktops, traditional servers, supercomputers, and everything in between, including cloud instances. And the standard unifying component across these configurations is Red Hat Enterprise Linux, the operating system and software development environment supporting hardware, applications, and users across a variety of environments at scale.
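To give a feel for just how mainstream GPU number crunching has become, here is a small, hedged sketch. It assumes a CUDA-capable GPU plus the NumPy and CuPy libraries, none of which this post prescribes; the point is simply that the GPU version of a dense matrix multiply looks almost identical to the CPU version.

```python
# Illustrative only: assumes a CUDA-capable GPU and CuPy installed (pip install cupy).
import time

import numpy as np
import cupy as cp

N = 4096

# The same dense matrix multiply, once on the CPU and once on the GPU.
a_cpu = np.random.rand(N, N).astype(np.float32)

t0 = time.perf_counter()
np.matmul(a_cpu, a_cpu)
print(f"CPU (NumPy): {time.perf_counter() - t0:.3f} s")

a_gpu = cp.asarray(a_cpu)          # copy the data into GPU memory
t0 = time.perf_counter()
cp.matmul(a_gpu, a_gpu)
cp.cuda.Stream.null.synchronize()  # GPU calls are asynchronous; wait for completion
print(f"GPU (CuPy):  {time.perf_counter() - t0:.3f} s")
```

The same pattern runs on anything from a desktop to a cloud instance to a Summit node, which is exactly the kind of portability a standard operating environment makes possible.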

The breadth of scientific disciplines targeted by Summit can be seen in the list of applications included in the early science program. To help drive optimal use of the full system as soon as it was available, ORNL identified a set of research projects that were given access to small subsets of the system while Summit was being built. This allowed the applications to be ported to the Summit architecture, optimized for it, and made ready to scale out to the full system as soon as it came online. These early applications include astrophysics, materials science, systems biology, cancer research, and AI/ML.

Machine learning (ML) is a great example of a workload that stresses systems: it needs compute power, I/O, and memory to handle data, and it needs massive number crunching for training, which is handled by GPUs. All of that requires an enormous amount of electrical power to run. Summit is not only flexible and versatile in the workloads it can handle, it also addresses one of the biggest challenges facing today’s supercomputers – excessive power consumption. Besides being the fastest supercomputer on the planet, Summit also performs well on the Green500 list, a ranking that measures both speed and energy efficiency and puts a premium on sustainable supercomputing. Summit comes in at #1 in its category and #5 overall on this list, a very strong showing.

In summary, the fastest supercomputer in the world supports diverse application requirements driven by simulation, big data, and AI/ML; employs the latest processor, acceleration, and interconnect technologies from IBM, Nvidia, and Mellanox, respectively; and shows unprecedented power efficiency for a machine of that scale. Critical to the success of this truly versatile system is Linux, in the form of Red Hat Enterprise Linux, as the glue that brings everything together and allows us to interact with this modern marvel.

Red Hat Virtualization at Red Hat Summit 2018

Hi folks, in this post I wanted to share with you some of the cool activities we have happening next week at the Red Hat Summit in San Francisco. For virtualization there’s a ton going on – to say that we are ‘RHV’ed up is an understatement! (sorry, @RedHatRex wrote that joke…) Here’s a short video that will get you going on RHV:

Continue reading “Red Hat Virtualization at Red Hat Summit 2018”

Ultimate Guide to Red Hat Summit 2018 Labs: Hands-on with RHEL

This year you’ve got a lot of decisions to make before you go to Red Hat Summit in San Francisco, CA, from 8-10 May 2018.

There are breakout sessions, birds-of-a-feather sessions, mini sessions, panels, workshops, and instructor-led labs that you’re trying to juggle into your daily schedule. To help with these plans, this series provides an overview of the labs.

In this article, let’s examine a track focusing only on Red Hat Enterprise Linux (RHEL). It’s a selection of labs where you’ll get hands-on with package management and OS security, dig into RHEL internals, build a RHEL image for the cloud, and more.

The following hands-on labs are on the agenda, so let’s look at the details of each one.

Continue reading “Ultimate Guide to Red Hat Summit 2018 Labs: Hands-on with RHEL”

My Journey from BASIC to Linux

This post is brought to you by Command Line Heroes, an original podcast from Red Hat.

I think of computing today as being the convergence of at least three major threads that were once largely apart from each other. There were the proprietary hardware and software stacks: mainframes and their minicomputer counterparts. There was the proto-Internet and Unix, proprietary in their own way but leading to Linux and open source. And there was the personal computer.

Continue reading “My Journey from BASIC to Linux”

From Manual to Automated DevOps: One Man’s Journey

This post is brought to you by Command Line Heroes, an original podcast from Red Hat.

My journey, as one might say, in search of the Holy Grail or the great unicorn called DevOps, began well over 20 years ago; yet I never knew it at the time… Actually, it began in 1984 when I was 13 and got my own first computer, a Commodore VIC-20. It wasn’t the first computer that I had ever used but it was mine. I pushed that system with 3 ½ KB of RAM to its limits. Technology has grown by leaps and bounds in such a short timespan. Since those days, I’ve worn many hats. I’ve owned my own company, I’ve helped tech-edit books, and I’ve been pretty active in the open source community. Workshops, social media, MeetUps, Red Hat user groups (RHUGs), virtualization technology user groups (VTUGs), etc. have all allowed me to share and learn at the same time.

Fast forward 10 years, and I’m a sergeant in the Army. Sometimes, when in garrison (which was rare), I got tasked out to a bit of side work on UNIX systems simply because nobody else knew how to use them. These puppies were coupled together with many pre-internet technologies.

Continue reading “From Manual to Automated DevOps: One Man’s Journey”

Understanding the Concepts Behind Virtual Data Optimizer (VDO) in RHEL 7.5 Beta

In the Red Hat Enterprise Linux 7.5 Beta, we introduced Virtual Data Optimizer (VDO). VDO is a kernel module that can save disk space and reduce replication bandwidth. VDO sits on top of any block storage device and provides zero-block elimination, deduplication of redundant blocks, and data compression. These are the key phases of the data reduction process that allows VDO to reduce the data footprint on storage, and VDO applies them inline and on the fly. Now, let’s see what happens in each phase (download the beta and try it yourself):
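To make the three phases a bit more concrete before you dive into the full post, here is a loose conceptual sketch in Python. This is not how the VDO kernel module is implemented: the 4 KB block size mirrors VDO’s, but the SHA-256 hashing and zlib compression below are stand-ins chosen purely for illustration.

```python
# Conceptual sketch of the three VDO data-reduction phases; not the real implementation.
import hashlib
import zlib

BLOCK_SIZE = 4096  # VDO also operates on 4 KB blocks

def reduce_blocks(data: bytes):
    """Walk the data block by block and apply the three phases in order."""
    seen = {}     # block hash -> index of the stored block with that content
    stored = []   # blocks that actually need to be written out
    refs = []     # per-block record: "zero", ("dup", i), or ("new", i)

    for offset in range(0, len(data), BLOCK_SIZE):
        block = data[offset:offset + BLOCK_SIZE]

        # Phase 1: zero-block elimination. All-zero blocks are recorded, never stored.
        if block.count(0) == len(block):
            refs.append("zero")
            continue

        # Phase 2: deduplication. Identical blocks are stored once and referenced thereafter.
        digest = hashlib.sha256(block).digest()
        if digest in seen:
            refs.append(("dup", seen[digest]))
            continue

        # Phase 3: compression. Remaining unique blocks are compressed before being stored.
        seen[digest] = len(stored)
        stored.append(zlib.compress(block))
        refs.append(("new", seen[digest]))

    return refs, stored

if __name__ == "__main__":
    sample = b"\x00" * BLOCK_SIZE + b"A" * BLOCK_SIZE + b"A" * BLOCK_SIZE
    refs, stored = reduce_blocks(sample)
    print(refs)                          # ['zero', ('new', 0), ('dup', 0)]
    print(len(stored), "stored block(s)")
```

In the real kernel module, these steps happen in the block layer as writes arrive, which is what “inline and on the fly” means above.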

Continue reading “Understanding the Concepts Behind Virtual Data Optimizer (VDO) in RHEL 7.5 Beta”

Expand your reality with Red Hat at SuperComputing17

Over the years, the SuperComputing conference has become a focal point for many global supercomputing sites to showcase their capabilities and compete for a placement on the coveted Top500 list. Many powerful supercomputers and new technological advances are showcased during the conference, making it perhaps no surprise that Red Hat is planning to be at SuperComputing17 next week to demonstrate our latest high-performance computing (HPC) solutions (booth #1763).

Red Hat has a packed agenda for the show – here’s more about what you can expect to see from us during SuperComputing17.

Continue reading “Expand your reality with Red Hat at SuperComputing17”

Keeping Pace with Multiple Architectures (Part 2)

In our first post discussing Red Hat’s multi-architecture strategy, we focused on the disruptive nature of enabling new and rapidly evolving architectures, and on how this enablement necessitates a different set of product requirements to fulfill our vision of providing a consistent and familiar experience to our customers across multiple hardware architectures. While we have been working with many original equipment manufacturers (OEMs) on x86_64-based servers for years, we have seen interest from our customer base in delivering parity across multiple architectures, including IBM Power Little Endian (ppc64le) and ARMv8-A (aarch64).

So what exactly are we doing with our partners to make this 

Continue reading “Keeping Pace with Multiple Architectures (Part 2)”

Picking your Deployment Architecture

In the previous post I talked about Smart Card Support in Red Hat Enterprise Linux. In this article I will drill down into how to select the right deployment architecture, depending on your constraints, your requirements, and the availability of smart card-related functionality in different versions of Red Hat Enterprise Linux.

To select the right architecture for a deployment where users would authenticate using smart cards when logging into Linux systems you need to 

Continue reading “Picking your Deployment Architecture”