Exploring Summit: The newest intelligent supercomputer

Summit, the world’s fastest supercomputer, running at Oak Ridge National Laboratory (ORNL), was designed from the ground up to be flexible and to support a wide range of scientific and engineering workloads. In addition to traditional simulation workloads, Summit is well suited to analysis and AI/ML workloads; it has been described as “the world’s first AI supercomputer.” The use of standard components and software makes it easy to port existing applications to Summit as well as to develop new ones. As Buddy Bland, Project Director for the ORNL Leadership Computing Facility, points out, Summit lets users bring their codes to the machine quickly, thanks to the standard software environment provided by Red Hat Enterprise Linux (RHEL).

Summit is built using a “fat node” building-block concept: each identically configured node is a powerful IBM Power System AC922 server, interconnected with the others via a high-bandwidth, dual-rail Mellanox InfiniBand fabric, for a combined cluster of roughly 4,600 nodes. Each node in the system has:

[Figure: Summit supercomputer node composition]

The result is a system with excellent CPU compute capabilities, plenty of memory to hold data, high-performance local storage, and massive communications bandwidth. Additionally, the prominent use of graphics processing units (GPUs) from Nvidia at the node architecture level provides a robust acceleration platform for artificial intelligence (AI) and other workloads. All of this is achieved using standard hardware components, standard software components, and standard interfaces.

So why is workload acceleration so important? In the past, hardware accelerators such as vector processors and array processors were exotic technologies used for esoteric applications. In today’s systems, hardware accelerators are mainstream in the form of GPUs. GPUs can be used for everything from visualization to number crunching to database acceleration, and they are omnipresent across the hardware landscape, existing in desktops, traditional servers, supercomputers, and everything in between, including cloud instances. And the standard unifying component across these configurations is Red Hat Enterprise Linux, the operating system and software development environment supporting hardware, applications, and users across a variety of environments at scale.
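As a minimal illustration of how approachable GPU acceleration has become (a sketch, not part of Summit’s software stack; it assumes the optional CuPy library is installed where a GPU is present), the same matrix multiply can run on a GPU or fall back to the CPU:

    # Run a matrix multiply on the GPU when CuPy (a NumPy-compatible
    # GPU array library) is available; otherwise fall back to NumPy.
    import numpy as np

    try:
        import cupy as xp  # GPU-accelerated arrays, if installed
    except ImportError:
        xp = np            # plain CPU NumPy fallback

    a = xp.random.rand(2048, 2048)
    b = xp.random.rand(2048, 2048)
    c = a @ b              # same code path on CPU and GPU
    print(type(c))         # cupy.ndarray on GPU, numpy.ndarray on CPU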

The breadth of scientific disciplines targeted by Summit can be seen in the list of applications included in its early science program. To drive optimal use of the full system from day one, ORNL identified a set of research projects that were given access to small subsets of Summit while it was being built. This enabled the applications to be ported to the Summit architecture, optimized, and made ready to scale out to the full system as soon as it became available. These early applications span astrophysics, materials science, systems biology, cancer research, and AI/ML.

Machine learning (ML) is a great example of a workload that stresses systems: it needs compute power, I/O, and memory to handle data, and it needs massive number crunching for training, which is handled by GPUs. All of that requires an enormous amount of electrical power. The Summit system is not only flexible and versatile in the way it handles workloads; it also addresses one of the biggest challenges of today’s supercomputers: excessive power consumption. Besides being the fastest supercomputer on the planet, it is equally significant that Summit performs well on the Green500 list, a ranking of supercomputers by speed and efficiency that puts a premium on energy-efficient performance for sustainable supercomputing. Summit comes in at #1 in its category and #5 overall on this list, a very strong showing.
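To make the efficiency metric concrete, the Green500 ranks machines by performance per watt. Below is a back-of-the-envelope sketch using approximate, publicly reported June 2018 figures for Summit (about 122.3 petaflops sustained HPL at roughly 8.8 MW); these numbers are illustrative assumptions, not taken from this article:

    # Energy efficiency as performance per watt (Green500-style metric).
    hpl_pflops = 122.3         # approximate sustained HPL performance, PFLOPS
    power_mw = 8.8             # approximate measured power, megawatts

    gflops = hpl_pflops * 1e6  # PFLOPS -> GFLOPS
    watts = power_mw * 1e6     # MW -> W
    print(f"{gflops / watts:.1f} GFLOPS per watt")  # roughly 13.9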

In summary, the fastest supercomputer in the world supports diverse application requirements driven by simulation, big data, and AI/ML; employs the latest processor, acceleration, and interconnect technologies from IBM, Nvidia, and Mellanox, respectively; and shows unprecedented power efficiency for a machine of its scale. Critical to the success of this truly versatile system is Linux, in the form of Red Hat Enterprise Linux, the glue that brings everything together and lets us interact with this modern marvel.

New SAP HANA benchmark results help customers better plan their deployments

As traditional multi-tier enterprise software adapts to the new realities of cloud infrastructure, it also needs to make use of the latest advances in computational and hardware capabilities. Red Hat has been working with major ISVs and partners, like SAP, on digital transformation scenarios while simultaneously helping them extract additional performance from their hardware with Red Hat Enterprise Linux.

As part of the quest for enhanced performance, the focus for database and analytics applications has been shifting to in-memory execution, a deployment model that SAP HANA offers. In the future, that trend is likely to include even more complex designs that incorporate entire software frameworks for processing information in memory, which is where SAP Data Hub comes into play. As a result, last year Red Hat introduced an enhanced offering, Red Hat Enterprise Linux for SAP Solutions, designed to simplify our customers’ adoption of Red Hat Enterprise Linux and to cater to the various use cases they may have, including running SAP S/4HANA.

To further aid customers and partners in planning, sizing, and configuring their environments, SAP and Red Hat, along with other software and hardware partners, have historically used a suite of performance benchmarks. For traditional multi-tier deployments, the Sales and Distribution (SD) module became a “gold standard” for benchmarking across the largest enterprises and small businesses alike. With a long history of collaboration with SAP and our mutual hardware OEM partners, like HPE and Dell EMC, among others, Red Hat is no stranger to delivering leading results on these benchmarks across multiple server sizes.

To demonstrate performance and provide additional scalability and sizing information for SAP HANA applications and workloads, SAP introduced the Business Warehouse (BW) edition of SAP HANA Standard Application Benchmark. Presently on version 2, this benchmark simulates a variety of users with different analytical requirements and measures the key performance indicator (KPI) relevant to each of the three benchmark phases defined as follows:

  1. Data load phase, testing data latency and load performance (lower is better)
  2. Query throughput phase, testing query throughput with moderately complex queries (higher is better)
  3. Query runtime phase, testing the performance of running very complex queries (lower is better)

As a result of close collaboration with our OEM partners, Red Hat Enterprise Linux (RHEL) was used in several recent publications of the above benchmark.

Specifically, when processing 1.3 billion initial records (a popular dataset size) on a single Dell EMC PowerEdge R940xa server, Red Hat Enterprise Linux delivered the best performance across all three benchmark KPIs, outperforming a similarly configured server (see Table 1).

 

Table 1. Results in the scale-up category running SAP BW Edition for SAP HANA Standard Application Benchmark, Version 2, with 1.3B initial records

| Operating system | Phase 1 (lower is better) | Phase 2 (higher is better) | Phase 3 (lower is better) | Technology release | Database release |
|---|---|---|---|---|---|
| Red Hat Enterprise Linux 7.4 [1] | 13,421 sec | 10,544 | 99 sec | SAP NetWeaver 7.50 | SAP HANA 1.0 |
| SUSE Linux Enterprise Server 12 [2] | 14,333 sec | 6,901 | 102 sec | SAP NetWeaver 7.50 | SAP HANA 1.0 |
| Red Hat Enterprise Linux advantage | 7% | 53% | 3% | | |
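The “advantage” row can be derived from the raw KPIs. The sketch below shows one plausible calculation (relative time saved for the lower-is-better phases, relative throughput gained for the higher-is-better phase); the published figures may round slightly differently:

    # Derive the "Red Hat Enterprise Linux advantage" percentages
    # from the Table 1 KPIs.
    def advantage(rhel, other, lower_is_better=True):
        """Relative advantage of the RHEL result, in percent."""
        if lower_is_better:
            return (other - rhel) / rhel * 100   # time saved
        return (rhel - other) / other * 100      # extra throughput

    print(f"Phase 1: {advantage(13421, 14333):.0f}%")                        # ~7%
    print(f"Phase 2: {advantage(10544, 6901, lower_is_better=False):.0f}%")  # ~53%
    print(f"Phase 3: {advantage(99, 102):.0f}%")                             # ~3%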

 

Additionally, with a much larger dataset of 5.2 billion initial records, a Dell EMC PowerEdge R840 server running Red Hat Enterprise Linux outscored a similarly configured server on two of the three benchmark KPIs, demonstrating better dataset load time and query processing throughput (see Table 2).

 

Table 2. Results in the scale-up category running SAP BW Edition for SAP HANA Standard Application Benchmark, Version 2, with 5.2B initial records

| Operating system | Phase 1 (lower is better) | Phase 2 (higher is better) | Phase 3 (lower is better) | Technology release | Database release |
|---|---|---|---|---|---|
| Red Hat Enterprise Linux 7.4 [3] | 74,827 sec | 3,095 | 175 sec | SAP NetWeaver 7.50 | SAP HANA 2.0 |
| SUSE Linux Enterprise Server 12 [4] | 84,744 sec | 2,916 | 172 sec | SAP NetWeaver 7.50 | SAP HANA 2.0 |
| Red Hat Enterprise Linux advantage | 13% | 6% | -1.75% | | |

 

These results demonstrate Red Hat’s commitment to helping OEM partners and ISVs deliver high-performing solutions to our mutual customers, and showcase close alignment between Red Hat and Dell EMC that, in collaboration with SAP, led to the creation of certified, single-source solutions for SAP HANA. Available in both single-server and larger, scale-out configurations, Dell EMC’s solution is optimized with Red Hat Enterprise Linux for SAP Solutions.

Learn more: https://www.redhat.com/en/partners/dell and https://www.redhat.com/en/resources/red-hat-enterprise-linux-sap-solutions-technology-overview

 

Results as of July 30, 2018. SAP and SAP HANA are registered trademarks of SAP AG in Germany and in several other countries. See http://www.sap.com/benchmark for more information.
[1] Dell EMC PowerEdge R940xa (4 processors / 112 cores / 224 threads, Intel Xeon Platinum 8180M processor, 2.50 GHz, 64 KB L1 cache and 1024 KB L2 cache per core, 38.5 MB L3 cache per processor, 1536 GB main memory). Certification number 2018023.
[2] FUJITSU Server PRIMERGY RX4770 M4 (4 processors / 112 cores / 224 threads, Intel Xeon Platinum 8180 processor, 2.50 GHz, 64 KB L1 cache and 1024 KB L2 cache per core, 38.5 MB L3 cache per processor, 1536 GB main memory). Certification number 2018017.
[3] Dell EMC PowerEdge R840 (4 processors / 112 cores / 224 threads, Intel Xeon Platinum 8180M processor, 2.50 GHz, 64 KB L1 cache and 1024 KB L2 cache per core, 38.5 MB L3 cache per processor, 3072 GB main memory). Certification number 2018028.
[4] HPE Superdome Flex (4 processors / 112 cores / 224 threads, Intel Xeon Platinum 8180 processor, 2.50 GHz, 64 KB L1 cache and 1024 KB L2 cache per core, 38.5 MB L3 cache per processor, 3072 GB main memory). Certification number 2018025.

Scale better with open technologies from Red Hat

At last year’s International Supercomputing Conference (ISC), we noted the trend of Linux underpinning the vast majority of supercomputers built with sophisticated acceleration and interconnect technologies, effectively redefining the term “commodity” in high performance computing (HPC).

Fast forward to ISC18, and Linux is the de facto standard operating system for all top supercomputers, with Red Hat Enterprise Linux powering some of the largest and most intelligent supercomputers on the planet: Summit and Sierra. Red Hat is looking forward to seeing how these two newest US-based supercomputers score on the latest iteration of the Top500 list.

Continue reading “Scale better with open technologies from Red Hat”

Keeping Pace with Multiple Architectures (Part 2)

In our first post discussing Red Hat’s multi-architecture strategy, we focused on the disruptive nature of enabling new and rapidly evolving architectures and how this enablement necessitates a different set of product requirements to fulfill our vision of providing a consistent and familiar experience to our customers across multiple hardware architectures. While we have been working with many original equipment manufacturers (OEMs) on x86_64-based servers for years, we have seen interest from our customer base in delivering parity across multiple architectures, including IBM Power Little Endian (ppc64le) and ARMv8-A (aarch64).

So what exactly are we doing with our partners to make this …

Continue reading “Keeping Pace with Multiple Architectures (Part 2)”

Keeping pace with multiple architectures (Part 1)

The open source community and the IT industry as a whole continue to discuss how they can use different hardware architectures to solve hard computational problems and address specific workloads. Customers value choice in their technology implementations, and choice is a key benefit of open source solutions. To best meet these needs, the software ecosystem has to …

Continue reading “Keeping pace with multiple architectures (Part 1)”

Red Hat Enterprise Linux Brings Forth Performance and Scalability Features of New Intel Xeon Processor Family

Last week, Intel launched the new family of Intel® Xeon® Scalable processors with new features such as Intel Advanced Vector Extensions 512 (Intel AVX-512), which boosts the performance of computationally intensive tasks; a new Intel Mesh Architecture for reduced system latency; Intel QuickAssist Technology for hardware acceleration of cryptography and data compression operations; and an integrated high-speed fabric with Intel Omni-Path Architecture. According to Intel, …
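As a small aside (an illustration, not from Intel’s announcement): on Linux, an application can check whether the CPU advertises AVX-512 by reading the kernel’s CPU flag list, for example:

    # Check for the AVX-512 foundation flag ("avx512f") in /proc/cpuinfo.
    # Other avx512* flags indicate additional AVX-512 extensions.
    with open("/proc/cpuinfo") as f:
        flags = set()
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
                break
    print("AVX-512F supported:", "avx512f" in flags)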

Continue reading “Red Hat Enterprise Linux Brings Forth Performance and Scalability Features of New Intel Xeon Processor Family”

Supercomputing & Red Hat: What’s Happening at ISC 2017?

Twice a year, the most prominent supercomputing sites in the world get to showcase their capabilities and compete for a Top500 spot. With Linux dominating the list, Red Hat is paying close attention to the latest changes that will be announced at the International Supercomputing Conference (ISC) in Frankfurt, Germany, from June 18 to June 22, 2017.

While supercomputers of the past were often proprietary, the trend of building them out of commodity components has dominated the landscape for the past two decades. But recently the definition of “commodity” in HPC has been morphing. Traditional solutions are routinely augmented by various acceleration technologies, cache-coherent interconnects are becoming mainstream, and boutique hardware and software technologies previously reserved for highly specialized solutions are being adopted by major HPC sites at scale.

Developing new highly scalable applications, and adapting existing ones, to take advantage of these technological advances across multiple deployment domains is the greatest challenge facing HPC sites. This is where the operating system can provide …

Continue reading “Supercomputing & Red Hat: What’s Happening at ISC 2017?”

Microsoft, Red Hat, and HPE Collaboration Delivers Choice & Value to Enterprise Customers

In the world of heterogeneous data centers, having multiple operating systems running on different hardware platforms (and architectures) is the norm. Even traditional applications and databases are being migrated or abstracted using Java and other interpreted languages to minimize the impact on the end user should they decide to run on a different platform.

Consider the common scenario where you have both Windows and Linux running in the data center and you need your Linux application to talk to Microsoft SQL Server and get some existing data from it. Your application would need to connect to the Windows server that is running the SQL Server database using one of many available APIs and request information.
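A minimal sketch of that scenario follows (illustrative only; it assumes the pyodbc package and Microsoft’s ODBC driver are installed, and the server name, database, credentials, and table are placeholders):

    # Query a remote Microsoft SQL Server from a Linux application via ODBC.
    # All connection details below are hypothetical.
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=sqlserver.example.com;"  # host running SQL Server
        "DATABASE=SalesDB;"              # placeholder database
        "UID=appuser;PWD=secret"         # placeholder credentials
    )
    cursor = conn.cursor()
    cursor.execute("SELECT TOP 5 * FROM orders")  # placeholder table
    for row in cursor.fetchall():
        print(row)
    conn.close()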

While that may sound trivial, in reality you need to: know where that system is located, authenticate your application against it, and pay the penalty of traversing one or more networks to get the data back, all while the user is waiting. This, in fact, was “the way of the world” before Microsoft announced its intent to port MS SQL Server to Linux in March of 2016. Today, however, you have a choice of having your applications connect to a Microsoft SQL Server that runs on either Windows or Linux …

Continue reading “Microsoft, Red Hat, and HPE Collaboration Delivers Choice & Value to Enterprise Customers”

Arm in Arm: Explore Enterprise Server Options at ARM’s Annual Technical Conference

If you have ever wanted to learn about Red Hat’s involvement in the ARM server ecosystem, and you are in the San Francisco Bay Area, this week may be a perfect opportunity. Red Hat will be exhibiting at ARM TechCon, ARM Holdings’ premier yearly show at the Santa Clara Convention Center. Attendees will be presented with a variety of great technical sessions and training topics, along with expert keynotes, solutions-based Expo Theater sessions, and an expo floor filled with new and emerging technologies for the datacenter. Note that the expo floor can be accessed with the free …

Continue reading “Arm in Arm: Explore Enterprise Server Options at ARM’s Annual Technical Conference”

Red Hat and HPE: Collaborating to Better Address Customer IT Challenges

In the world of hyper-competition, the boundaries of software and hardware solutions are quickly dissolving. Rather than buying just software or a server, customers are looking for well-integrated, tested, and proven solutions to gain competitive advantage. Hewlett Packard Enterprise and Red Hat understand this IT landscape very well and for many years have worked closely together to fine-tune their respective hardware and software to achieve the best customer results.

Continue reading “Red Hat and HPE: Collaborating to Better Address Customer IT Challenges”
