Highlights - November 2022

This is the 60th edition of the TOP500.

The Frontier system at the Oak Ridge National Laboratory, Tennessee, USA remains the No. 1 system on the TOP500 and is still the only system reported with an HPL performance exceeding one Exaflop/s. Frontier brought the pole position back to the USA on the June listing with an HPL score of 1.102 Exaflop/s.

With an HPL score of 1.102 EFlop/s, the Frontier machine at Oak Ridge National Laboratory (ORNL) did not improve upon the score it reached on the June 2022 list. That said, Frontier's near-tripling of the HPL score of the second-place system is still a major achievement for computer science. On top of that, Frontier demonstrated a score of 7.94 EFlop/s on the HPL-MxP benchmark, which measures performance for mixed-precision calculations. Frontier is based on the HPE Cray EX235a architecture and relies on AMD EPYC 64C 2GHz processors. The system has 8,730,112 cores and a power efficiency rating of 52.23 gigaflops/watt, and it uses the HPE Slingshot-11 interconnect for data transfer.
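The power-efficiency figure follows directly from the reported Rmax and power draw; a minimal sketch of the arithmetic, using Frontier's numbers from the list:

```python
# Power efficiency = sustained HPL performance (Rmax) / power draw.
rmax_flops = 1.102e18      # Frontier Rmax: 1.102 EFlop/s
power_watts = 21_100e3     # reported power: 21,100 kW

gflops_per_watt = rmax_flops / power_watts / 1e9
print(f"{gflops_per_watt:.2f} GFlops/watt")  # → 52.23 GFlops/watt
```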

The top position was previously held for two years straight by the Fugaku system at the RIKEN Center for Computational Science (R-CCS) in Kobe, Japan. With its HPL benchmark score of 442 Pflop/s, Fugaku is now listed as No. 2.

The LUMI system at EuroHPC/CSC in Finland entered the list last June at No. 3. It is again listed as No. 3, but only thanks to an upgrade that doubled its size. With its increased HPL score of 309 Pflop/s, it remains the largest system in Europe.

The only new machine to grace the top of the list was the No. 4 Leonardo system at EuroHPC/CINECA in Bologna, Italy. The machine achieved an HPL score of 0.174 EFlop/s (174.7 Pflop/s) with 1,463,616 cores.

Here is a brief summary of the systems in the Top10:

  • Frontier is the No. 1 system in the TOP500. This HPE Cray EX system is the first US system with a performance exceeding one Exaflop/s. It is installed at the Oak Ridge National Laboratory (ORNL) in Tennessee, USA, where it is operated for the Department of Energy (DOE). It currently has achieved 1.102 Exaflop/s using 8,730,112 cores. The new HPE Cray EX architecture combines 3rd Gen AMD EPYC™ CPUs optimized for HPC and AI with AMD Instinct™ MI250X accelerators and a Slingshot-11 interconnect.

  • Fugaku, now the No. 2 system, is installed at the RIKEN Center for Computational Science (R-CCS) in Kobe, Japan. It has 7,630,848 cores, which allowed it to achieve an HPL benchmark score of 442 Pflop/s.

  • The upgraded LUMI system, another HPE Cray EX system, installed at the EuroHPC center at CSC in Finland, is No. 3 with a performance of 309.1 Pflop/s. The European High-Performance Computing Joint Undertaking (EuroHPC JU) is pooling European resources to develop top-of-the-range Exascale supercomputers for processing big data. One of the pan-European pre-Exascale supercomputers, LUMI, is located in CSC's data center in Kajaani, Finland.

  • The new No. 4 system, Leonardo, is installed at a different EuroHPC site, in CINECA, Italy. It is an Atos BullSequana XH2000 system with Xeon Platinum 8358 32C 2.6GHz as main processors, NVIDIA A100 SXM4 64 GB as accelerators, and quad-rail NVIDIA HDR100 InfiniBand as interconnect. It achieved a Linpack performance of 174.7 Pflop/s.

  • Summit, an IBM-built system at the Oak Ridge National Laboratory (ORNL) in Tennessee, USA, is now listed at the No. 5 spot worldwide with a performance of 148.6 Pflop/s on the HPL benchmark, which is used to rank the TOP500 list. Summit has 4,356 nodes, each housing two Power9 CPUs with 22 cores each and six NVIDIA Tesla V100 GPUs, each with 80 streaming multiprocessors (SM). The nodes are linked together with a Mellanox dual-rail EDR InfiniBand network.

Rank | Site | System | Cores | Rmax (TFlop/s) | Rpeak (TFlop/s) | Power (kW)
1 | DOE/SC/Oak Ridge National Laboratory, United States | Frontier - HPE Cray EX235a, AMD Optimized 3rd Generation EPYC 64C 2GHz, AMD Instinct MI250X, Slingshot-11 (HPE) | 8,730,112 | 1,102.00 | 1,685.65 | 21,100
2 | RIKEN Center for Computational Science, Japan | Supercomputer Fugaku - A64FX 48C 2.2GHz, Tofu interconnect D (Fujitsu) | 7,630,848 | 442.01 | 537.21 | 29,899
3 | EuroHPC/CSC, Finland | LUMI - HPE Cray EX235a, AMD Optimized 3rd Generation EPYC 64C 2GHz, AMD Instinct MI250X, Slingshot-11 (HPE) | 2,220,288 | 309.10 | 428.70 | 6,016
4 | EuroHPC/CINECA, Italy | Leonardo - BullSequana XH2000, Xeon Platinum 8358 32C 2.6GHz, NVIDIA A100 SXM4 64 GB, Quad-rail NVIDIA HDR100 Infiniband (EVIDEN) | 1,463,616 | 174.70 | 255.75 | 5,610
5 | DOE/SC/Oak Ridge National Laboratory, United States | Summit - IBM Power System AC922, IBM POWER9 22C 3.07GHz, NVIDIA Volta GV100, Dual-rail Mellanox EDR Infiniband (IBM) | 2,414,592 | 148.60 | 200.79 | 10,096
6 | DOE/NNSA/LLNL, United States | Sierra - IBM Power System AC922, IBM POWER9 22C 3.1GHz, NVIDIA Volta GV100, Dual-rail Mellanox EDR Infiniband (IBM / NVIDIA / Mellanox) | 1,572,480 | 94.64 | 125.71 | 7,438
7 | National Supercomputing Center in Wuxi, China | Sunway TaihuLight - Sunway MPP, Sunway SW26010 260C 1.45GHz, Sunway (NRCPC) | 10,649,600 | 93.01 | 125.44 | 15,371
8 | DOE/SC/LBNL/NERSC, United States | Perlmutter - HPE Cray EX235n, AMD EPYC 7763 64C 2.45GHz, NVIDIA A100 SXM4 40 GB, Slingshot-10 (HPE) | 761,856 | 70.87 | 93.75 | 2,589
9 | NVIDIA Corporation, United States | Selene - NVIDIA DGX A100, AMD EPYC 7742 64C 2.25GHz, NVIDIA A100, Mellanox HDR Infiniband (Nvidia) | 555,520 | 63.46 | 79.22 | 2,646
10 | National Super Computer Center in Guangzhou, China | Tianhe-2A - TH-IVB-FEP Cluster, Intel Xeon E5-2692v2 12C 2.2GHz, TH Express-2, Matrix-2000 (NUDT) | 4,981,760 | 61.44 | 100.68 | 18,482
  • Sierra, a system at the Lawrence Livermore National Laboratory, CA, USA, is at No. 6. Its architecture is very similar to that of the No. 5 system, Summit. It is built with 4,320 nodes, each with two Power9 CPUs and four NVIDIA Tesla V100 GPUs. Sierra achieved 94.6 Pflop/s.
  • Sunway TaihuLight, a system developed by China's National Research Center of Parallel Computer Engineering & Technology (NRCPC) and installed at the National Supercomputing Center in Wuxi, in China's Jiangsu province, is listed at the No. 7 position with 93 Pflop/s.
  • Perlmutter, at No. 8, is based on the HPE Cray "Shasta" platform and is a heterogeneous system with AMD EPYC-based nodes and 1,536 NVIDIA A100-accelerated nodes. Perlmutter achieved 70.9 Pflop/s.
  • Now at No. 9, Selene is an NVIDIA DGX A100 SuperPOD installed in-house at NVIDIA in the USA. The system is based on AMD EPYC processors with NVIDIA A100 accelerators and a Mellanox HDR InfiniBand network, and achieved 63.5 Pflop/s.
  • Tianhe-2A (Milky Way-2A), a system developed by China's National University of Defense Technology (NUDT) and deployed at the National Supercomputer Center in Guangzhou, China, is now listed as the No. 10 system with 61.4 Pflop/s.

Highlights from the List

  • A total of 179 systems on the list are using accelerator/co-processor technology, up from 169 six months ago. 64 of these use NVIDIA Ampere chips and 84 use NVIDIA Volta.

  • Intel continues to provide the processors for the largest share (75.80 %) of TOP500 systems, down from 77.60 % six months ago. 101 systems (20.20 %) in the current list use AMD processors, up from 18.60 % six months ago.

  • The entry level to the list moved up to the 1.73 Pflop/s mark on the Linpack benchmark.

  • The last system on the newest list was listed at position 460 in the previous TOP500.

  • The total combined performance of all 500 systems exceeded the Exaflop barrier, now at 4.86 Eflop/s, up from 4.40 Eflop/s six months ago.

  • The entry point for the TOP100 increased to 5.78 Pflop/s.

  • The average concurrency level in the TOP500 is 189,586 cores per system, up from 182,864 six months ago.
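The aggregate figures above (combined performance, entry level, average core count) are simple reductions over the list's per-system data. A minimal sketch, using a hypothetical three-entry excerpt rather than the full 500-system list:

```python
# Hypothetical excerpt of per-system data: (name, Rmax in TFlop/s, cores).
# The real list has 500 entries; values here are the top three systems.
systems = [
    ("Frontier", 1_102_000.0, 8_730_112),
    ("Fugaku", 442_010.0, 7_630_848),
    ("LUMI", 309_100.0, 2_220_288),
]

total_eflops = sum(r for _, r, _ in systems) / 1e6   # TFlop/s -> EFlop/s
entry_level = min(r for _, r, _ in systems)          # slowest system in this excerpt
avg_cores = sum(c for _, _, c in systems) // len(systems)

print(f"combined: {total_eflops:.3f} EFlop/s | "
      f"entry: {entry_level:,.0f} TFlop/s | avg cores: {avg_cores:,}")
```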

General Trends

Installations by countries/regions:

HPC manufacturer:

Interconnect Technologies:

Processor Technologies:

Green500

HPCG Results

HPL-MxP Results

On the HPL-MxP (formerly HPL-AI) benchmark, which measures performance for mixed-precision calculations, Frontier demonstrated 7.94 Eflop/s. The HPL-MxP benchmark seeks to highlight the use of mixed-precision computations. Traditional HPC uses 64-bit floating-point computations; today we see hardware with various levels of floating-point precision: 32-bit, 16-bit, and even 8-bit. The HPL-MxP benchmark demonstrates that much higher performance is possible by using mixed precision during the computation (see the Top 5 from the HPL-MxP benchmark), and that, using mathematical techniques, the same accuracy can be achieved with mixed precision as with straight 64-bit precision.

Rank | Site | Computer | Cores | HPL-MxP (Eflop/s) | TOP500 Rank | HPL Rmax (Eflop/s) | Speedup of HPL-MxP over HPL
1 | DOE/SC/ORNL, USA | Frontier, HPE Cray EX235a | 8,730,112 | 7.942 | 1 | 1.1020 | 7.2
2 | EuroHPC/CSC, Finland | LUMI, HPE Cray EX235a | 2,174,976 | 2.168 | 3 | 0.3091 | 7.0
3 | RIKEN, Japan | Fugaku, Fujitsu A64FX | 7,630,848 | 2.000 | 2 | 0.4420 | 4.5
4 | EuroHPC/CINECA, Italy | Leonardo, Bull Sequana XH2000 | 1,463,616 | 1.842 | 4 | 0.1682 | 11.0
5 | DOE/SC/ORNL, USA | Summit, IBM AC922 POWER9 | 2,414,592 | 1.411 | 5 | 0.1486 | 9.5
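The idea behind HPL-MxP can be sketched with classic mixed-precision iterative refinement: factor and solve the linear system in low precision, then refine the solution with double-precision residuals until full 64-bit accuracy is recovered. A minimal NumPy illustration of the technique (not the actual benchmark code, which uses low-precision LU with GMRES-based refinement on GPU hardware):

```python
import numpy as np

def mixed_precision_solve(A, b, iters=5):
    # Solve in single precision (the fast part on modern accelerators)...
    A32 = A.astype(np.float32)
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    # ...then refine with double-precision residuals to recover accuracy.
    for _ in range(iters):
        r = b - A @ x                                  # residual in float64
        d = np.linalg.solve(A32, r.astype(np.float32)) # correction in float32
        x += d.astype(np.float64)
    return x

rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned test matrix
b = rng.standard_normal(n)

x = mixed_precision_solve(A, b)
# Relative residual ends up at double-precision rounding level,
# despite every solve having been done in float32.
print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```

The benchmark's "speedup" column is then just the ratio of the mixed-precision rate to the plain 64-bit HPL rate (for Frontier, 7.942 / 1.102 ≈ 7.2).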

About the TOP500 List

The first version of what became today’s TOP500 list started as an exercise for a small conference in Germany in June 1993. Out of curiosity, the authors decided to revisit the list in November 1993 to see how things had changed. About that time they realized they might be onto something and decided to continue compiling the list, which is now a much-anticipated, much-watched and much-debated twice-yearly event.