Highlights - November 2023

This is the 62nd edition of the TOP500.

The 62nd edition of the TOP500 shows five new or upgraded entries in the Top 10, but the Frontier system remains the only true exascale machine, with an HPL score of 1.194 Exaflop/s.

The Frontier system at the Oak Ridge National Laboratory in Tennessee, USA, remains the No. 1 system on the TOP500 and is still the only system reported with an HPL performance exceeding one Exaflop/s. Frontier brought the pole position back to the USA on the June 2022 list and has since been remeasured with an HPL score of 1.194 Exaflop/s.

Frontier is based on the latest HPE Cray EX235a architecture and is equipped with AMD EPYC 64C 2GHz processors. The system has 8,699,904 total cores, a power efficiency rating of 52.59 gigaflops/watt, and relies on HPE’s Slingshot 11 network for data transfer.  
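
The efficiency figure follows directly from the HPL result and the measured power draw: gigaflops per watt is simply Rmax divided by power once both are expressed in the same base units. The snippet below is only an illustrative check, using Frontier's numbers from the Top 10 table further down, not part of any official TOP500 tooling.

```python
# Frontier's power efficiency reproduced from its TOP500 entry (illustrative check).
rmax_tflops = 1_194_000.0        # HPL Rmax: 1,194,000 TFlop/s = 1.194 EFlop/s
power_kw = 22_703.0              # power draw during the HPL run, in kW

gflops = rmax_tflops * 1_000     # convert TFlop/s -> GFlop/s
watts = power_kw * 1_000         # convert kW -> W
print(f"{gflops / watts:.2f} GFlop/s per watt")   # -> 52.59
```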

The Aurora system at the Argonne Leadership Computing Facility, Illinois, USA, is currently being commissioned and will, at full scale, exceed Frontier with a peak performance of 2 Exaflop/s. It was submitted with a measurement on half of the final system, achieving 585 Petaflop/s on the HPL benchmark, which secured the No. 2 spot on the TOP500.

Aurora is built by Intel based on the HPE Cray EX - Intel Exascale Compute Blade, which uses Intel Xeon CPU Max Series processors and Intel Data Center GPU Max Series accelerators that communicate through HPE’s Slingshot-11 network interconnect.

The Eagle system installed in the Microsoft Azure cloud in the USA is newly listed as No. 3. This Microsoft NDv5 system is based on Intel Xeon Platinum 8480C processors and NVIDIA H100 accelerators and achieved an HPL score of 561 Pflop/s.

The Fugaku system at the RIKEN Center for Computational Science (R-CCS) in Kobe, Japan, is at No. 4 the highest-ranked system outside the USA. It held the No. 1 position on the TOP500 from June 2020 until November 2021. Its HPL benchmark score of 442 Pflop/s is now only sufficient for the No. 4 spot.

The LUMI system at EuroHPC/CSC in Finland has been further upgraded and is now listed as No. 5 worldwide. It remains the largest system in Europe and is now listed with an HPL score of 380 Pflop/s.

Here is a summary of the systems in the Top 10:

  • Frontier remains the No. 1 system in the TOP500. This HPE Cray EX system is the first US system with a performance exceeding one Exaflop/s. It is installed at the Oak Ridge National Laboratory (ORNL) in Tennessee, USA, where it is operated for the Department of Energy (DOE). It achieved 1.194 Exaflop/s using 8,699,904 cores. The HPE Cray EX architecture combines 3rd Gen AMD EPYC™ CPUs optimized for HPC and AI with AMD Instinct™ MI250X accelerators and a Slingshot-11 interconnect.

  • Aurora achieved the No. 2 spot by submitting an HPL score of 585 Pflop/s measured on half of the full system. It is installed at the Argonne Leadership Computing Facility, Illinois, USA, where it is also operated for the Department of Energy (DOE). This new Intel system is based on HPE Cray EX - Intel Exascale Compute Blades. It uses Intel Xeon CPU Max Series processors, Intel Data Center GPU Max Series accelerators, and a Slingshot-11 interconnect.

  • Eagle, the new No. 3 system, is installed by Microsoft in its Azure cloud. This Microsoft NDv5 system is based on Xeon Platinum 8480C processors and NVIDIA H100 accelerators and achieved an HPL score of 561 Pflop/s.

  • Fugaku, the No. 4 system, is installed at the RIKEN Center for Computational Science (R-CCS) in Kobe, Japan. It has 7,630,848 cores, which allowed it to achieve an HPL benchmark score of 442 Pflop/s.

  • The again upgraded LUMI system, another HPE Cray EX system installed at the EuroHPC center at CSC in Finland, is now No. 5 with a performance of 380 Pflop/s. The European High-Performance Computing Joint Undertaking (EuroHPC JU) is pooling European resources to develop top-of-the-range Exascale supercomputers for processing big data. One of the pan-European pre-Exascale supercomputers, LUMI, is located in CSC’s data center in Kajaani, Finland.

| Rank | Site | System | Manufacturer | Cores | Rmax (TFlop/s) | Rpeak (TFlop/s) | Power (kW) |
|------|------|--------|--------------|-------|----------------|-----------------|------------|
| 1 | DOE/SC/Oak Ridge National Laboratory, United States | Frontier - HPE Cray EX235a, AMD Optimized 3rd Generation EPYC 64C 2GHz, AMD Instinct MI250X, Slingshot-11 | HPE | 8,699,904 | 1,194.00 | 1,679.82 | 22,703 |
| 2 | DOE/SC/Argonne National Laboratory, United States | Aurora - HPE Cray EX - Intel Exascale Compute Blade, Xeon CPU Max 9470 52C 2.4GHz, Intel Data Center GPU Max, Slingshot-11 | Intel | 4,742,808 | 585.34 | 1,059.33 | 24,687 |
| 3 | Microsoft Azure, United States | Eagle - Microsoft NDv5, Xeon Platinum 8480C 48C 2GHz, NVIDIA H100, NVIDIA Infiniband NDR | Microsoft | 1,123,200 | 561.20 | 846.84 | |
| 4 | RIKEN Center for Computational Science, Japan | Supercomputer Fugaku - A64FX 48C 2.2GHz, Tofu interconnect D | Fujitsu | 7,630,848 | 442.01 | 537.21 | 29,899 |
| 5 | EuroHPC/CSC, Finland | LUMI - HPE Cray EX235a, AMD Optimized 3rd Generation EPYC 64C 2GHz, AMD Instinct MI250X, Slingshot-11 | HPE | 2,752,704 | 379.70 | 531.51 | 7,107 |
| 6 | EuroHPC/CINECA, Italy | Leonardo - BullSequana XH2000, Xeon Platinum 8358 32C 2.6GHz, NVIDIA A100 SXM4 64 GB, Quad-rail NVIDIA HDR100 Infiniband | EVIDEN | 1,824,768 | 238.70 | 304.47 | 7,404 |
| 7 | DOE/SC/Oak Ridge National Laboratory, United States | Summit - IBM Power System AC922, IBM POWER9 22C 3.07GHz, NVIDIA Volta GV100, Dual-rail Mellanox EDR Infiniband | IBM | 2,414,592 | 148.60 | 200.79 | 10,096 |
| 8 | EuroHPC/BSC, Spain | MareNostrum 5 ACC - BullSequana XH3000, Xeon Platinum 8460Y+ 40C 2.3GHz, NVIDIA H100 64GB, Infiniband NDR200 | EVIDEN | 680,960 | 138.20 | 265.57 | 2,560 |
| 9 | NVIDIA Corporation, United States | Eos NVIDIA DGX SuperPOD - NVIDIA DGX H100, Xeon Platinum 8480C 56C 3.8GHz, NVIDIA H100, Infiniband NDR400 | Nvidia | 485,888 | 121.40 | 188.65 | |
| 10 | DOE/NNSA/LLNL, United States | Sierra - IBM Power System AC922, IBM POWER9 22C 3.1GHz, NVIDIA Volta GV100, Dual-rail Mellanox EDR Infiniband | IBM / NVIDIA / Mellanox | 1,572,480 | 94.64 | 125.71 | 7,438 |

  • The No. 6 system Leonardo is installed at a different EuroHPC site, at CINECA in Italy. It is an Atos BullSequana XH2000 system with Xeon Platinum 8358 32C 2.6GHz as main processors, NVIDIA A100 SXM4 64 GB as accelerators, and Quad-rail NVIDIA HDR100 Infiniband as interconnect. It achieved a Linpack performance of 238.7 Pflop/s.

  • Summit, an IBM-built system at the Oak Ridge National Laboratory (ORNL) in Tennessee, USA, is now listed at the No. 7 spot worldwide with a performance of 148.6 Pflop/s on the HPL benchmark, which is used to rank the TOP500 list. Summit has 4,356 nodes, each housing two POWER9 CPUs with 22 cores each and six NVIDIA Tesla V100 GPUs, each with 80 streaming multiprocessors (SMs). The nodes are linked together with a Mellanox dual-rail EDR InfiniBand network.

  • The MareNostrum 5 ACC system is new at No. 8 and is installed at the EuroHPC/Barcelona Supercomputing Center in Spain. This BullSequana XH3000 system uses Xeon Platinum 8460Y+ processors with NVIDIA H100 accelerators and an Infiniband NDR200 interconnect. It achieved an HPL performance of 138.2 Pflop/s.

  • The new Eos system listed at No. 9 is an NVIDIA DGX SuperPOD based system at NVIDIA, USA. It is based on the NVIDIA DGX H100 with Xeon Platinum 8480C processors, NVIDIA H100 accelerators, and Infiniband NDR400, and it achieves 121.4 Pflop/s.

  • Sierra, a system at the Lawrence Livermore National Laboratory, CA, USA, is at No. 10. Its architecture is very similar to that of the No. 7 system Summit. It is built with 4,320 nodes, each with two POWER9 CPUs and four NVIDIA Tesla V100 GPUs. Sierra achieved 94.6 Pflop/s; a short sketch after this list shows how its core count in the table follows from this node configuration.
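
For readers curious how the "Cores" column relates to these node descriptions: for the Volta-based entries the TOP500 counts each GPU's streaming multiprocessors as cores alongside the CPU cores (the V100's 80 SMs are mentioned in the Summit entry above). A minimal sketch of that arithmetic for Sierra, assuming exactly the node configuration quoted above:

```python
# Sanity check: Sierra's TOP500 core count from its node description.
# Assumes the list counts each V100 GPU's 80 streaming multiprocessors as cores.
nodes = 4_320
cpu_cores_per_node = 2 * 22        # two 22-core POWER9 CPUs per node
gpu_sms_per_node = 4 * 80          # four NVIDIA V100 GPUs, 80 SMs each
total_cores = nodes * (cpu_cores_per_node + gpu_sms_per_node)
print(f"{total_cores:,}")          # -> 1,572,480, matching Sierra's table entry
```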

Highlights from the List

  • A total of 185 systems on the list are using accelerator/co-processor technology, up from 184 six months ago. 78 of these use NVIDIA Ampere chips, 10 use NVIDIA Hopper chips, and 64 use NVIDIA Volta chips.

  • Intel continues to provide the processors for the largest share (67.80 %) of TOP500 systems, down from 72.00 % six months ago. 140 systems (28.00 %) on the current list use AMD processors, up from 24.20 % six months ago.

  • The entry level to the list moved up to the 2.01 Pflop/s mark on the Linpack benchmark.

  • The last system on the newest list was listed at position 456 in the previous TOP500.

  • The total combined performance of all 500 systems on the list exceeds the exaflop barrier at 7.01 Eflop/s, up from 5.24 Eflop/s six months ago.

  • The entry point for the TOP100 increased to 7.84 Pflop/s.

  • The average concurrency level in the TOP500 is 212,027 cores per system, up from 190,919 six months ago.

General Trends

Installations by countries/regions:

HPC manufacturer:

Interconnect Technologies:

Processor Technologies:

Green500

HPCG Results

HPL-MxP Results

On the HPL-MxP (formerly HPL-AI) benchmark, which measures performance for mixed-precision calculations, Frontier already demonstrated 9.95 Exaflop/s!

The HPL-MxP benchmark seeks to highlight the use of mixed-precision computations. Traditional HPC uses 64-bit floating point computations. Today we see hardware with various levels of floating point precision: 32-bit, 16-bit, and even 8-bit. The HPL-MxP benchmark demonstrates that much higher performance is possible by using mixed precision during the computation (see the Top 5 from the HPL-MxP benchmark below), and that, using mathematical techniques, the same accuracy can be achieved with the mixed-precision approach as with straight 64-bit precision.
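
The mathematical technique in question is iterative refinement: do the expensive O(n³) factorization in a fast low precision, then recover full accuracy with a few cheap O(n²) correction steps in 64-bit arithmetic. The sketch below is a minimal NumPy/SciPy illustration of that idea, not the actual HPL-MxP implementation (which typically runs the factorization in half precision on GPU tensor cores).

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Minimal sketch of a mixed-precision solve with iterative refinement:
# factorize once in single precision, then refine the solution in double.
rng = np.random.default_rng(0)
n = 2000
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test system
b = rng.standard_normal(n)

# Expensive O(n^3) step done in low (here: 32-bit) precision.
lu, piv = lu_factor(A.astype(np.float32))

# Cheap O(n^2) refinement steps restore 64-bit accuracy.
x = lu_solve((lu, piv), b.astype(np.float32)).astype(np.float64)
for _ in range(5):
    r = b - A @ x                                  # residual computed in float64
    x += lu_solve((lu, piv), r.astype(np.float32)).astype(np.float64)

rel_residual = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
print(rel_residual)                                # ~1e-15, i.e. double-precision accuracy
```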

| Rank (HPL-MxP) | Site | Computer | Cores | HPL-MxP (Eflop/s) | TOP500 Rank | HPL Rmax (Eflop/s) | Speedup of HPL-MxP over HPL |
|----------------|------|----------|-------|-------------------|-------------|--------------------|-----------------------------|
| 1 | DOE/SC/ORNL, USA | Frontier, HPE Cray EX235a | 8,699,904 | 9.950 | 1 | 1.1940 | 8.3 |
| 2 | EuroHPC/CSC, Finland | LUMI, HPE Cray EX235a | 2,752,704 | 2.350 | 5 | 0.3797 | 6.18 |
| 3 | RIKEN, Japan | Fugaku, Fujitsu A64FX | 7,630,848 | 2.000 | 4 | 0.4420 | 4.5 |
| 4 | EuroHPC/CINECA, Italy | Leonardo, Bull Sequana XH2000 | 1,824,768 | 1.842 | 6 | 0.2387 | 7.7 |
| 5 | DOE/SC/ORNL, USA | Summit, IBM AC922 POWER9 | 2,414,592 | 1.411 | 7 | 0.1486 | 9.5 |
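
The rightmost column is simply the ratio of the HPL-MxP score to the regular HPL Rmax. A few illustrative lines of Python, using only the figures from the table above, reproduce it (the table quotes LUMI's value to two decimals):

```python
# Speedup of HPL-MxP over HPL = HPL-MxP score / HPL Rmax (both in Eflop/s).
entries = {
    "Frontier": (9.950, 1.1940),
    "LUMI":     (2.350, 0.3797),
    "Fugaku":   (2.000, 0.4420),
    "Leonardo": (1.842, 0.2387),
    "Summit":   (1.411, 0.1486),
}
for name, (mxp, hpl) in entries.items():
    print(f"{name}: {mxp / hpl:.1f}x")   # -> 8.3x, 6.2x, 4.5x, 7.7x, 9.5x
```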

About the TOP500 List

The first version of what became today’s TOP500 list started as an exercise for a small conference in Germany in June 1993. Out of curiosity, the authors decided to revisit the list in November 1993 to see how things had changed. About that time they realized they might be onto something and decided to continue compiling the list, which is now a much-anticipated, much-watched and much-debated twice-yearly event.