Highlights - November 2024

This is the 64th edition of the TOP500.

The 64th edition of the TOP500 shows El Capitan as the new No. 1 system.

With El Capitan, Frontier, and Aurora, there are now three Exascale systems leading the TOP500. All three are installed at DOE laboratories in the USA.

The El Capitan system at the Lawrence Livermore National Laboratory, California, USA is the new No. 1 system on the TOP500. The HPE Cray EX255a system was measured with 1.742 Exaflop/s on the HPL benchmark.

El Capitan has 11,039,616 cores and is based on AMD 4th generation EPYC processors with 24 cores at 1.8 GHz and AMD Instinct MI300A accelerators. It uses the Cray Slingshot-11 network for data transfer and achieves an energy efficiency of 60.3 Gigaflops/Watt. El Capitan is the third system to exceed the Exaflop mark on the HPL benchmark.

The Frontier system at the Oak Ridge National Laboratory, Tennessee, USA is now the No. 2 system on the TOP500. Frontier has been remeasured with an HPL score of 1.353 Exaflop/s.

Frontier is based on the HPE Cray EX235a architecture and is equipped with AMD 3rd generation EPYC 64C 2GHz processors. The system has 9,066,176 total cores and also relies on Cray’s Slingshot-11 network for data transfer.

The Aurora system at the Argonne Leadership Computing Facility, Illinois, USA is currently being commissioned and was submitted with a preliminary measurement achieving 1.012 Exaflop/s on the HPL benchmark which secured it the No. 3 spot on the TOP500.

Aurora is built by Intel based on the HPE Cray EX - Intel Exascale Compute Blade which uses Intel Xeon CPU Max Series processors and Intel Data Center GPU Max Series accelerators which communicate through Cray’s Slingshot-11 network interconnect.

Other changes in the TOP 10 include the new HPC6 system at No. 5, an upgrade to the Alps system now at No. 7, and the new Tuolumne system at No. 10 which is a sister system to El Capitan.

Here is a summary of the systems in the Top 10:

  • The El Capitan system at the Lawrence Livermore National Laboratory, California, USA is the new No. 1 system on the TOP500. The HPE Cray EX255a system was measured with 1.742 Exaflop/s on the HPL benchmark. El Capitan has 11,039,616 cores and is based on AMD 4th generation EPYC™ processors with 24 cores at 1.8 GHz and AMD Instinct™ MI300A accelerators. It uses the Cray Slingshot 11 network for data transfer and achieves an energy efficiency of 60.3 Gigaflops/watt.

  • Frontier is now the No. 2 system in the TOP500. This HPE Cray EX system was the first US system with a performance exceeding one Exaflop/s. It is installed at the Oak Ridge National Laboratory (ORNL) in Tennessee, USA, where it is operated for the Department of Energy (DOE). It has now achieved 1.353 Exaflop/s using 9,066,176 cores. The HPE Cray EX architecture combines 3rd Gen AMD EPYC™ CPUs optimized for HPC and AI with AMD Instinct™ MI250X accelerators and a Slingshot-11 interconnect.

  • Aurora is currently the No. 3 with a preliminary HPL score of 1.012 Exaflop/s. It is installed at the Argonne Leadership Computing Facility, Illinois, USA, where it is also operated for the Department of Energy (DOE). This new Intel system is based on HPE Cray EX - Intel Exascale Compute Blades. It uses Intel Xeon CPU Max Series processors, Intel Data Center GPU Max Series accelerators, and a Slingshot-11 interconnect.

  • Eagle, the No. 4 system, is installed by Microsoft in its Azure cloud. This Microsoft NDv5 system is based on Xeon Platinum 8480C processors and NVIDIA H100 accelerators and achieved an HPL score of 561.2 Petaflop/s.

  • The new No. 5 system is called HPC6 and is installed at the Eni S.p.A. center in Ferrera Erbognone, Italy. It is another HPE Cray EX235a system with 3rd Gen AMD EPYC™ CPUs optimized for HPC and AI, AMD Instinct™ MI250X accelerators, and a Slingshot-11 interconnect. It achieved 477.9 Petaflop/s.

Rank | Site | System | Cores | Rmax (PFlop/s) | Rpeak (PFlop/s) | Power (kW)
1 | DOE/NNSA/LLNL, United States | El Capitan - HPE Cray EX255a, AMD 4th Gen EPYC 24C 1.8GHz, AMD Instinct MI300A, Slingshot-11, TOSS (HPE) | 11,039,616 | 1,742.00 | 2,746.38 | 29,581
2 | DOE/SC/Oak Ridge National Laboratory, United States | Frontier - HPE Cray EX235a, AMD Optimized 3rd Generation EPYC 64C 2GHz, AMD Instinct MI250X, Slingshot-11, HPE Cray OS (HPE) | 9,066,176 | 1,353.00 | 2,055.72 | 24,607
3 | DOE/SC/Argonne National Laboratory, United States | Aurora - HPE Cray EX - Intel Exascale Compute Blade, Xeon CPU Max 9470 52C 2.4GHz, Intel Data Center GPU Max, Slingshot-11 (Intel) | 9,264,128 | 1,012.00 | 1,980.01 | 38,698
4 | Microsoft Azure, United States | Eagle - Microsoft NDv5, Xeon Platinum 8480C 48C 2GHz, NVIDIA H100, NVIDIA InfiniBand NDR (Microsoft Azure) | 2,073,600 | 561.20 | 846.84 |
5 | Eni S.p.A., Italy | HPC6 - HPE Cray EX235a, AMD Optimized 3rd Generation EPYC 64C 2GHz, AMD Instinct MI250X, Slingshot-11, RHEL 8.9 (HPE) | 3,143,520 | 477.90 | 606.97 | 8,461
6 | RIKEN Center for Computational Science, Japan | Supercomputer Fugaku - A64FX 48C 2.2GHz, Tofu interconnect D (Fujitsu) | 7,630,848 | 442.01 | 537.21 | 29,899
7 | Swiss National Supercomputing Centre (CSCS), Switzerland | Alps - HPE Cray EX254n, NVIDIA Grace 72C 3.1GHz, NVIDIA GH200 Superchip, Slingshot-11, HPE Cray OS (HPE) | 2,121,600 | 434.90 | 574.84 | 7,124
8 | EuroHPC/CSC, Finland | LUMI - HPE Cray EX235a, AMD Optimized 3rd Generation EPYC 64C 2GHz, AMD Instinct MI250X, Slingshot-11 (HPE) | 2,752,704 | 379.70 | 531.51 | 7,107
9 | EuroHPC/CINECA, Italy | Leonardo - BullSequana XH2000, Xeon Platinum 8358 32C 2.6GHz, NVIDIA A100 SXM4 64 GB, Quad-rail NVIDIA HDR100 InfiniBand (EVIDEN) | 1,824,768 | 241.20 | 306.31 | 7,494
10 | DOE/NNSA/LLNL, United States | Tuolumne - HPE Cray EX255a, AMD 4th Gen EPYC 24C 1.8GHz, AMD Instinct MI300A, Slingshot-11, TOSS (HPE) | 1,161,216 | 208.10 | 288.88 | 3,387
  • Fugaku, the No. 6 system, is installed at the RIKEN Center for Computational Science (R-CCS) in Kobe, Japan. It has 7,630,848 cores which allowed it to achieve an HPL benchmark score of 442 Petaflop/s. It remains the fastest system on the HPCG benchmark with 16 Petaflop/s.

  • After a recent upgrade the Alps system installed at the Swiss National Supercomputing Centre (CSCS) in Switzerland is now at No. 7. It is an HPE Cray EX254n system with NVIDIA Grace 72C and NVIDIA GH200 Superchip and a Slingshot-11 interconnect. After its upgrade it achieved 434.9 Petaflop/s.

  • The LUMI system, another HPE Cray EX system, installed at the EuroHPC center at CSC in Finland, is at No. 8 with a performance of 380 Petaflop/s. The European High-Performance Computing Joint Undertaking (EuroHPC JU) is pooling European resources to develop top-of-the-range Exascale supercomputers for processing big data. One of the pan-European pre-Exascale supercomputers, LUMI, is located in CSC’s data center in Kajaani, Finland.

  • The No. 9 system Leonardo is installed at another EuroHPC site, CINECA in Italy. It is an Atos BullSequana XH2000 system with Xeon Platinum 8358 32C 2.6GHz as main processors, NVIDIA A100 SXM4 64 GB as accelerators, and Quad-rail NVIDIA HDR100 InfiniBand as interconnect. It achieved an HPL performance of 241.2 Petaflop/s.

  • Rounding out the TOP10 is the new Tuolumne system which is also installed at the Lawrence Livermore National Laboratory, California, USA. It is a sister system to the new No. 1 system El Capitan with identical architecture. It achieved 208.1 Petaflop/s on its own.

Highlights from the List

  • A total of 211 systems on the list use accelerator/co-processor technology, up from 194 six months ago. 72 of these use NVIDIA Ampere chips, 60 use NVIDIA Hopper chips, and 33 use NVIDIA Volta chips.

  • Intel continues to provide the processors for the largest share (62.0 percent) of TOP500 systems, down from 63.0 percent six months ago. AMD processors are used in 161 systems (32.2 percent) on the current list, up from 31.2 percent six months ago.

  • The entry level to the list moved up to the 2.31 Pflop/s mark on the Linpack benchmark.

  • The last system on the newest list was listed at position 454 in the previous TOP500.

  • The total combined performance of all 500 systems remains above the Exaflop barrier at 11.73 Exaflop/s (Eflop/s), up from 8.21 Eflop/s six months ago.

  • The entry point for the TOP100 increased to 12.21 Pflop/s.

  • The average concurrency level in the TOP500 is 258,007 cores per system, up from 229,426 six months ago.

General Trends

Installations by countries/regions:

HPC manufacturer:

Interconnect Technologies:

Processor Technologies:

Green500

In the Green500, the systems of the TOP500 are ranked by how much computational performance they deliver on the HPL benchmark per watt of electrical power consumed. This electrical power efficiency is measured in Gigaflops/Watt. The ranking is driven not by the size of a system but by its technology, so the ranking order looks very different from the TOP500. The computational efficiency of a system tends to decrease slightly with system size, which among technologically identical systems gives smaller systems an advantage. Here are the top 10 of the Green500 ranking:
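The metric itself is simple arithmetic: HPL Rmax divided by measured power, converted to GFlops/Watt. A small sketch (the helper function is ours, not part of any TOP500 tooling; since the table values are rounded and official efficiencies come from their own power measurements, the result only approximates El Capitan's listed 60.3 GFlops/Watt):

```python
# Green500 metric: HPL performance per watt, expressed in GFlops/Watt.

def gflops_per_watt(rmax_pflops: float, power_kw: float) -> float:
    """Energy efficiency from Rmax in PFlop/s and power in kW."""
    # PFlop/s -> GFlop/s is *1e6; kW -> W is *1e3.
    return (rmax_pflops * 1e6) / (power_kw * 1e3)

# El Capitan's rounded TOP500 figures: 1,742 PFlop/s at 29,581 kW.
eff = gflops_per_watt(1742.0, 29581)
print(round(eff, 1))  # ~58.9, close to the officially listed 60.3 GFlops/Watt
```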

  • The system claiming the No. 1 spot in the Green500 is, for the second time, the JEDI - JUPITER Exascale Development Instrument at EuroHPC/FZJ in Germany. The system has 19,584 total cores, an HPL score of 4.5 PFlop/s, and an efficiency of 72.7 GFlops/Watt. JEDI is a BullSequana XH3000 system with Grace Hopper Superchip 72C 3GHz, NVIDIA GH200 Superchip, and Quad-Rail NVIDIA InfiniBand NDR200.

  • In second place is the ROMEO-2025 system at the ROMEO HPC Center - Champagne-Ardenne in France. With 47,328 total cores it achieved an HPL score of 9.863 PFlop/s and an efficiency of 70.9 GFlops/Watt. The architecture of this system is identical to that of the No. 1 system JEDI, but as it is more than twice as large, its energy efficiency is slightly lower.

  • The No. 3 spot was taken by the new Adastra 2 system at the Grand Equipement National de Calcul Intensif - Centre Informatique National de l'Enseignement Supérieur (GENCI-CINES) in France. This is an HPE Cray EX255a system with AMD 4th Gen EPYC 24-core 1.8GHz processors, AMD Instinct MI300A accelerators, and Slingshot-11, running RHEL. With 16,128 total cores it achieved 2.529 PFlop/s on HPL and an efficiency of 69.1 GFlops/Watt.

The data collection and curation of the Green500 project has been integrated with the TOP500 project. This allows submissions of all data through a single webpage at http://top500.org/submit.

Rank | TOP500 Rank | System | Site | Cores | Rmax (PFlop/s) | Power (kW) | Energy Efficiency (GFlops/Watt)
1 | 224 | JEDI - BullSequana XH3000, Grace Hopper Superchip 72C 3GHz, NVIDIA GH200 Superchip, Quad-Rail NVIDIA InfiniBand NDR200 (ParTec/EVIDEN) | EuroHPC/FZJ, Germany | 19,584 | 4.50 | 67 | 72.733
2 | 122 | ROMEO-2025 - BullSequana XH3000, Grace Hopper Superchip 72C 3GHz, NVIDIA GH200 Superchip, Quad-Rail NVIDIA InfiniBand NDR200, Red Hat Enterprise Linux (EVIDEN) | ROMEO HPC Center - Champagne-Ardenne, France | 47,328 | 9.86 | 160 | 70.912
3 | 442 | Adastra 2 - HPE Cray EX255a, AMD 4th Gen EPYC 24C 1.8GHz, AMD Instinct MI300A, Slingshot-11, RHEL (HPE) | Grand Equipement National de Calcul Intensif - Centre Informatique National de l'Enseignement Supérieur (GENCI-CINES), France | 16,128 | 2.53 | 37 | 69.098
4 | 155 | Isambard-AI phase 1 - HPE Cray EX254n, NVIDIA Grace 72C 3.1GHz, NVIDIA GH200 Superchip, Slingshot-11 (HPE) | University of Bristol, United Kingdom | 34,272 | 7.42 | 117 | 68.835
5 | 51 | Capella - Lenovo ThinkSystem SD665-N V3, AMD EPYC 9334 32C 2.7GHz, NVIDIA H100 SXM5 94GB, InfiniBand NDR200, AlmaLinux 9.4 (MEGWARE) | TU Dresden, ZIH, Germany | 85,248 | 24.06 | 445 | 68.053
6 | 18 | JETI - JUPITER Exascale Transition Instrument - BullSequana XH3000, Grace Hopper Superchip 72C 3GHz, NVIDIA GH200 Superchip, Quad-Rail NVIDIA InfiniBand NDR200, RedHat Linux and Modular Operating System (ParTec/EVIDEN) | EuroHPC/FZJ, Germany | 391,680 | 83.14 | 1,311 | 67.963
7 | 69 | Helios GPU - HPE Cray EX254n, NVIDIA Grace 72C 3.1GHz, NVIDIA GH200 Superchip, Slingshot-11 (HPE) | Cyfronet, Poland | 89,760 | 19.14 | 317 | 66.948
8 | 371 | Henri - ThinkSystem SR670 V2, Intel Xeon Platinum 8362 32C 2.8GHz, NVIDIA H100 80GB PCIe, InfiniBand HDR (Lenovo) | Flatiron Institute, United States | 8,288 | 2.88 | 44 | 65.396
9 | 340 | HoreKa-Teal - ThinkSystem SD665-N V3, AMD EPYC 9354 32C 3.25GHz, NVIDIA H100 94GB SXM5, InfiniBand NDR200 (Lenovo) | Karlsruher Institut für Technologie (KIT), Germany | 13,616 | 3.12 | 50 | 62.964
10 | 49 | rzAdams - HPE Cray EX255a, AMD 4th Gen EPYC 24C 1.8GHz, AMD Instinct MI300A, Slingshot-11, TOSS (HPE) | DOE/NNSA/LLNL, United States | 129,024 | 24.38 | 388 | 62.803

HPCG Results

The TOP500 list now includes the High-Performance Conjugate Gradient (HPCG) Benchmark results.

  • Supercomputer Fugaku remains the leader on the HPCG benchmark with 16 PFlop/s. It has held the top position since June 2020.

  • The DOE system Frontier at ORNL remains in second position with 14.05 Petaflop/s on HPCG.

  • The third position was again captured by the Aurora system with 5.61 Petaflop/s on HPCG.

  • There are no HPCG submissions for El Capitan yet.

Rank | TOP500 Rank | System | Site | Cores | Rmax (PFlop/s) | HPCG (PFlop/s)
1 | 6 | Supercomputer Fugaku - A64FX 48C 2.2GHz, Tofu interconnect D | RIKEN Center for Computational Science, Japan | 7,630,848 | 442.01 | 16.00
2 | 2 | Frontier - HPE Cray EX235a, AMD Optimized 3rd Generation EPYC 64C 2GHz, AMD Instinct MI250X, Slingshot-11, HPE Cray OS | DOE/SC/Oak Ridge National Laboratory, United States | 9,066,176 | 1,353.00 | 14.05
3 | 3 | Aurora - HPE Cray EX - Intel Exascale Compute Blade, Xeon CPU Max 9470 52C 2.4GHz, Intel Data Center GPU Max, Slingshot-11 | DOE/SC/Argonne National Laboratory, United States | 9,264,128 | 1,012.00 | 5.61
4 | 8 | LUMI - HPE Cray EX235a, AMD Optimized 3rd Generation EPYC 64C 2GHz, AMD Instinct MI250X, Slingshot-11 | EuroHPC/CSC, Finland | 2,752,704 | 379.70 | 4.59
5 | 7 | Alps - HPE Cray EX254n, NVIDIA Grace 72C 3.1GHz, NVIDIA GH200 Superchip, Slingshot-11, HPE Cray OS | Swiss National Supercomputing Centre (CSCS), Switzerland | 2,121,600 | 434.90 | 3.67
6 | 9 | Leonardo - BullSequana XH2000, Xeon Platinum 8358 32C 2.6GHz, NVIDIA A100 SXM4 64 GB, Quad-rail NVIDIA HDR100 InfiniBand | EuroHPC/CINECA, Italy | 1,824,768 | 241.20 | 3.11
7 | 19 | Perlmutter - HPE Cray EX235n, AMD EPYC 7763 64C 2.45GHz, NVIDIA A100 SXM4 40 GB, Slingshot-11 | DOE/SC/LBNL/NERSC, United States | 888,832 | 79.23 | 1.91
8 | 14 | Sierra - IBM Power System AC922, IBM POWER9 22C 3.1GHz, NVIDIA Volta GV100, Dual-rail Mellanox EDR InfiniBand | DOE/NNSA/LLNL, United States | 1,572,480 | 94.64 | 1.80
9 | 23 | Selene - NVIDIA DGX A100, AMD EPYC 7742 64C 2.25GHz, NVIDIA A100, Mellanox HDR InfiniBand | NVIDIA Corporation, United States | 555,520 | 63.46 | 1.62
10 | 33 | JUWELS Booster Module - Bull Sequana XH2000, AMD EPYC 7402 24C 2.8GHz, NVIDIA A100, Mellanox HDR InfiniBand, ParTec ParaStation ClusterSuite | Forschungszentrum Juelich (FZJ), Germany | 449,280 | 44.12 | 1.28

HPL-MxP Results

On the HPL-MxP benchmark, which measures performance for mixed-precision calculations, the Aurora system achieved 11.6 Exaflop/s, narrowly ahead of Frontier at 11.4 Exaflop/s. This is the same situation as last time: both machines submitted new results and Aurora came out ahead for the second time.

The HPL-MxP benchmark seeks to highlight the use of mixed-precision computations. Traditional HPC uses 64-bit floating point computations. Today we see hardware with various levels of floating point precision: 32-bit, 16-bit, and even 8-bit. The HPL-MxP benchmark demonstrates that by using mixed precision during the computation much higher performance is possible (see the Top 5 from the HPL-MxP benchmark), and that, using mathematical techniques, the same accuracy can be achieved with the mixed-precision approach as with straight 64-bit precision.
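The "mathematical techniques" referred to above are typically iterative refinement: do the expensive factorization once in low precision, then recover full 64-bit accuracy with cheap high-precision residual corrections. A minimal NumPy sketch of the idea, with float32 standing in for the fp16/bf16 arithmetic real HPL-MxP runs use, and a repeated low-precision solve standing in for reuse of a stored LU factorization:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Well-conditioned test matrix: random entries plus a dominant diagonal.
A = rng.standard_normal((n, n)) + n * np.eye(n)
b = rng.standard_normal(n)

A32 = A.astype(np.float32)  # low-precision copy used for all solves

# Initial solve entirely in low precision: only ~single-precision accuracy.
x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)

for _ in range(5):
    r = b - A @ x                    # residual computed in float64
    # Correction solved in low precision (stand-in for reusing the LU factors).
    d = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
    x += d

# Relative residual is now at float64 roundoff level.
print(np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```

Each refinement step shrinks the error by roughly the single-precision accuracy factor, so a handful of cheap iterations recovers double-precision results from a low-precision factorization.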

Rank | Site | Computer | Cores | HPL Rmax (Eflop/s) | TOP500 Rank | HPL-MxP (Eflop/s) | Speedup
1 | DOE/SC/ANL, USA | Aurora, HPE Cray EX, Intel Max 9470 52C 2.4 GHz, Intel GPU Max, Slingshot-11 | 8,159,232 | 1.012 | 3 | 11.6 | 11.5
2 | DOE/SC/ORNL, USA | Frontier, HPE Cray EX235a, AMD Zen-3 (Milan) 64C 2GHz, AMD MI250X, Slingshot-11 | 8,560,640 | 1.353 | 2 | 11.4 | 8.4
3 | EuroHPC/CSC, Finland | LUMI, HPE Cray EX235a, AMD Zen-3 (Milan) 64C 2GHz, AMD MI250X, Slingshot-11 | 2,752,704 | 0.380 | 8 | 2.35 | 6.2
4 | RIKEN Center for Computational Science, Japan | Fugaku, Fujitsu A64FX 48C 2.2GHz, Tofu D | 7,630,848 | 0.442 | 6 | 2.0 | 4.5
5 | EuroHPC/CINECA, Italy | Leonardo, BullSequana XH2000, Xeon Platinum 8358 32C 2.6GHz, NVIDIA A100 SXM4 40 GB, Quad-rail NVIDIA HDR100 InfiniBand | 1,824,768 | 0.241 | 9 | 1.8 | 7.5
6 | CII, Institute of Science, Japan | TSUBAME 4.0, HPE Cray XD685, AMD EPYC 9654 96C 2.4GHz, NVIDIA H100 SXM5 94 GB, Mellanox NDR200 | 172,800 | 0.080 | 32 | 0.6 | 7.5
7 | NVIDIA, USA | Selene, DGX SuperPOD, AMD EPYC 7742 64C 2.25 GHz, Mellanox HDR, NVIDIA A100 | 555,520 | 0.063 | 23 | 0.5 | 8.0
8 | DOE/SC/LBNL/NERSC, USA | Perlmutter, HPE Cray EX235n, AMD EPYC 7763 64C 2.45 GHz, Slingshot-10, NVIDIA A100 | 761,856 | 0.068 | 19 | 0.59 | 7.5
9 | Forschungszentrum Juelich (FZJ), Germany | JUWELS Booster Module, Bull Sequana XH2000, AMD EPYC 7402 24C 2.8GHz, Mellanox HDR InfiniBand, NVIDIA A100 | 449,280 | 0.044 | 33 | 0.3 | 6.8
10 | GENCI-CINES, France | Adastra, HPE Cray EX235a, AMD EPYC 64C 2GHz, AMD MI250X, Slingshot-11 | 319,072 | 0.030 | 30 | 0.3 | 6.5

  • This year’s winner of the HPL-MxP category is the Aurora system with 11.6 Exaflop/s.

  • Frontier is now in second place with a 11.4 Exaflop/s score on the HPL-MxP benchmark.

  • Lumi remains in third place with a score of 2.35 Exaflop/s.
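The Speedup column in the table above is not defined in the text; it is simply the HPL-MxP score divided by the standard HPL Rmax. A quick check against the rounded figures listed for the top two systems:

```python
# Speedup = HPL-MxP result / HPL Rmax, both in Exaflop/s.
aurora_speedup = 11.6 / 1.012
frontier_speedup = 11.4 / 1.353

print(round(aurora_speedup, 1), round(frontier_speedup, 1))  # 11.5 8.4
```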

About the TOP500 List

The first version of what became today’s TOP500 list started as an exercise for a small conference in Germany in June 1993. Out of curiosity, the authors decided to revisit the list in November 1993 to see how things had changed. About that time they realized they might be onto something and decided to continue compiling the list, which is now a much-anticipated, much-watched and much-debated twice-yearly event.