Berkeley Lab’s National Energy Research Scientific Computing Center (NERSC) has officially launched Perlmutter (aka NERSC-9), the GPU-accelerated supercomputer built by HPE in collaboration with AMD and Nvidia. NERSC researchers report that the GPUs deliver speedups of up to 20X, a boost that shrinks workflows from months or weeks down to hours.
Wahid Bhimji, lead for NERSC’s data and analytics services group, said in an Nvidia blog post:
AI for science is a growth area at the U.S. Department of Energy, where proof of concepts are moving into production use cases in areas like particle physics, materials science, and bioenergy. People are exploring larger and larger neural network models, and there’s a demand for access to more powerful resources, so Perlmutter, with its A100 GPUs, all-flash file system, and streaming data capabilities, is well-timed to meet this need for AI.
The system is the namesake of Berkeley Lab astrophysicist Saul Perlmutter, who shared the 2011 Physics Nobel Prize for his contributions to research showing that the universe’s expansion is accelerating. Fittingly, one of the Perlmutter supercomputer’s initial use cases will support the Dark Energy Spectroscopic Instrument (DESI), which is probing dark energy’s effects on the universe’s expansion.
The Perlmutter system will process data from DESI – which can capture up to 5,000 galaxies in a single exposure – to map the visible universe spanning 11 billion light-years. Researchers also need to assess the expensive instrument’s data from the previous night to know where to point it next; Perlmutter can dramatically speed up this step by analyzing dozens of exposures quickly enough to provide feedback before the next nightly cycle.
Materials science should see similar benefits, with Perlmutter paving the way for advances in batteries and biofuels. Applications like Quantum Espresso can combine Perlmutter’s machine learning capabilities with traditional simulation, enabling researchers to study more atoms over longer time scales.
Brandon Cook, a NERSC applications performance specialist, said:
In the past, it was impossible to do fully atomistic simulations of big systems like battery interfaces, but now scientists plan to use Perlmutter to do just that.
Dr. Perlmutter said:
This is an exciting time to be combining the power of supercomputer facilities with science, and that is partly because science has developed the ability to collect huge amounts of data and bring them all to bear at one time. This new supercomputer is exactly what we need to handle these datasets. As a result, we are expecting to find discoveries in cosmology, microbiology, genetics, climate change, material sciences, and pretty much any other field you can think of.
The HPE Cray EX supercomputer harnesses 6,159 Nvidia A100 GPUs and ~1,500 AMD Milan CPUs to deliver nearly 3.8 exaflops of theoretical “AI performance” (see endnote) or about 60 petaflops of peak double-precision (standard FP64) HPC performance.
Nvidia reported that Quantum Espresso, BerkeleyGW, and NWChem all could leverage Nvidia’s FP64 Tensor Cores, unlocking double the performance of the standard FP64 format — 19.5 teraflops versus 9.7 teraflops (peak theoretical) per GPU. (Nvidia reports that Perlmutter provides 120 petaflops of peak FP64 Tensor Core performance.)
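The system-wide peaks follow directly from the per-GPU figures quoted above. As a back-of-the-envelope check (using only the GPU count and per-A100 peak rates cited in this article):

```python
# Sanity check of Perlmutter's quoted peak figures from the
# per-GPU numbers cited above. All rates in teraflops (TFLOPS).
NUM_GPUS = 6159

FP64_PER_GPU = 9.7          # standard FP64 peak per A100
FP64_TENSOR_PER_GPU = 19.5  # FP64 Tensor Core peak per A100

# Convert aggregate teraflops to petaflops (1 PF = 1000 TF).
fp64_total = NUM_GPUS * FP64_PER_GPU / 1000
fp64_tensor_total = NUM_GPUS * FP64_TENSOR_PER_GPU / 1000

print(f"Standard FP64 peak:    {fp64_total:.0f} PF")        # ~60 PF
print(f"FP64 Tensor Core peak: {fp64_tensor_total:.0f} PF")  # ~120 PF
```

The totals land at roughly 60 petaflops of standard FP64 and 120 petaflops via the FP64 Tensor Cores, matching the figures NERSC and Nvidia report.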
The first phase of Perlmutter spans 12 GPU-accelerated Cray EX cabinets (aka “Shasta”) housing more than 1,500 nodes, plus a 35-petabyte all-flash parallel file system (HPE E1000). According to NERSC, the Lustre file system will move data at more than 5 terabytes/sec, making it the fastest storage system of its kind.
The Perlmutter system is direct liquid-cooled and uses HPE’s Cray-developed Slingshot interconnect technology.
A second CPU-only phase is planned for later this year. Phase 2 adds 12 CPU cabinets with more than 3,000 nodes, equipped with two AMD Milan CPUs with 512GB of memory per node. According to NERSC, the Phase 2 system also adds 20 more login nodes and four large memory nodes.
On the software side, Perlmutter users will have access to the standard Nvidia HPC SDK toolkit, and support for OpenMP is forthcoming through a joint development effort with NERSC.
Python programmers will be able to use RAPIDS, Nvidia’s open software suite for GPU-enabled data science.
The Perlmutter system will play a vital role in advancing scientific research in the United States. It is also central to several critical technologies, including artificial intelligence, advanced computing, and data science. The system will be a powerful tool in the fight against climate change and will be heavily used in climate and environmental studies, clean energy technologies, microelectronics and semiconductors, and quantum information science. Plus, it’s the gateway to even faster supercomputers.
Planning for the follow-ons to Perlmutter, codenamed NERSC-10 and NERSC-11, is already underway, and the team is looking ahead to quantum computing.
NERSC Director Sudip Dosanjh said during the Perlmutter launch virtual dedication ceremony:
Systems take years and years for us to design and deploy. It’s pretty clear that we’ll have more heterogeneous systems as we enter the post-Moore’s law era. We’re looking at different types of accelerators. I don’t think that it’s likely that NERSC-10 will have a quantum accelerator, but NERSC-11 indeed might. Half the codes that run at NERSC solve some quantum mechanical problem, and that part of the workload might benefit from a quantum accelerator.
With NERSC-10, we’re going to focus on end-to-end DOE Office of Science workflows and hopefully enable new modes of scientific discovery through the integration of experiments, data analysis, and simulation. And so, we want to make sure that the scientists can use AI to analyze the data, and we also want to use AI to manage the system to increase the reliability of the system and the energy efficiency of the system. And in addition, we have a goal of using AI to reconfigure NERSC-10 to accelerate workflows.
The future has arrived.