The term “supercomputer" refers to a computer that operates at a higher level of performance than a standard computer. Often, this means that the architecture, resources, and components of supercomputers make them extremely powerful, giving them the ability to perform at or near the highest possible operational rate for computers.
Supercomputers contain most of the key components of a typical computer, including at least one processor, peripheral devices, connectors, an operating system, and various applications. The major difference between a supercomputer and a standard computer is sheer processing power.
Traditionally, supercomputers were single, super-fast machines primarily used by enterprise businesses and scientific organizations that needed massive computing power for exceedingly high-speed computations. Today’s supercomputers, however, can consist of tens of thousands of processors that can perform billions—even trillions—of calculations per second.
These days, common applications for supercomputers include weather forecasting, operations control for nuclear reactors, and cryptology. As the cost of supercomputing has declined, modern supercomputers are also being used for market research, online gaming, and virtual and augmented reality applications.
A Brief History of the Supercomputer
In 1964, Seymour Cray and his team of engineers at Control Data Corporation (CDC) created the CDC 6600, the first supercomputer. At the time, the CDC 6600 was 10 times faster than regular computers and three times faster than the next fastest machine, the IBM 7030 Stretch, performing calculations at up to 3 million floating-point operations per second (3 megaFLOPS). Although that is slow by today’s standards, back then it was fast enough to be called a supercomputer.
Known as the “father of supercomputing,” Seymour Cray and his team led the supercomputing industry, releasing the CDC 7600 in 1969 (160 megaFLOPS), the Cray X-MP in 1982 (800 megaFLOPS), and the Cray-2 in 1985 (1.9 gigaFLOPS).
Subsequently, other companies sought to make supercomputers more affordable and developed massively parallel processing (MPP). In 1992, Don Becker and Thomas Sterling, contractors at NASA, built the Beowulf, a supercomputer made from a cluster of ordinary, off-the-shelf computers working together. It was the first supercomputer to use the cluster model.
Today’s supercomputers use both central processing units (CPUs) and graphics processing units (GPUs) that work together to perform calculations. TOP500 lists the Fugaku supercomputer, based in Kobe, Japan, at the RIKEN Center for Computational Science, as the world’s fastest supercomputer, with a processing speed of 442 petaFLOPS.
Supercomputers vs. Regular PCs
Today’s supercomputers aggregate computing power to deliver significantly higher performance than a single desktop or server to solve complex problems in engineering, science, and business.
Unlike regular personal computers, modern supercomputers are made up of massive clusters of servers, with one or more CPUs grouped into compute nodes. Each compute node consists of a processor (or a group of processors) and a block of memory, and a single supercomputer can contain tens of thousands of these nodes. The nodes are interconnected so they can communicate and work together on specific tasks, with processing distributed among, or executed simultaneously across, thousands of processors, as in the sketch below.
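As a rough illustration of that distribution, the sketch below uses mpi4py, a widely used Python binding for MPI on clusters, to split a simple summation across however many processes a job launches. The problem size and the summation task are illustrative assumptions, not details of any particular supercomputer.

```python
# A minimal sketch of distributing work across the processes of a cluster job,
# using mpi4py (a common Python interface to MPI). The problem size N and the
# task (summing a range of numbers) are illustrative assumptions only.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's ID within the job
size = comm.Get_size()   # total number of processes launched

N = 1_000_000            # total amount of work to divide up

# Give each rank its own contiguous slice of the problem.
counts = [N // size + (1 if r < N % size else 0) for r in range(size)]
start = sum(counts[:rank])
local = np.arange(start, start + counts[rank], dtype=np.float64)

# Each process computes its partial result independently and in parallel...
local_sum = local.sum()

# ...and the partial results are combined across the interconnect.
total = comm.reduce(local_sum, op=MPI.SUM, root=0)

if rank == 0:
    print(f"Sum of 0..{N - 1} across {size} processes: {total:.0f}")
```

Run with something like `mpirun -n 4 python sum_sketch.py` (the file name is hypothetical): each process keeps its own slice in its own memory, and only the small partial sums travel over the interconnect, the same pattern supercomputing workloads follow at far larger scale.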
How the Performance of Supercomputers Is Measured
FLOPS (floating-point operations per second) are used to measure the performance of a supercomputer on scientific computations that rely on floating-point arithmetic, i.e., numbers stored with a fractional part and an exponent so that very large and very small values can be represented.

FLOPS are a more precise measure of this kind of performance than millions of instructions per second (MIPS). As noted above, some of today’s fastest supercomputers can perform at over a hundred quadrillion FLOPS, that is, hundreds of petaFLOPS.
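To make the unit concrete, the rough sketch below times a dense matrix multiplication with NumPy and divides the approximate operation count by the elapsed time to estimate a FLOPS figure. The matrix size is an arbitrary assumption, and this is a toy measurement, not the standardized LINPACK benchmark used for TOP500 rankings.

```python
# A toy, single-machine illustration of measuring floating-point throughput:
# time a dense matrix multiplication and divide the approximate operation
# count by the elapsed time. The matrix size is an arbitrary assumption;
# official TOP500 figures come from the standardized LINPACK benchmark.
import time
import numpy as np

n = 2048
a = np.random.rand(n, n)
b = np.random.rand(n, n)

start = time.perf_counter()
c = a @ b                        # dense n x n matrix multiply
elapsed = time.perf_counter() - start

flop_count = 2 * n ** 3          # roughly 2*n^3 floating-point operations
gflops = flop_count / elapsed / 1e9
print(f"Achieved about {gflops:.1f} gigaFLOPS on a {n}x{n} matrix multiply")
```

On an ordinary laptop this typically reports somewhere in the tens to low hundreds of gigaFLOPS, which puts the hundreds of petaFLOPS quoted above, millions of times faster, into perspective.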