What Is a Supercomputer?

The term “supercomputer” refers to a computer that operates at a far higher level of performance than a standard computer. The architecture, resources, and components of supercomputers make them extremely powerful, able to perform at or near the highest operational rate currently possible for computers.

Supercomputers contain most of the key components of a typical computer, including at least one processor, peripheral devices, connectors, an operating system, and various applications. The major difference between a supercomputer and a standard computer is its processing power.

Traditionally, supercomputers were single, super-fast machines used primarily by enterprise businesses and scientific organizations that needed massive computing power for exceedingly high-speed computations. Today’s supercomputers, however, can consist of tens of thousands of processors performing trillions, even quadrillions, of calculations per second.

These days, common applications for supercomputers include weather forecasting, operations control for nuclear reactors, and cryptology. As the cost of supercomputing has declined, modern supercomputers are also being used for market research, online gaming, and virtual and augmented reality applications.

A Brief History of the Supercomputer

In 1964, Seymour Cray and his team of engineers at Control Data Corporation (CDC) created the CDC 6600, the first supercomputer. At the time, the CDC 6600 was 10 times faster than regular computers and three times faster than the next fastest machine, the IBM 7030 Stretch, performing calculations at up to 3 million floating-point operations per second (3 megaFLOPS). Although that’s slow by today’s standards, back then it was fast enough to be called a supercomputer.

Known as the “father of supercomputing,” Seymour Cray and his team led the supercomputing industry, releasing the CDC 7600 in 1969 (160 megaFLOPS), the Cray X-MP in 1982 (800 megaFLOPS), and the Cray 2 in 1985 (1.9 gigaFLOPS).

Subsequently, other companies sought to make supercomputers more affordable and developed massively parallel processing (MPP). In 1994, Don Becker and Thomas Sterling, contractors at NASA, built Beowulf, a supercomputer made from a cluster of commodity computer units working together. It was the first supercomputer to use the cluster model.

Today’s supercomputers use both central processing units (CPUs) and graphics processing units (GPUs) that work together to perform calculations. TOP500 ranked the Fugaku supercomputer, based at the RIKEN Center for Computational Science in Kobe, Japan, as the world’s fastest from 2020 until 2022, with a processing speed of 442 petaFLOPS.

Supercomputers vs. Regular PCs

Today’s supercomputers aggregate computing power to deliver significantly higher performance than a single desktop or server, solving complex problems in engineering, science, and business.

Unlike regular personal computers, modern supercomputers are made up of massive clusters of servers, with one or more CPUs grouped into compute nodes. Each compute node comprises a processor (or a group of processors) and a memory block, and a single supercomputer can contain tens of thousands of nodes. The nodes are interconnected so they can communicate and work together to complete specific tasks, with processes distributed among, or executed simultaneously across, thousands of processors.

How the Performance of Supercomputers Is Measured

FLOPS (floating-point operations per second) are used to measure the performance of supercomputers because scientific computations rely on floating-point arithmetic, i.e., numbers too large or too small to be written out in full, so they are expressed with an exponent.

FLOPS are a more accurate measure than millions of instructions per second (MIPS). As noted above, some of today’s fastest supercomputers perform at hundreds of quadrillions of FLOPS, i.e., hundreds of petaFLOPS.
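
To make the unit concrete, one rough way to estimate a machine’s achieved FLOPS is to time a workload with a known operation count. The following is a minimal Python sketch, not a benchmark from this article; it assumes NumPy is installed and uses the standard approximation that multiplying two n x n matrices costs about 2n^3 floating-point operations:

    # Estimate achieved FLOPS by timing a matrix multiplication.
    # Multiplying two n x n matrices costs roughly 2 * n**3 floating-point ops.
    import time
    import numpy as np

    n = 2048
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)

    start = time.perf_counter()
    c = a @ b
    elapsed = time.perf_counter() - start

    print(f"~{2 * n**3 / elapsed / 1e9:.1f} gigaFLOPS on this machine")

On a typical laptop this lands in the gigaFLOPS range; a 442-petaFLOPS system is millions of times faster.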


How Does a Supercomputer Work?

A supercomputer can contain thousands of nodes that communicate with one another and use parallel processing to solve problems. There are two main approaches to parallel processing: symmetric multiprocessing (SMP) and massively parallel processing (MPP).

In SMP, processors share memory and the I/O bus or data path. SMP is also known as tightly coupled multiprocessing or a “shared everything” system.
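
As a loose illustration of the shared-memory model (it illustrates the programming model, not real SMP hardware scheduling, and assumes NumPy is installed), the Python sketch below has several threads operating directly on a single shared array, so no data is copied between workers:

    # SMP-style "shared everything": every worker thread reads and writes
    # the same address space, so no data is copied between workers.
    import threading
    import numpy as np

    data = np.arange(1_000_000, dtype=np.float64)  # one memory block shared by all
    partials = [0.0] * 4                           # result slots, also shared

    def worker(i, lo, hi):
        partials[i] = data[lo:hi].sum()            # direct access to shared memory

    threads = [threading.Thread(target=worker, args=(i, i * 250_000, (i + 1) * 250_000))
               for i in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print(sum(partials))  # matches data.sum()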

MPP coordinates the processing of a program among multiple processors that simultaneously work on different parts of it. Each processor uses its own operating system and memory, and processors communicate with one another through a messaging interface. MPP can be complex, requiring knowledge of how to partition a common database and assign work among the processors. An MPP system is known as a “loosely coupled” or “shared nothing” system.
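
The contrast shows up clearly in code. In the Python sketch below, a hypothetical example that assumes the mpi4py package and an MPI runtime are available, each process owns a private partition of the data, and partial results move only through explicit messages:

    # MPP-style "shared nothing": each process has private memory and
    # exchanges data only through the messaging interface (MPI here).
    # Run with, e.g.: mpiexec -n 4 python mpp_sum.py
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    # Each rank creates (or would load) only its own partition of the data.
    local = np.arange(rank * 250_000, (rank + 1) * 250_000, dtype=np.float64)

    # Partial sums travel as messages; no process sees another's memory.
    total = comm.reduce(local.sum(), op=MPI.SUM, root=0)
    if rank == 0:
        print(f"global sum across {comm.Get_size()} processes: {total}")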

One benefit of SMP is that it lets organizations serve more users faster by dynamically balancing the workload among processors. SMP systems are considered more suitable than MPP systems for online transaction processing (OLTP), where many users access the same database with relatively simple transactions. MPP is better suited than SMP to applications that need to search several databases in parallel (e.g., decision support systems and data warehouse applications).

Types of Supercomputers

Supercomputers fall into two categories: general purpose and special purpose. General-purpose supercomputers can be further divided into three subcategories:

General-purpose Supercomputers

  • Vector processing computers: Common in scientific computing, most supercomputers of the ’80s and early ’90s were vector computers. They’re not as popular these days, but today’s supercomputers still have CPUs that use some vector processing (a brief sketch follows this list).
  • Tightly connected cluster computers: These are groups of connected computers that work together as a unit and include massively parallel clusters, director-based clusters, two-node clusters, and multi-node clusters. Parallel and director-based clusters are commonly used for high-performance processing, while two-node and multi-node clusters are used for fault tolerance.
  • Commodity computers: These include arrangements of numerous standard personal computers (PCs) connected by high-bandwidth, low-latency local area networks (LANs).
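
As a rough illustration of the vector idea from the first bullet above, the following Python sketch (assuming NumPy; it mimics the vector style in software rather than demonstrating actual vector hardware) expresses the same arithmetic as a scalar loop and as a single whole-array expression:

    # Vector processing in miniature: apply one operation across whole
    # arrays instead of looping over individual elements.
    import numpy as np

    x = np.random.rand(1_000_000)
    y = np.random.rand(1_000_000)

    # Scalar style: one multiply-add per loop iteration (slow in Python).
    z_scalar = [x[i] * 2.0 + y[i] for i in range(len(x))]

    # Vector style: the same multiply-add expressed over the whole array;
    # NumPy dispatches it to optimized, often SIMD, kernels.
    z_vector = x * 2.0 + y

    assert np.allclose(z_scalar, z_vector)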

Special-purpose Supercomputers

Special-purpose supercomputers are built to perform a particular task or achieve a specific goal. They typically use application-specific integrated circuits (ASICs) for better performance (e.g., Deep Blue and Hydra were both built to play chess).

Supercomputer Use Cases

Given their obvious advantages, supercomputers have found wide application in areas such as engineering and scientific research. Use cases include:

  • Weather and climate research: To predict the impact of extreme weather events and to understand climate patterns, as in the National Oceanic and Atmospheric Administration (NOAA) system
  • Oil and gas exploration: To process vast amounts of geophysical seismic data to help find and develop oil reserves
  • Airline and automobile industry: To design flight simulators and simulated automobile environments, as well as to model aerodynamics for the lowest possible drag coefficient
  • Nuclear fusion research: To help design nuclear fusion reactors and to build virtual environments for testing nuclear explosions and weapon ballistics
  • Medical research: To develop new drugs, therapies for cancer and rare genetic disorders, and treatments for COVID-19, as well as for research into the generation and evolution of epidemics and diseases
  • Real-time applications: To maintain online game performance during tournaments and new game releases when there are a lot of users

Supercomputing and HPC

Supercomputing is sometimes used synonymously with high-performance computing (HPC). However, it’s more accurate to say that supercomputing is one HPC solution: the processing of complex and large calculations by supercomputers.

HPC allows you to synchronize data-intensive computations across multiple networked supercomputers. As a result, complex calculations using larger data sets can be processed in far less time than it would take using regular computers. 

Scalable Storage for Supercomputing

Today’s supercomputers are being leveraged in a variety of fields for a variety of purposes. Some of the world’s top technology companies are developing AI supercomputers in anticipation of the role they may play in the rapidly expanding metaverse.

As a result, storage solutions not only need to support rapid retrieval of data for extremely high computation speeds, but they must also be scalable enough to handle the demands of large-scale AI workloads with high performance.

Virtual and augmented reality technologies call for a lot of data, as do supporting technologies such as 5G, machine learning (ML), the internet of things (IoT), and neural networks.

Pure Storage® FlashArray//XL delivers top-tier performance and efficiency for enterprise workloads, while FlashBlade® is the industry's most advanced all-flash storage solution. Both offer a scalable, robust storage solution that can power today’s fastest supercomputers.

They’re both available through Pure as-a-Service™, a managed storage-as-a-service (STaaS) solution with a simple subscription model that gives you the flexibility to scale storage capacity as needed. 

Pay only for what you use, get what you need when you need it, and stay modern without disruption. Contact us today to learn more.
