Definition Series: the supercomputer

As part of the continuing definition series, today I bring an appraisal of a type of computer and a somewhat new concept in the computing world: the supercomputer. The sense of a new concept stems from the use of the word to describe the exceptional performance of certain computers, mainly in the context of parallel computing.

But as we can read right at the beginning of the definition, the supercomputer should also be understood as a type of computer in its own right: one that performs at or near the highest optimized rate possible, with its main applications in scientific research and engineering. Supercomputers have also found commercial adoption throughout history, as this useful glossary definition explains:



A supercomputer is a computer that performs at or near the currently highest operational rate for computers. Traditionally, supercomputers have been used for scientific and engineering applications that must handle very large databases or do a great amount of computation (or both). Although advances like multi-core processors and GPGPUs (general-purpose graphics processing units) have enabled powerful machines for personal use (see: desktop supercomputer, GPU supercomputer), by definition, a supercomputer is exceptional in terms of performance.

At any given time, there are a few well-publicized supercomputers that operate at extremely high speeds relative to all other computers. The term is also sometimes applied to far slower (but still impressively fast) computers. The largest, most powerful supercomputers are really multiple computers that perform parallel processing. In general, there are two parallel processing approaches: symmetric multiprocessing (SMP) and massively parallel processing (MPP).
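As a rough, self-contained illustration of the two approaches (a toy sketch in Python, not how production supercomputers are actually programmed): SMP is analogous to threads sharing one memory space, while MPP is analogous to separate processes with private memory that exchange results as messages.

```python
import multiprocessing as mp
import threading

# SMP-style: workers share one address space and update shared state.
def smp_sum(numbers, n_threads=4):
    total = [0]
    lock = threading.Lock()

    def worker(chunk):
        s = sum(chunk)
        with lock:  # shared memory requires explicit synchronization
            total[0] += s

    chunks = [numbers[i::n_threads] for i in range(n_threads)]
    threads = [threading.Thread(target=worker, args=(c,)) for c in chunks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return total[0]

# MPP-style: workers have private memory and communicate by message passing.
def mpp_sum(numbers, n_procs=4):
    chunks = [numbers[i::n_procs] for i in range(n_procs)]
    with mp.Pool(n_procs) as pool:
        partial = pool.map(sum, chunks)  # partial results return as messages
    return sum(partial)

if __name__ == "__main__":
    data = list(range(1000))
    print(smp_sum(data), mpp_sum(data))  # both print 499500
```

The trade-off sketched here scales up: shared memory is convenient but hard to grow past one machine, which is why the largest systems are MPP designs.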

As of June 2016, the fastest supercomputer in the world was the Sunway TaihuLight, in the city of Wuxi in China. A few statistics on TaihuLight:

  • 40,960 64-bit RISC processors with 260 cores each.
  • Peak performance of 125 petaflops (quadrillion floating point operations per second).
  • 32 GB of DDR3 memory per compute node, 1.3 PB of memory in total.
  • Linux-based Sunway Raise operating system (OS).
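These figures can be cross-checked with some simple arithmetic; all inputs come from the list above, and the per-core estimate is my own back-of-envelope calculation.

```python
nodes = 40_960                       # SW26010 processors, one per compute node
cores_per_node = 260
total_cores = nodes * cores_per_node
print(f"{total_cores:,} cores")      # 10,649,600 cores

peak = 125e15                        # 125 petaflops theoretical peak
print(f"{peak / total_cores / 1e9:.1f} GFLOPS per core")  # 11.7

memory_pb = nodes * 32 / 1e6         # 32 GB per node, expressed in petabytes
print(f"{memory_pb:.2f} PB total memory")                 # 1.31
```

The per-node memory times the node count recovers the quoted 1.3 PB total, which is a good sanity check on the list.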


Notable supercomputers throughout history:

The first commercially successful supercomputer, the CDC (Control Data Corporation) 6600, was designed by Seymour Cray. Released in 1964, the CDC 6600 had a single CPU and cost $8 million — the equivalent of $60 million today. The CDC 6600 could handle three million floating point operations per second (flops).

Cray went on to found a supercomputer company under his name in 1972. Although the company has changed hands a number of times, it is still in operation. In September 2008, Cray and Microsoft launched the CX1, a $25,000 personal supercomputer aimed at markets such as aerospace, automotive, academic, financial services and life sciences.

IBM has been a keen competitor. The company’s Roadrunner, once the top-ranked supercomputer, was twice as fast as IBM’s Blue Gene and six times as fast as any other supercomputer at the time. IBM’s Watson is famous for having used cognitive computing to beat champion Ken Jennings on Jeopardy!, a popular quiz show.


Top supercomputers of recent years:

Year  Supercomputer       LINPACK speed (Rmax)  Location
2016  Sunway TaihuLight   93.01 PFLOPS          Wuxi, China
2013  NUDT Tianhe-2       33.86 PFLOPS          Guangzhou, China
2012  Cray Titan          17.59 PFLOPS          Oak Ridge, U.S.
2012  IBM Sequoia         17.17 PFLOPS          Livermore, U.S.
2011  Fujitsu K computer  10.51 PFLOPS          Kobe, Japan
2010  Tianhe-IA           2.566 PFLOPS          Tianjin, China
2009  Cray Jaguar         1.759 PFLOPS          Oak Ridge, U.S.
2008  IBM Roadrunner      1.026 PFLOPS          Los Alamos, U.S.

(These are sustained LINPACK Rmax figures, which is why TaihuLight appears at 93.01 PFLOPS here against the 125-petaflop theoretical peak quoted above. The Roadrunner was later upgraded, reaching 1.105 PFLOPS in 2009.)


In the United States, some supercomputer centers are interconnected on an Internet backbone known as vBNS or NSFNet. This network is the foundation for an evolving network infrastructure known as the National Technology Grid. Internet2 is a university-led project that is part of this initiative.

At the lower end of supercomputing, clustering takes more of a build-it-yourself approach to supercomputing. The Beowulf Project offers guidance on how to put together a number of off-the-shelf personal computer processors, using Linux operating systems, and interconnecting the processors with Fast Ethernet. Applications must be written to manage the parallel processing.
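On a real Beowulf cluster, that parallel structure is usually expressed with a message-passing library such as MPI (via mpi4py in Python, for instance). As a self-contained sketch of the idea — the programmer, not the operating system, decides how work is divided among workers — here is the same pattern using only the standard library on a single machine; the function names are my own for illustration.

```python
import multiprocessing as mp

def count_primes(bounds):
    """Count primes in [lo, hi) by trial division (deliberately naive)."""
    lo, hi = bounds
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

def parallel_prime_count(limit, n_workers=4):
    # The application explicitly splits the range into one chunk per worker,
    # mirroring how a Beowulf application assigns work to cluster nodes.
    step = limit // n_workers
    bounds = [(i * step, limit if i == n_workers - 1 else (i + 1) * step)
              for i in range(n_workers)]
    with mp.Pool(n_workers) as pool:
        return sum(pool.map(count_primes, bounds))

if __name__ == "__main__":
    print(parallel_prime_count(10_000))  # 1229 primes below 10,000
```

On a cluster, each chunk would be sent over the network to a different node instead of to a local process, but the division of labor in the application code is the same.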


The last two paragraphs of the definition are worth noting. If continued improvements bring down the still relatively high cost of these machines, wider adoption of supercomputers could become a trend worth watching. Indeed, if high-performance computing one day becomes widespread, the possibilities it could open up would be enormous. We should be humble enough to recognize that computing remains hard and highly specialized work, which tempers any deeply optimistic view of what can be achieved; on the other hand, given the pace of technological change, with the human-in-the-loop factor part of the equation, it is increasingly hard to glimpse what the future holds.

Featured Image: Cray and the NSA: Seattle Supercomputers Help Spy Agency Mine Your Megadata

