
Supercomputer

A supercomputer is a computer that leads the world in terms of processing capacity, particularly speed of calculation, at the time of its introduction. The term is rather fluid, and today's supercomputer tends to become tomorrow's also-ran.

Supercomputers are used for highly calculation-intensive tasks such as weather forecasting, climate research (including research into global warming), molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), physical simulations (such as simulation of airplanes in wind tunnels, simulation of the detonation of nuclear weapons, and research into nuclear fusion), cryptanalysis, and the like. Military and scientific agencies are heavy users.

Seymour Cray is intimately associated with the history of supercomputers, having designed many of the world's fastest computers throughout the 1960s, 1970s, and 1980s for Control Data Corporation and Cray Research.

Supercomputers traditionally gained their speed over conventional computers through the use of innovative designs that allow them to perform many tasks in parallel, as well as careful detail engineering. They tend to be specialised for certain types of computation, usually numerical calculation, and perform poorly at more general computing tasks. Their memory hierarchy is very carefully designed to ensure the processor is kept fed with data and instructions at all times; in fact, much of the performance difference between slower computers and supercomputers is due to the memory hierarchy design and components. Their I/O systems tend to be designed to support high bandwidth, with latency less of an issue, because supercomputers are not used for transaction processing.
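
As a rough illustration of why the memory hierarchy matters so much, the C sketch below contrasts a naive matrix multiply with a blocked (tiled) version that reuses data while it is still in cache; the sizes N and BS and the function names are illustrative assumptions and would be tuned to the caches of a real machine.

    #include <stddef.h>

    #define N  1024      /* matrix dimension (illustrative)        */
    #define BS 64        /* block size; tuned to cache in practice */

    /* Naive triple loop: streams through B with little cache reuse. */
    void matmul_naive(const double *A, const double *B, double *C)
    {
        for (size_t i = 0; i < N; i++)
            for (size_t j = 0; j < N; j++) {
                double sum = 0.0;
                for (size_t k = 0; k < N; k++)
                    sum += A[i * N + k] * B[k * N + j];
                C[i * N + j] = sum;
            }
    }

    /* Blocked version: works on BS x BS tiles so that pieces of A, B
       and C stay resident in cache while they are reused. */
    void matmul_blocked(const double *A, const double *B, double *C)
    {
        for (size_t i = 0; i < N * N; i++)
            C[i] = 0.0;

        for (size_t ii = 0; ii < N; ii += BS)
            for (size_t kk = 0; kk < N; kk += BS)
                for (size_t jj = 0; jj < N; jj += BS)
                    for (size_t i = ii; i < ii + BS; i++)
                        for (size_t k = kk; k < kk + BS; k++) {
                            double a = A[i * N + k];
                            for (size_t j = jj; j < jj + BS; j++)
                                C[i * N + j] += a * B[k * N + j];
                        }
    }

Both versions perform exactly the same arithmetic; on cache-based machines the blocked loop is usually several times faster, which is the kind of gap that the custom memory systems of supercomputers were built to close.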

Vector processing techniques were first developed for supercomputers and continue to be used in specialist high-performance applications. They have since trickled down to the mass market in DSP architectures and in the SIMD instructions of general-purpose processors.
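
A minimal sketch of that trickle-down, assuming an x86 processor with SSE support: the intrinsics below issue four single-precision additions as one vector instruction, a small-scale version of what vector supercomputers did across much longer vectors. The function name and the requirement that n be a multiple of 4 are assumptions made for brevity.

    #include <xmmintrin.h>   /* SSE intrinsics on commodity x86 CPUs */

    /* Adds two float arrays four elements at a time; n is assumed
       to be a multiple of 4. */
    void add_simd(const float *a, const float *b, float *out, int n)
    {
        for (int i = 0; i < n; i += 4) {
            __m128 va = _mm_loadu_ps(&a[i]);   /* load 4 floats           */
            __m128 vb = _mm_loadu_ps(&b[i]);
            __m128 vc = _mm_add_ps(va, vb);    /* 4 additions in parallel */
            _mm_storeu_ps(&out[i], vc);        /* store 4 results         */
        }
    }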

Their operating systems, often variants of UNIX, tend not to be as sophisticated as those for smaller machines, since supercomputers are typically dedicated to one task at a time rather than the multitude of simultaneous jobs that makes up the workload of smaller devices.

The parallel architectures of supercomputers often dictate the use of special programming techniques to exploit their speed. Special-purpose FORTRAN compilers can often generate faster code than C or C++ compilers, so FORTRAN remains the language of choice for scientific programming, and hence for most programs run on supercomputers. To exploit the parallelism of supercomputers, programming environments such as PVM and MPI for loosely connected clusters and OpenMP for tightly coordinated shared-memory machines are used.
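
As a small sketch of the shared-memory style that OpenMP supports, the C fragment below parallelises a dot product across the processors of a single machine with one directive; the names and problem size are illustrative only, and FORTRAN versions look much the same.

    #include <stdio.h>
    #include <omp.h>

    /* Dot product split across shared-memory processors; the
       reduction clause combines each thread's partial sum safely. */
    double dot(const double *x, const double *y, int n)
    {
        double sum = 0.0;
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < n; i++)
            sum += x[i] * y[i];
        return sum;
    }

    int main(void)
    {
        enum { N = 1000000 };
        static double x[N], y[N];
        for (int i = 0; i < N; i++) { x[i] = 1.0; y[i] = 2.0; }
        printf("%f (up to %d threads)\n", dot(x, y, N), omp_get_max_threads());
        return 0;
    }

Compiled with OpenMP enabled (for example, gcc -fopenmp), the same source runs serially or in parallel depending on how many processors the runtime is given.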

Types of supercomputers

Vector processing machines allow the same (arithmetical) operation to be carried out on a large amount of data simultaneously.
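
In C terms, the kind of loop a vector machine accelerates looks like the sketch below: one multiply-add applied uniformly to every element, which the hardware can issue as whole-vector load, multiply-add and store instructions rather than one scalar operation per element. The function name is illustrative.

    /* SAXPY: y = a*x + y over whole arrays; a classic vectorisable loop. */
    void saxpy(float a, const float *x, float *y, int n)
    {
        for (int i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }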

Tightly connected cluster computers use specially developed interconnects so that many processors and their memory can communicate with one another. The processors and networking components are engineered from the ground up for the supercomputer. The fastest general-purpose supercomputers in the world today use this technology.

Commodity clusters use a large number of commodity PCs, interconnected by high-bandwidth low-latency local area networks.

As of 2002, Moore's Law and economies of scale are the dominant factors in supercomputer design: a single modern desktop PC is now more powerful than a 15-year-old supercomputer, and at least some of the design tricks that allowed past supercomputers to outperform contemporary desktop machines have now been incorporated into commodity PCs. Furthermore, the costs of chip development and production make it uneconomical to design custom chips for a small run, and favor mass-produced chips that have enough demand to recoup the cost of production.

Additionally, many problems carried out by supercomputers are particularly suitable for parallelization (in essence, splitting up into smaller parts to be worked on simultaneously) and, particularly, fairly coarse-grained parallelization that limits the amount of information that needs to be transferred between independent processing units. For this reason, traditional supercomputers can be replaced, for many applications, by "clusters" of computers of standard design which can be programmed to act as one large computer. Many of these use the Linux operating system; they are then called Beowulf clusters.
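
A minimal sketch of the message-passing style used on such clusters, written against the standard MPI C interface: every node sums its own slice of a hypothetical workload and a single collective call combines the partial results. The problem itself is a placeholder chosen only to keep the example short.

    #include <stdio.h>
    #include <mpi.h>

    /* Each process sums its own slice of 0..total-1; MPI_Reduce then
       combines the partial sums on rank 0. Launch with e.g. mpirun -np 8. */
    int main(int argc, char **argv)
    {
        const long total = 100000000L;       /* illustrative problem size */
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        long begin = rank * (total / size);
        long end   = (rank == size - 1) ? total : begin + total / size;

        double local = 0.0, global = 0.0;
        for (long i = begin; i < end; i++)
            local += (double)i;

        MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum = %.0f\n", global);

        MPI_Finalize();
        return 0;
    }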

Special-purpose supercomputers are high-performance computing devices with a hardware architecture dedicated to a single problem. This allows the use of custom VLSI chips, giving a better price/performance ratio by sacrificing generality. They are used for applications such as astrophysics computation and brute-force codebreaking.

Examples of special-purpose supercomputers:

  • Deep Blue, built to play chess
  • GRAPE, built for gravitational N-body calculations in astrophysics
  • Deep Crack, built by the Electronic Frontier Foundation to brute-force the DES cipher

Supercomputer challenges and technologies

  • A supercomputer generates heat and must be cooled. Cooling a supercomputer is a major HVAC problem.
  • Information cannot move faster than the speed of light between two parts of a supercomputer. Light travels only about 30 centimeters per nanosecond, so a supercomputer that is many meters across must have latencies between its components of at least tens of nanoseconds. Seymour Cray's supercomputer designs attempted to keep cable runs as short as possible for this reason.
  • Supercomputers consume and produce massive amounts of data in a very short period of time. Much work is needed to ensure that this information can be transferred quickly and stored.

Technologies developed for supercomputers include:

  • Vector processing
  • Liquid cooling
  • Non-Uniform Memory Access (NUMA)
  • Striped disks (an early form of what was later called RAID)
  • Parallel filesystems

The fastest supercomputers today

The speed of a supercomputer is generally measured in FLOPS (floating point operations per second); this measurement ignores communication overheads and assumes that all processors of the machine are provided with data and are working at full speed.
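
A toy sketch of what the FLOPS figure measures: count floating-point operations and divide by elapsed time. Real rankings use standardised benchmarks such as LINPACK rather than a hand-rolled loop, so the program below is illustrative only.

    #include <stdio.h>
    #include <time.h>

    /* Very rough FLOPS estimate: two floating-point operations (one
       multiply, one add) per iteration, divided by elapsed CPU time. */
    int main(void)
    {
        const long iters = 100000000L;
        double x = 0.0;

        clock_t start = clock();
        for (long i = 0; i < iters; i++)
            x = x * 1.0000001 + 1e-9;        /* 2 flops per iteration */
        clock_t stop = clock();

        double seconds = (double)(stop - start) / CLOCKS_PER_SEC;
        printf("~%.0f FLOPS (result %g)\n", 2.0 * iters / seconds, x);
        return 0;
    }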

As of early 2002, the fastest supercomputer is the Earth Simulator at the Yokohama Institute for Earth Sciences. It is a cluster of 640 custom-designed 8-processor vector processor computers based on the NEC SX-6 architecture (a total of 5120 processors). It uses a customised version of the UNIX operating system.

Its performance is over 5 times that of the previous fastest supercomputer, the cluster computer ASCI White at Lawrence Livermore National Laboratory. The United States Government ASCI initiative aims to replace nuclear testing with simulation, to maintain its strategic advantage in the presence of nuclear test-ban treaties.

PARAM is another series of supercomputers, developed in India by the Centre for Development of Advanced Computing (C-DAC).

A list of the 500 fastest supercomputers is maintained at http://www.top500.org/

History of supercomputers

Period       Supercomputer                        Speed          Location
1945-1950    Manchester Mark I                                   University of Manchester, England
1950-1955
1955-1960
1960-1965
1965-1970
1970-1975
1975-1980    Cray-1                               160 MFLOPS     Los Alamos National Laboratory, New Mexico (1976)
1980-1985
1985-1990
1990-1995    Fujitsu Numerical Wind Tunnel
1995-2000    Intel ASCI Red
2000-2002    IBM ASCI White, SP Power3 375 MHz    7,226 GFLOPS   Lawrence Livermore National Laboratory, California
2002-        Earth Simulator                      35 TFLOPS      Yokohama Institute for Earth Sciences, Japan

Forthcoming supercomputers:

  • IBM Blue Gene, a planned machine intended for protein folding simulation

See also:

External links:


