Supercomputer.
I INTRODUCTION

Supercomputer, computer designed to perform calculations as fast as current technology allows and used to solve extremely complex problems. Supercomputers are
used to design automobiles, aircraft, and spacecraft; to forecast the weather and global climate; to design new drugs and chemical compounds; and to make
calculations that help scientists understand the properties of particles that make up atoms as well as the behavior and evolution of stars and galaxies. Supercomputers
are also used extensively by the military for weapons and defense systems research, and for encrypting and decoding sensitive intelligence information. See Computer;
Encryption; Cryptography.
Supercomputers differ from other types of computers in that they are designed to work on a single problem at a time, devoting all their resources to the solution
of the problem. Other powerful computers such as mainframes and workstations are specifically designed so that they can work on numerous problems, and support
numerous users, simultaneously. Because of their high cost--usually in the hundreds of thousands to millions of dollars--supercomputers are shared resources.
Supercomputers are so expensive that usually only large companies, universities, and government agencies and laboratories can afford them.

II HOW SUPERCOMPUTERS WORK

The two major components of a supercomputer are the same as in any other computer--a central processing unit (CPU) where instructions are carried out, and the
memory in which data and instructions are stored. The CPU in a supercomputer is similar in function to a standard personal computer (PC) CPU, but it usually has a
different type of transistor technology that minimizes transistor switching time. Switching time is the length of time that it takes for a transistor in the CPU to open or
close, which corresponds to a piece of data moving or changing value in the computer. This time is extremely important in determining the absolute speed at which a
CPU can operate. By using very high performance circuits, architectures, and, in some cases, even special materials, supercomputer designers are able to make CPUs
that are 10 to 20 times faster than state-of-the-art processors for other types of commercial computers.
Supercomputer memory also has the same function as memory in other computers, but it is optimized so that retrieval of data and instructions from memory takes the
least amount of time possible. Also important to supercomputer performance is that the connections between the memory and the CPU be as short as possible to
minimize the time that information takes to travel between the memory and the CPU.
A supercomputer functions in much the same way as any other type of computer, except that it is designed to do calculations as fast as possible. Supercomputer
designers use two main methods to reduce the amount of time that supercomputers spend carrying out instructions--pipelining and parallelism. Pipelining allows
multiple operations to take place at the same time in the supercomputer's CPU by grouping together pieces of data that need to have the same sequence of operations
performed on them and then feeding them through the CPU one after the other. The general idea of parallelism is to process data and instructions in parallel rather
than in sequence.
In pipelining, the various logic circuits (electronic circuits within the CPU that perform arithmetic calculations) used on a specific calculation are continuously in use, with
data streaming from one logic unit to the next without interruption. For instance, a sequence of operations on a large group of numbers might be to add adjacent
numbers together in pairs beginning with the first and second numbers, then to multiply these results by some constant, and finally to store these results in memory.
The addition operation would be Step 1, the multiplication operation would be Step 2, and the assigning of the result to a memory location would be Step 3 in the
sequence. The CPU could perform the sequence of operations on the first pair of numbers, store the result in memory and then pass the second pair of numbers
through, and continue on like this. For a small group of numbers this would be fine, but since supercomputers perform calculations on massive groups of numbers this
technique would be inefficient, because only one operation at a time is being performed.
Pipelining overcomes the source of inefficiency associated with the CPU performing a sequence of operations on only one piece of data at a time until the sequence is
finished. The pipeline method is to perform Step 1 on the first pair of data and move the result to Step 2. As the result of the first operation moves to Step 2, the second pair of data moves into Step 1. Steps 1 and 2 are then performed simultaneously on their respective data, and the results of the operations are moved ahead in the pipeline, or the sequence of operations performed on a group of data. Hence the third pair of numbers is in Step 1, the second pair is in Step 2, and the first pair is in Step 3. The remainder of the calculations are performed in this way, with the specific logic units in the sequence always operating simultaneously on data.
The example used above to illustrate pipelining can also be used to illustrate the concept of parallelism (see Parallel Processing). A computer that parallel-processed data
would perform Step 1 on multiple pieces of data simultaneously, then move these to Step 2, then to Step 3, each step being performed on the multiple pieces of data
simultaneously. One way to do this is to have multiple logic circuits in the CPU that perform the same sequence of operations. Another way is to link together multiple
CPUs, synchronize them (meaning that they all perform an operation at exactly the same time) and have each CPU perform the necessary operation on one of the
pieces of data.
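In the same spirit, here is a minimal Python sketch of the second approach, using the standard multiprocessing module as a stand-in for multiple synchronized CPUs; the data and the constant are again invented. Every worker applies the same add-then-multiply sequence to its own piece of data at the same time, and the results are gathered at the end.

# A minimal sketch of parallelism: several worker processes stand in for
# multiple CPUs, each applying the same sequence of operations to its own
# piece of data at the same time.
from multiprocessing import Pool

CONSTANT = 10

def add_then_multiply(pair):
    # Step 1 (add the pair), then Step 2 (multiply by a constant).
    a, b = pair
    return (a + b) * CONSTANT

if __name__ == "__main__":
    pairs = [(1, 2), (3, 4), (5, 6), (7, 8)]
    with Pool(processes=4) as pool:                   # one worker per pair
        results = pool.map(add_then_multiply, pairs)  # Step 3: store results
    print(results)  # [30, 70, 110, 150]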
Pipelining and parallelism are combined and used to a greater or lesser extent in all supercomputers. Until the early 1990s, parallelism achieved through the interconnection of CPUs was limited to between 2 and 16 CPUs connected in parallel. However, the rapid increase in processing speed of off-the-shelf microprocessors used in personal computers and workstations made possible massively parallel processing (MPP) supercomputers. While the individual processors used in MPP supercomputers are not as fast as specially designed supercomputer CPUs, they are much less expensive; because of this, hundreds or even thousands of these processors can be linked together to achieve extreme parallelism.

III SUPERCOMPUTER PERFORMANCE

Supercomputers are used to create mathematical models of complex phenomena. These models usually contain long sequences of numbers that are manipulated by the
supercomputer with a kind of mathematics called matrix arithmetic. For example, to accurately predict the weather, scientists use mathematical models that contain
current temperature, air pressure, humidity, and wind velocity measurements at many neighboring locations and altitudes. Using these numbers as data, the computer
makes many calculations to simulate the physical interactions that will likely occur during the forecast period.
When supercomputers perform matrix arithmetic on large sets of numbers, it is often necessary to multiply many pairs of numbers together and to then add up each of
their individual products. A simple example of such a calculation is: (4 × 6) + (7 × 2) + (9 × 5) + (8 × 8) + (2 × 9) = 165. In real problems, the strings of numbers
used in calculations are usually much longer, often containing hundreds or thousands of pairs of numbers. Furthermore, the numbers used are not simple integers but
more complicated types of numbers called floating-point numbers, which allow a wide range of digits before and after the decimal point, for example 5,063,937.9120834.
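Written out in Python, the sample calculation above is just a sum of pairwise products:

# The sum-of-products example from the text.
pairs = [(4, 6), (7, 2), (9, 5), (8, 8), (2, 9)]
total = sum(a * b for a, b in pairs)
print(total)  # 165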
The various operations of adding, subtracting, multiplying, and dividing floating-point numbers are collectively called floating-point operations. An important way of measuring a supercomputer's performance is the peak number of floating-point operations per second (FLOPS) that it can perform. In the mid-1990s, the peak computational rate for state-of-the-art supercomputers was between 1 and 200 gigaflops (billion floating-point operations per second), depending on the specific model and configuration of the supercomputer.
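For a sense of scale, the following sketch (which assumes the NumPy library is available) estimates a sustained, not peak, floating-point rate on an ordinary machine by timing a large dot product; a dot product of two length-n vectors takes roughly 2n floating-point operations (n multiplications and n - 1 additions).

# A rough, sustained (not peak) floating-point rate estimate via NumPy.
import time
import numpy as np

n = 10_000_000
x = np.random.rand(n)
y = np.random.rand(n)

start = time.perf_counter()
x.dot(y)
elapsed = time.perf_counter() - start

# About 2n floating-point operations were performed.
print(f"~{2 * n / elapsed / 1e9:.2f} gigaflops sustained")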
In July 1995, computer scientists at the University of Tokyo, in Japan, broke the 1 teraflop (1 trillion floating-point operations per second) mark with a computer they
designed to perform astrophysical simulations. Named GRAPE-4 (GRAvity PipE number 4), this MPP supercomputer consisted of 1,692 interconnected processors. In
November 1996, Cray Research debuted the CRAY T3E-900, the first commercially available supercomputer to offer teraflop performance. In 1997 the Intel Corporation
installed the teraflop machine Janus at Sandia National Laboratories in New Mexico. Janus is composed of 9,072 interconnected processors. Scientists use Janus for
classified work such as weapons research as well as for unclassified scientific research such as modeling the impact of a comet on the Earth.
In 2007 the International Business Machines Corporation (IBM) announced that it was introducing the first commercial supercomputer capable of petaflop performance.
A petaflop is the equivalent of 1,000 trillion, or 1 quadrillion, floating-point operations per second. Known as the Blue Gene/P, the supercomputer uses nearly 300,000
processors connected by a high-speed optical fiber network. It can be expanded to include nearly 900,000 processors, enabling it to achieve a performance speed of 3
petaflops. The Department of Energy's Argonne National Laboratory purchased the first Blue Gene/P in the United States and was expected to use it for complex
simulations in particle physics and nanotechnology.
The definition of what counts as a supercomputer changes constantly with technological progress. The same technology that increases the speed of supercomputers also increases the speed of other types of computers. For instance, the first computer to be called a supercomputer, the Cray-1, developed by Cray Research and first sold in
1976, had a peak speed of 167 megaflops. This is only a few times faster than standard personal computers today, and well within the reach of some workstations.

Contributed By:
Steve Nelson
Microsoft ® Encarta ® 2009. © 1993-2008 Microsoft Corporation. All rights reserved.
