Monday, January 8, 2007

Special-Purpose Supercomputers

Special-purpose supercomputers are high-performance computing devices with a hardware architecture dedicated to a single problem. This allows the use of specially programmed FPGA chips or even custom VLSI chips, achieving higher price/performance ratios by sacrificing generality. They are used for applications such as astrophysics computation and brute-force codebreaking.

Examples of special-purpose supercomputers:
Deep Blue, for playing chess
Reconfigurable computing machines or parts of machines
GRAPE, for astrophysics and molecular dynamics
Deep Crack, for breaking the DES cipher

Supercomputer Programming

The parallel architectures of supercomputers often dictate the use of special programming techniques to exploit their speed. Special-purpose Fortran compilers can often generate faster code than C or C++ compilers, so Fortran remains the language of choice for scientific programming, and hence for most programs run on supercomputers. To exploit the parallelism of supercomputers, programmers use environments such as PVM and MPI for loosely connected clusters and OpenMP for tightly coordinated shared-memory machines.
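As a minimal sketch of what these environments look like in practice (assuming an installed MPI library and an OpenMP-capable C compiler; the problem size and variable names are illustrative, not taken from any particular machine), the following C program splits a sum across MPI processes and parallelizes each process's local loop with OpenMP:

    /* Minimal sketch: split a sum over 1..N across MPI processes,
     * then parallelize each process's local loop with OpenMP.
     * Assumes an installed MPI library and an OpenMP-capable C
     * compiler, e.g. built with: mpicc -std=c99 -fopenmp sum.c */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total processes   */

        /* Each rank sums its own slice of 1..N. */
        const long N = 1000000;
        long chunk = N / size;
        long lo = rank * chunk + 1;
        long hi = (rank == size - 1) ? N : lo + chunk - 1;

        double local = 0.0;
        #pragma omp parallel for reduction(+:local)
        for (long i = lo; i <= hi; i++)
            local += (double)i;

        /* Combine the partial sums on rank 0. */
        double total = 0.0;
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0,
                   MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum = %.0f\n", total);

        MPI_Finalize();
        return 0;
    }

The MPI calls coordinate separate machines or nodes (the loosely connected case), while the OpenMP pragma exploits shared memory within each node (the tightly coordinated case), mirroring the distinction drawn above.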

Supercomputer Operating Systems

Supercomputer operating systems, today most often variants of Linux or UNIX, are every bit as complex as those for smaller machines, if not more so. Their user interfaces tend to be less developed, however, as the OS developers have limited programming resources to spend on non-essential parts of the OS (i.e., parts not directly contributing to the optimal utilization of the machine's hardware). This stems from the fact that these computers, often priced at millions of dollars, are sold to a very small market, so their R&D budgets are often limited. (The advent of Unix and Linux allows reuse of conventional desktop software and user interfaces.)
Interestingly, this has been a continuing trend throughout the supercomputer industry, with former technology leaders such as Silicon Graphics taking a back seat to companies such as NVIDIA, which have been able to produce cheap, feature-rich, high-performance, and innovative products thanks to the vast number of consumers driving their R&D.

Historically, until the early-to-mid-1980s, supercomputers usually sacrificed instruction set compatibility and code portability for performance (processing and memory access speed). For the most part, supercomputers up to this time (unlike high-end mainframes) had vastly different operating systems. The Cray-1 alone had at least six different proprietary OSs largely unknown to the general computing community. Similarly, different and incompatible vectorizing and parallelizing Fortran compilers existed. This trend would have continued with the ETA-10 were it not for the initial instruction set compatibility between the Cray-1 and the Cray X-MP, and the adoption of UNIX operating system variants (such as Cray's Unicos and today's Linux).
For this reason, the highest-performance systems of the future are likely to have a UNIX flavor but with incompatible system-unique features (especially for the highest-end systems at secure facilities).

Processing Techniques in Supercomputers

Vector processing techniques were first developed for supercomputers and continue to be used in specialist high-performance applications. Vector processing techniques have trickled down to the mass market in DSP architectures and SIMD processing instructions for general-purpose computers.
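As a rough sketch of what SIMD means at the instruction level (assuming an x86 processor with SSE and a compiler that supplies the standard <xmmintrin.h> intrinsics; the function name is hypothetical), the following C routine adds two arrays four floats at a time:

    /* SIMD sketch: add two float arrays four elements at a time
     * with x86 SSE intrinsics. Assumes an SSE-capable CPU and a
     * compiler providing <xmmintrin.h>. */
    #include <xmmintrin.h>

    void add_arrays(float *a, const float *b, int n)
    {
        int i;
        /* Process four floats per instruction. */
        for (i = 0; i + 4 <= n; i += 4) {
            __m128 va = _mm_loadu_ps(&a[i]);   /* load 4 floats */
            __m128 vb = _mm_loadu_ps(&b[i]);
            _mm_storeu_ps(&a[i], _mm_add_ps(va, vb));
        }
        /* Handle any leftover elements one at a time. */
        for (; i < n; i++)
            a[i] += b[i];
    }

Each _mm_add_ps executes a single instruction operating on four data elements at once, which is precisely the "single instruction, multiple data" idea that trickled down from vector supercomputers.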

Modern video game consoles in particular use SIMD extensively, and this is the basis for some manufacturers' claim that their game machines are themselves supercomputers. Indeed, some graphics cards have the computing power of several teraFLOPS. The applications to which this power could be applied were limited by the special-purpose nature of early video processing. As video processing has become more sophisticated, graphics processing units (GPUs) have evolved to become more useful as general-purpose vector processors, and an entire computer science sub-discipline has arisen to exploit this capability: General-Purpose Computing on Graphics Processing Units (GPGPU).

Supercomputer Design

Supercomputers using custom CPUs traditionally gained their speed over conventional computers through innovative designs that allow them to perform many tasks in parallel, as well as through complex detail engineering. They tend to be specialized for certain types of computation, usually numerical calculations, and perform poorly at more general computing tasks. Their memory hierarchy is very carefully designed to ensure the processor is kept fed with data and instructions at all times—in fact, much of the performance difference between slower computers and supercomputers is due to the memory hierarchy. Their I/O systems tend to be designed to support high bandwidth, with latency less of an issue, because supercomputers are not used for transaction processing.
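To make the memory hierarchy point concrete, here is a hedged illustration in plain C (nothing machine-specific; the function names are hypothetical). Both routines compute the same sum over a row-major n-by-n matrix, but the first walks consecutive addresses and stays in cache, while the second strides across rows and tends to miss:

    /* Same arithmetic, very different memory behavior. */
    double sum_row_major(const double *m, int n)
    {
        double s = 0.0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                s += m[i * n + j];   /* consecutive addresses */
        return s;
    }

    double sum_column_major(const double *m, int n)
    {
        double s = 0.0;
        for (int j = 0; j < n; j++)
            for (int i = 0; i < n; i++)
                s += m[i * n + j];   /* jumps n elements each step */
        return s;
    }

On typical cached hardware the second version can run several times slower for large n, despite performing identical arithmetic—a small-scale version of why supercomputer memory systems are engineered so carefully.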

As with all highly parallel systems, Amdahl's law applies, and supercomputer designs devote great effort to eliminating software serialization and to using hardware to accelerate the remaining bottlenecks.
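For reference, Amdahl's law in its usual form (a general formula, not specific to any one machine) gives the speedup S on N processors for a program of which a fraction p can be parallelized:

    S(N) = \frac{1}{(1 - p) + p/N}

For example, with p = 0.95, even N = 1024 processors yield a speedup of only about 19.6, and no number of processors can push it past 1/(1 - p) = 20—which is why eliminating serialization receives so much design effort.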

Software Tools for Supercomputers

Software tools for distributed processing include standard APIs such as MPI and PVM, and open-source software solutions such as Beowulf and openMosix, which facilitate the creation of a sort of "virtual supercomputer" from a collection of ordinary workstations or servers. Technologies like ZeroConf (Rendezvous/Bonjour) pave the way for the creation of ad hoc computer clusters. An example of this is the distributed rendering function in Apple's Shake compositing application: computers running the Shake software merely need to be in proximity to each other, in networking terms, to automatically discover and use each other's resources. While no one has yet built an ad hoc computer cluster that rivals even yesteryear's supercomputers, the line between desktop, or even laptop, and supercomputer is beginning to blur, and is likely to continue to blur as built-in support for parallelism and distributed processing increases in mainstream desktop operating systems. An easy programming language for supercomputers remains an open research topic in computer science.
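In miniature, the "virtual supercomputer" idea amounts to launching an MPI program (such as the earlier sum sketch) across a handful of ordinary networked machines. The exact commands vary by MPI implementation; these are Open MPI-style, and the file and host names are hypothetical:

    mpicc -std=c99 -fopenmp sum.c -o sum
    mpirun -np 8 --hostfile machines ./sum

where the machines file simply lists the participating hosts, one per line.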

Supercomputers Introduction

Supercomputers introduced in the 1960s were designed primarily by Seymour Cray at Control Data Corporation (CDC), and they led the market into the 1970s until Cray left to form his own company, Cray Research. He then took over the supercomputer market with his new designs, holding the top spot in supercomputing for five years (1985–1990). Cray himself never used the word "supercomputer"; a little-remembered fact is that he recognized only the word "computer." In the 1980s a large number of smaller competitors entered the market, in a parallel to the creation of the minicomputer market a decade earlier, but many of these disappeared in the mid-1990s "supercomputer market crash." Today, supercomputers are typically one-of-a-kind custom designs produced by "traditional" companies such as IBM and HP, which purchased many of the 1980s companies to gain their experience, although Cray Inc. still specializes in building supercomputers.

The term supercomputer itself is rather fluid, and today's supercomputer tends to become tomorrow's normal computer. CDC's early machines were simply very fast scalar processors, some ten times the speed of the fastest machines offered by other companies. In the 1970s most supercomputers were built around a vector processor, and many of the newer players developed their own such processors at lower prices to enter the market. The early and mid-1980s saw machines with a modest number of vector processors working in parallel become the standard, with typical processor counts in the range of 4 to 16. In the later 1980s and 1990s, attention turned from vector processors to massively parallel processing systems with thousands of "ordinary" CPUs, some being off-the-shelf units and others being custom designs. (This is commonly and humorously referred to in the industry as the "attack of the killer micros.") Today, parallel designs are based on off-the-shelf server-class microprocessors, such as the PowerPC, IA-64, or x86-64, and most modern supercomputers are now highly tuned computer clusters using commodity processors combined with custom interconnects.

Supercomputer

A supercomputer is a computer that leads the world in terms of processing capacity, particularly speed of calculation, at the time of its introduction. The term "Super Computing" was first used by the New York World newspaper in 1920 to refer to large custom-built tabulators that IBM made for Columbia University.

Sunday, January 7, 2007

A Computer

A computer is a machine for manipulating data according to a list of instructions.
Computers take numerous physical forms. Early electronic computers were the size of a large room, consuming as much power as several hundred modern personal computers. [1] Today, computers can be made small enough to fit into a wrist watch and be powered from a watch battery. Society has come to recognize personal computers and their portable equivalent, the laptop computer, as icons of the information age; they are what most people think of as "a computer". However, the most common form of computer in use today is by far the embedded computer. Embedded computers are small, simple devices that are often used to control other devices—for example, they may be found in machines ranging from fighter aircraft to industrial robots, digital cameras, and even children's toys.