
ABSTRACT

Blue Gene is a massively parallel system being developed at the IBM T. J. Watson Research Center. With its 4-million-way parallelism and 1 petaflop peak performance, Blue Gene is a unique environment for research in parallel processing. Full exploitation of the machine's capability requires 100-way shared-memory parallelism inside a single-chip multiprocessor node and message passing across 30,000 nodes. New programming models, languages, compilers, and libraries will need to be investigated and developed for Blue Gene, offering the opportunity to break new ground in those areas.

The project was awarded the National Medal of Technology and Innovation by U.S. President Barack Obama on September 18, 2009. The president bestowed the award on October 7, 2009.

WHAT IS BLUE GENE?


A massively parallel supercomputer using tens of thousands of embedded PowerPC processors, supporting a large memory space, with standard compilers and a message-passing environment.

BLUE GENE IN DETAIL

Blue Gene is a computer architecture project designed to produce several supercomputers that reach operating speeds in the petaFLOPS range, currently achieving sustained speeds of nearly 500 TFLOPS (teraFLOPS). It is a cooperative project among IBM (particularly IBM Rochester and the Thomas J. Watson Research Center), the Lawrence Livermore National Laboratory, the United States Department of Energy (which is partially funding the project), and academia.

A Blue Gene supercomputer

WHY THE NAME BLUE GENE?


Blue: the corporate color of IBM.
Gene: the intended use of Blue Gene clusters, namely computational biology, specifically protein folding.


HISTORY OF BLUE GENE


- December 1999: IBM Research announced a $100 million (US) effort to build a petaflop-scale supercomputer.
- The Blue Gene project has two goals: massively parallel machine architecture and software, and advancing bio-molecular simulation by orders of magnitude.
- November 2001: partnership with Lawrence Livermore National Laboratory (LLNL).

BLUE GENE PROJECT


There are four Blue Gene projects: Blue Gene/L, Blue Gene/C, Blue Gene/P, and Blue Gene/Q. The term Blue Gene/L sometimes refers to the computer installed at LLNL and sometimes to the architecture of that computer. As of November 2006, there were 27 computers on the Top500 list using the Blue Gene/L architecture, all listed as having the architecture "eServer Blue Gene Solution".
A Blue Gene/L cabinet

BLUE GENE/L
The first computer in the Blue Gene series, Blue Gene/L, developed through a partnership with Lawrence Livermore National Laboratory (LLNL), originally had a theoretical peak performance of 360 TFLOPS and scored over 280 TFLOPS sustained on the Linpack benchmark. After an upgrade in 2007, its performance increased to 478 TFLOPS sustained and 596 TFLOPS peak.

In December 1999, IBM announced a $100 million research initiative for a five-year effort to build a massively parallel computer, to be applied to the study of biomolecular phenomena such as protein folding. The project has two main goals: to advance our understanding of the mechanisms behind protein folding via large-scale simulation, and to explore novel ideas in massively parallel machine architecture and software. It should enable biomolecular simulations that are orders of magnitude larger than current technology permits. Major areas of investigation include how to use this novel platform to effectively meet its scientific goals, how to make such massively parallel machines more usable, and how to achieve performance targets at a reasonable cost through novel machine architectures. The design is built largely around the previous QCDSP and QCDOC supercomputers.

Block scheme of the Blue Gene/L ASIC, including dual PowerPC 440 cores

In November 2001, Lawrence Livermore National Laboratory joined IBM as a research partner for Blue Gene. On September 29, 2004, IBM announced that a Blue Gene/L prototype at IBM Rochester (Minnesota) had overtaken NEC's Earth Simulator as the fastest computer in the world, with a speed of 36.01 TFLOPS on the Linpack benchmark, beating Earth Simulator's 35.86 TFLOPS. This was achieved with an 8-cabinet system, each cabinet holding 1,024 compute nodes. Upon doubling this configuration to 16 cabinets, the machine reached 70.72 TFLOPS by November 2004, taking first place on the Top500 list. On March 24, 2005, the US Department of Energy announced that the Blue Gene/L installation at LLNL had broken its speed record, reaching 135.5 TFLOPS, a feat made possible by doubling the number of cabinets to 32. On the Top500 list, Blue Gene/L installations across several sites worldwide took 3 of the top 10 positions and 13 of the top 64. Three racks of Blue Gene/L are housed at the San Diego Supercomputer Center and are available for academic research.

On October 27, 2005, LLNL and IBM announced that Blue Gene/L had once again broken its speed record, reaching 280.6 TFLOPS on Linpack upon reaching its final configuration of 65,536 compute nodes (i.e., 2^16 nodes) plus an additional 1,024 I/O nodes in 64 air-cooled cabinets. The LLNL Blue Gene/L uses Lustre to access multiple filesystems in the 600 TB to 1 PB range. Blue Gene/L was also the first supercomputer ever to sustain over 100 TFLOPS on a real-world application, namely a three-dimensional molecular dynamics code (ddcMD) simulating the solidification (nucleation and growth processes) of molten metal under high-pressure, high-temperature conditions; this achievement won the 2005 Gordon Bell Prize. On June 22, 2006, NNSA and IBM announced that Blue Gene/L had achieved 207.3 TFLOPS on a quantum chemistry application (Qbox). On November 14, 2006, at Supercomputing 2006, Blue Gene/L won the prize in all HPC Challenge classes of awards. On April 27, 2007, a team from the IBM Almaden Research Center and the University of Nevada ran an artificial neural network almost half as complex as the brain of a mouse for the equivalent of a second (the network was run at 1/10 of normal speed for 10 seconds). In November 2007, the LLNL Blue Gene/L remained in the number one spot as the world's fastest supercomputer. It had been upgraded since the previous measurement, and was then almost three times as fast as the second fastest, a Blue Gene/P system.

On June 18, 2008, the new Top500 list marked the first time since Blue Gene assumed the lead that a Blue Gene system was not at the top, having been surpassed by IBM's Cell-based Roadrunner system, which was the only system at the time to pass the petaflops mark. Top500 also announced that the Cray XT5 Jaguar, housed at the Oak Ridge Leadership Computing Facility (OLCF), was the fastest supercomputer in the world for open science.

MAJOR FEATURES
The Blue Gene/L supercomputer is unique in the following aspects:

A schematic overview of a Blue Gene/L supercomputer

- Trading the speed of processors for lower power consumption.
- Dual processors per node with two working modes: co-processor (one user process per node; computation and communication work is shared by the two processors) and virtual node (two user processes per node).
- System-on-a-chip design.
- A large number of nodes (scalable in increments of 1,024 up to at least 65,536).
- Three-dimensional torus interconnect with auxiliary networks for global communications, I/O, and management.
- Lightweight OS per node for minimum system overhead ("computational noise").

ARCHITECTURE

One Blue Gene/L node board

Each compute or I/O node is a single ASIC with associated DRAM memory chips. The ASIC integrates two 700 MHz PowerPC 440 embedded processors, each with a double-pipeline, double-precision floating-point unit (FPU), a cache subsystem with a built-in DRAM controller, and logic to support multiple communication subsystems. The dual FPUs give each Blue Gene/L node a theoretical peak performance of 5.6 GFLOPS (gigaFLOPS). Node CPUs are not cache-coherent with one another. Compute nodes are packaged two per compute card, with 16 compute cards plus up to 2 I/O nodes per node board, and 32 node boards per cabinet/rack. By integrating all essential subsystems on a single chip, each compute or I/O node dissipates little power (about 17 watts, including DRAM). This allows very aggressive packaging of up to 1,024 compute nodes plus additional I/O nodes in a standard 19-inch cabinet, within reasonable limits of electrical power supply and air cooling. The performance metrics, in terms of FLOPS per watt, FLOPS per m² of floor space, and FLOPS per unit cost, allow scaling up to very high performance.

Each Blue Gene/L node is attached to three parallel communications networks: a 3D toroidal network for peer-to-peer communication between compute nodes, a collective network for collective communication, and a global interrupt network for fast barriers. The I/O nodes, which run the Linux operating system, provide communication with the outside world via an Ethernet network. The I/O nodes also handle filesystem operations on behalf of the compute nodes. Finally, a separate private Ethernet network provides access to any node for configuration, booting, and diagnostics.

Blue Gene/L compute nodes use a minimal operating system supporting a single user program. Only a subset of POSIX calls is supported, and only one process may run at a time; programmers need to implement green threads to simulate local concurrency. Application development is usually performed in C, C++, or Fortran using MPI for communication. However, some scripting languages such as Ruby have been ported to the compute nodes.
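Since applications on the compute nodes communicate via MPI, their communication patterns naturally map onto the 3D torus. The sketch below is illustrative only: it uses MPI's standard Cartesian topology routines to build a periodic (torus-like) process grid and find nearest neighbours. It is not Blue Gene-specific code, and the grid shape is chosen by MPI rather than matching any actual machine configuration.

```c
#include <mpi.h>
#include <stdio.h>

/* Minimal sketch: map MPI ranks onto a 3D periodic (torus-like) grid
 * using MPI's Cartesian topology support. Illustrative only; not an
 * actual Blue Gene/L torus configuration. */
int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int nprocs, rank;
    int dims[3]    = {0, 0, 0};   /* let MPI choose a balanced 3D shape */
    int periods[3] = {1, 1, 1};   /* wrap around in all three axes: a torus */

    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
    MPI_Dims_create(nprocs, 3, dims);

    MPI_Comm torus;
    MPI_Cart_create(MPI_COMM_WORLD, 3, dims, periods, 1, &torus);

    int coords[3];
    MPI_Comm_rank(torus, &rank);
    MPI_Cart_coords(torus, rank, 3, coords);

    /* Find the -x and +x neighbours for nearest-neighbour exchange. */
    int left, right;
    MPI_Cart_shift(torus, 0, 1, &left, &right);

    printf("rank %d at (%d,%d,%d): -x neighbour %d, +x neighbour %d\n",
           rank, coords[0], coords[1], coords[2], left, right);

    MPI_Comm_free(&torus);
    MPI_Finalize();
    return 0;
}
```

On a real system one would compile with the platform's MPI compiler wrapper (e.g., mpicc) and run one rank per node or per processor, depending on co-processor versus virtual node mode.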

To allow multiple programs to run concurrently, a Blue Gene/L system can be partitioned into electronically isolated sets of nodes. The number of nodes in a partition must be a positive integer power of 2 and must be at least 2^5 = 32; the maximum partition is all nodes in the computer. To run a program on Blue Gene/L, a partition of the computer must first be reserved. The program is then run on all the nodes within the partition, and no other program may access nodes within the partition while it is in use. Upon completion, the partition's nodes are released for future programs to use. With so many nodes, component failures are inevitable, so the system is able to electrically isolate faulty hardware to allow the machine to continue to run.
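As a small illustration of this rule, the helper below (a hypothetical function, not part of any Blue Gene API) checks whether a requested node count is a legal partition size: a power of two, at least 32 nodes, and no larger than the whole machine.

```c
#include <stdbool.h>

/* Hypothetical helper, not part of any Blue Gene API: a valid
 * Blue Gene/L partition is a power of two, at least 2^5 = 32 nodes,
 * and at most the full machine. */
static bool is_valid_partition(unsigned long nodes, unsigned long machine_nodes)
{
    return nodes >= 32
        && nodes <= machine_nodes
        && (nodes & (nodes - 1)) == 0;   /* power-of-two test */
}
```

For example, is_valid_partition(1024, 65536) holds, while a request for 48 nodes would be rejected because 48 is not a power of two.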

BLUE GENE/C
Blue Gene/C (now renamed Cyclops64) is a sister project to Blue Gene/L. It is a massively parallel, supercomputer-on-a-chip cellular architecture. It was slated for release in early 2007 but has been delayed.

IBM Blue Gene/C Supercomputer

BLUE GENE/P
On June 26, 2007, IBM unveiled Blue Gene/P, the second generation of the Blue Gene supercomputer. Designed to run continuously at 1 PFLOPS (petaFLOPS), it can be configured to reach speeds in excess of 3 PFLOPS. Furthermore, it is at least seven times more energy-efficient than any other supercomputer, an efficiency achieved by using many small, low-power chips connected through five specialized networks. Four 850 MHz PowerPC 450 processors are integrated on each Blue Gene/P chip. The 1-PFLOPS Blue Gene/P configuration is a 294,912-processor, 72-rack system harnessed to a high-speed optical network. Blue Gene/P can be scaled to an 884,736-processor, 216-rack cluster to achieve 3-PFLOPS performance. A standard Blue Gene/P configuration houses 4,096 processors per rack.
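As a rough consistency check, the quoted configurations do add up, assuming each PowerPC 450 core can retire 4 double-precision floating-point operations per cycle (two fused multiply-add pipelines, a figure not stated above):

```latex
% Back-of-the-envelope check of the quoted Blue Gene/P figures,
% assuming 4 double-precision flops per cycle per core:
\[
  72~\text{racks} \times 4{,}096~\tfrac{\text{processors}}{\text{rack}}
  = 294{,}912~\text{processors}
\]
\[
  294{,}912 \times 850~\text{MHz} \times 4~\tfrac{\text{flops}}{\text{cycle}}
  \approx 1.0 \times 10^{15}~\text{FLOPS} = 1~\text{PFLOPS}
\]
\[
  884{,}736 \times 850~\text{MHz} \times 4~\tfrac{\text{flops}}{\text{cycle}}
  \approx 3.0 \times 10^{15}~\text{FLOPS} = 3~\text{PFLOPS}
\]
```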
Blue Gene/P node card

On November 12, 2007, the first Blue Gene/P system, JUGENE, with 65,536 processors, began running at the Jülich Research Centre in Germany with a performance of 167 TFLOPS, making it the fastest supercomputer in Europe and the sixth fastest in the world at the time. The first laboratory in the United States to receive a Blue Gene/P was Argonne National Laboratory; the first racks shipped in fall 2007. The first installment was a 111-teraflops system with approximately 32,000 processors, operational for the US research community in spring 2008. The full Intrepid system was ranked #3 on the June 2008 Top500 list. Another Blue Gene/P was installed on September 9, 2008 in Sofia, the capital of Bulgaria, operated by the Bulgarian Academy of Sciences and Sofia University.

In February 2009 it was announced that JUGENE would be upgraded to reach petaflops performance in June 2009, making it the first petascale supercomputer in Europe. The upgrade started on April 6, with the system going into production at the end of June 2009. The new configuration comprises 294,912 processor cores, 144 TB of memory, and 6 PB of storage in 72 racks, and incorporates a new water-cooling system that reduces cooling costs substantially.


WEB-SCALE PLATFORM
The IBM Kittyhawk project team has ported Linux to the compute nodes and demonstrated generic Web 2.0 workloads running at scale on a Blue Gene/P. Their paper published in the ACM Operating Systems Review describes a kernel driver that tunnels Ethernet over the tree network, which results in all-to-all TCP/IP connectivity. Running standard Linux software like MySQL, their performance results on SpecJBB rank among the highest on record.

BLUE GENE/Q
The last known supercomputer design in the Blue Gene series, Blue Gene/Q, is aimed at reaching 20 petaflops in the 2011 time frame. It will continue to expand and enhance the Blue Gene/L and /P architectures with higher frequency and much improved performance per watt. Blue Gene/Q will have a similar number of nodes but many more cores per node. Exactly how many cores per chip BG/Q will have is currently somewhat unclear, but 8 or even 16 are possible, with 1 GB of memory per core. The archetypal Blue Gene/Q system, called Sequoia, will be installed at Lawrence Livermore National Laboratory in 2011 as part of the Advanced Simulation and Computing Program, running nuclear simulations and advanced scientific research. It will consist of 98,304 compute nodes comprising 1.6 million processor cores and 1.6 PB of memory in 96 racks covering an area of about 3,000 square feet, drawing 6 megawatts of power.
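The quoted Sequoia figures are mutually consistent if one assumes the 16-core variant (the core count is, as noted above, still uncertain):

```latex
% Consistency check, assuming 16 cores per node and 1 GB per core:
\[
  98{,}304~\text{nodes} \times 16~\tfrac{\text{cores}}{\text{node}}
  = 1{,}572{,}864 \approx 1.6~\text{million cores}
\]
\[
  1{,}572{,}864~\text{cores} \times 1~\tfrac{\text{GB}}{\text{core}}
  \approx 1.6~\text{PB of memory}
\]
```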

IBM Blue Gene/Q Supercomputer
