Cornell Theory Center joins team to build world's most powerful high-performance computer for academic research

Cornell Theory Center (CTC) has joined a team led by the Texas Advanced Computing Center (TACC) at the University of Texas at Austin, which will build, operate and support a high-performance computing system that will provide unprecedented computational power to the nation's scientists and engineers.

CTC will provide expertise in adapting scientific applications to run on a parallel computer and will develop training modules to help scientists learn to use the massively parallel machine, according to Anthony Ingraffea, the Dwight C. Baum Professor of Engineering and acting director of CTC.

When fully deployed, the system is expected to be the most powerful general-purpose computer system in the world available to academic researchers, and to support scientific research that involves processing massive amounts of data.

The project is funded by a $59 million, five-year grant from the National Science Foundation to a consortium that includes, along with TACC and CTC, the Institute for Computational Engineering and Sciences at Austin and Arizona State University.

TACC will partner with Sun Microsystems to build the hardware, which will incorporate some 13,000 quad-core AMD processors operating in parallel to achieve processing speeds of over 400 teraflops (trillions of floating-point arithmetic operations per second) -- roughly 200 times faster than Cornell's fastest supercomputer, the Velocity-3 cluster operated by CTC, which runs at speeds up to 2.1 teraflops. The system will also have over 100 terabytes (trillions of bytes) of memory and 1.7 petabytes (quadrillions of bytes) of disk storage.
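As a rough sanity check, the speedup and aggregate core count quoted above can be derived directly from the article's own numbers. This is a back-of-the-envelope sketch, not an official specification; the per-core figure in particular is a simple division, not a measured benchmark:

```python
# Figures taken from the article; 1 teraflop = 1e12 floating-point ops/sec.
NEW_SYSTEM_PEAK_TFLOPS = 400.0   # projected peak of the TACC/Sun system
VELOCITY3_TFLOPS = 2.1           # Cornell's Velocity-3 cluster (CTC)

# "Some 200 times faster" -- the exact ratio is about 190x.
speedup = NEW_SYSTEM_PEAK_TFLOPS / VELOCITY3_TFLOPS
print(f"Speedup over Velocity-3: ~{speedup:.0f}x")

# 13,000 quad-core processors give the aggregate core count,
# and dividing peak performance by cores gives a nominal per-core rate.
cores = 13_000 * 4
per_core_gflops = NEW_SYSTEM_PEAK_TFLOPS * 1_000 / cores
print(f"~{cores:,} cores, roughly {per_core_gflops:.1f} GFLOPS per core")
```

Rounding ~190x up to "some 200 times faster" is consistent with the article's phrasing.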

Using the new TeraGrid fiber-optic network, a high-speed link dedicated to scientific research, researchers throughout the country will be able to run experiments on the new machine. The initial configuration of the computer will go into operation June 1, 2007, and the final configuration will be in operation by October 2007, when it will become the most powerful computational resource in the TeraGrid. User training by CTC will begin shortly before deployment to help researchers use the new resource.

"Our Virtual Workshop technology will help researchers across the U.S. rapidly come up to speed on using the new system," said Dave Lifka, chief technical officer at CTC.

CTC has been developing and delivering education in high-performance computing to users of its own systems for more than 10 years. It has adapted its extensive set of lectures, online tutorials and laboratory exercises into Web-based modules that can serve either as live lectures or as self-paced online materials.

When the TACC computer goes into operation, Cornell experts also will help researchers optimize their applications for the best performance on a massively parallel machine. When CTC was established in 1985 as one of four NSF supercomputing centers, it was the only center devoted to parallel processing. Since then, CTC staff and Cornell computer scientists have developed expertise in large-scale data management in a parallel environment.

"We are pleased to be a part of such an important cyber-infrastructure initiative aimed at helping researchers get the answers they need faster and easier," said Robert Richardson, the Floyd R. Newman Professor of Physics and senior vice provost for research at Cornell.

High-performance computing is an important instrument in all science disciplines because it enables the testing and validation of theories, the analysis of experimental data, and the numerical simulation of experiments that would otherwise be expensive, dangerous or even impossible.
