Dumpster, dumpster, spare that junk; it might make a supercomputer – at least in the hands of two Cornell students

Don't throw away that old computer. Cornell University students Bryan Kressler and Nick Burlett might be able to use it to make a supercomputer.

Using a few old computers donated by Mitre Corp., along with odds and ends of Cornell loading-dock castoffs, the two students have assembled a "cluster computer" in which the whole is much faster than any of its parts. It's not a supercomputer yet, they say, but expanding it is next year's project.

"It's really quite amazing, and the Mitre folks are extremely impressed," says John Belina, lecturer in electrical and computer engineering and assistant director of Cornell's School of Electrical and Computer Engineering. Belina advises Kressler, an electrical engineering junior, and Burlett, a computer science sophomore, on the project.

A "cluster" is a popular new approach to supercomputing. The central processing units of conventional computers are always being made faster, but the laws of physics eventually will place limits on how fast electrons can be pushed through wires. So engineers have turned to parallel processing, where many calculations run at the same time on separate processors.

The Cornell Theory Center (CTC) operates clusters with up to 256 processors, and its Cornell-developed cluster operating system is being used by several companies. But a side effect of cluster computing is that since individual processors don't have to be all that fast, they don't have to be all that expensive. Cluster computing is making supercomputing affordable enough to be practical as a student project.

"We were looking for a project, and John Belina wanted a cluster," Burlett explains. "He has a group of students working on a program that analyzes the human electrocardiogram. It currently runs really slowly on Matlab." (Matlab is a popular mathematical analysis tool.) Both students had been interested in cluster computing, but "it's not something that's generally accessible to undergraduates," Kressler adds.

Mitre donated six Intel Pentium II machines with speeds ranging from 200 to 400 megahertz (MHz). With those machines, a couple that had been donated previously, and the aforementioned loading-dock requisitions, the students currently have a cluster computer with eight nodes, plus one other computer acting as a sort of traffic director.

While the CTC clusters use the Windows operating system – with licenses donated by Microsoft – Kressler and Burlett use FreeBSD, a free, open-source version of Unix. "We went with free software because we don't have much money, and we felt that the money we had would be put to better use by buying better networking equipment than we had," Burlett says, adding that he already had extensive experience with Unix.

The idea of building low-cost clusters is certainly not unique to Cornell, he points out. The ultimate example, although not a student project, is the Stone Soupercomputer at Oak Ridge National Laboratory. Just as stone soup is made up of contributed food, the Stone Soupercomputer consists entirely of cast-off lab computers. "They were doing climate analysis and had no budget for computers," Burlett explains. "It's not low-cost computing but no-cost."

Right now, Kressler says, "I don't think you could call what we have a supercomputer, but it has the potential to be developed into one." With programs written to take advantage of parallel processing, Burlett says, their machine currently is about as fast as a 2000 MHz Pentium.

"I guess I'd call it a mini-cluster at the moment," Belina says. "The speed of the machine is not impressive, but making the processors all work as a team, distributing the computing, and not wasting a lot of the CPU in overhead issues is the challenge."

The students have dubbed their creation "Deep Red," reflecting Cornell's "Big Red" nickname and a takeoff on IBM's chess-playing "Deep Blue" supercomputer.

Early test runs have used a program written by Kressler and Burlett that generates fractal images. It's ideal for parallel processing, Burlett says, because each pixel on the screen can be computed separately. The fractal program takes about five minutes to generate a high-resolution image when running on a single node, and about 30 seconds on the cluster. The students are working on an interface that will let the cluster run Matlab, which, Burlett says, will give them "pretty pictures" as well as help with their adviser's electrocardiogram project.
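The article doesn't reproduce the students' fractal program, but the underlying idea is easy to sketch. In the hypothetical Python example below, each row of a Mandelbrot-set image is computed independently and handed to a pool of worker processes, mirroring the way pixels can be farmed out across cluster nodes:

```python
# Sketch of pixel-independent fractal rendering: each row of the
# Mandelbrot set is computed on its own, so rows can be handed to
# different processors. Illustrative only -- not Deep Red's code.
from multiprocessing import Pool

WIDTH, HEIGHT, MAX_ITER = 800, 600, 256

def render_row(y):
    # Map this pixel row into the complex plane and iterate z = z^2 + c.
    row = []
    for x in range(WIDTH):
        c = complex(3.5 * x / WIDTH - 2.5, 2.0 * y / HEIGHT - 1.0)
        z, n = 0j, 0
        while abs(z) <= 2 and n < MAX_ITER:
            z = z * z + c
            n += 1
        row.append(n)                 # iteration count sets the pixel shade
    return row

if __name__ == "__main__":
    with Pool() as pool:              # one worker per available processor
        image = pool.map(render_row, range(HEIGHT))
    print(len(image), "rows rendered")
```

Because no row depends on any other, doubling the number of workers roughly halves the running time, which is why workloads like this suit a cluster of mismatched, inexpensive machines.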
