Archive for the ‘hpc’ Category.

Distributed Resource Management Application API (DRMAA) proposed

AtNewYork:
DRMAA, according to Peter Jeffcock, Sun Group Marketing Manager for Grid Computing, will expand the reach of grid computing because it will make it easier for independent software vendors to make and promote grid computing applications.

The race to computerise biology

The Economist:
The race to computerise biology
(A layman’s introduction to bioinformatics)

Slashback on open source grid computing

gridMathematica Announced

10-Tflops computer built off the shelf

EE Times:

10-Tflops computer built off the shelf

Lawrence Livermore National Lab is putting together a supercomputer that will boast nearly the same performance as the ASCI White system from IBM Corp. that the lab now uses, but it promises to be 10 times cheaper. Called Evolocity, the system will be the fastest clustered supercomputer in the world, according to Lawrence Livermore.

“This network approach is nice because we can use a standard PCI slot on each processor node, which gives a 4.5-microsecond latency,” Seager said, as opposed to a 90-microsecond latency for Gigabit Ethernet.

The network uses bus host adapters on each node, supporting a 320-Mbyte/s transfer rate in one direction and 400-Mbyte/s bidirectional throughput. Each processing node is a server board from SuperMicro Inc. (San Jose, Calif.), built around the Intel E7500 chip set with two Xeon processors running at 2.4 GHz. The boards are linked by a network assembled by Linux Networx into a clustered system that will have 960 server nodes.
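The latency gap matters more than the raw bandwidth figures suggest, because small messages are dominated by latency rather than transfer time. A back-of-the-envelope comparison using the article's figures (the 1 KB message size and the ~125 Mbyte/s Gigabit Ethernet bandwidth are illustrative assumptions, not from the article):

```python
def transfer_time_us(size_bytes, latency_us, bandwidth_mb_per_s):
    """Latency plus serialization time for one message, in microseconds."""
    return latency_us + (size_bytes / (bandwidth_mb_per_s * 1e6)) * 1e6

# A small 1 KB message is dominated by latency:
cluster_1kb = transfer_time_us(1024, 4.5, 320)   # PCI host adapter, 4.5 us
gige_1kb    = transfer_time_us(1024, 90.0, 125)  # GigE, ~125 Mbyte/s assumed

print(f"1 KB message: cluster {cluster_1kb:.1f} us vs GigE {gige_1kb:.1f} us")
```

For messages this small, the low-latency interconnect is roughly an order of magnitude faster end to end, which is why tightly coupled simulation codes care so much about that 4.5 µs figure.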

The file system, called Lustre, uses a client/server model. Large, fast RAM-based memory systems support a metadata center, and data is represented across the enterprise in the form of object-storage targets. “Being able to share data across the enterprise is an exciting new capability. It will allow more collaboration among research projects,” Seager said. For example, workstations on the network running visualization programs can directly access data generated by Evolocity.
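The object-storage idea described above separates metadata (which targets hold a file) from the data itself, so any client with the metadata can fetch blocks directly from the targets. A toy sketch of that layout, striping a file round-robin across object-storage targets (the stripe size, target names, and functions here are invented for illustration, not Lustre's actual API):

```python
STRIPE_SIZE = 4  # bytes per stripe unit (tiny, for demonstration only)

def stripe(data, targets):
    """Lay data out round-robin across targets; returns {target: [units]}."""
    layout = {t: [] for t in targets}
    for i in range(0, len(data), STRIPE_SIZE):
        target = targets[(i // STRIPE_SIZE) % len(targets)]
        layout[target].append(data[i:i + STRIPE_SIZE])
    return layout

def reassemble(layout, targets, length):
    """Any client holding the metadata (targets, length) can rebuild the file."""
    n = (length + STRIPE_SIZE - 1) // STRIPE_SIZE
    counters = {t: 0 for t in targets}
    units = []
    for i in range(n):
        t = targets[i % len(targets)]
        units.append(layout[t][counters[t]])
        counters[t] += 1
    return b"".join(units)

data = b"simulation output from Evolocity"
osts = ["ost0", "ost1", "ost2"]       # hypothetical object-storage targets
layout = stripe(data, osts)
assert reassemble(layout, osts, len(data)) == data
```

The point of the design is in that last line: a visualization workstation never talks to the node that wrote the data, only to the shared metadata service and the storage targets.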

Linux cluster [at University at Buffalo] will help research treatment of cancer, AIDS

Linux cluster [at University at Buffalo] will help research treatment of cancer, AIDS

The cluster at the Buffalo Center of Excellence in Bioinformatics at SUNY Buffalo went online in mid-August and is often running at full capacity even though the set-up team is still doing some minor tweaking, says Jeffrey Skolnick, the soon-to-be director of the bioinformatics center.

Most of the computers in the cluster are 1.26 GHz Dell PowerEdge servers, with a few higher-speed Xeons thrown in. Subcontractor Sistina Software is providing cluster file system technology to manage the data traffic among the nodes.

Skolnick can rattle off all kinds of interesting statistics about the cluster. It will enable researchers to predict protein structure and run large-scale computer simulations, and work that would’ve taken 1,000 years on one processor will be done in three to six months on the cluster. “We’re not just trying to collect computers, which is a nice little hobby,” he says. “It enables us to do science we couldn’t do elsewhere.”

And managing these 2,000 machines will be two sysadmins. That’s right, two of them. That’s the beauty of running a cluster instead of a bunch of individual machines, of course, and Linux will help keep the maintenance costs down, Skolnick says. “I’ve got to get the most bang for my research dollar,” he adds.