Archive for the ‘hpc’ Category.

UB buys IBM BladeCenter

Keeping track of my colleagues down the street:
ClusterWorld: University at Buffalo Adds IBM Blades:

The new supercomputer, capable of a peak performance of more than 1.32 TeraFlops, will consist of a cluster of 266 IBM eServer BladeCenter HS20 systems running Red Hat Advanced Server 2.1 Linux, each with two 2.8 GHz Intel Xeon processors and 1.0 GB of memory. Seven IBM xSeries 345 Intel processor-based servers connect to 5 terabytes (TB) of IBM FAStT700 Storage to house large volumes of biological and research data. The supercomputer forms the basis of the IBM eServer Cluster 1350, a pre-packaged and tested supercluster that is ultra-dense and incredibly easy to manage.

Apple XGrid

Apple Previews Xgrid Technology at MacWorld Expo.
There’s also the Apple Xgrid web site.
As one would expect from Apple: a beautiful, elegant design that somehow attracts the love of developers even when it doesn’t try to create open standards or adhere to existing ones.

Build a grid application with Python

IBM: Build a grid application with Python (tutorials).
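
To give a flavor of what such a thing involves, here is a minimal sketch of my own, not taken from the IBM tutorials: a coordinator farms a CPU-bound job out to worker nodes over XML-RPC, standard library only. The worker hostnames, the port, and the count_primes task are all invented for illustration.

    # worker.py -- run one of these on every node of the toy "grid".
    # A stand-in for a grid service: exposes one CPU-bound function
    # over XML-RPC, using nothing beyond Python's standard library.
    from xmlrpc.server import SimpleXMLRPCServer

    def count_primes(lo, hi):
        # Partial result: how many primes fall in [lo, hi).
        def is_prime(n):
            if n < 2:
                return False
            f = 2
            while f * f <= n:
                if n % f == 0:
                    return False
                f += 1
            return True
        return sum(1 for k in range(lo, hi) if is_prime(k))

    server = SimpleXMLRPCServer(("0.0.0.0", 9000))
    server.register_function(count_primes)
    server.serve_forever()

    # coordinator.py -- split the range across the workers, merge results.
    from concurrent.futures import ThreadPoolExecutor
    from xmlrpc.client import ServerProxy

    WORKERS = ["http://node1:9000", "http://node2:9000"]  # hypothetical hosts

    def run(n):
        chunk = n // len(WORKERS)
        jobs = [(url, i * chunk, n if i == len(WORKERS) - 1 else (i + 1) * chunk)
                for i, url in enumerate(WORKERS)]
        with ThreadPoolExecutor(len(WORKERS)) as pool:
            parts = list(pool.map(
                lambda job: ServerProxy(job[0]).count_primes(job[1], job[2]),
                jobs))
        return sum(parts)

    if __name__ == "__main__":
        print(run(200_000))  # primes below 200,000, computed in parallel

Everything real grid middleware adds on top of a skeleton like this (discovery, scheduling, authentication, fault tolerance) is where the hard work lives.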

Red Hat users balk at Enterprise Linux licensing

Red Hat users balk at Enterprise Linux licensing:

While Red Hat says the GPL gives customers the right to copy Red Hat Enterprise Linux freely, it also says it considers unauthorised copying to be a violation of its service contract – something that could lead to a breach of contract lawsuit, according to Bryan Sims, Red Hat’s vice president and associate legal counsel.

It will take more than that to win over customers like Argonne National Labs’ Beckman, however. He would like to see Red Hat produce a plain English document that explains what users can and cannot copy under the Enterprise Linux support licence, and he would like to see a price structure that better accommodates the needs of his class of user.

“I don’t know of any site that has lots of processors that plans on buying a per-processor licence,” he said.

Relative merits of supercomputers, grids, and clusters debated in House Science Committee

Wired News: Computer Groupthink Under Fire:
“Critics at a House Science Committee hearing in July on the status of supercomputing in the United States claimed that federal agencies are focusing too heavily on developing and deploying grid computing and clusters, and not investing enough in development of true supercomputers.”

KASY0 cluster breaks $100/GFLOP barrier

Shirky on Grid Computing

Shirky: Grid Supercomputing: The Next Push:

We have historically overestimated the value of connecting machines to one another, and underestimated the value of connecting people, and by emphasizing supercomputing on tap, the proponents of Grids are making that classic mistake anew. During the last great age of batch processing, the ARPAnet’s designers imagined that the nascent network would be useful as a way of providing researchers access to batch processing at remote locations. This was wrong, for two reasons: first, it turned out researchers were far more interested in getting their own institutions to buy computers they could use locally than in using remote batch processing, and Moore’s Law made that possible as time passed. Next, once email was ported to the network, it became a far more important part of the ARPAnet backbone than batch processing was. Then as now, access to computing power mattered less to the average network user than access to one another.

FAST protocol

Space.com: Pushing the Speed Limit: For Researchers, the Internet Just Got Faster:
“Unlike the single path TCP protocol, FAST uses 10 parallel routes for its delivery, allowing researchers to send massive amounts of data while still keeping the size of each information packet down to current standards. During a data transfer, FAST monitors network congestion and rapidly adjusts the amount of information being sent to ensure a prompt delivery. … In comparison tests using only one pathway to send data from the Sunnyvale facility to CERN, a distance of about 6,236 miles (10,037 kilometers), FAST was still more than three times as efficient as the standard TCP method.”
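
For the curious: the published FAST TCP work from Caltech (Jin, Wei, and Low) centers on a delay-based window update. The congestion window grows while the measured round-trip time stays near the propagation delay, and levels off as queueing delay builds up. Below is a toy rendering of that update rule, with invented parameter values; a sketch of the idea, not Caltech’s implementation.

    # Toy sketch of the delay-based window update described in the
    # FAST TCP papers; the alpha and gamma values here are invented.
    def fast_window(w, base_rtt, rtt, alpha=200.0, gamma=0.5):
        # w: congestion window in packets
        # base_rtt: minimum round-trip time observed (propagation delay)
        # rtt: current averaged round-trip time
        # alpha: packets the flow aims to keep queued in the network
        # gamma: smoothing factor in (0, 1]
        target = (base_rtt / rtt) * w + alpha
        return min(2.0 * w, (1.0 - gamma) * w + gamma * target)

    # The window ramps quickly while the path is uncongested
    # (rtt == base_rtt) and settles once queueing delay appears.
    w = 100.0
    for rtt in (0.10, 0.10, 0.10, 0.12, 0.15, 0.15, 0.15):
        w = fast_window(w, base_rtt=0.10, rtt=rtt)
        print(round(w, 1))

At equilibrium each flow parks roughly alpha packets in the routers’ queues instead of probing until packets are lost, which is why the approach holds up so much better than standard TCP’s halve-on-loss rule over long, fat pipes.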

Stein Gives Bioinformatics Ten Years to Live

O’Reilly Network: Stein Gives Bioinformatics Ten Years to Live [Feb. 05, 2003]:
Lincoln Stein’s keynote at the O’Reilly Bioinformatics Technology Conference was provocatively titled “Bioinformatics: Gone in 2012.” Despite the title, Stein is optimistic about the future for people doing bioinformatics. But he explained that “the field of bioinformatics will be gone by 2012. The field will be doing the same thing but it won’t be considered a field.” His address looked at what bioinformatics is and what its future is likely to be in the context of other scientific disciplines. He also looked at career prospects for people doing bioinformatics and provided advice for those looking to enter the field.

One of Stein’s tests for a discipline is the “Department Of” test. Take your favorite field or service and prepend it with your favorite institution’s name, followed by “Department of”. For example, he is quite happy with the phrase “the Harvard Department of Genetics.” On the other hand, a “Department of Microscopy” seems to him to fit better at an Institute of Technology. He said that for him, a Department of Bioinformatics has the same feel and he doesn’t predict the establishment of bioinformatics departments.

Stein returned to the question, what is bioinformatics? In light of his thoughts on services defined by tools and disciplines defined by problems, his answer was simple. Bioinformatics is just one way of studying biology. Whether you think of bioinformatics as High Throughput Biology, Integrative Biology, or Large Data Set Biology, fundamentally Stein argues that bioinformatics is biology.

University of Florida buys mainframe for grid computing platform

ZDNet UK: IBM sells mainframe for grid research:
“The university has created software that lets actual grids be carved up into private ones for individual users or specific applications. The researchers are using the z800 with z/VM and Linux and the cluster of Intel servers running VMware’s virtualisation software for Linux. In addition to developing grid virtualisation, the systems will be used for nanotechnology and computer science research.

The National Science Foundation funded the purchase of the z800, which was sold by Cornerstone Systems. The University of Florida also bought an Enterprise Storage Server “Shark” system with 3.36 terabytes of capacity.”