Are Supercomputers the Hidden Power Center of Silicon Valley?

Famed for spawning the desktop, mobile and cloud computing revolutions, Silicon Valley is also one of the nerve centers for building the world’s fastest number-crunchers. Once confined to big national laboratories, supercomputers are now in demand to crunch massive amounts of data for industries such as oil exploration, finance and online sales.

The valley’s strong hand in that business was highlighted in April when Intel landed the prime contract to design a $200 million supercomputer named Aurora to be housed at the Argonne National Laboratory in Illinois.

Aurora, developed in partnership with Cray of Seattle, will likely become the world’s fastest supercomputer when it goes online in 2018. With Aurora’s new architecture, the Santa Clara chip company appears to be taking aim at a bigger slice of what will soon be a $15 billion to $20 billion commercial market for “high-performance” computers that can give a company a competitive edge.

Think of a supercomputer as a cluster of tens of thousands of Mac workstations performing together like a symphony orchestra, processing billions or even trillions of bits of data every second, sometimes for hundreds of users at once.
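To make that orchestra analogy concrete, here is a minimal Python sketch of the same divide-and-conquer idea: a dataset is split into chunks, each chunk is handed to a separate worker in parallel, and the partial results are combined at the end. This is illustrative only; real supercomputers coordinate tens of thousands of nodes with frameworks such as MPI, not a single machine's process pool.

```python
# Illustrative sketch of parallel data processing: the same
# divide-and-conquer principle a supercomputer applies across
# tens of thousands of nodes (real systems use MPI or similar,
# not Python's multiprocessing, but the idea is the same).
from multiprocessing import Pool

def process_chunk(chunk):
    # Stand-in for real work (simulation, modeling, analytics):
    # here we just sum the squares of the numbers in the chunk.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))   # the full dataset
    n_workers = 8                   # one "node" per worker
    size = len(data) // n_workers
    chunks = [data[i * size:(i + 1) * size] for i in range(n_workers)]

    with Pool(n_workers) as pool:
        # Each worker processes its chunk at the same time.
        partial_results = pool.map(process_chunk, chunks)

    total = sum(partial_results)    # combine the partial results
    print(total)
```

The scaling argument is the same at any size: the more workers available, the more chunks can be processed at once.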

Supercomputer prices run from $500,000 to more than $100 million. Some are general purpose machines that can perform tasks like 3-D modeling while hosting large numbers of users at the same time. A second type is used for one task, such as running a cloud-based service.

“A whole class of things start to become practical as the cost of computing drops,” said Alan Gara, an Intel fellow at the giant chip firm’s Santa Clara headquarters and lead system architect for the Aurora system.

“There used to be a few hundred supercomputers sold in the world each year because the prices were so high — $10 million and up,” said Steve Conway, an analyst at IDC. But prices for powerful machines have fallen so sharply that “these days, companies and small organizations that wouldn’t think of getting one before can do so.” Thousands of high-performance machines are now sold every year.

Since they can provide an edge over competitors, some companies don’t want their supercomputers publicized.

“Some of the bigger companies really depend upon supercomputers to do a lot of their work,” said Bill Mannel, head of HP’s Apollo server team in Houston. “Whether it’s the oil, auto or aircraft industry, it’s become such a core part of their development that they won’t even share details of how much they have or who they buy from.”

Most companies with valley headquarters have far-flung research and manufacturing sites, but the area remains a key center of innovation, said Scot Schultz of Mellanox Technologies. “I think it’s largely because most of the core tech providers have a presence here and have been a part of the Bay Area community here for so long.”

Google — whose operations require massive amounts of computing power — is pushing the frontiers in a collaboration on a quantum computer project with NASA Ames and the Universities Space Research Association. The hope is to develop a radically different computer that, in theory, could solve in a few days certain problems that would take today’s computers “millions of years.”

“If you thought IBM’s Watson on Jeopardy was impressive,” Conway said, “where things are headed will totally leave it in the dust.”
