Report: Computing has hit ‘power wall’
The move has been spurred by growing industry concern that today’s microprocessor computing engines have hit a “power wall”. That concern has in turn prompted a re-evaluation of the roadmap for high-performance computing, a reassessment that yielded a new study published by the National Research Council on the future of computing performance. The report’s bottom line is summed up in its subtitle: “Game Over or Next Level?”
“The era of sequential computing must give way to a new era in which parallelism is at the forefront,” the report asserts. “The next generation of discoveries is likely to require advances at both the hardware and software levels….”
The challenge, added the report’s editor, Samuel Fuller, chief technology officer at Analog Devices, is whether “we can develop software environments to develop new applications for multicore architectures.” What is needed are new parallel programming environments, Fuller said. “The breakthrough needs to be in the software environment.”
As single processors and CMOS technology approach the end of the technology line, the computing report concludes that chip designers and software developers alike must shift their focus to parallelism.
To that end, the report specifically recommends that research funded by industry, government and universities along with partnerships among them should focus on:
New algorithms that can exploit parallel processing;
Developing new programming methods with an eye toward broader industry use;
Overhauling the traditional computing “stack” to account for parallelism and resource-management challenges;
Investing in new parallel architectures that are driven by emerging applications like mobile computing;
Investing in R&D that focuses on power efficiency at all system levels.

Further, the report recommends that R&D should directly address the looming “power wall” issue by “making logic gates more power efficient” and by looking beyond CMOS to lower-power device technologies.
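The first recommendation, algorithms that can exploit parallel processing, hinges on decomposing work into independent tasks. A minimal sketch of that pattern, using Python’s standard `multiprocessing` pool (the function and data here are illustrative, not drawn from the report):

```python
# Minimal data-parallel map: the simplest form of the algorithmic
# restructuring the report calls for. Names are illustrative.
from multiprocessing import Pool

def simulate(sample):
    # Stand-in for one independent unit of work (e.g., a simulation run).
    return sample * sample

if __name__ == "__main__":
    samples = list(range(8))
    with Pool(processes=4) as pool:
        # Each worker handles a slice of the input concurrently; this only
        # works because the tasks share no state and need no ordering.
        results = pool.map(simulate, samples)
    print(results)
```

The catch the report highlights is precisely that most existing applications are not written in this decomposable style, which is why new programming methods and environments top the research agenda.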
As for software, some experts argue that the Open Source movement could help lead the charge in developing new programming methods for leveraging parallel processors. Open Source projects tend to operate like successful electronics industry consortia, according to David Liddle, a computer industry veteran who now serves as a general partner with U.S. Venture Partners. The Open Source movement has had a “huge impact” on computing, Liddle said, and a new effort is needed “to create the momentum necessary to attack the software” problem.
Others insist that performance improvements in devices like mobile phone SoCs have been hampered by power limits. “We’re in this box,” said Mark Horowitz, chairman of the electrical engineering department at Stanford University and chief scientist at Rambus Inc. “Performance now comes with a power penalty.”
The consensus among experts gathered here this week to consider the study’s recommendations is that chip designers and software developers are now bound more tightly together as they seek a new paradigm for high-performance computing. Ultimately, it all comes down to power.
“You need to keep reducing voltage [or] this parallelism strategy won’t work,” warned Dan Dobberpuhl, cofounder and CEO of P.A. Semi.
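Dobberpuhl’s warning follows from the standard CMOS dynamic-power relation, P ∝ C·V²·f: because power scales with the square of supply voltage, parallelism only pays off if voltage drops along with per-core frequency. A back-of-the-envelope sketch (the specific numbers are illustrative, not from the report):

```python
# CMOS dynamic switching power scales as P ~ C * V^2 * f.
# Numbers below are illustrative, not taken from the report.

def dynamic_power(c, v, f):
    """Relative switching power of one core (capacitance, voltage, frequency)."""
    return c * v * v * f

# Baseline: one core at full voltage and frequency -> throughput 1, power 1.
base = dynamic_power(1.0, 1.0, 1.0)

# Two cores at half frequency match the baseline's aggregate throughput.
# If voltage can also be lowered (frequency roughly tracks voltage),
# total power falls well below the baseline:
scaled = 2 * dynamic_power(1.0, 0.6, 0.5)   # roughly a third of baseline

# If voltage CANNOT be reduced, two half-speed cores burn exactly the
# baseline power -- parallelism alone buys no efficiency:
stuck = 2 * dynamic_power(1.0, 1.0, 0.5)

print(base, scaled, stuck)
```

This is the “box” Horowitz describes: once voltage scaling stalls, adding cores no longer trades cleanly against power, which is why the report pushes efficiency work at every level of the stack.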