Graphcore details power for Bow AI unit

Technology News
By Nick Flaherty


Graphcore has detailed the power consumption of its latest AI processor, built with the second generation Colossus chips and a bonded wafer power supply.

This Bow IPU bonds two wafers together to create a new 3D die: one wafer for the AI processing, architecturally compatible with the GC200 IPU processor, with 1,472 independent IPU-Core tiles capable of running more than 8,800 threads and 900MB of In-Processor Memory.

The second wafer adds deep trench capacitors and is bonded to the processor wafer, placing the capacitors directly alongside the processing cores and memory to provide the higher performance, says Simon Knowles, co-founder and chief architect at Graphcore.

This uses new technologies developed at TSMC, including Back Side Through Silicon Vias (BTSV) and Wafer-on-Wafer (WoW) hybrid bonding.

The Bow 2000 IPU provides up to 40% more performance with 16% better power efficiency for real-world AI applications. The 1U unit integrates four of the new Bow processors with 3.6GB of In-Processor-Memory and up to 256GB of Streaming Memory at an average power of 1.25kW. This provides 1.4 PFLOPS of AI compute.

In comparison, four of the same Mk2 chips without the wafer-on-wafer power management in the previous M2000 1U unit provided a peak performance of 1 PFLOPS at 1.5kW.
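Those two data points make the generational step easy to quantify. A quick sketch of the arithmetic on the figures quoted above (note that Graphcore's 16% efficiency claim refers to real-world applications, while the peak numbers imply a larger gap):

```python
# Illustrative arithmetic on the quoted unit-level figures.
bow_pflops, bow_kw = 1.4, 1.25      # Bow 2000: 1.4 PFLOPS at 1.25kW average
m2000_pflops, m2000_kw = 1.0, 1.5   # M2000: 1 PFLOPS peak at 1.5kW

bow_eff = bow_pflops / bow_kw        # PFLOPS per kW
m2000_eff = m2000_pflops / m2000_kw

print(f"Bow 2000: {bow_eff:.2f} PFLOPS/kW")
print(f"M2000:    {m2000_eff:.2f} PFLOPS/kW")
print(f"Peak performance uplift: {bow_pflops / m2000_pflops - 1:.0%}")
```

On peak figures alone, the Bow 2000 delivers roughly 1.12 PFLOPS/kW against 0.67 PFLOPS/kW for the M2000.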

The Bow 2000 is the building block for larger ‘pods’, starting with the 5U Bow Pod16 providing 5.6 PFLOPS. The Bow Pod1024 full rack unit with 1024 Bow processors will provide 350 PFLOPS.
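The pod figures follow from straightforward multiplication, assuming (my inference from the quoted numbers, not a stated Graphcore spec) that a Pod16 comprises four Bow 2000 units of four IPUs each:

```python
# Scaling check for the pod figures quoted above.
bow2000_pflops = 1.4                  # one Bow 2000 (four Bow IPUs)

pod16 = 4 * bow2000_pflops            # 16 IPUs = four Bow 2000s (assumed)
pod1024 = 256 * bow2000_pflops        # 1024 IPUs = 256 Bow 2000s

print(pod16)    # 5.6, matching the quoted 5.6 PFLOPS
print(pod1024)  # 358.4, in line with the quoted ~350 PFLOPS
```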

The power efficiency is allowing Graphcore to use the wafer-on-wafer technology to build a large-scale AI machine with 10 exaFLOPS of compute, 4 Petabytes of memory and a memory bandwidth of 10 PByte/s.
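A back-of-the-envelope estimate (my arithmetic, not a Graphcore figure) of what the quoted pod numbers imply for a machine of that scale:

```python
# Rough rack count implied by the quoted figures for a 10 exaFLOPS machine
# (my estimate only; Graphcore has not stated a configuration).
pod1024_pflops = 1024 / 4 * 1.4   # 256 Bow 2000s at 1.4 PFLOPS each
target_pflops = 10_000            # 10 exaFLOPS

racks = target_pflops / pod1024_pflops
print(round(racks))               # roughly 28 full Bow Pod1024 racks
```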

“Graphcore’s Bow POD systems set a new standard in artificial intelligence compute that will allow Atos customers to push their AI innovation further and achieve results faster than ever before,” said Agnès Boudot, Head of HPC & Quantum at French supercomputer developer Atos.

“As models grow in size and complexity, Graphcore and Atos work together to deliver Exascale systems. At the same time, Graphcore’s relentless focus on computational efficiency ensures that users are getting the best possible return on their AI investment,” she said.

US cloud service provider Cirrascale is making Bow Pod systems available to customers today as part of its Graphcloud IPU bare-metal service, while European cloud service provider G-Core Labs has announced it will launch Bow IPU cloud instances in Q2 2022.

www.graphcore.com
