A US startup has emerged from stealth mode with a programmable AI chip architecture and software for low-power native processing of artificial intelligence and machine learning algorithms. Blaize, formerly called Thinci, has backing from automotive companies such as Toyota and Denso, and currently has 325 staff around the world, including two design teams in the UK.
The AI chip technology, called the Graph Streaming Processor (GSP), uses a multicore architecture controlled by a state machine. The flow of the machine learning data is determined in the compiler beforehand, which makes the compiler and software development kit (SDK) a vital part of the development. The company claims the architecture can process data ten to a hundred times faster than other AI chip designs while using less power.
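Blaize has not published how its compiler works, but the idea of fixing the data flow ahead of time can be illustrated with a toy example: treat the network as a dependency graph of operators and derive a static execution order at compile time, so the hardware never has to make scheduling decisions at run time. The graph and operator names here are hypothetical.

```python
from graphlib import TopologicalSorter  # Python 3.9+ standard library

# Hypothetical operator graph for a tiny network: each key is an
# operator, each value is the set of operators it depends on.
graph = {
    "conv1": set(),
    "relu1": {"conv1"},
    "conv2": {"relu1"},
    "pool":  {"conv2"},
}

# The "compiler" resolves all data dependencies ahead of time into a
# fixed schedule; at run time the processor just streams through it.
schedule = list(TopologicalSorter(graph).static_order())
print(schedule)  # dependency-respecting order, computed before execution
```

A real graph compiler would also assign operators to execution units and plan buffer reuse, but the principle is the same: scheduling work is paid once at compile time rather than on every inference.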
The company has developed prototype chips on a multiproject wafer that is being used with the software by fifteen early adopters around the world. However, it is not revealing details of the architecture or power consumption figures until the first half of next year, other than to say it uses small amounts of SRAM with every execution unit to reduce the need to go to power-hungry off-chip high-speed DRAM such as HBM.
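The benefit of per-unit SRAM can be sketched with a simple traffic count (a toy model, not Blaize's design): if a chain of operators is fused so intermediate tensors stay on chip, only the input and output of the chain touch DRAM, whereas an unfused pipeline writes and re-reads every intermediate. The operator names and tensor size below are assumptions for illustration.

```python
# Toy model: DRAM bytes moved for a chain of operators, unfused vs. fused.
ops = ["conv", "relu", "pool"]   # hypothetical operator chain
tensor_bytes = 1 << 20           # assume 1 MiB per intermediate tensor

# Unfused: each intermediate is written to DRAM and read back by the
# next operator -> 2 transfers per intermediate.
dram_unfused = 2 * tensor_bytes * (len(ops) - 1)

# Fused: intermediates stay in on-chip SRAM; intermediate DRAM traffic
# drops to zero (input/output traffic is the same in both cases).
dram_fused = 0

print(dram_unfused - dram_fused, "intermediate bytes saved")
```

The saving grows with the depth of the fused chain, which is why keeping SRAM next to every execution unit matters for power: off-chip DRAM accesses typically cost far more energy per byte than on-chip SRAM accesses.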
“We are developing native computation of complex graphs,” said Dinakar Munagala, Co-founder and CEO of Blaize (above). “We had this vision of how to implement this in a small power envelope and silicon footprint. The way we run graphs natively is applicable to edge computing, where we let the hardware manage the scheduling and data dependencies without needing to go to DRAM as often.
“We have a complex compiler and driver that understands the graph, and an SDK compiler tool chain that can understand any network – for example in automotive applications using point cloud data from radar, lidar, etc.”
Next: Early architecture details