TL;DR
We return to semiconductors this week with new and intriguing progress in the AI-chip space, on both the software and hardware fronts. Synopsys has unveiled new AI chip design capabilities, and Cerebras has demonstrated potentially compelling speed-ups over Nvidia GPUs.
The SysMoore Era in Chip Design
Moore’s law has been slowing for years, with smaller nodes proving increasingly difficult to design. In an earlier post, we dived into TSMC’s thoughts on the future of Moore’s law, which suggested that new architectural innovations may continue to deliver performance improvements.
A recent talk by Aart de Geus, CEO of Synopsys, suggests that we are entering what he calls the SysMoore era in chip design. Summarized briefly: with Moore’s law improvements in transistor density slowing down, de Geus argues that integrated design of systems of chips may be able to pick up the slack.
In breakthrough work from a few years ago, Google showed that AI systems could be used to solve chip placement challenges in the design of its TPU chips. De Geus notes that applying AI to more complex chip designs raises considerable additional difficulties due to the exponentially larger search space.
However, Synopsys has achieved some promising early results applying AI optimization tools to the problem of design space optimization (DSO), producing AI-guided designs for simpler systems that beat the best human-optimized efforts.
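To make the idea of searching a design space concrete, here is a toy sketch that is in no way Synopsys's actual DSO technology: placing a handful of connected "blocks" on a small grid to minimize total wirelength, using simulated annealing as the search strategy. The grid size, net list, and cooling schedule below are all illustrative assumptions, not details from any real chip design flow.

```python
import math
import random

# Toy design-space optimization: place connected "blocks" on a small grid
# to minimize total Manhattan wirelength. A minimal sketch of searching a
# placement space with simulated annealing, NOT a real chip design tool.

random.seed(0)

GRID = 8                      # 8x8 grid of candidate slots (assumed)
BLOCKS = list(range(6))       # six blocks to place (assumed)
NETS = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]  # connections (assumed)

def wirelength(placement):
    """Total Manhattan distance summed over all nets."""
    return sum(abs(placement[a][0] - placement[b][0]) +
               abs(placement[a][1] - placement[b][1])
               for a, b in NETS)

def random_placement():
    """Assign each block a distinct random slot on the grid."""
    slots = random.sample([(x, y) for x in range(GRID) for y in range(GRID)],
                          len(BLOCKS))
    return dict(zip(BLOCKS, slots))

def anneal(steps=5000, t0=5.0):
    placement = random_placement()
    cost = wirelength(placement)
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-3    # linear cooling schedule
        block = random.choice(BLOCKS)
        old = placement[block]
        new = (random.randrange(GRID), random.randrange(GRID))
        if new in placement.values():
            continue                          # slot occupied; skip this move
        placement[block] = new
        new_cost = wirelength(placement)
        # Accept improvements always; accept worse moves with Boltzmann probability
        if new_cost <= cost or random.random() < math.exp((cost - new_cost) / t):
            cost = new_cost
        else:
            placement[block] = old            # revert rejected move
    return placement, cost

placement, cost = anneal()
print("final wirelength:", cost)
```

Real design space optimization must contend with vastly larger search spaces and expensive-to-evaluate objectives, which is why learned optimizers can beat generic search heuristics like the one above.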
We have seen the growing maturity of AI-guided design efforts across multiple problem domains, from therapeutic design to materials design, and now to chip design. We are still likely in the early days of the AI-driven engineering revolution.
Cerebras’ Wafer Scale Chips are Starting to Show Results
Cerebras has designed a new wafer-scale chip with an astounding 2.6 trillion transistors. Cerebras has been looking for new applications to show off the capabilities of its chips and has implemented integrations with PyTorch and TensorFlow to allow researchers to build more standard software on them. These efforts have started to pay off, with Cerebras and GlaxoSmithKline announcing results training an epigenomic language model considerably faster on a Cerebras system than on a server with 16 GPUs.
Cerebras has not yet achieved mainstream adoption, but these latest results may entice larger institutions to consider investing in Cerebras infrastructure to accelerate model training.
Weekly News Roundup
https://www.nextplatform.com/2022/03/02/industry-behemoths-back-intels-universal-chiplet-interconnect/: Intel is leading a new effort to establish a Universal Chiplet Interconnect Express (UCIe) standard
https://www.quantamagazine.org/crisis-in-particle-physics-forces-a-rethink-of-what-is-natural-20220301/: A fascinating tour of the growing problems raised by the “naturalness” principle in physics.
Feedback and Comments
Please feel free to email me directly (bharath@deepforestsci.com) with your feedback and comments!
About
Deep Into the Forest is a newsletter by Deep Forest Sciences, Inc. We’re a deep tech R&D company building an AI-powered scientific discovery engine. Deep Forest Sciences leads the development of the open source DeepChem ecosystem. Partner with us to apply our foundational AI technologies to hard real-world problems. Get in touch with us at partnerships@deepforestsci.com!
Credits
Author: Bharath Ramsundar, Ph.D.
Editor: Sandya Subramanian, Ph.D.