TL;DR
We briefly discuss whether recent advances in AI can compound into further exponential progress.
Compounding Exponential Advances?
Jack Clark is one of the sharpest observers of AI advances. A long-time employee of OpenAI and now Anthropic, he writes the Import AI newsletter, which has been one of my favorite surveys of progress in AI for the last several years. For that reason, I was interested to see this thread from Jack discussing why he thinks advances in AI could be self-compounding over the coming years in a dramatic and surprising fashion.
I am personally more pessimistic. My perspective may be shaped by my background as an applied AI scientist: in particular, I spend a lot of time thinking about how AI can help us design new medicines. The problem of designing medicines is fundamentally hard because we have a very limited understanding of much of human biology. It is possible that fundamental insights about a disease lie hidden in the scientific literature, waiting to be picked up by a “Biological GPT,” but I worry that hallucinated hypotheses could instead lead to many wrong conclusions.
The technique used in the second paper mentioned above, pairing a physics simulator such as MuJoCo with an LLM, would not work for drug discovery, since no simulator yet exists that can simulate meaningful biology at macroscopic scale. Several years ago, I wrote an essay about the challenge of biological simulation, https://rbharath.github.io/the-ferocious-complexity-of-the-cell/, which I believe still largely holds true today. Human feedback is of limited use since humans do not fully understand these diseases themselves. LLMs can probably achieve some impact, but it feels unlikely that they can generate the fundamental breakthroughs needed to understand biology at scale.
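To make the contrast concrete, here is a minimal sketch of the simulator-in-the-loop pattern that works for physical tasks: an LLM proposes a candidate design, and a fast, faithful simulator scores it, providing automatic feedback. The sketch assumes the open-source `mujoco` Python bindings; `propose_candidate` is a hypothetical stand-in for an LLM call, and the pendulum model and scoring rule are purely illustrative, not taken from the paper. The point is simply that no analogous scoring oracle exists for whole-organism biology.

```python
# Minimal sketch of a simulator-in-the-loop feedback cycle, assuming the
# open-source `mujoco` Python bindings. `propose_candidate` is a hypothetical
# placeholder for an LLM call; the model and score are purely illustrative.
import mujoco

PENDULUM_XML = """
<mujoco>
  <option timestep="0.002"/>
  <worldbody>
    <body name="pole" pos="0 0 1">
      <joint name="hinge" type="hinge" axis="0 1 0"/>
      <geom type="capsule" fromto="0 0 0 0 0 0.5"
            size="{radius}" density="{density}"/>
    </body>
  </worldbody>
</mujoco>
"""

def propose_candidate(history):
    """Hypothetical LLM call: given past (design, score) pairs, propose new
    geometry parameters. Here we just return a fixed guess as a placeholder."""
    return {"radius": 0.05, "density": 1000.0}

def score_candidate(params, steps=500):
    """Evaluate a proposed design by rolling out the physics simulator.
    The 'score' (how far the pole swings) is only an illustrative objective."""
    model = mujoco.MjModel.from_xml_string(
        PENDULUM_XML.format(radius=params["radius"], density=params["density"]))
    data = mujoco.MjData(model)
    data.qvel[0] = 1.0  # give the pole an initial push
    for _ in range(steps):
        mujoco.mj_step(model, data)
    return abs(data.qpos[0])  # hinge angle after the rollout

# The loop: propose, simulate, feed the score back, repeat.
history = []
for _ in range(3):
    params = propose_candidate(history)
    history.append((params, score_candidate(params)))
print(history)
```

The cycle works because each simulator call is cheap, reproducible, and trusted; for biology, the step played by `score_candidate` would require an experiment or a whole-cell simulation we do not have.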
Jack’s analysis leads him to the conclusion that the next few years will be crucial for shaping positive societal outcomes.
I agree that I have been surprised by the capabilities of ChatGPT, DALL-E, and other models in the last few years. But I was also surprised when AlphaGo defeated Lee Sedol at Go, and when IBM Watson beat champions at Jeopardy, yet little societal impact resulted from those earlier advances. I have argued in previous posts in this newsletter that easy progress driven by hardware advances will slow down in the years to come.
The future of AI may require hard science: developing LLMs and other models that can help revitalize Moore’s law. Achieving breakthrough advances in semiconductor and quantum physics will be as hard as making fundamental advances in medicine; I believe that AI will play a crucial role, but that the leading insights must still come from human minds for decades to come. But time will tell, I guess.
Interesting Links from Around the Web
- https://spectrum.ieee.org/wireless-charging: A potential design for next-generation wireless chargers.
Feedback and Comments
Please feel free to email me directly (bharath@deepforestsci.com) with your feedback and comments!
About
Deep Into the Forest is a newsletter by Deep Forest Sciences, Inc. We’re a deep tech R&D company building Chiron, an AI-powered scientific discovery engine. Deep Forest Sciences leads the development of the open source DeepChem ecosystem. Partner with us to apply our foundational AI technologies to hard real-world problems in drug discovery. Get in touch with us at partnerships@deepforestsci.com!
Credits
Author: Bharath Ramsundar, Ph.D.
Editor: Sandya Subramanian, Ph.D.