Deep into the Forest

How Fast Will AI Advance?

Estimated Reading Time: 4 minutes

Bharath Ramsundar
Feb 17, 2023

TL;DR

We briefly discuss whether recent advances in AI could become self-compounding, yielding dramatic exponential progress.

Compounding Exponential Advances?

Jack Clark is one of the sharpest observers of AI advances. A long-time OpenAI and Anthropic employee, Jack writes the Import AI newsletter, which has been one of my favorite surveys of progress in AI for the last several years. For that reason, I was interested to see this thread from him discussing why he thinks advances in AI could be self-compounding over the coming years in a dramatic and surprising fashion.

Jack Clark (@jackclarkSF), Feb 12, 2023:

"A mental model I have of AI is it was roughly ~linear progress from 1960s-2010, then exponential 2010-2020s, then has started to display 'compounding exponential' properties in 2021/22 onwards. In other words, next few years will yield progress that intuitively feels nuts."
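
To unpack the distinction Jack is drawing (my gloss, not his): in ordinary exponential growth the rate of improvement is fixed, while in "compounding exponential" growth the rate itself keeps increasing, for instance because AI progress feeds back into AI research. A toy numerical sketch:

```python
# Toy numbers only: compare fixed-rate exponential growth with
# "compounding" growth, where the rate itself increases each step
# (e.g., because AI progress feeds back into AI research).

def exponential(steps, rate=0.5):
    x = 1.0
    for _ in range(steps):
        x *= 1 + rate
    return x

def compounding(steps, rate=0.5, feedback=0.1):
    x = 1.0
    for _ in range(steps):
        x *= 1 + rate
        rate += feedback  # the growth rate itself keeps growing
    return x

print(exponential(10))  # ~57.7
print(compounding(10))  # ~711.7
```
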
Jack Clark (@jackclarkSF):

"There's pretty good evidence for the extreme part of my claim - recently, language models got good enough we can build new datasets out of LM outputs and train LMs on them and get better performance rather than worse performance. E.g, this Google paper: arxiv.org/abs/2210.11610"
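
The paper Jack cites bootstraps training data from the model itself. Roughly, and only as a toy sketch (the stand-in functions below are my illustration, not the paper's code): sample several answers per question, keep the majority-vote answer as a pseudo-label, and reuse the agreeing samples as new fine-tuning data.

```python
import random
from collections import Counter

# Toy sketch of the self-training loop described in
# arxiv.org/abs/2210.11610: sample several answers per question,
# keep the majority-vote answer (self-consistency), and reuse the
# agreeing samples as new fine-tuning data. `sample_answer` is a
# hypothetical stand-in for sampling from a real language model.

def sample_answer(question):
    # Pretend the model is usually, but not always, right.
    return random.choice(["correct", "correct", "correct", "wrong"])

def self_training_data(questions, samples_per_question=8):
    dataset = []
    for q in questions:
        answers = [sample_answer(q) for _ in range(samples_per_question)]
        majority, _ = Counter(answers).most_common(1)[0]
        # Keep only samples agreeing with the majority vote and
        # treat them as new (question, answer) training pairs.
        dataset += [(q, a) for a in answers if a == majority]
    return dataset

print(self_training_data(["What is 2 + 2?"]))
```
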
Jack Clark (@jackclarkSF):

"We can also train these models to improve their capabilities through use of tools (e.g, calculators, QA systems), as in the just-came-out 'Toolformer' paper arxiv.org/abs/2302.04761 . Another fav of mine= this wild paper where they staple MuJoCo to an LM arxiv.org/abs/2210.05359"
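
The core mechanic of tool use is simple to sketch: the model emits inline API calls in its output, and a post-processing step executes each call and splices the result back into the text. The bracket format and helper names below are my own illustration, not Toolformer's actual interface.

```python
import re

# Toy sketch of Toolformer-style tool use (arxiv.org/abs/2302.04761):
# the model emits inline calls such as [Calculator(312 * 4)], and a
# post-processing step executes each call and splices the result
# back into the text.

def run_calculator(expression):
    # Only allow plain arithmetic characters before eval'ing (toy safety).
    if not re.fullmatch(r"[0-9+\-*/(). ]+", expression):
        raise ValueError("not an arithmetic expression")
    return eval(expression)

def execute_tool_calls(lm_output):
    def splice(match):
        return str(run_calculator(match.group(1)))
    return re.sub(r"\[Calculator\(([^)]*)\)\]", splice, lm_output)

print(execute_tool_calls("The total is [Calculator(312 * 4)] dollars."))
# -> The total is 1248 dollars.
```
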
Jack Clark (@jackclarkSF):

"We can also extract preference models from LMs and use those to retrain LMs via RL to get better - this kind of self-supervision is increasingly effective and seems like it gets better with model size, so gains compound further arxiv.org/abs/2204.05862"
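
In this recipe, a preference model scores candidate outputs, and that score becomes a reward signal. Full RLHF trains the policy against the reward with reinforcement learning (e.g., PPO); the toy sketch below shows the simpler best-of-n selection that the same signal enables, with a hypothetical stand-in for the learned preference model.

```python
# Toy sketch in the spirit of arxiv.org/abs/2204.05862: a preference
# model scores candidate responses, and the score serves as a reward
# signal. `preference_score` is a hypothetical stand-in for a learned
# preference model.

def preference_score(response):
    # Pretend longer, more helpful-sounding answers are preferred.
    return len(response) + (20 if "step by step" in response else 0)

def best_of_n(candidates):
    # Pick the highest-reward sample; RLHF would instead push the
    # policy's weights toward generating such samples directly.
    return max(candidates, key=preference_score)

samples = [
    "No.",
    "Sure. Let's work through it step by step: first...",
    "I refuse to answer.",
]
print(best_of_n(samples))
```
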

I am personally more pessimistic. My perspective may be shaped by my background as an applied AI scientist: I spend a lot of time thinking in particular about how AI can help us design new medicines. The problem of designing medicines is fundamentally hard since we have a very limited understanding of much of human biology. It is possible that fundamental insights about a disease lie hidden in the scientific literature, waiting to be picked up by a “Biological GPT,” but I worry that hallucinated hypotheses could instead lead to many wrong conclusions.

The technique used in the second paper mentioned above, pairing a physics simulator such as MuJoCo with an LLM, would not work for drug discovery, since no simulator yet exists that can model meaningful biology at macroscopic scale. Several years ago, I wrote an essay on the challenge of biological simulation, https://rbharath.github.io/the-ferocious-complexity-of-the-cell/, which I believe still largely holds true today. Human feedback is of limited use since humans do not fully understand these diseases themselves. LLMs can probably achieve some impact, but it feels unlikely they can generate the fundamental breakthroughs needed to understand biology at scale.

Jack’s analysis leads him to the conclusion that the next few years will be crucial for shaping positive societal outcomes:

Jack Clark (@jackclarkSF):

"Anyway, how I'm trying to be in 2023 is 'mask off' about what I think about all this stuff, because I think we have a very tiny sliver of time to do various things to set us all up for more success, and I think information asymmetries have a great record of messing things up."

Like Jack, I have been surprised by the capabilities of ChatGPT, DALL-E, and other models in the last few years. But I was also surprised when AlphaGo defeated Lee Sedol at Go, and when IBM Watson beat champions at Jeopardy, and little societal impact resulted from those earlier advances. I have argued in previous posts in this newsletter that easy progress due to hardware advances will slow down in the years to come.

(See my earlier post in this newsletter, “Towards Sentience? Probably Not.”, which surveys the recent evidence for and against the Scaling Hypothesis.)

The future of AI may depend on hard science: revitalizing Moore’s law will require breakthrough advances in semiconductor and quantum physics, which will be as hard as making fundamental advances in medicine. I believe that AI, including LLMs and other models, will play a crucial role, but the leading insights must still come from human minds for decades to come. But time will tell, I guess.

Interesting Links from Around the Web

  • https://spectrum.ieee.org/wireless-charging: A potential design for next-generation wireless chargers.

Feedback and Comments

Please feel free to email me directly (bharath@deepforestsci.com) with your feedback and comments! 

About

Deep Into the Forest is a newsletter by Deep Forest Sciences, Inc. We’re a deep tech R&D company building Chiron, an AI-powered scientific discovery engine. Deep Forest Sciences leads the development of the open source DeepChem ecosystem. Partner with us to apply our foundational AI technologies to hard real-world problems in drug discovery. Get in touch with us at partnerships@deepforestsci.com! 

Credits

Author: Bharath Ramsundar, Ph.D.

Editor: Sandya Subramanian, Ph.D.
