TL;DR
We have our first post of the new year with some musings on the state of AI and more in 2025.
Where is AI going?
It’s been some time since Deep Forest’s last non-technical blog post. A lot has changed: a new government in Washington and plenty of movement on the AI front as well. Gains from scaling have been slowing steadily as the supply of web-scale training data dries up, and GPT-5 is, to say the least, quite delayed at this point. And yet, the rise of test-time compute has brought a new wind to AI. OpenAI’s o3 in particular has achieved some astounding results on the ARC contest, albeit at high cost.
I have to confess that I found this blog post very hard to write. Although a lot of people are very excited for the new presidency, the prospect of mass deportations and an assault on civil rights fills me with dread. America’s history with race is messy and at times violent. I am holding out hope that the next few years will bring more good than harm, and that the better angels of the new government will prevail over its worst tendencies. Rather than dwelling on politics, I’ve spent most of the last year heads down working on our technology at Deep Forest Sciences. I am excited by what we have built, and we will be able to share progressively more over the year to come. AI in science has tremendous potential, and as a scientist, working with increasingly powerful AI tools that can unlock meaningful discoveries is what keeps me excited to get up every day.
There is a lot of fear around AI: fear that it will replace jobs and destroy meaning in careers, and complaints that it is trained on stolen data. I won’t deny that there are real risks and injustices in AI, but at the same time I think there is a lot of potential. Current AI systems are just tools that draw on decades of old-school research in search, machine learning, and planning. These algorithms are powerful, but they are not human. Rather, they are tools that can enable skilled craftspeople to become better at their jobs. Good programmers can become better with AI. Good scientists can become better. There are difficult governance questions around AI systems that still need to be worked out, but on the whole I am hopeful that AI can bring positive benefits to society.
AI investment appears to be charging ahead at full bore. With Stargate, even if the final funds are nowhere near $500B, it is clear that a tremendous societal push is underway to create training and inference infrastructure at scale. I question whether this level of investment will yield the practical breakthroughs being sought within a reasonable timeframe. I don’t dispute that AI is powerful and perhaps even transformative, but transformations take time, especially in the real world of atoms, brick, and mortar. I saw my first self-driving car on the roads near Mountain View in 2011, and a bit of research reveals that the earliest self-driving prototypes date to the 1920s(!) (see https://en.wikipedia.org/wiki/History_of_self-driving_cars). Technologies can take decades to mature and have impact. I would guess that we are at least another decade out from self-driving becoming truly mainstream in the US, and probably longer globally. Similarly, I expect that recent advances in AI-driven reasoning will reshape society over decades to come. I hope that the recent burst of investment doesn’t tip over into pessimism when the singularity fails to manifest within the next 2-3 years.
About
Deep Into the Forest is a newsletter by Deep Forest Sciences, Inc. Get in touch with us at partnerships@deepforestsci.com.
Credits
Author: Bharath Ramsundar, Ph.D.
Editor: Sandya Subramanian, Ph.D.