TL;DR
AI Alignment work is starting to consume a large fraction of available funding for AI research. In this opinion post, I argue that investment in AI alignment is premature and that the money would be better spent on concrete challenges such as securing the chip supply chain.
Funding Alignment Research is Premature
Over the last decade, AI Alignment has emerged as a serious discipline within AI research. Continuing advances in the capabilities of large models provide growing evidence for the scale hypothesis, which holds, roughly, that a large enough model may be sufficient to achieve artificial general intelligence. As I argued in our earlier post, many of these arguments discount the slowing of Moore’s law and the growing hardware challenges facing scaling research.
This week, news emerged that SMIC has released a foundry process for 7nm ASIC chips, a feat that no Western company, including Intel, has achieved (source). SMIC has a checkered past, with a long track record of IP theft (see our post linked below), but crooked means can still achieve stunning results. It’s worth emphasizing that a Chinese semiconductor company, tightly bound to the CCP, now possesses more advanced semiconductor technology than any US company.
In other recent news, President Biden warned Speaker Pelosi against visiting Taiwan due to the risk of inflaming tensions with the CCP (link). As Russia’s invasion of Ukraine has shown, the Pax Americana has passed, and other great powers now feel free to take military action against the wishes of the US. Unification with Taiwan has been one of the CCP’s core goals for decades, which makes a CCP-driven invasion of Taiwan a very real possibility. Given that most of the world’s leading AI chips are manufactured at TSMC, a potential invasion of Taiwan should figure prominently in estimates of a near-term AI “takeoff” into superintelligence. If superintelligence is indeed near (which I don’t personally believe), that superintelligence stands a real chance of serving the CCP and its interests. Even if superintelligence is far off, CCP domination of the AI industry would be very damaging to the US and its allies.
As a field, AI alignment has drawn in powerful thinkers and promoters. Funding for AI Alignment research has grown steadily with major players like OpenAI or AnthropicAI devoting billions of dollars collectively to the problem. I agree that funding a few AI alignment researchers could be worthwhile, but I question whether this large-scale investment is taking money away from other, more pressing priorities in AI research. At a practical level, AI alignment dollars could probably be better directed toward funding next-generation American foundry companies, to ensure that the entire AI industry isn’t cast into turmoil by a future CCP invasion of Taiwan.
I agree that there is a small risk of a malignant AI controlling the world, but there are many larger risks, including climate change, rising fascism, and the CCP, that AI funders should consider. AI is an epochal technology, one that will shape much of human experience over the century to come. Focusing too many resources on shadowy fears of a malignant superintelligence diverts them from more immediate and dangerous challenges. Funding sources such as the American government and the American tech giants need to put far more resources into constructing a safe AI chip supply chain rather than just funding the next big language model.
Interesting Links from Around the Web
https://hub.jhu.edu/2022/07/21/artificial-intelligence-sepsis-detection/: Impressive progress in clinical sepsis detection with AI.
https://spectrum.ieee.org/acoustic-integrated-circuit: Acoustic semiconductor devices allow for active manipulation of sound by semiconductor systems.
Feedback and Comments
Please feel free to email me directly (bharath@deepforestsci.com) with your feedback and comments!
About
Deep Into the Forest is a newsletter by Deep Forest Sciences, Inc. We’re a deep tech R&D company building Chiron, an AI-powered scientific discovery engine. Deep Forest Sciences leads the development of the open source DeepChem ecosystem. Partner with us to apply our foundational AI technologies to hard real-world problems. Get in touch with us at partnerships@deepforestsci.com!
Credits
Author: Bharath Ramsundar, Ph.D.
Editor: Sandya Subramanian, Ph.D.
Comments
> Funding for AI Alignment research has grown steadily with major players like OpenAI or AnthropicAI devoting billions of dollars collectively to the problem.
I think you should provide some hard numbers here and show your work. For perspective, OpenAI has, over its entire history, raised around $1b (excluding the commitments from Musk et al., which never arrived and never will), of which only a small fraction went into alignment research (I'd be surprised if even a third, given research outputs); all the rest went into capability research like GPT-3 or DALL-E, and especially into spinning up the whole OA API startup. Anthropic raised <$0.6b, although at least all of that is nominally alignment funding. Rounding up, I get $1b from those two as a lifetime total.
Meanwhile, fabs for basic inputs like just the wafers cost >$5b (GlobalWafers' announced Texas fab); high-end chip fabs comparable to TSMC's, per Moore's second law, start at $20b and easily reach $32b (TSMC's own latest fab, which shouldn't be surprising when you note their annual capex of $44b; it's all going somewhere); and the current chip manufacturing act offers >$52b in subsidies just to get some capacity localized. China's own attempts at local chip manufacturing still lag far behind TSMC, despite intense industrial espionage and recruiting, and despite lavish funding for 'Made in China 2025' to the tune of at least $100b thus far, with 3 years to go (apparently full steam ahead despite multi-billion-dollar defaults and scandals like Wuhan Hongxin). All of these numbers can safely be expected to only go up (even if they won't be as bad as the usual mega-project boondoggles, which come in at 5-10x initial projections).
Could you explain how exactly you see these two categories of expenditure as even remotely comparable, much less fungible? How much do you expect that liquidating all AI alignment funding to buy 1/32nd of a chip fab would deter a Chinese invasion of Taiwan? And how much good does deterring that do compared to solving AI alignment?
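The budget comparison above reduces to a short back-of-envelope calculation. Here it is as a Python sketch; every figure is the comment's own rough estimate (in billions of USD), not verified financials:

```python
# Back-of-envelope version of the comment's comparison.
# All figures are the comment's own rough estimates, in billions of USD.

openai_lifetime_raised = 1.0    # ~$1b raised over OpenAI's entire history
openai_alignment_share = 1 / 3  # generous upper bound on the alignment share
anthropic_raised = 0.6          # <$0.6b, nominally all alignment funding

# ~0.93b; the comment rounds this up to "$1b lifetime total".
alignment_total = (openai_lifetime_raised * openai_alignment_share
                   + anthropic_raised)

tsmc_latest_fab = 32.0          # estimated cost of TSMC's own latest fab
chips_act_subsidies = 52.0      # >$52b in subsidies, for scale

# Liquidating all alignment funding to date (~$1b, rounded up)
# buys about 1/32nd of one leading-edge fab:
fab_fraction = 1.0 / tsmc_latest_fab
print(f"alignment total ~ ${alignment_total:.2f}b, "
      f"or about {fab_fraction:.4f} of one leading-edge fab")
```

The point of the exercise is the two-orders-of-magnitude gap: even the subsidy program alone (the $52b figure) dwarfs the estimated lifetime total of alignment funding.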
I would also be particularly interested in your sources for "AI Alignment work is starting to consume a large fraction of available funding for AI research," because that seems improbable to me when single entities like DeepMind are already approaching annual budgets comparable to the sum total of all funding for AI alignment to date. What are your estimates for the fraction of the budgets of major entities like DeepMind, FAIR, MSR, MILA, SenseTime, Baidu, Nvidia, Intel, and so on that goes to AI alignment, and how do you compute 'a large fraction' from this?
@Bharath do you still feel this way?