
> Funding for AI Alignment research has grown steadily with major players like OpenAI or AnthropicAI devoting billions of dollars collectively to the problem.

I think you should provide some hard numbers here and show your work. For perspective, OpenAI has, over its entire history, raised around $1b (excluding the commitments from Musk etc., which never arrived and never will), of which only a small fraction went into alignment research (I'd be surprised if even a third did, given research outputs); all the rest went into capability research like GPT-3 or DALL-E, and especially into spinning up the whole OA API startup. Anthropic raised <$0.6b, although at least all of that is nominally alignment funding. Rounding up, I get $1b from those two as a lifetime total.

Meanwhile, fabs for basic inputs like bare wafers alone cost >$5b (GlobalWafers' announced Texas fab); high-end chip fabs comparable to TSMC's, per Moore's second law, start at $20b and easily go to $32b (TSMC's own latest fab, which shouldn't be surprising when you note their annual capex of $44b - it's all going somewhere); and the current US chip manufacturing act (the CHIPS Act) is at >$52b in subsidies just to get some capacity localized. China's own attempts at local chip manufacturing still lag far behind TSMC, despite intense industrial espionage and recruiting, and lavish funding for 'Made in China 2025' to the tune of at least $100b thus far, with 3 years to go (apparently full steam ahead despite multi-billion-dollar defaults & scandals like Wuhan Hongxin). All of these numbers can safely be expected to only go up (even if they won't be as bad as the usual mega-project boondoggles, which come in at 5-10x initial projections).
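To make the scale gap concrete, here is a minimal back-of-the-envelope in Python, using only the round figures cited above (nothing beyond those numbers is assumed):

```python
# Back-of-the-envelope scale comparison, using the round figures above (USD).
alignment_lifetime = 1e9  # OpenAI + Anthropic lifetime alignment total, rounded up

fab_costs = {
    "basic wafer fab (GlobalWafers, TX)": 5e9,
    "leading-edge fab, entry price": 20e9,
    "leading-edge fab, TSMC's latest": 32e9,
    "CHIPS Act subsidies": 52e9,
}

for name, cost in fab_costs.items():
    # How many lifetimes of alignment funding does each expenditure represent?
    print(f"all alignment funding ever = 1/{cost / alignment_lifetime:.0f} of {name}")
# -> liquidating all alignment funding buys about 1/32nd of one leading-edge fab
```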

Could you explain how exactly you see these two categories of expenditures as even remotely comparable, much less fungible? And how much you expect liquidating all AI alignment funding to buy 1/32nd of a chip fab would deter a Chinese invasion of Taiwan? And how much good deterring that does compared to solving AI alignment?

I would also be particularly interested in your sources for "AI Alignment work is starting to consume a large fraction of available funding for AI research.", because that seems improbable to me when single entities like DeepMind are already approaching annual budgets comparable to the sum total of all funding for AI alignment to date. What are your estimates for the fraction of the budgets of major entities like DeepMind, FAIR, MSR, MILA, SenseTime, Baidu, Nvidia, Intel, and so on that goes to AI alignment, and how do you compute 'a large fraction' from those figures? A sketch of the computation I'd expect follows below.
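For illustration only: the ~$1b alignment total is the rounded figure from above, while the per-lab annual budgets are hypothetical order-of-magnitude placeholders (most of these figures aren't public), so treat this as a sketch of the computation, not as data:

```python
# Hypothetical sketch: what fraction of AI research funding goes to alignment?
# The per-lab annual budgets below are illustrative placeholders, NOT reported figures.
alignment_lifetime = 1e9  # lifetime total for OpenAI + Anthropic, rounded up (see above)

annual_budgets = {  # placeholder order-of-magnitude guesses, USD/year
    "DeepMind": 1e9,
    "FAIR": 5e8,
    "MSR": 5e8,
    "others (MILA, SenseTime, Baidu, Nvidia, Intel, ...)": 3e9,
}

total_annual = sum(annual_budgets.values())  # ~$5b/year under these guesses
fraction = alignment_lifetime / total_annual
print(f"lifetime alignment funding ~ {fraction:.0%} of ONE year of AI research")
# -> ~20%: even alignment's *lifetime* total is a modest slice of a single
#    year of capability research spending, under these assumptions.
```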


Thank you for your detailed comments. I agree that perhaps not all of the work OpenAI has done is devoted to alignment, but OpenAI has raised funds on the basis of AGI worries and risks, so I think it's fair to include OpenAI's full funding when measuring alignment funds.

Your broader point that the $1-2B for AI alignment is a small sum compared with foundry costs is valid, though. The difference is that the foundry costs are primarily hard capex (land, water, construction, equipment; see https://deepforest.substack.com/p/a-deeper-dive-into-semiconductor). These aren't research dollars. The AI alignment dollars are going to papers and marketing that convince researchers, especially new students, that AI alignment and risk are fundamental questions that need to be answered now. For example, see https://twitter.com/ilyasut/status/1554518035590893568, which asserts that AGI risk will shortly become mainstream in AI. I think this considerably misrepresents the real state of the risks.

Deploying models in the physical world brings considerable challenges. See, for example, the discussion in the comments at https://twitter.com/timnitGebru/status/1553552825237393408 about what it takes to go from a generated molecule to a scalable synthesis. I don't see any near-term pathway for scaled models to resolve these issues. I agree that with sufficient scale this may change, but for now scaled models look like yet another evolutionary step on AI's long ramp of increasing computational capability, running from the 1950s onward.


@Bharath do you still feel this way?


We are funding AI Alignment today because WE HOPE it is still early! If we are not early, or if our efforts are not good enough, do you realize what is at stake?


The entire argument of this post is that not much is at stake! AI risk is a small issue at present compared with larger immediate geopolitical challenges. WW3 or climate change are bigger existential risks for now, and possibly for the future too.


> "WW3 or climate change are bigger existential risks for now and possible for the future too"

I recently wrote a long post comparing existential risks, based on statistics from The Precipice by Toby Ord. Link here:

https://www.lesswrong.com/posts/cnKvxehpHqWjZJNry/how-do-ai-timelines-affect-existential-risk#Anthropogenic_state_risks

The post contains a graph showing that existential risk from AI is vastly higher than that from climate change and nuclear war combined.
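For reference, a minimal sketch of that comparison, using Ord's headline per-century estimates from The Precipice; these are his stated order-of-magnitude judgments, not precise measurements:

```python
# Toby Ord's per-century existential risk estimates from The Precipice (2020).
risk = {
    "unaligned AI": 1 / 10,
    "climate change": 1 / 1000,
    "nuclear war": 1 / 1000,
}

# Compare AI risk against the two "conventional" risks combined.
combined_conventional = risk["climate change"] + risk["nuclear war"]
ratio = risk["unaligned AI"] / combined_conventional
print(f"AI risk is ~{ratio:.0f}x climate + nuclear combined")  # -> ~50x
```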


Climate change and avoiding WW3 are problems where humans are the players and we know possible solutions and strategies, but interacting with an unaligned AGI is something we are not prepared for, as we have no framework or prior experience to draw on. And as has been said in the alignment community, you get just one try. If you do it wrong or poorly, we are all dead. It's not a joke or an exaggeration.

It could be in 10 years, 30 years, or 50 years, but we NEED all the time and funding we can get to nail it.


I think you are right, but how do we kickstart worthwhile chip ventures outside of China? The industry is brutally competitive, and few startups succeed.


If you use 4 TLDRs in an article, it loses meaning.


Sorry, that was a typo. Fixed.
