Discussion about this post

Cynthia C Sample

Scott, you’re a genius, and what I most admire is your seeing the reality behind the BS of wealth inequality, which is a moral thing as much as a comic thing: a societal decision made by individuals. Thanks for this post.

Alon Torres

Scott, your argument seems to underweight the trajectory risk. It reads as if today’s AI systems, with their current jaggedness and failure modes, are roughly the systems we’ll be dealing with going forward. But people have been saying LLMs are about to hit a wall since the GPT-3 era, and the wall keeps moving.

Capabilities are not just improving; several measures suggest the pace of improvement has accelerated, especially with reasoning models. A year ago, many of my friends in tech thought AI was mostly hype. Then they were forced to use it in real workflows, and now many of them are openly worried about being replaced in the next year or two.

The latest frontier models are also getting powerful enough that even the current administration, despite its deregulatory instincts, is moving toward pre-deployment government evaluations and reportedly considering formal review of new models before release. The U.S. and China are also reportedly exploring AI guardrail talks to prevent the rivalry from spiraling into crisis.

So I agree that “AI apocalypse” can be used as marketing. But dismissing the risk as mostly narrative seems like wishful thinking, a refusal to come to terms with the reality of the situation. It seems unwise to assume blindly that today’s gaps are durable. No one has a crystal ball, but it is far from guaranteed that AI systems will remain this jagged while capability, reliability, scaffolding, and adoption keep advancing this quickly.

I wrote more about why I think the usual “technology always creates new jobs” argument breaks down here: https://alont.substack.com/p/what-happens-when-we-automate-our
