10 Comments
Cynthia C Sample

Scott, you’re a genius, and what I most admire is your seeing the reality behind the BS of wealth inequality, which is a moral issue as much as an economic one… a societal decision made by individuals. Thanks for this post.

Chris Guo

Former Microsoft Sr. Director here: half of Google’s and Amazon’s most recent profits in their SEC filings came from real cash flow from their core businesses. The other half came from non-cash, theoretical gains on their stakes in Anthropic.

Imagine you invest in a friend’s lemonade stand. Later, you give him even more money, and you both agree the stand is now worth a billion dollars. Thanks to a quirky accounting rule, you can claim your share of that billion dollars as your own personal "profit" for the year.
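The lemonade-stand mechanic above can be sketched in a few lines of arithmetic. This is a minimal illustration with invented numbers (a 10% stake, hypothetical valuations); the actual accounting treatment depends on the standard applied and the size of the stake.

```python
# Hypothetical illustration of a non-cash revaluation ("paper") gain,
# as in the lemonade-stand example. All numbers are invented.

stake = 0.10                      # assumed ownership fraction
old_valuation = 100_000_000       # implied valuation at the earlier round
new_valuation = 1_000_000_000     # valuation agreed at the later round

# The stake is remeasured at the new valuation; the uplift can flow
# through the income statement as a gain even though no cash changed hands.
old_carrying_value = stake * old_valuation
new_carrying_value = stake * new_valuation
paper_gain = new_carrying_value - old_carrying_value

print(f"Carrying value: ${old_carrying_value:,.0f} -> ${new_carrying_value:,.0f}")
print(f"Non-cash gain recognized: ${paper_gain:,.0f}")
```

The point of the sketch: the $90 million "gain" here is pure remeasurement, which is why headline profits can grow without any new cash arriving.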

People who see blowout AI profits from Big Tech aren't looking carefully into those profits.

Alon Torres

Scott, your argument seems to underweight the trajectory risk. It reads as if today’s AI systems, with their current jaggedness and failure modes, are roughly the systems we’ll be dealing with going forward. But people have been saying LLMs are about to hit a wall since the GPT-3 era, and the wall keeps moving.

Capabilities are not just improving; several measures suggest the pace of improvement has accelerated, especially with reasoning models. A year ago, many of my friends in tech thought AI was mostly hype. Then they were forced to use it in real workflows, and now many of them are openly worried about being replaced in the next year or two.

The latest frontier models are also getting powerful enough that even the current administration, despite its deregulatory instincts, is moving toward pre-deployment government evaluations and reportedly considering formal review of new models before release. The U.S. and China are also reportedly exploring AI guardrail talks to prevent the rivalry from spiraling into crisis.

So I agree that “AI apocalypse” can be used as marketing. But dismissing the risk as mostly narrative seems like wishful thinking, and like a refusal to come to terms with the reality of the situation. It seems unwise to blindly assume today’s gaps are durable. No one has a crystal ball, but it is far from guaranteed that AI systems will remain this jagged while capability, reliability, scaffolding, and adoption keep advancing this quickly.

I wrote more about why I think the usual “technology always creates new jobs” argument breaks down here: https://alont.substack.com/p/what-happens-when-we-automate-our

Kathy Martino

I’ve been using both ChatGPT and Claude, along with some others, for two years. It does not replace a human. It bridges the gap between our ancient brains’ capabilities and the advanced technology we keep creating. It doesn’t have the human emotion and empathy that make us superior. I have thought of things my AI misses. It complements me. Maybe it learns, as I did, to always challenge its responses, or not. I was there at the beginning of computing in the workplace in 1982, the internet in 1992, and next is AI.

Scenarica

"Fear is the product. Capital is the outcome." thats the single best sentence written about AI economics this year and it deserves the extension you didnt quite give it.

The apocalypse narrative isn't just a capital-attraction strategy; it's a talent-attraction strategy. If every software engineer believes AI will replace all jobs within five years, the rational career move is to join the company building the AI rather than the company being disrupted by it. Amodei, Altman, and Musk aren't just scaring investors into funding them; they're scaring the talent pool into working for them. The prediction doubles as a recruitment pitch because the implied message is "the safest place to be when the flood comes is on the ark, and we're building the ark."

Your employment data is the strongest section of this piece, and it needs to be cited more widely because it demolishes the narrative with actual numbers: Meta returning to 2021 headcount, Microsoft at 47% above pre-pandemic levels even after the layoff announcement, net tech employment flat at 9.6 million rather than collapsing. The gap between what the CEOs say in press interviews and what their own HR departments are actually doing is the most revealing data set in the entire debate.

The Oracle example is particularly telling: an 18% workforce reduction with negative cash flow projected through 2030. That's not AI efficiency; that's a company struggling to fund a capital-expenditure cycle that may not generate returns for years. Framing cost-cutting as AI transformation is the corporate equivalent of putting a GPU sticker on a budget memo.

The Shiller narrative-feedback-loop observation is the part of this piece that should worry policymakers more than it currently does. If the apocalypse narrative causes companies to pre-emptively freeze hiring because they believe AI will replace the roles anyway, and workers pre-emptively exit sectors they believe are doomed, you get a labour-market contraction that has nothing to do with AI's actual capabilities and everything to do with the story being told about them. The recession that follows gets attributed to AI when it was actually caused by the narrative about AI. That's the self-fulfilling prophecy in its most precise form.

Your three scenarios are cleanly structured. I'd put rough probabilities on them: bubble bursts (20%), slower-than-expected timeline (55%), disruption faster than adaptation (25%). The middle scenario gets the least attention because it's boring and hard to monetise as content, but it's where most of the evidence currently points, and it's the scenario where thoughtful policy on transition infrastructure could make the biggest difference.

Gary Epstein

Wonderful stuff

Craig Merritt

I've coded applications on and off for the last 50 years, but in my last job I came to the conclusion that it was just not providing me enough human connection. I left it 9 years ago and converted my home into a small hotel, where I have constant face-to-face interaction and constant problem-solving challenges from my guests (I can fix anything). I'm glad I made the transition to something that will be difficult for AI to replace. I hope others can make the transition eventually (granted, mine took a capital infusion to make happen). Human-to-human problem solving is so rewarding.

Denver Sallee

You should listen to your own family of podcasts more, sir. Alice Han on China Decode stated last week that the unemployment rate in China among 16-to-24-year-olds is 16.9%. She also spoke with cohost James Kynge about robots that can make dim sum (3,000 dumplings per hour). China has much higher robot utilization than the US. AI and robots are coming for far more jobs than you keep saying on your other podcasts.

Glenn Whitney

There will be a tipping point. Suddenly almost everyone will prefer driverless taxis. It's coming soon.

Have your people figure out how many humans currently derive a lot of their income from driving taxis.

T Magee

You just don't like change, do you?