I have a feeling OpenAI won't IPO this year but maybe in 2027 (which they should've done anyway). I think Sarah Friar is right, but unfortunately for OpenAI it's a problem when you combine her comments with everything else: a compressed IPO timeline, Sarah being excluded from important discussions (it's insane that the CEO and CFO aren't in the same room), Sarah not reporting to Sam, and more. There's no business discipline, and there's a non-zero chance they'll get found out soon.
OpenAI--and the wider "generative AI"/LLM business--was always a case of "you can fool (nearly) all of the people, some of the time."
Just think about the last few years of mania: with its product-in-search-of-an-ACTUALLY-profitable-use-case, the massive burning of resources by a few tech bros with their fingers crossed on becoming trillionaires, and a credulous public (including C-suite members...and dang, I've MET 'em in person) ready to believe anything.
But you can't fool everyone. And you certainly can't fool everyone, long term.
Many of us who actually know the systems from a technical standpoint (rather than the woo-woo/"CEO-said-a-thing" journalism) have always been deeply skeptical. Others, who are keeping an eye on actual real-world externalities (vs fantasies of apocalypse or paradise, lol), see the criminality and enshittification these systems have now enabled at scale, running rampant...alongside virtually no measurable benefit.
TL;DR: a chatbot was never going to be your savior. Systems designed to mimic the cosmetic appearance of "thought" IN THE FORM OF LANGUAGE, even at amazing speed, were never going to produce "AGI."
I just hope the reckoning comes sooner rather than later. (Unstable systems jammed wholesale and untested into the entire modern digital infrastructure--just to satisfy the private curiosity (and greed) of a tiny elite--were never gonna end well...so the sooner the market cools on it, the better.)
What intrigues me is that 5 LLMs (ChatGPT, Claude, Gemini, Copilot, Llama) are super-scaling at similar trajectories, with unprecedented mountains of $$$ thrown at each for GPUs, compute, training runs and talent. Like the browser and OS wars of old, 1-2 will “win” and the others will crash, leaving their assets for pickup at deep discounts (like the unlit fiber of the dot-com crash). Investors at some point will abandon the losers. Do we REALLY need five AGIs?
So, what happens to our economic system when:
1) Four LLMs “lose” and investors abandon the super-scaling?
2) Enterprise investments line up behind the AGI “winner”?
3) Jobs are cut and unemployment drifts upward toward double digits?
4) Housing markets collapse in expensive blue tech cities where worker mortgages go into default?
5) Banks are faced with 2007-style bad-debt balance sheets and a mark-to-market collapse?
6) Data centers that served “losers” sit empty while their cutting-edge servers can’t be fire-sold fast enough to the “winner”?
7) Blue states respond with robot taxes on their wealthy citizens and companies to fund suddenly swamped safety nets and dried-up income and property tax receipts?
8) Those blue-state citizens sprint for Texas and Florida as “refugees from socialism,” leaving CA and WA as dried-up husks of states with angry, unemployed citizens, collapsed borrowing capacity, and no wealthy citizens left to tax?
We could all go on, but isn’t this the logical outcome of it all? Isn’t the above story (unfolding within our current system and politics) the base case?
Sam Altman sold his first company, which was a “find my phone” app, for $30m when it had 300 customers. Whether it’s the smokescreen of ChatGPT or Gemini, tech bros haven’t shown a way to make $$$ off this amazing technology to pay for all these data centers. Silly question: do we need to build all these data centers when the technology seems to run fine right now on our phones and computers? I’ve never heard anyone say, “I love AI, but sometimes it runs way too slow. When are the 100 energy/water-draining AI centers gonna be up and running? Because I need this to work faster!”
OpenAI has already evolved from a model-capability company into the credit anchor of AI infrastructure. Its valuation no longer affects only itself. It now shapes the pricing of GPUs, cloud computing, data centers, power infrastructure, networking equipment, ASICs, HBM, advanced packaging, and the capital market’s valuation of the entire AI capex cycle.
That said, OpenAI’s business model cannot be judged simply through the early financial metrics of traditional SaaS or cloud computing. It may indeed become the next-generation software entry point, the automation layer for enterprise workflows, and the infrastructure layer for agentic computing. But precisely because it may carry such a large platform role, it must also prove that its unit economics, cost curve, and revenue curve can eventually close.
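The "unit economics eventually close" point can be made concrete with a toy break-even model. A minimal sketch in Python, where every number, name, and function is invented purely for illustration (none of it comes from OpenAI's actual financials):

```python
# Hypothetical back-of-envelope unit-economics model. All figures below are
# made-up placeholders for illustration, not real OpenAI data.

def breakeven_subscribers(monthly_price, cost_per_query, queries_per_user,
                          fixed_monthly_cost):
    """Subscribers needed for revenue to cover variable plus fixed costs."""
    # Per-user contribution margin: what's left of the subscription fee
    # after paying the inference cost of that user's queries.
    margin_per_user = monthly_price - cost_per_query * queries_per_user
    if margin_per_user <= 0:
        return None  # unit economics never close: every user loses money
    return fixed_monthly_cost / margin_per_user

# Assumed example: $20/mo plan, $0.01 inference cost per query,
# 1,500 queries per user per month, $500M/mo in fixed costs
# (data centers, training amortization, staff).
print(breakeven_subscribers(20.0, 0.01, 1500, 500_000_000))  # → 100000000.0
```

The point of the sketch is only that the curves the comment mentions interact: if per-query inference cost rises (say to $0.02 at the same usage), the per-user margin goes negative and no subscriber count closes the gap, which is exactly the scenario skeptics worry about.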
I can't wait to read their S-1!
I don’t trust Sam Altman.
That poor baby deserves their mother. Sam needs to give that baby back to their mother.
We're going to find out pretty soon whether Altman is as good at the Reality Distortion Field as Steve Jobs. Grab some popcorn.