Jessica Tarlov and I are live from SXSW with James Talarico — Saturday, March 14, at 3:30 p.m. EDT. Prof G+ subscribers only. Upgrade and register for the livestream here.
Sometimes, you add more value going second. Tim Cook and Satya Nadella did not found Apple and Microsoft, but each took the wheel and increased their company's market capitalization tenfold. Last week, Dario Amodei went first in pushing back on the Trump administration, refusing to let the Department of Defense dictate the policies of a private business. But, in what may be the most undercovered story in tech, Satya Nadella may have changed the political landscape by following his lead.
The flow of capital concentrates around good stories. Entrepreneurs deploy narratives that capture imaginations and capital, pulling the future forward. Narratives also work in reverse. Last month a piece of science fiction masquerading as a research report wiped out $300 billion in market value by describing a near-future scenario where AI led to 10% unemployment, consumer spending collapsed, markets cratered, and the economy was fundamentally altered. More recently, Anthropic CEO Dario Amodei, demonstrating that a crisis is a terrible thing to waste, deployed a narrative that turned a $200 million contract dispute into a branding event that added $150 billion to his firm’s valuation, while de-positioning OpenAI.
Nihilistic Weirdo
OpenAI originated as a nonprofit AI research company. Its mission sounded noble. "Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return," Sam Altman and his co-founders wrote in 2015. OpenAI has since dropped the nonprofit masquerade, registering an $840 billion valuation, with Altman backfilling whatever narrative maintains its hallucinogenic 34x revenue multiple.
In 2024, Altman said the combination of ads and AI was "uniquely unsettling," calling advertising a "last resort" business model. Two years later, the firm is testing ads. In 2023, Altman told a Senate hearing, "If this technology goes wrong, it can go quite wrong." Cut to: reports of users becoming addicted to ChatGPT, forming romantic relationships with chatbots, and experiencing psychosis, followed by multiple wrongful death lawsuits alleging that ChatGPT helped users take their own lives. In a remarkably tone-deaf post on X last October, Altman wrote, "We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases." With safe mode turned off, ChatGPT has a new feature … porn. Between Sam Altman and Elon Musk, whose Grok is the league leader in LLM-generated porn, AI is becoming a race to the bottom (pun intended). OpenAI also has a social network, Sora. But instead of connection, Sora provides users with unlimited AI slop starring … themselves. It also serves up content starring fictional characters and dead celebrities, including Stephen Hawking dying in a skateboard accident and Martin Luther King Jr. wearing a MAGA hat. (The King video has since been removed.)
You don’t need Woodward and Bernstein to follow the money trail from OpenAI’s altruistic origin story to the uncomfortable conclusion that the most dangerous AI isn’t one that goes rogue — it’s the one run by Sam Altman. Consider his response to criticism that Americans are subsidizing AI data centers that have driven up the wholesale cost of electricity by 267%: “People talk about how much energy it takes to train an AI model — but it also takes a lot of energy to train a human. It takes about 20 years of life — and all the food you consume during that time — before you become smart.” On social media, people compared Altman to Agent Smith, the villain from The Matrix who calls humanity a virus. I see it. But I also see Her, a movie Altman is evidently so obsessed with that he stole Scarlett Johansson’s voice for a virtual assistant. Her is a cautionary tale about human connection. Altman watched it and thought: I can monetize that. The film’s tragedy is that Theodore (Joaquin Phoenix) falls for something that was never really there. The tragedy of OpenAI is the same story — and a nihilistic weirdo is getting rich off others’ loneliness.
The Hero We Need
Laddering highlights your strengths by illuminating a competitor’s weakness. Imagine if Kara Swisher, my Pivot co-host, said, “I’m the host with good hair.” It’s a branding twofer: an organic reminder that your adversary sucks and you are wonderful by comparison.
Enter Dario Amodei, the Jekyll to Altman's Hyde. During a recent contract negotiation with the Department of Defense, Anthropic refused to remove safeguards prohibiting the use of the company's technology in autonomous weapons and the mass surveillance of Americans, believing those applications can't be safely and reliably performed by today's AI. Defense Secretary Pete Hegseth responded with a shakedown: The U.S. would brand Anthropic a supply chain risk or seize its tech via the Defense Production Act. To Hegseth, a corporation isn't an entity subject to a fair legal system, one whose profits help fund the Defense Department, but an entity that is either with us or against us. If this movie starring man-children hopped up on steroids they buy at gas stations sounds familiar, trust your instincts. Law firms, universities, and Big Tech have bent the knee, while the rest of corporate America has adopted a duck-and-cover strategy in the face of tariffs that are both illegal and stupid, i.e., hurting others while hurting ourselves.
In contrast, Amodei stood up … for humanity, safety, and the rule of law: Companies have the right to do business with the government, as well as the right to decline, without fear of punishment. Publicly, Altman supported Amodei, but in private he did the deal Anthropic wouldn’t. The following day, after news of Altman’s deal broke, U.S. uninstalls of ChatGPT increased 295%, and Claude climbed to No. 1 in the App Store. Anthropic’s annual recurring revenue surged to $19 billion, from $14 billion just a few weeks ago, adding an estimated $150 billion to its valuation. Altman / OpenAI came across as reckless, duplicitous, and self-serving. Amodei / Anthropic came across as safety-conscious, honest, and selfless.
A year ago, I predicted the first CEO who forcefully and publicly resisted Trump could reap significant benefits, both reputationally and commercially. With its reputation for breaking barriers and the boldness chromosome in its DNA, I thought / hoped it would be Nike. But Amodei just did it … and Microsoft followed his lead, filing a brief in support of Anthropic’s lawsuit seeking to block its designation as a supply chain risk. As one of the largest government contractors, Microsoft has more to lose than almost any tech company. But as Andrew Ross Sorkin put it, “Microsoft decided the cost of staying silent was higher.”
Boycott
In 1880s Ireland, a community neutralized a ruthless land agent named Captain Charles Boycott by collectively refusing to work for, trade with, or even speak to him. Making Boycott the face of a tenant rights campaign wasn’t the right answer (British landlords were far more complicit), but his selection was effective. As historian Rutger Bregman recently wrote, the difference between past movements that fizzled and those that succeeded is simple: “They picked a single target — one that was both symbolically powerful and genuinely vulnerable — and went all in.”
We launched Resist and Unsubscribe to demonstrate to consumers that their wallets are weapons. We wanted to convert public anger into effective action and rewire CEOs' incentives by making clear that enabling fascism carries a financial downside. Movements … move, i.e., they change course, narrow focus, and advance. We encourage everyone to participate in Resist and Unsubscribe however they choose. But going forward, our target is more focused: OpenAI.
While still dominant among LLMs, OpenAI is vulnerable. Its app's market share has fallen from 69% to 45% in the past year, and the company is projected to lose $14 billion in 2026. In addition, QuitGPT has already mobilized 4 million people to boycott OpenAI products. Strength in numbers. OpenAI is also symbolic of fascist enablement. See: Altman's pivot from Trump critic to sycophant just seven days after the inauguration, OpenAI President Greg Brockman's $25 million donation to Trump's super PAC, and the firm's decision to enable mass surveillance of Americans and autonomous weapons without safeguards. Finally, OpenAI is the poster child for an industry facing growing backlash, with 77% of Americans saying they believe AI threatens humanity.
$10,000
Movements build infrastructure to grow. After creating a website that made unsubscribing easy, we launched a meter to track progress. Many of you have joined the movement (welcome), and some have created additional infrastructure (thank you). My personal favorite: Risto Lähdesmäki’s Impact Calculator. If our wallets are weapons, the Impact Calculator is our force multiplier. When one person cancels their $20-per-month ChatGPT subscription, OpenAI loses $240 in annual revenue and sheds $10,000 in valuation. If you have a decent social network, your impact can easily reach six figures. Sharing impact compounds impact.
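The arithmetic behind those figures can be sketched in a few lines. This is a hypothetical back-of-the-envelope version, not the Impact Calculator's actual methodology: the ~42x implied revenue multiple falls out of the article's own numbers, and treating all 4 million QuitGPT boycotters as paid subscribers is an illustrative assumption.

```python
# Back-of-the-envelope math behind the Impact Calculator figures above.
MONTHLY_PRICE = 20        # ChatGPT Plus subscription, $/month
VALUATION_HIT = 10_000    # claimed valuation impact per cancellation, $

annual_revenue_loss = MONTHLY_PRICE * 12                 # $240 per cancellation
implied_multiple = VALUATION_HIT / annual_revenue_loss   # ~41.7x revenue multiple

def movement_impact(cancellations: int) -> tuple[int, int]:
    """Aggregate annual revenue and valuation impact for N cancellations."""
    return cancellations * annual_revenue_loss, cancellations * VALUATION_HIT

# Illustrative upper bound: if every QuitGPT boycotter were a paying subscriber.
revenue, valuation = movement_impact(4_000_000)
print(f"${revenue:,} in revenue; ${valuation:,} in valuation")
```

Even at a fraction of that participation rate, the valuation impact dwarfs the revenue impact, which is the point: a company priced at 40-plus times revenue loses roughly $42 in market value for every dollar of subscription revenue that walks out the door.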
The most powerful agent in the world isn’t GPT-5; it’s a consumer with a conscience and an Unsubscribe button. Sam Altman went from “AI will save humanity” to AI porn and government surveillance tools. He’s the tech bro embodiment of that boyfriend who says he’s “focusing on himself right now” and three weeks later gets engaged to your roommate. Except instead of your roommate, it’s Pete Hegseth. And instead of love, it’s autonomous weapons. Cancel your $20/month subscription. Make him feel something other than the void he’s trying to fill with AI companions and DOD contracts.
Broken
I believe Sam Altman is broken and, worse, could break us. In life there are “tells,” moments and behaviors that provide insight into a person’s character: How they treat their pets; if they make eye contact with service staff; how they talk about their ex. Responding to a question re the energy needs of AI, Sam highlighted how much energy and effort is required to raise a human capable of critical thinking. This is the tell. He embodies what I believe is most concerning about the virus that’s infected Big Tech: For them, ROI supersedes humanity.
The whole shooting match in life is to find people and causes who will let you love and invest in them, who require and accept a great deal from you — possibly more than you’ll ever get back. For me, it’s raising children with a partner, and the reward is the absence of any ROI. It’s the opportunity to invest without the expectation of any return other than that they, someday, become agents of care and comfort for others. AI, GDP, and shareholder value are just the means. The ends are being in a position to give more than you can ever get.
Life is so rich,
P.S. In case you weren’t paying attention, Jessica Tarlov and I are live from SXSW with James Talarico — Saturday, March 14, at 3:30 p.m. EDT. Prof G+ subscribers only. Upgrade and register for the livestream here.