
I Will Never Respect A Website

If you like this piece and want to support my independent reporting and analysis, why not subscribe to my premium newsletter? It’s $70 a year, or $7 a month, and in return you get a weekly newsletter that’s usually anywhere from 5,000 to 18,000 words, including vast, detailed analyses of NVIDIA, Anthropic and OpenAI’s finances, and the AI bubble writ large. I recently put out the timely and important Hater’s Guide To The SaaSpocalypse, another on How AI Isn’t Too Big To Fail, and a deep (17,500 word) Hater’s Guide To OpenAI. Subscribing to premium is both great value and makes it possible to write these large, deeply-researched free pieces every week.

Soundtrack: Muse — Stockholm Syndrome

I think the most enlightening thing about AI is that it shows you how even the most mediocre text inspires some sort of emotion. Soulless LinkedIn slop makes you feel frustration with a person for their lack of authenticity, but you can still imagine how they forced it out of their heads. You still connect with them, even if it’s in a bad way.

AI copy is dead. It is inert. The reason you can spot it is that it sounds hollow. I don’t care if a website says stuff because I typed something into it, just like I don’t care if it responds in a way that sounds human, because it all feels like nothing to me. I am not here to give a website respect, I will not be impressed by a website, nor will I grant a website any extra credit if it can’t do the right thing every time. The computer is meant to work for me. If the computer doesn’t do what I want, I change the kind of computer I use. LLMs will always hallucinate, their outputs are not trustworthy as a result, they cannot be deterministic, and any chance of mistakes of any kind is unforgivable. I don’t care how the website made you feel: it’s a machine that doesn’t always work, and that’s not a very good machine.

I feel nothing when I see an LLM’s output. Tell me thank you or whatever, I don’t care. You’re a website. Oh, you can spit out code? Amazing. Still a website.

Perhaps you’ve found value in LLMs. Congratulations! You should feel no compulsion to convince me, nor should you feel any pride in using a particular website. And if you feel you’re being judged for using AI, perhaps you should ask why you feel so vilified. Did the industry do something to somehow warrant judgment? Is there something weird or embarrassing about the product, such as its famous propensity to get things wrong? Perhaps it loses billions of dollars? Oh, it’s damaging to the environment too? And people are telling outright lies about it and constantly saying it’ll replace people’s jobs? And the CEOs are all greedy, oafish sociopaths?

Did you try being cloying, judgmental, condescending, and aggressive to those who don’t like AI? Oh, that didn’t work? I can’t imagine why.

Sounds embarrassing! You must really like that website.

ChatGPT is a website. Claude is a website. While I guess Claude Code runs in a terminal window, that just means it’s an app, which I put in exactly the same mental box as a website.

Yet everything you read or hear or see about AI does everything it can to make you think that AI is something other than a website or an app. People who “discover the power of AI” immediately stop discussing it in the same terms as Microsoft Word, Google, or any other app or website.
It’s never just about what AI can do today, but always about some theoretical “AGI” or vague shit about “AI agents” that are some sort of indeterminate level of “valuable” without anyone being able to describe why. Truly useful technology isn’t described in oblique or hyperbolic terms. For example, last week, IBM’s Dave McCann described using a series of “AI agents” to Business Insider:

Sounds like a website to me. Sounds like a website using an LLM to summarize stuff to me. Why are we making all this effort to talk about what a website does?

My friend, this isn’t a “series of agents.” It’s an LLM that looks at stuff and spits out an answer. Chatbots have done this kind of thing forever. These aren’t “agents.” “Agents” makes it sound like there’s some sort of futuristic autonomous presence rather than a chatbot that’s looking at documents using technology that’s guaranteed to hallucinate incorrect information.

Here’s a fun exercise: replace the word “agent” with “app,” and replace “AI” with “application.” In fact, let’s try that with the next quote:

A variety of functions including searching for stuff, looking at stuff, generating stuff, transcribing a meeting, and searching for stuff. Wow! Who gives a fuck. Every “AI agent” story is either about code generation, summarizing some sort of information source, or generating something based on an information source that you may or may not be able to trust.

“Agent” is an intentional act of deception, and even “modern” agents like OpenClaw and its respective ripoffs ultimately boil down to “I can send you a reminder” or “I can transcribe a text you send me.” Yet everybody seems to want to believe these things are “valuable” or “useful” without ever explaining why. A page of OpenClaw integrations claiming to share “real projects, real automations [and] real magic” includes such incredible, magical use cases as “reads my X bookmarks and discusses them with me,” “check incoming mail and remove spam,” “researches people before meetings and creates briefing docs,” “schedule reminders,” “tracking who visits a website” (summarizing information), and “using voice notes to tell OpenClaw what to do,” which includes “distilling market research” (searching for stuff) and “tightening a proposal” (generating stuff after looking at it).

I’d have no quarrel with any of this if it wasn’t literally described as magical and innovative. This is exactly the shit that software has always done — automations, shortcuts, reminders, and document work. Boring, potentially useful stuff done in an inefficient way requiring a Mac Mini and hundreds of dollars a day of API calls.

Even Stephen Fry’s effusive review of the iPad from 2010, in referring to it as a “magical object,” still referred to it as “class,” “a different order of experience,” remarking on its speed, responsiveness and “smooth glide,” and noting that it’s so simple. Even Fry, a writer beloved for his effervescence and sophisticated lexicon, was still able to point at the things he liked (such as the design and simplicity) in clear terms. Even in couching it in terms of the future, Fry is still able to cogently explain why he’s excited about the present.

Conversely, articles about Large Language Models and their associated products often describe them in one of three ways:

- As if their ability to try to do some of a task allows them to do the entire task.
- As if their ability to do tasks is somehow impressive or a justification for their cost.
- An excuse for why they cannot do more, hinged on something happening in the future.

This simply doesn’t happen outside of bubbles.
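As an aside, the “agent”-to-“app” exercise from earlier really is the entire transformation, and it’s the kind of thing software has always done. A throwaway sketch in Python, purely for illustration:

    # Deflate AI marketing copy: "AI agent" -> "app", "AI" -> "application".
    import re

    def deflate(copy: str) -> str:
        copy = re.sub(r"\bAI agents?\b", "apps", copy)
        copy = re.sub(r"\bagents?\b", "app", copy, flags=re.IGNORECASE)
        return re.sub(r"\bAI\b", "application", copy)

    print(deflate("Our AI agents orchestrate AI to deliver value."))
    # -> "Our apps orchestrate application to deliver value."

Run any “agentic” press release through it and see whether the sentence still sounds futuristic.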
The original CNET review of the iPhone — a technology I’d argue literally changed the way that human beings live their lives — still described it in terms that mirrored the reality we live in:

I’d argue that technologies like cloud storage, contactless payments, streaming music and video, and digital photography have transformed our societies in ways that were obvious from the very beginning. Nobody sat around cajoling us to accept that we’d need to sunset our Nokia 3210s and get used to touchscreens, because using the first iPhone made it blatantly obvious that it was better. Nobody ostracized you for not being sufficiently excited about iPhone apps.

Git, launched in 2005, is arguably one of the most transformational technologies in tech history, changing how software engineers built all kinds of software. And I’d argue that GitHub, which came a few years later, was equally transformational.

I can’t find a single example of somebody being shamed for not being sufficiently excited, other than people arguing over whether Git was the superior version control software, or saying that GitHub, a cloud-based repository for code and collaboration, was obvious in its utility. Those that liked it didn’t feel particularly defensive. Even articles about GitHub’s growth spoke entirely in terms rooted in the present.

I realize this was before the hyper-polarized world of post-Musk Twitter, at a time when venture capital and the tech industry in general were a fraction of their current size, but it’s really weird how different it feels when you read about how the stuff that actually mattered was covered.

I must repeat that this was a very different world with very different incentives. Today’s tech industry is a series of giant group chats across various social networks and physical locations, with a much-larger startup community (Y Combinator’s last batch had 199 people — the first had 8) influenced heavily by the whims of investors and the various cults of personality in the valley. While social pressure absolutely existed, it manifested and mutated at a fraction of the speed of the rabid dogs of Twitter or the current state of Hacker News. There were fewer VCs, too.

In any case, no previous real or imagined tech revolution (outside of cryptocurrency, an industry with obvious corruption and financial incentives) has ever inspired such eager defensiveness, tribalism or outright aggression toward dissenters, nor such ridiculous attempts to obfuscate the truth about a product. We’ve never had a cult of personality around a specific technology at this scale.

There is something that AI does to people — in both the way it functions and the way that people react to it — that inspires them to act defensively, weirdly, tribally. I think it starts with LLMs themselves, and the feeling they create within a user.

We all love prompts. We love to be asked questions about ourselves. We feel important when somebody takes interest in what we’re doing, and even more so when they remember things about it and seem to be paying attention. LLMs are built to completely focus themselves on us, and do so while affirming every single interaction.

Human beings also naturally crave order and structure, which means we’ve created frameworks in our heads about what authoritative-sounding or authoritative-looking information looks like, and the language that engenders trust in it.
We trust Wikipedia both because it’s an incredibly well-maintained library of information packed with citations and because it tonally and structurally resembles an authoritative source. Large Language Models have been explicitly trained (on much of the internet, including Wikipedia) to deliver information in a structured manner that makes us trust it like we would another source, massaged with language we’d expect from a trusted friend or endlessly-patient teacher.

All of this is done with the intention of making you forget that you’re using a website. And that deception is what starts to make people act strangely.

The fact that an LLM can maybe do something is enough to make people try it, along with the constant pressure from social media, peers and the mainstream media. Some people — myself included — have used LLMs to do things, seen that making them do said things isn’t going to happen very easily, and walked away, because I am not going to use a website that doesn’t do what it says.

As I’ve previously said, technology is a tool to do stuff. Some technology requires you to “get used to it” — iPhones and iPads were both novel (and weird) in their time, as was learning to use the ZSA Moonlander keyboard — but basically no example involves you tolerating the inherent failings of the underlying product under the auspices of it “one day being better.” Nowhere else in the world of technology does someone gaslight you into believing that the problems don’t exist or will magically disappear. It’s not like the iPhone only occasionally allowed you to successfully take a photo, with reliable photography something you’d have to wait until the iPhone 3GS to enjoy. While the picture quality improved over time, every generation of iPhone did the same basic things successfully, reliably, and consistently.

I also think that the challenge of making an LLM do something useful is addictive and transformative. When people say they’ve “learned to use AI,” often they mean that they’ve worked out ways to fudge their prompts, navigate its failures, mitigate its hallucinations, and connect it to various different APIs and systems of record in such a way that it now, on a prompt, does something, and because they’re the ones that built this messy little process, they feel superior — because the model has repeatedly told them that they were smart for doing it and celebrated with them when they “succeeded.”

The term “AI agent” exists as both a marketing term and a way to ingratiate the user. Saying “yeah I used a chatbot to do some stuff” sounds boring, like you’re talking to an app or a website, but “using an AI agent” makes you sound like a futuristic cyber-warrior, even though you’re doing exactly the same thing.

LLMs are excellent digital busyboxes for those who want to come up with a way to work differently rather than actually doing work. In WIRED’s article about journalists using AI, Alex Heath boasts that he “feels like he’s cheating in a way that feels amazing”:

The linguistics of “transmitting an idea to an AI agent” misrepresent what is a deeply boring and soulless experience. Alex speaks into a microphone, his words are transcribed, then an LLM burps out a draft. A bunch of different services connect to Claude Cowork, along with a text document (that’s what the “custom set of instructions” is) that says how to write like him; then it writes like him, then he talks to it, and then sometimes he writes bits of the story himself.

This is also most decidedly not automation.
Heath still must sit and prompt a model again and again. He must still maintain connections to various services and make sure the associated documents in Notion are correct. He must make sure that Granola actually gets the transcriptions from his interview. He must (I would hope) still check both the AI transcription and the output from the model to make sure quotes are accurate. He must make sure his calendar reflects accurate information. He must make sure that Claude still follows his “voice and writing style” — if you can call it that given the amount of distance between him and the product.

Well, Alex, you’re not telling anybody anything. Your ideas and words come out of a Large Language Model that has convinced you that you’re writing them.

In any case, Heath’s process is a great example of what makes people think they’re “using powerful AI.” Large Language Models are extremely adept at convincing human beings to do most of the work and then credit “AI” with the outcomes. Alex’s process sounds convoluted and, if I’m honest, a lot more work than the old way of doing things. It’s like writing a blog using a machine from Pee-wee’s Playhouse. I couldn’t eat breakfast that way every morning. I bet it would get old pretty quick.

This is the reality of the Large Language Model era. LLMs are not “artificial intelligence” at all. They do not think, they do not have knowledge, they are conjuring up their own training data (or reflecting post-training instructions from those developing them, or documents instructing them to act a certain way), and any time you try and make them do something more-complicated, they begin to fall apart, and/or become exponentially more-expensive.

You’ll notice that most AI boosters have some sort of bizarre, overly-complicated way of explaining how they use AI. They spin up “multiple agents” (chatbots) that each have their own “skills document” (a text document) and connect “harnesses” (Python scripts, text files that tell it what to do, a search engine, an API) that “let it run agentic workflows” (query various tools to get an outcome).

The so-called “agentic AI” that is supposedly powerful and autonomous is actually incredibly demanding of its human users — you must set it up in so many different ways and connect it to so many different services, and check that every “agent” (different chatbot) is instructed in exactly the right way, and that none of these agents cause any problems (they will) with each other. Oh, and don’t forget to set certain ones to “high-thinking” for certain tasks, make sure that other tasks that are “easier” are given to cheaper models, and make sure that those models are prompted as necessary so they don’t burn tokens.

But the process of setting up all those agents is so satisfying, and when they actually succeed in doing something — even if it took fucking forever, cost a bunch, and is incredibly inefficient — you feel like a god! And because you can “spin up multiple agents,” each one ready and waiting for you to give them commands (and ready to affirm each and every one of them), you feel powerful, like you’re commanding an army that also requires you to monitor whatever it does.

The reason that LLMs have become so interesting for software engineers is that this is already how they lived. Writing software is often a case of taping together different systems and creating little scripts and automations that make them all work, and the satisfaction of building functional software is incredible, even at the early stages.
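In fact, stripped of the branding, an “agentic workflow” is exactly that kind of tape: a loop that sends text to a model and runs whatever tool request comes back. Here’s a minimal sketch of the shape of the thing (call_llm and the tool names are hypothetical stand-ins, not any vendor’s actual API):

    # A minimal sketch of an "agentic workflow": a loop around a chat API.
    # call_llm() is a hypothetical stand-in for whatever model provider
    # you use; the "skills document" is literally just a text file.

    def call_llm(system_prompt: str, conversation: list[dict]) -> str:
        """Stand-in for a vendor chat-completion call."""
        raise NotImplementedError("wire up your model provider here")

    TOOLS = {
        "search": lambda query: f"(search results for {query!r})",
        "read_file": lambda path: open(path).read(),
    }

    def run_agent(task: str, skills_path: str, max_steps: int = 10) -> str:
        system_prompt = open(skills_path).read()   # the "skills document"
        conversation = [{"role": "user", "content": task}]
        for _ in range(max_steps):
            reply = call_llm(system_prompt, conversation)
            conversation.append({"role": "assistant", "content": reply})
            if reply.startswith("TOOL:"):
                # The model asked for a tool ("TOOL: search: some query"),
                # so run it and feed the result back as the next message.
                _, name, arg = (part.strip() for part in reply.split(":", 2))
                tool = TOOLS.get(name, lambda a: f"(no such tool: {name})")
                conversation.append({"role": "user", "content": str(tool(arg))})
            else:
                return reply   # no tool call means the model thinks it’s done
        return "(gave up after max_steps)"

Every “multi-agent harness” is some variation of this loop with more plumbing, and every failure mode described above lives inside the call_llm line, where a model that’s guaranteed to sometimes hallucinate decides what happens next.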
Large Language Models perform an impression of automating that process, but for the most part force you, the user, to do the shit that matters, even if that means “be responsible for the code that it puts out.” Heath’s process does not appear to take less time than his previous one — he’s just moved stuff around a bit and found a website to tell him he’s smart for doing so.

They are Language Models interpreting language without any knowledge or thoughts or feelings or ability to learn, and each time they read something they interpret meaning based on their training data, which means they can (and will!) make mistakes. And when they’re, say, talking to another chatbot to tell it what to do next, that little mistake might build a fundamental flaw into the software, or just break the process entirely.

And the Large Language Model companies — using the media — exist to try and convince you that these mistakes are acceptable. When Anthropic launched its Claude For Finance tool, it claimed to “automate financial modeling” with “pre-built agents” (chatbots), but the tool really appears to just be able to create questionably-useful models via Excel spreadsheets and do “financial research” based on connecting to documents in your various systems, I imagine with a specific system prompt. Anthropic also proudly announced that it had scored a 55.3% on the Finance Agent Test.

I hate to repeat myself, but I will not respect a website, and I will not tolerate something being “55% good” at something if its alleged use case is that it’s an artificial intelligence.

Yet that’s the other remarkable thing about the LLM era — that there are people who are extremely tolerant of potential failures because they believe they’re either A) smart enough to catch them or B) smart enough to build systems that do so for them, with a little sprinkle of “humans make mistakes too,” conflating “an LLM that doesn’t know anything fucking up by definition” with “a human being with experiences and the capacity for adaptation making a mistake.”

I truly have no beef with people using LLMs to spin up Python scripts for fun little automations or to dig through big datasets, but please don’t try and convince me they’re being futuristic by doing so. If you want to learn Python, I recommend reading Al Sweigart’s Automate The Boring Stuff.

Anybody who sneers at you and says you are being “left behind” because you’re not using AI should be forced to show you what it is they’ve created or done, and the specific system they used to do so. They should have to show you how much work it took to prepare the system, and why it’s superior to just doing it themselves.

Andrej Karpathy also had a recent (and very long) tweet about “the growing gap in understanding of AI capability,” involving more word salad than a fucking Sweetgreen:

Wondering what those “staggering improvements” are? The one tangible (and theoretical!) example Karpathy gives is an example of how hard people work to overstate the capabilities of LLMs. “Coherently restructuring” a codebase might happen when you feed it to an LLM (while also costing a shit-ton of tokens, but putting that aside), or the model might not understand it at all because Claude Opus is acting funny that day, or it might sort-of fix it but mess something subtle up that breaks things in the future.
This is an LLM doing exactly what an LLM does — it looks at a block of text, sees whether it matches up with what a user said, sees how that matches with its training data, and then either tells you things to do or generates new code, much like it would do if you had a paragraph of text you needed to fact-check. Perhaps it would get some of the facts right if connected to the right system. Perhaps it might make a subtle error. Perhaps it might get everything wrong.

This is the core problem with the “checkmate, boosters — AI can write code!” argument. AI can write code. We knew that already. It gets “better” as measured by benchmarks that don’t really correspond to real-world success, and even with the supposedly meteoric improvements over the last few months, nobody can actually explain what the result of it being better is, nor does the improvement appear to extend to any domain outside of coding.

You’ll also notice that Karpathy’s language is as ingratiating to true believers as it is vague. Other domains are left unexplained other than references to “research” and “math.” I’m in a research-heavy business, and I have tried the most-powerful LLMs and highest-priced RAG/post-RAG research tools, and every time I find them bereft of any unique analysis or suggestions.

I don’t dispute that LLMs are useful for generating code, nor do I question that they’re being used by software developers at scale. I just think that they would be used dramatically less if there weren’t an industrial-scale publicity campaign run through the media and the majority of corporate America both incentivizing and forcing them to do so. Similarly, I’m not sure anybody would’ve been anywhere near as excited if OpenAI and Anthropic hadn’t intentionally sold them a product that was impossible to support long-term.

This entire industry has been sold on a lie, and as capacity becomes an issue, even true believers are turning on the AI labs.

About a year ago, I warned you that Anthropic and OpenAI had begun the Subprime AI Crisis, with both companies creating “priority processing tiers” for enterprise customers (read: AI startups like Replit and Cursor), dramatically increasing the cost of running their services to the point that both startups had to dramatically change their features as a result. A few weeks later, I wrote another piece about how Anthropic was allowing its subscribers to burn thousands of dollars’ worth of tokens on its $100 and $200-a-month subscriptions, and asked the following question at the end:

I was right to ask: a few weeks ago (as I wrote in The Subprime AI Crisis Is Here), Anthropic added “peak hours” to its rate limits, and users found across the board that they were burning through their limits, in some cases in only a few prompts. Anthropic’s response, after saying it was looking into why rate limits were being hit so fast, was to say that users were using the 1-million-token context window ineffectively and failing to adjust Claude’s “thinking effort level” based on whatever task they were doing.

Anthropic’s customers were (and remain) furious, as you can see in the replies to its thread on the r/Anthropic subreddit. To make matters worse, it appears that — deliberately or otherwise — Anthropic has been degrading the performance of both Claude Opus 4.6 and Claude Code itself, with developers, including AMD Senior AI Director Stella Laurenzo, documenting the problem at length (per VentureBeat):

Think that Anthropic cares?
Think again:

Another developer found that Claude Opus 4.6 was “thinking 67% less than it used to,” though Anthropic didn’t even bother to respond. In fact, Anthropic has done very little to explain what’s actually happening, other than to say that it doesn’t degrade its models to better serve demand.

To be clear, this is far from the only time that I’ve seen people complain about these models “getting dumber” — users on basically every AI subreddit will say, at some point, that models randomly can’t do things they used to be able to, with nobody really having an answer other than “yeah dude, same.” Back in September 2025, developer Theo Browne complained that Claude had got dumber, but Anthropic near-immediately responded to say that the degraded responses were a result of bugs that “intermittently degraded responses from Claude,” adding the following:

Which raises the question: is Anthropic accidentally making its models worse? Because it’s obvious something is happening, it’s obvious Anthropic knows something is happening, and its response, at least so far, has been to say that either users need to tweak their settings or nothing is wrong at all. Yet these complaints have happened for years, and have reached a crescendo with the latest ones that involve, in some cases, Claude Code burning way more tokens for absolutely no reason, hitting rate limits earlier than expected, or wasting actual dollars spent on API calls.

Some suggest that the problems are a result of capacity issues over at Anthropic, which have led to a stunning (at least for software used by millions of people) amount of downtime, per the Wall Street Journal:

This naturally led to boosters (and, for that matter, the Wall Street Journal) immediately saying that this was a sign of the “insatiable demand for AI compute”:

Before I go any further: if anyone has been taking $2.75-per-hour-per-GPU for any kind of Blackwell GPU, they are losing money. Shit, I think they are at $4.08. While these are examples from on-demand pricing (versus paid-up years-long contracts like Anthropic buys), if they’re indicative of wider pricing on Blackwell, this is an economic catastrophe.

In any case, Anthropic’s compute constraints are a convenient excuse to start fucking over its customers at scale. Rate limits that were initially believed to be a “bug” are now the standard operating limits of using Anthropic’s services, and its models are absolutely, fundamentally worse than they were even a month ago.

It’s January 14, 2026, and you just read The Atlantic’s breathless hype-slop about Claude Code, believing that it was “bigger than the ChatGPT moment,” that it was an “inflection point for AI progress,” and that it could build whatever software you imagined. While you’re not exactly sure what it is you’re meant to be excited about, your boss has been going on and on about how “those who don’t use AI will be left behind,” and your boss allows you to pay $200 for a year’s access to Claude Pro.

A few months later, you, as a customer, no longer have access to the product you purchased. Your rate limits are entirely different, service uptime is measurably worse, and model performance has, for some reason, taken a massive dip. You hit your rate limits in minutes rather than hours. Prompts that previously allowed you a healthy back-and-forth over a project are now either impractical or impossible.
Your boss now has you vibe-coding barely-functional apps as a means of “integrating you with the development stack,” but every time you feed it a screenshot of what’s going wrong with the app, you seem to hit your rate limits again. You ask your boss if he’ll upgrade you to the $100-a-month subscription, and he says that “you’ve got to make do, times are tough.” You sit at your desk trying to work out what the fuck to do for the next four hours, as you do not know how to code, and what little you’ve been able to do is now impossible.

This is the reality for a lot of AI subscribers, though in many cases they’ll simply subscribe to OpenAI Codex or another service that hasn’t brought the hammer down on their rate limits.

…for now, at least.

The con of the Large Language Model era is that any subscription you pay for is massively subsidized, and that any product you use can and will see its service degraded as these companies desperately try to either ease their capacity issues or lower their burn rate. Yet it’s unclear whether “more capacity” means that things will be cheaper, or better, or just a way of Anthropic scaling an increasingly-shittier experience.

To explain: when an AI lab like Anthropic or OpenAI “hits capacity limits,” it doesn’t mean that it starts turning away business or stops accepting subscribers, but that current (and new) subscribers will face randomized downtime and model issues, along with increasingly-punishing rate limits. Neither company is facing a financial shortfall as a result of being unable to provide its services (rather, they’re facing financial shortfalls because they’re providing their services to customers). And yet the only people paying the price for these “capacity limits” are the customers.

This is because AI labs must, when planning capacity, make arbitrary guesses about how large the company will get, and in the event that they acquire too much capacity, they’ll find themselves in dire financial straits, as Anthropic CEO Dario Amodei told Dwarkesh Patel back in February:

What happens if you don’t buy enough compute? Well, you find yourself having to buy it last-minute, which costs more money, which further erodes your margins, per The Information:

In other words, compute capacity is a knife-catching game. Ordering compute in advance lets you lock in a better rate, but having to buy compute at the last minute spikes those prices, eating any potential margin that might have been made from serving that extra demand. Order too little compute and you’ll find yourself unable to run stable and reliable services, spiking your costs as you rush to find more capacity. Order too much capacity and you’ll have too little revenue to pay for it.

It’s important to note that the “demand” in question here isn’t revenue waiting in the wings, but customers that are already paying you who want to do more with the product they paid for. More capacity allows you to potentially onboard new customers, but they too face the same problems as your capacity fills.

This also raises the question: how much capacity is “enough”? It’s clear that current capacity issues are a result of the inference (the creation of outputs) demands of Anthropic’s users. What does adding more capacity do, other than potentially bringing that under control?

This also suggests that Anthropic’s (and, by extension, OpenAI’s) business model is fundamentally flawed.
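To put numbers on that knife-catching game, here’s a toy model. Every figure is invented for illustration (these are not Anthropic’s or OpenAI’s actual economics); the point is the asymmetry: pre-ordered compute is cheap but paid for whether or not it’s used, and last-minute compute is expensive.

    # Toy model of the capacity bind. All numbers are invented
    # illustrations, not any AI lab's actual costs or prices.
    PREORDER_COST = 2.0   # $/GPU-hour, locked in years in advance
    SPOT_COST = 4.0       # $/GPU-hour, panic-bought at the last minute
    REVENUE = 3.0         # $/GPU-hour of demand you actually serve

    def margin(preordered: float, demand: float) -> float:
        """Margin when `demand` GPU-hours of usage arrives against
        `preordered` GPU-hours of committed capacity."""
        shortfall = max(demand - preordered, 0.0)
        costs = preordered * PREORDER_COST + shortfall * SPOT_COST
        return demand * REVENUE - costs

    for preordered in (500.0, 1000.0, 1500.0):
        print(f"pre-ordered {preordered:6.0f}: margin {margin(preordered, 1000.0):7.1f}")
    # pre-ordered    500: margin     0.0  (spot purchases eat the margin)
    # pre-ordered   1000: margin  1000.0  (only the perfect guess wins)
    # pre-ordered   1500: margin     0.0  (idle commitments eat the margin)

Guess wrong in either direction and the margin vanishes, which is why the lever that actually gets pulled is the one pointed at existing customers: rate limits, “peak hours,” and quietly worse models.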
At its current infrastructure scale, Anthropic cannot satisfactorily serve its current paying customer base, and even with this questionably-stable farce of a product, Anthropic still expects to burn $14 billion. While adding more capacity might potentially allow new customers to subscribe, said new customers would also add more strain on capacity, which would likely mean that nobody’s service improves but Anthropic still makes more money.

It ultimately comes down to the definition of the word “demand.” Let me explain.

Data center development is very slow. Only 5GW of capacity is under construction worldwide (and “construction” can mean anything from a single steel beam to a near-complete building). As a result, both Anthropic and OpenAI are planning and paying for capacity years in advance based on “demand.” “Demand” in this case doesn’t just mean “people who want to pay for services,” but “the amount of compute that the people who pay us now and may pay us in the future will need for whatever it is they do.”

The amount of compute that a user may use varies wildly based on the model they choose and the task in question — a source at Microsoft told me in the middle of last year that a single user could take up as many as 12 GPUs with a coding task using OpenAI’s o4-mini — which means that in a very real sense these guys are guessing and hoping for the best.

It also means that their natural choice will be to fuck over their current users to ease their capacity issues, especially when those users are paying on a monthly or — ideally — annual basis. OpenAI and Anthropic need to show continued revenue growth, which means that they must have capacity available for new customers, which means that old customers will always be the first to be punished. We’re already seeing this with OpenAI’s new $100-a-month subscription, a kind of middle ground between its $20 and $200-a-month ChatGPT subscriptions that appears to have immediately reduced rate limits for $20-a-month subscribers.

To obfuscate the changes further, OpenAI also launched a bonus rate limit period through May 31, 2026, telling users that they will have “10x or 20x higher rate limits than plus” on its pricing page while also featuring a tiny little note that’s very easy for somebody to miss:

This is a fundamentally insane and deceptive way to run a business, and I believe things will only get worse as capacity issues continue. Not only must Anthropic and OpenAI find a way to make their unsustainable and unprofitable services burn less money, but they must also constantly dance with metering out whatever capacity they have to their customers, because the more extra capacity they buy, the more money they lose.

However you feel about what LLMs can do, it’s impossible to ignore the incredible abuse and deception happening to just about every customer of an AI service. As I’ve said for years, AI companies are inherently unsustainable due to the unreliable and inconsistent outputs of Large Language Models and the incredible costs of providing the services. It’s also clear, at this point, that Anthropic and OpenAI have both offered subscriptions that were impossible to provide at scale at the price and availability at which they were sold in the leadup to 2026, and that they did so with the intention of growing their revenue to acquire more customers, equity investment and attention.

As a result, customers of AI services have built workflows and habits based on an act of deceit.
While some will say “this is just what tech companies do, they get you in when it’s cheap then jack up the price,” saying so is an act of cowardice and allegiance to the rich and powerful. To be clear, Anthropic and OpenAI need to do this. They’ve always needed to do this. In fact, the ethical thing to do would’ve been to charge for and restrict the services in line with their actual costs, so that users could have reliable and consistent access to the services in question.

As of now, anyone who purchases any kind of AI subscription is subject to the whims of both the AI labs and their ability to successfully manage their capacity, which may or may not involve making the product that a user pays for worse. The “demand” for AI as it stands is a fiction, as much of that demand was conjured up using products that were either cheaper or more available. Every one of those effusive, breathless hype-screeds about Claude Code from January or February 2026 is discussing a product that no longer exists. On June 1, 2026, any article or post about Codex’s efficacy must be rewritten, as rate limits will be halved.

While for legal reasons I’ll stop short of the most obvious word, Anthropic and OpenAI are running — intentionally or otherwise — deeply deceitful businesses where their customers cannot realistically judge the quality or availability of the service long-term. These companies are also clearly aware that their services are deeply unprofitable and capacity-constrained, yet aggressively court and market toward new customers, guaranteeing further service degradations and potential issues with models. This applies even to API customers, who face exactly the same downtime and model quality issues, all with the indignity of paying on a per-million-token basis, even when Claude Opus 4.6 decides to crap itself while refactoring something, runs token-intensive “agents” to fix simple bugs, or fails to abide by a user’s guidelines.

This is not a dignified way to use software, nor is it an ethical way to sell it.

How can you plan around this technology? Every month some new bullshit pops up. While incremental model gains may seem like a boon, how do you actually say “ok, let’s plan ahead” for a technology that CHANGES, for better or for worse, at random intervals? You’re constantly reevaluating model choices and harnesses and prompts and all kinds of other bullshit that also breaks in random ways because “that’s how large language models work.” Is that fun? Is that exciting? Do you like this? It seems exhausting to me, and nobody seems to be able to explain what’s good about it.

How, exactly, does this change?

Right now, I’d guess that OpenAI has access to around 2GW of capacity (as of the end of 2025), and Anthropic around 1GW, based on discussions with sources. OpenAI is already building out around 10GW of capacity with Oracle, as well as locking in deals with CoreWeave ($22.4 billion), Amazon Web Services ($138 billion), Microsoft Azure ($250 billion), and Cerebras (“750MW”). Meanwhile, Anthropic is now bringing on “multiple gigawatts of Google’s next-generation TPU capacity” on top of deals with Microsoft, Hut8, CoreWeave and Amazon Web Services.

Both of these companies are making extremely large bets that their growth will continue at an astonishing, near-impossible rate.
If OpenAI has reached “$2 billion a month” (a claim I doubt) with around 2GW of capacity, this means that it has pre-ordered compute assuming it will make $10 billion or $20 billion a month in a few short years, which fits with The Information’s reporting that OpenAI projects it will make $113 billion in revenue in 2028. And if it doesn’t make that much revenue — and also doesn’t get funding or debt to support it — OpenAI will run out of money, much as Anthropic will if that capacity gets built and it doesn’t make tens of billions of dollars a month to pay for it.

I see no scenario where costs come down, or where rate limits are eased. In fact, I think that as capacity limits get hit, both Anthropic and OpenAI will degrade the experience for the user (either through model degradation or rate limit decay) as much as they can. I imagine that at some point enterprise customers will be able to pay for an even higher priority tier, and that Anthropic’s “Teams” subscription (which allows you to use the same subsidized subscriptions as everyone else) will be killed off, forcing anyone in an organization to pay for Claude Code (and eventually Codex) via the API, as has already happened for Anthropic’s enterprise users.

Anyone integrating generative AI is part of a very large and randomized beta test. The product you pay for today will be materially different in its quality and availability in mere months. I told you this would happen in September 2024. I have been trying to warn you this would happen, and I will repeat myself: these companies are losing so much more money than you can imagine, and they are going to twist the knife and take as many liberties with their users and the media as they can on the way down.

It is fundamentally insane that we are treating these companies as real businesses, either in their economics or in the consistency of the product they offer. These are unethical products sold in deceptive ways, both in their functionality and availability, and to defend them is to assist in a society-wide con with very few winners.

And even if you like this, mark my words — your current way of life is unsustainable, and these companies have already made it clear they will make the service worse without warning, and barely acknowledge that they’ve done so, if they acknowledge it at all. The thing you pay for is not sustainable at its current price, and they have no way to fix that problem.

Do you not see you are being had? Do you not see that you are being used? Do any of you think this is good? Does any of this actually feel like progress?

I think it’s miserable, joyless and corrosive to the human soul, at least in the way that so many people talk about AI. It isn’t even intelligent. It’s just more software that is built to make you defend it, to support it, to do the work it can’t, so you can present the work as your own but also give it all the credit. And to be clear, these companies absolutely fucking loathe you. They’ll make your service worse at a moment’s notice and then tell you nothing is wrong.

Anyone using a subscription to OpenAI or Anthropic’s services needs to wake up and realize that their way of life is going away — that rate limits will make current workflows impossible, that prices will increase, and that the product they’re selling even today is not one that makes any economic sense. Every single LLM product is being sold under false pretenses about what’s actually sustainable and possible long term.
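The arithmetic underneath that false pretense is simple. Using the capacity and revenue figures above (reported claims and my estimates, so treat every number as rough):

    # Back-of-the-envelope scaling, using the figures cited above
    # (reported claims and estimates, not audited financials).
    current_capacity_gw = 2.0      # OpenAI's estimated capacity, end of 2025
    revenue_per_month = 2e9        # the claimed "$2 billion a month"

    revenue_per_gw = revenue_per_month / current_capacity_gw   # $1B per GW-month

    oracle_buildout_gw = 10.0      # the Oracle build-out alone
    needed = oracle_buildout_gw * revenue_per_gw
    print(f"${needed / 1e9:.0f}B/month")   # -> $10B/month, ~$120B/year

That’s $10 billion a month just to hold the current (already loss-making) revenue-to-capacity ratio on the Oracle capacity alone, before the CoreWeave, AWS, Azure and Cerebras deals are counted, and it lands roughly on that $113 billion 2028 projection. Miss it, and the bills still arrive.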
With AI, you’re not just the product, you’re a beta tester that pays for the privilege. And you’re a mark for untrustworthy con men selling software using deceptive and dangerous rhetoric.

I will be abundantly clear, for legal reasons: it is illegal to throw a Molotov cocktail at anyone, and it is morally objectionable to do so. I explicitly and fundamentally object to the recent acts of violence against Sam Altman. It is also morally repugnant for Sam Altman to somehow suggest that the careful, thoughtful, determined, and eagerly fair work of Ronan Farrow and Andrew Marantz is in any way responsible for these acts of violence. Doing so is a deliberate attempt to chill the air around criticism of AI and its associated companies. Altman has since walked back the comments, claiming he “wishes he hadn’t used” a non-specific amount of the following words:

These words remain on his blog, which suggests that Altman doesn’t regret them enough to remove them.

I do, however, agree with Mr. Altman that the rhetoric around AI does need to change. Both he and Mr. Amodei need to immediately stop overstating the capabilities of Large Language Models. Mr. Altman and Mr. Amodei should not discuss being “scared” of their models, or being “uncomfortable” that men such as they are in control (unless they wish to shut down their services), or that they “don’t know if models are conscious.”

They should immediately stop misleading people through company documentation claiming that models are “blackmailing” people or, as Anthropic did in its Mythos system card, suggesting a model has “broken containment and sent a message” when it A) was instructed to do so and B) did not actually break out of any container. They must stop discussing threats to jobs without actual meaningful data that is significantly more sound than “jobs that might be affected someday but for now we’ve got a chatbot.” Mr. Amodei should immediately cease any and all discussions of AI potentially or otherwise eliminating 50% of white collar jobs, just as Mr. Altman should cease predicting when Superintelligence might arrive, and Mr. Amodei should actively reject and denounce any suggestion of AI “creating a white collar bloodbath.”

Those that defend AI labs will claim that these are “difficult conversations that need to be had,” when in actuality they engage in dangerous and frightening rhetoric as a means of boosting a company’s valuation and garnering attention. If either of these men truly believed these things were true, they would do something about it other than saying “you should be scared of us and the things we’re making, and I’m the only one brave enough to say anything.”

These conversations are also nonsensical and misleading when you compare them to what Large Language Models can do, and this rhetoric is a blatant attempt to scare people into paying for software today based on what it absolutely cannot and will not do in the future. It is an attempt to obfuscate the actual efficacy of a technology as a means of deceiving investors, the media and the general public. Both Altman and Amodei engage in the language of AI doomerism as a means of generating attention, revenue and investment capital, actively selling their software and future investment potential based on their ownership of a technology that they say (disingenuously) is potentially going to take everybody’s jobs.
Based on reports from his Instagram, the man who threw the Molotov cocktail at Sam Altman’s house was at least partially inspired by If Anyone Builds It, Everyone Dies, a doomer porn fantasy written by a pair of overly-verbose dunces spreading fearful language about the power of AI, inspired by the fearmongering of Altman himself. Altman suggested in 2023 that one of the authors might deserve the Nobel Peace Prize. I only see one side engaged in dangerous rhetoric, and it’s the one that has the most to gain from spreading it.

I need to be clear that this act of violence is not something I endorse in any way. I am also glad that nobody was hurt. I also think we need to be clear about the circumstances — and the rhetoric — that led somebody to do this, and why the AI industry needs to be well aware that the society it’s continually threatening with job loss is one full of people that are very, very close to the edge. This is not about anybody being “deserving” of anything, but a frank evaluation of cause and effect.

People feel like they’re being fucking tortured every time they load social media. Their money doesn’t go as far. Their financial situation has never been worse. Every time they read something, it’s a story about ICE patrols or a near-nuclear war in Iran, or that gas is more expensive, or that there are worrying things happening in private credit. Nobody can afford a house, and layoffs are constant.

One group, however, appears to exist in an alternative world where anything they want is possible. They can raise as much money as they want. They can build as big a building as they want anywhere in the world. Everything they do is taken so seriously that the government will call a meeting about it. Every single media outlet talks about everything they do. Your boss forces you to use their products. Every piece of software forces you to at least acknowledge that it uses AI too. Everyone is talking about it with complete certainty despite it not being completely clear why.

As many people writhe in continual agony and fear, AI promises — but never quite delivers — some sort of vague utopia at the highest cost known to man. And these companies are, in no uncertain terms, coming for your job. That’s what they want to do. They all say it. They use deceptively-worded studies that talk about “AI-exposed” careers to scare and mislead people into believing LLMs are coming for their jobs, all while spreading vague proclamations about how said job loss is imminent but also always 12 months away. Altman even says that jobs that will vanish weren’t real work to begin with, much as former OpenAI CTO Mira Murati said that some creative jobs shouldn’t have existed in the first place.

These people, who sell a product with no benefit comparable on any level to its ruinous, trillion-dollar cost, are able to get anything they want at a time when those who work hard are given a kick in the fucking teeth, sneered at for not “using AI” that doesn’t actually seem to make their lives easier, and then told that their labor doesn’t constitute “real work.” At a time when nobody living a normal life feels like they have enough, the AI industry always seems to get more. There’s not enough money for free college or housing or healthcare or daycare, but there’s always more money for AI compute. Regular people face the harshest credit market in generations, but private credit and specifically data centers can always get more money and more land.

AI can never fail — it can only be failed.
If it doesn’t work, you simply don’t know how to “use AI” properly and will be “at a huge disadvantage,” despite the sales pitch being “this is intelligent software that just does stuff.” AI companies can get as much attention as they need, their failings explained away, their meager successes celebrated like the ball dropping on New Year’s Eve, their half-assed, sub-War Of The Worlds “Mythos” horseshit treated like they’ve opened the gates of Hell.

Regular people feel ignored and like they’re not taken seriously, and the people being given the most money and attention are the ones loudly saying “we’re richer than anyone has ever been, we intend to spend more than anyone has ever spent, and we intend to take your job.” Why are they surprised that somebody mentally unstable took them seriously? Did they not think that people would be angry? Constantly talking about how your company will make an indeterminate number of people jobless, while also being able to raise over $162 billion in the space of two years and taking up as much space on Earth as you please, is something that could send somebody over the edge.

Every day the news reminds you that everything sucks and is more expensive, unless you’re in AI, where you’ll be given as much money as you want and told you’re the most special person alive. I can imagine it tearing at a person’s soul as the world beats them down.

What they did was a disgraceful act of violence. Unstable people in various stages of torment act in erratic and dangerous ways. The suspect in the Molotov cocktail incident apparently had a manifesto where he had listed the names and addresses of Altman and multiple other AI executives, and, per CNBC, discussed the threat of AI to humanity as a justification for his actions. I am genuinely happy to hear that this person was apprehended without anyone being hurt.

These actions are morally wrong, and are also the direct result of the AI industry’s deceptive and manipulative scare campaign, one promoted by men like Altman and Amodei, as well as doomer fanfiction writers like Yudkowsky and, of course, Daniel Kokotajlo of AI 2027 — both of whom have had their work validated and propagated via the New York Times. On the subject of “dangerous rhetoric,” I think we need to reckon with the fact that the mainstream media has helped spread harmful propaganda, and that a lack of scrutiny of said propaganda is causing genuine harm.

I also do not hear any attempts by Mr. Altman to deal with the actual, documented threat of AI psychosis, and the people who have been twisted by Large Language Models into taking their lives and those of others. These are acts of violence that could have been stopped had ChatGPT and similar applications not been anthropomorphized by design, and trained to be “friendly.”

These dangerous acts of violence were not inspired by Ronan Farrow publishing a piece about Sam Altman. They were caused by a years-long publicity campaign that has, since the beginning, been about how scary the technology is and how much money its owners make. I separately believe that these executives and their cohort are intentionally scaring people as a means of growing their companies, and that these continual statements of “we’re making something to take your job and we need more money and space to do it” could be construed as a threat by somebody who’s already on edge.

I agree that the dangerous rhetoric around AI must stop.
Dario Amodei and Sam Altman must immediately cease their manipulative and disingenuous scare-tactics, and begin describing Large Language Models in terms that match their actual abilities, all while dispensing with any further attempts to extrapolate their future capabilities.

Enough with the fluff. Enough with the bullshit. Stop talking about AGI. Start talking about this like regular old software, because that’s all that ChatGPT is.

In the end, if Altman wants to engage with “good-faith criticism,” he should start acting in good faith. That starts with taking ownership of his role in a global disinformation campaign. It starts with recognizing how the AI industry has sold itself based on spreading mythology with the intent of creating unrest and fear. And it starts with Altman and his ilk accepting any kind of responsibility for their actions.

I’m not holding my breath.


Premium: The Hater's Guide to OpenAI

Soundtrack: The Dillinger Escape Plan — Setting Fire To Sleeping Giants

In what The New Yorker’s Andrew Marantz and Ronan Farrow called a “tense call” after his brief ouster from OpenAI in 2023, Sam Altman seemed unable to reckon with a “pattern of deception” across his time at the company:

No, he cannot. Sam Altman is a deeply-untrustworthy individual who, like OpenAI itself, lives on the fringes of truth, using a compliant media to launder statements that are, for legal reasons, difficult to call “lies” but certainly resemble them.

For example, back in November 2025, Altman told venture capitalist Brad Gerstner that OpenAI was doing “well more” than $13 billion in annual revenue when the company would do — and this is assuming you believe CNBC’s source — $13.1 billion for the entire year. I guarantee you that, if pressed, Altman would say that OpenAI was doing “well more than” $13 billion of annualized revenue at the time, which was likely true based on OpenAI’s stylized math, which works out as so (per The Information):

This means that, per CNBC’s reporting, OpenAI barely scratched $10 billion in revenue in 2025, and that every single story about OpenAI’s revenue other than my own reporting (which came directly from Azure) massively overinflates its sales. The Information’s piece about OpenAI hitting $4.3 billion in revenue in the first half of 2025 should really say “$3.44 billion,” but even then, my own reporting suggests that OpenAI likely made a mere $2.27 billion in the first half of last year, meaning that even that $10 billion number is questionable.

It’s also genuinely insane to me that more people aren’t concerned about OpenAI, not as a creator of software, but as a business entity continually misleading its partners, the media, and the general public. To put it far more bluntly, the media has failed to hold OpenAI accountable, enabling a company built on deception, and rationalizing and normalizing ridiculous and impossible ideas just because Sam Altman said them.

Let me give you a very obvious example. About a month ago, per CNBC, “...OpenAI reset spending expectations, telling investors its compute target was around $600 billion by 2030.” This is, on its face, a completely fucking insane thing to say, even if OpenAI were a profitable company. Microsoft, a company with hundreds of billions of dollars of annual revenue, has about $42 billion in quarterly operating expenses.

OpenAI cannot afford to pay for these agreements. At all. Hell, I don’t think any company can! And instead of saying that, or acknowledging the problem, CNBC simply repeats the statement of “$600 billion in compute spend,” laundering Altman and OpenAI’s reputation as it did (with many of the same writers and TV hosts) with Sam Bankman-Fried.

CNBC claimed mere months before the collapse of FTX that it had grown revenue by 1,000% “during the crypto craze,” with its chief executive having “...survived the market wreckage and still expanded his empire.” You might say “how could we possibly know?” and the answer is “read CNBC’s own reporting that said that Bankman-Fried intentionally kept FTX in the Bahamas,” which said that Bankman-Fried had intentionally reduced his stake in Canadian finance firm Voyager (which eventually collapsed on similar terms to FTX) to avoid regulatory disclosures around the finances of Alameda, Bankman-Fried’s investment vehicle.
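A quick note on the “stylized math,” because the trick lives in the word “annualized”: annualized revenue is conventionally the most recent month multiplied by twelve, which lets a company with one good month claim a figure the calendar year never delivers. A sketch with invented monthly numbers in the ballpark of the figures above:

    # "Annualized" vs. actual revenue. Monthly figures are invented
    # for illustration; they are not OpenAI's real numbers.
    monthly = [0.5, 0.55, 0.6, 0.65, 0.7, 0.75,
               0.8, 0.9, 0.95, 1.0, 1.1, 1.2]   # $B per month, growing

    actual_year = sum(monthly)           # 9.7  -> "barely scratched $10B"
    annualized_nov = monthly[10] * 12    # 13.2 -> "well more than $13B"

    print(f"calendar year: ${actual_year:.1f}B")
    print(f"November, annualized: ${annualized_nov:.1f}B")

Both statements can be technically true at once, which is exactly what makes the annualized number so useful to someone who wants to mislead without lying.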
That CNBC piece was written by a reporter who has helped launder the reputation of Stargate Abilene, claiming it was “online” despite only a fraction of its capacity actually existing. The same goes for OpenAI’s $300 billion deal with Oracle, which OpenAI cannot afford and Oracle does not have the capacity to serve. These deals do not make any logical sense, the money does not exist, and the utter ridiculousness of reporting them as objective truths rather than ludicrous overpromises allowed Oracle’s stock to pump and OpenAI to continue pretending it could actually ever have hundreds of billions of dollars to spend.

OpenAI now claims it makes $2 billion a month, but even then I have serious questions about how much of that is real money, considering the proliferation of discounted subscriptions (such as the ones that pop up when you cancel, offering you three months of discounted access to ChatGPT Plus) and free compute deals, such as the $2,500 given to Ramp customers, millions of tokens in exchange for sharing your data, the $100,000 token grants given to AI policy researchers, and the OpenAI For Startups program that appears to offer thousands (or even tens of thousands) of dollars of tokens to startups. While I don’t have proof, I would bet that OpenAI likely includes these free tokens in its revenues and then counts them as part of its billions of dollars of sales and marketing spend.

I also think that revenue growth is a little too convenient, accelerating only to match Anthropic, which recently “hit” $30 billion in annualized revenue under suspicious circumstances. I can only imagine OpenAI will soon announce that it’s actually hit $35 billion in annualized revenue, or perhaps $40 billion in annualized revenue, and if that happens, you know that OpenAI is just making shit up.

Regardless, even if OpenAI is actually making $2 billion a month in revenue, it’s likely losing anywhere from $4 billion to $10 billion to make that revenue. Per my own reporting from last year, OpenAI spent $8.67 billion on inference to make $4.329 billion in revenue, and that’s not including training costs that I was unable to dig up — and those numbers were before OpenAI spent tens of millions of dollars in inference costs propping up its doomed Sora video generation product, or launched its Codex coding environment. In simpler terms, OpenAI’s costs have likely accelerated dramatically with its supposed revenue growth.

And all of this is happening before OpenAI has to spend the majority of its capital. Oracle has, per my sources in Abilene, only managed to successfully build and generate revenue from two buildings out of the eight that are meant to be done by the end of the year, which means that OpenAI is only paying a small fraction of the final costs of one Stargate data center. Its $138 billion deal with Amazon Web Services is only in its early stages, and as I explained a few months ago in the Hater’s Guide To Microsoft, Redmond’s Remaining Performance Obligations that it expects to make revenue from in the next 12 months have remained flat for multiple quarters, meaning that OpenAI’s supposed purchase of “an incremental $250 billion in Azure compute” is yet to commence. In practice, this means that OpenAI’s expenses are likely to massively increase in the coming months.
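To spell out the unit economics in that paragraph, using the article’s own reported figures (which exclude training costs entirely):

    # OpenAI's reported unit economics, per the figures cited above.
    # Inference spend only; training costs are excluded (unknown).
    revenue = 4.329e9
    inference_cost = 8.67e9

    print(f"${inference_cost / revenue:.2f} of inference per $1 of revenue")
    print(f"gross margin: {(revenue - inference_cost) / revenue:.0%}")
    # -> $2.00 of inference per $1 of revenue
    # -> gross margin: -100%

Selling a dollar for two isn’t a margin problem you grow out of: the more revenue OpenAI books at that ratio, the more it loses, which is why accelerating revenue and accelerating costs arrive together.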
And while the “$122 billion” funding round it raised — with $35 billion of it contingent on either AGI or going public (Amazon), and $60 billion of it paid in tranches by SoftBank and NVIDIA — may seem like a lot, keep in mind that OpenAI had received $22.5 billion from SoftBank on December 31 2025, a little under four months ago.

This suggests that either OpenAI is running out of capital, or has significant up-front commitments it needs to fulfil, requiring massive amounts of cash to be sent to Amazon, Microsoft, CoreWeave (which it pays on net 360 terms) and Oracle.

And if I’m honest, I think the entire goal of the funding round was to plug OpenAI’s leaky finances long enough to take it public, against the advice of CFO Sarah Friar. One under-discussed part of Farrow and Marantz’s piece was a quote about OpenAI’s overall finances, emphasis mine:

As I wrote up earlier in the week, OpenAI CFO Sarah Friar does not believe, per The Information, that OpenAI is ready to go public, and is concerned about both revenue growth slowing and OpenAI’s ability to pay its bills:

To make matters worse, Friar also no longer reports to Altman — and god is it strange that the CFO doesn’t report to the CEO! — and it’s actually unclear who it is she reports to at all, as the person she had been reporting to, Fiji Simo, has taken an indeterminately-long medical leave of absence. Friar has also, per The Information, been left out of conversations around financial planning for data center capacity.

These are the big, flashing warning signs of a company with serious financial and accounting issues, run by Sam Altman, a CEO with a vastly-documented pattern of lies and deceit. Altman is sidelining his CFO, rushing the company to go public so that his investors can cash out and the larger con of OpenAI can be dumped onto public investors. And beneath the surface, the raw economics of OpenAI do not make sense.

You’ll notice I haven’t talked much about OpenAI’s products yet, and that’s because I do not believe they can exist without venture capital funding them and the customers that buy them. These products only have market share as long as other parties continue to build capacity or throw money into the furnace. To explain:

OpenAI’s ChatGPT subscriptions are, like every LLM product, deeply unprofitable, which means that OpenAI needs constant funding to keep providing them. I have found users of OpenAI Codex who have been able to burn between $1,000 and $2,000 in the space of a week on a $200-a-month subscription, and OpenAI just reset rate limits for the second time in a month. This isn’t a real business.

OpenAI’s API customers (the ones paying for access to its models) are, for the most part, venture-backed startups providing services like Cursor and Perplexity that are powered by these models. These startups are all incredibly unprofitable, requiring them to raise hundreds of millions of dollars every few months (as is the case with Harvey, Lovable, and many other big-name AI firms), which means that a large chunk of OpenAI’s revenue — some estimate around 27% — is dependent on customers that stop existing the moment that venture capital slows down.

OpenAI’s infrastructure partners like CoreWeave and Oracle are taking on anywhere from a few billion to over a hundred billion dollars’ worth of debt to build data centers for OpenAI, putting both companies in material jeopardy in the event of OpenAI’s failure to pay or overall collapse. 67% of CoreWeave’s 2025 revenue came from Microsoft renting capacity to rent to OpenAI, and OpenAI makes up $22 billion (32%) of CoreWeave’s $66.8 billion revenue backlog, which it must build more capacity to fill. Oracle took on $38 billion in debt in 2025, and is in the process of raising another $50 billion as it lays off thousands of people, with said debt’s only purpose being building data center capacity for OpenAI.

OpenAI’s lead investor SoftBank is putting itself in dire straits to fund the company, with over $60 billion invested so far, existentially tying SoftBank’s overall financial health to both OpenAI’s stock price and SoftBank’s ability to continue paying (or refinancing) its loans. SoftBank took on a year-long $15 billion bridge loan in 2025, had to sell its entire stake in NVIDIA, and expanded its ARM-stock-backed margin loan to over $11 billion to give OpenAI $30 billion in 2025, and then took on another $40 billion bridge loan a few weeks ago to fund the $30 billion it promised for OpenAI’s latest funding round.

While OpenAI is not systemically necessary, the continued enabling and normalization of its egregious and impossible promises has created an existential threat to the multiple parties named above. Its continued existence requires more money than anybody has ever raised for a company — private or public — and in the event it’s allowed to go public, I believe that both retail investors and large equity investors like SoftBank will be left holding the bag.

OpenAI has a fundamental lack of focus as a business, despite how many articles have claimed over the last year that it’s working on a “SuperApp” and has some sort of renewed plan to take on whoever it is that OpenAI perceives as the competition in any given calendar month.

Everything OpenAI does is a reaction to somebody else. Its Atlas browser was a response to Perplexity’s Comet browser, its first (of multiple!) Code Reds in 2025 was a reaction to Google’s Gemini 3, and its rapid deployment of its Codex model and platform was to compete with Anthropic’s Claude Code. I’ve read about this company and the surrounding industry for hours a day for several years, and I can’t think of a single product that OpenAI has launched first.
Even its video-generating social network app Sora was beaten to market by five days by Meta’s putrid and irrelevant “Vibes.” Actually, that’s not true. OpenAI did have one original idea in 2025 — the launch of GPT-5, a much-anticipated new model launch that included a “model router” to make it “more efficient,” except it turned out that it boofed on benchmarks and that the model router actually made it (as I reported last year) more expensive, which led to the router being retired in December 2025.

I tend to be pretty light-hearted in what I write, but please take me seriously when I say I have genuine concerns about the dangers posed by OpenAI. I believe that OpenAI is an incredibly risky entity, not due to the power of its models or its underlying assets, but due to Sam Altman’s ability to con people and find others that will con in his stead. Those responsible for rooting out con artists — regulators, investors, and the media — have not simply failed, but actively assisted Altman in this con. Here’re the crucial elements of the con:

Creating a halo of uncertainty around the actual efficacies of LLMs, to the point that a cult of personality grew around a technology whose actual outcomes and efficacies were so obfuscated that it could be sold based on what it might do rather than what it actually does.

Creating a halo of “genius” around Altman himself, aided by constant and vague threats of human destruction, with the suggestion that only Altman could solve them.

Normalizing the idea that it’s both necessary and important to let a company burn billions of dollars.

Normalizing the idea that it’s okay for a company to have perpetual losses, and perpetuating the idea that these losses are necessary for innovation to continue at large.

Sam Altman is a dull, mediocre man that loves money and power. He appears to be superficially charming, but his actual skill is ingratiating himself with others and having them owe him favors, or otherwise feel somehow indebted to him. He remembers people’s names and where he met them, and is very good at emailing people, writing checks, or finding reasons for somebody else to write a check. He is not technical — he can barely code and misunderstands basic machine learning (to quote Futurism) — but is very good at making the noises that people want to hear, be they big scary statements that confirm their biases or massive promises of unlimited revenue that don’t really make any rational sense.

While OpenAI might have started on noble terms, it has since morphed into a massive con led by the Valley’s most-notable con artist.

I realize that those who like AI might find this offensive, but what else do you call somebody who makes promises they can’t keep ($300 billion to Oracle, $200 billion of revenue by 2030), spreads nonsensical financials (promises to spend $600 billion in compute), makes announcements of deals that don’t exist (see: NVIDIA’s $100 billion funding and the entire Stargate project), and speaks in hyperbolic terms to pump the value of his stock (such as basically every time he talks about Superintelligence)?

Altman has taken advantage of a tech and business media that wants to see him win, a market divorced from true fundamentals, desperate venture capitalists at the end of their rope, hyperscalers that have run out of hypergrowth ideas, and multiple large companies like Oracle and SoftBank that are run by people that can’t do maths. OpenAI is a pseudo-company that can only exist with infinite resources, its software sold on lies, its infrastructure built and paid for by other parties, and its entire existence fueled by compounding layers of leverage and risk.

OpenAI has never made sense, and was only rationalized through a network of co-conspirators. OpenAI has never had a path to profitability, and never had a product that was worthy of the actual cost of selling it. The ascension of this company has only been possible as part of an exploitation of ignorance and desperation, and its collapse will be dangerous for the entire tech industry.

Today I’ll explain in great detail the sheer scale of Sam Altman’s con, how it was executed, the danger it poses to its associated parties, and how it might eventually collapse.
This is the Hater’s Guide To OpenAI, or Sam Altman, Freed.


AI Is Really Weird

If you like this piece and want to support my independent reporting and analysis, why not subscribe to my premium newsletter? It’s $70 a year, or $7 a month, and in return you get a weekly newsletter that’s usually anywhere from 5,000 to 18,000 words, including vast, detailed analyses of NVIDIA, Anthropic and OpenAI’s finances, and the AI bubble writ large. I just put out a massive Hater’s Guide To The SaaSpocalypse, as well as last week’s deep dive into How AI Isn't Too Big To Fail. Subscribing helps directly support my free work, and premium subscribers don’t see this ad in their inbox.

I can’t get over how weird the AI bubble has become. Hyperscalers are planning to spend over $600 billion on data center construction and GPUs predominantly bought from NVIDIA, the largest company on the stock market, all to power generative AI, a technology that’s so powerful that none of them will discuss how much it’s making them, or what it is we’re all meant to be so excited about.

To make matters weirder, Microsoft, a company that spent $37.5 billion in capital expenditures in its last quarter on AI, recently updated the terms and conditions of its LLM-powered “Copilot” service to say that it was “for entertainment purposes only,” discussing a product that apparently has 15 million users as part of enterprise Microsoft 365 subscriptions, and is sold to both local and national governments overseas, including the US federal government.

That’s so weird! What’re you doing, Microsoft? What do you mean it’s for entertainment purposes? You’re building massive data centers to drive this!

Well, okay, you’re building them at some point. As I discussed a few weeks ago, despite everybody talking about the hundreds of gigawatts of data centers being built “to power AI,” only 5GW are actually “under construction,” with “under construction” meaning anything from “we’ve got some scaffolding up” to “we’re about to hand over the keys to the customer.”

But isn’t it weird we’re even building those data centers to begin with? Why? What is it that AI does that makes it so essential — or, rather, entertaining — that we keep funding and building these things? Every day we hear about “the power of AI,” we’re beaten over the head with scary propaganda saying “AI will take our jobs,” but nobody can really explain — outside of outright falsehoods about “AI replacing all software engineers” — what it is that makes any of this worthy of taking up any oxygen, let alone essential, or a justification for so many billions of dollars of investment.

Instead of providing an actual answer of some sort, AI boosters respond by saying it’s “just like the dot com bubble” — another weird thing to do considering 168,000 people lost their jobs as the NASDAQ dropped by 80% in two years, only 16% of the world even used the internet, and those that did in America had an average internet speed of 50 kilobits per second (and only 52% of them had access in 2000 anyway). Conversely, to quote myself:

And with that incredibly easy access, only 3% of households pay for AI. Boosters will again use this talking point to say that “we’re in the early days,” but that’s only true if you think that “early days” means “people aren’t really using it yet.”

Yet the “early days” argument is inherently deceptive.
While the Large Language Model hype cycle might have only begun in 2022, the entirety of the media and markets have focused their attention on AI, along with hundreds of billions of dollars of venture capital and nearly a trillion dollars of hyperscale capex investment. AI progress isn’t hampered by a lack of access, talent, resources, novel approaches, or industry buy-in, but by a single-minded focus on Large Language Models, a technology that has been so obviously-limited from the very beginning that Gary Marcus was able to call it in 2022.

Saying it’s “the early days” also doesn’t really make sense when faced with the rotten and incredibly unprofitable economics of AI. The early days of the internet were not unprofitable due to the underlying technology of serving websites, but because of the incredibly shitty businesses that people were building. Pets.com spent $400 per customer in customer acquisition costs, millions of dollars on advertising, and had hundreds of employees for a business with a little over $600,000 in quarterly revenue — and as a result, nothing about its failure was about “the early days of the internet” at all, as was the case with Kozmo, or any number of other dot com flameouts.

Similarly, internet infrastructure companies like Winstar collapsed because they tried to grow too fast and signed stupid deals, not because of any flaws in the underlying technology. For example, in 1998, Lucent Technologies signed its largest deal — a $2 billion “equipment and finance agreement” — with telecommunications company Winstar, which promised to bring in “$100 million in new business over the next five years” and build a giant wireless broadband network, along with expanding Winstar’s optical networking. Eager math-heads in the audience will be able to see the issue of borrowing $2 billion to make $100 million over five years, as eager news-heads will laugh at WIRED magazine in 1999 saying that Winstar’s “small white dish antennas…[heralded] a new era and new mind-set in telecommunications.” Winstar died two years later because its business was built to grow at a rate that its underlying product couldn’t support. In the end, microwave internet (high-speed internet delivered via radio waves) has become an $8 billion-a-year industry, despite everybody’s excitement.

In any case, anyone who tells you that we’re in “the early days of AI” has either been conned or is in the process of conning you, as they’re using it to deflect from issues of efficacy or underlying economic weakness.

In fact, that’s a great place to go next. Probably the weirdest thing about this entire era is how nobody wants to talk about the fact that AI isn’t actually doing very much, and that AI agents are just chatbots plugged into an API. Per Redpoint Ventures’ Reflections on the State of the Software and AI Market, “the agent maturity curve is still early, but the TAM implications are enormous,” with agents able to “...run discretely for minutes, [and] execute end-to-end tasks with some oversight.”

What tasks, exactly? Who knows! Truly, nobody seems able to say. To paraphrase Steven Levy at WIRED, 2025 was meant to be the year of AI agents, but turned out to be the year of talking about AI agents. Agents were/are meant to be autonomous pieces of software that go off and do distinct tasks. In reality, it’s kind of hard to say what those tasks are.
“AI agent” now refers to literally anything anybody wants it to, but ultimately means “chatbot that has access to some systems.”

The New York Times’ Ezra Klein recently talked to the entity currently inhabiting former journalist and Anthropic co-founder Jack Clark about “how fast AI agents would rip through the economy,” but despite speaking for over an hour, the closest we got was “it wrote up a predator-prey simulation (a complex-sounding but extremely-common kind of webgame that Anthropic likely ingested through its training material)” and “chatbots that talk to each other about tasks,” and if you think I’m kidding, this is how he described it:

Anyway, this is all bad, because multiple papers have now shown that, and I quote, agents are “...incapable of carrying out computational and agentic tasks beyond a certain complexity,” with Futurism adding that said complexity was pretty low. The word “agent” is meant to make you think of powerful autonomous systems that carry out complex and minute tasks, when in reality it’s…a chatbot. It’s always a fucking chatbot. It might be a chatbot with API access or a chatbot that generates a plan that another chatbot looks at and says something about, but it’s still chatbots talking to chatbots.

When you strip away the puffery, nobody seems to actually talk about what AI does.

Let’s take a look at CNBC’s piece on Goldman Sachs’ supposed contract with Anthropic to build “autonomous systems for time-intensive, high-volume back-office work”:

…okay, but like, what does it do?

Right, brilliant. Great. Love it. What tasks? What is the thing you’re paying for?

Okay, great, we have two things it might do in the future, and that’s “employee surveillance” (?) and making pitchbooks.

The upshot is that, with the help of the agents in development, clients will be onboarded faster and issues with trade reconciliation or other accounting matters will be solved faster, Argenti said.

Onboarding? Chatbot. “Issues with trade reconciliation”? Chatbot connected to a knowledge base, like we’ve had for years but worse and more expensive. Oh, and “other accounting matters” will be solved faster — always with the future tense with these guys.

How about Anthropic and outsourcing body shop giant InfoSys’ “AI agents for telecommunications and other regulated industries”? Let’s go through the list of tasks and say what they mean, my comments in bold:

Telecommunications: AI agents will help carriers modernize network operations, simplify customer lifecycle management, and improve service delivery—bringing intelligent automation to one of the most operationally complex and regulated industries in the world.

Meaningless. Automation of what?

Financial services: AI agents will help firms detect and assess risk faster, automate compliance reporting, and deliver more personalized customer interactions, such as tailoring financial advice based on a client's full account history and market conditions.

Chatbot! “More-personalized interactions” are a chatbot with a connection to a knowledge system, as is any kind of “tailored financial advice.” Compliance reporting? Summarizing or pulling documents from places, much like any LLM can do, other than the fact that it’ll likely get shit wrong, which is bad for compliance.

Manufacturing and engineering: Claude will help accelerate product design and simulation, reducing R&D timelines and enabling engineers to test more iterations before production.

I assume this refers to people using Claude Code to do coding, which is what it does.

Software development: Teams will use Claude Code to write, test, and debug code, helping developers move faster from design to production.

Claude Code.

Enterprise operations: Claude Cowork will help teams automate routine work like document summarization, status reporting, and review cycles.

Literally a chatbot that deleted every single one of a guy’s photos when he asked it to organize his wife’s desktop.

How about OpenAI’s “Frontier” platform for businesses to “build, deploy and manage AI agents that do real work”?

Shared context? Chatbot. Onboarding? Chatbot. Hands-on learning with feedback? Chatbot. Clear permissions and boundaries? Chatbot setting.

Let’s check out the diagram! Uhuh. Great. What real-world tasks? Uhhh.

Reason over data? Chatbot. “Complex tasks”? No idea, it doesn’t say. “Working with files”? It doesn’t say how it works with files, but I’d bet it can analyze, summarize and create charts based on them that may or may not have errors in them, and based on my experience of trying to get these things to make charts (as a test, I’d never use them in my actual work), it doesn’t seem to be able to do that. “Evaluation and optimization loops”? Unclear, because we have no idea what the tasks are. What are the agents planning, acting, or executing on? Again, no idea.

Yet the media continues to perpetuate the myth of some sort of present or future “agentic AI” that will destroy all employment.
A few weeks ago, CNBC mindlessly repeated that ServiceNow CEO Bill McDermott believed that agents would send college grad unemployment over 30%. NowAssist, ServiceNow’s AI platform, is capable of — you guessed it! — summarization, conversational exchanges, content creation, code generation and search, a fucking chatbot just like the other chatbots.

A few weeks ago, The New York Times wrote about how “AI agents are fun, useful, but [not to] give them your credit card,” saying that they can “do more than just chat…they can edit files, send emails, book trips and cause trouble”:

Sure sounds like you connected a chatbot to your email there, Mr. Heyneman.

Let’s go through these:

“Gather information” — search tool, part of chatbots for years.

“Write reports” — generative AI’s most basic feature, with no details on quality.

“Edit files” — to do what exactly? Chatbot feature.

“Send and receive messages through email and text” — generating and reading text, connected to an email account.

“Delegate work” — what work? No need to get specific!

Yes, you can string together chatbots with various APIs and have the chatbot be able to activate certain systems. You could also do the same with a button you bought on Etsy connected to your computer via USB if you really wanted to. The ability to connect something to something else does not mean that anything useful happens at the end, and LLMs are extremely bad at the kind of deterministic actions that define the modern knowledge economy, especially when choosing to do them based on their interpretation of human language.

AI agents do not, as sold, actually exist. Every “AI agent” you read about is a chatbot talking to another chatbot connected to an API and a system of record, and the reason that you haven’t heard about their incredible achievements is because AI agents are, for the most part, fundamentally broken.

Even OpenClaw, which CNBC confusingly called a “ChatGPT moment,” is just a series of chatbots with the added functionality of requiring root access to your computer and access to your files and emails. Let’s see how CNBC described it back in February:

Hmmm, interesting. I wonder if they say what that means:

Reading this, you might be fooled into believing that OpenClaw can actually do any of this stuff correctly, and you’d be wrong! OpenClaw is doing the same chatbot bullshit, just in a much-more-expensive and much-more-convoluted way, requiring either a well-secured private space or an expensive Mac Mini to run multiple AI services and do, well, a bunch of shit very poorly.

The same goes for things like Perplexity’s “Computer,” which it describes as “an independent digital worker that completes tasks and workflows for you,” which means, I shit you not, that it can search, generate stuff (words, code, images), and integrate with Gmail, Outlook, Github, Slack, and Notion, places where it can also drop stuff it’s generated. Yes, all of this is dressed up with fancy terms like “persistent memory across sessions” (a document the chatbot reads and information it can access) with “authenticated integrations” (connections via API that basically any software can have). But in reality, it’s just a further compute-intensive way of trying to fit a square peg in a round hole, by which I mean having a hallucination-prone chatbot do actual work.

The only reason Jensen Huang is talking about OpenClaw is that there’s nothing else for Jensen Huang to talk about:

That’s wild, man. That’s completely wild. What’re you talking about? What can NemoClaw or OpenClaw or whatever-the-fuck actually do? What is the actual output? That’s so fucking weird!

I can already hear the haters in my head screaming “but Ed, coding models!” and I’m kind of sick of talking about them, because nobody can actually tell me what I’m meant to be amazed or surprised by.
To be clear, LLMs can absolutely write code, and can absolutely create software, but neither of those means that the code is good, stable or secure, or that the same can be said of the software they create. They do not have ideas, nor do they create unique concepts — everything they create is based on training data fed to them that was first scraped from Stack Overflow, Github and whatever code repositories Anthropic, OpenAI, and Google have been able to get their hands on.

It’s unclear what the actual economic or productivity effects are, other than an abundance of new code that’s making running companies harder. Per The New York Times:

As I wrote a few weeks ago, LLMs are good at writing a lot of code, not good code, and the more people you allow to use them, the more code you’re going to generate, which means the more time you’re either going to need to review that code, or the more vulnerabilities you’re going to create as a result. Worse still, hyperscalers like Meta and Amazon are allowing non-technical people to ship code themselves, which is creating a crisis throughout the tech industry.

Worse yet, LLMs allow shitty software engineers that would otherwise be isolated by their incompetence to feign enough intelligence to get by, leading to them actively lowering the quality of code being shipped. Per the Times:

The Times also notes that because LLM coding works better on a device rather than a web interface, “...engineers are downloading their entire company’s code to their laptops, creating a security risk if the laptop goes missing.”

Speaking frankly, it appears that LLMs can write code, and create some software, but without any guarantee that said code will compile, run, be secure, performant, or easy to read and maintain. For an experienced and ethical software engineer, LLMs can likely provide some speedup, though not one that appears to be documented in any academic sense — if anything, what research exists suggests the opposite, that they make experienced engineers slower.

And I think it’s fair to ask what any of this actually means. What’s the advantage of having an LLM write all of your code? Are you shipping faster? Is the code better? Are there many more features being shipped? What is the actual thing you can point at that has materially changed for the better?

Software engineers don’t seem happier, nor do they seem to be paid more, nor are they being replaced by AI, nor do we have any examples of truly vibe coded software companies shipping incredible, beloved products.

In fact, I can’t think of a new piece of software I’ve used in the last few years that actually impressed me outside of Flighty. Where’s the beef? What am I meant to be looking at? What’re you shipping that’s so impressive? Why should I give a shit? Isn’t it weird that we’re even having this conversation? Shouldn’t it be obvious by now?

This week, economist Paul Kedrosky told me on the latest episode of my show Better Offline that AI is “...nowhere to be seen yet in any really meaningful productivity data anywhere,” and only appears in the non-residential fixed investments side of America’s GDP, at (and I quote again) “...levels we last saw with the railroad build out or with rural electrification.”

That’s so fucking weird!
NVIDIA is the largest company on the US stock market and has sold hundreds of billions of dollars of GPUs in the last few years, with many of them sold to the Magnificent Seven, who are building massive data centers and reopening nuclear power plants to power them, and every single one of them is losing money doing so, with revenues so putrid they refuse to talk about them!

And all that to make…what, Gemini? To power ChatGPT and Claude? What does any of this actually do that makes any of those costs actually matter? And as I’ve discussed above, what, literally, does this software do that makes any of this worth it?

Ask the average AI booster — or even member of the media — and they’ll say something about “lots of code being written by AI,” or “novel discoveries” (unrelated to LLMs) or “LLMs finding new materials (based on an economics paper with faked data)” or “people doing research,” or, of course, “that these are the fastest-growing companies of all time.”

That “growth” is only possible because all of the companies in question heavily subsidize their products, spending $3 to $15 for every dollar of revenue. Even then, only OpenAI and Anthropic seem to be able to make “billions of dollars of revenue,” a statement that I put in quotes because however many billions there might be is up for discussion.

Back in November 2025, I reported that OpenAI had made — based on its revenue share with Microsoft — $4.329 billion between January and September 2025, despite The Information reporting that it had made $4.3 billion in the first half of the year based on disclosures to shareholders.

While a few outlets wrote it up, my reporting has been outright ignored by the rest of the media. No other outlet reached out to me or otherwise acknowledged it, and every outlet has continued to repeat that OpenAI “made $13 billion in 2025,” despite that being very unlikely given that it would have required it to have made $8 billion in a single quarter. While I understand why — I’m an independent, after all — these numbers directly contradict existing reporting, and if I were a reporter, that would give me a great deal of concern about the validity of my reporting and the sources that had provided it.

Similarly, when Anthropic’s CFO said in a sworn affidavit that it had only made $5 billion in its entire existence, nobody seemed particularly bothered, despite reports saying it had made $4.5 billion in 2025, and multiple “annualized revenue” reports — including Anthropic’s own — that added up to over $6.6 billion.

Though I cannot say for certain, both of these situations suggest that Anthropic and OpenAI are misleading their investors, the media and the general public. If I were a reporter who had written about Anthropic or OpenAI’s revenues previously, I would be concerned that I had published something that wasn’t true, and even if I were certain that I was correct, I would have to consider the existence of information that ran counter to my own. I would be concerned that Anthropic or OpenAI had lied to me, or that they were lying to someone else, and work diligently to try and find out what happened. I would, at the very least, publish that there was conflicting information. The S-1 will give us the truth, I guess.

Let’s talk for a moment about margins, because they’re very important to measuring the health of a business.

Back in February in my Hater’s Guide To Anthropic, I raised concerns that Dario Amodei was using a different way to calculate margins than other companies do.
Amodei told the FT in December 2024 that he didn’t think profitability was based on how much you spent versus how much you made:

He then did the same thing in an interview with John Collison in August 2025:

Almost exactly six months later, in a February 13, 2026 appearance on the Dwarkesh Podcast, Dario would once again try to discuss profitability in terms other than “making more money than you’ve spent”:

The above quote has been used repeatedly to suggest that Anthropic has 50% gross margins and is “profitable,” which is extremely weird in and of itself as that’s not what Dario Amodei said at all. Based on The Information’s reporting from earlier in the year, Anthropic’s “gross margin” was 38%.

Yet things have become even more confusing thanks to reporting from Eric Newcomer, who (in covering a January investor presentation by Coatue) revealed that Anthropic’s gross margin was “45% in the quarter ended Sep-25,” with the crucial note that — and I quote — “Non-GAAP gross margins [are] calculated by Anthropic management…[are] unaudited, company-provided, and may not be comparable to other companies.”

This means that however Anthropic calculates its margins, it isn’t doing so based on Generally Accepted Accounting Principles, which means that the real margins probably suck ass, because Anthropic loses billions of dollars a year, just like OpenAI.

Yet one seemingly-innocent line in there gives me even more pause: “Model payback improving significantly as revenue scales faster than R&D training costs.” This directly matches with Dario Amodei’s bizarre idea that “...If you consider each model to be a company, the model that was trained in 2023 was profitable. You paid $100 million, and then it made $200 million of revenue.” Yes, I know it’s a “stylized fact” or whatever, but that’s what he said, and I think that their IPO might have a rude surprise in the form of a non-EBITDA margin calculation that makes even the most-ardent booster see red.

This week, The Wall Street Journal published a piece about OpenAI and Anthropic's finances that included one of the most-offensive lines in tech media history:

Two thoughts:

Are you fucking kidding me? If you simply remove billions of dollars in costs, OpenAI is profitable!

Why do you think these companies are going to break even anytime soon? You have absolutely no basis for doing so other than leaks from the company!

As I said a few months ago about training costs:

The Journal also adds that both Anthropic and OpenAI are showing investors two versions of their earnings — one with training costs, and one without — without adding the commentary that this is extremely deceptive or, at the very least, extremely unusual. The more I think about it, the more frustrated I get. Having two sets of earnings is extremely dodgy! Especially when the difference between them is billions of dollars. This should be immediately concerning to every financial journalist, the reddest of red flags, the biggest sign that something weird is happening…

…but because this is the AI industry, the Journal runs propaganda instead:

That “fast-growing” part is only possible because both Anthropic and OpenAI subsidize the compute of their subscribers, allowing them to burn $3 to $15 for every dollar of subscription revenue. And no, this is nothing like Uber or Amazon — that’s a silly comparison, click that link and read what I said and then never bring it up again.

I realize my suspicion around Anthropic’s growth has become something of a meme at this point, but I’m sorry, something is up here. Let’s line it all up (the arithmetic is sketched in code after this list):

Anthropic was making $9 billion in annualized revenue at the end of 2025, or approximately $750 million in a 30-day period.

Anthropic said on February 12, 2026 it had hit $14 billion in annualized revenue. This would work out to roughly $1.16 billion in a 30-day period — let’s assume from January 11 2026 to February 11 2026.

Anthropic’s CFO said it had made “exceeding $5 billion” in lifetime revenue on March 9 2026.

On March 3, 2026, Dario Amodei said it had hit $19 billion in annualized revenue. This would work out to $1.58 billion in a 30-day period. Let’s assume this is for the period from February 2 2026 to March 2 2026.

On April 6, 2026, Anthropic said it had hit $30 billion in annualized revenue. This works out to about $2.5 billion in a 30-day period. Let’s assume that said period is March 6 2026 to April 6 2026.

Anthropic’s $14 billion in annualized revenue from February 12, 2026 includes both the launch of Claude Opus 4.6 and the height of the OpenClaw hype cycle, where people were burning hundreds of dollars of tokens a day. This announcement also included the launch of Anthropic’s 1 million token context window in beta for Opus 4.6.

Anthropic’s $19 billion in annualized revenue from March 3, 2026 included both the launch of Claude Opus 4.6 and Claude Sonnet 4.6. This period includes around half of the January 11 to February 11 2026 window from the previous $14 billion annualized number, and the launch of the beta of the 1 million token context window for Sonnet 4.6. To be clear, the betas required you to explicitly turn on the 1 million token context window, and had higher pricing around long context.

Anthropic’s $30 billion in annualized revenue from April 6 2026 included two weeks’ worth of massive token burn from the launches of Sonnet and Opus 4.6. This includes a few days of the previous window (March 3 to April 5). This also included the general availability of the 1-million token context window, enabling it by default, billed at the standard pricing.
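To make the conversion in that list explicit, here’s a minimal sketch. The only assumption is the one I’m already making above: “annualized” divided by 12 gives the implied revenue for a roughly 30-day window.

```python
# Minimal sketch: converting Anthropic's "annualized revenue" claims
# into implied 30-day revenue, the conversion used throughout this piece.
# The only assumption is annualized / 12 = revenue per ~30-day window.

claims_billions = {
    "2026-02-12": 14,  # announced annualized revenue, in $B
    "2026-03-03": 19,
    "2026-04-06": 30,
}

for date, annualized in claims_billions.items():
    monthly = annualized / 12
    print(f"{date}: ${annualized}B annualized -> ~${monthly:.2f}B per 30 days")

# 2026-02-12: ~$1.17B per 30 days (the "roughly $1.16 billion" above)
# 2026-03-03: ~$1.58B per 30 days
# 2026-04-06: ~$2.50B per 30 days
```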
Per Newcomer, as of December 2025, this is how Anthropic’s revenue breaks down:

Per The Information, Anthropic also sells its models through Microsoft, Google and Amazon, and for whatever reason reports all of the revenue from their sales as its own, then takes out whatever cut it gives them as a sales and marketing expense:

The Information also adds that “...about 50% of Anthropic’s gross profits on selling its AI via Amazon has gone to Amazon,” and that “...Google typically takes a cut of somewhere between 20% and 30% of net revenue, after subtracting infrastructure costs.”

The problem here is that we don’t know what the actual amounts of revenue are that come from Amazon or Google (or Microsoft, for that matter, which started selling Anthropic’s models late last year), which makes it difficult to parse how much of a cut they’re getting (though, per DataCenterDynamics and The Information, Google’s 20% to 30% comes out of net revenue, after subtracting the costs of serving the models). Nevertheless, something is up with Anthropic’s revenue story.

Let’s humour Anthropic for a second and say that what it’s saying is completely true: it went from making $750 million in monthly revenue in January to $2.5 billion in monthly revenue in April 2026. That’s remarkable growth, made even more remarkable by the fact that — based on its December breakdown — most of it appears to have come from API sales. That leap from $750 million to $1.16 billion between December and February feels, while ridiculous, not entirely impossible, but the further ratchet up to $2.5 billion is fucking weird! But let’s try and work it out.

On February 5 2026, Anthropic launched Opus 4.6, followed by Claude Sonnet 4.6 on February 17 2026.

Based on OpenRouter token burn rates, Opus 4.5 was burning around 370 billion tokens a week. Immediately on release, Opus 4.6 started burning way, way more tokens — 524 billion in its first week, then 643 billion, then 634 billion, then 771 billion, then 822 billion, then 976 billion, eventually going over a trillion tokens burned in the final week of March.

In the weeks approaching its successor’s launch, Sonnet 4.5 burned between 500 billion and 770 billion tokens a week. A week after launch, 4.6 burned 636 billion tokens, then 680 billion, then 890 billion, and, by about a month in, it had burned over a trillion tokens in a single week.

Reports across Reddit suggest that these new models burn far more tokens than their predecessors, with questionable levels of improvement.

The sudden burst in token burn across OpenRouter doesn’t suggest that a bunch of people suddenly decided to connect to Anthropic and other services’ models, but that the models themselves had started to burn nearly twice the amount of tokens to do the same tasks.

At this point, I estimate Anthropic’s revenue split to be more in the region of 75% API and 25% subscriptions, based on its supposed $2.5 billion in annualized revenue (out of $14 billion, so a little under 18%) in February coming from “Claude Code” (read: subscribers to Claude — there’s no “Claude Code” subscription).

If that’s the case, I truly have no idea how it could’ve possibly accelerated so aggressively, and as I’ve mentioned before, there is no way to reconcile having made $5 billion in lifetime revenue as of March 9, 2026, having $14 billion in annualized revenue on February 12 2026, and having $4.5 billion in revenue for the year 2025.
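Here’s a minimal sketch of why those three numbers can’t coexist. The monthly rates come straight from the annualized claims; the assumptions (labeled in the comments) are mine, and they’re deliberately generous to Anthropic:

```python
# A sketch of why the three claims above don't reconcile. Assumptions
# (mine, deliberately generous to Anthropic): the $4.5B reported for
# calendar 2025 is the *entire* lifetime figure entering 2026 (ignoring
# all pre-2025 revenue), and early-2026 revenue ran exactly at the
# implied monthly rates, no higher.

lifetime_entering_2026 = 4.5   # $B
rate_to_feb = 14 / 12          # ~$1.17B/month ($14B annualized, Feb 12)
rate_to_mar = 19 / 12          # ~$1.58B/month ($19B annualized, Mar 3)

# Jan 1 to Mar 9 is ~2.25 months; split it roughly across the two rates:
early_2026 = 1.5 * rate_to_feb + 0.75 * rate_to_mar

implied_lifetime = lifetime_entering_2026 + early_2026
print(f"Implied lifetime revenue by March 9, 2026: ~${implied_lifetime:.1f}B")
# -> ~$7.4B, against a sworn statement of revenue "exceeding $5 billion".
```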
Things get more confusing when you hear how Anthropic calculates its annualized revenues, per The Information:

So, Anthropic is annualizing based on the last four weeks of API revenue times 13, a number that’s extremely easy to manipulate using, say, launches of new products. In simpler terms, Anthropic is cherry-picking four-week windows of API spend — ones that are pumped by big announcements and new model releases — and annualizing them.

The one million token context window is a big deal, too, having been raised from 200,000 tokens in previous models. With Opus and Sonnet 4.6, Anthropic lets users use up to one million tokens of context, which means that both models can now carry a very, very large conversation history, one that includes every single output, file, or, well, anything that was generated as a result of using the model via the API. This leads to context bloat that absolutely rinses your token budget.

To explain, the context window is the information that the model can consider at once. With 4.6, Anthropic by default allows you to load in one million tokens’ worth of information, and every single prompt or action you take has the model reprocess all of it, unless you actively “trim” the window through context editing.

Let’s say you’re trying to work out a billing bug in a codebase via whatever interface you’re using to code with LLMs. You load in a 350,000 token codebase, a system prompt (IE: “you are a talented software engineer,” here’s an example), a few support tickets, and a bunch of word-heavy logs to try and fix it. On your first turn (question), you ask it to find the bug, and you send all of that information through. It spits out an answer, and then you ask it how to fix the bug…but “asking it to fix the bug” also re-sends everything, including the codebase, tickets and logs. As a result, you’re burning hundreds of thousands of tokens with every single prompt. (A sketch of this arithmetic follows below.) Although this is a simplified example, it’s the case across basically any coding product, such as Claude Code or Cursor. While Cursor uses codebase indexing to selectively fetch pieces of the codebase without constantly loading it into the context window, one developer using Claude inside of Cursor watched a single tool call burn 800,000 tokens by pulling an entire database into the context window, and I imagine others have run into similar problems.

To be clear, Anthropic charges at a per-million-token rate of $5 per million input and $25 per million output, which means that those casually YOLOing entire codebases into context are burning shit tons of cash (or, in the case of subscribers, hitting their rate limits faster).

If Anthropic actually made $2.5 billion in a month — we’ll find out when it files its S-1! — it likely came not from genuine growth or a surge of adoption, but from its existing products suddenly costing a shit ton more because of how they’re engineered.

The other possibility is the nebulous form of “enterprise deals” that Anthropic allegedly has, and the theory that they somehow clustered in this three-month-long period, but that just feels too convenient.

If 70% of Anthropic’s revenue is truly from API calls, this would suggest:

Massive new customers that are making payments up front, which makes this far from “recurring” revenue.

Massive new customers spending tons of money immediately, burning hundreds of millions of dollars a month in tokens, and paying Anthropic handsomely for them.

I don’t see much evidence of Anthropic creating custom integrations that actually matter, or — and fuck have I looked! — any real examples of businesses “doing stuff with Claude” other than making announcements about vague partnerships.
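To make the billing-bug example concrete, here’s a minimal sketch of the arithmetic, using Anthropic’s stated $5/$25 per-million-token pricing. The session shape — a ~400,000-token context, ten turns, 2,000-token answers — is my assumption for illustration, and it deliberately ignores prompt caching and context editing:

```python
# A minimal sketch of context bloat, using the $5/M input and $25/M
# output pricing above. The session shape (a ~400,000-token context,
# ten turns, 2,000-token answers) is an assumption for illustration,
# and caching/context editing are deliberately ignored.

INPUT_PRICE = 5 / 1_000_000    # dollars per input token
OUTPUT_PRICE = 25 / 1_000_000  # dollars per output token

base_context = 400_000  # codebase + system prompt + tickets + logs
answer_tokens = 2_000   # model output per turn

total_cost = 0.0
history = 0  # every prior answer gets re-sent as part of the context
for turn in range(10):
    input_tokens = base_context + history  # the whole thing, every turn
    total_cost += input_tokens * INPUT_PRICE + answer_tokens * OUTPUT_PRICE
    history += answer_tokens

print(f"Ten-turn debugging session: ~${total_cost:.2f}")
# -> ~$21, with ~400,000 tokens re-billed on every single prompt.
```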
There’s also one other option: that Silicon Valley is effectively subsidizing Anthropic through an industry-wide token-burning psychosis. And based on some recent news, there’s a chance that’s the case.

As I discussed a few weeks ago, Silicon Valley has a “tokenmaxxing” problem, where engineers are encouraged to burn as many tokens as possible, at times by their peers, and at others by their companies. The most egregious — and honestly, worrying! — version of this came from The Information’s recent story about Meta employees competing on an internal leaderboard to see who can burn the most tokens, deliberately increasing the size of their prompts and the number of concurrent sessions (along with unfettered and dangerous OpenClaw usage) to do so:

The Information reports that the dashboard, called “Claudeonomics” (despite said dashboard covering other models from OpenAI, Google, and xAI), has sparked competition within Meta, with users burning a remarkable 60 trillion tokens in the space of a month, with one individual averaging around 281 billion tokens, which The Information remarks could cost millions of dollars. Meta’s company-mandated psychosis also gives achievements for particular things like using multiple models or high utilization of the cache. Here’s one very worrying anecdote:

One poster on Twitter says that there are people at Meta running loops burning tokens to rise up the leaderboards, and that Meta’s managers also measure lines of code as a success metric.

The Information says that, considering Anthropic’s current pricing for its models, that 60 trillion tokens could be as much as $900 million in the space of a month, though adds that this assumes that every token being burned was on Claude Opus 4.6 (at $15 per 1 million tokens).

I personally think this maths is a bit fucked, because it assumes that A) everybody is only using Claude Opus, B) none of that token burn runs through the cache (which it obviously does, and cached tokens are billed at 50%, as pointed out by OpenCode co-founder Dax Radd), and C) Meta is entirely using the API (versus paying for a $200-a-month Claude Max subscription for each user).

Digging in further, it appears that a few years ago Meta created an internal coding tool called CodeCompose, though a source at Meta tells me that developers use VSCode and an assistant called Devmate connected to models from Anthropic, OpenAI and xAI. One engineer on Reddit — albeit an anonymous one! — had some commentary on the subject:

If we assume that Meta is an enterprise customer paying API rates for its tokens, it’s reasonable to assume — at even a low $5-per-million average — that it’s spending $300 million or more a month on API calls. As Radd also added, there’s likely a discount involved. He suggested 20%, which I agree with. Even if it’s $300 million, that’s still fucking insane. That’s still over three billion dollars a year. (I’ve sketched out the sensitivity of this estimate below.)

If this is what’s actually happening, and this is what’s contributing to Anthropic’s growth, this is not a sustainable business model — which is par for the course for Anthropic, a company that has only ever lost billions of dollars. Encouraging workers to burn as many tokens as possible is incredibly irresponsible and antithetical to good business or software engineering. Writing great software is, in many cases, an exercise in efficiency and nuance, building something that runs well, is accessible and readable by future engineers working on it, and ideally uses as few resources as it can.
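Here’s a minimal sketch of that sensitivity. The 60 trillion tokens is The Information’s figure; the price points, cache behavior, and discount are the assumptions discussed above, not anything Meta or Anthropic has confirmed:

```python
# A sketch of how sensitive the Meta estimate is to its assumptions.
# The 60 trillion tokens figure is The Information's; everything else
# (prices, cache share, discount) follows the assumptions discussed above.

TOKENS = 60_000_000_000_000  # tokens burned in one month

def monthly_cost(price_per_million, cache_share=0.0, enterprise_discount=0.0):
    """Blended monthly bill in dollars. Cached tokens are billed at 50%
    of the normal rate; any enterprise discount applies to the total."""
    millions = TOKENS / 1_000_000
    full = millions * (1 - cache_share) * price_per_million
    cached = millions * cache_share * price_per_million * 0.5
    return (full + cached) * (1 - enterprise_discount)

# The Information's worst case: all Opus, no cache, no discount.
print(f"${monthly_cost(15) / 1e6:,.0f}M")                          # $900M
# A low $5-per-million blended average, as above.
print(f"${monthly_cost(5) / 1e6:,.0f}M")                           # $300M
# Same, with Radd's suggested 20% enterprise discount.
print(f"${monthly_cost(5, enterprise_discount=0.2) / 1e6:,.0f}M")  # $240M
# Still billions of dollars a year under any of these assumptions.
```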
TokenMaxxing runs contrary to basically all good business and software practices, encouraging waste for the sake of waste, and resulting in few measurable productivity benefits or, in the case of Meta, anything user-facing that actually seems to have improved.

Venture capitalist Nick Davidov mentioned yesterday that sources at Google Cloud “started seeing billions of tokens per minute from Meta, which might now be as big as a quarter of all the token spend in Anthropic.” While I can’t verify this information (and Davidov famously deleted his photos using Claude Cowork while attempting to reorganize his wife’s desktop), if that’s the case, Meta is a load-bearing pillar of Anthropic’s revenue — and, just as importantly, a large chunk of Anthropic’s revenue flows through Google Cloud, which means A) that Anthropic’s revenue truly hinges on Google selling its models, and B) that said revenue is heavily-inflated by the fact that Anthropic books revenue without cutting out Google’s 20%+ revenue share.

In any case, TokenMaxxing is not real demand, but an economic form of AI psychosis. There is no rational reason to tell somebody to deliberately burn more resources without a defined output or outcome other than increasing how much of the resource is being used. I have confirmed with a source at Meta that there is no actual metric or tracking of any return on investment involved in its token burn, meaning that TokenMaxxing’s only purpose is to burn more tokens to go higher on a leaderboard, and it is already creating bad habits across a company that already has decaying products and leadership.

To make matters worse, TokenMaxxing also teaches people to use Large Language Models poorly. While I think LLMs are massively-overrated and have their outcomes and potential massively overstated, anyone I know who actually uses them for coding generally has habits built around making sure token burn isn’t too ridiculous, ways to do things faster without LLMs, and ways to be intentional about which models they use for particular tasks. TokenMaxxing literally encourages you to do the opposite — to use whatever you want in whatever way you want to spend as much money as possible to do whatever you want, because the only thing that matters is burning more tokens.

Furthermore, TokenMaxxing is exactly the kind of revenue that disappears first. Zuckerberg has reorganized his AI team four or five times already, and massively shifted Meta’s focus multiple times in the last five years, proving that at the very least he’ll move on a whim depending on external forces. After laying off tens of thousands of people in the last few years, Meta has shown it’s fully capable of dumping entire business lines or groups with a moment’s notice — and while moving on from AI might be embarrassing, that would require Mark Zuckerberg to experience shame, or any kind of emotion other than anger.

This is the kind of revenue that a business needs to treat with extreme caution, and if Meta is truly spending $300 million or more a month on tokens, Anthropic’s annualized revenues are aggressively and irresponsibly inflated to the point that they can’t be taken seriously, especially if said revenue travels through Google Cloud, which takes another 20% off the top at the very least.
Though the term is pretty new, the practice of encouraging your engineers to use AI as much as humanly possible is an industry-wide phenomenon, especially across hyperscalers like Amazon, Microsoft and Google, all of whom have, until recently, directly pushed their workers to use models with few restraints. Shopify and other large companies are encouraging their workers to reflexively rely on AI, with performance reviews that include stats around your token burn and other nebulous “AI metrics” that don’t seem to connect to actual productivity. I’m also hearing — though I’ve yet to be able to confirm it — that Anthropic and other model providers are forcing enterprise clients to start using the API directly rather than paying for monthly subscriptions.

Combined with mandates to “use as much AI as possible,” this naturally increases the cost of having software engineers, which — and I say this not wanting anyone to lose their jobs — does the literal opposite of replacing workers with AI. Instead, organizations are arbitrarily raising the cost of doing business.

Because we’re still in the AI hype cycle, this kind of wasteful spending is both tolerated and encouraged, and the second that financial conditions worsen or stock prices drop due to increasing operating expenses, these same companies will cut back on API spend, which will overwhelmingly crush Anthropic’s glowing revenues.

I think it’s also worth asking at this point what it is we’re actually fucking doing.

We’re building — theoretically — hundreds of gigawatts of data centers, feeding hundreds of billions of dollars to NVIDIA to buy GPUs, all to build capacity for demand that doesn’t appear to exist, with only around $65 billion of revenue (not profit) for the entire generative AI industry in 2025, with much of that flowing to two companies (Anthropic and OpenAI) that make money by offering their models to unprofitable AI startups that cannot survive without endless venture capital — which is also the case for both AI labs. Said data centers make up 90% of NVIDIA’s revenue, which means that 8% or so of the S&P 500’s value comes from a company that makes money selling hardware to people that immediately lose money on installing it.

That’s very weird! Even if you’re an AI booster, surely you want to know the truth, right?

The most-prominent companies in the AI industry — Anthropic and OpenAI — burn billions of dollars a year, have margins that get worse over time, and absolutely no path to profitability, yet the majority of the media act as if this is a problem that they will fix, even going as far as to make up rationalizations as to how they’ll fix it, focusing on big revenue numbers that wilt under scrutiny. That’s extremely weird, and only made weirder by members of the media who seem to think it’s their job to defend AI companies’ bizarre and brittle businesses. It’s weird that the media’s default approach to AI has, for the most part, been to accept everything that the companies say, no matter how nonsensical it might be.

I mean, come on! It’s fucking weird that OpenAI plans to burn $121 billion in the next two years on compute for training its models, and that the media’s response is to say that somehow it will break even in 2030, even though there’s no actual explanation anywhere as to how that might happen other than vague statements about “efficiency.” That’s weird! It’s really, really weird!
It’s also weird that we’re still having a debate about “the power of AI” and “what agents might do in the future” based on fantastical thoughts about “agents on the internet” that do not exist, cannot exist, and will never exist, and it’s fucking weird that executives and members of the media keep acting as if that’s the case. It’s also weird that people discussing agents don’t seem to want to discuss that OpenAI’s Operator Agent does not work, that AI browsers are fundamentally broken, or that agentic AI does not do any of the things people claim it does.

In fact, that’s one of the weirdest parts of the whole AI bubble: the possibility of something existing is enough for the media to cover it as if it exists, and a product saying that it will do something is enough for the media to believe it does it. It’s weird that somebody saying they will spend money is enough to make the media believe that something is actually happening, even if the company in question — say, Anthropic — literally can’t afford to pay for it.

It’s also weird how many outright lies are taking place, and how little the media seems to want to talk about them. Stargate was a lie! The whole time it was a lie! That time that Sam Altman and Masayoshi Son and Larry Ellison stood up at the White House and talked about a $500 billion infrastructure project was a lie! They never formed the entity! That’s so weird!

Hey, while I have you, isn’t it weird that OpenAI spent hundreds of millions of dollars to buy tech podcast TBPN “to help with comms and marketing”? It’s even weirder considering that TBPN was already a booster for OpenAI!

It’s also weird that a lot of AI data center projects don’t seem to actually exist, such as Nscale’s project to make “one of the most powerful AI computing centres ever” that is literally a pile of scaffolding, and that despite that announcement the company was able to raise $2 billion in funding.

It’s also weird that we’re all having to pretend that any of this matters. The revenues are terrible, Large Language Models are yet to provide any meaningful productivity improvements, and the only reason that they’ve been able to get as far as they have is a compliant media and a venture capital environment borne of a lack of anything else to invest in.

Coding LLMs are popular only because of their massive subsidies and corporate encouragement, and in the end will be seen as a useful-yet-incremental and way-too-expensive way to make the easy things easier and the harder things harder, all while filling codebases with masses of unintentional, bloated code. If everybody was forced to pay their actual costs for LLM coding, I do not believe for a second that we’d have anywhere near the amount of mewling, submissive and desperate press around these models.

The AI bubble has every big, flashing warning sign you could ask for. Every company loses money. Seemingly every AI data center is behind schedule, and the vast majority of them aren’t even under construction. OpenAI’s CFO does not believe that it’s ready to go public in 2026, and Sam Altman’s reaction has been to have her report to somebody other than him, the CEO. Both OpenAI and Anthropic’s margins are worse than they projected. Every AI startup has to raise hundreds of millions of dollars, and their products are so weak that they can only make millions of dollars of revenue after subsidizing the underlying cost of goods to the point of mass unprofitability.
And it’s really weird that the mainstream media has a diametric view — that all of this is totally permissible under the auspices of hypergrowth, that these companies will simply grow larger, that they will somehow become profitable in a way that nobody can actually describe, that demand for AI data centers will exist despite there being no signs of that happening.

I get it. Living in my world is weird in and of itself. If you think like I do, you have to see every announcement by Anthropic or OpenAI as suspicious — which should be the default position of every journalist, but I digress — and any promise of spending billions of dollars as impossible without infinite resources.

At the end of this era, I think we’re all going to have to have a conversation about the innate credulity of the business and tech media, and how often it was co-opted to help the rich get richer.

Until then, can we at least admit how weird this all is?
Let’s assume this is for the period from February 2, 2026 to March 2, 2026. On April 6, 2026, Anthropic said it had hit $30 billion in annualized revenue. This works out to about $2.5 billion in a 30-day period. Let’s assume that said period is March 6, 2026 to April 6, 2026.

Anthropic’s $14 billion in annualized revenue from February 12, 2026 includes both the launch of Claude Opus 4.6 and the height of the OpenClaw hype cycle, where people were burning hundreds of dollars of tokens a day. This announcement also included the launch of Anthropic’s 1 million token context window in beta for Opus 4.6.

Anthropic’s $19 billion in annualized revenue from March 3, 2026 included the launches of both Claude Opus 4.6 and Claude Sonnet 4.6. This period includes part of the January 11 to February 11, 2026 window from the previous $14 billion annualized number, and the launch of the beta of the 1 million token context window for Sonnet 4.6. To be clear, the betas required you to explicitly turn on the 1 million token context window, and carried higher pricing around long context.

Anthropic’s $30 billion in annualized revenue from April 6, 2026 included two weeks’ worth of massive token burn from the launches of Sonnet and Opus 4.6. This includes a few days of the previous window (March 3 to April 5). This also included the general availability of the 1-million token context window, enabling it by default, billed at the standard pricing.

Massive new customers are making payments up front, which makes this far from “recurring” revenue. They are spending tons of money immediately, burning hundreds of millions of dollars a month in tokens, and paying Anthropic handsomely for them.
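If you want to check the run-rate conversions yourself, the arithmetic is just division: “annualized revenue” here is essentially one month’s revenue multiplied by twelve, so dividing each announced figure by twelve recovers the rough 30-day number (the $14 billion figure lands a cent off my rounding above). A minimal sketch, using the announced figures and dates:

```python
# "Annualized revenue" is (roughly) one month's revenue times 12, so the
# 30-day figures above fall out of simple division. The dates are the
# announcement dates cited above; the dollar figures are the announced ones.
announcements = {
    "2026-02-12": 14e9,  # $14B annualized
    "2026-03-03": 19e9,  # $19B annualized
    "2026-04-06": 30e9,  # $30B annualized
}

for date, annualized in announcements.items():
    print(f"{date}: ${annualized / 1e9:.0f}B annualized ≈ "
          f"${annualized / 12 / 1e9:.2f}B per 30-day period")
```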


News: OpenAI CFO Doesn't Believe Company Ready For IPO, Unsure Revenue Will Support Commitments

News out of The Information's Anissa Gardizy and Amir Efrati over the weekend: OpenAI CFO Sarah Friar has apparently clashed with CEO Sam Altman over timing around OpenAI's IPO, emphasis mine:

I cannot express how strange this is. Generally, a CFO and CEO are in lock-step over IPO timing, or at the very least the CFO has an iron grip on the actual timing because, well, CEOs love to go public and the CFO generally exists to curb their instincts. Nevertheless, Clammy Sam Altman has clearly sidelined Friar, and as of August last year, the CFO of OpenAI doesn't report to the CEO. In fact, the person Friar reports to (Fiji Simo) just took a medical leave of absence:

It is extremely peculiar to not have the Chief Financial Officer report to the Chief Executive Officer, but remember, folks, this is OpenAI, the world's least-normal company!

Anyway, all of this seemed really weird, so I asked investor, writer and economist Paul Kedrosky for his thoughts:

Very cool! Paul is also a guest on this week's episode of my podcast Better Offline, by the way. Out at 12AM ET Tuesday.

Anyway, The Information's piece also adds another fun detail, which is that OpenAI's margins were even worse than expected in 2025:

Riddle me this, Batman! If your AI company always has to buy extra compute to meet demand, and said extra compute always makes margins worse, doesn't that mean that your company will either always be unprofitable or die because it buys too much compute? Say, that reminds me of something Anthropic CEO Dario Amodei said to Dwarkesh Patel earlier in the year...

It is extremely strange that the CFO of a company doesn't report to the CEO of a company, and even more strange that the CFO is directly saying "we are not ready for IPO" as its CEO jams his foot on the accelerator. It's clear that both OpenAI and Anthropic are rushing toward a public offering so that their CEOs can cash out, and that their underlying economics are equal parts problematic and worrying. Though I am entirely guessing here, I imagine Friar sees something within OpenAI's finances that gives her pause. An S-1 (one of the filings a company makes before going public) is an audited document, and I imagine the whimsical mathematics that OpenAI engages in (such as, per The Wall Street Journal, calculating profitability without training compute) might not match up with what actual financiers crave.

If you like this piece and want to support my independent reporting and analysis, why not subscribe to my premium newsletter? It's $70 a year, or $7 a month, and in return you get a weekly newsletter that's usually anywhere from 5,000 to 18,000 words, including vast, detailed analyses of NVIDIA, Anthropic and OpenAI's finances, and the AI bubble writ large. I just put out a massive Hater's Guide To The SaaSpocalypse, as well as last week's deep dive into How AI Isn't Too Big To Fail. Supporting my premium supports my free newsletter.

OpenAI CFO Sarah Friar has, per The Information, said that OpenAI is not ready to go public in 2026, in part because of the "risks from its spending commitments" and because she is not sure whether the company's revenue growth would support those commitments.

Friar (CFO) no longer reports to Sam Altman (CEO) and hasn't done so since August 2025.

OpenAI's margins were lower in 2025 "...due to the company having to buy more expensive compute at the last minute."


Premium: AI Isn't Too Big To Fail

Soundtrack: Soundgarden — Blow Up The Outside World

A lot of people try to rationalize the AI bubble by digging up the past. Billions of dollars of waste are justified by saying “OpenAI is just like Uber” (it isn’t) and “the data center buildout is just like Amazon Web Services” (it isn’t; Amazon Web Services was profitable within a decade and cost about $52 billion between 2003 and 2017, normalized for inflation) and, most egregiously, that AI is “too big to fail.”

I think that these statements are acts of cowardice if they are not backed up by direct and obvious comparisons based on historical data and actual research. They are lazy intellectual tropes borne of at best ignorance, or at worst an intellectual weakness that makes somebody willing to take flimsy information and repeat it as if it were gospel. Nobody has any proof that AI is profitable on inference, nor is there any explanation of how it will become profitable at some point, just a cult-like drone of “they’ll work it out” and “look at the growth!”

And the last argument, that AI is “too big to fail,” is the most cowardly of them all, given that said statement seldom precedes the word “because,” and then an explanation of why generative AI is so economically important, and why any market correction would be so catastrophic that the bubble must continue to inflate.

Over the last few months I have worked diligently to unwind these myths. I discussed earlier in the year how the AI bubble is much worse than the dot com bubble, and ended last year with a mythbusters (AI edition) that paired well with my free opus, How To Argue With An AI Booster.

I don’t see my detractors putting in anything approaching a comparable effort. Or any effort, really.

This isn’t a game I’m playing or some sort of competitive situation, nor do I feel compelled to “prove my detractors wrong” with any specificity. I believe time will do that for me. My work is about actually finding out what’s going on, and I believe that explaining it is key to helping people understand the world. None of the people who supposedly believe that AI is the biggest, most hugest and most special boy of all time have done anything to counter my core points around AI economics other than glance-grade misreads of years-old pieces and repeating things like “they’re profitable on inference!” Failing to do thorough analysis deprives the general public of the truth, and misleads investors into making bad decisions.

Cynicism and skepticism are often framed as some sort of negative process — “hating” on something for the sake of being negative, or to gain some sort of cultural prestige, or as a way of performatively exhibiting one’s personal morality — when both, done properly, require the courage to actually understand things in depth.

I also realize many major media outlets are outright against skepticism. While they frame their coverage as “taking on big tech,” their questions are safe, their pieces are safer, their criticisms rarely attack the actual soft parts of the industries (the funding of the companies or infrastructure developments, or the functionality of the technology itself), and almost never seek to directly interrogate the actual statements made by AI leaders and investors, or the various hangers-on and boosters.

This is why I’ve been so laser-focused on the mythologies that have emerged over the past couple of years, such as when people say “it’s just like the dot com bubble” — it’s not, it’s much worse! — because if these mythologies actually withstood scrutiny, my work wouldn’t have much weight.
The Dot Com Bubble in particular grinds my gears because it’s a lazy trope used to rationalize rotten economics, all while disregarding the actual harms that took place. Unemployment spiked to 6%, venture capital funds lost 90% of their value, and hundreds of thousands of people in the tech industry lost their jobs, some of them for good.

It is utterly grotesque how many people minimize and rationalize the dot com bubble, reframing it as a positive by saying that “things worked out afterwards,” all so that they can use that as proof that we need to keep giving startups as much money as they ask for forever and that AI is the biggest thing in the world.

Yet AI is, in reality, much smaller than people think. As I wrote up (and Bloomberg clearly were inspired by!) last week, only 5GW of AI data centers are actually under construction worldwide out of the 12GW that are supposedly meant to be delivered this year, with many of them slowed by the necessity of foreign imports of electrical equipment and, you know, the fact that construction is hard, and the power isn’t available.

Meanwhile, back in October 2025, The Wall Street Journal claimed that a “giant new AI data center is coming to the epicenter of America’s fracking boom” in a deal between Poolside AI (a company that does not appear to have released a product) and CoreWeave (an unprofitable AI data center company that I’ve written about a great deal). This was an “exclusive” report that included the following quote:

Turns out Mr. Kant was correct, as it was just reported that CoreWeave and Poolside’s deal fell apart, along with Poolside’s $2 billion funding round, as Poolside was “unable to stand up the first cluster of chips to CoreWeave’s timeline,” probably because it couldn’t afford them and wasn’t building anything. The FT added that “...Poolside was unable to convince investors that it could train AI models to the same level of established competitors.” It was also unable to get Google to take over the site.

Elsewhere, troubling signs are coming from the secondary markets — the place where people sell stock in private companies like OpenAI. Those signs being that, well, nobody’s buying.

Per Bloomberg, over $600 million of OpenAI shares are sitting for sale with no interest from buyers at its current $850 billion post-money valuation, though apparently $2 billion is “ready to deploy” for private Anthropic shares at a $380 billion valuation, according to Ken Smythe of Next Round Capital, a secondary share sale site.

Though people will try to frame this as a case of OpenAI’s shares “being too close to what they might go public at,” one has to wonder why shares of what is supposed to be the literal most valuable company of all time aren’t selling at what, theoretically, is a massive discount.

One might argue that it’s because people think that the stock might drop on IPO and then grow, but…that doesn’t show a great degree of faith in the company. Investors likely think that Anthropic would go public at a higher price than $380 billion, though I do need to note that the full quote was that “buyers have indicated that they have $2 billion of cash ready to deploy into Anthropic,” which is not the same thing as “will actually buy it.”

In any case, the market is no longer treating OpenAI like it’s the golden child. Poolside’s CoreWeave deal is dead. Data centers aren’t getting built.
Oracle is laying off tens of thousands of people to fund AI data centers for OpenAI, a company that cannot afford to pay for them. AI demand, despite how fucking annoying everybody is being about it, does not seem to exist at the scale that makes any part of this industry make sense.

Yet people still squeal that “The Trump Administration Will Bail Out The AI Industry,” and that OpenAI is “too big to fail,” two statements that are not founded in history or analysis, but are the kinds of things that you say only when you’re either so beaten down by bad news that you’ve effectively given up or are so willfully ignorant that you’ll say stuff without knowing what it means because it makes you feel better.

As I discussed in this week’s free newsletter, there is a subprime AI crisis going on. When the subprime mortgage crisis happened towards the end of the 2000s, millions of people built their lives around the idea that easy money would always be available, and that housing would only ever increase in value. These assumptions led to the creation of inherently dangerous mortgage products that never should have existed, and that inevitably screwed the buyers.

I talked about these in my last free newsletter. Negative amortization mortgages, for example, were a thing in the US. These were where the mortgage payments didn’t actually cover the cost of the interest, let alone the principal. Similarly, in the UK, my country of birth, many homebuyers used endowment mortgages — an interest-only mortgage where, instead of paying the principal, buyers made monthly payments into an investment savings account that (theoretically) would cover the cost of the property (and perhaps provide some extra cash) at the end of the term. If the investments did extremely well, the buyer could potentially pay off the mortgage early.

Far too often, those investments underperformed, meaning buyers were left staring at a shortfall at the end of their term.

Across the globe, the value of housing was massively overinflated by the lax standards of a mortgage industry incentivized to sign as many people as possible thanks to a lack of regulation and easily-available funding.

The value of housing — and indeed the larger housing and construction boom — was a mirage. In reality, housing wasn’t worth anywhere near what it was being sold for, and the massive demand for housing was only possible with unlimited resources, and under ideal conditions (namely, normal levels of inflation and relatively low interest rates).

Those buying houses they couldn’t afford with adjustable-rate mortgages either didn’t understand the terms, or believed members of the media and government officials who suggested housing prices would never decrease and that one could easily refinance the mortgage in question.

Similarly, AI startups’ products are all subsidized by venture capital, and must, in literally every case, allow users to burn tokens at a cost far in excess of their subscription fees, a business that only “works” — and I put that in quotation marks — as long as venture capital continues to fund it. While from the outside these may seem like functional businesses with paying users, without the hype cycle justifying endless capital, these businesses wouldn’t be possible, let alone viable, in any way, shape or form.

For example, Harvey is an AI tool for lawyers that just raised $200 million at an $11 billion valuation, all while having an astonishingly small $190 million in ARR, or $15.8 million a month.
It raised another $160 million in December 2025, after raising $300 million in June 2025, after raising $300 million in February 2025.

Remove even one of those venture capital rounds and Harvey dies. Much like subprime loans allowed borrowers to get mortgages they had no hope of repaying, hype cycles create the illusion of viable businesses that cannot and will never survive without the subsidies.

The same goes for companies like OpenAI and Anthropic, both of whom created priority processing tiers for their enterprise customers last year, and the latter of which just added peak rate limits between 5am and 11pm Pacific Time. Their customers are the subprime borrowers too — they built workflows around using these products that may or may not be possible with new rate limits, and in the case of enterprise customers using priority processing, their costs massively spiked, which is why Cursor and Replit suddenly made their products worse in the middle of 2025.

The reason that the Subprime Mortgage Crisis led to the Great Financial Crisis was that trillions of dollars were used to speculate upon its outcome, across $1.1 trillion of mortgage-backed securities. In mid-2008, per the IMF, more than 60% of all US mortgages had been securitized (as in, turned into something you could trade, speculate on the outcome of, and buy credit default swaps against). Collateralized debt obligations — big packages of different mortgages and other kinds of debt that masked the true quality of the underlying assets — expanded to over $2 trillion by 2006, though the final writedowns were around $218 billion of losses.

By comparison, AI is pathetically small. While there were $178.5 billion in data center credit deals done in America last year, speculation and securitization remain low, and in many cases the amount of actual cash available is in tranches based on construction milestones, with most data center projects (like Aligned’s recent $2.58 billion raise) funded by “facilities” specifically to minimize risk.

As I’ve written about previously, building a data center is hard — especially when you’re building at scale. Finding land, obtaining permits (something which can be frustrated by opposition from neighbors or local governments), obtaining electricity, and then obtaining the labor, machinery, and raw materials all take time. Some components — like electrical transformers — have lead times in excess of a year.

And so, you can understand why there’s such a disparity between the dollar amount in data center credit deals and the actual capital deployed to build said data centers.

There also isn’t quite as much willful ignorance on the part of ratings agencies, though that isn’t to say they’re actually doing their jobs. CoreWeave is one of many data center companies that’s been able to raise billions of dollars using its counterparties’ credit ratings, with Moody’s giving an “A3 investment grade rating” to the debt of an unprofitable data center company that would die without endless borrowing and is insufficiently capitalized to pay it off, because CoreWeave was able to use Meta’s credit rating and the GPUs in question as collateral.

Nevertheless, none of this comes close to the apocalypse that the global economy faced as a result of the catastrophically dangerous bets made by the entire finance industry during the late 2000s, because those bets weren’t made on housing so much as they were made on financial instruments that were given power because of housing.
Juiced by a mortgage industry that allowed basically anybody to buy a house regardless of whether they could pay for it, by the middle of 2008, nearly $9 trillion of mortgages were outstanding in America (with around $1.1 trillion of home equity loans on top). Trillions more (it’s hard to estimate, due to the amount of off-balance-sheet trades that happened) were gambled on top of them as they were packaged into CDOs (collateralized debt obligations) and synthetic CDOs, where somebody would buy a credit default swap (CDS, or a bet against the default) against the underlying assets, assuming (incorrectly) that the company issuing the CDS would have the funds to pay them.

As I’ll get into more deeply in the piece, no such comparison exists for AI, and the asset-backed securitization of data centers and GPUs remains very small. Despite many deceptive studies that attempt to claim otherwise, the economy is relatively unaffected by AI, and while software companies might have debt, AI companies, for the most part, do not appear to, and those that do (OpenAI and Anthropic) have credit facilities rather than lump-sum loans.

In totality, the AI industry seems to have made about $65 billion in revenue (not profit!) in 2025, with, I estimate, about a third of that being the result of OpenAI or Anthropic feeding money to hyperscalers or neoclouds like CoreWeave, and billions more being AI startups (funded entirely by VC) feeding money to Anthropic and OpenAI to rent their models.

Even the venture capital scale of AI startups is drastically overestimated. While (as reported by The New York Times) “AI startups” raised $297 billion in the first quarter of 2026, $188 billion of that was taken by OpenAI (which has yet to fully receive the funds!), Anthropic, xAI, and Waymo. In 2025, $425 billion was invested in startups globally, with half of that (about $212.5 billion) going to AI startups, and about half of that ($102 billion) going to Anthropic, OpenAI, xAI, Scale AI’s not-quite-acquisition by Meta, and Bezos’ Project Prometheus.
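If you want the concentration math spelled out, here’s a quick sketch; the totals are the ones cited above, and the subtraction and percentages are mine:

```python
# Quick version of the concentration math above: strip the handful of
# mega-rounds out of the headline "AI startup" funding totals and see
# what's actually left for everybody else. Figures are the cited ones.
q1_2026_total = 297e9      # "AI startups" raised in Q1 2026, per the NYT
q1_2026_big_four = 188e9   # OpenAI, Anthropic, xAI, and Waymo

ai_2025_total = 212.5e9    # ~half of the $425B invested in startups in 2025
ai_2025_big_names = 102e9  # Anthropic, OpenAI, xAI, Scale AI, Project Prometheus

print(f"Q1 2026: {q1_2026_big_four / q1_2026_total:.0%} of the headline total "
      f"went to four names, leaving ${(q1_2026_total - q1_2026_big_four) / 1e9:.0f}B")
print(f"2025: {ai_2025_big_names / ai_2025_total:.0%} went to five names, "
      f"leaving ${(ai_2025_total - ai_2025_big_names) / 1e9:.0f}B")
```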
The Great Financial Crisis was, as I’ll get into, a literal collapse of how banks, financial institutions, and property businesses operated, with their reckless speculation on a housing market that was only made possible by a craven mortgage industry incentivized to get people to sign at any cost. When people speculated that there was a bubble, articles ran saying that housing was actually cheap, that subprime lending had actually “made the mortgage market more perfect,” that the sky was not falling in the credit markets because unemployment wasn’t going to rise, that subprime mortgages wouldn’t hurt the economy, and that there was no recession coming.

In any case, OpenAI, Anthropic and AI startups in general are far from “systemic risks.” They are not load-bearing. TARP and associated bailouts did not bail out the markets themselves — the S&P 500 lost around half of its value during the bear market that followed, and home prices only returned to growth in 2012.

I imagine the “systemic risk” argument is that NVIDIA makes up 7% to 8% of the value of the S&P 500, and that makes sense as long as you ignore that Exxon Mobil was around 5% of the value of the S&P 500 in 2008 and saw its value tank for years following the crisis without any bailout to stop it. Microsoft, Meta, Amazon, Google, NVIDIA, Tesla, and Apple are not going bankrupt if AI dies, and anybody suggesting they will is wrong.

NVIDIA’s revenue collapsing by 50% or 80% or more would not cause a “financial crisis,” nor would said collapse be considered a “systemic risk” to the stability of the broader economy, though I admit, it would be very bad for the markets writ large.

Conversely, a similar blow at TSMC — the company that owns the literal foundries that make many of the leading-edge semiconductors used today, including those used for data center GPUs — would be, because its collapse would massively reduce the supply of those chips, which, I add, require billions of dollars of upfront investment to make.

GPUs are not critical to the global economy, nor are Large Language Models, nor is OpenAI, nor is Anthropic. Their collapse would end a hype cycle, which would make the markets drop much like they did in the dot com bust, but that is not the same as too big to fail.

Today’s premium is one of the most comprehensive analyses I’ve ever written — a rundown of what makes something “Too Big To Fail,” an explanation of the actual fundamentals of the Great Financial Crisis, and a true systemic analysis of the AI bubble writ large. None of this is too big to fail, and in many ways its failure is necessary for us to move forward as a society.


The Subprime AI Crisis Is Here

Hi! If you like this piece and want to support my independent reporting and analysis, why not subscribe to my premium newsletter? It’s $70 a year, or $7 a month, and in return you get a weekly newsletter that’s usually anywhere from 5,000 to 18,000 words, including vast, detailed analyses of NVIDIA, Anthropic and OpenAI’s finances, and the AI bubble writ large. I just put out a massive Hater’s Guide To The SaaSpocalypse, as well as last week’s deep dive into how the majority of data centers aren’t getting built and the overall AI industry is depressingly small. Supporting my premium supports my free newsletter, and premium subscribers don't get this ad.

Soundtrack: Metallica — …And Justice For All

Bear with me, readers. I need to do a little historical foreshadowing to fully explain what’s going on.

In the run-up to the Great Financial Crisis, unscrupulous lenders issued around 1.9 million subprime loans, with many of them being adjustable rate mortgages (ARMs) with variable rates that, after a two-or-three-year-long introductory period, would adjust every twelve months, per CBS News in July 2006:

At the time, 18% of homeowners had adjustable-rate mortgages, which also made up more than 25% of new mortgages in the first quarter of 2006, with over $330 billion of mortgages expected to adjust upwards.

Things were grimmer beneath the surface. A question on JustAnswer from 2009 showed a homeowner who was about to lose their house after being conned into a negative amortization loan — a mortgage where payments didn’t actually cover the interest, meaning that each month the balance increased. Dodgy lenders were given bonuses for selling more mortgages, whether or not the person on the other end was capable of paying, and by November 2007, around two million homeowners held $600 billion of ARMs.

Yet the myth of the subprime mortgage crisis was that it was caused entirely by low-income borrowers. Per Duke’s Manuel Adelino:

While The Big Short’s dramatic “stripper with six properties” scene made for a vivid demonstration of the subprime problem, the reality was that everybody got taken in by teaser-rate mortgages, driving up the value of properties based on a housing market that was only made possible by mortgages expressly built to hide the real costs as interest rates and borrower payments rose every six to 36 months. I’ll add that near-prime mortgages — for borrowers with just-below-prime credit scores — were also growing, with over 1.1 million of them in 2005, when they represented nearly 32% of all loans.

Many people who bought houses that they couldn’t afford did so based on a poor understanding of the terms of their mortgage, thinking that the value of housing would continue to climb as it had for over a hundred years, and/or believing that they’d easily be able to refinance the loans.

Even as things deteriorated toward the middle of the 2000s, people came up with rationalizations as to why things would work out, such as Anthony Downs of The Brookings Institution, who in October 2007 said the following in a piece called “Credit Crisis: The Sky is not Falling”:

Brookings also added that “...the vast majority of subprime mortgages are likely to remain fully paid up as long as unemployment remains as low as it is now in the U.S. economy.” At the time, US unemployment was 4.7%, but a year later it was at 6.5%, and would peak at 10% in October 2009.

In an article from the December 2004 issue of Economic Policy Review, Jonathan McCarthy and Richard W. Peach argued that there was “little basis” for concerns about housing prices, with “home prices essentially moving in line with increases in family income and declines in nominal mortgage interest rates,” and hand-waved any concerns with vague statements about “demand”:
From the outside, this made it appear that the value of housing was growing exponentially, and that the “pent-up demand” for homes necessitated a massive boom in construction, one that peaked in January 2006 with 2.27 million new homes built. A year later, this number collapsed to 1.084 million, and in January 2009, only 490,000 new homes had been built in America, the lowest the figure had ever been.

Denial rates for mortgages declined drastically (along with the increase in things like 40-year or 50-year mortgages), which meant that suddenly anybody was able to get a house, which made it only seem logical to build more housing.

Low interest rates before 2006 allowed consumers to take on mountains of new credit card debt, rising to as high as 20% of household incomes in 2007, to the point that credit card companies were making more money from lending than from the fees people paid to use the cards, with $65 billion of the industry’s $95 billion in revenue coming from interest on debt, and lending-related penalty fees and cash advance fees contributing another $12.4 billion, per Philadelphia Fed economist Lukasz Drozd.

While the precise order of events is a little more complex, the general gist of the subprime mortgage crisis was straightforward: easily-available money allowed massive numbers of people — many of whom couldn’t afford to buy these houses outside of the easy money that funded the bubble — to enter the housing market, which in turn made it much easier to sell a house for a much higher price, which inflated the value of housing.

People made decisions based on fundamentally-flawed information. In January 2004, the Bush administration declared that America’s economy was on the path to recovery, with small businesses creating the majority of new jobs and the stock market booming. Debt was readily available across the board, with commercial and industrial loans spiking along with consumer debt (including a worrying growth in subprime auto loans). The good times were rolling, as long as you didn’t think about it too hard.

But, as I said, the chain of events was simple: it was easy to borrow money to buy a house, which meant lots of people were buying houses, which meant that the value of a house seemed higher than it was outside of the easy money era. Easily-available money put lots of cash into the economy, which led to higher prices, which led to inflation, which forced the Federal Reserve to raise interest rates 17 times in the space of two years, which made it harder to get any kind of loan, which made it harder to get a mortgage, which made it harder to sell a house, which made people sell houses for cheaper, which lowered the value of houses, which made it harder to refinance the bad loans, which meant people foreclosed on their homes, which in turn lowered the value of housing, all as demand for housing dropped because nobody was able to buy housing.

The underlying problems were, ultimately, the illusion of value and mobility.
Those borrowing at the time believed they had invested in something with a consistent (and consistently-growing) value — a house — and would always have easy access to credit (via credit cards and loans), as before-tax family income had never been higher. In the beginning of 2007, delinquencies on consumer and business loans climbed, abandoned housing developments grew, and a US economy dependent on the housing bubble (per Paul Krugman’s “That Hissing Sound” from August 2005) began to stumble. By November 2009, 23% of US consumer mortgages were underwater (meaning the homes were worth less than the loans against them).

The housing bubble was created through easily-available debt, insane valuations based on debt-fueled speculation, do-nothing regulators (like eventual Fed Chair Ben Bernanke, who said in October 2005 that there was no housing bubble) and consumers being sold an impossible, unsustainable dream by people financially incentivized to make them rationalize the irrational, and believe that nothing bad would ever happen.

In February 2005, 40% ($19 billion) of IndyMac Bancorp’s mortgage originations in a single quarter came from a “Pay-Option ARM,” which started with a 1% teaser rate that jumped in a few short months to 4% or more, with frequent adjustments. Washington Mutual CEO Kerry Killinger said in 2003 that he wanted WaMu to be “the Wal-Mart of banking,” and did so by using (to quote the New York Times) “relaxed standards,” including issuing a mortgage to a mariachi singer who claimed a six-figure income and verified it using a single photo of himself.

By the time it collapsed in September 2008, WaMu had over $52.9 billion in ARMs and $16.05 billion in subprime mortgage loans.

Had Washington Mutual and the many banks making dodgy ARM and subprime loans underwritten loans based on the actual creditworthiness of their applicants, there wouldn’t have been a housing bubble, because many of these borrowers would’ve been unable to pay their mortgages, and thus wouldn’t have been deemed creditworthy, and thus the apparent housing demand wouldn’t have grown.

In very simple terms, the “demand” for housing was inflated by a deceitfully-priced product that undersold its actual costs, and through that deceit millions of people were misled into believing said product was viable.

Did you work out where this is going yet?

In September 2024, I raised my first concerns about a Subprime AI Crisis:

This theory is important, and thus I’m going to give it a lot of time and love to break it down.

That starts with the parties involved, and how the economics involved get worse over time, returning to my theory of “AI’s chain of pain,” and the hierarchy of how the actual AI economy works. The AI industry has done a great job of obfuscating exactly how brittle its economics really are, and as a result, I need to explain how money is raised, how it’s deployed, and where the economics begin to break down.

Generally, AI is funded from only a few places:

Data centers raise debt from either banks, private credit, private equity or “business development companies,” non-banking entities that borrow money from banks to lend to risky companies. In an analysis of 26 prominent data center deals, I found (back in December 2025) several names — Blue Owl, MUFG (Mitsubishi), Goldman Sachs, JP Morgan Chase, Morgan Stanley, SMBC (Sumitomo Mitsui) and Deutsche Bank — that come up regularly.

AI labs (and AI startups) raise funding from venture capitalists (e.g. Dragoneer (Anthropic, OpenAI, Perplexity) and Founders Fund (Anthropic, OpenAI)), hyperscalers (Google, Amazon, NVIDIA, Microsoft, all of whom have now invested in both OpenAI and Anthropic), sovereign wealth funds (GIC, Singapore’s sovereign wealth fund, invested in Anthropic), and even banks providing lines of credit, as they did for both Anthropic and OpenAI.

Some things to keep note of:

Many of the big names in data center development (who I believe have all, in some way, backed CoreWeave) funded those lines of credit, including Morgan Stanley, SMBC, JPMorgan and MUFG.

Those common names are points of failure, in particular SMBC and MUFG, two large Japanese banks that have aggressively loaned to just about every part of the AI economy. This pairs badly with the fact that the Japanese government is considering interest rate hikes thanks to the continuing chaos in the Middle East, which will make debt more expensive.

This is a crucial point, so stay with me.

AI models work by charging a per-million-token rate for inputs (things you feed in) and outputs, which are either the things that the model outputs (such as an image, text or code), or the “chain of thought reasoning” many models rely upon now, where they take an input, generate a plan (which is an “output”) and then do stuff based on said plan.
AI startups, for the most part, do not have their own models, and thus must pay OpenAI or Anthropic (or other providers to a much lesser extent) to build services using them.

When you pay for access to an AI startup’s service — which, of course, includes OpenAI and Anthropic — you do so for a monthly fee, such as $20, $100 or $200-a-month in the case of Anthropic’s Claude, Perplexity’s $20 or $200-a-month plan, or OpenAI’s $8, $20, or $200-a-month subscriptions. In some enterprise use cases, you’re given “credits” for certain units of work, such as how Lovable allows users “100 monthly credits” in its $25-a-month subscription, as well as $25 (until the end of Q1 2026) of cloud hosting, with rollovers of credits between months.

When you use these services, the company in question then pays for access to the AI models in question, either at a per-million-token rate to an AI lab, or (in the case of Anthropic and OpenAI) to whatever cloud provider is renting them the GPUs to run the models. A token is basically ¾ of a word. As a user, you do not experience token burn, just the process of inputs and outputs. AI labs obfuscate the cost of services by using “tokens” or “messages” or 5-hour rate limits with percentage gauges, and you, as the user, do not really know how much any of it costs.

On the back end, AI startups are annihilating cash, with Anthropic, up until recently, allowing you to burn upwards of $8 in compute for every dollar of your subscription. OpenAI allows you to do the same, though it’s hard to gauge by how much.

This is where the economic problem has begun. When the AI bubble started, venture capitalists flooded AI startups with cash, encouraging them to create hypergrowth businesses using, for the most part, monthly subscription fees that didn’t come close to covering the costs. As a result, many AI companies have experienced rapid growth selling a product that can only exist with infinite resources.

The problem is fairly simple: providing AI services is very expensive, and costs can vary wildly depending on the customer, input and output, the latter of which can change dramatically depending on the prompt and the model itself. A coding model relies heavily on chain-of-thought reasoning, which means that despite the price of tokens coming down (which does not mean the cost of providing them has decreased; it’s a marketing move), models are using far, far more tokens, increasing costs across the board.

And consumers crave new models. They demand them. A service that doesn’t provide access to a new model cannot compete with those that do, and because the costs of models have been mostly hidden from users, the expectation is always the newest models provided at the same price. As a result, there really isn’t any way that these services make sense at a monthly rate, and every single AI company loses incredible amounts of money, all while failing to make that much revenue in the first place.
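To make those mechanics concrete, here’s a minimal sketch of the flat-fee-in, metered-compute-out problem. The per-million-token rates and the usage profile are my own illustrative assumptions, not any lab’s actual price list; the point is the shape of the math, which lands in the same territory as the up-to-$8-per-dollar subsidy described above.

```python
# A sketch of the subsidy math: the user pays a flat subscription, while
# the provider pays metered per-million-token rates on every prompt.
# All rates and usage numbers here are illustrative assumptions.
INPUT_RATE = 3.00    # $ per million input tokens (assumed)
OUTPUT_RATE = 15.00  # $ per million output tokens (assumed; "chain of
                     # thought" reasoning tokens bill as output too)

def session_cost(input_tokens: int, output_tokens: int) -> float:
    """Provider-side compute cost of one session, in dollars."""
    return input_tokens / 1e6 * INPUT_RATE + output_tokens / 1e6 * OUTPUT_RATE

SUBSCRIPTION = 20.00  # $ per month, flat
# A heavy coding user: 30 sessions a month, each feeding in a chunk of a
# codebase and getting back long reasoning-plus-code outputs.
monthly_cost = sum(session_cost(400_000, 250_000) for _ in range(30))

print(f"Provider cost: ${monthly_cost:.2f} vs ${SUBSCRIPTION:.2f} of revenue")
print(f"≈ ${monthly_cost / SUBSCRIPTION:.2f} of compute per $1 of subscription")
```

Under those made-up numbers, the user burns a bit over $7 of compute for every subscription dollar, and nothing in the product tells them that.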
For example, Harvey is an AI tool for lawyers that just raised $200 million at an $11 billion valuation, all while having an astonishingly small $190 million in ARR, or $15.8 million a month. It raised another $160 million in December 2025, after raising $300 million in June 2025, after raising $300 million in February 2025.

Cursor is an AI coding tool that raised $160 million in 2024 (as of December 2024, it had $48 million ARR, or around $4 million of monthly revenue), $900 million ($500 million ARR/$41.6 million) in June 2025, and $2.3 billion in November 2025 ($1 billion ARR/$83 million). As of March 2, 2026, Cursor was at $2 billion annualized revenue, or $166 million in monthly revenue.

I’ll get to Cursor in a little bit, because it’s crucial to the Subprime AI Crisis. The Subprime AI Crisis is what happens when somebody actually needs to start making money, or, put another way, stop losing quite so much, revealing how every link in the chain was funded based on questionable assumptions and deadly short-term thinking.

Here’s the order of events as I see them.

The entire generative AI industry is based on unprofitable, unsustainable economics, rationalized and funded by venture capitalists and bankers speculating on the theoretical value of Large Language Model-based services. This naturally incentivized developers to price their subscriptions at rates that attracted users rather than reflecting the actual economics of the services.

Venture capitalists are also part of the subprime AI crisis, sitting on “billions of dollars” of AI companies that lose hundreds of millions of dollars, their companies built on top of AI models owned by OpenAI and Anthropic with little differentiation and no path to profitability. Nobody is going public! Nobody is getting acquired! As I discussed back in AI Is A Money Trap, there really is no liquidity mechanism for the billions of dollars sunk into most AI companies.

Going public also reveals the ugly financial condition of these startups. MiniMax, for example, made a pathetic $79 million in revenue in 2025, and somehow lost $250.9 million in the process. Much like the houses in the Great Financial Crisis, AI startups only retain their value as long as there is a market, or at least the perception that these companies could theoretically go public or be acquired. It only takes one failed exit or firesale to break the illusion.

At least you can live in a house. Every AI company will be a problem child that burns money on inference, bereft of intellectual property thanks to their dependence on OpenAI and Anthropic. What use is Perplexity without an eternal subsidy? The value of having Aravind Srinivas sitting around your office all day? I’d rather start my car in the garage.

“Fast-growing” AI companies only grew because they were allowed to burn as much money as they wanted selling services that are entirely unsustainable, raising more venture capital money with every burst of user growth, which they use to aggressively market to new users and grow further to raise another bump of venture capital. As a result, AI labs and AI startups have created negative habits with their users in two ways.

First: to grow their user bases as fast as possible, AI startups (and AI labs) allowed their users to burn incredible amounts of tokens, I assume because they believed at some point things would become profitable or they’d always have access to easy venture capital.

Second: this created an entire industry of AI startups that disconnected their users from the raw economics of the product, creating a race to the bottom where every single AI startup must have every AI model and every AI feature and do every AI thing, all at an incredible cost that only ever seems to increase.

Another fun feature is that just about every product gives some sort of “free” access period for new (and expensive!) models, like when Cursor had a free access period for GPT-5’s launch. It’s unclear who shoulders the burden here, but somebody is paying those costs.
In any case, nowhere are the subsidies higher than those of Anthropic and OpenAI, who use their tens of billions of dollars of funding to allow users to burn anywhere from $3 to $13 for every dollar of subscription revenue to outpace their competition.

The Subprime AI Crisis is when the largest parties are finally forced to reckon with their rotten economics, and the downstream consequences that follow.

As I reported in July 2025, starting in June last year, both OpenAI and Anthropic launched “priority service tiers,” jacking up the price on their enterprise customers (who pay for model access via their API to provide models in their software) for guaranteed uptime and less throttling of their services while also requiring an up-front (3-12 month) guarantee of token throughput.

Anthropic’s changes immediately increased the costs on AI startups like Lovable, Replit, Augment Code, and Anthropic’s largest customer, Cursor, which was forced to dramatically change its pricing from a per-request model to a bizarre pricing model where you pay model pricing with a 20% fee, but also receive A) at least as much as you pay in your subscription fee in tokens and B) “generous included usage” of Cursor’s Composer model:

What’s crazy is that even with this pricing, Cursor still gives away 16 cents for every dollar on its $60-a-month plan and $1 for every dollar on its $200-a-month plan, and that’s before “generous usage” of other models. I’ll also add that Anthropic has already turned the screws on its subscription customers too, adding weekly limits to Claude subscribers on July 28, 2025, a few weeks after quietly tightening other limits.

Over the next few months, just about every AI startup had to institute some form of austerity. Replit shifted to something called “effort-based” pricing in June 2025, and then launched something called “Agent 3” in September 2025 that burned through users’ limits even faster — and, to be clear, Replit’s pricing gives you your subscription price in credits every single month on top of the cloud hosting necessary to get them online, meaning that a $20-a-month subscriber likely burns at least $25 a month, and Replit remains unprofitable.

Coding platform Augment Code was forced to change its pricing in October 2025 to a per-message basis, which meant that any message you sent cost the same amount no matter how complex the required response. In one case, a user spent $15,000 in tokens on a $250-a-month plan. Since then, Augment Code has moved to a confusing “credit”-based model where they claim you use about 293 credits per Claude Sonnet 4.5 task, and users absolutely hate it, because Augment Code was too cowardly to charge users the actual model pricing, as doing so would scare them away. Now Augment Code is planning to remove its auto-complete and next edit features, claiming that their global usage was in decline and saying that developers “...are no longer working primarily at the level of individual lines of code; instead, they are orchestrating fleets of agents across tasks.”

Elsewhere, Notion bumped its Business Plan from $15 to $20-a-month per user thanks to its new “AI features,” which I imagine sucked for previous business subscribers who didn’t want “AI agents” or any of that crap but did want things like Single Sign On and Premium Integrations. The result? Profit margins dropped by 10%. Great job everybody!
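If you want to see how “model pricing plus a 20% fee” can still lose money, here’s a sketch. The subscription price, the included-token promise, and the markup come from the Cursor description above; the included Composer figure is my own assumption, chosen to reproduce the 16-cents-per-dollar giveaway:

```python
# Sketch of "model pricing plus a 20% fee" with included usage. The platform
# sells tokens at a 20% markup but includes at least the subscription price
# in token credit, so a subscriber who uses it all leaves only the markup
# as gross margin -- before any "generous included usage" of a house model.
MARKUP = 0.20
SUBSCRIPTION = 60.00      # $/month plan
INCLUDED_TOKENS = 60.00   # marked-up value of included tokens (>= subscription)
HOUSE_MODEL_COST = 19.60  # assumed cost of included house-model usage, chosen
                          # here to reproduce the giveaway figure cited above

provider_cost = INCLUDED_TOKENS / (1 + MARKUP)  # what the platform pays: $50
margin = SUBSCRIPTION - provider_cost - HOUSE_MODEL_COST
print(f"Margin: ${margin:.2f} ({margin / SUBSCRIPTION:+.0%} of each dollar)")
```

A user who burns their full allotment leaves at most $10 of gross margin on the $60 plan, and any meaningful included house-model usage pushes that negative.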
In February 2026, Perplexity users noticed that rate limits had been aggressively trimmed from even its January 2026 limits, with $20-a-month subscribers now limited to arbitrary “average use weekly limits” on searches, and “monthly limits” on research queries (that one user worked out dropped them from 600 deep research queries a month to 20), down from 300+ searches a day and generous deep research limits.

Price hikes and product changes are likely to accelerate in the next few months as things get desperate.

But now for a quick intermission…

I have been training with Nik Suresh, author of I Will Fucking Piledrive You If You Mention AI Again, and while I’m kidding, I want to be clear that if you don’t stop bringing up Uber and AWS as examples of why AI will work out I may react poorly, as I’m fucking tired of this point because it’s stupid and wrong. I will put you in the embrace of God, I swear.

The AI bubble and its representative companies do not resemble, and have never resembled, the buildout of Amazon Web Services or the growth and burn rate of Uber. If you are still saying this you are wrong, ignorant and potentially a big fucking liar.

As I discussed about a month ago, Amazon Web Services cost around $52 billion (adjusted for inflation!) from 2003 (when it was first used internally) through 2017, two years after it hit profitability. OpenAI raised $42 billion last year. Anthropic raised $30 billion in February. You are full of shit if you keep saying this.

As I discussed a few weeks ago, Uber’s economics are absolutely nothing like generative AI. Uber did not have capex, and burned those billions on R&D and marketing (making it more similar to Groupon in the end):

Here’re some other myths I’m tired of hearing about:

Yet the most obvious one that I hear is the funniest: that Anthropic and OpenAI can just raise their prices!

As both OpenAI and Anthropic aggressively stumble toward their respective attempts to take their awful businesses public, both are making moves to try and become “respectable businesses,” by which I mean “businesses that still lose billions of dollars but in less-annoying ways.”

Last week, OpenAI killed Sora — both the app and the model — along with a $1 billion investment from Disney, with the Wall Street Journal reporting it was burning a million dollars a day, but Forbes estimating the number was closer to $15 million. OpenAI will frame this as part of its “refocus” on a “Superapp” (per the WSJ) that combines ChatGPT, coding app Codex, and its dangerously shit browser into one rat king of LLM toys that nobody can work out a real business model for. All of this is part of a supposed internal effort to “prioritize coding and business customers” that we’ve heard some version of for months.

Meanwhile, OpenAI’s attempts to bring advertising to its users have been a little embarrassing, with a two-month-long trial involving “less than 20%” of ChatGPT users resulting in “$100 million in annualized revenue,” better known as about $8.3 million in a month, from what was meant to be a business line that brought in “low billions” in 2026 according to the Financial Times.

Timed confusingly with this “refocus” is OpenAI’s plan to nearly double its workforce from 4,500 to 8,000 people by the end of 2026. In fact, writing all this down makes it feel like OpenAI doesn’t really have much of a focus beyond “buy more stuff” and saying “superapp!” every six months.
Hey, whatever happened to OpenAI’s plan to be “the interface to the internet” that Alex Heath reported would happen by the first half of 2025? Did that happen? Did I miss it?

In any case, OpenAI’s other strategy is to absolutely jam the gas pedal on its Codex coding product — for example, one user I found was able to burn $2,192 in tokens on a $200-a-month ChatGPT plan, and another was able to burn $1,461 in three days on the same subscription.

Meanwhile, Anthropic has been in the midst of a months-long rugpull following an all-out media campaign through December and January, pushing Claude Code on tech and business reporters who don’t bother to think too hard about things, per my Hater’s Guide to Anthropic:

On February 18, 2026, Anthropic started banning anybody who used multiple Claude Max accounts, something that had never been an issue before it needed everybody to talk about Claude Code non-stop. The same day, Anthropic “cleared up” its Claude Code policies, saying that you can’t connect your Claude account to external services, meaning that all of those people who have been spinning up OpenClaw instances and buying $10,000 worth of Mac Minis are going to find that they’re suddenly having to pay for their API calls.

Around a month later, Anthropic would start a two-week-long 2x-rate-limit promotion for off-peak usage that ended on March 27, 2026. A day before, on March 26, 2026, Anthropic announced that it was starting “peak hours,” with Claude users maxing out their sessions faster between the hours of 5am and 11pm Pacific time, Monday to Friday, with a spokesperson limply adding that “efficiency wins” will “offset this” and only “7% of users will hit the limits.” All of this was sold as a result of “managing the growing demand for Claude.”

If I’m honest, this might be Anthropic’s most-egregious swindle yet. By pumping off-peak usage and then immediately cutting it just before introducing peak hours, Anthropic further muddies the waters around how much actual access you get to its products. Peak hours appear to have become aggressively restricted, and I imagine off-peak feels…something like the regular peak hours used to.

Users almost immediately started hitting limits regardless of what time or day they were using it. One user on the $100-a-month Max plan complained about hitting 61% of his session limit after four prompts (which cost $10.26 in tokens). Another said that they hit 63% of their rate limit on their $200-a-month plan in the space of a day, and another hit 95% after 20 minutes of using their Max plan (I’m gonna guess $100-a-month). This person hit their Max limit after “two or three things.” This one vowed to cancel their $200-a-month subscription after hitting their weekly limit in the space of a day, saying that they (and I’m going off of a translation, so forgive me) “expected a premium experience for $200, and what they got was constant limit stress.” This guy is scared to use Claude Code because of the limits. This guy blew 28% of his limits in less than an hour. This guy “can’t even do basic work on a 20x Max plan.” This guy hit his limits “in a few prompts” on Anthropic’s $20-a-month Pro plan, and the same prompts would have (apparently) consumed 5% of the limits “normally” (I assume last week), and while Thariq from Anthropic assured him that this was abnormal, he didn’t bother to respond to this guy in the thread who said he ran out of usage on the Max plan in 15 minutes.
While Anthropic Technical Staff Member Lydia Hallie posted that Anthropic was “aware people are hitting usage limits in Claude Code way faster than expected” and that some investigation was taking place, it’s hard to imagine that Anthropic had no idea that these limits were so severe or that any of this was a surprise.

Naturally, OpenAI had already reset limits on its Codex coding model the second that these reports began, claiming that they “wanted people to experiment with the magnificent plugins they launched” rather than saying something more-truthful like “we’re loosening limits so that the hogs braying with anger at Anthropic start paying OpenAI instead.” While an eager Redditor claimed that these rate limits were a result of a cache bug on Claude Code, Anthropic quickly said that this wasn’t the reason, nor did they say anything about there being a reason or that anything was wrong.

Meanwhile, users are complaining about the reduced quality of outputs from its Claude Opus 4.6 model, with some saying it acts like cheaper models, and another noting that it might be because of Anthropic’s upcoming Mythos model, which was leaked when Fortune mysteriously somehow discovered an openly-accessible “data cache” that included 3,000 assets but somehow no actual information about the model other than it would be a “step change” and its cybersecurity powers were too much to release at once, the tech equivalent of deliberately dropping a magnum condom out of your wallet in front of a woman, or Dril’s “I was just buying ear medication for my sick uncle…who’s a model by the way” post.

I’m gonna be honest: I just don’t give a shit about Mythos or Capybara or any blatant leaks intended to spook cybersecurity stocks, especially as these models are also meant to be much more compute-intensive, and thus, vastly more expensive to run. How will that work with these rate limits, exactly?

I think there’re a few ways this goes:

I wager that this is just the first of a few major belt-tightening operations from both Anthropic and OpenAI as they desperately shoulder-barge each other to file the world’s worst S-1. Both companies lose billions of dollars, both companies have no path to profitability, and both companies sell products — both to consumers and businesses — that simply do not work when users are forced to pay something approaching a sustainable cost.

Even with these egregious limits, a user I previously linked to was allowed to burn $10 in tokens in four prompts on a $100-a-month plan. Even in the world of Amodei’s Stylized Facts, that would still be $5 of prompts every 5 hours, which over the course of a month will absolutely be over $100.
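To spell that out (the $5-per-window rate is the one above; the working-hours figures are my own assumptions): even the charitable version of this math blows past the subscription price for anybody using Claude Code full-time.

```python
# Back-of-envelope version of the arithmetic above: $5 of tokens per
# 5-hour session window, applied to an assumed full-time user, burns
# well past a $100-a-month subscription in compute.
SESSION_BURN = 5.00   # $ of tokens per 5-hour window (the figure above)
WINDOWS_PER_DAY = 2   # assumed: a 10-hour working day spans two windows
WORKDAYS = 22         # assumed working days in a month

monthly_burn = SESSION_BURN * WINDOWS_PER_DAY * WORKDAYS
print(f"${monthly_burn:.0f} of compute against a $100-a-month subscription")  # $220
```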
Yet the sheer fury of Anthropic’s customers only proves the fundamental weakness of Anthropic’s business model, and the impossibility of ever finding any kind of profitability. And the AI industry has nobody to blame but itself.

While it’s really easy to make fun of people obsessed with LLMs, I want to be clear that Anthropic and OpenAI are inherently abusive companies that have built businesses on theft, deception and exploitation. Anybody who’s spent more than a few minutes in one of the many AI Subreddits has read story after story of models mysteriously “becoming dumb,” or rate limits that seem to expand and contract at random. Even the concept of “rate limits” only serves to further deceive the customer.

Outside of intentionally asking the model, users are entirely unaware of their “token burn,” or at the very least have built habits around rate limits that, as of right now, are entirely different to even a month ago. A user who bought a $200-a-month Claude Max subscription in December 2025, a mere three months later, now very likely cannot do the same things they did on Claude Code when they decided to subscribe, and those who use these subscriptions for their day jobs are now having to sit on their hands waiting for the rate limits to pass, with no clarity into whether they’ll be able to work at the same rate they did even a month ago, let alone when they subscribed.

All of this is a direct result of Anthropic, OpenAI, and other AI startups intentionally deceiving customers through obtuse pricing so that people would subscribe believing that the product would continue providing the same value, and I’d argue that annual subscriptions to these services amount to, if not fraud, a level of consumer deception that deserves legal action and regulatory involvement.

To be clear, no AI company should have ever sold a monthly subscription, as there was never a point at which the economics made sense. Yet had these companies actually charged their real costs, nobody would have bothered with AI, because even with these highly-subsidized subscriptions, AI still hasn’t delivered meaningful productivity benefits, other than a legion of people who email me saying “it’s changed my life as a programmer!” without explaining to me what that means or why it matters or what the actual result is at the end.

Isn’t it kind of weird that we have these LLM subscriptions to products that arbitrarily become less-accessible or less-performant in a way that’s impossible to really measure, and that the labs never seem to address? We don’t know the actual rate limits on Claude (other than via CCusage or Shellac’s research), or ChatGPT, or any of these products, by design, because if we did, it would be blatantly obvious how unsustainable and ridiculous these products were.

And the magical part about Large Language Models is that your most engaged customers are also your most expensive, and the more intensive the work, the more expensive the outputs become.

If you’re about to say “well, they’ll just raise the prices,” perhaps you should check Twitter or Reddit, and notice that Anthropic’s customers are screaming like they’re being stung to death by bees because of new rate limits that only let them burn $10 of compute in five hours. Do you think these people would be comfortable with a $130-a-month, $1,300-a-month or $2,500-a-month subscription? One that performs the same way (if not worse) as their $20, $100 or $200-a-month subscription did? Or do you think they’ll do Aaron Sorkin speeches about Anthropic’s greed and immediately jump to ChatGPT in the hopes that the exact same thing doesn’t happen a few months later?

Much as homeowners were assured that they’d simply be able to refinance their homes before the adjustable rates hit, AI fans repeatedly switch subscriptions to whichever provider is currently offering the best deal, in some cases paying for multiple subscriptions in the explicit knowledge that rate limits existed and would become increasingly punishing.

Based on the reactions of their users, I don’t really see how the AI labs — or AI startups, for that matter — fix this problem. On one hand, AI subscribers are acting like babies, crying that their product won’t let them use $2,500 of tokens for $200.
This was an obvious con, a blatant subsidy, and a party that wouldn’t last forever. On the other, AI labs and AI startups have never, ever acted with any degree of honesty or clarity with regard to their costs, instead choosing to add “exciting” new features that often burn more tokens without charging the end user more, which sounds nice until you remember that things cost money and money is not unlimited.

The very foundation of every AI startup is economically broken. The majority of them sell some sort of “deep research” report feature that costs several dollars to generate at a time, and many sell some form of expensive coding or “computer use” product, tool-based web search, or other features that exist to keep a user engaged while burning tokens, all without explaining to the user “yeah, we’re spending way more than we make off of you, this is an introductory rate.” This intentional, blatant and industry-wide deception set the terms for the Subprime AI Crisis. By selling AI services at $20 or $50 or even $200 a month, AI startups and labs created the terms for their own destruction, with users trained for years to expect relatively unlimited access sold at a flat rate for a service powered by Large Language Models that burn tokens at arbitrary rates based on their inference of the user’s prompt, making costs near-impossible to moderate.

And when these companies make changes to slightly bring costs under control, their users react with revulsion, because rate limits aren’t price increases, but direct changes to the functionality of the product. Imagine if a subscription to a car service cost $200 a month and let you go 50 miles, or 25 miles, or 100 miles, or 4 miles, or 12 miles depending on the day, and never at any point told you how many miles you had left beyond a percentage meter. To make matters worse, sometimes the car would arbitrarily take a different route, driving you five miles in the opposite direction, or simply pull over and park at the curb, charging you for every mile either way.

This is the reality of using an AI product in the year of our lord 2026. A Claude Code or OpenAI Codex user cannot say with any clarity that in three months their current workload or workflow will be possible on their current subscription. Somebody buying an annual subscription to any AI product is immediately sacrificing themselves to the whims of startup CEOs who intentionally decided to deceive users for years as a means of juicing growth.

And as these limits decay, at what point do the ways in which some of these users work with Claude Code become impossible? At what point do these rate-limit shifts start changing how reliable the experience is and how much one can get done in a day? What use is a tool that gets more unreliable and more expensive over time? Even if this week’s rate limits are an overcorrection, one has to imagine they resemble the future of Anthropic’s products, and are indicative of a larger pattern of decay in the value of its subscriptions.

I’m going to be as blunt as possible: every bit of AI demand that exists — and barely $65 billion of it existed in 2025 — exists only because of subsidies, and if these companies were to charge a sustainable rate, said demand would evaporate. There is no righting this ship. There is no pricing that makes sense that customers will pay at scale, nor is there a magical technological breakthrough waiting in the wings that will reduce costs.
Vera Rubin will not save AI, nor will some sort of “too big to fail” scenario, because “too big to fail” was based on the fact that banks would have stopped lending money to people and insurance companies would have stopped issuing insurance. Despite NVIDIA’s load-bearing valuation and the constant discussion of companies like OpenAI and Anthropic, their actual economic footprint is quite small in comparison to the trillions of dollars of CDOs and the trillion-plus dollars of mortgages involved in the Great Financial Crisis. The death of the AI industry would be cataclysmic to venture capitalists, bring about the end of the hypergrowth era for the Magnificent Seven, and may very well kill Oracle, but — seriously — that is nothing in comparison to the scale of the Great Financial Crisis. This isn’t me minimizing the chaos to follow, but trying to express how thoroughly fucked everything was in 2008. On Friday I’m going to get into this more in the premium. This wasn’t an intentional ad; I just realized, as I wrote that sentence, that that’s what I have to do.

Anyway, I’ll close with a grim thought. What’s funny about the comparison to the subprime mortgage crisis is that there are, in all honesty, multiple different versions of the Stripper With Five Houses from The Big Short (you’ll find the full cast listed later in this piece). All of these entities are acting based on a misplaced belief that the world will cater to them, and that nothing will ever change. While there might be different levels of cynicism — people who know the subsidies exist but assume they’ll be fine once the real prices arrive, or people like Sam Altman who are already rich and don’t give a shit — I think everybody in the AI industry has deluded themselves into believing they have the mandate of Heaven.

Back in August 2024, I named several pale horses of the AIpocalypse, and after absolutely fucking nailing the call two years early on OpenAI’s “big, stupid magic trick” of launching Sora to the public, I think it’s time to update them; the new list is also below. Anyway, thanks for reading this piece.

Data centers raise debt from either banks, private credit, private equity or “business development companies,” non-banking entities that borrow money from banks to lend to risky companies. In an analysis of 26 prominent data center deals, I found (back in December 2025) several names — Blue Owl, MUFG (Mitsubishi), Goldman Sachs, JP Morgan Chase, Morgan Stanley, SMBC (Sumitomo Mitsui) and Deutsche Bank — that come up regularly.

AI labs (and AI startups) raise funding from venture capitalists (EG: Dragoneer (Anthropic, OpenAI, Perplexity) and Founders Fund (Anthropic, OpenAI)), hyperscalers (Google, Amazon, NVIDIA, Microsoft, all of whom have now invested in both OpenAI and Anthropic), sovereign wealth funds (GIC, Singapore’s sovereign wealth fund, invested in Anthropic), and even banks providing lines of credit, as they did for both Anthropic and OpenAI. Many of the big names in data center development (who I believe have all, in some way, backed CoreWeave) funded those lines of credit, including Morgan Stanley, SMBC, JPMorgan and MUFG. Those common names are points of failure, in particular SMBC and MUFG, two large Japanese banks that have aggressively loaned to just about every part of the AI economy. This pairs badly with the fact that the Japanese government is considering interest rate hikes thanks to the continuing chaos in the Middle East, which will make debt more expensive.
Venture capitalists are funded by limited partners (EG: pension funds, investment banks and wealthy individuals), and the venture capital industry is facing an historic liquidity crisis (IE: they can’t raise money and their investments aren’t selling), which means that it cannot sustain the AI industry forever.

NVIDIA (and, to a much lesser extent, other hardware sellers) sells GPUs and the associated hardware to data center developers and hyperscalers. At around $42 million a megawatt across GPUs, data center construction and power, these data centers are almost entirely paid for using debt. This is the only link in the chain that is really profitable.

Data center developers rent their GPUs to AI labs and hyperscalers. Developers, who raised $178.5 billion in debt in the US alone last year, must borrow heavily to fund buildouts, and because many of these projects are run by brand-new or relatively new developers, debt costs are higher. As a result, based on my premium data center model, many data center projects are unprofitable even with a paying customer, and that’s assuming they even get built. To make matters worse, as I discussed last week, only 5GW of data center capacity out of over 200GW announced is actually under construction globally, which means many of these loans are currently on interest-only payments. All evidence points to GPU compute being either a low or negative-margin business. CoreWeave — the largest, best-funded and NVIDIA-backed AI compute provider — had an operating margin of -6% and a net loss margin of -29% in 2025. CoreWeave’s largest customers are Microsoft, OpenAI and NVIDIA, which means that it should, in theory, be getting the best rates around.

Hyperscalers like Google, Meta, Amazon, and Microsoft both rent GPUs from data center providers and rent GPUs to AI labs (as well as offering API access to some AI labs’ models — Google and Amazon sell Anthropic’s models, while Microsoft sells OpenAI’s models, its own models, and other models like xAI’s Grok). Hyperscalers steadfastly refuse to talk about their AI revenues, and do not break out costs. I would also put Oracle in this bucket.

AI labs rent GPUs from either hyperscalers or data center companies to either train models or run inference (creating the outputs of models), sell access to models via their API, and offer subscription services to both consumer and business customers. Important detail: in almost every case, an AI lab must make an up-front commitment, likely with a prepayment, to secure future capacity. This means that AI labs are often having to pony up massive amounts of up-front capital on top of their incredibly high ongoing costs. Anthropic has made $5 billion in revenue and spent $10 billion on compute to date, and had to raise another $30 billion in February 2026 after raising about $16.5 billion in 2025 alone. Through September 2025, OpenAI made $4.3 billion in revenue and spent $8.67 billion on inference alone. Neither of these companies has a path to profitability.

AI startups buy access to models via AI labs’ APIs, building services that have “AI features” powered by said models, paid on a per-million-token basis (for input tokens (user-fed data) and output tokens (model outputs)). Every single AI startup is unprofitable, and every AI startup functions by offering a service powered by AI models provided by AI labs. In every case that I’ve found, these providers always offer far more in token burn than the cost of their subscriptions.
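To make the per-million-token mechanics concrete, here’s a minimal sketch in Python. The per-token rates and the session shape are illustrative assumptions of mine, not any lab’s actual price list; the point is how quickly agentic workloads, which re-read context on every turn, blow past a flat $20 subscription.

```python
# A minimal sketch of per-million-token API pricing.
# The rates below are illustrative assumptions, not a published price list.

INPUT_PER_MILLION = 3.00    # $ per 1M input tokens (assumed)
OUTPUT_PER_MILLION = 15.00  # $ per 1M output tokens (assumed)

def session_cost(input_tokens: int, output_tokens: int) -> float:
    """API cost of one session at the assumed rates."""
    return (input_tokens / 1_000_000) * INPUT_PER_MILLION \
         + (output_tokens / 1_000_000) * OUTPUT_PER_MILLION

# A coding agent re-reads big chunks of a codebase on every turn, so input
# tokens dominate: assume 40 turns a day, ~50k input / ~2k output per turn.
turns_per_day = 40
daily = session_cost(turns_per_day * 50_000, turns_per_day * 2_000)
print(f"One day of agentic coding: ~${daily:.2f}")                    # ~$7.20
print(f"20 working days: ~${daily * 20:.2f} vs. a $20 subscription")  # ~$144
```

Even at these modest assumed rates, a single heavy user can cost a startup several times their subscription fee, which is exactly the token-burn gap described above.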
Consumers and businesses pay for monthly subscriptions or, in some cases, API access to models. Customers paying for AI services in most cases pay for a monthly service, such as Anthropic’s Claude Pro or Max or Perplexity Pro/Max, running from $20 a month to $200 a month. These subscriptions for the most part mask the amount of tokens that you are actually burning as a customer, but in every single case that I’ve found, that amount is always in excess of the subscription cost. Cursor has, at this point, raised $3.36 billion, and turned it into, at best, about a billion dollars of revenue, and that’s assuming it grew linearly between periods versus (more likely) having up and down months.

As AI labs grow, their costs increase dramatically, both in their immediate compute costs and the demands from GPU providers for up-front cash to secure future compute allocation. In parallel, as AI startups grow, they burn more money per customer, which increases their dependence on venture capital. As this happens, AI labs are facing both a cash and compute crush, which means they have to start either controlling the amount of compute customers use or making more money from serving it. AI labs are thus forced to raise prices on AI startups, either through tolls (priority processing) or raw cost increases. Another important detail: one of the ways that AI labs raise prices isn’t even through “making things more expensive,” but through selling access to models that burn more tokens. Think of this as the variable-rate mortgage of the Subprime AI Crisis.

As AI labs raise prices on their AI startup clients, these startups are forced to reduce the quality of their services and/or increase their costs after years of getting their customers used to a significantly cheaper or better service, which makes their products less attractive, leading to customer churn. Worse still, these customers are used to subscriptions from Anthropic and OpenAI with remarkably generous rate limits that are impossible for even a well-capitalized AI startup to compete with, which means that these changes only slow the rate of burn rather than making these companies profitable. As a result, these AI startups become even more dependent on venture capital. While OpenAI and Anthropic are pretty happy at the top of the food chain, they are also dependent on the existence of AI startups for revenue for their models, which means that while these price changes increase the amount of revenue they get in the short term, they invariably push AI startups toward cheaper open source models and death.

AI labs have, this entire time, been massively subsidizing their own products. Per Forbes, AI coding platform Cursor has faced numerous problems competing with Anthropic, which it claims at one point let users burn $5,000 a month in tokens on a $200-a-month subscription, which reflects my own reporting from last year. Cursor also claims in the same article that its enterprise customers are profitable, but I call bullshit, considering the multiple enterprise customers who have reached out to tell me they can burn $2 or $3 for every $1 of subscription.

The problem is that a subsidy is always a losing proposition, which means that at some point Anthropic and OpenAI will have to massively reduce the amount of tokens that people use on their accounts. As I’ll get to later, this infuriates users and sends them running for the doors.
At some point, the cost of doing business with Anthropic and OpenAI will kill AI startups, as there is no point at which any of them become sustainable, which will in turn kill the revenue from selling access to their models. At some point, users will be forced to burn tokens at a rate that actually matches their subscription costs, which will reduce the value of the product, which will in turn reduce the number of subscribers they have. And at some point, Anthropic and OpenAI will be left with a bunch of compute reservations they’ve made that they don’t need and can’t afford due to mis-timed growth projections. As Dario Amodei said back in February, there’s no hedge on Earth that could stop Anthropic from going bankrupt if it buys too much compute. And if the two largest customers of AI compute can go bankrupt — there really isn’t even a distant third outside of xAI and the hyperscalers, the latter of which are predominantly standing up OpenAI and Anthropic (or, in Meta’s case, a bunch of unprofitable LLM bullshit) — who’s going to pay for all of those data centers? Fucking Aquaman?

Users are inherently trained to expect a service that they pay for on a monthly basis, and their experience of said service is entirely separated from “token burn,” making it impractical, if not impossible, to get them to use models directly, or to apply rate limits. The longer a user has used the service, the more their habits orient around an “unlimited” or “partially limited” service, which means your only options are to raise prices or apply rate limits, with the only justification for either of them being “new models” (which are more expensive) or “we’re unable to afford to run our company,” which the user doesn’t give a shit about.

“They’re profitable on inference.” No, they are not! There is no proof of this statement anywhere! What’s your source here? Sam Altman saying it in August 2025? Dario Amodei saying he had gross margins of 50%? That was a “stylized fact” that he specifically said wasn’t about Anthropic, not that you care! What else have you got for me here? SemiAnalysis’ InferenceX? Gun to your head, explain to me how that’s the case. Oh, you’ve heard the companies do “batch processing”? Why is all that “batch processing” not making them profitable? I swear to god, if you say any shit about how these companies would be “profitable without training,” I’m going to scream. No! AI training costs are not going away. They are an inherent part of running these companies, and are not capex. They are operating expenses.

“AI is being funded by the largest companies in the world with the healthiest balance sheets.” I will obliterate you with the 100-Type Guanyin Bodhisattva! Microsoft is the only remaining hyperscaler that is funding the AI buildout without debt, and none of them will talk about AI revenues. This point is trotted out by imbeciles to try and say “this is nothing like the dot com bubble,” which I fundamentally agree with — it’s worse! It’s weirder! It’s a bigger waste! And they collectively need $2 trillion in brand-new AI revenues by 2030 for any of it to make sense!

“The cost of AI services is going down because token prices are going down.” You are a silly person! You do not actually understand anything! The cost of tokens is not the same as the cost of serving tokens! OpenAI cut the cost of its o3 reasoning model by 80% a few weeks after the release of Claude Opus 4. Do you think that happened because of magical price reductions on the ops side? If so, I wish to study your brain.
“It’s the gym model, they want people to subscribe and not use it, it’s the gym model, it’s the gym model—” TZZZZT, whoops, looks like you got tased.

As for the few ways this goes:

Anthropic will announce that it’s “fixed the bug” (IE: eased rate limits it intentionally set) and apologize to the community, prolonging the inevitable. Rate limits will continue to decay over time, just at a slower pace.

Anthropic keeps the limits where they are, and we hit a new normal that makes everybody really mad.

And the many versions of the Stripper With Five Houses:

The AI companies that only have customers because they spend $3 to $10 for every dollar of revenue.

The venture capitalists that are ultra-rich on paper, heavily leveraging their firms in companies like Harvey (worth “$11 billion”) and Cursor (worth “$29.3 billion”) that burn hundreds of millions or billions of dollars and are now both too large to sell to another company and too shitty a company to take public.

The AI labs that have built massive businesses on selling heavily-subsidized subscriptions to customers who don’t want to pay for them, and API calls to AI startups that can only pay them if infinite resources exist.

The AI data center companies that, thanks to readily-available debt, have started 200GW of projects (and only started building 5GW of them) for AI demand that doesn’t exist, entirely based on the theoretical sense that maybe it will in the future.

Oracle, which is building hundreds of billions of dollars of data centers for OpenAI (which needs infinite resources to be able to pay its compute costs) and taking on equally-large amounts of debt, all because it assumes that nothing bad will ever happen.

The customers of AI startups that are building lifestyles, identities and workflows around unsustainable AI subscriptions, believing that we’re “just at the beginning.”

And the updated pale horses of the AIpocalypse:

Any further price increases or service degradations from Anthropic and OpenAI are a sign that they’re running low on cash.

Any reduction in capex from big tech is a sign that the AI bubble is bursting, as NVIDIA’s continued growth only comes from Microsoft, Google, Amazon, Meta, Oracle and other large companies buying tens of billions of dollars of servers from Taiwanese ODMs like Foxconn and Quanta.

Any further price increases or service degradations from AI startups, such as Cursor, Perplexity, Harvey, Lovable or Replit. These are all token-intensive venture-hogs that burn $4 or $5 for every $1 of revenue.

Any discussion of layoffs at AI companies.

The collapse of a data center deal that has yet to commence construction.

The collapse of a data center already in construction, but before it’s finished.

The collapse of an already-constructed data center.

CoreWeave or any major data center player having trouble or failing to raise debt. We’ve already seen the beginnings of this with CoreWeave’s issues raising debt for its Lancaster, PA data center.

The further collapse of Stargate Abilene: if anything happens to the construction of OpenAI’s flagship data center (being built by Oracle) in Abilene, Texas, you know shit is getting bad.

Any problems or delays with OpenAI or Anthropic going public: both of these companies are the financial equivalent of Chernobyl, so I can only imagine it’ll take some talented accountants to get them in any shape where investors without lead poisoning actually want to get involved.

Any problems with Blue Owl as an ongoing concern: Blue Owl is the loosest lender in the AI bubble, and if it falls behind on its loans or has issues with its limited partners, that’s a bad sign too.
Any problems with SoftBank: SoftBank was somehow able to raise $40 billion in debt (payable in a year) to fund its chunk of OpenAI’s pseudo-$110 billion round, running over its promised 25% ratio of loans to the value of its assets. This puts SoftBank in a very precarious position.

ARM’s stock tanking: A great deal of SoftBank’s wealth comes from its investment in ARM, including a $15 billion margin loan based on its stock. If ARM drops below $80, things are going to get hairy for Masayoshi Son.

Any issues with NVIDIA’s customers’ ability to pay: If NVIDIA’s customers don’t reliably pay it, things will look bad come earnings season.

NVIDIA misses on earnings: This is an obvious one, but I think the markets will crap their pants if NVIDIA misses on earnings estimates.


Premium: How Much Of The AI Bubble Is Real?

I’m turning 40 in a month or so, and at 40 years young, I’m old enough to remember as far back as December 11, 2025, when Disney and OpenAI “reached an agreement” to “bring beloved characters from across Disney’s brands to Sora.” As part of the deal, Disney would “become a major customer of OpenAI,” use its API “to build new products, tools and experiences (as well as showing Sora videos in Disney+),” and “deploy ChatGPT for its employees,” as well as making a $1 billion equity investment in OpenAI.

Just one small detail: none of this appears to have actually happened.

Despite an alleged $1 billion equity investment, neither Disney’s FY2025 annual report nor its February 2, 2026 Q1 FY2026 report mentions OpenAI or any kind of equity investment. Disney+ does not show any Sora videos, and searching for “Sora” brings up “So Random,” a musical comedy sketch show from 2011 with a remarkably long Wikipedia page that spun off from another show called “Sonny With A Chance” after Demi Lovato went into rehab. It doesn’t appear that the investment ever happened, likely because — as was reported earlier this week by The Information and the Wall Street Journal — OpenAI is killing Sora. Shortly after the news was reported, The Hollywood Reporter confirmed that the deal with Disney was also dead. Per The Journal, emphasis mine:

Oh, okay! The app that CNBC said was “challenging Hollywood” and “freaking out the movie industry,” that The Hollywood Reporter suggested could somehow challenge Pixar and showed Sam Altman successfully “playing Hollywood,” that The Ankler said was OpenAI “going to war with Hollywood” as it “shook the industry,” that Deadline said made Hollywood “sore,” that Boardroom said was in a standoff with Hollywood, that the LA Times said was “deepening a battle between Hollywood and OpenAI” and “igniting a firestorm in Hollywood,” that Puck said had “Hollywood panicking,” that TechnoLlama said was “the end of copyright as we know it,” and that Slate said was a case of AI “crushing Hollywood as we’ve known it” is completely dead a little more than five months after everybody claimed it was changing everything.

It’s almost as if everybody making these proclamations was instinctually printing whatever marketing copy had been imagined by the AI labs to promote compute-intensive vaporware, and absolutely nobody is going to apologize to the people working in the entertainment industry for scaring the fuck out of them with ghost stories! Every single person who blindly repeated that Sora existed and was changing everything should be forced to apologize to their readers!

I cannot express the sheer amount of panic that spread through every single part of the entertainment industry as a result of these specious, poorly-founded mythologies spread by people who didn’t give enough of a shit to understand what was actually going on. Sora 2 was always an act of desperation — an attempt to create a marketing cycle to prop up a tool that burned as much as $15 million a day, a cycle that most of the mainstream media bought into because they believe everything OpenAI says and are willing to extrapolate the destruction of an entire industry from a fucking facade. Thanks to everyone who participated in this grotesque scare campaign, everybody I know in the film industry has been freaking out, because every third headline about Sora 2 said that it would quickly replace actors and directors.
The majority of coverage of Sora 2 acted as if we were mere minutes from it replacing all entertainment and all video-based social media, even though the videos themselves were only a few seconds long and looked like shit! Sora 2 was never “challenging Hollywood” or “a threat to actors and directors.” It was a way to barf out videos that looked very much like Sora 2’s training data, and the reason you could only generate a few seconds at a time was that these models start hallucinating stuff very quickly, because that’s what Large Language Models do.

Yet this is what the AI bubble is — poorly-substantiated, media-driven hype cycles that exploit a total lack of awareness or willingness to scrutinize the powerful. Sora 2 was always a dog, it always looked like shit, it never challenged Hollywood, and it never actually threatened the livelihoods of actors or directors or DPs or screenwriters outside of the tiny brains of studio executives who don’t watch or care about movies. Anybody who published a scary story about the power of Sora 2 helped needlessly spread panic through the performing arts, and should feel deep, unbridled shame. You have genuinely harmed people I know and love, and need to wise up and do your fucking job.

I know, I know, you’re going to say you were “just reporting what was happening,” and that “OpenAI seemed unstoppable,” but none of that was ever true other than in your mind and the minds of venture capitalists and AI boosters. No, Sora 2 was never actually replacing anyone. That’s just not true. You made it up, or had it made up for you.

But that, my friends, is the AI bubble. Five months can pass and an app can go from The End of Hollywood that apparently raised $1 billion to “discontinued via a Twitter post that reads exactly like the collapse of a failed social network from 2013” and “didn’t actually raise anything.” It doesn’t matter if stuff actually exists, because it’ll be reported as if it does as long as a company says it’ll happen.

Perhaps I sound a little deranged, but isn’t anybody concerned that a billion dollars that was meant to move from one company to another simply didn’t happen? Or, for that matter, that this keeps happening, again and again and again? I’m serious! As I discussed in last year’s Enshittifinancial Crisis, OpenAI has had multiple deals that seem to be entirely fictional (I’ve listed them at the end of this piece). That’s just the AI bubble, baby! We don’t need actual stuff to happen! Just announce it and we’ll write it up! No problem, man!

It doesn’t matter that one of the largest entertainment companies in the world simply didn’t give the most-notable startup in the world one billion dollars, much as it’s not a big deal that the entire media flew like Yogi Bear lured with a delicious pie toward every single talking point about OpenAI destroying Hollywood, much like it’s not a problem that Broadcom, AMD, SK Hynix, and Samsung have all misled their investors and the media about deals that range from threadbare to theoretical.

Except it is a problem, man! As I covered in this week’s free newsletter, I estimate that only around 3GW of actual IT load (so around 3.9GW of power) came online last year, and as Sightline reported, only 5GW of data center construction is actually in progress globally at this time, despite somewhere between 190GW and 240GW supposedly being in the pipeline. In reality, data centers take forever to build (and obtaining the power takes even longer than that), but nobody needs to harsh their flow by looking into what’s actually happening.
In reality, the AI industry is pumped full of theoretical deals, obfuscations of revenues, promises that never lead anywhere, and mysterious hundreds of millions or billions of dollars that never seem to appear. Beneath the surface, very little actual economic value is being created by AI, other than the single most annoying conversations in history, pushed by people who will believe and repeat literally anything they are told by a startup or public company.

No, really. The two largest consumers of AI compute have made — at most, and I have serious questions about OpenAI — a combined $25 billion since the beginning of the AI bubble, and beneath them lies a labyrinth of different companies trying to use annualized revenues to obfuscate their meager cashflow and brutal burn rate. To make matters worse, almost every single data center announcement you’ve read for the last four years is effectively theoretical, their nigh-on-conceptual “AI buildouts” laundered through major media outlets to give the appearance of activity where little actually exists.

The AI industry is grifting the finance and media industries, exploiting a global intelligence crisis where the people with some of the largest audiences and pocketbooks have fundamentally disconnected themselves from reality. I don’t like being misled, and I don’t like seeing others get rich doing so. It’s time to get to the bottom of this. Let’s rock.

Those entirely fictional OpenAI deals, for the record:

Its supposed $100 billion investment from NVIDIA (that was always a “letter of intent”), which went from OpenAI allegedly buying billions of dollars of GPUs from NVIDIA in October 2025 to “only a commitment” in February 2026, in a mere four months.

A “letter of intent” with SK Hynix and Samsung to supply 900,000 wafers of RAM a month, which was reported as representing 40% of the global supply of DRAM and never resulted in anybody buying or selling any fucking RAM.

A supposed “definitive agreement” with AMD from October 2025 that would involve OpenAI using AMD’s GPUs to power its “next-generation AI infrastructure,” except AMD didn’t change guidance and does not appear to have any revenue from OpenAI, despite the first gigawatt of data center capacity being due by the end of this year. Part of the deal also involved OpenAI being able to buy 10% of AMD’s stock, but that was so stupid I can’t even bring myself to write it up. When asked about this on its latest earnings call, AMD CEO Lisa Su said that “the ramp is on schedule to start in the second half of the year,” repeating that the deal existed while not increasing guidance to account for a gigawatt of chips (which would work out to somewhere in the region of $20 billion to $30 billion of sales, against its weak guidance of $9.8 billion for the next quarter), sending the stock tumbling as a result. Isn’t it also weird that Meta signed a near-identical deal on February 24, 2026, and nobody seemed to notice that guidance wasn’t changing and AMD was apparently also going to install a gigawatt of GPUs with Meta by the end of 2026? Is everybody drunk? What’s going on?

A “strategic collaboration” with Broadcom “...to deploy 10 gigawatts of OpenAI-designed AI accelerators” by the end of 2029 that has resulted in no sales of any kind and no increase in guidance to match, with no mentions of OpenAI in its latest quarterly earnings report.
On its most-recent earnings call, Broadcom CEO Hock Tan added that he expected OpenAI to “deploy in volume their first-generation XPU in 2027 at over 1 gigawatt of capacity,” but did not raise guidance or, when asked directly, say how it would deploy 10GW by the end of 2029. I’ll also add that there isn’t a chance in hell OpenAI deploys a gigawatt of these chips in that timeframe, and Broadcom has yet to show any proof that these chips are going to be made.


The AI Industry Is Lying To You

Hi! If you like this piece and want to support my independent reporting and analysis, why not subscribe to my premium newsletter? It’s $70 a year, or $7 a month, and in return you get a weekly newsletter that’s usually anywhere from 5,000 to 18,000 words, including vast, detailed analyses of NVIDIA, Anthropic and OpenAI’s finances, and the AI bubble writ large. I just put out a massive Hater’s Guide To The SaaSpocalypse, as well as the Hater’s Guide to Adobe. It helps support free newsletters like these!

The entire AI bubble is built on a vague sense of inevitability — that if everybody just believes hard enough that none of this can ever, ever go wrong, at some point all of the very obvious problems will just go away. Sadly, one cannot beat physics.

Last week, economist Paul Kedrosky put out an excellent piece centered around a chart showing that new data center capacity additions (as in additions to the pipeline, not capacity brought online) halved in the fourth quarter of 2025, per data from Wood Mackenzie. Wood Mackenzie’s report framed it in harsh terms. As I said above, this refers only to capacity that’s been announced rather than stuff that’s actually been brought online, and Kedrosky missed arguably the craziest chart: that, of the 241GW of disclosed data center capacity, only 33% is actually under active development.

The report also adds that the majority of committed power (58%) is for “wires-only utilities,” which means the utility provider is only responsible for getting power to the facility, not generating the power itself, which is a big problem when you’re building entire campuses made up of power-hungry AI servers. WoodMac also adds that PJM, one of the largest utility providers in America, “...remains in trouble, with utility large load commitments three times as large as the accredited capacity in PJM’s risked generation queue,” which is a complex way of saying “it doesn’t have enough power.” This means that fifty-eight god damn percent of data centers need to work out their own power somehow. WoodMac also adds that there is around $948 billion in capex being spent in totality on US-based data centers, but capex growth decelerated for the first time since 2023.

Let’s simplify: the term you’re looking for here is data center absorption, which is (to quote Data Center Dynamics) “...the net growth in occupied, revenue-producing IT load,” which grew in America’s primary markets from 1.8GW in new capacity in 2024 to 2.5GW of new capacity in 2025, according to CBRE. The problem is, this number doesn’t actually express newly-turned-on data centers. Somebody expanding a project to take on another 50MW still counts as “new absorption.”

Things get more confusing when you add in other reports. Avison Young’s reports about data center absorption found 700MW of new capacity in Q1 2025, 1.173GW in Q2, a little over 1.5GW in Q3 (I cannot find its Q3 report anywhere) and 2.033GW in Q4, for a total of 5.44GW, entirely in “colocation,” meaning buildings built to be leased to others.
Yet there’s another problem with that methodology: these are facilities that have been “delivered” or have a “committed tenant.” “Delivered” could mean “the facility has been turned over to the client, but it’s literally a powered shell (a warehouse) waiting for installation,” or it could mean “the client is up and running.” A “committed tenant” could mean as little as “we’ve signed a contract and we’re raising funds” (such as is the case with Nebius raising money off of a Meta contract to build data centers at some point in the future).

We can get a little closer by using the definitions from DataCenterHawk (from which Avison Young gets its data), which has its own specific definition of absorption. That’s great! Except Avison Young has chosen to define absorption in an entirely different way — that a data center (in whatever state of construction it’s in) has been leased, or “delivered,” which means “a fully ready-to-go data center” or “an empty warehouse with power in it.”

CBRE, on the other hand, defines absorption as “net growth in occupied, revenue-producing IT load,” and is inclusive of hyperscaler data centers. Its report also includes smaller markets like Charlotte, Seattle and Minneapolis, adding a further 216MW in absorption of actual new, existing, revenue-generating capacity. So that’s about 2.716GW of actual, new data centers brought online. It doesn’t include areas like Southern Virginia or Columbus, Ohio — two massive hotspots from Avison Young’s report — and I cannot find a single bit of actual evidence of significant revenue-generating, turned-on, real data center capacity being stood up at scale. DataCenterMap shows 134 data centers in Columbus, but as of August 2025, the Columbus area had around 506MW in total according to the Columbus Dispatch, though Cushman and Wakefield claimed in February 2026 that it had 1.8GW.

Things get even more confusing when you read that Cushman and Wakefield estimates that around 4GW of new colocation supply was “delivered” in 2025, a term it does not define in its actual report, which for whatever reason lacks absorption numbers. Its H1 2025 report, however, includes absorption numbers that add up to around 1.95GW of capacity…without defining absorption, leaving us with exactly the same problem we have with Avison Young.

Nevertheless, based on these data points, I’m comfortable estimating that North American data center absorption — as in the IT load of data centers actually turned on and in operation — was at around 3GW for 2025, which would work out to about 3.9GW of total power.

And that number is a fucking disaster.

Earlier in the year, TD Cowen’s Jerome Darling told me that GPUs and their associated hardware cost about $30 million a megawatt. 3GW of IT load (as in the GPUs and their associated gear’s power draw) works out to around $90 billion of NVIDIA GPUs and associated hardware, which would be covered under NVIDIA’s “data center” revenue segment. America makes up about 69.2% of NVIDIA’s revenue, or around $149.6 billion in FY2026 (which runs, annoyingly, from February 2025 to January 2026). NVIDIA’s overall data center segment revenue was $195.7 billion, which puts America’s data center purchases at around $135 billion, leaving around $44 billion of GPUs and associated technology uninstalled. With the acceleration of NVIDIA’s GPU sales, it now takes about six months to install and operationalize a single quarter’s worth of sales.
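For the spreadsheet-averse, here’s that arithmetic as a minimal Python sketch, using the piece’s own figures. My rounding lands at roughly $45 billion rather than the ~$44 billion above, purely because of the rounded inputs.

```python
# Reconstruction of the install-gap arithmetic above, using the piece's figures.

GPU_COST_PER_MW = 30.0      # $M of GPUs + gear per MW of IT load (TD Cowen figure)
US_IT_LOAD_ADDED_GW = 3.0   # estimated US data center absorption in 2025
NVDA_DC_REVENUE_B = 195.7   # $B, NVIDIA's FY2026 data center segment
US_REVENUE_SHARE = 0.692    # America's share of NVIDIA's revenue

installed_b = US_IT_LOAD_ADDED_GW * 1000 * GPU_COST_PER_MW / 1000  # $B actually racked
us_purchases_b = NVDA_DC_REVENUE_B * US_REVENUE_SHARE              # $B bought in the US

print(f"GPUs actually installed: ~${installed_b:.0f}B")                      # ~$90B
print(f"US data center purchases: ~${us_purchases_b:.0f}B")                  # ~$135B
print(f"Bought but not yet racked: ~${us_purchases_b - installed_b:.0f}B")   # ~$45B
```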
Because these are Blackwell (and I imagine some of the next-generation Vera Rubin) GPUs, they are more than likely going to new builds thanks to their greater power and cooling requirements, and while some could in theory be going to old builds retrofitted to fit them, NVIDIA’s increasingly-centralized (as in focused on a few very large customers) revenue heavily suggests the presence of large resellers like Dell or Supermicro (which I’ll get to in a bit) or Taiwanese ODMs like Foxconn and Quanta, who manufacture massive amounts of servers for hyperscaler buildouts. I should also add that it’s commonplace for hyperscalers to buy the GPUs for their colocation partners to install, which is why Nebius and Nscale and other partners never raise more than a few billion dollars to cover construction costs.

It’s becoming very obvious that data center construction is dramatically slower than NVIDIA’s GPU sales, which continue to accelerate dramatically every single quarter. Even if you think AI is the biggest, most hugest and most special boy: what’s the fucking point of buying these things two to four years in advance? Jensen Huang is announcing a new GPU every year! By the time they actually get all the Blackwells in, Vera Rubin will be two years old! And by the time we install those Vera Rubins, some other new GPU will be beating them!

Before we go any further, I want to be clear how difficult it is to answer the question “how long does a data center take to build?” You can’t really say “[time] per megawatt” because things become ever-more complicated with every 100MW or so. As I’ll get into, it’s taken Stargate Abilene two years to hit 200MW of power. Not IT load. Power.

Anyway, the question of “how much data center capacity came online?” is pretty annoying too. Sightline’s research — which estimated that “almost 6GW of [global data center power] capacity came online last year” — found that while 16GW of capacity was slated to come online in 2026 across 140 projects, only 5GW is currently under construction, and somehow doesn’t say that “maybe everybody is lying about timelines.” Sightline believes that half of 2026’s supposed data center pipeline may never materialize, with 11GW of capacity in the “announced” stage with “...no visible construction progress despite typical build timelines of 12-18 months.” “Under construction” can also mean anything from “a single steel beam” to “nearly finished.” These numbers are also based on 5GW of capacity, meaning about 3.84GW of IT load, or about $111.5 billion in GPUs and associated gear (roughly 57.5% of NVIDIA’s FY2026 data center revenue) that’s actually getting built.

Sightline (and basically everyone else) argues that there’s a power bottleneck holding back data center development, and Camus explains that the biggest problem is a lack of transmission capacity (the amount of power that can be moved) and power generation (creating the power itself). Camus adds that America also isn’t really prepared to add this much power at once. Nevertheless, I also think there’s another, more obvious reason: it takes way longer to build a data center than anybody is letting on, as evidenced by the fact that we only added 3GW or so of actual capacity in America in 2025. NVIDIA is selling GPUs years into the future, and its ability to grow, or even just maintain its current revenues, depends wholly on its ability to convince people that this is somehow rational.

Let me give you an example.
OpenAI and Oracle’s Stargate Abilene data center project was first announced in July 2024 as a 200MW data center. In October 2024, the joint venture between Crusoe, Blue Owl and Primary Digital Infrastructure raised $3.4 billion, with the 200MW of capacity due to be delivered “in 2025.” A mid-2025 presentation from land developer Lancium said it would have “1.2GW online by YE2025.” In a May 2025 announcement, Crusoe, Blue Owl, and Primary Digital Infrastructure announced the creation of a $15 billion joint vehicle, and said that Abilene would now be 8 buildings, with the first two buildings being energized by the “first half of 2025,” and the rest “energized by mid-2026.” Each building would have 50,000 GPUs, and the total IT load is meant to be 880MW or so, with a total power draw of 1.2GW. I’m not interested in discussing OpenAI not taking the supposedly-planned extensions to Abilene, because they never existed and were never going to happen.

In December 2025, Oracle stated that it had “delivered” 96,000 GPUs, and in February, Oracle was still only referring to two buildings, likely because that’s all that’s been finished. My sources in Abilene tell me that Building Three is nearly done, but…this thing is meant to be turned on in mid-2026. Developer Mortensen claims the entire project will be completed by October 2026, which it obviously, blatantly won’t be.

I hate to speak in conspiratorial terms, but this feels like a blatant coverup with the active participation of the press. CNBC reported in September 2025 that “the first data center in $500 billion Stargate project is open in Texas,” referring to a data center with an eighth of its IT load operational as “online” and “up and running,” with Crusoe adding two weeks later that it was “live,” “up and running” and “continuing to progress rapidly,” all so that readers and viewers would think “wow, Stargate Abilene is up and running” despite it being months if not years behind schedule. At its current rate of construction, Stargate Abilene will be fully built sometime in late 2027.

Oracle’s Port Washington data center, as of March 6, 2026, consisted of a single steel beam. Stargate Shackelford, Texas broke ground on December 15, 2025, and as of December 2025, construction barely appears to have begun at Stargate New Mexico. Meta’s 1GW data center campus in Indiana only started construction in February 2026. And despite Microsoft trying to mislead everybody that its Wisconsin data center had “arrived” and “been built,” looking even an inch deeper suggests very little has actually come online — and, considering the first data center was $3.3 billion (remember: $14 million a megawatt just for construction), I imagine Microsoft has successfully brought online about 235MW of power for Fairwater. What Microsoft wants you to think is that it brought online gigawatts of power (always referred to in the future tense), because Microsoft, like everybody else, is building data centers at a glacial pace, because construction takes forever, even if you have the power, which nobody does!

The concept of a hundred-megawatt data center is barely a few years old, and I cannot actually find a built, in-service gigawatt data center of any kind, just vague promises about theoretical Stargate campuses built for OpenAI, a company that cannot afford to pay its bills. Everybody keeps yammering on about “what if data centers don’t have power” when they should be thinking about whether data centers are actually getting built.
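Since this piece keeps converting between power, IT load, and dollars, here’s a minimal sketch of those rules of thumb in Python. The 1.3x power-to-IT-load ratio, the $30 million-per-MW GPU figure, and the $14 million-per-MW construction figure are the piece’s own rough numbers, not industry constants, and my rounding differs slightly from the figures quoted above.

```python
# Rules of thumb used throughout this piece, collected in one place.
# These ratios are rough estimates, not industry constants.

POWER_TO_IT_RATIO = 1.3   # total facility power / IT load (assumed overhead)
GPU_COST_PER_MW = 30.0    # $M of GPUs + associated gear per MW of IT load
BUILD_COST_PER_MW = 14.0  # $M of construction per MW of power

def it_load_mw(total_power_mw: float) -> float:
    """Approximate IT load implied by a facility's total power draw."""
    return total_power_mw / POWER_TO_IT_RATIO

def built_mw(construction_budget_billions: float) -> float:
    """Approximate megawatts of power a construction budget buys."""
    return construction_budget_billions * 1000 / BUILD_COST_PER_MW

# 5GW of power under construction globally -> IT load -> GPU dollars
it = it_load_mw(5000)
print(f"5GW power ~ {it:,.0f}MW IT load ~ ${it * GPU_COST_PER_MW / 1000:,.0f}B in GPUs")

# Microsoft's $3.3 billion first Fairwater building -> implied megawatts
print(f"$3.3B of construction ~ {built_mw(3.3):,.0f}MW of power")  # ~236MW
```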
Microsoft proudly boasted in September 2025 about its intent to build “the UK’s largest supercomputer” in Loughton, England with Nscale, and as of March 2026, it’s literally a scaffolding yard full of pylons and scrap metal. Stargate Abilene has been stuck at two buildings for upwards of six months.

Here’s what’s actually happening: data center deals are being funded by eager private credit gargoyles that don’t know shit about fuck. These deals are announced, usually by overly-eager reporters who don’t bother to check whether the previous data centers ever got built, as massive “multi-gigawatt deals,” and then nobody follows up to check whether anything actually happened. All that anybody needs to fund one of these projects is an eager-enough financier and a connection to NVIDIA. All Nebius had to do to raise $3.75 billion in debt was sign a deal with Meta for data center capacity that doesn’t exist and will likely take three to four years to build (it’s never happening). Nebius has yet to finish its Vineland, New Jersey data center for Microsoft, which was meant to be “at 100MW” by the end of 2025, but appears to have only had 50MW (the first phase) available as of February 2026.

I’m just gonna come out and say it: I think a lot of these data center deals are trash, will never get built, and thus will never get paid. The tech industry has taken advantage of an understandable lack of knowledge about construction and power timelines in the media to pump out endless stories about “data center capacity in progress” as a means of obfuscating an ever-growing scandal: hundreds of billions of dollars of NVIDIA GPUs got sold to go into projects that may never be built. These things aren’t getting built, or if they are getting built, it’s taking way, way longer than expected, which means that interest on that debt is piling up. The longer it takes, the less rational it becomes to buy further NVIDIA GPUs — after all, if data centers are taking anywhere from 18 months to three years to build, why would you be buying more of them? Where are you going to put them, Jensen?

This also seriously brings into question the appetite that private credit and other financiers have for funding these projects, because much of the economic potential comes from the idea that these projects get built and have stable tenants. Furthermore, if the supply of AI compute is a bottleneck, this suggests that when (or if) that bottleneck is ever cleared, there will suddenly be a massive supply glut, lowering the overall value of the data centers in progress…which are, by the way, all filled with Blackwell GPUs, which will be two or three years old by the time the data centers are finally turned on.

That’s before you get to the fact that the ruinous debt behind AI data centers makes them all remarkably unprofitable, or that their customers are AI startups that lose hundreds of millions or billions of dollars a year, or that NVIDIA is the largest company on the stock market, and said valuation is the result of a data center construction boom that appears to be decelerating, and that, even if it wasn’t, is operating at a glacial pace compared to NVIDIA’s sales.

Not to sound unprofessional or nothing, but what the fuck is going on? We have 241GW of “planned” capacity in America, of which only 79.5GW is “under active development,” but when you dig deeper, only 5GW of capacity is actually under construction? The entire AI bubble is a god damn mirage.
Every single “multi-gigawatt” data center you hear about is a pipedream, little more than a few contracts and some guys with their hands on their hips saying “brother, we’re gonna be so fuckin’ rich!” as they siphon money from private credit — and, by extension, you, because where does private credit get its capital from? That’s right. A lot of it comes from pension funds and insurance companies.

Here’s the reality: data centers take forever. Every hyperscaler and neocloud talking about “contracted compute” or “planned capacity” may as well be telling you about their planned dinners with The Grinch and Godot. The insanity of the AI buildout will be seen as one of the largest wastes of capital of all time (to paraphrase JustDario), and I anticipate that the majority of the data center deals you’re reading about simply never get built. The fact that there’s so much data about data center construction and so little data about completed construction suggests that those preparing the reports are in on the con. I give credit to CBRE, Sightline and Wood Mackenzie for having the courage to even lightly push back on the narrative, even if they do so by obfuscating terms like “capacity” or “power” in ways that reporters and other analysts are sure to misinterpret.

Hundreds of billions of dollars have been sunk into buying GPUs, in some cases years in advance, to put into data centers that are being built at a rate that means NVIDIA’s 2025 and 2026 revenues will take until 2028 or 2029 to actually operationalize, and that’s making the big assumption that any of it actually gets built.

I think it’s also fair to ask where the money is actually going. 2025’s $178.5 billion in US-based data center deals doesn’t appear to be resulting in any immediate (or even future) benefit to anybody involved. I also wonder whether the demand actually exists to make any of this worthwhile, or what people are actually paying for this compute. If we assume 3GW of IT load capacity was brought online in America, that should (theoretically) mean tens of billions of dollars of revenue thanks to the “insatiable demand for AI” — except nobody appears to be showing massive amounts of revenue from these data centers.

Applied Digital only had $144 million in revenue in FY2025 (and lost $231 million making it). CoreWeave, which claimed to have “850MW of active power” (or around 653MW of IT load) at the end of 2025 (up from 420MW in Q1 FY2025, or 323MW of IT load), made $5.13 billion of revenue (and lost $1.2 billion before tax) in FY2025. Nebius? $228 million, for a loss of $122.9 million, on 170MW of active power (or around 130MW of IT load). Iren lost $155.4 million on $184.7 million of revenue last quarter, and that’s with a release of deferred tax liabilities of $182.5 million. Equinix made about $9.2 billion in revenue in its last fiscal year, and while it made a profit, it’s unclear how much of that came from its large and already-existent data center portfolio, though it’s likely a lot, considering Equinix is boasting about its “multi-megawatt” data center plans with no discussion of its actual capacity. And, of course, Google, Amazon, and Microsoft refuse to break out their AI revenues. Based on my reporting from last year, OpenAI spent about $8.67 billion on Azure through September 2025, and Anthropic around $2.66 billion in the same period on Amazon Web Services.
Given that these are the two largest consumers of AI compute, this heavily suggests that the actual demand for AI services is pretty weak, and mostly taken up by a few companies (or hyperscalers running their own services). At some point, reality will set in and spending on NVIDIA GPUs will have to decline. It’s truly insane how much has been invested so many years into the future, and it’s remarkable that nobody else seems this concerned. Simple questions like “where are the GPUs going?” and “how many actual GPUs have been installed?” are left unanswered as article after article gets written about massive, multi-billion dollar compute deals for data centers that won’t be built before, at this rate, 2030.

And I’d argue it’s convenient to blame this solely on power issues, when the reality is clearly based on construction timelines that never made any sense to begin with. If it were just a power issue, more data centers would be near or at the finish line, waiting for power to be turned on. Instead, well-known projects like Stargate Abilene are built at a glacial pace as eager reporters claim that a quarter of the buildings being functional, nearly a year after they were meant to be turned on, is some sort of achievement.

Then there’s the very, very obvious scandal that NVIDIA, the largest company on the stock market, is making hundreds of billions of dollars of revenue on chips that aren’t being installed. It’s fucking strange, and I simply do not understand how it keeps beating and raising expectations every quarter, given that the majority of its customers will likely still be working through their current purchases well into the next decade. Assuming that Vera Rubin actually ships in 2026, it’s reasonable to believe that people will be installing these things well into 2028, if not further, and that’s assuming everything doesn’t collapse by then. Why would you bother? What’s the point, especially if you’re sitting on a pile of Blackwell GPUs? Why are we doing any of this?

Last week also featured a truly bonkers story about Supermicro, a reseller of GPUs used by CoreWeave and Crusoe, where co-founder Wally Liaw and several other co-conspirators were arrested for selling hundreds of millions of dollars of NVIDIA GPUs to China, with the intent to sell billions more. Liaw, one of Supermicro’s co-founders, previously resigned in a 2018 accounting scandal where Supermicro couldn’t file its annual reports, only to be (per Hindenburg Research’s excellent report) rehired in 2021 as a consultant, and restored to the board in 2023, per a filed 8K. Mere days before his arrest, Liaw was parading around NVIDIA’s GTC conference, pouring unnamed liquids into ice luges and standing two people away from NVIDIA CEO Jensen Huang. Liaw was also seen on LinkedIn congratulating the CEO of Lambda on its new CFO appointment, as well as shaking hands (along with Supermicro CEO Charles Liang, who has not been arrested or indicted) with Chase Lochmiller, CEO of Crusoe, the company building OpenAI’s Abilene data center.

Supermicro isn’t named in the indictment, for reasons I imagine are perfectly normal and not related to keeping the AI party going. Nevertheless, Liaw and his co-conspirators are accused of shipping hundreds of millions of dollars’ worth of NVIDIA GPUs to China through a web of counterparties and brokers, with over $510 million of them shipped between April and mid-May 2025. While the indictment isn’t specific as to the breakdown, it confirms that some Blackwell GPUs made it to China, and I’d wager quite a few did.
The mainstream media has already stopped thinking about this story, despite Supermicro being a huge reseller of NVIDIA gear, contributing billions of dollars of revenue, with at least $500 million of that apparently going to China. The fact that Supermicro wasn’t specifically named in the case is enough to erase the entire tale from their minds, along with any wonder about how NVIDIA, and specifically Jensen Huang, didn’t know.

This also isn’t even close to the only time this has happened. Late last year, Bloomberg reported on Singapore-based Megaspeed — a (to quote Bloomberg) “once-obscure spinoff of a Chinese gaming enterprise [that] evolved into the single largest Southeast Asian buyer of NVIDIA chips” — and highlighted odd signs that suggest it might be operating as a front for China. As a neocloud, Megaspeed rents out AI compute capacity like CoreWeave, and while NVIDIA and Megaspeed both deny any of their GPUs are going to China, Megaspeed, to quote Bloomberg, has “something of a Chinese corporate twin.” Bloomberg reported that Megaspeed imported goods “worth more than a thousand times its cash balance in 2023,” with two-thirds of its imports being NVIDIA products. The investigation got weirder when Bloomberg tried to track down specific circuit boards that NVIDIA had told the US government were in specific sites.

Things get weirder throughout the article, with a Chinese company called “Shanghai Shuoyao” having a near-identical website and investor deck to Megaspeed, and several of its “computing clusters under construction” actually being in China. Things get a lot weirder as Bloomberg digs in, including a woman called “Huang” who may or may not be the CEO of both Megaspeed and an associated company called “Shanghai Hexi” (which is also owned by the Yangtze River Delta project), and who was also photographed sitting next to Jensen Huang at an event in Taipei in 2024. While all of this is extremely weird and suspicious, I must be clear that there is no declarative answer as to what’s going on, other than that NVIDIA GPUs are absolutely making it to China, somehow. I also think that it would be really tough for Jensen Huang to not know about it, or for billions of dollars of GPUs to be somewhere without NVIDIA’s knowledge.

Anyway, Supermicro CEO Charles Liang has yet to comment on Wally Liaw or his alleged co-conspirators, other than a statement from the company saying that their acts were “a contravention of the Company’s policies and compliance controls.” Jensen Huang does not appear to have been asked if he knew anything about this — not Megaspeed, not Supermicro, or really any challenging question of any kind for the last few years of his life. Huang did, however, say back in May 2025 that there was “no evidence of any AI chip diversion,” and that the countries in question “monitor themselves very carefully.”

For legal reasons I am going to speak very carefully: I cannot say that Jensen is wrong, or lying, but I think it’s incredible, remarkable even, that he had no idea that any of this was going on. Really? Hundreds of millions, if not billions, of dollars of GPUs are making it to China — as reported by The Information in December 2025 — and Jensen Huang had no idea? I find that highly unlikely, though I obviously can’t say for sure.
In the event that NVIDIA had knowledge — which I am not saying it did, of course — this is a huge scandal that, for the most part, nobody has bothered to keep an eye on outside of a few brave souls at The Information and Bloomberg who give a shit about the truth. Has anybody bothered to ask Jensen about this? People talk to him on camera all the time.

I'll also add that I am shocked that so many people are just shrugging and moving on from Supermicro, which is a major supplier of two of the major neoclouds (Crusoe and CoreWeave) and one of the minors (Lambda, to which it also rents cloud capacity). The idea that a company had no idea that several percentage points of its revenue were flowing directly to China via one of its co-founders is an utter joke. I hope we eventually find out the truth.

Nevertheless, this kind of underhanded bullshit is a sign of desperation on the part of just about everybody involved. So, I want to explain something very clearly for you, because it's important you understand how fucked up shit has become: hyperscalers are forcing everybody in their companies to use AI tools as much as possible, tying compensation and performance reviews to token burn, and actively encouraging non-technical people to vibe-code features that actually reach production.

In practice, this means that everybody is expected to dick around with AI tools all day, with the expectation that you burn massive amounts of tokens and, in the case of designers working in some companies, actively code features without ever knowing a line of code.

How do I know the last part? Because a trusted source told me — and I'll leave it at that.

One might be forgiven for thinking this means that AI has taken a leap in efficacy, but the actual outcomes are a labyrinth of half-functional internal dashboards that measure random user data or convert files, spending hours to save minutes of time at some theoretical point. While non-technical workers aren't necessarily allowed to ship directly to production, their horrifying pseudo-software, coded without any real understanding of anything, is expected to be "fixed" by actual software engineers who are also expected to do their jobs.

These tools also allow near-incompetent Business Idiot software engineers to do far more damage than they might have in the past. LLM use is relatively unrestrained (and actively incentivized) in at least one hyperscaler, with just about anybody allowed to spin up their own OpenClaw "AI agent" (read: a series of LLMs that allegedly can do stuff with your inbox or Slack for no clear benefit, other than their ability to delete all of your emails). In Meta's case, this ended up causing a severe security breach: according to The Information, Meta systems storing large amounts of company and user-related data were accessible to engineers who didn't have permission to see them, and the breach was marked a sec-1 incident, the second highest level of severity on an internal scale that Meta uses to rank security incidents.

The incident follows multiple problems caused at Amazon by its Kiro and Q LLMs. I quote Business Insider's Eugene Kim:

Despite the furious (and exhausting) marketing campaign around "the power of AI code," I believe that these events are just the beginning of the true consequences of AI coding tools: the slow destruction of the tech industry's software stack.
LLMs allow even the most incompetent dullard to do an impression of a software engineer, by which I mean you can tell one "make me software that does this" or "look at this code and fix it" and said LLM will spend the entire time saying "you got this" and "that's a great solution."

The problem is that while LLMs can write "all" code, that doesn't mean the code is good, or that somebody can read the code and understand its intention (as these models do not think), or that having a lot of code is a good thing, either in the present or for the future of any company built on generated code.

LLM-based code is often verbose, and rarely aligns with in-house coding guidelines and standards, guaranteeing that it'll take far longer to chew through, which naturally means that those burdened with reviewing it will either skim-read it or feed it into another LLM to work out what the hell to do. Worse still, LLM use is also entirely directionless. Why is anybody at Meta using an OpenClaw? What is the actual thing that OpenClaw does, other than burn an absolute fuck-ton of tokens?

Think about this very, very simply for a second: you have given every engineer in the company the explicit remit to write all their code using LLMs, and incentivized them to do so by making sure their LLM use is tracked. You have now massively increased both the operating costs of the company (through token burn) and the volume of code being created.

To be explicit, allowing an LLM to write all of your code means that you are no longer developing code, nor are you learning how to develop code, nor are you going to become a better software engineer as a result. This means that, across almost every major tech company, software engineers are being incentivized to stop learning how to write software or solve software architecture issues.

If you are just a person looking at code, you are only as good as the code the model makes, and as Mo Bitar recently discussed, these models are built to galvanize you, glaze you, and tell you that you're remarkable as you barely glance at globs of overwritten code that, even if it functions, eventually grows into a whole built with no intention or purpose other than what the model generated from your prompt.

Things only get worse when you add in the fact that hyperscalers like Meta and Amazon love to lay off thousands of people at a time, which makes it even harder to work out why something was built the way it was built, and harder still when an LLM that lacks any thoughts or intentions built it. Entire chunks of multi-trillion dollar market cap companies are being written with these things, prompted by engineers (and non-engineers!) who may or may not be at the company in a month or a year to explain what prompts they used.

We're already seeing the consequences! Amazon lost hundreds of thousands of orders! Meta had a major security breach! The foundations of these companies are being rotted away through millions of lines of slop-code that, at best, occasionally gets the nod from somebody who has "software engineer" on their resume, and these people keep being fired too, lowering the likelihood that somebody who knows what's going on, or why something was built a certain way, will be able to stop something bad from happening.
Remember: Google, Amazon, Microsoft, and Meta all hold vast troves of personal information, intimate conversations, serious legal documents, financial information, and in some cases even social security numbers, and all four of them, along with a worrying chunk of the tech industry, are actively encouraging their software engineers to stop giving a fuck about software.

Oh, you're so much faster with AI code? What does that actually mean? What have you built? Do you understand how it works? Did you look at the code before it shipped, or did you assume that it was fine because it didn't break?

This is creating a kind of biblical plague within software engineering — an entire tech industry built on reams of unmanageable and unintentional code pushed by executives and managers that don't do any real work. LLMs allow the incompetent to feign competence and the unproductive to produce work-adjacent materials borne of a loathing for labor and craftsmanship, and they lean into the worst habits of the dullards that rule Silicon Valley. All the Valley knows is growth, and "more" is regularly conflated with "valuable."

The New York Times' Kevin Roose — in a shocking attempt at journalism — recently wrote a piece celebrating the competition within Silicon Valley to burn more and more tokens using AI models: Roose explains that both Meta and OpenAI have internal leaderboards that show how many tokens you've used, with one software engineer in Stockholm spending "more than his salary in tokens," though Roose adds that his company pays for them.

Roose describes a truly sick culture, one where OpenAI gives awards to those who spend a lot of money on their tokens, adding that he spoke with several tech workers who were spending thousands of dollars a day on tokens "for what amount to bragging rights." Roose also added one more insane detail: that one person found a loophole in Claude's $20-a-month plan, using a piece of software made by Figma, that allowed them to burn $70,000 in tokens.

Despite all of this burn, Roose struggled to find anybody who was able to explain what they were doing beyond "maintaining large, complex pieces of software using coding agents running in parallel," but managed to actually find one particularly useful bit of information — that all of this might be performative:

I do give Roose one point for wondering if "...any of these tokenmaxxers [were] producing anything good, or whether they [were] merely spinning their wheels churning out useless code in an attempt to look busy." Good job Kevin.

That being said, I find this story horrifying, and veering dangerously close to the behavior of drug addicts and cult followers. Throughout this story in one of the world's largest newspapers, Roose fails to find a single "tokenmaxxer" making something that they can actually describe, which has largely been my experience of evaluating anyone who talks nonstop about the power of "agentic coding."

These people are sick, and are participating in a vile, poisonous culture based on needless expenses and endless consumption.

Companies incentivizing the amount of tokens you burn are actively creating a culture that mistakes excess for productivity, incentivizing destructive tendencies built around constantly having to find stuff to do rather than doing things with intention.

They are guaranteeing that their software will be poorly written and poorly maintained, all in the pursuit of "doing more AI" for no reason other than that everybody else appears to be doing so.
Anybody who actually works knows that the most productive-seeming people are often also the most useless, as they're doing things to seem productive rather than producing anything of note. A great example is a recent Business Insider interview with a person who got laid off from Amazon after learning "AI" and "vibe coding," and how surprised they were that these supposed skills didn't make them safer from layoffs:

To be clear, this person is a victim. They were pressured by Amazon to take up useless skills and build useless things in an expensive and inefficient way, and ended up losing their job despite taking up tools they didn't like under duress.

This person was, at one point, actively part of building an internal Amazon site using AI, and had to "learn to vibe code with a lot of trial and error" and the help of a colleague. Was this a good use of their time? Was this a good use of their colleague's time? No! In fact, across all of these goddamn AI coding hype-beast Twitter accounts and endless proclamations about the incredible power of AI agents, I can find very few accounts of anything happening other than someone saying "yeah I'm more productive I guess."

I am certain that at some point in the near future a major big tech service is going to break in a way that isn't immediately fixable as a result of thousands of people building software with AI coding tools, a problem compounded by the dual brain-drain forces of layoffs and a culture that actively empowers people to look busy rather than actually produce useful things. What else would you expect? You're giving people a number they can increase to seem better at their job. What do you think they're going to do? Try to be efficient? Or use these things as much as humanly possible, even if there really isn't a reason to?

I haven't even gotten to how expensive all of this must be, in part because it's hard to fully comprehend.

But what I do know is that big tech is setting itself up for crisis after crisis, especially when Anthropic and OpenAI stop subsidizing their models to the tune of allowing people to spend $2,500 or more on a $200-a-month subscription.

What happens to the people who are dependent on these models? What happens to the people who forgot how to do their jobs because they decided to let AI write all of their code? Will they even be able to do their jobs anymore?

Large Language Models are creating Silicon Valley Habsburgs — workers intellectually trapped at whatever point they started leaning on models so heavily subsidized that their bosses encouraged them to use them as much as humanly possible. While they might be able to claw their way back into the workforce, a software engineer who has leaned solely on LLMs for more than a few months will have to relearn the basic habits of their job, and will find that their skills were limited to whatever made it into the last training run of the last model they used.

I'm sure there are software engineers using these models ethically, who read all the code, who have complete mastery over it, and who use it as a means of handling very specific units of work. I'm also sure that there are some who are just asking it to do stuff, glancing at the code, and shipping it.
It's impossible to measure how many of each camp there are, but hearing Spotify's CEO say that its top developers are basically not writing code anymore makes me deeply worried, because this shit isn't replacing software engineering at all — it's mindlessly removing friction and putting the burden of "good" or "right" on a user that it's intentionally gassing up. Ultimately, this entire era is a test of a person's ability to understand and appreciate friction.

Friction can be a very good thing. When I don't understand something, I make an effort to do so, and the moment it clicks is magical. In the last three years I've had to teach myself a great deal about finance, accountancy, and the greater technology industry, and there have been so many moments where I've walked away from the page frustrated, stewing in the self-doubt that I'd never understand something.

I also have the luxury of time, and sadly, many software engineers face increasingly-deranged deadlines set by bosses that don't understand a single fucking thing, let alone what LLMs are capable of or what responsible software engineering is. The push from above to use these models because they can "write code faster than a human" is a disastrous conflation of "fast" and "good," all because of flimsy myths peddled by venture capitalists and the media about "LLMs being able to write all code."

Generative code is a digital ecological disaster, one that will take years to repair thanks to company remits to write as much code as possible, as fast as possible.

Every single person responsible must be held accountable, especially for the calamities to come as lazily-managed software companies see the consequences of building their software on sand.

In the end, everything about AI is built on lies.

Hundreds of gigawatts of data centers "in development" equate to 5GW of actual data centers in construction.

Hundreds of billions of dollars of GPU sales are mostly sitting around, waiting for somewhere to go.

Anthropic's constant flow of "annualized" revenues ended up equating to literally $5 billion in revenue across four years, on $25 billion or more in salaries and compute.

Despite all of those data centers supposedly being built, nobody appears to be making a profit on renting out AI compute.

AI's supposed ability to "write all code" really means that every major software company is filling its codebases with slop while massively increasing its operating expenses. Software engineers aren't being replaced — they're being laid off because the software that's meant to replace them is too expensive, while in practice not replacing anybody at all.

Looking even an inch beneath the surface of this industry makes it blatantly obvious that we're witnessing one of the greatest corporate failures in history. The smug, condescending army of AI boosters exists to make you look away from the harsh truth — AI makes very little revenue, lacks tangible productivity benefits, and seems to, at scale, actively harm the productivity and efficacy of the workers being forced to use it.

Every executive forcing their workers to use AI is a ghoul and a dullard, one that doesn't understand what actual work looks like, likely because they're a lazy, self-involved prick.
Every person I talk to at a big tech firm is depressed, nagged endlessly to "get on board with AI," to ship more, to do more, all without any real definition of what "more" means or what it contributes to the greater whole, all while constantly worrying about being laid off thanks to the truly noxious cultures that are growing around these services.

AI is actively poisonous to the future of the tech industry. It's expensive, unproductive, and actively damaging to the learning and efficacy of its users, depriving them of the opportunities to learn and grow, stunting them to the point that they know less and do less because all they do is prompt. Those that celebrate it are ignorant or craven, captured or crooked, or desperate to be the person to herald the next era, even if that era sucks, even if that era is inherently illogical, even if that era is fucking impossible when you think about it for more than two seconds.

And in the end, AI is a test of your introspection. Can you tell when you truly understand something? Can you tell why you believe in something, other than that somebody told you you should, or made you feel bad for believing otherwise? Do you actually want to know stuff, or just have the ability to call up information when necessary? How much joy do you get out of becoming a better person?

If you can't answer that question with certainty, maybe you should just use an LLM, as you don't really give a shit about anything. And in the end, you're exactly the mark built for an AI industry that can't sell itself without spinning lies about what it can (or theoretically could) do.

Only 33% of announced US data centers are actually being built, with the rest in vague levels of "planning." That's about 79.53GW of power, or 61GW of IT load. "Active development" also refers to anything that is (and I quote) "...under development or construction," meaning "we've got the land and we're still working out what to do with it."

This is pretty obvious when you do the maths. 61GW of IT load would be hundreds of thousands of NVIDIA GB200 NVL72 racks — over a trillion dollars of GPUs at $3 million per 72-GPU rack — and based on the fact there were only $178.5 billion in data center debt deals last year, I don't think many of these are actually being built right now. Even if they were, there's not enough power for them to turn on.

NVIDIA claims it will sell $1 trillion of GPUs between 2025 and 2027, and as I calculated previously, it sells about 1.6GW (in IT load terms, as in how much power just the GPUs draw) of GPUs every quarter, which would require at least 1.95GW of power just to run, when you include all the associated gear and the challenges of physically getting power. None of this data talks about data centers actually coming online.
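If you want to sanity-check those numbers yourself, here's a back-of-envelope sketch. The $3 million per rack figure comes from above; the roughly 120kW of IT load per GB200 NVL72 rack is my own assumption, so treat the outputs as ballpark at best.

```python
# Rough math on the "61GW of IT load" claim above.
# Assumption (mine, not from the piece): a GB200 NVL72 rack draws
# roughly 120kW of IT load. The $3M per-rack price is from the piece.

IT_LOAD_GW = 61
ASSUMED_RACK_KW = 120        # hypothetical per-rack IT load
RACK_COST_USD = 3_000_000    # per 72-GPU rack

racks = IT_LOAD_GW * 1_000_000 / ASSUMED_RACK_KW
total_cost = racks * RACK_COST_USD

print(f"{racks:,.0f} racks")                 # ~508,333 racks
print(f"${total_cost / 1e12:.2f} trillion")  # ~$1.53 trillion in GPUs
```

Half a million racks and roughly a trillion and a half dollars of GPUs, against $178.5 billion of debt deals. The gap speaks for itself.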


Premium: The Hater's Guide To Adobe

I hear from a lot of people who are filled with bilious fury about the tech industry, but few companies have pissed off the world more than Adobe. As the foremost monopolist in software, web and graphic design, Adobe has created one of the single most abusive, usurious freakshows in capitalist history, trapping users in endless, punishing subscriptions to software they need that only ever seems to get worse.

In the Department of Justice's recently-settled case against Adobe, it was revealed that early termination fees for its annual subscriptions amounted to 50% of the remaining balance on the customer's subscription, with one unnamed Adobe executive referring to these fees as "a bit like heroin for Adobe," adding that there [was] "...absolutely no way to kill off ETF or talk about it more obviously [without] taking a big business hit."

Let me explain how loathsome Adobe's business model truly is.

The below is a screenshot from Adobe's website from Wednesday, March 18, 2026. One might read this and think "wow, $34.99 a month, what a deal!" and immediately sign up without clicking on "view terms," which reveals that after three months the subscription cost becomes $69.99 a month, and that this "monthly" subscription is a year-long contract.

Adobe deliberately hid (and I'd argue still hides!) its early termination fees behind "inconspicuous hyperlinks and fine print." Want to cancel? Adobe charges you 50% of the remaining balance on your contract (so, in this case, over $300), and it justifies this by saying (and I quote) "...your purchase of a yearly subscription comes with a significant discount. Therefore, a cancellation fee applies if you cancel before the year ends."

The DOJ did a great job in its complaint explaining how much Adobe sucks, just before doing nothing to impede it from continuing to do so: an exhibit from the DOJ's lawsuit shows the MC Escher painting that is canceling an Adobe subscription, and the six different screens it takes to do so. The DOJ also added that Adobe's subscription revenue had nearly doubled between 2019 ($7.71 billion) and 2023 ($14.22 billion), and since then, Adobe's subscription revenue hit $20.5 billion in 2024 and $22.9 billion in 2025.

To be clear, Adobe is utilizing many very, very common tricks that the software industry has used to keep people from quitting, and basically every software service I use makes you jump through three to five different screens (fuck you, Canva!) to cancel. These tricks are commonly referred to as "dark patterns."

Adobe's Early Termination Fees are, however, uniquely awful, both in that they employ the evil sorcery of enterprise software contracts and in that they deploy it against creatives that are, in many cases, barely keeping their heads above water in an era defined by people trying to destroy them.

I will say, however, that I've never seen anyone else bill monthly for an annual contract outside of the grotesque SaaS monstrosities I wrote about last week. These are egregious, deceptive and manipulative techniques that shouldn't be deployed against anyone, let alone creatives and consumers.
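To make the trap concrete, here's a minimal sketch of that early termination fee under my reading of the terms above (three months at $34.99, then $69.99 a month on a 12-month contract, with a fee of 50% of the remaining balance). How Adobe actually prorates this is its own business; this is just the shape of it.

```python
# A rough model of Adobe's "annual, paid monthly" early termination fee,
# per the terms described above. The exact proration is an assumption.

PROMO_RATE, FULL_RATE, TERM_MONTHS = 34.99, 69.99, 12

def termination_fee(months_elapsed: int) -> float:
    """50% of the remaining balance, at the post-promo rate."""
    remaining_months = TERM_MONTHS - months_elapsed
    return 0.5 * remaining_months * FULL_RATE

# Cancel right when the teaser rate ends and the price doubles:
print(f"${termination_fee(3):.2f}")  # about $315: the "over $300" above
```

Notice the incentive: the moment you realize the price has doubled is the exact moment it costs the most to leave relative to what you thought you'd signed up for.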
And because this is the tech industry under a regulatory environment that fails to hold it accountable, the $150 million settlement with the DOJ doesn't appear to have changed a damn thing about how this company does business, other than offering "$75 million worth of services for free to customers that qualify." The judgment does not appear to require any changes to how Adobe does business, and $150 million amounts to roughly 0.345% of the $43.4 billion that Adobe made in 2024 and 2025.

Adobe is a business that runs on rent-seeking, deception, and a monopoly over modern design software mostly built by people who no longer work there, such as John and Thomas Knoll, who won an Oscar in 2019 for scientific and engineering achievements for creating Photoshop alongside Mark Hamburg, who left Adobe that same year.

Adobe does not create things, but extracts from those that do, exhibiting the most egregious and horrifying elements of the Rot Economy's growth-at-all-costs avarice. While you may or may not like Photoshop, or Lightroom, or any other Adobe property, that's mostly irrelevant to the glorified holding corporation that shoves different bits around every few months in the hopes that it can scrape another dollar from its captured audience.

Much of this comes from Adobe's abominable subscription products, most notably (and I'll get into it in more detail after the premium break) its Creative Cloud subscription, a rat king of products like Photoshop and InDesign and services like "Adobe Creative Community" and "generative credits" for AI services that are used to justify constant price increases and confusing product suite tweaks, all in the service of revenue growth.

All the while, Adobe's net income has, for the most part, flattened out for the better part of two years at a seasonal range of $1.5 billion to $1.8 billion a quarter, all as the company debases its products, customers and brand in the filth of generative AI features that range from kind of useful to actively harmful to the creative process, and that have generated, at best, a couple hundred million dollars of revenue in the last two years.

I should also be clear that Adobe has an indeterminately-large enterprise division that includes marketing automation software like Marketo, which it acquired in 2018 for $4.75 billion, along with Magento, another acquisition that develops a software platform for running corporate eCommerce pages, all so it can do battle with Salesforce. CNBC's Jim Cramer once called Salesforce and Adobe's competition "one of the great rivalries in tech," and he's correct, in the sense that both companies love to buy other companies to prop up their revenues. Adobe has bought 61 of them since the 90s, but Salesforce has it beat at 75.

They're also both devious, underhanded SaaSholes that make their money through rent-seeking and micro-monopolies. The business known as "Adobe" is a design platform, a photo editor, a PDF creation platform, an eCommerce platform, a marketing automation platform, a content management system, a marketing project management system, an analytics platform, and a content collaboration platform.

You do business with Adobe not because you want to, but because doing business at some point requires you to do so. Use PDFs regularly? You're gonna use Acrobat. Need to edit an image? Photoshop. Run a design studio? You're gonna pay for Creative Suite, and you're gonna get a price increase at some point because you don't really have any other options.
Doing a lot of email marketing campaigns? You're gonna use Marketo, whether you like it or not. Adobe's "Digital Experience" vertical is effectively a holding corporation for Adobe's acquisitions to help boost revenue, an ungainly enterprise limb that grabs companies and puts them in a big bag that says "money me money now" every year or two.

Put another way, one does not do business with Adobe. One has business done to them.

There's also the "publishing and advertising" division that has made somewhere between $146 million and $300 million a year since 2019, most of which comes from abandoned products and, ironically, the product that originally made Adobe famous — PostScript, the language that underpins most of modern printing, whether directly or by inspiring the various alternatives that emerged in the following decades.

Adobe is a company that bathes in the scent of mediocrity, constantly doing an impression of an ever-growing business through a combination of acquisitions and price increases that are only possible amid global regulatory torpor and a market that doesn't know when it's being conned.

It's also emblematic of how the modern software company grows — not through an honest exchange of value built on a bedrock of innovation and customer happiness, but through the eternal death march of enshittification of its products and the monopolization of whatever fields it can barge its way into.

In many ways, Adobe is one of the greater tragedies of the Rot Economy. Beneath the endless layers of subscriptions and weird upsells and horrible Business Idiots lie beloved products like Photoshop, Illustrator and InDesign that are slowly decaying as Adobe scrounges for ways to boost engagement and revenue.

A great example is a story from Digital Camera World from 2025, where writer Adam Juniper talked about features he loved that were disappearing for no reason: Juniper found that Adobe had intentionally moved the speech bubble to an optional "legacy shapes and more" feature, all with the intent of pushing users to pay for (per Juniper) Adobe's add-on Stock subscription. In fact, a simple web search brings up user after user after user after user after user after user after user saying the same thing: that Adobe only ever seems to make its products worse, with the solution often being "find a way to revert to how things were done before the update" or "find another company to work with," except Adobe's scale and market presence make it near-impossible to compete.

Adobe even has the temerity to bug you with ads within its own products, nagging you with annoying pop-ups about new features or attempting to con you into a two-month-long trial of another piece of software using "in-product messaging" that's turned on by default. These are all the actions of a desperate, greedy company run by people that don't give a shit about their customers or the things they sell.

A few weeks ago, CEO Shantanu Narayen said that he was stepping down after 18 years in which he took Adobe, a company that built things people loved, and turned it into a sleazy sales operation built on rent-seeking and other people's innovation.

Those who don't bother to read or know anything about software will tell you that the "threat of AI" or "the SaaSpocalypse" is killing Adobe — a convenient (and incorrect!) way to ignore that Adobe is only able to grow through acquisitions or price hikes.

The sickly irony is that acquisitions have been in Adobe's blood since the very early days of Photoshop.
It just used to be run by people who gave a fuck about whether the software was good and the customers were happy. In fact, I'm going to have a little rant about this.

I'm sick and tired of journalists from reputable outlets talking about "the threat of AI" to software companies without ever explaining what they mean or any of the economic effects involved. Adobe isn't being killed by "AI." We're at the end of the hypergrowth era of software, and the only thing that grows forever is cancer. It also gives executives like Narayen cover for running operations built on deceit, exploitation, extraction and capital deployment. Years of evaluating these companies entirely based on their revenues and imagined things like "the threat of AI," without any connection to actual fucking software, have made the majority of software analysis entirely useless.

Nothing even really has to change about reporting. Just use the product! Use it and tell me how you feel. Talk to some customers. Spend more than 20 minutes on Facebook. Use Photoshop and tell me how many popups you get, or whether it inexplicably slows down or starts eating up RAM. You'll quickly see that we're in a crisis that's less about AI and more about a tech industry powered by mediocre software, one that puts far more effort into making its businesses impossible to avoid than into making its products good. Decades of this pseudo-journalism mean that a great many business reporters are simply unprepared to discuss what's actually happening, evaluating software companies based on 10-Ks and shadows on the wall of a fucking cave.

The tech industry has done a great job of scaring reporters into thinking that having a negative opinion is somehow "not supporting innovation," and I want to be clear that refusing to criticize the tech industry is what's actually stopping innovation. Letting these companies get away with ruining either the products they build or the products they buy is creating a climate in which the most successful companies are the ones that crowd out the competition and raise prices.

Adobe's growth has come from being a fucking asshole. Its decline has come from the limits of buying other companies, claiming their revenues as your own, and constantly increasing the price of your services. If there were a "threat from AI," you'd actually be able to name it and point to it rather than referring to it like the Baba Fucking Yaga.

I'm going to put it very, very bluntly: the last 15 years or so of tech earnings have been earned predominantly by fucking over the customer, either by reducing the value of the product or by increasing its price. The tech and business media's lack of attention to the actual state of technology is partially to blame, because Number Has Always Gone Up, and thus the assumption was that underlying product quality was what raised that number, rather than the screwing-over of customers.

Wake up! Look at every tech product you've used and tell me if it's improved in the last decade! Facebook's worse, email's worse, browsers are either the same or worse, Google Search is worse, Adobe Creative Suite is worse, and iPhones might seem better but the software is bloated with endless options and dropdowns and ads and nags. Pretty much the only thing that's improved is physical hardware, because shipping bullshit, useless hardware is much, much harder.
This total lack of awareness of the actual state of the world is why these companies have gotten away with so much shit over the years, and why so many of you are incapable of actually capturing this moment. You are not actually looking at what's happening, just at what might comfortably fit your analysis of the world.

Vaguely blaming things on "the threat of AI" allows you to continue pretending everything will grow forever, and to rationalize bad behavior by framing every problem through the lens of disruption and innovation. A company that's on the decline "being disrupted by AI" allows you to believe that another company will grow and take its place. Saying that a company is growing revenue "because their AI bets are paying off" allows you to ignore price increases and deteriorating software, and to think the world is a better place, even if you can only do so by living in a fantasy.

Gun to your head, what is the threat to software from AI? How is it manifesting, and who is the threat? Is it OpenAI? Anthropic? Are their products actually replacing anything? Can you prove that, or is this just something you heard enough people say that you're now comfortable believing it?

The actual threat to software companies is their hatred of innovation and their customers, and what's happening to Adobe will eventually happen to them all.

Products that provide value are enshittified, and the products these companies acquire have been (or came pre-) enshittified. The prices have gone up. The nags to consumers have increased. Revenues have gone up because these companies have been allowed to buy effectively anyone they want — though Adobe was, thankfully, stopped from acquiring Figma — and to increase prices whenever they want, and when it's come time to evaluate the health or strength or actual value of these companies, all that anybody ever looks at is revenues.

Perhaps your argument might be that the markets don't care about how good something is, except the markets are influenced by journalism and financial analysts. The markets celebrate dogshit companies like Meta that make broken, harmful products because their disgusting monopolies allow them to brutalize businesses and consumers alike.

What we're seeing in the software industry are the limits of how much one can abuse a customer, a business model that SaaS enabled and that both the tech media and analysts celebrated because it worked, in the sense that it worked at making the software companies rich. And because the people at the top have chased out anybody who knows what "good" looks like and empowered vacuous growth-perverts at every level, these companies have no idea what to do to stop the tide from coming in.

Your argument might be that these companies couldn't grow so fast without fucking customers over or making their products worse — and at that point you should ask yourself what you want the world to look like, and how willingly you've participated in making it look the way it does today.

The decline has yet to fully begin, but a CEO doesn't suddenly decide to quit their company after 18 years, during record results, because the future looks bright.

The real SaaSpocalypse is the comeuppance for decades of focusing businesses on growth by any means possible, and the hysterical non-analysis of blaming it on AI is a sign that those responsible can't be bothered to live in anything other than the dreamworld of venture capital and Ivy League business schools.
Adobe’s story is a tragedy — the tale of the great things that can be done with software for the betterment of humanity, and how usurious Business Idiots can hijack it as a means of expressing eternal growth to the markets. This is The Hater’s Guide To Adobe, or The Adobe Enshittification Suite.


Why Are We Still Doing This?

Hi! If you like this piece and want to support my work, please subscribe to my premium newsletter. It's $70 a year, or $7 a month, and in return you get a weekly newsletter that's usually anywhere from 5,000 to 18,000 words, including vast, extremely detailed analyses of NVIDIA, Anthropic and OpenAI's finances, and the AI bubble writ large.

I just put out a massive Hater's Guide To The SaaSpocalypse — an urgent and in-depth analysis of the end of the hypergrowth era of software — and my Hater's Guides To Private Equity, Anthropic, Oracle and Microsoft are huge (12k+ word) research projects priced lower than the cost of a cup of coffee, which is partly an inflation issue on the part of the coffee shop, but what I'm getting at is this is a ton of value. Where's Your Ed At Premium is incredibly useful, read by hedge funds, private equity firms, Fortune 500 CEOs, a large chunk of the business and tech media, and quite a few CEOs of major tech firms. I am regularly several steps ahead in my coverage, and you get an absolute ton of value, several books' worth of content a year. Subscribe today and support my work, I deeply appreciate it.

Hey everyone! I know everybody is super excited about the supposed power of AI, but I think it's time we set some fair ground rules going forward so we stop acting so crazy.

Let's start with a simple one: AI boosters are no longer allowed to explain what's good about AI using the future tense. You can no longer say "it will," "could," "might," "likely," "possible," "estimated," "promise," or any other term that describes today's capabilities in the language of the future.

I am constantly asked to explain my opinions (not that anybody who disagrees with me actually reads them) in the terms of the present, I am constantly harangued for proof of what I believe, and every time I hand it over there's some sort of ham-fisted response of "it's getting better" and "it will get even more better from here!"

That's no longer permissible! I am no longer accepting any arguments that tell me something will happen, or that "things are trending" in a certain way. For an industry so thoroughly steeped in cold, hard rationality, AI boosters are so quick to jump to flights of fancy — to speak of the mythical "AGI" and the supposed moment when everything gets cheaper and also powerful enough to be reliable or effective.

I hear all this crap about AI changing everything, but where's the proof?

Wow. Anthropic managed to turn $30 billion into $5 billion and start one of the single most annoying debates in internet history. No, really: its CFO Krishna Rao stated in a legal filing on March 9, 2026 that it had made revenue "exceeding" $5 billion and spent "over" $10 billion on inference and training. None of these numbers line up with previous statements about annualized revenue, by the way — I went into this last week — and no amount of contorting around the meaning of "exceeding" takes away from the fact that adding up all the annualized revenues gets you over $6 billion, which I believe means that Anthropic defines "annualized" in a new and innovative way.

In any case, Anthropic turned $30 billion into $5 billion. That's… bad. That's just bad business. And I hear no compelling argument as to how this might improve, other than "these companies need more compute, and then something will happen." In fact, let's talk about that for a second.
At the end of January, OpenAI CFO Sarah Friar said that "our ability to serve customers—as measured by revenue—directly tracks available compute," messily suggesting that the more compute you have, the more revenue you make.

This is, of course, a big bucket of bollocks. Did OpenAI scale its compute dramatically between hitting $20 billion in annualized revenue (to be clear, I have deep suspicions about these numbers and how OpenAI measures "annualized" revenue) in January 2026 and $25 billion in March 2026? I think that's highly unlikely.

I also have to ask — where are the compute-limited parties, exactly? If revenue scales with compute, wouldn't that mean that each increase in compute availability would allow somebody to pay OpenAI or Anthropic that couldn't do so before? I don't see any reports of customers who can't pay either company due to a lack of available compute. Are there training runs that can't be done right now? That doesn't really make sense either, because training doesn't automatically lead to more revenue, other than in releasing a new model, I guess?

It's almost as if every talking point in the generative AI industry is the executives in question saying stuff in the hopes that people will just blindly repeat it! But really folks, we've gotta start asking: where's the money?

Anthropic made $5 billion in revenue across its entire existence and spent $10 billion just on compute. OpenAI claims it made $13.1 billion in revenue in 2025 and "only" lost $8 billion — but those numbers seem unlikely considering my report from November of last year that had OpenAI at $4.3 billion in revenue on $8.67 billion of inference costs through September 2025, and this is accrual accounting, which means these are from the quarters in question. How likely do you think it is that OpenAI booked $8.8 billion in a quarter (Q4 CY2025) and only lost $8 billion in the year, after it lost $12 billion (per the Wall Street Journal) in the previous quarter?

Look, I get it! This isn't a situation where thinking critically is rewarded. Even articles explicitly criticizing the economics of these companies are still filled with weasel wording about "expects to grow" and "anticipates hitting," or the dreaded phrase "if their bet pays off." Saying obvious stuff like "every AI company is unprofitable" or "there is no path to profitability" or "nobody is talking about AI revenues" is considered unfair or cynical or contrarian, even though these are very reasonable and logical statements grounded in reality.

"But Ed! What about Uber!"

What about Uber? Uber is a completely different business to Anthropic and OpenAI or any other AI company. It lost about $30 billion in the last decade or so, and turned a weird kind of profitable through a combination of cutting multiple markets and business lines (e.g., autonomous cars), all while gouging customers and paying drivers less.

The economics are also completely different. Uber does not pay for its drivers' gas, nor their cars, nor does it own any vehicles. Its PP&E has been between $1.5 billion and $2.1 billion since it was founded. Uber's revenue does not increase with acquisitions of PP&E, nor does its business become significantly more expensive based on how far a driver drives, how many passengers they might have in a day, or how many meals they might deliver.
Uber is, effectively, a digital marketplace for getting stuff or people moved from one place to another, and its losses are attributed to the constant need to market itself to customers for fear that other rideshare (Lyft) or delivery companies (DoorDash, Seamless) might take its cash. Also: Uber's primary business model was on a ride-by-ride basis, not a monthly subscription. Users may have been paying less, but they were still thinking about each transaction with Uber in terms that made sense when prices were raised (though it briefly tried an unlimited ride pass option in 2016).

Charging on a ride-by-ride basis was the smartest move that Uber made, as it meant that when prices went up, users didn't have to change their habits.

AI companies make money either through selling subscriptions (or some sort of token-based access to a model) or by renting their models out via their APIs. One of their biggest mistakes was offering any kind of monthly subscription to their services, because the compute cost of a user is almost impossible to reconcile with any amount they'd pay a month, as the complexity of a task is impossible to predict, both because of user habits and because of the unreliability of an AI model in how it might try to produce an output.

Let's give an example. Somebody spending $20 a month on a Claude subscription can spend as much as $163 in compute.

There are two reasons this might be happening: In both cases, Anthropic (and OpenAI, for that matter) is screwed. If we assume Anthropic's gross margin is 38% (per The Information, though to be clear, I no longer trust any leak from Anthropic; and no, Dario did not say Anthropic had 50% gross margins, that was a hypothetical), that would mean that $163 of compute costs it $101 (I'll sketch this math in code below). Now, not every user is spending that much, but the aggressive (and deceptive) media campaign around Claude Code means that a great many are, at the very least, testing the limits of the product. Those on the Max $100 and $200-a-month plans are specifically paying for fewer rate limits, meaning that they are explicitly paying to burn more tokens.

The obvious argument you could make is that Anthropic could simply increase the price of the subscription product, but I need to be clear that for any of this to make sense, it would have to do so by at least 300%, and even then that might not do the job. This would immediately price out most consumers — an $80-a-month subscription turns this from a "kind of like the cost of Netflix" purchase into something that has to have obvious, defined results. A $400-a-month or $800-a-month subscription would make a Claude or ChatGPT Pro subscription the size of a car payment. At those increased prices, a company with 100 engineers on Claude Max 5x would be paying around $480,000 a year. And this is assuming that rate limits stay the same, which I doubt they would.

In any case, there is no future for any AI company that uses a subscription-based approach, at least not one where they don't directly pass on the cost of compute. This is a huge problem for both Anthropic and OpenAI, as their scurrilous growth-lust means that they've done everything they can to get customers used to paying a single monthly cost that directly obfuscates the cost of doing business.
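Here's that margin math as a minimal sketch; the $163 of compute and the 38% gross margin are the figures cited above, and the rest is arithmetic:

```python
# What a heavy $20/month subscriber costs Anthropic, per the figures above.
# If $163 of usage is priced at API rates, and the gross margin on that
# compute is 38% (The Information's reported figure), the underlying
# cost to serve that one subscriber is:

api_priced_usage = 163.00   # compute consumed, valued at API prices
gross_margin = 0.38         # reported gross margin on compute

cost_to_serve = api_priced_usage * (1 - gross_margin)
print(f"${cost_to_serve:.2f}")  # ~$101 of cost against $20 of revenue
```

In other words, a single enthusiastic $20 subscriber can erase the revenue of roughly four average ones.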
I need to be very direct about what this means, because it's very important and rarely, if ever, discussed.

A user of ChatGPT or Claude Code is only thinking of "tokens" or "compute" in the most indirect sense — a vague awareness of the model using something to do something else, totally unmoored from the customer's use of the product. All they see is the monthly subscription cost ($20, $100, or $200 a month) and rate limits that vaguely say you have X% of your five-hour allowance left. Users are not educated in (nor are they thinking about) their "token burn" or burden on the company, because software has basically never made them do so in the past.

This means it will be very, very difficult to increase subscription costs on users, and near-impossible to convince them to pay the cost of the API. It's as if Uber, having charged $20 a month for unlimited rides, suddenly started charging users their drivers' gas costs, and gas was at around $250 a gallon.

That might not even do the price disparity justice. The theoretical example still involves users being in the back of a car, being driven a distance, and understanding that said driving costs gas. Token burn is an obtuse, irregular process involving per-million prices for input and output tokens, with the latter ballooning when you use reasoning models, which burn output tokens to break down how they might handle a task (I'll sketch this arithmetic below).

The majority of AI users do not think in these terms, and even technical users that do have likely been using a monthly subscription, which doesn't make them think about the costs. Think about it — you log onto Claude Code every day and do all your work on it, sometimes bumping into rate limits, then coming back five hours (or however long) later and doing the same thing. Perhaps you're thinking that a particular task might burn more tokens, or that you should use a model like Claude Sonnet over Claude Opus so that you don't hit your limits earlier, but in most cases, even if you know the costs of a model, you do not think about them in a way that's useful.

Let's say that Anthropic and OpenAI immediately decide to switch everybody to the API. How would anybody actually budget? Is somebody that pays $200 a month for Claude Max going to be comfortable paying $1,000 or $1,500 or $2,500 a month in costs, while still having no firm understanding of the cost of any particular action?

First, there's no way to anticipate how many tokens a prompt will actually burn, which makes any kind of budgeting a non-starter. It's like going to the supermarket and committing to buy a gallon of milk, not knowing if it'll cost you $5 or $50.

But also, suppose a prompt doesn't quite return the result you need, and thus you're forced to run it again — perhaps with slightly altered phrasing, or with more exposition to ensure the model has every detail it needs. And again, you have no idea how many tokens the model will burn. How does a person budget for that kind of thing?

This is a problem of both user habits and the unreliability of Large Language Models — which can spend several minutes "thinking" when they get stuck in loops trying to evaluate code or come up with a way to execute a task. User habits are also antithetical to switching from a paid subscription to metered access to models. A user might forgive Claude for chasing its own tail for several minutes when not burdened by the cost of it doing so, but if that act cost $2 or $3 or $10, they may hesitate to use the model at all.
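To make the per-million-token arithmetic concrete, here's a minimal sketch. The prices are placeholder assumptions I've made up for illustration, not any vendor's actual rates; the shape of the calculation is the point.

```python
# How metered LLM billing works, roughly: you pay per million tokens,
# with output tokens costing several times more than input tokens.
# These prices are illustrative assumptions, not any vendor's real rates.

INPUT_PER_MILLION = 3.00    # assumed $ per 1M input tokens
OUTPUT_PER_MILLION = 15.00  # assumed $ per 1M output tokens

def session_cost(input_tokens: int, output_tokens: int) -> float:
    return ((input_tokens / 1e6) * INPUT_PER_MILLION
            + (output_tokens / 1e6) * OUTPUT_PER_MILLION)

# A long coding session that re-reads the project into context on every
# turn might burn 2M input tokens and 400k output tokens:
print(f"${session_cost(2_000_000, 400_000):.2f}")    # $12.00

# A reasoning model "thinking" its way through the same task might
# triple the output tokens without you asking it to:
print(f"${session_cost(2_000_000, 1_200_000):.2f}")  # $24.00
```

The point isn't the totals; it's that the same prompt can cost double depending on what the model decides to do with it.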
I'll give you another example. You, a relative novice, decide to use Claude Code to build a dinky little personal website. During the process, Claude Code gets lost and messes up a few little things, taking a few minutes in aggregate, and you calmly tell it to fix them and do what you'd like, and after a little back-and-forth you get something you're happy with. As you try to upload it to Amazon Web Services, you get stuck, and spend ten minutes getting it to explain how to get the website online.

At $20 a month, you might find this process delightful, empowering even. You just coded a website (even if it was a clone of one of thousands of different online templates), and you did so using natural language. Wow! What a magical world we live in.

You realize as you look at the website that you forgot to add a section. Adding it takes another half an hour. You bump into your rate limits, take a break for five hours, then come back and finish it at the end of the day. The model has told you the entire time that you're a genius for making this, that the website rocks, and that you built it, even though you didn't.

If you were paying via the API, this excursion could've cost you anywhere from $5 to $15. Every single little back-and-forth begins to add up. Every little change. Every little addition. Every attempt that Claude makes to fix something but makes it worse. Every "I don't get it" you feed it about AWS.

It's difficult to actually say what it was that made it expensive or not, and doing so adds a level of cognitive burden on top of the constant vigilance you need to make sure the model doesn't do something unproductive. Even explicit, direct and well-manicured prompts can lead these models on expensive little expeditions.

Token burn doesn't neatly map to any other way we pay for things, outside of cloud storage, and even then, there are very few services that rival the chaotic costs of Large Language Models. Even if people can conceptualize that there are inputs and outputs, the latter of which cost more money, mapping a task to a reliable number of tokens is actually pretty difficult.

Even if these companies were profitable on inference (I do not believe they are), they are dramatically, horrendously unprofitable on subscriptions, and there isn't a chance in Hell that the majority of those subscriptions convert into token-based API users. When Uber — a completely different business, to be clear — jacked up prices, it did so gradually, and also didn't ask users to dramatically shift how they think about using the app.

Anthropic and OpenAI have no clean way to jack up prices or cut costs. They can increase subscription fees, but doing so would lead to users paying two to five times what they're paying today, which would undoubtedly lead to massive churn.

They could also reduce rate limits with the intention of pushing people toward the API, but as I've discussed, subscription-based customers are neither educated about nor prepared for a confusing, metered service that directly counters habits formed in an era of abundant token burn. Users are not taught to be considerate of their burn or mindful of their costs when using a subscription-based LLM.

The other problem is that these companies don't really appear to have a way to cut costs, because inference remains very expensive and training costs are never going away.

I hear a lot of wank about "ASICs" and "TPUs" that will magically bring down costs. When? How? Oh, NVIDIA's latest chip is 10x more efficient or some bullshit? Show me the fucking evidence! Because every time the revenues and costs get reported, the revenues seem lower and the costs seem higher.
And it's completely fucking insane that we don't have an answer beyond "things will get cheaper" or "prices will go up." Despite everybody talking about it endlessly for three goddamn years, LLMs lack the kind of obvious, replicable, industrially-necessary outcomes that make a 3x, 4x or 10x price increase tenable.

I also think that Anthropic and OpenAI have deliberately used their subscriptions as a means of conning the media into conceptualizing AI as far more affordable than it actually is. Most users do not have any real idea of how much it costs to use these services, let alone how much it costs to run them. All of that glowing, effusive press around Claude Code was based on outcomes that were both subsidized and obfuscated by Anthropic. I think that these articles would've been much less positive if the reporters were even aware of the actual costs.

So, let's do some maths, shall we? Assume a business has 100 engineers, and currently pays $200 a month for each engineer to use Claude Max, at a cost of $20,000 a month, or $240,000 a year. Let's assume on average you pay your engineers $125,000, meaning that your salaries are $12.5 million a year, not considering other costs (this is a toy example; I'll spell it out in code below). Now imagine that Claude switches to a metered billing system.

Let's assume that, in actuality, these engineers are burning a mere $10 a day in tokens, which brings costs to $365,000 a year, or an increase of $125,000… and remember, this is a team of engineers that was previously used to a subscription that allowed them to spend upwards of $2,700 a month in tokens, or nearly 10 times the $300 a month they're now spending.

Let's be a little more realistic, and bump that number up to $25. Now you're spending $912,500 a year in tokens. $30 a day puts you over a million bucks. Oops, busy month, you're now spending $40 a day. Now you're spending more than 10% of your salaries on compute costs.

Anthropic's own Claude Code documentation says that the average cost is $6 per developer per day, with "daily costs remaining below $12 for 90% of users." Good news! If you, as an engineer, can limit your usage to $6 a day, you're actually saving the company money!

But you're not spending $6 a day. That's a silly number for anybody coding. One user on Reddit said that they spent $200 to $300 a day on API costs and decided instead to spend $40 to $50 a day on a GPU cluster on Lambda to run the open source model Qwen 3.5 on their code, which still works out to around $14,600 a year. Another user found that their parallel Claude Code sessions using Claude's $200-a-month plan (I assume using multiple accounts) worked out to around $12,000 a month in API costs. Another, who hit their limits on their Max subscription when they "only needed another hour or two to finish a project," found that that hour or two resulted in almost $600 in API costs.
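Here's that toy budget spelled out; every input comes from the hypothetical above (100 engineers, $125,000 average salary, $200-a-month seats), plus Anthropic's own $6-a-day average for comparison:

```python
# The toy budget above: 100 engineers on $200/month Claude Max seats
# versus metered billing at various per-day burn rates.

ENGINEERS = 100
AVG_SALARY = 125_000
SEAT_PER_MONTH = 200

salaries = ENGINEERS * AVG_SALARY        # $12,500,000/year
seats = ENGINEERS * SEAT_PER_MONTH * 12  # $240,000/year

for per_day in (6, 10, 25, 30, 40):
    metered = ENGINEERS * per_day * 365
    print(f"${per_day}/day -> ${metered:>9,}/year, "
          f"{metered / salaries:.1%} of salaries")

# $6/day  ->  $219,000/year, 1.8% of salaries (cheaper than the seats!)
# $10/day ->  $365,000/year, 2.9% of salaries
# $25/day ->  $912,500/year, 7.3% of salaries
# $30/day -> $1,095,000/year, 8.8% of salaries
# $40/day -> $1,460,000/year, 11.7% of salaries
```

Note how narrow the safe zone is: the only burn rate that undercuts the subscription is the $6-a-day average that, per the examples above, nobody doing real work actually hits.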
Even the boosters are beginning to worry. Last week, Chamath Palihapitiya made a shockingly reasonable point: When ROI indeed, Chamath. The fact that one of the most-prominent voices (for better or worse) in the tech industry — somebody directly incentivized to keep the party going — is unable to get a straight answer to "where is the return on investment" should have everybody a little worried.

Really though, where is the ROI? Who is actually getting a profit out of this? NVIDIA? The companies that make RAM? Because it doesn't seem to be the companies who are buying the GPUs. It doesn't seem to be the AI companies.

I don't think it's true, but if you believe that code is truly being automated away — to what end? What are the actual documented economic effects we can point at, and what are the actual meaningful changes to the world? Real data. Something from today, please. You are legally banned from saying the words "soon" or "in the future." No more future-tense. It's not allowed. All of my stuff has to be in the present — so yours should too.

Let's do a quick-fire round: Boosters, I am begging of you — point to one thing TODAY, from TODAY's models, that even remotely justifies burning nearly a trillion dollars and filling our internet full of slop and creating the moral distance from an action that might have blown up a school and empowering the theft of millions of people's work and having to hear every fucking day about Sam Altman and Dario Amodei, two terrifyingly boring and annoying oafs with no culture and no whimsy in their wretched little hearts.

Even if you are impressed by what LLMs can do, remember that what you're impressed by is the result of burning more money than anybody has ever burned on anything, including the Great Financial Crisis' Troubled Asset Relief Program (a little over $400 billion) and the COVID Paycheck Protection Program (somewhere between $800 billion and $900 billion). Anthropic and OpenAI have raised (assuming OpenAI gets all the money) over $200 billion in funding, on top of nearly $700 billion in capex in 2026 alone across Google, Amazon, Meta, and Microsoft, on top of the $800 billion or so they've already spent. I haven't even included the tens of billions spent by CoreWeave, or the $178.5 billion in US-based data center debt deals from 2025, or the hundreds of billions of venture dollars that went to AI companies worldwide.

Yet when you look even an inch below the surface, everything seems kind of shit. Per my Hater's Guide To The SaaSpocalypse:

Every single AI startup without exception does the same thing: turn hundreds of millions of dollars into tens of millions of dollars, or a few billion dollars into a few hundred million dollars. None of them are improving their margins. None of them have a solution.

Every single problem I've discussed above about the costs of running Anthropic or OpenAI applies directly to every AI startup, except they have far less venture capital backing and are subject, as Cursor was back in June 2025, to whatever price increases Anthropic or OpenAI decide, such as adding "priority processing" that's effectively mandatory for consistent access to frontier models.

Absolutely none of these companies have a plan. The only reason anyone is still humouring them is that the media and venture capital continue to promote the idea that — without explaining how — they will magically find a way of becoming margin-positive. When? How? Those are problems for rubes who don't know we're living in the future! Let's hope that venture capital can afford to fund them in perpetuity! They can't, of course, because venture capital has had dogshit returns since 2018, and AI startups do not have much intellectual property, as most of them are just wrappers for frontier AI labs who also don't have any path to profitability.

As I covered last week, the story is similar for public companies. Adobe's "AI-first" revenue ($375 million ARR) works out to about $94 million a quarter at most for a company that makes $6 billion a quarter.
ServiceNow has "$600 million in annual contract value," an extrapolation of a non-specific period's revenue that does not actually mean $600 million for a company that makes over $10 billion a year. Salesforce's Agentforce revenue is $800 million, or roughly $66 million a month for a company that makes over $11 billion a year. Shopify, the company that mandates you prove that AI can't do a job before asking for resources, does not break out AI revenue. Workday, a company that makes about $2.5 billion a quarter in revenue, said it "generated over $100 million in new ACV from emerging AI products, [and that] overall ARR from these solutions was over $400 million." $400 million ARR is $33 million a month.

To be clear, ARR is not a consistent figure, and churn happens all the time, especially for products like LLMs that have questionable outcomes and high prices. Four fucking years of this and we're still talking about this stuff in riddles, mostly because it's a terrible business.

Then there's the infrastructure issue. One of the more recent (and egregious) failures of journalism is the reporting of data center deals. Before we go any further, one very important detail: when you read "active power," that does not mean actual available compute capacity, which is called "IT load." Per my premium data center model from a few months ago, you should take any "active power" figure and divide it by 1.3 to correct for "PUE" — the standard measure of power usage effectiveness, which covers everything that gets the power to the IT gear, and all the infrastructure that's necessary to keep things running, like cooling systems.

Anywho, Bloomberg just reported that Meta had signed a "$27 billion" compute capacity deal with AI compute company Nebius, with "$12 billion of capacity available in 2027." Based on discussions with numerous experts in AI infrastructure, it works out to about $12.5 million per megawatt of compute, meaning that "$12 billion of dedicated capacity" would be around 960MW of IT load. And, of course, Nebius just raised $3.75bn in debt on the back of that compute deal. This is on top of Microsoft's $17.4 billion deal, and, of course, Meta's $3 billion deal from last year.

One little problem: as of its February 12 2026 Letter to Shareholders, Nebius has around 170MW of active power. How the fuck is it going to have that capacity ready, exactly?

For some context, CoreWeave — an AI compute company backed by (and backstopped by) NVIDIA, with an entirely separate company building its capacity (Core Scientific) with backing from Blackstone and seemingly every major financier in the world — managed to go from 420MW of active power (NOT IT LOAD) in Q1 2025 to 850MW of active power in Q4 2025, with much of that already under construction in Q1 2025. Nebius only started building its 300MW of New Jersey-based compute in March 2025, and based on its letter to shareholders, things aren't going very well at all.

Then there's Nscale, a company that raised $2 billion from NVIDIA, Lenovo and a bunch of other investors, and this week signed a "1.35GW deal" with Microsoft to fill a data center full of the latest generation of Vera Rubin GPUs. In September 2025, NVIDIA CEO Jensen Huang said that the UK was going to be an "AI superpower" as he plunged hundreds of millions of dollars into Nscale as part of an "historic commitment to the UK AI sector" between NVIDIA, OpenAI, and Microsoft.
When The Guardian visited the supposed site of Nscale's UK-based data center in February 2026 — which is meant to be built by the end of the year — it found "...a depot stacked with pylons and scrap metal under a corrugated roof, while flatbed lorries drove in and out stacked with poles." As part of the investigation, The Guardian found that the supposed billions of dollars in data center commitments made by Nscale and CoreWeave were never checked by the government, and that no mechanism existed to audit them. The response from both CoreWeave and Nscale was that these billions of dollars of investments would mostly be in NVIDIA GPUs, which is where we get to the "why" of these massive compute contracts.

You see, when Nebius, or Nscale, or CoreWeave signs a giant deal that it doesn't have the capacity to provide, it does so specifically to raise debt on the contract to buy NVIDIA GPUs. See the below diagram from CoreWeave's Q1 2025 earnings presentation:

If people were actually paying attention, they'd see the immediate problem: a data center takes an incredible amount of time to build, and takes longer depending on the amount of capacity necessary.

It's a deeply cynical con. Hyperscalers like Microsoft and Meta are paying for these contracts because they don't reflect as assets on the balance sheet, all while moving the risk onto the AI compute company — and if the AI company misses a deadline, the hyperscaler can walk away. For example, Nebius' deal with Microsoft from last year has a clause saying that if Nebius "...fails to meet agreed delivery dates for a GPU Service and the Company cannot provide alternative capacity, Microsoft has the right to terminate that GPU Service."

Based on discussions with people with direct knowledge of its infrastructure, Microsoft has already set up Nebius to fail, with the expectation that it would have over 50MW of IT load specifically made up of NVIDIA's GB200 and GB300 GPUs available by the end of April, with at least another 150MW of IT load (or more) by the end of the year, for a company that only has about 130MW of IT load in its entire global infrastructure, most of which isn't in Vineland, New Jersey.

Hyperscalers are helping no-name companies with little or no history or experience in building data centers borrow billions of dollars in debt that is increasingly funded by people's retirements and insurance funds, lured in by the idea of "consistent yields" from companies that cannot afford to do business without convincing everybody to believe the illogical.

Data centers take forever to build. The "1.2GW" (so 880MW of IT load) Stargate Abilene's first two buildings were meant to be fully energized by the middle of 2025. Only the first two buildings' worth of 96,000 GPUs were "delivered" by the middle of December 2025, and while the entire project was meant to be energized by mid-2026, it appears that only two buildings are actually ready to go. Every report on these deals should include a timeline. In the end, I bet Stargate Abilene never gets fully built, but if it does, I'd be shocked if it's done before the middle of 2027, which would mean it takes about 3 years per gigawatt of power, or about a year per 293MW of IT load. I have read absolutely zero fucking stories about data center development that take this into account.
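The two bits of arithmetic above — the PUE haircut and the build rate — are worth keeping on hand whenever one of these deals gets announced. A quick sketch, using only the figures cited in this piece (the 1.3 divisor, the $12.5 million per megawatt estimate, and the Abilene timeline are the piece's own numbers, not industry constants):

```python
# "Active power" vs "IT load", plus the Abilene build-rate arithmetic.
# The 1.3 PUE divisor, $12.5M/MW estimate, and dates all come from this piece.
PUE = 1.3
COST_PER_MW = 12_500_000  # estimated dollars per megawatt of compute

def it_load_mw(active_power_mw: float) -> float:
    """Strip out cooling and power-delivery overhead to get usable compute."""
    return active_power_mw / PUE

# Meta's "$12 billion of capacity" at $12.5M per megawatt:
print(12_000_000_000 / COST_PER_MW)   # 960.0 MW of IT load
# Nebius's ~170MW of active power, expressed as IT load:
print(round(it_load_mw(170), 1))      # 130.8 MW

# Stargate Abilene: 880MW of IT load over a guessed three-year build.
print(round(880 / 3))                 # ~293 MW of IT load per year
```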
The flippancy with which the media reports on these data centers — both in the structure of the deals and the realities of the construction (I go into detail about this in a premium piece from late last year, but making data centers is hard) — is allowing con artists to get rich and creating the conditions for yet another great financial crisis. Pension funds and state investment boards are reading about these deals, seeing "Microsoft," and assuming that everything will be fine, per my Hater's Guide To Private Equity:

All that the pension fund sees is an article on CNBC or Bloomberg and the name of a company like Microsoft or Meta. In turn, they (or the private credit firm managing their money) buy bonds or fund these debt deals because they see them as stable, straightforward, reliable investment yields, because the media and private credit firms are selling them as such. In reality, data center debt deals are incredibly dangerous, as each one is effectively a bet on both the existence of AI demand (so that the debt can be repaid with revenue) and the existence of the company in question as a going concern. Nscale, Nebius and CoreWeave are only a few years old, and the concept of a 1GW data center is not much older.

During the great financial crisis, massive amounts — billions and billions of dollars' worth — of pension and insurance funds went into Collateralized Debt Obligations (CDOs) that were rated as AAA despite being a rat king of low-grade (and in many cases delinquent) debt. This time around, data center debt deals are often given junk ratings — such as the B+ rating given to one of CoreWeave's 2025 debt deals — which might make you think that investors would naturally steer clear of them, and that there's nothing to worry about. The problem is that the markets have AI psychosis, and thus believe anything to do with data centers is a natural winner.

Blackstone funded part of its $38 billion investment in Oracle's data centers — you know, the ones explicitly built for OpenAI, which cannot afford to pay for the compute — using its insurance funds. Per The Information:

This is the standard line from anybody in finance about data centers, and it's based on little more than wish-casting and fantasy. These are brand new kinds of debt for some of the largest infrastructure projects in history, and as I've discussed repeatedly, outside of hyperscalers moving compute off of their balance sheets, there's only a billion dollars of compute demand. 77% of CoreWeave's 2025 revenue — and keep in mind that CoreWeave is the largest independent AI compute provider — was from Microsoft and NVIDIA, the latter of which plans to spend $26 billion in the next five years on renting back its GPUs… which suggests that little organic demand exists. 2026 or 2027's great financial crisis will replace "homes" with "data centers," and I worry it'll be calamitous for the pensions and insurance funds that have tied their futures to AI.

Even putting aside my own personal feelings about LLMs… I'm just not sure why we're doing this anymore. Okay, okay, I know why we're doing it — the software industry is out of hypergrowth ideas and has been in a years-long decline since 2018, though it briefly had a burst of excitement in 2021 when money was cheap and everybody was insane after the lockdowns ended.
Nevertheless, AI has become one of the largest cons in history, bought and sold based on stuff it can't do (but might do, one day, at a non-specific time), constantly ignoring the blatant swindles and acidic economics that are only made possible with regulators, the media and the markets piloted by people that don't know — or don't want to know — what's actually happening. If you are an AI fan, I need to genuinely ask you to consider whether what you're impressed by is what the LLMs can do today rather than what they might be able to do tomorrow. If you're excited based on the potential, you're not excited about technology, you're excited about marketing.

And I get it. The tech industry hasn't had anything really exciting in a while. It's easy to get swept away by hype, especially when everybody is being swept away in exactly the same way. It's hard to push back when Microsoft, Google, Meta and Amazon are all participating in a financial death cult, and their revenues keep growing — having to understand anything more than the headlines is tough and you've got all this shit to do and it's so much easier to just nod and agree with everybody else.

But know that this is an industry that sells itself on fear and lies. Know that LLMs cannot do many of the things that people talk about — they do not blackmail people, and no, GPT-4 did not trick a TaskRabbit worker — and every single time an AI CEO says AI "will" do something, you should spit in their fucking face for making shit up, not print it without a second's thought.

It's time to get specific. What will AI do, and when will it do it? What will the actual software be? How will it work? How much will it cost? How will it make money? How will it become profitable? Because right now we're being sold a lie and I'm sick of it, almost as sick as I am of seeing critics framed as outlier factions spreading conspiracy theories. I've proven my point again and again and again. Where is the same effort from the AI boosters? All I see is the occasional desperate attempt to claim that LLMs doing what they've always done is somehow remarkable.

Oh wow, so you can code a clone of an open source software project, all set up with an LLM that may or may not get the code right. Oh, someone was able to vibe code something that may or may not work and looks exactly the same as every other vibe code project. Congratulations on making a website that's purple for some reason — you're puking out a facsimile of an era of websites defined by the colour scheme chosen by Tailwind CSS.

I also want to be clear that I am extremely nervous about how many people appear to be fine with not reading code. I am currently (very slowly) learning Python, and every new thing I learn reinforces my overwhelming anxiety that there is a lot of software being written today by people who don't read the output from LLMs and, in some cases, may not have understood it if they did. While I'm not saying that all or even many software engineers do this, I am alarmed by the idea that it's becoming more commonplace — and even more alarmed that the reaction appears to be "ah it's fine, who gives a shit, it works."

Guess what! It doesn't always work. Amazon Web Services had multiple recent outages caused by use of its Kiro AI coding tool, and while it insists that AI isn't to blame, it also convened an internal meeting to discuss this specific issue, and The Financial Times reported that Amazon now requires junior and mid-level engineers to get sign-off on AI-assisted changes to code.
However you may feel about Amazon as a service, its engineers are likely indicative of corporate engineering on some level, which makes me wonder if we're going to have some real problems in software development in the next few years as a result. What does the software industry look like if nobody is actually reading their code? How many software engineers are comfortable doing this? I'm sure somebody will read this and get terribly offended, but to be clear, I'm not accusing you of copy-pasting code you can't understand and being happy if it works — unless that's exactly what you're doing.

To be explicit, allowing an LLM to write all of your code means that you are no longer developing code, nor are you learning how to develop code, nor are you going to become a better software engineer as a result. This isn't even an insult or hyperbole. If you are just a person looking at code, you are only as good as the code the model makes, and as Mo Bitar recently discussed, these models are built to galvanize you, glaze you, and tell you that you're remarkable as you barely glance at globs of overwritten code that, even if it functions, eventually grows into a whole with no intention or purpose other than what the model generated from your prompt.

I'm sure there are software engineers using these models ethically, who read all the code, who have complete command over it and use it like a glorified autocomplete. I'm also sure that there are some that are just asking it to do stuff, glancing at the code and shipping it. It's impossible to measure how many of each camp there are, but hearing Spotify's CEO say that its top developers are basically not writing code anymore makes me deeply worried, because this shit isn't replacing software engineering at all — it's mindlessly removing friction and putting the burden of "good" or "right" on a user that it's intentionally gassing up.

Ultimately, this entire era is a test of a person's ability to understand and appreciate friction. Friction can be a very good thing. When I don't understand something, I make an effort to do so, and the moment it clicks is magical. In the last three years I've had to teach myself a great deal about finance, accountancy, and the greater technology industry, and there have been so many moments where I've walked away from the page frustrated, stewed in self-doubt that I'd never understand something.

I also have the luxury of time, and sadly, many software engineers face increasingly-deranged deadlines set by bosses that don't understand a single fucking thing, let alone what LLMs are capable of or what responsible software engineering is. The push from above to use these models because they can "write code faster than a human" is a disastrous conflation of "fast" and "good," all because of flimsy myths peddled by venture capitalists and the media about "LLMs being able to write all code." The problem is that LLMs can write all code, but that doesn't mean the code is good, or that somebody can read the code and understand its intention, or that having a lot of code is a good thing, both in the present and in the future of any company built on generated code.

And in the end, where are the signs that this is working? Where are the vibe-coded software products destabilizing incumbents? Where are the actual software engineers being replaced — not that I want this to happen, to be clear — by LLMs, outside of AI-washing stories that have got so egregious that even Sam Altman called it out? Where is the revenue?
Where are the returns? Where are the outcomes? Why are we still doing this?

Anthropic is intentionally subsidizing its subscribers' compute in an attempt to gain market share. Anthropic is incapable of creating stable limitations on its models' compute costs, as Large Language Models cannot be "limited" in a linear sense to "only spend" a certain amount of tokens, as it's impossible to guarantee how many tokens a task might take. While I must be clear that Anthropic can limit Claude subscriptions, as can OpenAI limit ChatGPT, I doubt either can do so with precision.

Hyperscalers are seeing incredible revenue growth, which is coming from AI! - Why aren't they telling us their AI revenues, then? Also, every single hyperscaler has hiked prices in the last few years, with Microsoft's latest increases including a 33% increase on cheap subscriptions for front-line workers. Fun fact! Microsoft was the only hyperscaler to ever talk about actual AI revenues, and last did so in January 2025 when it said it had reached a "$13 billion run rate" (so $1.08 billion a month). It has never done so again.

We're in the early day- Shut up. Stop it. We're nearly four years in. What're you talking about?

The exponential growth in capabilities of AI models- I am calling Jigsaw from "Saw" if you cannot express to me in clear, certain and direct terms what it is that's actually changed. No benchmarks, either! They had to stop using SWE-Bench because models were trained specifically to solve it. Show me something that an LLM created, all on its own, and it better be fucking great, and fast too. Oh it "sped up coders"? How? To what end? Is the code better? Did they lay people off?

Block laid off 4000 people because of AI- Yes hello, Mr. Jigsaw? Yeah it's Ed, you had me chained against a radiator the other week. No, I'm doing a lot better, I'm glad we talked things out. Anyway, I need your help with something. Everybody is saying that Block laid off 4000 people because of AI, and that proves something! All Jack Dorsey said was that "[Block is] already seeing that the intelligence tools [it's] creating and using…are enabling a new way of working which fundamentally changes what it means to build and run a company." I know, that doesn't mean anything, and all Block is doing is AI-washing, which is when a company uses AI as a scapegoat to justify laying people off. No, no, don't handcuff anyone to a radiator, I just needed somebody to talk to. Maybe later, okay?

Jokes aside, Block — like many other companies — aggressively recruited during the pandemic, with headcount growing by 2.5x between 2019 and 2025. And now, as market conditions are looking choppier, it seems like it's trying to Ozempic away some of its corporate "bloat." Saying you're firing people because of AI is a bit less embarrassing than saying "we fucked up."

[Software company] is still growing, so AI must be helping?- Is that actually true? Have you looked? Because if you haven't looked, I wrote about this in the Hater's Guide To The SaaSpocalypse. AI is not actually driving much revenue at all!


Premium: The Hater's Guide To The SaaSpocalypse

Soundtrack: The Dillinger Escape Plan — Black Bubblegum

To understand the AI bubble, you need to understand the context in which it sits, and that larger context is the end of the hyper-growth era in software that I call the Rot-Com Bubble. Generative AI, at first, appeared to be the panacea — a way to create new products for software companies to sell (by connecting their software to model APIs), a way to sell the infrastructure to run it, and a way to create a new crop of startups that could be bought or sold or taken public.

Venture capital hit a wall in 2018 — vintages after that year are, for the most part, stuck at a TVPI (total value to paid-in capital, basically the money you make for each dollar you invested) of 0.8x to 1.2x, meaning that you're making somewhere between 80 cents and $1.20 for every dollar.
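For anyone unfamiliar with the metric, TVPI is simple arithmetic: everything a fund has returned plus everything it still holds, divided by what investors paid in. A minimal sketch, with invented numbers:

```python
# TVPI (total value to paid-in capital) in one function. The example fund
# below is made up to show what a 0.8x post-2018 vintage looks like.
def tvpi(distributions: float, residual_value: float, paid_in: float) -> float:
    """Money returned so far, plus unrealized holdings, per dollar paid in."""
    return (distributions + residual_value) / paid_in

# $100M paid in, $30M returned to investors, $50M of holdings still on the books:
print(tvpi(distributions=30, residual_value=50, paid_in=100))  # 0.8
```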
Before 2018, Software As A Service (SaaS) companies had had an incredible run of growth, and it appeared basically any industry could have a massive hypergrowth SaaS company, at least in theory. As a result, venture capital and private equity have spent years piling into SaaS companies, because they all had very straightforward growth stories and replicable, reliable, and recurring revenue streams.

Between 2018 and 2022, 30% to 40% of private equity deals (as I'll talk about later) were in software companies, with firms taking on debt to buy them and then lending them money in the hopes that they'd all become the next Salesforce, even though none of them will. Even VC remains SaaS-obsessed — for example, about 33% of venture funding went into SaaS in Q3 2025, per Carta.

The Zero Interest Rate Policy (ZIRP) era drove private equity into fits of SaaS madness, with SaaS PE acquisitions hitting $250bn in 2021. Too much easy access to debt and too many Business Idiots believing that every single software company would grow in perpetuity led to the accumulation of some of the most-overvalued software companies in history. As the years went by, things slowed down, and now private equity is stuck with tens of billions of dollars of zombie SaaS companies that it can't take public or sell to anybody else, their values decaying far below what it paid, which is a very big problem when most of these deals were paid for in debt.

To make matters worse, 9fin estimates that IT and communications sector companies (mostly software) accounted for 20% to 25% of private credit deals tracked, with 20% of loans issued by public BDCs (like Blue Owl) going to software firms. Things look grim. Per Bain, the software industry's growth has been declining for years, as has Net Revenue Retention — how much you're making from existing customers expanding their spend, minus what you're losing from customers leaving (or cutting spend):

It's easy to try and blame any of this on AI, because doing so is a far more comfortable story. If you can say "AI is causing the SaaSpocalypse," you can keep pretending that the software industry's growth isn't slowing. That isn't what's happening. No, AI is not replacing all software. That is not what is happening. Anybody telling you this is either ignorant or actively incentivized to lie to you.

The lie starts simple: that the barrier to developing software is "lower" now, either "because anybody can write code" or "anybody can write code faster." As I covered a few weeks ago… From what I can gather, the other idea is that AI can "simply automate" the functions of a traditional software company, and "agents" can replace the entire user experience, with users simply saying "go and do this" and something happening.

Neither of these things is true, of course — nobody bothers to check, and nobody writing about this stuff gives enough of a fuck to talk to anybody other than venture capitalists or CEOs of software companies that are desperate to appeal to investors. To be more specific, the CEOs that you hear desperately saying that they're "modernizing their software stack for AI" are doing so because investors, who also do not know what they are talking about, are freaking out that they'll get "left behind" because, as I've discussed many times, we're ruled by Business Idiots that don't use software or do any real work.

There are also no real signs that this is actually happening. While I'll get to the decline of the SaaS industry's growth cycle, if AI were actually replacing software we'd see direct proof — massive contracts being canceled, giant declines in revenue, and in the case of any public SaaS company, 8-K filings saying that major customers had shifted business away from traditional software. Midwits with rebar chunks in their gray matter might say that "it's too early to tell and that the contract cycle has yet to shift," but, again, we'd already have signs, and you'd know this if you knew anything about software. Go back to drinking Sherwin-Williams and leave the analysis to the people who actually know stuff!

We do have one sign though: nobody appears to be able to make much money selling AI, other than Anthropic (which made $5 billion in its entire existence through March 2026 on $60 billion of funding) and OpenAI (which I believe made far less than $13 billion, based on my own reporting). In fact, it's time to round up the latest and greatest in AI revenues. Hold onto your hats, folks!

Riddle me this, Batman: if AI were so disruptive to all of these software companies, would it not be helping them disrupt themselves? If it were possible to simply magic up your own software replacement with a few prompts to Claude, why aren't we seeing any of these companies do so? In fact, why do none of them seem to be able to do very much with generative AI at all?

The point I'm making is fairly simple: the whole "AI SaaSpocalypse" story is a cover-up for a much, much larger problem. Reporters and investors who do not seem to be able to read or use software are conflating the slowing growth of SaaS companies with the growth of AI tools, when what they're actually seeing is the collapse of the tech industry's favourite business model, one that's become the favourite chew-toy of the venture capital, private equity and private credit industries.

You see, there are tens of thousands of SaaS companies in everything from car washes to vets to law firms to gyms to gardening companies to architectural firms. Per my Hater's Guide To Private Equity: You'd eventually either take that company public or, in reality, sell it to a private equity firm.
Per Jason Lemkin of SaaStr: The problem is that SaaS valuations were always made with the implicit belief that growth was eternal, just like the rest of the Rot Economy, except SaaS, at least for a while, had mechanisms to juice revenues, and easy access to debt. After all, annual recurring revenues are stable and reliable, and these companies were never gonna stop growing, leading to the creation of recurring revenue lending: To be clear, this isn't just for leveraged buyout situations, but I'll get into that later.

The point I'm making is that the setup is simple: You see, nobody wants to talk about the actual SaaSpocalypse — the one that's caused by the misplaced belief that any software company will grow forever. Generative AI isn't destroying SaaS. Hubris is.

Alright, let's do this one more time. SaaS — Software As A Service — is both the driving force and seedy underbelly of the tech industry. It's a business model that sells itself on a seemingly good deal. Instead of paying upfront for an expensive software license and then again when future updates happen, you pay a "low" monthly fee that allows you to get (in theory) the most up-to-date (in theory) and well-maintained (in theory) version of whatever it is you're using. It also (in theory) means that companies need to stay competitive to keep your business, because you're committing a much smaller amount of money than a company might make from a single license. Over here in the real world, we know the opposite is true. Per The Other Bubble, a piece I wrote in September 2024:

It's hard to say exactly how large SaaS has become, because SaaS is in basically everything, from whatever repugnant productivity software your boss has insisted you need, to every consumer app now having some sort of "Plus" package that paywalls features that used to be free. Nevertheless, "SaaS" in most cases refers to business software, with the occasional conflation with the nebulous form of "the enterprise," which really means "any company larger than 500 people." McKinsey says it was worth "$3 trillion" in 2022 "after a decade of rapid growth," while Jason Lemkin and IT planning software company Vena say it has revenues somewhere between $300 billion and $400 billion a year. Grand View Research has the global business software and services market at around $584 billion, and the reason I bring that up is that basically all business software is now SaaS, and these companies make an absolute shit ton on charging service fees.

"Perpetual licenses" — as in something you pay for once, and use forever — are effectively dead, with a few exceptions such as Microsoft Windows, Microsoft Office, and some of its server and database systems. Adobe killed them in 2014 (and a few more in 2022), Oracle killed them in 2020, and Broadcom killed them in 2023, the same year that Citrix stopped supporting those unfortunate enough to have bought them before they went the way of the dodo in 2019.

To quote myself again, in 2011, Marc Andreessen said that "software is eating the world." And he was right, but not in a good way. Andreessen's argument was that software should eat every business model: Every single company you work with that has any kind of software now demands you subscribe to it, and the ramifications of them doing so are more significant than you've ever considered. That's because SaaS is — or, at least, was — a far-more-stable business model than selling people something once. Customers are so annoying.
When they buy something, they tend to use it until it stops working, and if you made the product well, that might mean they only pay you once. SaaS fixes this problem by giving them only one option — to pay you a nasty little toll every single month, or ideally once a year, on a contractual basis, in a way that's difficult to cancel. Sadly, the success of the business software industry turned everything into SaaS.

Recently, I tried to cancel my membership to Canva, a design platform that sort of works well when you want it to but sometimes makes your browser crash. Doing so required me to go through no fewer than four different screens, all of which required me to click "cancel" — offers to give me a discount, repeated requests to email support, then a final screen where the cancel button moved to a different place. This is nakedly evil. If you are somebody high up at Canva, I cannot tell you to go fuck yourself hard enough! This is a scummy way to do business and I would rather carve a meme on my ass than pay you another dollar! It's also, sadly, one of the tech industry's most common (and evil!) tricks.

Everybody got into SaaS because, for a while, SaaS was synonymous with growth. Venture capitalists invested in businesses with software subscriptions because it was an easy way to say "we're gonna grow so much," with massive sales teams that existed to badger potential customers, or "customer success managers" that operate as internal sales teams to try and get you to start paying for extra features, some of which might also be useful rather than just helping somebody hit their sales targets.

The other problem is how software is sold. In the excellent Brainwash An Executive Today, Nik Suresh broke down the truth behind a lot of SaaS sales — that the target customer is the purchaser at a company, who is often not the end user, meaning that software is often sold in a way that's entirely divorced from its functionality. This means that growth, especially as things have gotten desperate, has come from a place of conning somebody with money out of it rather than studiously winning a customer's heart. And, as I've hinted at previously, the only thing that grows forever is cancer.

In today's newsletter I am going to walk you through the contraction — and in many cases collapse — of tech's favourite business model, caused not by any threat from Large Language Models but by the brutality of reality, gravity and entropy. Despite the world being anything but predictable or reliable, the entire SaaS industry has been built on the idea that the good times would never, ever stop rolling. I guess you're probably wondering why that's a problem! Well, it's quite simple (emphasis mine):

That's right folks, 40% of PE deals between 2018 and 2022 were for software companies, the very same time venture capital fund returns got worse. Venture and private equity have piled into an industry they believed was taking off just as it started to slow down. The AI bubble is just part of the wider collapse of the software industry's growth cycle. This is The Hater's Guide To The SaaSpocalypse, or "Software As An Albatross."

In its Q4 2025 earnings, IBM said its total "generative AI book of business since 2023" hit $12.5 billion — of which 80% came from its consultancy services, which consist mostly of selling other people's AI models to other businesses. It then promptly said it would no longer report this as a separate metric going forward.
To be clear, this company made $67.5 billion in 2025, $62.8 billion in 2024, $61.9 billion in 2023 and $60.5 billion in 2022. Based on those numbers, it's hard to argue that AI is having much of an impact at all, and if it were, it would remain broken out.

Scummy consumer-abuser Adobe tries to scam investors and the media alike by referring to "AI-influenced" revenue — meaning literally any product with a kind of AI plugin you can pay for (or have to pay for as part of a subscription) — and "AI-first" revenue, which refers to actual AI products like Adobe Firefly. It's unclear how much these things actually make. According to Adobe's Q3 FY2025 earnings, "AI-influenced" ARR was "surpassing" $5 billion (so $1.248 billion in a quarter, though Adobe does not actually break this out in its earnings report), and "AI-first" ARR was "already exceeding [its] $250 million year-end target," which is a really nice way of saying "we maybe made about $60 million a quarter for a product that we won't shut the fuck up about." For some context, Adobe made $5.99 billion in that quarter, which makes this (assuming AI-first revenue was consistent) roughly 1% of its revenue.

Adobe then didn't report its AI-first revenue again until Q1 FY2026, when it revealed it had "more than tripled year over year" without disclosing the actual amount, likely because a year ago its AI-first revenue was $125 million ARR, though that number also included "add-on innovations." In any case, $375 million ARR works out to $31.25 million a month, or (even though it wasn't necessarily this high for the entire quarter) $93.75 million a quarter — roughly 1.465% of its $6.40 billion in quarterly revenue in Q1 FY2026.

Bulbous Software-As-An-Encumbrance Juggernaut Salesforce revealed in its latest earnings that its Agentforce and Data 360 (which is not an AI product, just the data resources required to use its services) platforms "exceeded" $2.9 billion… but $1.1 billion of that ARR came from its acquisition of Informatica Cloud (which is not a fucking AI product, by the way!). Agentforce ARR ended up being a measly $800 million, or $66 million a month for a company that makes $11.2 billion a year. It isn't clear what period of time this ARR refers to.

Microsoft, Google and Amazon do not break out their AI revenues. Box — whose CEO Aaron Levie appears to spend most of his life tweeting vague things about AI agents — does not break out AI revenue. Shopify, the company that mandates you prove that AI can't do a job before asking for resources, does not break out AI revenue.

ServiceNow, whose CEO back in 2022 told his executives that "everything they do [was now] AI, AI, AI, AI, AI," said in its Q4 2025 earnings that net new Annual Contract Value from its AI-powered "Now Assist" had doubled year-over-year, but declined to say how much that was, after saying in mid-2025 that it wanted a billion dollars in revenue from AI in 2026. Apparently it told analysts in March (per The Information) that it had hit $600 million in ACV… in the fourth quarter of 2025, which suggests that this is not actually $600 million of revenue quite yet, nor do we know what that revenue costs. What we do know is that ServiceNow had $3.46 billion in revenue in Q4 2025, and its net income has been effectively flat for multiple quarters, and basically identical since 2023.
Intuit, a company that vibrates with evil, had the temerity to show pride that it had generated "almost $90 million in AI efficiencies in the first half of 2025," a weird thing to say considering this was a statement from March 2026. Anyway, back in November 2025 it agreed to pay over $100 million for model access to integrate ChatGPT. Great stuff, everyone.

Workday, a company that makes about $2.5 billion a quarter in revenue, said it "generated over $100 million in new ACV from emerging AI products, [and that] overall ARR from these solutions was over $400 million." $400 million ARR is $33 million a month. Atlassian, which just laid off 10% of its workforce to "self-fund further investment in AI," does not break out its AI revenues.

Tens of thousands of SaaS companies were created in the last 20 years. These companies, for a while, had what seemed to be near-perpetual growth. This led to many, many private equity buyouts of SaaS companies, pumping them full of debt based on their existing recurring revenue and the assumption that they would never, ever stop growing. I will get into this later. It's very bad.

When growth slowed, the reaction was for these companies to raise venture debt — loans based on their revenue — and per Founderpath, 14 of the largest Business Development Companies loaned $18 billion across 1,000 companies in 2024 alone, with an average loan size of $13 million. This includes name-brand companies like Cornerstone OnDemand and Dropbox, the latter of which took on a $34.4 million debt facility with an 11% interest rate. One has to wonder why a company that had $643 million in revenue in Q4 2024 needed that debt.
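Since every company in this roundup reports a different flavor of ARR or ACV, it's worth keeping the deflation arithmetic handy. A small helper, using only the figures cited above:

```python
# ARR is a run rate, not revenue: divide by 12 for a month at that pace,
# and multiply that by 3 for a quarter. Figures below are the ones cited above.
def arr_breakdown(arr: float) -> tuple[float, float]:
    monthly = arr / 12
    return monthly, monthly * 3

for name, arr in [("Adobe AI-first", 375e6),
                  ("Salesforce Agentforce", 800e6),
                  ("Workday emerging AI", 400e6)]:
    monthly, quarterly = arr_breakdown(arr)
    print(f"{name}: ${monthly / 1e6:.2f}M/month, ${quarterly / 1e6:.2f}M/quarter")
```

None of which tells you what that revenue costs to deliver, or how much of it churns.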


The Beginning Of History

Hi! If you like this piece and want to support my work, please subscribe to my premium newsletter. It's $70 a year, or $7 a month, and in return you get a weekly newsletter that's usually anywhere from 5,000 to 18,000 words, including vast, extremely detailed analyses of NVIDIA, Anthropic and OpenAI's finances, and the AI bubble writ large. I just put out a massive Hater's Guide To Private Equity and one about both Oracle and Microsoft in the last month. I am regularly several steps ahead in my coverage, and you get an absolute ton of value, several books' worth of content a year in fact! In the bottom right hand corner of your screen you'll see a red circle — click that and select either monthly or annual. Next year I expect to expand to other areas too. It'll be great. You're gonna love it.

Before we go any further: no, this is not going to turn into a geopolitics blog. That being said, it's important to understand the effect of the war in Iran on everything I've been discussing.

So, let's start simple. Open Google Maps. Scroll to the Middle East. Look at the bit of water separating the Gulf Arab countries from Iran. That's the Persian Gulf. Scroll down a bit. Do you see the narrow channel between the United Arab Emirates and Iran? That's the Strait of Hormuz. At its narrowest point, it measures 24 miles across. Around 20% of the world's oil and a similar percentage of the world's liquefied natural gas (LNG) flow through it each year. Yes, that natural gas, the natural gas being used to power data centers like OpenAI and Oracle's "Stargate" Abilene (which I'll get to in a bit) and Musk's Colossus data center.

But really, size is misleading. Oil and gas tankers are massive, and they're full to the brim with incredibly toxic material. Spills are, obviously, bad. Also, because of their size, these tankers need to stick to where the water is a specific depth, lest they find themselves stuck. As a result, there are two lanes that tankers use when navigating through the Strait of Hormuz — one going in, one going out. This is a sensible idea meant to reduce the risk of collisions, but it also means that the potential chokepoint is even smaller.

Anyway, at the end of last month, Iran's Revolutionary Guard Corps unilaterally closed off the strait, warning merchant shipping that any attempt to travel through the strait was "not allowed." This closure, for what it's worth, is not legally binding. Iran can't unilaterally close a stretch of international waters. And yes, while some of those shipping lanes cross through Iran's territorial waters (and Oman's, for that matter), they're still governed by the UN Convention on the Law of the Sea (UNCLOS), which gives ships the right to cross through narrow geographical chokepoints where part of the waters belong to another state, and which says that nations "shall not hamper transit passage." That requirement, I add, cannot be suspended.

Still, merchant captains don't want to risk getting themselves and their crews blown up, or arrested and thrown in Evin Prison. Insurers don't want to pay for any ship that gets blown up, or indeed, for the ensuing environmental catastrophe. And the UAE doesn't want its pristine beaches covered in crude oil. And so, the tankers are staying put. And they'll stay there until one of four things happens: Of the first three, none feels particularly likely, at least in the short-to-medium term. Maybe I'm wrong.
Maybe everything reverses and everyone suddenly works it out — Trump realizes that he's touching the stove and pulls out after claiming a "successful operation." The world is chaotic and predicting it is difficult. Nevertheless, before that happens, closing the Strait of Hormuz means that Iran can inflict pain on American consumers at the pump, and we've already seen a 30% overnight spike in oil prices, with the price of a barrel jumping over $100 for the first time since 2022 (though as of writing this sentence it's around $95). With midterms on the horizon, Iran hopes that it can translate this consumer pain to political pain for Donald Trump at the ballot box.

This is all especially nasty when you consider that the price of oil is directly tied to inflation. It influences shipping costs, and a lot of medicines, construction materials, and consumer goods have petrochemical inputs. In very simple terms, if oil is used to make your stuff (or get it to you), that stuff goes up in price. While this obviously hurts countries with which Iran has previously had cordial relations (particularly Qatar, which is a major exporter of LNG), I genuinely don't think it cares anymore. I mean, Iran has launched drones and missiles at targets located within Qatar's territory, resulting in (at the latest count) 16 civilian injuries. Qatar shot down a couple of Iranian jets last week. I'm not sure what pressure any of the Gulf countries could exert on Iran to make it back down.

I don't see the security situation improving, either. Iran's Shahed drones are cheap and fairly easy to manufacture, and were developed under some of the most punishing sanctions, when the country was cut off from the global supply chain. It then licensed the design to Russia, another heavily-sanctioned country, which has employed them to devastating effect in Ukraine. Iran can produce these in bulk, and then — for a fraction of the cost of an American Tomahawk missile — send them out as a swarm to hit passing ships. Even without the ability to produce new ones, Iran is believed to have possessed a pre-war stockpile of tens of thousands of Shahed drones.

Shaheds aren't complicated, or expensive, or flashy, or even remotely sophisticated, and that's what makes them such a threat. It took Ukraine a long time to effectively figure out how to counter them, and it's done so by using a whole bunch of different tactics — from land-based defenses like the German-made Gepard anti-aircraft gun, to interceptor drones, to repurposed 1960s agricultural planes, to (quite literally) people shooting them down with assault rifles from the passenger seats of propeller-powered planes. Ukraine has the experience in combating these drones, and even still some manage to slip through its defences, often hitting civilian infrastructure. Airstrikes can probably reduce the threat to shipping (though not without exacting an inevitable and horrible civilian cost), but they can't eliminate it.

Hell, even the Houthis — despite only controlling a small portion of Yemen, and despite efforts by a coalition of nations to degrade their offensive capabilities — still pose a risk to maritime traffic heading towards the Suez Canal. Given the cargo these ships carry, any risk is probably too much risk for the insurers, for the carriers, and for the neighbouring countries. While I could imagine the US, at some point, saying "great news!
It's fine to go through the Strait of Hormuz now," and though it has started offering US government-backed reinsurance for vessels, I don't know if any shippers will actually believe it or take advantage of it. And so, we get to the last point on my list. Regime change.

Do I believe that the Iranian government is deeply unpopular with its own people? Yes. Do I believe that said government can be overthrown by airstrikes alone? No. Do I believe that Iran's government will do anything within its power to remain in control, even if that means slaughtering tens of thousands of its own people? Yes.

Even if there were an uprising, who would lead it? Iran's virtually cut off from the Internet, and movement within the country is restricted, making it hard for any opposition figures to organize. The two most high-profile outside opposition figures — Reza Pahlavi, the son of the former Shah, and Maryam Rajavi, leader of the MEK and NCRI — both have their own baggage, and they're living in the US and France respectively.

As I said previously, this isn't me wading into geopolitics, but more of a statement that there's no way of knowing when things will eventually return to normal. This conflict might wrap up in a couple of weeks, or it might take months, or even longer than that. All this amounts to a huge amount of global oil production being bottled up, made worse by the slight problem that Iran produces a lot of oil itself, sending most of it (over 80%) to China. With Iran unable to export crude, and its production facilities now under attack, China's going to have to look elsewhere. Which will result in even higher oil prices. Which, in turn, will make everything else more expensive. That is what brings us back to the AI bubble.

Now, given that most of the high-profile data center projects you've heard about are based in the US, which is (as mentioned) largely self-sufficient when it comes to hydrocarbons, you'd assume that it would be business as usual. And you would be wrong. You see, this is a global market. Prices can (and will!) go up in the US, even if the US doesn't import oil or natural gas from abroad, because that's just how this shit works. Sure, there are variations in cost where geography or politics play a role, but everyone will be on the same price trajectory. While we won't see the same kind of shortages that we witnessed during the last oil shock (the one which ended up taking down the Carter presidency), it will still hurt. While the US managed to decouple itself from oil imports, it hasn't (and probably can't) decouple itself from global pricing dynamics.

The US has faced a few major oil shocks — the first in 1973, after OPEC issued an embargo against the US following the Yom Kippur War, which ended the following year after Saudi Arabia broke ranks, and the second in 1979, following the Iranian Revolution — and both hurt… a lot. This won't be much different.

First, inflation. As the cost of living spikes, people will start demanding higher wages, which will, in turn, be passed down through higher prices. At least, that's what would normally happen. Paul Krugman, the Nobel-winning economist, wrote in his latest Substack that US workers in the 1970s were often unionized, and they benefited from contractual cost-of-living increases. Sadly, we live in 2026.
Union membership hasn't recovered from the dismal Reagan years, and with layoffs and offshoring, combined with an already tough jobs market, workers have little leverage to demand raises. We're in an economy oriented around do-nothing bosses that loathe their workers, one where workers will get squeezed even further by the consequences of any economic panic, even if it's one caused by multiple events completely out of their control. So, it's unlikely that we'll see a wage-based amplification of any inflation that comes from the current situation.

That said, depending on how bad things get, we will see inflation spike, and increases in inflation are usually met with changes in monetary policy, with central banks raising the cost of borrowing in an attempt to "cool" the economy (i.e. reduce consumer spending so that companies are forced to bring down prices). And we'd just started to bring down interest rates, with the Fed announcing in December that it projected rates of 3.4% by the end of 2026. Iran changes that in the most obvious way possible — if prices soar, interest rates may follow, and if rates go up, even by a percentage point or two, financing the tens and hundreds of billions of dollars in borrowing that the AI bubble demands will become significantly more expensive.

For some context, the International Monetary Fund's Kristalina Georgieva recently said "...a 10% increase in energy prices that persists for a year would push up global inflation by 40 basis points and slow global economic growth by 0.1-0.2%," per The Guardian, who also added…

And remember: the AI bubble, along with the massive private equity and credit funds backing it, is fueled almost entirely by debt. All this chaos and potential for jumps in inflation will also affect the affordability calculations that lenders will make before loaning the likes of Oracle and Meta the money they need, at a time when lenders are already turning their noses up at Blue Owl-backed data center debt deals.

The alternative is, of course, not raising interest rates — which, if the Fed loses its independence, is a possibility — which would be equally catastrophic, as we saw in the case of Turkey, whose president, Recep Tayyip Erdogan, has a somewhat… ahem… "unorthodox" approach to monetary policy. Erdogan believes that high interest rates cause inflation — a theory which he tested to the detriment of his own people. In simpler terms, Turkey has faced some of the worst hyperinflation in the developed world, and has a currency that lost nearly 90% of its value in five years.

It's not just the data centers, either. As interest rates go up, VC funds tend to shrink, because the investors that back said funds can get better returns elsewhere, and with much less risk. As I discussed in the Hater's Guide to Private Equity, 14% of large banks' total loan commitments go to private equity, private credit and other non-banking institutions, at a time when (to quote Forbes) PE firms are taking an average of 23 months to fundraise (up from 16 months in 2021), after private credit's corporate borrowers' default rates (as in the loans written off as unpaid by the borrower) hit 9.2% in 2025. Put really simply, private equity, private credit, venture capital and basically everything to do with technology currently depends on the near-perpetual availability of debt. The growth of private credit is so recent that we truly don't know what happens if the debt spigot gets turned off, but I do not think it will be pretty.
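To put the rate sensitivity in rough numbers, here's what a point or two of interest does to the annual carrying cost of the kind of debt load these projects require. The principal and rates are illustrative assumptions, not any company's actual terms:

```python
# What a rate hike does to the interest bill on AI-scale borrowing.
# The $40B principal and the rates below are illustrative assumptions.
PRINCIPAL = 40_000_000_000  # a plausible single-company debt load, in dollars

for rate in (0.034, 0.044, 0.054):  # the Fed's projected 3.4%, then +1 and +2 points
    print(f"{rate:.1%} -> ${PRINCIPAL * rate / 1e9:.2f}B a year in interest")
```

Two points of drift adds $800 million a year in interest on a single $40 billion loan, before anyone refinances anything else.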
Things get a little worse when you remember that famed business dipshits SoftBank are currently trying to raise a $40 billion loan to fund their three $10 billion Klarna-esque payments as part of their $30 billion investment in OpenAI's not-actually-$110-billion-yet funding round. How SoftBank — a company that raised a $15 billion bridge loan due to be paid off in around four months, and that has about $41.5 billion in existing debt maturing and needing to be refinanced in the next nine months or so, per JustDario — intends to take on another $40 billion is beyond me. And that's a sentence I would've written before the war in Iran began.

There's also evidence that links lower IPO numbers to rising inflation rates, which means that achieving the exit that investors want will become so much harder — and so, they might as well not bother. Need proof? SoftBank-owned mobile payments company PayPay delayed its IPO last week, and I quote Reuters, because "...markets were rattled by [the attack] on Iran, according to two people familiar with the matter." Inflation also negatively affects company valuations — which, again, will influence whether investors open their purse strings.

This is all a long-winded way of saying that the AI industry is about to enter a world of hurt. Every AI startup is unprofitable, which means they need to raise money from venture capitalists, who raise money from investors that aren't paying them, pension funds and insurers, and private equity and credit firms that raise money from banks, both of which will struggle should central bank rates spike. The infrastructural layer — AI data centers — also requires endless debt (due to the massive upfront costs for NVIDIA chips and construction), and that debt was already becoming difficult to raise.

Then there's the practical opex and capex costs. Higher interest rates mean that any contractors building the facilities will insist on higher fees, because their costs — labor, the price of filling up a van or a truck with gas, or paying for building materials — have gone up. And they'll probably pad the increase a bit to account for any future rises in inflation.

Those gas turbines you're running to power your facility? Yeah, feeding those is going to get much more expensive. Natural gas is up as much as 50%, and a lot of US capacity is going to serve markets in Asia and Europe to take advantage of the spike in prices, which will mean an increase in prices for US consumers. In fact, you don't even need interest rates to spike for things to get nasty. As the price of oil continues to skyrocket, flying a Boeing 747 filled with GB200 racks from Taiwan to Texas, or mobilizing the thousands of people that work (to quote Bloomberg) day and night to build Stargate Abilene, will become significantly more expensive.

And even in the very, very unlikely event that things somehow quickly return to whatever level of "normal" you'd call the world before the conflict started, even brief shocks to the financial plumbing are enough to destabilize an already-fractured hype cycle. Last week, Bloomberg reported something I'd already confirmed three weeks ago — that OpenAI was no longer part of the planned expansion (past the initial two (of eight) buildings) of Stargate Abilene, a project that's already massively delayed from its supposed "full energization" by mid-2026.
Oracle disputes the report (and if Oracle’s wrong, I imagine investors will rightly sue), claiming that Crusoe (the developer) and Oracle are “operating in lockstep,” which doesn’t make sense considering the delays or, well, reality. My sources in Abilene also tell me that the expansion fell apart due to Oracle’s dissatisfaction with the revenue it was making on buildings one and two, and that a bidding war was taking place between Meta and Google for the future capacity.

Bloomberg’s Ed Ludlow also reports that NVIDIA put down a $150 million deposit as Crusoe attempts to lock down Meta as a tenant — a very strange thing to do considering Meta is flush with cash, suggesting a desperation in the hearts of everybody involved. It’s also very, very strange to have a supplier get involved in a discussion between a vendor and a customer, almost as if there’s some sort of circular financing going on. As I reported back in October, Stargate currently only has around 200MW of power, and The Information reports that power won’t be available for a year or more, something I also said in October.

As self-serving as it sounds, I really do recommend you read my premium piece about the AI Bubble’s Impossible Promises, because I laid out there how stupid and impossible gigawatt data centers were before the war in Iran. We’ve already got a shortage of the electrical-grade steel and transformers required to expand America’s (and the world’s) power grid, we’ve already got a shortage of the skilled labor required to build that power (and data centers in general), and we’re moving massive amounts of heavy shit around a large patch of land using thousands of people, which will cost a lot of gas.

I don’t know why, but the media and the markets seem incapable of imagining a world where none of this stuff happens, clinging to previous epochs where “things worked out” and where “things were okay” without a second thought. In The Black Swan, Nassim Taleb makes the point that “…the process of having [journalists] report in lockstep [causes] the dimensionality of the opinion set to shrink considerably,” saying that they tend to “[converge] on opinions and [use] the same items as causes.” In simpler terms, everybody reporting the same thing in the same way naturally makes everybody converge on the same kinds of ideas — that AI is going to be a success because previous eras have “worked out,” even if they can’t really express what “worked out” means. The logic is almost childlike — in the past, lots of money was invested in stuff that didn’t work out, but because some things worked out after spending lots of money, spending lots of money will work out here.

The natural result is that reporters (and bloggers) seek endless positive confirmation, and build narratives to match. They report that Anthropic hit $19 billion in annualized revenue and OpenAI hit $25 billion in annualized revenue — which has been confirmed to refer to a 4-week-long period of revenue multiplied by 12 — as proof that the AI bubble is real, ignoring the fact that both companies lose billions of dollars and that my own reporting says that OpenAI made billions less and spent billions more in 2025. They assume that a company would not tell everybody something untrue or impossible, because accepting that companies do this undermines the structure of how reporting takes place, and means that reporters have to accept that they, in some cases, are used by companies to peddle information with the intent of deception.
And thanks to an affidavit from Anthropic Chief Financial Officer Krishna Rao, filed as part of Anthropic’s suit against the Department of Defense’s supply chain risk designation, it’s clear that the deception was intentional, as the affidavit confirmed that Anthropic’s lifetime revenue “to date” (referring to March 9 2026) is $5 billion, and that it has spent $10 billion on inference and training.

To be abundantly clear, this means that Anthropic’s previous statement that it made $14 billion in annualized revenue (stated by Anthropic on February 12 2026, and referring, I’ve confirmed, to a month-long period multiplied by 12) — meaning a period of 30 days where it made $1.16 billion — accounts for more than 23% of its lifetime revenue.

This comes down to which Anthropic you believe, because these two statements do not match up. I am not stating that it is lying, but I do believe annualized revenue is a deliberate attempt to obfuscate things and give the vibe that the business is healthier than it is. I also do not think it’s likely that Anthropic made 23% of its lifetime revenue in the space of a month. What this almost certainly means is that the sources that told media outlets that Anthropic made $4.5 billion in 2025 were misleading them. The exact quote from the affidavit is that “...[Anthropic] has generated substantial revenue since entering the commercial market—exceeding $5 billion to date,” and while boosters will say “uhm, it says ‘exceeding,’” if it were anything higher than $5.5 billion, Anthropic would’ve absolutely said so.

We can also do some very simple maths that suggests that Anthropic’s “annualized” figures are…questionable (I’ve sketched it below). On February 12 2026, annualized revenue hit $14 billion. Five days before the lawsuit was filed, it was $19 billion, “with $6 billion added in February” (per Dario Amodei at a Morgan Stanley conference), suggesting that annualized revenue entering February was $13 billion, or $1.083 billion a month. Even if we assume a flat billion, that means that Anthropic made $2.16 billion between January and the end of February 2026. And that’s not including the revenue made in March so far.

But I’m a curious little critter, and went ahead and added up all of the times that Anthropic had talked about its annualized revenue from 2025 onward — you can find the results, with links, here! — and based on my calculations, just using published annualized revenues gets us to $4.837 billion. We are, however, missing several periods of time, for which I’ve used “safe” (as in lower, so that I am trying to give Anthropic the benefit of the doubt) numbers, calculated from the periods themselves (listed at the end of this piece). With these estimates, we get a grand total of $6.66 billion (ominous!), which is a great deal higher than $5 billion. When you remove the estimates and annualized revenues for 2026, you get $3.642 billion, which heavily suggests that Anthropic did not, in fact, make $4.5 billion in 2025. There isn’t a chance in Hell this company made $4.5 billion in 2025 based on its own CFO’s affidavit. I also think it’s reasonable to doubt the veracity of these annualized revenues, or, in my kindest estimation, that Anthropic is using any kind of standard “annualized” formula.

There are a few ways in which people will try and claim I’m wrong, and I’ve responded to them at the end of this piece. I think it’s reasonable to doubt whether Anthropic made anywhere near $4.5 billion in 2025, whether Anthropic has annualized revenues even approaching those reported, and whether anything it says can be trusted going forward.
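Here’s that simple maths as a runnable sketch, using only the figures above; the only assumption is the stated one, that “annualized” means a roughly 30-day period multiplied by 12.

```python
def monthly_from_annualized(annualized_bn: float) -> float:
    """Annualized revenue is a ~30-day period multiplied by 12,
    so dividing by 12 recovers the implied monthly figure."""
    return annualized_bn / 12

feb = monthly_from_annualized(14.0)  # $14bn, stated February 12 2026
print(f"Implied February revenue: ${feb:.3f}bn")   # ~$1.167bn for the month

# $19bn five days before the suit, "with $6 billion added in February,"
# implies $13bn annualized entering February:
jan = monthly_from_annualized(19.0 - 6.0)
print(f"Implied January run rate: ${jan:.3f}bn")   # ~$1.083bn for the month

# Rounding January down to a flat $1bn, as above:
print(f"Jan + Feb alone: ~${1.0 + feb:.2f}bn")     # ~$2.17bn ($2.16bn in the text)
# ...against a CFO-sworn lifetime total of "exceeding $5 billion."
```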
It appears one of the most prominent startups in the valley has misled everybody about how much it makes, or, if it has not, that somebody else is perpetuating a misinformation campaign. Add together the annualized revenues. Look at the links. Do the maths. I got the links for annualized revenues from Epoch AI, though I have seen all of these before in my own research.

People are going to try and justify why this isn’t a problem in all manner of ways. They’ll say that, sure, Anthropic made less money in 2025, but that’s fine, because everybody could see what annualized revenues really meant. So far, nobody has a cogent response, likely because there isn’t one.

I haven’t even addressed the $10 billion in training and inference costs, because good lord, those costs are stinky, and based on my own reporting — which did not come from Anthropic, which is why I trust it! — Anthropic spent $2.66 billion on Amazon Web Services from January through September 2025, or around 26% of its lifetime compute spend. That’s remarkable, and suggests this company’s compute spend is absolutely out of control. This leads me to one more quote from Anthropic’s CFO:

Without attempting to influence their decision making, if I were a counterparty to a company like this, my biggest concern would now be that this filing appears to suggest that Anthropic’s revenues are materially smaller than I believed.

It might seem dangerous to be like me, pointing at stuff and saying “that doesn’t make sense!” or questioning a narrative held by the entire stock market and most of modern journalism, but I’d argue the danger is that narrow, narrative-led, establishment-driven thinking makes it impossible for reporters to report. While you might be able to say “a source told me that something went wrong,” the natural drive to report on what everybody else is saying means that this information is often reported with careful weasel words like “still going as planned” or “still growing incredibly fast.” It’s a kind of post-factual decorum — a need to keep the peace that frames bad signs as bumps in the road and good signs as cast-iron affirmations of future success.

This is a catastrophic failure of journalism that deprives retail investors and the general public of useful information. It also — though it feels as if reporters are “getting scoops” or “breaking news” — naturally magnetizes journalists toward information that confirms the narrative, or “leaks” that are actually the company intentionally getting something in front of a reporter so that they (the reporter) can appear as if this was “investigative news” versus “marketing in a different hat.”

It also means that modern journalism is ill-equipped to handle this moment, and no, this is not a “new” phenomenon. It is the same thing that led to the dot com bubble, the NFT bubble, the crypto bubble, the Clubhouse bubble, the AR and VR bubble, and many more bubbles to come. To avoid being “wrong,” reporters are pursuing stories that prove somebody else right, which almost invariably ends with the reporter being wrong. “Pursuing stories to prove somebody else right” means that a great many reporters (and newsletter writers) that claim to be objective and fact-focused end up writing the narrative that companies use to raise money, using evidence manufactured by the company in question.

In some cases, this is an act of cowardice. Following the narrative because it’s easy and because everybody’s doing it adds a layer of reputation laundering.
If everybody failed, everybody was conned and thus nobody has to be held accountable, and because there really has never been any accountability for the media being wrong about any previous bubbles, the assumption is that there never will be. However you may feel about my work or what I’m saying, I need you to understand something: journalism, both historically and currently, is unprepared for the consequences of being wrong.

The current media consensus around the AI bubble is that even if it pops it will be fine, with some even saying that “even if OpenAI folds, everything will work out, because of the dot com bubble.” This is a natural attempt to rationalize and normalize the chaotic and destructive — an attempt to map how this bubble would burst onto previous bubbles, because new things are difficult and scary to imagine. There has never been a time when the entire market crystallised around a few specific companies — not even the dot com bubble! — and then built an entire infrastructural layer mostly in service of two of them, with a price tag now nearing the $1tn mark.

Let’s get specific. The scoffing and jeering I get from people when I say that AI demand doesn’t exist, or that AI companies don’t have revenues, or that OpenAI or Anthropic are unsustainable, is never met with a good faith response, just quotes about how “Amazon Web Services lost lots of money” or “Uber lost lots of money,” or that “these are the fastest growing companies of all time,” or something about “all code being written by AI,” a subject I discussed at length two weeks ago.

The Large Language Model era is uniquely built to exploit human beings’ belief that we can infer the future based on the past, both in how it processes data and in how people report on its abilities. It exploits media outlets that don’t give people the time (or hold them to a standard that requires them) to actually learn the subjects in question, and it sells itself based on the statement that “this is the worst it’ll ever be” and “previous eras of investment worked out.”

LLMs also naturally cater to those who are willing to accept substandard explanations and puddle-deep domain expertise. The slightest sign that Claude Code can build an app — whether it’s capable of actually doing so or not — is enough for people that are on television every day to say that it will build all software, because it confirms the bias that the cycle of innovation and incumbent disruption still exists, even if it hasn’t for quite some time. A glossy report about job displacement — even one that literally says that Anthropic found “no systematic increase in job displacement in unemployment” from AI — gets reported as proof that jobs are being displaced by AI, because it says “AI is far from reaching its theoretical capability: actual coverage remains a fraction of what’s feasible.”

This is an aggressive exploitation of how willing people with the responsibility to tell the truth are to accept half-assed explanations, and how willing people are to operate based on principles garnered from the lightest intellectual lifts in the world. The assumption is always the same: that what has happened before will happen again, even if the actuality of history doesn’t really reflect that at all.
Society — the media, politicians, chief executives, shit, everyone on some level — is incapable of thinking of new stuff that would happen, especially if that new stuff would be economically destructive, such as a massive scar across all private credit, private equity and venture capital, one so severe that it may potentially destroy the way that businesses (and startups, for that matter) raise capital for the foreseeable future.

People are more willing to come up with societally-destructive theories — such as all software engineering and all journalism and all content being created by LLMs, even if it doesn’t actually make sense — because it fits their biases. Perhaps they’re beaten down by decades of the erosion of labor power and the destruction of our environment. Perhaps they’re beaten down by the rise of the right and the destruction of the rights of minorities and people of colour. Or, more noxiously, perhaps they’re excited to be the one that called it first for the new overlords they perceive will own this (fictional) future, so much so that they’ll ignore the underlying ridiculousness of the economics, refuse to do any further reading that might invalidate their beliefs, or simply say whatever they’re told because it gets clicks and makes their advertisers, bosses or friends happy.

People are willing to fall in line behind mythology because conceiving of an entirely different future is an intellectually challenging and emotionally draining act. It requires learning about a multitude of systems and interconnecting disciplines, and being willing to admit, again and again, that you do not understand something and must learn more. There are plenty of people that are willing to do this, and plenty more that are not, and the latter are the ones with TV shows and newspaper columns.

I believe we’re in a new era. It’s entirely different. Stop trying to say “but in the past,” because the past isn’t that useful, and it’s only useful if you’re capable of evaluating it critically and skeptically, making sure that it’s actually the same rather than just feeling like it is. I keep calling this era “The Beginning of History,” not because it directly reflects Francis Fukuyama’s theory (which relates to democracies), but because I believe that those who succeed in this world are not those who are desperate to neatly fit it into the historical failures or successes of the past, but those willing to stare at it with the cold, hard fury of the present.

There are many signs that the past no longer makes sense: the collapse of SaaS (which I’ll cover in this week’s premium), the collapse of the business models of both venture capital and private equity, the collapse of democracies under the weight of fascism because the opposition parties never seem to give enough of a fuck about the experiences of regular people.

Using the past to dictate what will happen in the future is masturbatory. It allows you to feel smart and say “I know the most about anything, which means I know what’s going on.” It is, much like an LLM, assuming that simply reading enough is what makes somebody smart, that shoving a bunch of text in your head — whether or not you understand it is immaterial — is what makes somebody know something or be good at something. It’s an intellectually bankrupt position, one that I believe will lead those unable to adapt to the reality of the future to destruction.
It leads to lazy thinking that grasps at confirmations rather than any fundamental understanding, depriving the general public of good information in favor of that which confirms the biases and wants and needs of the malignant and ignorant.

It takes courage to be willing to be wrong deliberately, but only if you admit when you were wrong. That didn’t happen in previous bubbles, and it has to happen now if we’re ever to stop bubbles from forming. I have made a great deal of effort to learn more as time goes on. I do not see boosters doing the same to prove their points. I will be pointing to this sentence in the future, one way or another.

So much more effort is put into humouring the ideas of the bubbles, of proving the marketing spiel of the bubbles, framed as a noxious “both-sides”-ism that deprives the reader, listener or viewer of their connection with reality. It might be tempting to say this happens with cynicism too, except the majority of attention paid to bubbles is positive, and saying otherwise is a fucking lie. Need to justify unprofitable, unsustainable AI companies? Uber lost money before. Need to explain why AI data centers being built ahead of demand isn’t a problem? Well, the internet exists, and people eventually used that fiber. You can ignore actual proof while pretending to provide your own, all just by pointing vaguely to things in the past. It takes actual courage to form an opinion, something boosters fundamentally lack.

I’m not saying it’s impossible to make predictions, but that the majority of people make them with flimsy information, such as “this thing happened before” or “everyone’s saying this will happen.” I’m not saying you can’t try and understand what will happen next, but doing so requires you to use information that is not, on its face, generated by wishcasting or events that took place decades ago. In the end, the greatest lesson we can learn is that, historically speaking, people tend to fuck around and then find out. The assumption boosters make is that one can fuck around forever. History tends to disagree.

The ways the situation in the strait could plausibly resolve, as mentioned earlier:

- Iran rescinds its ban on travel through the strait.
- The security situation improves (either because Iran’s ability to attack shipping becomes sufficiently degraded, or because the Gulf countries, or perhaps their Western allies, feel sufficiently confident that they can safely escort ships through the strait).
- The current Iranian government is overthrown and the conflict ends.
- Both sides reach an agreement and we return to the status quo.

The periods missing from Anthropic’s published annualized revenues, and my “safe” estimates for them:

- April 1 to 30, 2025, which I estimate as $166 million, based on reports of Anthropic’s annualized revenue being $2 billion at the end of March 2025.
- August 1 to August 20, 2025, which I estimate as $271 million, based on July 2025’s revenues ($4 billion annualized).
- November 1 to November 29, 2025, which I estimate as $556 million, based on October’s $7 billion in annualized revenues.
- January 1 to January 11, 2026, which I estimate as $219.1 million, assuming $9 billion in annualized revenue (based on reported December revenues).

And the ways in which people will try and claim I’m wrong:

- “Ed, it’s commercial revenue!” — this is all revenue. Anthropic doesn’t have “non-commercial revenue,” unless you are going to use a very, very broad version of what “non-commercial” means, at which point you have to tell me why you trust Anthropic.
- “This doesn’t include all the revenue up until March 2026! Maybe this suit was written weeks ago!” — even if it doesn’t, based on Anthropic’s own numbers, things don’t line up. Also, this was written specifically as part of the lawsuit with the DoD. It’s recent.
- “It says ‘exceeding’!” — it also says “over $10 billion in inference and training costs.” Can I just say whatever number I want here? Because if this is your argument, that’s what you’re doing.
- “That $5 billion number is accurate!” — the only way this makes sense is if some or all of these annualized revenues are incorrect.


The AI Bubble Is An Information War

Editor's Note: Apologies if you received this email twice - we had an issue with our mail server that meant it was hitting spam in many cases!

Hi! If you like this piece and want to support my work, please subscribe to my premium newsletter. It’s $70 a year, or $7 a month, and in return you get a weekly newsletter that’s usually anywhere from 5,000 to 18,000 words, including vast, extremely detailed analyses of NVIDIA, Anthropic and OpenAI’s finances, and the AI bubble writ large. I just put out a massive Hater’s Guide To Private Equity and one about both Oracle and Microsoft in the last month. I am regularly several steps ahead in my coverage, and you get an absolute ton of value, several books’ worth of content a year, in fact! In the bottom right hand corner of your screen you’ll see a red circle — click that and select either monthly or annual. Next year I expect to expand to other areas too. It’ll be great. You’re gonna love it.

Soundtrack: The Dillinger Escape Plan - Unretrofied

So, last week the AI boom wilted brutally under the weight of an NVIDIA earnings report that beat expectations but didn’t make anybody feel better about the overall stability of the industry. Worse still, NVIDIA’s earnings also mentioned $27bn in cloud commitments — literally paying its customers to rent the chips it sells, heavily suggesting that the underlying revenue isn’t there.

A day later, CoreWeave posted its Q4 FY2025 earnings, where it posted a loss of 89 cents per share, with $1.57bn in revenue and an operating margin of negative 6% for the quarter. Its 10-K only just came out the day before I went to press, and I’ve been pretty sick, so I haven’t had a chance to look at it deeply yet. That being said, it confirms that 67% of its revenue comes from one customer (Microsoft).

Yet the underdiscussed part of CoreWeave’s earnings is that it had 850MW of power at the end of Q4, up from 590MW in Q3 2025 — an increase of 260MW…and a drop in revenue per megawatt if you actually do the maths (sketched below). While this is a somewhat-inexact calculation — we don’t know exactly how much compute was producing revenue in the period, and when new capacity came online — it shows that CoreWeave’s underlying business appears to be weakening as it adds capacity, which is the opposite of how a business should run. It also suggests CoreWeave's customers — which include Meta, OpenAI, Microsoft (for OpenAI), Google, and a $6.3bn backstop from NVIDIA for any unsold capacity through 2032 — are paying like absolute crap.

CoreWeave, as I’ve been warning about since March 2025, is a time bomb. Its operations are deeply unprofitable and require massive amounts of capital expenditures ($10bn in 2025 alone just to exist, a number that’s expected to double in 2026). It is burdened with punishing debt to make negative-margin revenue, even when that revenue is earned from the wealthiest and most prestigious names in the industry. Now it has to raise another $8.5bn to even fulfil its $14bn contract with Meta.

For FY2025, CoreWeave made $5.13bn in revenue, making a $46m loss in the process. The temptation is to suggest that margins might improve at some point, but considering they’ve dropped from 17% (without debt) for FY2024 to negative 1% for FY2025, I only see proof to the contrary. In fact, CoreWeave’s quarterly margins have trended the wrong way over the last four quarters, going from negative 3%, to 2%, to 4%, and now crashing down to negative 6%.
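Here’s that per-megawatt maths as a quick sketch, using the reported figures (repeated in the notes at the end of this piece); as noted above, treat it as an approximation, since we don’t know exactly when capacity came online.

```python
# Revenue per megawatt from CoreWeave's reported quarters.
# Approximation only: we don't know how much of the new capacity was
# actually producing revenue for the full quarter.
quarters = {
    "Q3 2025": (1.36e9, 590),  # (revenue in $, power in MW)
    "Q4 2025": (1.57e9, 850),
}
for name, (revenue, mw) in quarters.items():
    print(f"{name}: ${revenue / mw / 1e6:.3f}m of revenue per MW")
# Q3 2025: ~$2.305m/MW; Q4 2025: ~$1.847m/MW.
# 44% more capacity, roughly 20% less revenue per megawatt.
```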
This suggests a fundamental weakness in the business model of renting out GPUs, which brings into question the value of NVIDIA’s $68.13bn in Q4 FY2026 revenue, or indeed, CoreWeave’s $66.8bn revenue backlog. Remember: CoreWeave is an NVIDIA-backed (and backstopped, to the point that NVIDIA is guaranteeing CoreWeave’s lease payments) neocloud with every customer it could dream of. I think it’s reasonable to ask whether NVIDIA might have sold hundreds of billions of dollars of GPUs that only ever lose money. Nebius — which counts Microsoft and Meta as its customers — lost $249.6m on $227.7m of revenue in FY2025. No hyperscaler discloses its actual revenues from renting out these GPUs (or its own silicon), which is not something you do when things are going well.

Lots of people have come up with very complex ways of arguing we’re in a “supercycle” or “AI boom” or some such bullshit, so I’m condensing some of these talking points and the ways to counteract them at the end of this piece.

Anyway, let’s talk about how much OpenAI has raised, and how none of that makes sense either. Great news! If you don’t think about it for a second or read anything, OpenAI raised $110bn, with $50bn from Amazon, $30bn from NVIDIA and $30bn from SoftBank. Well, okay, not really. Per The Information, the actual terms are less impressive (I’ve broken them down at the end of this piece).

Talking of The Information, they also reported that OpenAI intends to raise another $10bn from other investors, including selling the shares from the nonprofit entity. It’s so cool that OpenAI is just looting its non-profit! Nobody seems to mind.

Talking of things that nobody seems to mind, on Friday Sam Altman accidentally said the quiet part out loud, live on CNBC, when asked about the very obviously circular deals with NVIDIA, Amazon and Microsoft (emphasis mine):

Hey Sam, what does “the whole thing” refer to here? Because I know you probably mean the AI industry, but this sounds exactly like a Ponzi scheme!

Now, jokes aside, Ponzi schemes work entirely through feeding investor money to other investors. OpenAI and AI companies are not a Ponzi scheme. There are real revenues; people are paying them money. Much like NVIDIA isn’t Enron, OpenAI isn’t a Ponzi scheme. However, the way that OpenAI describes the AI industry sure does sound like a scam. It’s very obvious that neither OpenAI nor its peers have any plan to make any of this work beyond saying “well, we’ll just keep making more money,” and I’m being quite literal, per The Information:

That’s right: by the end of 2026 OpenAI will make as much money as PayPal, by the end of 2027 it’ll make $20bn more than SAP, Visa, and Salesforce, and by the end of 2028 it’ll make more than TSMC, the company that builds all the crap that runs OpenAI’s services. By the end of 2030, OpenAI will, apparently, make nearly as much annual revenue as Microsoft ($305.45 billion). It’s just that easy. And all it’ll take is for OpenAI to burn another $230 billion…though I think it’ll need far more than that.

Please note that I am going to humour some numbers that I have serious questions about, but they still illustrate my point. Per The Information, OpenAI had around $17.5bn in cash and cash equivalents at the end of June 2025, on $4.3bn of revenue, with $2.5bn in inference spend and $6.7bn in training compute. Per CNBC in February, OpenAI (allegedly!) pulled in $13.1bn in revenue in 2025, and only had a loss of $8bn, but this doesn’t really make sense at all!
Please note, I doubt these numbers! I think they are very shifty! My own numbers say that OpenAI only made $4.3bn through the end of September, and it spent $8.67bn on inference! Nevertheless, I can still make my point.

Let’s be real simple for a second: suppose we are to believe that in the first half of the year, it cost $2.5bn in inference to make $4.3bn in revenue, so around 58 cents per dollar. For OpenAI to make $8.8bn — the distance between $4.3bn and $13.1bn — that’s another $5.1bn in inference, and keep in mind that OpenAI launched Sora 2 in September 2025 and did massive pushes around its Codex platform, guaranteeing higher inference costs. Then there’s the issue of training. For $2.5bn of revenue, OpenAI spent $6.7bn in training costs — or around $2.68 per dollar of revenue. At that rate, OpenAI spent a further $23.58bn on training, bringing us to $28.6bn in burn just for the back half of 2025. Now, you might think I’m being a little unfair here — training costs aren’t necessarily linear with revenues like inference is — but there’s a compelling argument to be made that costs are far higher than we thought.

Now, I want to be clear that on February 20 2026, The Information reported that OpenAI had “about $40 billion in cash at the end of 2025,” but that doesn’t really make sense! Assuming $17.5bn in cash and cash equivalents at the end of June 2025, plus $8.8bn in revenue, plus $8.3bn in venture funding, plus $22.5bn from Masayoshi Son…that’s $57.1bn. Subtract a cash burn of $8bn and that would leave $49.1bn, and no, I’m sorry, “about $40 billion in cash” cannot be rounded down from $49.1bn! (I’ve sketched the reconciliation below.) In my mind, it’s far more likely that OpenAI’s losses were in excess of $10bn or even $20bn, especially when you factor in that OpenAI is paying an average of $1.5 million in yearly stock-based compensation, per the Wall Street Journal.

There’s also another possible answer: I think OpenAI is lying to the media, because it knows the media won’t think too hard about the numbers or compare them. I also want to be clear that this is not me bagging on The Information — they just happen to be reporting these numbers the most. I think they do a great job of reporting, I pay for their subscription out of my own pocket, and my only problem is that there doesn’t seem to be an effort made to talk about the inconsistency of OpenAI’s numbers. I get that it’s difficult, too. You want to keep access. Reporting this stuff is important and relevant. The problem is — and I say this as somebody who has read every single story about OpenAI’s funding and revenues! — that this company is clearly just…lying? Sure, you can say “it’s projections,” but there is a clear attempt to use the media to misinform investors and the general public. For example, OpenAI claimed SoftBank would spend $3bn a year on agents in 2025. That never happened!

Anyway, let’s get to it (you’ll find the full timeline of these reported numbers at the end of this piece). What I’m trying to get at is that OpenAI (and, for that matter, Anthropic) has spent the last two years increasingly obfuscating the truth through leak after leak to the media. The numbers do not make any sense when you actually put them together, and the reason that these companies continue to do this is that they’re confident that these outlets will never say a thing, or will cover for the discrepancies by saying “these are projections!” These are projections, and I think it’s a noteworthy story that these companies either wildly miss their projections (IE: costs) or almost exactly make their projections (revenues), which is even weirder.
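Here’s that reconciliation as a sketch, taking every reported figure at face value (all figures are the ones cited above, in billions of dollars):

```python
# First-half 2025 figures, per The Information (all in $bn).
h1_revenue, h1_inference = 4.3, 2.5
inference_per_dollar = h1_inference / h1_revenue   # ~0.58, the 58 cents
h2_revenue = 13.1 - 4.3                            # the remaining $8.8bn
print(f"Implied H2 inference: ~${h2_revenue * inference_per_dollar:.1f}bn")

# The cash pile, reported number by reported number:
cash = 17.5         # cash and equivalents, end of June 2025
cash += h2_revenue  # + the alleged second-half revenue
cash += 8.3         # + the August 2025 venture round
cash += 22.5        # + SoftBank's December payment
print(f"Cash before burn: ${cash:.1f}bn")                     # $57.1bn
print(f"Cash after a reported $8bn burn: ${cash - 8:.1f}bn")  # $49.1bn
# "About $40 billion at the end of 2025" cannot be rounded from $49.1bn.
```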
But the biggest thing to take away from this is that one of the classic arguments against my work is that “costs will just come down,” but the costs never come down. That, and it appears that both of these companies are deliberately obfuscating their real numbers as a means of making themselves look better.

Well, leaking it, and outright posting it. On December 17 2025, OpenAI’s Twitter account posted the following:

These numbers are, of course, bullshit. OpenAI may have hit $6bn ARR in 2024 ($500m in a 30-day period, though OpenAI has never defined this number) or $20bn ARR in 2025 ($1.67bn in a 30-day period), but this is specifically diagrammed to make you think “$20bn in 2025” and “$6bn in 2024.” There are members of the media who defend OpenAI, saying that “these are annualized figures,” but OpenAI does not state that, because OpenAI loves to lie.

Anthropic isn’t much better, as I discussed a few weeks ago in the Hater’s Guide. Chief Executive Dario Amodei has spent the last few years massively overstating what LLMs can do in the pursuit of eternal growth. He’s also framed himself as a paragon of wisdom, and Anthropic as a bastion of safety and responsibility.

There appears to be some confusion around what happened in the last few days that I’d like to clear up, especially after the outpouring of respect for Anthropic “doing the right thing” when the Department of Defense threatened to label it a supply chain risk for not agreeing to its terms. Per Anthropic, on Friday February 27 2026:

Anthropic, of course, leaves out one detail: Hegseth said that “...effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.” If Hegseth follows through, Anthropic’s business will collapse, though Anthropic and its partners are ignoring this statement, as a supply chain risk designation only forbids Anthropic from working with the US government itself.

When the US military attacked Iran a day later, people quickly interpreted Anthropic’s narrow (by its own words) and specific limitations as some sort of anti-war position. Claude quickly rocketed to the top of the iOS app charts, I assume because people believed that Dario Amodei was saying “I don’t want the war in Iran!” versus “I fully support the war in Iran and any uses you might need my software for other than the two I’ve mentioned, let me or support know if you have any issues!”

To be clear, these were the only issues that Anthropic had with the contract. Whether or not these are things that an LLM is actually good at, Anthropic (and I quote!) “...[supports] all lawful uses of AI for national security aside from the two narrow exceptions above.” The military’s demands were for “all lawful uses,” though I don’t think Anthropic really gives a shit about whether the war in Iran is legal, because if it did, it would have shut down the chatbot rather than supported the conflict. Just as a note: Anthropic’s is also the only AI model that appears to be available for classified military operations.

Let’s be explicit: Anthropic’s Claude (and its various models) is fully approved for use in the military, and, to quote its own blog post, “has supported American warfighters since June 2024 and has every intention of continuing to do so.” To spell out what “support” means, I’ll quote the Wall Street Journal:

In reality, Claude is likely being used to go through a bunch of images and to answer questions about particular scenarios.
There is very little specialized military training data, and I imagine many of the demands for “full access to powerful AI” have come as a result of Amodei and Altman’s bloviating about the “incredible power of AI.” More than likely, Centcom and the rest of the military pepper it with questions that let them justify acts that blow up schools, kill US servicemembers, and threaten to continue the forever war that has killed millions of people and thrown the Middle East into near-permanent disarray.

Nevertheless, Dario Amodei gets fawning press about being a patriot that deeply cares about safety, less than a week after Anthropic dropped its safety pledge to not train an AI system unless it could guarantee in advance that its safety measures were accurate. Here are some other facts about Dario Amodei from his interview with CBS (quoted at the end of this piece)! “What’s right,” to be clear, involves allowing Claude to choose who lives or dies and to be used to plan and execute armed conflicts. Let’s stop pretending that Anthropic is some sort of ethical paragon! It’s the same old shit!

In any case, it’s unclear what happens next. Anthropic appears ready to challenge the supply chain risk designation in court, and said designation doesn’t kick in immediately, requiring a series of procedures, including an inquiry into whether there are other ways to reduce the associated risk. Regardless, the DoD has a six-month-long taper-off period with Anthropic’s software. The real problem will be if Hegseth is serious about the stuff that isn’t legally within his power — namely, barring contractors, suppliers or partners from working with Anthropic entirely. While no legal authority exists to carry this through, seemingly every tech CEO has lined up to kiss up to the Trump Administration. If Hegseth and the administration truly wanted to punish Anthropic, they could put pressure on Amazon, Microsoft and Google to cut off Anthropic, which would cut it off from its entire compute operation — and yes, all three of them do business with the US military, as does Broadcom, which is building $21 billion in TPUs for it. While I think it’s far more likely that the US government itself shuts the door on Anthropic working with it for the foreseeable future even without the supply chain risk designation, it’s worth noting that Hegseth was quite explicit — “no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.”

The reality of the negotiations was a little simpler, per The Atlantic. The Department of Defense had agreed to terms around not using Claude for mass domestic surveillance or fully autonomous killing machines (the former of which it’s not particularly good at, and the latter of which it flat out cannot do), but, well, actually very much intended to use Claude for domestic surveillance anyway:

Now, I’m about to give you another quote about autonomous weapons, and I really want you to pay attention to where I emphasize certain things, for a subtle clue about Anthropic’s ethics:

So, let’s be clear: Anthropic wants to help the military make more accurate kill drones, and in fact loves them. One might take this to be somewhat altruistic — Dario Amodei doesn’t want the US military to hit civilians — but remember: Anthropic is totally fine with the US military using Claude for anything else, even though hallucinations are an inevitable result of using a Large Language Model.
Any dithering around the accuracy of a drone exists only to obfuscate that Anthropic sells software that helps militaries hand over messy ethical decisions to a chatbot that exists specifically to tell you what you want to hear.

Stinky, nasty, duplicitous conman Sam Altman smelled blood amidst these negotiations and went in for the kill, striking a deal on Friday with the Pentagon for ChatGPT and OpenAI’s other models to be used in the military’s classified systems, with initial reports saying that it had “similar guardrails to those requested by Anthropic.” In a post about the contract, Clammy Sammy said that the DoD displayed “a deep respect for safety and a desire to partner to achieve the best possible outcome,” adding:

Undersecretary Jeremy Levin almost immediately countered this notion, saying that the contract “...flows from the touchstone of ‘all lawful use.’” This quickly created a diplomatic incident, where OpenAI decided that the best time to discuss the contract was an entire Saturday, and that the way to discuss it was by posting. It shared some details on the contract, which included the fatal phrase that the Department of Defense “...may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols.” Per The Verge’s Hayden Field:

As questions mounted about the actual terms of the deal, Sam Altman realized that his only solution was to post, and at 4:13PM PT on Saturday February 28 2026, he sat down to make things significantly worse in a brief-yet-chaotic AMA (highlights at the end of this piece). All of this is to say that Altman definitely, absolutely loves war, and wants OpenAI to make money off of it, though according to OpenAI NatSec head Katrina Mulligan, said contract is only worth a few million dollars.

Where this lands is unclear. A late-evening story from Axios on Monday reported that “OpenAI and the Pentagon have agreed to strengthen their recently agreed contract, following widespread backlash that domestic mass surveillance was still a real risk under the deal — though the language has not been formally signed.” The language seen by Axios states:

One has to wonder how different this is to what Anthropic wanted, but if I had to guess, it’s those words “intentionally” and “deliberate.” The same goes for “consistent with applicable laws.” One useful thing that Altman confirmed was that ChatGPT will not be used with the NSA…and that any services to those agencies would require a follow-on modification to the contract. Doesn’t mean they won’t sign one!

Forgive me for being cynical about something from Sam fucking Altman, but I just don’t trust the guy, and this is an (as of writing this sentence) unsigned contract with bus-sized loopholes. Per Tyson Brody (who has a great thread breaking down the issues), these weasel words allow the DoD to surveil Americans as long as the data is collected “incidentally,” per Section 702 of FISA. This announcement gives OpenAI the air cover to pretend it got exactly the same deal as Anthropic, even though those nasty little words allow the DoD to do just about anything it wants. Oh, it wasn’t deliberate surveillance, we just looked up whether some people had said stuff about the administration. Oh, it wasn’t deliberate looking, I just asked it to find suspicious people, of which domestic people happened to be a part! Whoopsie!
This is ultimately a PR move to make Altman seem more ethical, and to position Amodei as a pedant that rejects his patriotism and prioritizes legalese over freedom. If it kills Anthropic, we must memorialize this as one of the most underhanded and outright nasty things in the history of Silicon Valley. If it doesn’t, we should memorialize it as two men desperately trying to pretend they crave peace and democracy as they spar for the opportunity to monetize death and destruction.

The funniest outcome of this chaos is that many people are very, very angry at Sam Altman and OpenAI, assuming that ChatGPT was somehow used in the conflict in Iran, and that Amodei and Anthropic somehow took a stand against a war that Anthropic used as a means of generating revenue. In reality, we should loathe both Altman and Amodei for their natural jingoism and continual deception. Amodei and Anthropic timed their defiance of the Department of Defense to make it seem like its “red lines” were related to the war. I think it’s good they have those red lines, but remember, those red lines do not involve stopping a war that threatens the lives of millions of people. Amodei supports that. Anthropic both supports and enables that. Altman, on the other hand, is a slimy little creep that wants you to believe that he signed the same deal as Anthropic wanted, but actually signed one that allows “any lawful use.”

In both cases, these men are enthusiastic to work with a part of the government calling itself the Department of War. Both of them are willing and able to provide technology that will surveil or kill people, and while Amodei may have blushed at something to do with autonomous weapons or domestic surveillance, neither appears to have an issue with the actual harms that their models perpetuate. Remember: Anthropic just pitched its technology as part of an ongoing Department of Defense drone swarm contest. It loves war! Its only issue was that there wasn’t a human in the loop somewhere. Neither of these men deserves a shred of credit or celebration. Both of them were and are ready and willing to monetize war, as long as it sort-of-kind-of follows the law.

And rattling around at the bottom of this story is a dark problem caused by the fanciful language of both Altman and Amodei. When it’s about cloud software, Dario Amodei is more than willing to say that it will cause “mass elimination of jobs across technology, finance, law and consulting,” and that it will replace half of all white collar labor. When it’s time to raise money, Altman is excited to tell us that AI will surpass human intelligence in the next four years. Now that lives are theoretically at stake, Altman vaguely cares about the things that an LLM “isn’t very good at.” Once Claude is used to choose places to bomb and people to kill, suddenly Anthropic cares that “frontier AI systems are simply not reliable enough,” and even then not so much as to stop a chatbot that hallucinates from being used in military scenarios.

Altman and Amodei want it both ways. They want to be pop culture icons that go on Jimmy Fallon, and thought leaders who tell ghost stories about indeterminately-powerful software they sell through deceit and embellishment. They want to be pontificators and spokespeople, elder statesmen that children look up to, with the specious profiles and glowing publicity to boot.
They want Claude or ChatGPT to be seen as capable of doing anything that any white collar worker is capable of, even if they have to lie to do so, helped by a tech and business media asleep at the wheel. They also want to be as deeply connected to the military industrial complex as Lockheed Martin or RTX (née Raytheon). Anthropic has been working with the DoD since 2024, and OpenAI was so desperate to take its place that Altman has immolated part of his reputation to do so.

Both of these companies are enthusiastic parts of America’s war machine. This is not an overstatement — Dario Amodei and Anthropic “believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.” OpenAI and Sam Altman are “terrified of a world where AI companies act like they have more power than the government.” For all the stories about Anthropic creating a “nation of benevolent AI geniuses,” Dario Amodei seems far more interested in creating a world dictated by what the United States of America deems to be legal or just, and providing services to help pursue those goals, as does OpenAI and, I’d argue, basically every AI lab.

We’re barely two weeks removed from the agonizing press around Amanda Askell, Anthropic’s “resident philosopher,” whose job, per the Wall Street Journal, is to “teach Claude how to be good.” There is no mention in any story I can find of what she might teach Claude about which targets are considered fair game in military combat. WIRED’s profile of her starts with a title that aged like milk in the sun, asking whether “the only thing standing between humanity and an AI apocalypse…is Claude?” Tell that to the people in Tehran. I wonder what Askell taught Claude to say about war? I wonder what she taught Claude to say about democracy? I wonder if she even gives a shit. I doubt it.

—

Generative AI isn’t intelligent, but it allows people to pretend that it is, especially when the people selling the software — Altman and Amodei — so regularly overstate what it can do. By giving warmongers and jingoists the cover to “trust” this “authoritative” service — whether or not that’s the case, they can simply point to the specious press — the question of whether or not an attack was ethical is now, whenever any western democracy needs it to be, something that can be handed off to Claude, and justified with the cold, logical framing of “intelligence” and “data.”

None of this would be possible without the consistent repetition of the falsehoods peddled by OpenAI and Anthropic. Without this endless puffery and these overstatements about the “power of AI,” we wouldn’t have armed conflicts dictated by what a chatbot can burp up from the files it’s fed. The deaths that follow will be a direct result of those who choose to continue to lie about what an LLM does.

Make no mistake: LLMs are still incapable of unique ideas, and are still, outside of coding (which requires massive subsidies to even be kind of useful), questionable in their efficacy and untrustworthy in their outputs. Nothing about the military’s use of Claude makes it more useful or powerful than it was before — they’re probably just loading files into it, asking it long questions about things, and going “huh” at the end. The vulgar dishonesty of Altman and Amodei puts blood on both of their hands, and it’s the duty of every single member of the media to remind people of this whenever they discuss this software.
I get that you probably think I’m being dramatic, but tell me — do you think that the US military would’ve trusted LLMs had they not been marketed as capable of basically anything? Do you think any of this would’ve happened had there been an honest, realistic discussion of what AI can do today, and what it might do tomorrow? I guess we’ll never know, and the people blown to bloody pieces at the other end of an LLM-generated stratagem won’t be alive to find out either.

The CoreWeave maths from earlier:

In Q3 2025, CoreWeave had $1.36bn in revenue on 590MW of compute, working out to $2.3m per megawatt. In Q4 2025, CoreWeave had $1.57bn in revenue on 850MW of compute, working out to $1.847m per megawatt.

And the “supercycle” talking points, with the ways to counteract them:

“OpenAI had $13.1bn in revenue in 2025! They only lost $8bn!” Did it? Based on my own reporting, which has been ignored (I guess it’s easier to do that than think about it?) by much of the press, OpenAI made $4.33bn through the end of September, and spent $8.67bn on inference in that period. Notice how I said “inference.” Training costs, data costs, and, simply, the costs of doing business are in addition to that.

“OpenAI has 900m weekly active users!” Yeah, everybody is talking about AI 24/7, and ChatGPT is the one everybody talks about.

“Google Gemini has 750m users!” Google changed Google Assistant to Gemini on literally everything, including Google Home, and force-fed it to users of Google Docs and Google Search.

“Claude Code is changing the world! It’s writing SaaS now! It’s replacing all coders!” As I discussed both at the beginning of the Hater’s Guide To Private Equity and in my free newsletter last week, software is not as simple as spitting out code, and neither is it able to automatically clone the SaaS experience. Midwits and the illiterate claim that this somehow defeats my previous theses, where I allegedly said the word “useless.” While I certainly goofed claiming generative AI had three quarters left in March 2024, my argument was that I thought that “generative AI [wouldn’t become] a society-altering technology, but another form of efficiency-driving cloud computing software that benefits a relatively small niche of people,” and I have said that people really do use LLMs for coding. Even Claude Code, the second coming of Christ in the minds of some of Silicon Valley’s most concussed boosters, only made $203m in monthly revenue ($2.5bn ARR) for a product that at times involves Anthropic spending anywhere from $8 to $13.50 for every dollar it makes.

“People doubted Amazon, but it made lots of money in the end!” No they didn’t. Benedict Evans defended Amazon’s business model. Jay Yarow of Business Insider defended it too. Practical Ecommerce called Amazon Web Services “Amazon’s cash cow” in October 2013. In April 2013, WIRED’s Marcus Wohlsen managed to name one skeptic — Paulo Santos, based in Portugal, who appears to have dropped off the map after 2024, but remained a hater long after AWS hit profitability in 2009. I cannot find any other skeptics of Amazon, and I cannot for the life of me find a single skeptic of AWS itself.

“AWS cost a lot of money, so we should spend so much money on AI!” I’m sick and fucking tired of this point, so I went and did the work, which you can view here, to find every single year of capex that Amazon spent. When you add together all of Amazon’s capital expenditures between 2002 and 2017, which encompasses AWS’s internal launch, its 2006 public launch, and it becoming profitable in 2015, you get $37.8bn in total capex (or $52.1bn adjusted for inflation).
For some context, OpenAI raised around $42bn in 2025 alone. The fact that we have multiple different supposedly well-informed journalists making the “Amazon spent lots of money!” point to this day is a sign that we’re fundamentally living in hell.

The actual terms of that “$110bn” round, promised earlier:

OpenAI raised $15bn from Amazon, with $35bn contingent on AGI or an IPO. OpenAI got commitments from SoftBank and NVIDIA, who may or may not have committed to $30bn each, and will be paying in three installments. Please note that CNBC authoritatively reported in September that “the initial $10 billion tranche locked in at a $500 billion valuation was expected to close within a month” for a deal that was only ever a Letter of Intent. This is why it’s important not to report things as closed before they’re closed. As of right now, evidence suggests that nobody has actually sent OpenAI any money. Per NVIDIA’s 10-K filed last week, it is (and I quote) “...finalizing an investment and partnership agreement with OpenAI [and] there is no assurance that we will enter into an investment and partnership agreement with OpenAI or that a transaction will be completed.”

It’s going to be interesting seeing how SoftBank funds this. It funded OpenAI’s last $7.5bn check with part of the proceeds from a $15bn, one-year-long bridge loan, and the remaining $22.5bn by selling its $5.83bn in NVIDIA stock and taking a $13.5bn margin loan against its ARM stock. Nevertheless, per its own statement, SoftBank intends to pay OpenAI $10bn on April 1 2026, July 1 2026, and October 1 2026, all out of the Vision Fund 2. Its statement also adds that “the Follow-on Investment is expected to be financed initially through bridge loans and other financing arrangements from major financial institutions, and subsequently replaced over time through the utilization of existing assets and other financing measures.”

And the cash maths: per The Information, OpenAI was at $17.5bn in cash and cash equivalents at the end of June 2025, having just raised $10bn from SoftBank and other investors. OpenAI would raise another $8.3bn on August 1 2025, bringing that cash and equivalents pile to $25.8bn, assuming it remained untouched. OpenAI would raise another $22.5bn from SoftBank on December 31 2025, bringing the total to $48.3bn. In the second half of the year, OpenAI would (allegedly) make another $8.8bn, which would bring us up to $57.1bn — with a total year loss of either $9bn or $8bn, depending on whether you believe The Information or CNBC.

But wait, that doesn’t make sense as a total year loss! Let’s look at the first half numbers again. When we take the raw cost of inference ($2.5bn) and training ($6.7bn) and subtract revenue ($4.3 billion), we’re left with a $4.9bn loss just for the first half, and that’s before you include things like headcount, sales and marketing, and general operating expenses, which (per The Information) amounted to $2bn in the first half of the year.

Now, let’s run these numbers again, but with my napkin math estimates — $23.58bn in training costs and $5.1bn in inference costs, for a total of $28.68bn. Add another $2bn in sales and marketing costs, $1.76bn in revenue share to Microsoft (20% of $8.8bn), guesstimated cash salaries for OpenAI’s staff (based on them being around 17.5% of the company’s revenue in 2024) of $1.54bn, SG&A costs (about 15% in 2024) of $1.32bn, data costs (12.5% in 2024) of about $1.1bn, and hosting costs (10% in 2024) of about $880m, and we’re at around $37bn — leaving OpenAI with about…$20bn in cash at the end of the year.
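Here’s the whole cost stack as a sketch you can rerun; every percentage is a 2024 ratio applied to the alleged 2025 second-half revenue, which is the stated assumption, not a disclosed cost line.

```python
# Napkin math: estimated second-half 2025 costs, in $bn.
# Every line is an estimate built on reported (and unaudited) figures.
h2_revenue = 8.8

costs = {
    "training ($2.68 per dollar of revenue)": 23.58,
    "inference (~58 cents per dollar of revenue)": 5.1,
    "sales, marketing and general opex": 2.0,
    "Microsoft revenue share (20%)": 0.20 * h2_revenue,
    "cash salaries (~17.5% of revenue in 2024)": 0.175 * h2_revenue,
    "SG&A (~15% in 2024)": 0.15 * h2_revenue,
    "data costs (~12.5% in 2024)": 0.125 * h2_revenue,
    "hosting (~10% in 2024)": 0.10 * h2_revenue,
}
total = sum(costs.values())
print(f"Estimated H2 2025 costs: ~${total:.1f}bn")       # ~$37bn
print(f"Implied year-end cash: ~${57.1 - total:.1f}bn")  # ~$20bn
```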
In October 2024, The Information reported that OpenAI only burned $340m in the first half of 2024, that its “cash burn has been lower than previously thought,” that it “projected total losses from 2023 to 2028 to be $44 billion,” and that it would be EBITDA profitable (minus training costs, lol) in 2026. The piece also says OpenAI would make $14bn in profit in 2029, and somehow also burn $200bn by 2030. Confusingly, this piece said net losses were $3bn through the first half of 2024, but would go on to project a net loss for the year of $5.5bn!

In February 2025, The Information reported that OpenAI would make $12.7bn in 2025, with $3bn of that coming from SoftBank spending $3bn a year on its “agents,” something that never happened and nobody talks about anymore. The same piece said OpenAI would burn $7bn in 2025, and now expected to spend $320bn on compute between 2025 and 2030. Burn for 2026 is estimated at $8bn, and $20bn in 2027. Revenue for 2026 is estimated at $28bn.

In April 2025, The Information reported that OpenAI projected $174bn in revenue through 2030, and said that gross margins were 40% in 2024, would be 48% in 2025, and would hit 69% in 2029. Confusingly, the same piece says that OpenAI expects to burn $46bn in cash between 2025 and 2029, which does not make sense if you factor in any of the previously-discussed compute costs.

In early September 2025, The Information would report that — psyche! — OpenAI would actually burn $115bn through 2029, with the plan to burn $35bn in 2027 and $45bn in 2028, which is a lot higher than “$44bn in five years.” Revenue for 2026 is now $30bn.

In late September 2025, The Information would report that OpenAI had a net loss of $13.5bn in the first half of 2025, with revenue estimates of $30bn in 2026 and $62bn in 2027.

On February 20, 2026, The Information reported that OpenAI would actually burn $230bn through 2030, that cloud costs would be $665bn, that gross margins got worse (33%! Down from the “46% it had set for itself,” or 48% if you count previously published projections), and that it would burn $26bn in 2026 alone, or more than half of the October 2024 projection for its burn rate between 2023 and 2028! (The drift across these reports is tabulated in the sketch below.)

And those Dario Amodei quotes from CBS:

On having political views: “We don't-- we don't have views-- we don't think about general political issues, and we try to work together whenever there's common ground.”

On being “woke”: “So this idea that we've somehow been partisan or that we haven't been evenhanded, we've been studiously evenhanded. And-- and again, we can't control if someone, even-- even the president, you know, ha-- has an opinion about us. That's not under our control. What's under our control is that we can be reasonable. We can be neutral. And we can stand up for what we believe.”

On what Anthropic believes: “We believe in-- defeating our autocratic adversaries. We believe in defending America. The red lines we have drawn, we drew because we-- we-- we-- we believe that crossing those red lines is-- is contrary to American values. And we wanted to stand up for American values.”

On the US government’s handling of the situation: “And that's why we're committed to standing up to-- you know, actions that we think are not in line with the values of this country. It's-- it's not about any particular person. It's not about any particular administration.
It's about the principle of standing up for what's right.”

Altman approving of non-domestic AI surveillance, saying he “didn’t like it” but “accepted it,” echoing The Day Today’s Peter O'Hanraha-hanrahan.

Altman saying that the supply chain risk designation would be “very bad for our industry and our country,” that “successfully building safe superintelligence and widely sharing the benefits is way more important than any company competition,” and that he “saw in some other tweet that [he] must not be willing to criticize the DoW (it said something about sucking their dick too hard to be able to say anything critical, but I assume this was the intent).”

Altman saying that the deal was rushed “as an attempt to de-escalate matters at a time when it felt like things could get extremely hot.” Yeah man, you’re really de-escalating the Anthropic situation by providing a replacement for its software.

Altman saying he was prepared to go to jail if OpenAI was asked to do something unconstitutional or illegal.

Altman saying that “...the people in our military are far more committed to the constitution than an average person off the streets,” and that he “didn’t think OpenAI was above the constitution either.”

Altman declaring that he did “...not believe unelected leaders of private companies should have as much power as our democratically elected government,” and that we should have sympathy for the Department of Defense because Anthropic had refused to help them and called them “kind of evil.”


Premium: The Hater's Guide to Private Equity

We have a global intelligence crisis, in that a lot of people are being really fucking stupid. As I discussed in this week’s free piece, alleged financial analyst Citrini Research put out a truly awful screed called the “2028 Global Intelligence Crisis” — a slop-filled scare-fiction written and framed with the authority of deeply-founded analysis, so much so that it caused a global selloff in stocks.

At 7,000 words, you’d expect the piece to have some sort of argument or basis in reality, but what it actually says is that “AI will get so cheap that it will replace everything, and then most white collar people won’t have jobs, and then they won’t be able to pay their mortgages, also AI will cause private equity to collapse because AI will write all software.”

This piece is written specifically to spook *and* ingratiate anyone involved in the financial markets with the idea that their investments are bad but investing in AI companies is good, and also that if they don't get behind whatever this piece is about (which is unclear!), they'll be subject to a horrifying future where the government creates a subsidy generated by a tax on AI inference (seriously). And, most damningly, its most important points about HOW this all happens are single sentences that read "and then AI becomes more powerful and cheaper too and runs on a device." Part of the argument is that AI agents will use cryptocurrency to replace MasterCard and Visa.

It’s dogshit. I’m shocked that anybody took it seriously. The fact this moved markets should suggest that we have a fundamentally flawed financial system — and here’s an annotated version with my own comments.

This is the second time our markets have been thrown into the shitter based on AI booster hype. A mere week and a half ago, a software sell-off began because of the completely fanciful and imaginary idea that AI would now write all software. I really want to be explicit here: AI does not threaten the majority of SaaS businesses, and the markets are jumping at ghost stories.

If I understand it correctly, those dumping software stocks believe that AI will replace these businesses because people will be able to code their own software solutions. This is an intellectually bankrupt position, one that shows an alarming (and common) misunderstanding of very basic concepts. It is not just a matter of “enough prompts until it does this” — good (or even functional!) software engineering is technical, infrastructural, and philosophical, and the thing you are “automating” is not just the code that makes a thing run.

Let's start with the simplest, least-technical way of putting it: even in the best-case scenario, you do not just type "Build Me A Salesforce Competitor" and watch it erupt, fully-formed, from your Terminal window. An LLM is not capable of building it, but even if it were, the result would need to actually be on a cloud hosting platform, and have all manner of actual customer data entered into it. Building software is not writing code, hitting enter, and a website appearing; it requires all manner of infrastructural things (such as "how does a customer access it in a consistent and reliable way," "how do I make sure that this can handle a lot of people at once," and "is it quick to access," with the more-complex database systems requiring entirely separate subscriptions just to keep them connecting).

Software is a tremendous pain in the ass.
You write code, then you have to make sure the code actually runs; that code in some cases needs to run on specific hardware, and that hardware needs to be set up right. Some things are written in different languages, and those languages sometimes use more memory or less memory, and if you give them the wrong amounts or forget to close the door in your code on something, everything breaks, sometimes costing you money or introducing security vulnerabilities.

In any case, even for experienced, well-versed software engineers, maintaining software that involves any kind of customer data requires significant investments in compliance, including things like SOC 2 audits if the customer itself ever has to interact with the system, as well as massive investments in security.

And yet, the myth that LLMs are an existential threat to existing software companies has taken root in the market, sending the share prices of the legacy incumbents tumbling. A great example would be SAP, down 10% in the last month.

SAP makes ERP (Enterprise Resource Planning, which I wrote about in the Hater's Guide To Oracle) software, and has been affected by the sell-off. SAP is also a massive, complex, resource-intensive database-driven system that involves things like accounting, provisioning and HR, and is so heinously complex that you often have to pay SAP just to make it function (and if you're lucky, it might even do so). If you were to build this kind of system yourself, even with "the magic of Claude Code" (which I will get to shortly), it would be an incredible technological, infrastructural and legal undertaking.

Most software is like this. I’d say all software that people rely on is like this. I am begging you, pleading with you to think about how much you trust the software that’s on every single thing you use, what you do when a piece of software stops working, and how you feel about the company that made it. If your money or personal information touches it, they’ve had to go through all sorts of shit that doesn’t involve the code to bring you the software.

Any company of a reasonable size would likely be committing hundreds of thousands if not millions of dollars in legal and accounting fees to make sure it worked, engineers would have to be hired to maintain it, and you, as the sole customer of this massive ERP system, would have to build every single new feature and integration you want. Then you'd have to keep it running, this massive thing that involves, in many cases, tons of personally identifiable information. You'd also need to make sure, without fail, that this system that involves money was aware of any and all currencies and how they fluctuate, because that is now your problem. Mess up that part and your system of record could massively over- or underestimate your revenue or inventory, which could destroy your business. If that happens, you won't have anyone to sue. When bugs happen, you'll have someone whose job it is to fix it that you can fire, but replacing them will mean finding a new person to fix the mess that another guy made.

And then we get to the fact that building stuff with Claude Code is not that straightforward. Every example you've read about somebody being amazed by it has been a toy app or website that's very similar to the many open source projects or website templates that Anthropic trained its models on.
Every single piece of SaaS anyone pays for is paying for both access to the product and a transfer of the inherent risk and chaos of running software that involves people or money. Claude Code does not actually build unique software. You can say "create me a CRM," but whatever CRM it pops out will not magically jump onto Amazon Web Services, nor will it magically be efficient, or functional, or compliant, or secure, nor will it be differentiated at all from, I assume, the open source or publicly-available SaaS it was trained on. You really still need engineers, if not more of them than you had before. It might tell you it's completely compliant and that it will run like a hot knife through butter — but LLMs don’t know anything, and you cannot be sure Claude is telling the truth as a result. Is your argument that you’d still have a team of engineers (so they know what the outputs mean), but they’d be working on replacing your SaaS subscription? You’re basically becoming a startup with none of the benefits.

To quote Nik Suresh, an incredibly well-credentialed and respected software engineer (author of I Will Fucking Piledrive You If You Mention AI Again): “...for some engineers, [Claude Code] is a great way to solve certain, tedious problems more quickly, and the responsible ones understand you have to read most of the output, which takes an appreciable fraction of the time it would take to write the code in many cases. Claude doesn't write terrible code all the time, it's actually good for many cases because many cases are boring. You just have to read all of it if you aren't a fucking moron because it periodically makes company-ending decisions.”

Just so you know, “company-ending decisions” could start with your vibe-coded Stripe clone leaking user credit card numbers or social security numbers because you asked it to “just handle all the compliance stuff” (there’s a sketch of what that looks like after this section). Even if you have very talented engineers, are those engineers talented in the specifics of, say, healthcare data or finance? They’re going to need to be to make sure Claude doesn’t do anything stupid!

So, despite all of this being very obvious, it’s clear that the markets and an alarming number of people in the media simply do not know what they are talking about. The “AI replaces software” story is literally “Anthropic has released a product and now the resulting industry is selling off,” such as when it launched a cybersecurity tool that could check for vulnerabilities (a product that has existed in some form for nearly a decade), causing a sell-off in cybersecurity stocks like CrowdStrike — you know, the one that had a faulty bit of code cause a global cybersecurity incident that lost the Fortune 500 billions, and led to Delta Air Lines suspending over 1,200 flights over six long days of disruption.

There is no rational basis for anything about this sell-off other than that our financial media and markets do not appear to understand the very basic things about the stuff they invest in. Software may seem complex, but (especially in these cases) it’s really quite simple: investors are conflating “an AI model can spit out code” with “an AI model can create the entire experience of what we know as ‘software,’ or is close enough that we have to start freaking out.” This is thanks to the intentionally-deceptive marketing peddled by Anthropic and validated by the media.
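Before we get to that marketing, let me make “company-ending decisions” concrete. Below is a hypothetical, deliberately simplified illustration (the function, fields, and logger are all invented for this example) of the kind of plausible-looking code an LLM will happily generate: it runs, it demos fine, and it is also a compliance disaster.

```python
# Hypothetical illustration only: code that works in a demo and ends a
# company in production.
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("payments")

def process_payment(card_number: str, cvv: str, ssn: str, amount: float) -> dict:
    # Helpful-looking audit logging! Except it writes raw card numbers, CVVs,
    # and social security numbers to your logs in plaintext. PCI DSS forbids
    # storing CVVs at all, ever -- and now every log aggregator, backup, and
    # third-party monitoring tool you use holds a breach waiting to happen.
    logger.info("Charging card %s (cvv %s, ssn %s) for $%.2f",
                card_number, cvv, ssn, amount)
    return {"status": "ok", "amount": amount}

# Well-known test values, not real data:
process_payment("4242424242424242", "123", "078-05-1120", 49.99)
```

Nothing here throws an error, fails a test, or looks wrong to someone skimming the diff, which is exactly why “you just have to read all of it” is non-negotiable.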
In a piece from September 2025, Bloomberg reported that Claude Sonnet 4.5 could “code on its own for up to 30 hours straight,” a statement directly from Anthropic, repeated by other outlets, that added that it did so “on complex, multi-step tasks,” none of which were explained. The Verge, however, added that apparently Anthropic “coded a chat app akin to Slack or Teams,” and no, you can’t see it, or know anything about how much it costs or its functionality. Does it run? Is it useful? Does it work in any way? What does it look like? We have absolutely no proof this happened other than them saying it, but because the media repeated it, it’s now a fact.

Perhaps it’s not a particularly novel statement, but it’s becoming kind of obvious that the people with the money don’t actually know what they’re doing, which will eventually become a problem when they all invest in the wrong thing for the wrong reasons.

SaaS (Software as a Service, which almost always refers to business software) stocks became a hot commodity because they were perpetual growth machines with giant sales teams that existed only to make numbers go up, leading to a flurry of investment based on the assumption that all numbers will always increase forever, and every market is as giant as we want. Not profitable? No problem! You just had to show growth. It was easy to raise money because everybody saw a big, obvious path to liquidity, either from selling to a big firm or taking the company public…

…in theory.

Per Victor Basta, between 2014 and 2017, the number of VC rounds in technology companies halved with a much smaller drop in funding, with a big part of that being rounds raised by companies describing themselves as SaaS, which dropped by 40% in the same period. In a 2016 chat with VC David Yuan, Gainsight CEO Nick Mehta added that “the bar got higher and weights shifted in the public markets,” citing that profitability was now becoming more important to investors.

Per Mehta, one savior had arrived — private equity, with Thoma Bravo buying Blue Coat Systems in 2011 for $1.3 billion (which had been backed by a Canadian teachers’ pension fund!), Vista Equity buying Tibco for $4.3 billion in 2014, and Permira Advisers (along with the Canada Pension Plan Investment Board) buying Informatica for $5.3 billion (with participation from both Salesforce and Microsoft) in 2015, 16 years after its first IPO. In each case, these companies were purchased using debt that was immediately dumped onto the acquired company’s balance sheet — a structure known as a leveraged buyout.

In simple terms, you buy a company with money that the company you just bought has to pay off (there’s a sketch of the mechanics below). The company in question also has to grow like gangbusters to keep up with both that debt and the private equity firm’s expectations. And instead of being an investor with a board seat who can yell at the CEO, it’s quite literally your company, and you can do whatever you want with (or to) it.

Yuan added that the size of these deals made the acquisitions problematic, as did their debt-heavy structures. Symantec would acquire Blue Coat for $4.65 billion in 2016, for just under a 4x return. Things were a little worse for Tibco. Vista Equity Partners tried to sell it in 2021 amid a surge of other M&A transactions, with the solution — never change, private equity! — being to buy Citrix for $16.5 billion (a 30% premium on its stock price) and merge it with Tibco, magically fixing the problem of “what do we do with Tibco?” by hiding it inside another transaction.
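To see why these structures demand relentless growth, here’s a minimal sketch of the leveraged buyout math. Every number below is made up for illustration:

```python
# Hypothetical numbers: a $5bn buyout, mostly funded with debt that lands on
# the acquired company's own balance sheet.
purchase_price = 5.0                  # $bn
equity_check = 1.5                    # the PE firm's own money
debt = purchase_price - equity_check  # $3.5bn, now the company's problem
interest_rate = 0.08                  # assumed cost of that debt

annual_interest = debt * interest_rate  # before touching the principal
ebitda = 0.4                            # $bn, assumed pre-buyout earnings

print(f"Annual interest bill: ${annual_interest:.2f}bn")                      # $0.28bn
print(f"Share of earnings eaten by interest: {annual_interest / ebitda:.0%}") # 70%
# To service the debt AND deliver the multiple the firm promised its backers,
# the company has to grow dramatically -- or it becomes a zombie.
```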
Informatica eventually had a $10 billion IPO in 2021, which was flat in its first day of trading, never really did more than sit at its IPO price, then sold to Salesforce in 2025 at an equity value of $8 billion — which seems fine but not great until you realize that, with inflation, the $5.3 billion that Permira invested in 2015 was about $7.15 billion in 2025’s money.

In every case, the assumption was very simple: these businesses would grow and own their entire industries, the PE firm would be the reason they did this (by taking them private and filling them full of debt while making egregious growth demands), and the meteoric growth of SaaS would continue in perpetuity.

Yet the real year that broke things was 2021. As everybody returned to the real world, consumer and business spending skyrocketed, leading (per Bloomberg) to a massive surge in revenues that convinced private equity to shove even more cash and debt up the ass of SaaS:

Bloomberg is a little nicer than I am, so they’re not just writing “deals were waved through because everybody assumed that software grows forever and nobody actually knew a thing about the technology or why it would grow so fast.” Unsurprisingly, this didn’t turn out to be true. Per The Information, PE firms invested in or bought 1,167 U.S. software companies for $202 billion, and usually hold investments for three to five years. Thankfully, they also included a chart to show how badly this went.

2021 was the year of overvaluation, and (per Jason Lemkin of SaaStr) 60% of unicorns (startups with $1bn+ valuations) hadn’t raised funds in years. The massive accumulated overinvestment, combined with no obvious pathway to an exit, led to people calling these companies “Zombie Unicorns.” The problem, to quote The Information, is that “PE firms don’t want to lock in returns that are lower than what they promised their backers, say some executives at these firms,” and “many enterprise software firms’ revenue growth has slowed.” Per CNBC in November 2025, private equity firms were facing the same zombie problem.

Per Jason Lemkin, private equity is sitting on its largest collection of companies held for longer than four years since 2012, with McKinsey estimating that more than 16,000 companies (more than 52% of the total buyout-backed inventory) had been held by private equity for more than four years, the highest on record. In very simple terms, there are hundreds of billions of dollars’ worth of tech companies sitting in the wings of private equity firms that they’re desperate to sell, with the only buyers being big tech firms, other private equity firms, and public offerings in one of the slowest IPO markets in history.

Investing used to be easy. There were so many ideas for so many companies, companies that could be worth billions of dollars once they’d been fattened up with venture capital and/or private equity. There were tons of acquirers, it was easy to take them public, and all you really had to do was exist and provide capital. Companies didn’t have to be good, they just had to look good enough to sell. This created a venture capital and private equity industry based on symbolic value, and chased out anyone who thought too hard about whether these companies could actually survive on their own merits. Per PitchBook, since 2022, 70% of VC-backed exits were valued at less than the capital put in, with more than a third of them being startups buying other startups in 2024.
Private equity firms are now holding assets for an average of seven years, and McKinsey added one horrible detail for the overall private equity market (emphasis mine):

You see, private equity is fucking stupid, doesn’t understand technology, doesn’t understand business, and by setting up its holdings with debt based on the assumption of unrealistic growth, it has created a crisis for both software companies and the greater tech industry.

On February 6, more than $17.7 billion of US tech company loans dropped to “distressed” trading levels (as in, trading as if traders don’t believe they’ll get paid, per Bloomberg), growing the overall group of distressed tech loans to $46.9 billion, “dominated by firms in SaaS.” These firms included huge investments like Thoma Bravo’s Dayforce (which it purchased for $12.3 billion two days before this story ran) and Calabrio (which it acquired for “over” $1 billion in April 2021 and merged with Verint in November 2025).

This isn’t just about the shit they’ve bought, but about the destruction of the concept of “value” in the tech industry writ large. “Value” was not based on revenues, or your product, or anything other than your ability to grow and, ideally, trap as many customers as possible, with the vague sense that there would always be infinitely more money every year to spend on software.

Revenue growth came from massive sales teams compensated with heavy commissions and yearly price increases, except things have begun to sour, with renewals now taking twice as long to complete, and overall SaaS revenue growth slowing for years. To put it simply, much of the investment in software was based on the idea that software companies will always grow forever, and SaaS companies — which have “sticky” recurring revenues — would be the standard-bearer.

When I got into the tech industry in 2008, I immediately became confused about the number of unprofitable or unsustainable companies that were worth crazy amounts of money, and for the most part I’d get laughed at by reporters for being too cynical. For the best part of 20 years, software startups have been seen as eternal growth-engines. All you had to do was find product-market fit, get a few hundred customers locked in, up-sell them on new features, and grow in perpetuity as you conquered a market. The idea was that you could just keep pumping them with cash, hiring as many pre-sales (the technical person who makes the sale), sales, and customer experience (read: helpful person who also loves to tell you about more stuff) people as you need to both retain customers and sell them as much stuff as possible.

Innovation was, as you’d expect, judged entirely by revenue growth and net revenue retention. On paper, this sounds reasonable: how much of last year’s revenue are you retaining — and growing — from your existing customers (there’s a sketch of the calculation below)? The problem is that this is a very easy stat to game, especially if you’re using it to raise money, because you can move customer billing periods around to make sure that things all continue to look good. Even then, per research by Jacco van der Kooji and Dave Boyce, net revenue retention is dropping quarter over quarter.

The other problem is that the entire process of selling software has become separated from the end-user, which means that products (and sales processes) are oriented around selling that software to the person responsible for buying it rather than those doomed to use it.
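For reference, here’s the standard net revenue retention calculation with hypothetical numbers; the ease of gaming it is visible right in the inputs:

```python
# Net revenue retention (NRR) for one customer cohort, hypothetical $m figures.
starting_arr = 100.0  # annual recurring revenue from this cohort a year ago
expansion = 20.0      # upsells, new seats, price increases
contraction = 5.0     # downgrades
churn = 8.0           # customers who left entirely

nrr = (starting_arr + expansion - contraction - churn) / starting_arr
print(f"NRR: {nrr:.0%}")  # 107% -- reads as "healthy, growing SaaS"

# The gaming: pull a renewal or an annual prepay forward into the measurement
# window and "expansion" rises with zero new underlying demand.
```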
Nik Suresh’s Brainwash An Executive Today describes a conversation with the Chief Technology Officer of a company with over 10,000 people, who asked whether “data observability” — a thing that they did not (and, in their position, would not need to) understand — was a problem, and whether Nik had heard of Monte Carlo. It turned out that the executive in question had no idea what Monte Carlo or data observability was, but because they’d heard about it on LinkedIn, it was now all they could think about.

This is the environment that private equity bought into — a seemingly-eternal growth engine with pliant customers desperate to spend money on a product that didn’t have to be good, just functional-enough. These people do not know what they are talking about or why they are buying these companies, other than being able to mumble out shit like “ARR” and “NRR” and “TAM” and “CAC” and “ARPA” in the right order to convince themselves that something is a good idea without ever thinking about what would happen if it wasn’t. This allowed them to stick to the “big picture,” meaning “numbers that I can look at rather than any practical experience in software development.”

While I guess the concept of private equity isn’t morally repugnant, its current form — which includes venture capital — has led the modern state of technology into the fucking toilet, combining an initial influx of viable businesses with frothy markets and zero interest rates that made it deceptively easy to raise money to acquire and deploy capital, leading to brainless investing, the death of logical due diligence, and potentially ruinous consequences for everybody involved.

Private equity spent decades buying a little bit of just about everything, enriching the already-rich by engaging with the most vile elements of the Rot Economy’s growth-at-all-costs mindset. Its success is predicated on near-perpetual levels of liquidity and growth in both its holdings and the holdings of those who exist only to buy their stock, and on a tech and business media that doesn’t think too hard about the reality of the problems their companies claim to solve. The reckoning that’s coming is one built specifically to target the ignorant hubris that made them rich.

Private equity has yet to be punished by its limited partners and banks for investing in zombie assets, allowing it to pile into the unprofitable data centers underpinning the AI bubble, meaning that companies like Apollo, Blue Owl and Blackstone — all of whom participated in the ugly $10.2 billion acquisition of Zendesk in 2022 (after it rejected another PE offer of $17 billion in 2021) that included $5 billion in debt — have all become heavily-leveraged in giant, ugly debt deals covering assets that will be obsolete to useless in a few years.

Alongside the fumbling ignorance of private equity sits the $3 trillion private credit industry, an equally-putrid, growth-drunk, and poorly-informed industry run with the same lax attention to detail and Big Brain Number Models that can justify just about any investment they want. Their half-assed due diligence led to billions of dollars of loans being given to outright frauds like First Brands, Tricolor and PosiGen, and, to paraphrase JP Morgan’s Jamie Dimon, there are absolutely more fraudulent cockroaches waiting to emerge.

You may wonder why this matters, as all of this is private credit. Well, they get their money from banks. Big banks.
In fact, according to the Federal Reserve of Boston, about 14% ($300 billion) of large banks’ total loan commitments to non-banking financial institutions in 2023 went to private equity and private credit, with Moody’s pegging the number at around $285 billion, plus an additional $340 billion in unused-yet-committed cash waiting in the wings.

Oh, and they get their money from you. Pension funds are among the biggest backers of private credit companies, with the New York City Employees’ Retirement System and CalPERS increasing their investments.

Today, I’m going to teach you all about private equity, private credit, and why years of reframing “value” to mean “growth” may genuinely threaten the global banking system — as well as, effectively, how every company raises money. An entirely-different system exists for the wealthy to raise and deploy capital, one with flimsy due diligence, a genuine lack of basic industrial knowledge, and hundreds of billions of dollars of crap it can’t sell.

These people have been able to raise near-unlimited capital to do basically anything they want because there was always somebody stupid enough to buy whatever they were selling, and they have absolutely no plan for what happens when their system stops working. They’ll loan to anyone or invest in anything that confirms their biases, and those biases are equal parts moronic and malevolent. Now they’re investing teachers’ pensions and insurance premiums in unprofitable and unsustainable data centers, all because they have no idea what a good investment actually looks like.

Welcome to the Hater’s Guide To Private Equity, or “The Stupidest Assholes In The Room.”


On NVIDIA and Analyslop

Hey all! I’m going to start hammering out free pieces again after a brief hiatus, mostly because I found myself trying to boil the ocean with each one, fearing that if I regularly emailed you, you’d unsubscribe. I eventually realized how silly that was, so I’m back, and will be back more regularly. I’ll treat it like a column, which will be both easier to write and a lot more fun.

As ever, if you like this piece and want to support my work, please subscribe to my premium newsletter. It’s $70 a year, or $7 a month, and in return you get a weekly newsletter that’s usually anywhere from 5,000 to 18,000 words, including vast, extremely detailed analyses of NVIDIA, Anthropic and OpenAI’s finances, and the AI bubble writ large. I am regularly several steps ahead in my coverage, and you get an absolute ton of value. In the bottom right hand corner of your screen you’ll see a red circle — click that and select either monthly or annual. Next year I expect to expand to other areas too. It’ll be great. You’re gonna love it.

Before we go any further, I want to remind everybody I’m not a stock analyst, nor do I give investment advice. I do, however, want to say a few things about NVIDIA and its annual earnings report, which it published on Wednesday, February 25:

NVIDIA’s entire future is built on the idea that hyperscalers will buy GPUs at increasingly-higher prices and at increasingly-higher rates every single year. It is completely reliant on maybe four or five companies being willing to shove tens of billions of dollars a quarter directly into Jensen Huang’s wallet. If anything changes here — such as difficulty acquiring debt or investor pressure cutting capex — NVIDIA is in real trouble, as it’s made over $95 billion in commitments to build out for the AI bubble.

Yet the real gem was this part:

Hell yeah dude! After misleading everybody that it intended to invest $100 billion in OpenAI last year (as I warned everybody about months ago, the deal never existed and is now effectively dead), NVIDIA was allegedly “close” to investing $30 billion. One would think that NVIDIA would, after Huang awkwardly tried to claim that the $100 billion was “never a commitment,” say with its full chest how badly it wanted to support OpenAI and how intent it was on doing so. Especially when you have this note in your 10-K:

What a peculiar world we live in. Apparently NVIDIA is “so close” to a “partnership agreement” too, though it’s important to remember that Altman, Brockman, and Huang went on CNBC to talk about the last deal, and that never came together. All of this adds a little more anxiety to OpenAI's alleged $100 billion funding round which, as The Information reports, will see Amazon's alleged $50 billion investment actually come in at $15 billion, with the next $35 billion contingent on AGI or an IPO:

And that $30 billion from NVIDIA is shaping up to be a Klarna-esque three-installment payment plan:

A few thoughts:

Anyway, on to the main event. New term: analyslop, when somebody writes a long, specious piece of writing with few facts or actual statements, with the intention of it being read as thorough analysis. This week, alleged financial analyst Citrini Research (not to be confused with Andrew Left’s Citron Research) put out a truly awful piece called the “2028 Global Intelligence Crisis,” a slop-filled scare-fiction written and framed with the authority of deeply-founded analysis, so much so that it caused a global selloff in stocks.
This piece — if you haven’t read it, please do so using my annotated version — spends 7,000 or more words telling the dire tale of what would happen if AI made an indeterminately-large number of white collar workers redundant. It isn’t clear what exactly the AI does, who makes the AI, or how the AI works, just that it replaces people, and then bad stuff happens. Citrini insists that this “isn’t bear porn or AI-doomer fan-fiction,” but that’s exactly what it is — mediocre analyslop framed in the trappings of analysis, sold on a Substack with “research” in the title, specifically written to spook and ingratiate anyone involved in the financial markets.

Its goal is to convince you that AI (non-specifically) is scary, that your current stocks are bad, and that AI stocks (unclear which ones those are, by the way) are the future. Also, find out more for $999 a year. Let me give you an example:

The goal of a paragraph like this is for you to say “wow, that’s what GPUs are doing now!” It isn’t, of course. The majority of CEOs report little or no return on investment from AI, with a study of 6,000 CEOs across the US, UK, Germany and Australia finding that “more than 80% [detected] no discernable impact from AI on either employment or productivity.” Nevertheless, you read “GPU” and “North Dakota” and you think “wow! That’s a place I know, and I know that GPUs power AI!” I know a GPU cluster in North Dakota — CoreWeave’s facility with Applied Digital, which has debt so severe that it loses both companies money even if they have the capacity rented out 24/7. But let’s not let facts get in the way of a poorly-written story.

I don’t need to go line-by-line — mostly because I’ll end up writing a legally-actionable threat — but I need you to know that most of this piece’s arguments come down to magical thinking and utterly empty prose. For example, how does AI take over the entire economy?

That’s right, they just get better. No need to discuss anything happening today. Even AI 2027 had the balls to start making stuff up about “OpenBrain” or whatever. This piece literally just says stuff, including one particularly-egregious lie:

This is a complete and utter lie. A bald-faced lie. This is not something that Claude Code can do. The fact that we have major media outlets quoting this piece suggests that those responsible for explaining how things work don’t actually bother to do any of the work to find out, and it’s both a disgrace and an embarrassment for the tech and business media that these lies continue to be peddled.

I’m now going to quote part of my upcoming premium (the Hater’s Guide To Private Equity, out Friday), because I think it’s time we talked about what Claude Code actually does. I’ve worked in or around SaaS since 2012, and I know the industry well. I may not be able to code, but I take the time to speak with software engineers so that I understand what things actually do and how “impressive” they are. Similarly, I make the effort to understand the underlying business models in a way that I’m not sure everybody else is trying to, and if I’m wrong, please show me an analysis of the financial condition of OpenAI or Anthropic from a booster. You won’t find one, because they’re not interested in interacting with reality.

So, despite all of this being very obvious, it’s clear that the markets and an alarming number of people in the media simply do not know what they are talking about, or are intentionally avoiding thinking about it.
The “AI replaces software” story is literally “Anthropic has released a product and now the resulting industry is selling off,” such as when it launched a cybersecurity tool that could check for vulnerabilities (a product that has existed in some form for nearly a decade), causing a sell-off in cybersecurity stocks like CrowdStrike — you know, the one that had a faulty bit of code cause a global cybersecurity incident that lost the Fortune 500 billions, and resulted in Delta Air Lines cancelling over 1,200 flights over a period of several days.

There is no rational basis for anything about this sell-off other than that our financial media and markets do not appear to understand the very basic things about the stuff they invest in. Software may seem complex, but (especially in these cases) it’s really quite simple: investors are conflating “an AI model can spit out code” with “an AI model can create the entire experience of what we know as ‘software,’ or is close enough that we have to start freaking out.” This is thanks to the intentionally-deceptive marketing peddled by Anthropic and validated by the media.

In a piece from September 2025, Bloomberg reported that Claude Sonnet 4.5 could “code on its own for up to 30 hours straight,” a statement directly from Anthropic, repeated by other outlets, that added that it did so “on complex, multi-step tasks,” none of which were explained. The Verge, however, added that apparently Anthropic “coded a chat app akin to Slack or Teams,” and no, you can’t see it, or know anything about how much it costs or its functionality. Does it run? Is it useful? Does it work in any way? What does it look like? We have absolutely no proof this happened other than Anthropic saying it, but because the media repeated it, it’s now a fact.

As I discussed last week, Anthropic’s primary business model is deception, muddying the waters of what’s possible today and what might be possible tomorrow through a mixture of flimsy marketing statements and chief executive Dario Amodei’s doomerist lies about all white collar labor disappearing. Anthropic tells lies of obfuscation and omission. Anthropic exploits bad journalism, ignorance and a lack of critical thinking. As I said earlier, the “wow, Claude Code!” articles are mostly from captured boosters and people that do not actually build software, amazed that it can burp up its training data and do an impression of software engineering.

And even if we believe the idea that Spotify’s best engineers are not writing any code, I have to ask: to what end? Is Spotify shipping more software? Is the software better? Are there more features? Are there fewer bugs? What are the engineers doing with the time they’re saving? A study from last year from METR found that, despite believing they were 24% faster, engineers using LLM coding tools were actually 19% slower.

I also think we need to really think deeply about how, for the second time in a month, the markets and the media have had a miniature shitfit based on blogs that tell lies using fan fiction. As I covered in my annotations of Matt Shumer’s “Something Big Is Happening,” the people that are meant to tell the general public what’s happening in the world appear to be falling for ghost stories that confirm their biases or investment strategies, even if said stories are full of half-truths and outright lies. I am despairing a little.
When I see Matt Shumer on CNN or hear from the head of a PE firm about Citrini Research, I begin to wonder whether everybody got where they are not through any actual work, but by making the right noises. This is the grifter economy, and the people that should be stopping them are asleep at the wheel.

NVIDIA beat estimates and raised expectations, as it has quarter after quarter. People were initially excited, then started reading the 10-K and seeing weird little things that stood out. $68.1 billion in revenue is a lot of money! That’s what you should expect from a company that is the single vendor in the only thing anybody talks about.

Hyperscaler revenue accounted for slightly more than 50% of NVIDIA’s data center revenue. As I wrote about last year, NVIDIA’s diversified revenue — that’s the revenue that comes from companies that aren’t in the Magnificent 7 — continues to collapse. While data center revenue was $62.3 billion, 50% ($31.15 billion) was taken up by hyperscalers…and because we don’t get a 10-Q for the fourth quarter, we don’t get a breakdown of how many individual customers made up that quarter’s revenue. Boo!

It is both peculiar and worrying that 36% (around $77.7 billion) of its $215.938 billion in FY2026 revenue came from two customers. If I had to guess, they’re likely Foxconn and Quanta Computer, two large Taiwanese ODMs (Original Design Manufacturers) that build the servers for most hyperscalers. If you want to know more, I wrote a long premium piece that goes into it (on, among other things, the ways in which AI is worse than the dot com bubble). In simple terms, when a hyperscaler buys GPUs, they go straight to one of these ODMs to be put into servers. This isn’t out of the ordinary, but I keep an eye on the ODM revenues (which are published every month) to see if anything shifts, as I think it’ll be one of the first signs that things are collapsing.

NVIDIA’s inventories continue to grow, sitting at over $21 billion (up from around $19 billion last quarter). Could be normal! Could mean stuff isn’t shipping.

NVIDIA has now agreed to $27 billion in multi-year-long cloud service agreements — literally renting its GPUs back from the people it sells them to — with $7 billion of that expected in its FY2027 (Q1 FY2027 will report in May 2026). For some context, CoreWeave (which reports FY2025 earnings today, February 26) gave guidance last November that it expected its entire annual revenue to be between $5 billion and $5.15 billion. CoreWeave is arguably the largest AI compute vendor outside of the hyperscalers. If there was significant demand, none of this would be necessary.

NVIDIA “invested” $17.5bn in AI model makers and other early-stage AI startups, and made a further $3.5bn in land, power, and shell guarantees to “support the build-out of complex datacenter infrastructures.” In total, it spent $21bn propping up the ecosystem that, in turn, feeds billions of dollars into its coffers.

NVIDIA’s long-term supply and capacity obligations soared from $30.8bn to $95.2bn, largely because NVIDIA’s latest chips are extremely complex and require TSMC to make significant investments in hardware and facilities, and it’s unwilling to do that without receiving guarantees that it’ll make its money back. NVIDIA expects these obligations to grow.

NVIDIA’s accounts receivable (as in, goods that have been shipped but are yet to be paid for) now sits at $38.4 billion, of which 56% ($21.5 billion) is from three customers. This is turning into a very involved and convoluted process!
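The concentration math is easy to check yourself; a quick sketch using the 10-K figures quoted above:

```python
# Customer concentration, using the FY2026 figures quoted above (in $bn).
fy2026_revenue = 215.938
two_customer_share = 0.36
print(f"Two customers' revenue: ${fy2026_revenue * two_customer_share:.1f}bn")  # ~$77.7bn

accounts_receivable = 38.4
three_customer_share = 0.56
print(f"Three customers' receivables: ${accounts_receivable * three_customer_share:.1f}bn")  # ~$21.5bn
```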
It turns out that it's pretty difficult to actually raise $100 billion. This is a big problem, because OpenAI needs $655 billion in the next five years to pay all its bills, and loses billions of dollars a year. If OpenAI is struggling to raise $100 billion today, I don't see how it's possible it survives. If you're to believe reports, OpenAI made $13.1 billion in revenue in 2025 on $8 billion of losses — but remember, my own reporting from last year said that OpenAI only made around $4.329 billion through September 2025, with $8.67 billion of inference costs alone. It is kind of weird that nobody seems to acknowledge my reporting on this subject. I do not see how OpenAI survives.

It coded for 30 hours [from which you are meant to intimate the code was useful or good and that these hours were productive].

It made a Microsoft Teams competitor [that you are meant to assume was full-featured and functional like Teams or Slack, or…functional? And they didn’t even have to prove it by showing you it].

It was able to write uninterruptedly [which you assume was because it was doing good work that didn’t need interruption].


Premium: The Hater's Guide to Anthropic

In May 2021, Dario Amodei and a crew of other former OpenAI researchers formed Anthropic and dedicated themselves to building the single-most-annoying Large Language Model company of all time. Pardon me, sorry, I mean safest, because that’s the reason that Amodei and his crew claimed was why they left OpenAI:

I’m also being a little sarcastic. Anthropic, a “public benefit corporation” (a company that is quasi-legally required to sometimes sort of focus on goals that aren’t profit driven — and in this case, one that chose to incorporate in Delaware as opposed to California, where it would have actual obligations), is the only meaningful competitor to OpenAI, one that went from (allegedly) making about $116 million in March 2025 to making $1.16 billion in February 2026, in the very same month it raised $30 billion from thirty-seven different investors, including a “partial” investment from NVIDIA and Microsoft announced in November 2025 that was meant to be “up to” $15 billion.

Anthropic’s models regularly dominate the various LLM model leaderboards, and its Claude Code command-line interface tool (IE: a terminal you type stuff into) has become quite popular with developers, who either claim it writes every single line of their code, or that it’s vaguely useful in some situations. CEO Dario Amodei predicted last March that in six months AI would be writing 90% of code, and when that didn’t happen, he simply made the same prediction again in January, because, and I do not say this lightly, Dario Amodei is full of shit.

You see, Anthropic has, for the best part of five years, been framing itself as the trustworthy, safe alternative to OpenAI, focusing more on its paid offerings and selling to businesses (realizing that the software sales cycle usually focuses on dimwitted c-suite executives rather than those who actually use the products), as opposed to building a giant, expensive free product that lots of people use but almost nobody pays for. Anthropic, separately, has avoided following OpenAI in making gimmicky (and horrendously expensive) image and video generation tools, which I assume is partly due to the cost, but also because neither of those things is likely something that an enterprise actually cares about.

Anthropic also caught on early to the idea that coding was the one use case that Large Language Models fit naturally:

Anthropic has held the lead in coding LLMs since the launch of June 2024’s Claude Sonnet 3.5, and as a story from The Information from December 2024 explained, this terrified OpenAI:

Cursor would, of course, eventually go on to become its own business, raising $3.2 billion in 2025 to compete with Claude Code — a product made by Anthropic, the very company Cursor pays so it can offer Anthropic’s models through its own AI coding product. Cursor is Anthropic’s largest customer, with the second being Microsoft’s GitHub Copilot. I have heard from multiple sources that Cursor is spending more than 100% of its revenue on API calls, with the majority going to Anthropic and OpenAI, both of whom now compete with Cursor.
Anthropic sold itself as the stable, thoughtful, safety-oriented AI lab, with Amodei himself saying in an August 2023 interview that he purposefully avoided the limelight:

A couple of months later, in October 2023, Amodei joined The Logan Bartlett Show, saying that he “didn’t like the term AGI” because, and I shit you not, “...because we’re closer to the kinds of things that AGI is pointing at,” making it “no longer a useful term.” He said that there was a “future point” where a model could “build Dyson spheres around the sun and calculate the meaning of life,” before rambling incoherently and suggesting that these things were both very close and far away at the same time. He also predicted that “no sooner than 2025, maybe 2026,” AI would “really invent new science.”

This was all part of Anthropic’s use of well-meaning language to tell a story that said “you should be scared” and “only Anthropic will save you.” In July 2023, Amodei spoke before a Senate committee about AI oversight and regulation, starting sensibly (IE: if AI does become powerful, we should have regulations to mitigate those problems) and eventually veering aggressively into marketing slop:

This is Amodei’s favourite marketing trick — using a vague timeline (2-3 years) to suggest that something vaguely bad that’s also good for Anthropic is just around the corner, but, managed correctly, could also be good for society (a revolution in technology and science! But also, havoc!). Only Dario has the answers (regulations that start with “securing the AI supply chain,” meaning “please stop China from competing”). In retrospect, this was the most honest that he’d ever be.

In 2024, Amodei would quickly learn that he loved personalizing companies, and that destroying his soul fucking rocked. In October 2024, Amodei put out a 15,000-word-long blog — ugh, AI is coming for my job! — where he’d say that Anthropic needed to “avoid the perception of propaganda” while also saying that “as early as 2026 (but there are also ways it could take much longer),” AI would be smarter than a Nobel Prize winner, autonomously able to complete weeks-long tasks, and be the equivalent of a “country of geniuses in a datacenter.” This piece, like all of his proclamations, had two goals: generating media coverage and investment.

Amodei is a deeply dishonest man, couching “predictions” based on nothing in terms like “maybe,” “possibly,” or “as early as,” knowing that the media will simply ignore those words and report what he says as wise, evidence-based fact. Amodei (and by extension Anthropic) nakedly manipulates the media by having them repeat these things without analysis or counterpoints — such as that “AI could surpass almost all humans at almost everything shortly after 2027” (which I’ll get back to in a bit). He knows that these things aren’t true. He knows he doesn’t have any proof. And he knows that nobody will ask, and that his bullshit will make for a sexy traffic-grabbing headline.

To be clear, that statement was made three months after Amodei’s essay said that AI labs needed to avoid “the perception of propaganda.” Amodei is a con artist who knows he can’t sell Anthropic’s products by explaining what they actually do, and everybody is falling for it. And, almost always, these predictions match up with Anthropic’s endless fundraising.
On September 23, 2024, The Information reported that Anthropic was raising a round at a $30-$40 billion valuation, and on October 12, 2024, Amodei pooped out Machines of Loving Grace with the express position that he and Anthropic “had not talked that much about powerful AI’s upsides.” A month later, on November 22, 2024, Anthropic would raise another $4 billion from Amazon, a couple of weeks after Amodei did a five-hour-long interview with Lex Fridman in which he’d say that “someday AI would be better at everything.”

On November 27, 2024, Amodei would do a fireside chat at Eric Newcomer’s Cerebral Valley AI Summit where he’d say that in 2025, 2026, or 2027 (yes, he was that vague), AI could be as “good as a Nobel Prize winner, polymathic across many fields,” and have “agency [to] act on its own for hours or days,” the latter of which deliberately laid the foundation for one of Anthropic’s greatest lies: that AI can “work uninterrupted” for periods of time, leaving the reader or listener to fill in the (unsaid) gap of “...and actually create useful stuff.”

Amodei crested 2024 with an interview with the Financial Times, and let slip what I believe will eventually become Anthropic’s version of WeWork’s Community-Adjusted EBITDA, by which I mean “a way to lie and suggest profitability when a company isn’t profitable”:

Yeah man, sure, if a company made $300 million in revenue and spent $1 billion. No amount of DarioMath about how a model “costs this much and makes this much revenue” changes the fact that profitability is when a company makes more money than it spends.

On January 5, 2025, Forbes would report that Anthropic was working on a $60 billion round that would make Amodei, his sister Daniela, and five other cofounders billionaires. Anyway: at Davos on January 21, 2025, Amodei said that he was “more confident than ever” that we’re “very close” to “powerful capabilities,” defined as “systems that are better than almost all humans at almost all tasks,” citing his long, boring essay. A day later, Anthropic would raise another $1 billion from Google.

On January 27, 2025, he’d tell Economist editor-in-chief Zanny Minton Beddoes that AI would get “as good and eventually better” than human beings at thinking, and that the ceiling of what models could do was “well above humans.” On February 18, 2025, he’d tell Beddoes that we’d get a model “...that can do everything a human can do at the level of a Nobel laureate across many fields” by 2026 or 2027, and that we’re “on the eve of something that has great challenges” that would “upend the balance of power” because we’d have “10 million people smarter than any human alive…” oh god, I’m not fucking writing it out. I’m sorry. It’s always the same shit. The models are people, we’re so scared.

On February 28, 2025, Amodei would join the New York Times’ Hard Fork, saying that he wanted to “slow down authoritarians,” and that “public officials and leaders at companies” would “look back at this period [where humanity would become a “post-powerful AI society that co-exists with powerful intelligences”]” and “feel like a fool,” and that that was the number one goal of these people. Amodei would also add that he had been in the field for 10 years — something he loves to say! — and that there was a 70-80% chance that we will “get a very large number of AI systems that are much smarter than humans at almost everything” before the end of the decade. Three days later, Anthropic would raise $3.5 billion at a $61.5 billion valuation.
Beneath the hype, Anthropic is, like OpenAI, a company making LLMs that can generate code and text, and that can interpret data from images and videos, all while burning billions of dollars and having no path to profitability. Per The Information, Anthropic made $4.5 billion in revenue and lost $5.2 billion generating it, and based on my own reporting from last year, costs appear to scale linearly above revenue. Some will argue that the majority of Anthropic’s losses ($4.1 billion) were from training, and I think it’s time we had a chat about what “training” means, especially as Anthropic plans to spend $100 billion on it in the next four years. Per my piece from last week:

In an interview on the Dwarkesh Podcast, Amodei even admitted that if you “never train another model” you “don’t have any demand because you’ll fall behind.” Training is opex, and should be part of gross margins. It’s time we had an honest conversation about Anthropic.

Despite its positioning as the trustworthy, “nice” AI lab, Anthropic is as big, ugly and wasteful as OpenAI, and Dario Amodei is an even bigger bullshit artist than Sam Altman. It burns just as much of its revenue on inference (62%, or $2.79 billion on $4.5 billion of revenue, versus OpenAI’s 58%, or $2.5 billion on $4.3 billion of revenue in the first half of 2025, if you use The Information’s numbers), and shows no sign of any “efficiency” or “cost-cutting.” Worse still, Anthropic continually abuses its users through varying rate limits to juice revenues and user numbers — along with Amodei’s gas-leak-esque proclamations — to mislead the media, the general public, and investors about the financial condition of the company.

Based on an analysis of many users’ actual token burn on Claude Code, I believe Anthropic is burning anywhere from $3 to $20 to make $1 (there’s a sketch of what that looks like below), and that the product that users are using (and the media is raving about) is not one that Anthropic can actually support long-term. I also see signs that Amodei himself is playing fast and loose with financial metrics in a way that will blow up in his face if Anthropic ever files its paperwork to go public. In simpler terms, Anthropic’s alleged “38% gross margins” are, if we are to believe Amodei’s own words, not the result of “revenue minus COGS” but “how much a model costs and how much revenue it’s generated.”

Anthropic is also making promises it can’t keep. It’s promising to spend $30 billion on Microsoft Azure (and an additional “up to one gigawatt”), “tens of billions” on Google Cloud, $21 billion on Google TPUs with Broadcom, “$50 billion on American infrastructure,” as much as $3 billion on Hut8’s data center in Louisiana, and an unknowable (yet likely in the billions) amount of money with Amazon Web Services. Not to worry, Dario also adds that if you’re off by a couple of years on your projections of revenue and ability to pay for compute, it’ll be “ruinous.” I think that he’s right.

Anthropic cannot afford to pay its bills, as the ruinous costs of training — which will never, ever stop — and inference will always outpace whatever spikes of revenue it can garner through media campaigns built on deception, fear-mongering, and an exploitation of reporters unwilling to ask or think about the hard questions. I see no difference between OpenAI’s endless bullshit non-existent deal announcements and what Anthropic has done in the last few months.
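About that “$3 to $20 to make $1” claim: here is a purely hypothetical sketch of a single heavy Claude Code user. Every number below (the plan price, the token volume, the per-token serving cost) is an assumption for illustration, not a disclosed Anthropic figure:

```python
# Purely hypothetical: what one heavy Claude Code user might cost versus pay.
monthly_subscription = 200.0     # $/month, a top-tier subscription (assumed)
tokens_per_month = 600_000_000   # assumed token burn for a power user
serving_cost_per_m = 5.0         # assumed blended cost to serve, $/M tokens

serving_cost = (tokens_per_month / 1_000_000) * serving_cost_per_m
print(f"Revenue from this user:  ${monthly_subscription:,.0f}")
print(f"Cost to serve this user: ${serving_cost:,.0f}")                 # $3,000
print(f"Spent to make $1: ${serving_cost / monthly_subscription:.0f}")  # $15
```

Change any of those assumptions and the ratio moves, but the shape of the problem, heavy users costing multiples of what they pay, does not.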
Anthropic is as craven and deceptive as OpenAI, and Dario Amodei is as willing a con artist as Altman, and, I believe, is desperately jealous of his success. And after hours and hours of listening to Amodei talk, I think he is one of the most annoying, vacuous, bloviating fuckwits in tech history. He rambles endlessly, stutters more based on how big a lie he’s telling, and will say anything and everything to get on TV and say noxious, fantastical, intentionally-manipulative bullshit to people who should know better but never seem to learn. He stammers, he blithers, he rambles, he continually veers between “this is about to happen” and “actually it’s far away” so that nobody can say he’s a liar, but that’s exactly what I call a person who intentionally deceives people, even if they couch their lies in “maybes” and “possiblies.”

Dario Amodei fucking sucks, and it’s time to stop pretending otherwise. Anthropic has no more soul or ethics than OpenAI — it’s just done a far better job of conning people into believing otherwise. This is the Hater’s Guide To Anthropic, or “DarioWare: Get It Together.”

Thanks to sites like Stack Overflow and GitHub, as well as the trillions of lines of open source code in circulation, there’s an absolute fuckton of material to train the model on.

Software engineers are data perverts (I mean this affectionately), and will try basically anything to speed up, automate or “add efficiency” to their work.

Software engineering is a job that most members of the media don’t understand.

Software engineers never shut the fuck up when they’ve found something new that feels good.

Software engineers will spend hours defending the honour of any corporation that courts them.

Software engineers will at times overestimate their capabilities, as demonstrated by the METR study that found that developers believed they were 24% faster when using LLMs, when in fact coding models made them 19% slower. This, naturally, makes them quite defensive of the products they use, and of whether or not they’re actually seeing improvements.


Premium: The AI Data Center Financial Crisis

Since the beginning of 2023, big tech has spent over $814 billion in capital expenditures, with a large portion of that going towards meeting the demands of AI companies like OpenAI and Anthropic. Big tech has spent big on GPUs, power infrastructure, and data center construction, using a variety of financing methods to do so, including (but not limited to) leasing. And the way they’re going about structuring these finance deals is growing increasingly bizarre.

I’m not merely talking about Meta’s curious arrangement for its facility in Louisiana, though that certainly raised some eyebrows. Last year, Morgan Stanley published a report that claimed hyperscalers were increasingly relying on finance leases to obtain the “powered shell” of a data center, rather than the more common method of operating leases. The key difference here is that finance leases, unlike operating leases, are effectively long-term loans where the borrower is expected to retain ownership of the asset (whether that be a GPU or a building) at the end of the contract. Traditionally, these types of arrangements have been used to finance the bits of a data center that have a comparatively limited useful life — like computer hardware, which grows obsolete with time.

The spending to date is, as I’ve written about again and again, astronomical considering the lack of meaningful revenue from generative AI. Even a year straight of manufacturing consent for Claude Code as the be-all-end-all of software development produced putrid results for Anthropic — $4.5 billion of revenue and $5.2 billion of losses before interest, taxes, depreciation and amortization according to The Information — with (per WIRED) Claude Code only accounting for around $1.1 billion in annualized revenue in December, or around $92 million in monthly revenue. This was in a year where Anthropic raised a total of $16.5 billion (with $13 billion of that coming in September 2025), and it’s already working on raising another $25 billion. This might be because it promised to buy $21 billion of Google TPUs from Broadcom, or because Anthropic expects AI model training to cost over $100 billion in the next three years. And it just raised another $30 billion — albeit with the caveat that some of said $30 billion came from previously-announced funding agreements with Nvidia and Microsoft, though how much remains a mystery.

According to Anthropic’s new funding announcement, Claude Code’s run rate has grown to “over $2.5 billion” as of February 12 2026 — or around $208 million a month. Based on literally every bit of reporting about Anthropic, costs have likely spiked along with revenue, which hit $14 billion annualized ($1.16 billion a month) as of that date. I have my doubts, but let’s put them aside for now.

Anthropic is also in the midst of one of the most aggressive and dishonest public relations campaigns in history. While its Chief Commercial Officer Paul Smith told CNBC that it was “focused on growing revenue” rather than “spending money,” it’s currently making massive promises — tens of billions on Google Cloud, “$50 billion in American AI infrastructure,” and $30 billion on Azure.
And despite Smith saying that Anthropic was less interested in “flashy headlines,” Chief Executive Dario Amodei has said, in the last three weeks, that “almost unimaginable power is potentially imminent,” that AI could replace all software engineers in the next 6-12 months, that AI may (it’s always fucking may) cause “unusually painful disruption to jobs,” and wrote a 19,000 word essay — I guess AI is coming for my job after all! — where he repeated his noxious line that “we will likely get a century of scientific and economic progress compressed in a decade.”

Yet arguably the most dishonest part is this word “training.” When you read “training,” you’re meant to think “oh, it’s training for something, this is an R&D cost,” when “training LLMs” is as consistent a cost as inference (the creation of the output) or any other kind of maintenance. While most people know about pretraining — the shoving of large amounts of data into a model (this is a simplification, I realize) — in reality a lot of the current spate of models use post-training, which covers everything from small tweaks to model behavior to full-blown reinforcement learning where experts reward or punish particular responses to prompts. To be clear, all of this is well-known and documented, but the nomenclature of “training” suggests that it might stop one day, versus the truth: training costs are increasing dramatically, and “training” covers anything from training new models to bug fixes on existing ones. And, more fundamentally, it’s an ongoing cost — something that’s an essential and unavoidable cost of doing business.

Training is, for an AI lab like OpenAI and Anthropic, as common (and necessary) a cost as those associated with creating outputs (inference), yet it’s kept entirely out of gross margins. This is inherently deceptive. While one could argue that R&D shouldn’t be counted in gross margins, gross margins generally include the raw materials necessary to build something, and training is absolutely part of the raw cost of running an AI model. Direct labor and parts are considered part of the calculation of gross margin, and spending on training — both the data and the process of training itself — is absolutely meaningful, and to leave it out is an act of deception.

Anthropic’s 2025 gross margins were 40% — or 38% if you include free users of Claude — on inference costs of $2.7 (or $2.79) billion, with training costs of around $4.1 billion. What happens if you add training costs into the equation? Let’s work it out! (The maths is sketched below.)

Training is not an up-front cost, and considering it one only serves to help Anthropic cover for its wretched business model. Anthropic (like OpenAI) can never stop training, ever, and to pretend otherwise is misleading. This is not the cost just to “train new models” but to maintain current ones, build new products around them, and many other things that are direct, impossible-to-avoid components of COGS. They’re manufacturing costs, plain and simple. Anthropic projects to spend $100 billion on training in the next three years, which suggests it will spend — proportional to its current costs — around $32 billion on inference in the same period, on top of $21 billion of TPU purchases, on top of $30 billion on Azure (I assume in that period?), on top of “tens of billions” on Google Cloud. When you actually add these numbers together (assuming “tens of billions” is $15 billion), that’s $200 billion.
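Here’s that maths as a sketch. The commitment figures are the ones quoted above, with my stated assumptions ($15 billion for “tens of billions” on Google Cloud, and inference scaled proportionally to today’s costs), and the margin recalculation simply moves training into COGS:

```python
# Sketch: Anthropic's promised spend, and its gross margin if training is
# treated as a cost of goods sold. All figures in billions of dollars.
commitments = {
    "training (projected, next 3 years)": 100,
    "inference (proportional estimate)": 32,
    "Google TPUs via Broadcom": 21,
    "Microsoft Azure": 30,
    "Google Cloud ('tens of billions', assumed $15B)": 15,
}
print(f"Total promised spend: ~${sum(commitments.values())}B")  # ~$198B

# Fold training into COGS, per the argument above:
revenue, inference_cogs, training = 4.5, 2.79, 4.1  # 2025 figures
cogs = inference_cogs + training                    # $6.89B
print(f"Gross margin with training in COGS: {(revenue - cogs) / revenue:.0%}")
# Gross margin with training in COGS: -53%
```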
Anthropic (per The Information’s reporting) tells investors it will make $18 billion in revenue in 2026 and $55 billion in 2027 — meaning revenue would quadruple, then roughly triple, year over year — and is already raising $25 billion after having just closed a $30 billion deal. How does Anthropic pay its bills? Why does outlet after outlet print these fantastical numbers without doing the maths of “how does Anthropic actually get all this money?” Because even with their ridiculous revenue projections, this company is still burning cash, and when you start to actually do the maths around anything in the AI industry, things become genuinely worrying.

You see, every single generative AI company is unprofitable, and appears to be getting less profitable over time. Both The Information and the Wall Street Journal reported the same bizarre statement in November — that Anthropic would “turn a profit more quickly than OpenAI,” with The Information saying Anthropic would be cash flow positive in 2027 and the Journal putting the date at 2028, only for The Information to report in January that 2028 was the more realistic date. If you’re wondering how, the answer is “Anthropic will magically become cash flow positive in 2028.” This is also the exact same logic as OpenAI, which will, per The Information in September, also, somehow, magically turn cash flow positive in 2030. Oracle, which has a 5-year-long, $300 billion compute deal with OpenAI that it lacks the capacity to serve and that OpenAI lacks the cash to pay for, also appears to have the same magical plan to become cash flow positive in 2029.

Somehow, Oracle’s case is the most legit, in that theoretically at that time it would be done, I assume, paying the $38 billion it’s raising for Stargate Shackelford and Wisconsin, but said assumption also hinges on the idea that OpenAI finds $300 billion somehow. It also relies upon Oracle raising more debt than it currently has — which, even before the AI hype cycle swept over the company, was a lot.

As I discussed a few weeks ago in the Hater’s Guide To Oracle, a megawatt of data center IT load generally costs (per Jerome Darling of TD Cowen) around $12-14m in construction (likely more due to skilled labor shortages, supply constraints and rising equipment prices) and $30m a megawatt in GPUs and associated hardware. In plain terms, Oracle (and its associated partners) need around $189 billion to build the 4.5GW of Stargate capacity to make the revenue from the OpenAI deal, meaning that it needs around another $100 billion once it raises $50 billion in combined debt, bonds, and new share issuance by the end of 2026.

I will admit I feel a little crazy writing this all out, because it’s somehow a fringe belief to do the very basic maths and say “hey, Oracle doesn’t have the capacity and OpenAI doesn’t have the money.” In fact, nobody seems to want to really talk about the cost of AI, because it’s much easier to say “I’m not a numbers person” or “they’ll work it out.” This is why in today’s newsletter I am going to lay out the stark reality of the AI bubble, and debut a model I’ve created to measure the actual, real costs of an AI data center. While my methodology is complex, my conclusions are simple: running AI data centers is, even when you remove the debt required to stand up these data centers, a mediocre business that is vulnerable to basically any change in circumstances.
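For anyone who wants to check the $189 billion figure, it’s straightforward multiplication. A sketch, using the low end of TD Cowen’s construction estimate:

```python
# Sketch: the per-megawatt build cost of Stargate's committed capacity.
construction_per_mw = 12  # $M per MW of IT load (per TD Cowen; likely higher)
hardware_per_mw = 30      # $M per MW in GPUs and associated hardware
stargate_mw = 4_500       # 4.5GW of committed Stargate capacity

total_billions = stargate_mw * (construction_per_mw + hardware_per_mw) / 1000
print(f"Cost to build 4.5GW: ~${total_billions:.0f}B")  # ~$189B
```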
Based on hours of discussions with data center professionals, analysts and economists, I have calculated that in most cases, the average AI data center has gross margins of somewhere between 30% and 40% — margins that decay rapidly for every day, week, or month that it takes to put a data center into operation. This is why Oracle has negative 100% margins on NVIDIA’s GB200 chips — because the burdensome up-front cost of building AI data centers (GPUs, servers, and other associated hardware) leaves you billions of dollars in the hole before you even start serving compute, after which you’re left to contend with taxes, depreciation, financing, and the cost of actually powering the hardware. Yet things sour further when you face the actual financial realities of these deals — and the debt associated with them.

Based on my current model of the 1GW Stargate Abilene data center, Oracle likely plans to make around $11 billion in revenue a year from the 1.2GW site (or around 880MW of critical IT). While that sounds good, when you add things like depreciation, electricity, colocation costs of $1 billion a year from Crusoe, opex, and the myriad of other costs, its margins sit at a stinkerific 27.2% — and that’s assuming OpenAI actually pays, on time, in a reliable way.

Things only get worse when you factor in the cost of debt. While Oracle has funded Abilene using a mixture of bonds and existing cashflow, it very clearly has yet to receive the majority of the $25 billion+ in GPUs and associated hardware (with only 96,000 GPUs “delivered”), meaning that it likely bought them out of its $18 billion bond sale from last September. If that maths holds, Oracle is paying a little less than $963 million a year (per the terms of the bond sale) whether or not a single GPU is even turned on, leaving us with a net margin of 22.19%... and this is assuming OpenAI pays every single bill, every single time, and there are absolutely no delays.

These delays are also very, very expensive. Based on my model, if we assume that 100MW of critical IT load is operational (roughly two buildings and 100,000 GB200s) but has yet to start generating revenue, Oracle is burning, without depreciation (EDITOR’S NOTE: sorry! This previously said depreciation was a cash expense and was included in this number (even though it wasn’t!), but it’s correct in the model!), around $4.69 million a day in cash. I have also confirmed with sources in Abilene that there is no chance that Stargate Abilene is fully operational in 2026.

I will admit I’m quite disappointed that the media at large has mostly ignored this story. Limp, cautious “are we in an AI bubble?” conversations are insufficient to deal with the potential for collapse we’re facing. Today, I’m going to dig into the reality of the costs of AI, and explain in gruesome detail exactly how easily these data centers can rapidly approach insolvency in the event that their tenants fail to pay.

To recap the Anthropic maths: if Anthropic’s gross margin was 38% in 2025, that means its COGS (cost of goods sold) was $2.79 billion. If we add training, that brings COGS to $6.89 billion, leaving us with -$2.39 billion on $4.5 billion in revenue — a negative 53% gross margin. AI startups are all unprofitable, and do not appear to have a path to sustainability.

The chain of pain is real, and today I’m going to explain how easily it breaks:
- AI data centers are being built in anticipation of demand that doesn’t exist, and will only exist if AI startups — which are all unprofitable — can afford to pay them.
- Oracle, which has committed to building 4.5GW of data centers, is burning cash every day that OpenAI takes to set up its GPUs, and when it starts making money, it does so from a starting position of billions and billions of dollars in debt.
- Margins are low throughout the entire stack of AI data center operators — from landlords like Applied Digital to compute providers like CoreWeave — thanks to the billions in debt necessary to fund both construction and IT hardware to make them run, putting both parties in a hole that can only be filled with revenues that come from either hyperscalers or AI startups.
- In a very real sense, the AI compute industry is dependent on AI “working out,” because if it doesn’t, every single one of these data centers will become a burning hole in the ground.
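A few derived figures from the Abilene model make the stakes concrete. The inputs are the article’s own numbers; everything below them is simple division (a sketch, not the full model):

```python
# Sketch: per-day and per-MW figures derived from the Abilene model above.
annual_revenue_b = 11.0     # $B/year Oracle likely plans to make from the site
critical_it_mw = 880        # MW of critical IT load
bond_coupon_b = 0.963       # $B/year in bond interest, owed whether GPUs run or not
idle_burn_per_day_m = 4.69  # $M/day burned while 100MW sits built but not earning

print(f"Revenue per MW of critical IT: ${annual_revenue_b * 1000 / critical_it_mw:.1f}M/year")
print(f"Bond interest per day:         ${bond_coupon_b * 1000 / 365:.2f}M/day")
print(f"A year of 100MW sitting idle:  ${idle_burn_per_day_m * 365 / 1000:.2f}B burned")
```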


Premium: The Hater's Guide To Microsoft

Have you ever looked at something too long and felt like you were sort of seeing through it? Has anybody actually looked at a company this much in a way that wasn’t some sort of obsequious profile of a person who worked there? I don’t mean this as a way to fish for compliments — this experience is just so peculiar, because when you look at them hard enough, you begin to wonder why everybody isn’t just screaming all the time. Yet I really do enjoy it. When you push aside all the marketing and the interviews and all that and stare at what a company actually does and what its users and employees say, you really get a feel for the guts of a company. The Hater’s Guides are a lot of fun, and I’m learning all sorts of things about the ways in which companies try to hide their nasty little accidents and proclivities. Today, I focus on one of the largest.

In the last year I’ve spoken to over a hundred different tech workers, and the ones I hear most consistently from are the current and former victims of Microsoft, a company with a culture in decline, in large part thanks to its obsession with AI. Every single person I talk to about this company has venom on their tongue, whether they’re a regular user of Microsoft Teams or somebody who was unfortunate enough to work at the company any time in the last decade.

Microsoft exists as a kind of dark presence over business software and digital infrastructure. You inevitably have to interact with one of its products — maybe it’s because somebody you work with uses Teams, maybe it’s because you’re forced to use SharePoint, or perhaps you’re suffering at the hands of PowerBI — because Microsoft is the king of software sales. It exists entirely to seep into the veins of an organization and force every computer to use Microsoft 365, or sit on effectively every PC you use, forcing you to interact with some sort of branded content every time you open your start menu. This is a direct result of the aggressive monopolies that Microsoft built over effectively every aspect of using the computer, starting by throwing its weight around in the 80s to crowd out potential competitors to MS-DOS and eventually moving into everything including cloud compute, cloud storage, business analytics, video editing, and console gaming — and I’m barely a third of the way through the list of products.

Microsoft uses its money to move into new markets, uses aggressive sales to build long-term contracts with organizations, and then lets its products fester until it’s forced to make them better before everybody leaves, with the best example being the recent performance-focused move to “rebuild trust in Windows” in response to the upcoming launch of Valve’s competitor to the Xbox (and Windows gaming in general), the Steam Machine. Microsoft is a company known for two things: scale and mediocrity. It’s everywhere, its products range from “okay” to “annoying,” and virtually every one of its products is a clone of something else. And nowhere is that mediocrity more obvious than in its CEO.
Since taking over in 2014, CEO Satya Nadella has steered this company out of the darkness caused by aggressive possible chair-thrower Steve Ballmer, transforming it from the evils of stack ranking to encouraging a “growth mindset” where you “believe your most basic abilities can be developed through dedication and hard work.” Workers are encouraged to be “learn-it-alls” rather than “know-it-alls,” all part of a weird cult-like pseudo-psychology that doesn’t really ring true if you actually work at the company.

Nadella sells himself as a calm, thoughtful and peaceful man, yet in reality he’s one of the most merciless layoff hogs in known history. He laid off 18,000 people in 2014, months after becoming CEO; 7,800 people in 2015; 4,700 people in 2016; 3,000 people in 2017; “hundreds” of people in 2018; took a break in 2019; every single one of the workers in its physical stores in 2020, along with everybody who worked at MSN; took a break in 2021; 1,000 people in 2022; 16,000 people in 2023; 15,000 people in 2024; and 15,000 people in 2025. Despite calling for a “referendum on capitalism” in 2020 and suggesting companies “grade themselves” on the wider economic benefits they bring to society, Nadella has overseen an historic surge in Microsoft’s revenues — from around $83 billion a year when he joined in 2014 to around $300 billion on a trailing 12-month basis — while acting in a way that’s callously indifferent to employees and customers alike.

At the same time, Nadella has overseen Microsoft’s transformation from an asset-light software monopolist that most customers barely tolerate to an asset-heavy behemoth that feeds its own margins into GPUs that only lose it money. And it’s that transformation that is starting to concern investors, and raises the question of whether Microsoft is heading towards a painful crash. You see, Microsoft is currently trying to pull a fast one on everybody, claiming that its investments in AI are somehow paying off despite the fact that it stopped reporting AI revenue in the first quarter of 2025. In reality, the one segment where it would matter — Microsoft Azure, Microsoft’s cloud platform where the actual AI services are sold — is stagnant, all while Redmond funnels virtually every dollar of revenue directly into more GPUs.

Azure sits within Microsoft’s Intelligent Cloud segment, along with server products and enterprise support. Intelligent Cloud also represents around 40% of Microsoft’s total revenue, and has done so consistently since FY2022. For the sake of clarity, here’s how Microsoft describes Intelligent Cloud in its latest end-of-year 10-K filing:

Our Intelligent Cloud segment consists of our public, private, and hybrid server products and cloud services that power modern business and developers. This segment primarily comprises:

- Server products and cloud services, including Azure and other cloud services, comprising cloud and AI consumption-based services, GitHub cloud services, Nuance Healthcare cloud services, virtual desktop offerings, and other cloud services; and Server products, comprising SQL Server, Windows Server, Visual Studio, System Center, related Client Access Licenses (“CALs”), and other on-premises offerings.
- Enterprise and partner services, including Enterprise Support Services, Industry Solutions, Nuance professional services, Microsoft Partner Network, and Learning Experience.

It’s a big, diverse thing — and Microsoft doesn’t really break things down further from here — but Microsoft makes it clear in several places that Azure is the main revenue driver in this fairly diverse business segment.
Some bright spark is going to tell me that Microsoft said it has 15 million paid 365 Copilot subscribers (which, I add, sit under its Productivity and Business Processes segment), with reporters specifically saying these were corporate seats — a fact I dispute, because in the quote from Microsoft’s latest earnings call, at no point does Microsoft say “corporate seat” or “business seat.” “Enterprise Copilot Chat” is a free addition to multiple different Microsoft 365 products, and Microsoft 365 Copilot could also refer to Microsoft’s $18 to $21-a-month addition to Copilot Business, as well as Microsoft’s enterprise $30-a-month plans. And remember: Microsoft regularly does discounts through its resellers to bulk up these numbers.

When Nadella took over, Microsoft had around $11.7 billion in PP&E (property, plant, and equipment). A little over a decade later, that number has ballooned to $261 billion, with the vast majority added since 2020 (when Microsoft’s PP&E sat around $41 billion). Also, as a reminder: Jensen Huang has made it clear that GPUs are going to be upgraded on a yearly cycle, guaranteeing that Microsoft’s armies of GPUs regularly hurtle toward obsolescence. Microsoft, like every big tech company, has played silly games with how it depreciates assets, extending the “useful life” of all GPUs so that they depreciate over six years, rather than four (I’ll sketch the effect below). And while someone less acquainted with corporate accounting might assume that this move is a prudent, fiscally-conscious tactic to reduce spending by using assets for longer, and stretching the intervals between their replacements, in reality it’s a handy tactic to disguise the cost of Microsoft’s profligate spending on the balance sheet.

You might be forgiven for thinking that all of this investment was necessary to grow Azure, which is clearly the most important part of Microsoft’s Intelligent Cloud segment. In Q2 FY2020, Intelligent Cloud revenue sat at $11.9 billion on PP&E of around $40 billion; as of Microsoft’s last quarter, Intelligent Cloud revenue sat at around $32.9 billion on PP&E that has increased by over 650%. Good, right? Well, not really. Compare Microsoft’s Intelligent Cloud revenue over the last five years: Microsoft has gone from spending 38% of its Intelligent Cloud revenue on capex to nearly every penny (over 94%) of it in the last six quarters, at the same time as Intelligent Cloud has failed to show any growth for two and a half years.

Things, I’m afraid, get worse. Microsoft announced in July 2025 — the end of its 2025 fiscal year — that Azure made $75 billion in revenue in FY2025. This was, as the previous link notes, the first time that Microsoft actually broke down how much Azure actually made, having previously simply lumped it in with the rest of the Intelligent Cloud segment. I’m not sure what to read from that, but it’s still not good: Microsoft spent every single penny of its Azure revenue from that fiscal year — and then some — on capital expenditures of $88 billion, a little under 117% of all Azure revenue to be precise. If we assume Azure regularly represents 71% of Intelligent Cloud revenue, Microsoft has been spending anywhere from half to three-quarters of Azure’s revenue on capex. To simplify: Microsoft is spending lots of money to build out capacity on Microsoft Azure (as part of Intelligent Cloud), and growth of capex is massively outpacing the meager growth that it’s meant to be creating.
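To see why stretching “useful life” flatters the income statement, consider straight-line depreciation on a hypothetical GPU purchase. The $10 billion is illustrative, not Microsoft’s actual figure; the four-to-six-year schedule change is the reported one:

```python
# Sketch: annual straight-line depreciation under a 4-year vs. 6-year schedule.
gpu_purchase_b = 10.0  # hypothetical GPU spend, in $B (illustrative only)

annual_4yr = gpu_purchase_b / 4  # $2.50B/year hits the income statement
annual_6yr = gpu_purchase_b / 6  # $1.67B/year under the extended schedule

print(f"4-year schedule: ${annual_4yr:.2f}B/year in depreciation")
print(f"6-year schedule: ${annual_6yr:.2f}B/year in depreciation")
print(f"Reported annual expense shrinks by {1 - annual_6yr / annual_4yr:.0%}")
# The cash left the building on day one either way; only the optics change.
```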
You know what’s also been growing? Microsoft’s depreciation charges, which grew from $2.7 billion at the beginning of 2023 to $9.1 billion in Q2 FY2026, though I will add that they dropped from $13 billion in Q1 FY2026, and if I’m honest, I have no idea why! Nevertheless, depreciation continues to erode Microsoft’s on-paper profits, growing (much like capex, as the two are connected!) at a much faster rate than any investment in Azure or Intelligent Cloud. But worry not, traveler! Microsoft “beat” on earnings last quarter, making a whopping $38.46 billion in net income… with $9.97 billion of that coming from recapitalizing its stake in OpenAI.

Similarly, Microsoft has started bulking up its Remaining Performance Obligations. See if you can spot the difference between Q1 and Q2 FY26, emphasis mine:

- Q1: $398 billion of RPOs, 40% within 12 months — $159.2 billion in upcoming revenue.
- Q2: $625 billion of RPOs, 25% within 12 months — $156.25 billion in upcoming revenue.

So, let’s just lay it out: Microsoft’s upcoming revenue dropped between quarters as every single expenditure increased, despite adding over $200 billion in contracted future revenue from OpenAI. A “weighted average duration” of 2.5 years somehow reduced the portion of Microsoft’s RPOs due within the next year. But let’s be fair and jump back to Q4 FY2025… 40% of $375 billion is $150 billion. Q3 FY25? 40% on $321 billion, or $128.4 billion. Q2 FY25? $304 billion, 40%, or $121.6 billion.

It appears that Microsoft’s upcoming revenue is stagnating, even with the supposed additions of $250 billion in spend from OpenAI and $30 billion from Anthropic, the latter of which was announced in November but doesn’t appear to have manifested in these RPOs at all. In simpler terms, OpenAI and Anthropic do not appear to be spending more as a result of any recent deals, and if they are, that money isn’t arriving for over a year. Much like the rest of AI, every deal with these companies appears to be entirely on paper, likely because OpenAI will burn at least $115 billion by 2029, and Anthropic upwards of $30 billion by 2028, when it mysteriously becomes profitable two years before OpenAI “does so” in 2030. These numbers are, of course, total bullshit. Neither company can afford even $20 billion of annual cloud spend, let alone multiple tens of billions a year, and that’s before you get to OpenAI’s $300 billion deal with Oracle that everybody has realized (as I did in September) requires Oracle to serve non-existent compute to OpenAI and be paid hundreds of billions of dollars that, helpfully, also don’t exist.

Yet for Microsoft, the problems are a little more existential. Last year, I calculated that big tech needed $2 trillion in new revenue by 2030 or its investments in AI were a loss, and if anything, I think I slightly underestimated the scale of the problem. As of the end of its most recent fiscal quarter, Microsoft has spent $277 billion or so in capital expenditures since the beginning of FY2022, with the majority of that ($216 billion) happening since the beginning of FY2024. Capex has ballooned to the size of 45.5% of Microsoft’s FY26 revenue so far — and over 109% of its net income. This is a fucking disaster. While net income is continuing to grow, it (much like every other financial metric) is being vastly outpaced by capital expenditures, none of which can be remotely tied to profits, as every sign suggests that generative AI only loses money.
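Stepping back to those RPO numbers for a second: the whole trick lives in one multiplication, so here it is as a sketch, using the figures as quoted.

```python
# Sketch: revenue Microsoft expects to recognize within 12 months =
# total RPOs x the share it says will convert within a year.
quarters = [
    ("Q2 FY25", 304, 0.40),
    ("Q3 FY25", 321, 0.40),
    ("Q4 FY25", 375, 0.40),
    ("Q1 FY26", 398, 0.40),
    ("Q2 FY26", 625, 0.25),
]
for name, rpo_b, within_year in quarters:
    print(f"{name}: ${rpo_b}B x {within_year:.0%} = ${rpo_b * within_year:.2f}B within 12 months")
# RPOs balloon by $227B between Q1 and Q2 FY26, yet the revenue actually
# arriving in the next 12 months falls from $159.2B to $156.25B.
```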
While AI boosters will try and come up with complex explanations as to why this is somehow alright, Microsoft’s problem is fairly simple: it’s now spending 45% of its revenues to build out data centers filled with painfully expensive GPUs that do not appear to be significantly contributing to overall revenue, and appear to have negative margins. Those same AI boosters will point at the growth of Intelligent Cloud as proof, so let’s do a thought experiment (even though they are wrong): if Intelligent Cloud’s segment growth is a result of AI compute, then the cost of revenue has vastly increased, and the only reason we’re not seeing it is that the increased costs are hitting depreciation first.

You see, Intelligent Cloud is stalling, and while it might be up by 8.8% on an annualized basis (if we assume each quarter of the year will be around $30 billion, that makes $120 billion, so about an 8.8% year-over-year increase from $106 billion), that’s come at the cost of a massive increase in capex (from $88 billion for all of FY2025 to $72 billion in just the first two quarters of FY2026), and gross margins that have deteriorated from 69.89% in Q3 FY2024 to 68.59% in Q2 FY2026 — and while operating margins are up, that’s likely due to Microsoft’s increasing use of contract workers and increased recruitment in cheaper labor markets. And as I’ll reveal later, Microsoft has used OpenAI’s billions in inference spend to cover up the collapse of the growth of the Intelligent Cloud segment. OpenAI’s inference spend now represents around 10% of Azure’s revenue.

Microsoft, as I discussed a few weeks ago, is in a bind. It keeps buying GPUs, all while waiting for the GPUs it already has to start generating revenue, and every time a new GPU comes online, its depreciation balloons. Capex for GPUs began in earnest in Q1 FY2023 following October’s shipments of NVIDIA’s H100 GPUs, with reports saying that Microsoft bought 150,000 H100s in 2023 (around $4 billion at $27,000 each) and 485,000 H100s in 2024 ($13 billion — maths sketched below). These GPUs are yet to provide much meaningful revenue, let alone any kind of profit, with reports suggesting (based on Oracle leaks) that the gross margins of H100s are around 26% and A100s (an older generation launched in 2020) are 9%, for which the technical term is “dogshit.” Somewhere within that pile of capex also lies orders for H200 GPUs, and as of 2024, likely NVIDIA’s B100 (and maybe B200) Blackwell GPUs too.

You may also notice that those GPU expenses are only some portion of Microsoft’s capex, and that’s because Microsoft spends billions on finance leases and construction costs. What this means in practical terms is that some of this money is going to GPUs that are obsolete in six years, some of it’s going to paying somebody else to lease physical space, and some of it is going into building a bunch of data centers that are only useful for putting GPUs in. And none of this bullshit is really helping the bottom line! Microsoft’s More Personal Computing segment — including Windows, Xbox, Microsoft 365 Consumer, and Bing — has become an increasingly-smaller part of revenue, representing a mere 17.64% of Microsoft’s revenue in FY26 so far, down from 30.25% a mere four years ago. We are witnessing the consequences of hubris — those of a monopolist that chased out any real value creators from the organization, replacing them with an increasingly-annoying cadre of Business Idiots like career loser Jay Parikh and scummy, abusive timewaster Mustafa Suleyman.
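As promised, the H100 maths: reported unit counts multiplied by the $27,000 unit price used above (street prices varied, so treat it as approximate).

```python
# Sketch: estimated Microsoft H100 spend from reported purchase volumes.
unit_price = 27_000                         # assumed $/GPU, per the figure above
purchases = {2023: 150_000, 2024: 485_000}  # reported H100 units bought

for year, units in purchases.items():
    print(f"{year}: {units:,} H100s x ${unit_price:,} = ${units * unit_price / 1e9:.1f}B")
# 2023: 150,000 H100s x $27,000 = $4.0B
# 2024: 485,000 H100s x $27,000 = $13.1B
```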
Satya Nadella took over Microsoft with the intention of fixing its culture, only to replace the aggressive, loudmouthed Ballmer brand with a poisonous, passive-aggressive business mantra of “you’ve always got to do more with less.” Today, I’m going to walk you through the rotting halls of Redmond’s largest son, a bumbling conga line of different businesses that all work exactly as well as Microsoft can get away with. Welcome to The Hater’s Guide To Microsoft, or Instilling The Oaf Mindset.


Premium: The Hater's Guide to Oracle

You can’t avoid Oracle. No, really, you can’t. Oracle is everywhere. It sells ERP software — enterprise resource planning, which is a rat king of different services for giant companies covering financial services, procurement (IE: sourcing and organizing the goods your company needs to run), compliance, project management, and human resources. It sells database software, and even owns the programming language Java as part of its acquisition of Sun Microsystems back in 2010. Its customers are fucking everyone: hospitals (such as England’s National Health Service), large corporations (like Microsoft), health insurance companies, Walmart, and multiple different governments. Even if you have never even heard of Oracle before, it’s almost entirely certain that your personal data is sitting in an Oracle-designed system somewhere.

Once you let Oracle into your house, it never leaves. Canceling contracts is difficult, to the point that one Redditor notes that some clients agreed to spend a minimum amount of money on services without realizing, meaning that you can’t remove services you don’t need even during the renewal of a contract. One user from three years ago told the story of adding two users to their contract for Oracle’s NetSuite Starter Edition (around $1000 a month in today’s pricing), only for an Oracle account manager to call a day later to demand they upgrade to the more expensive package ($2500 per month) for every user. In a thread from a year ago, another user asked for help renegotiating their contract for NetSuite, adding that “[their] company is no where near the state needed to begin an implementation” and “would use a third party partner to implement” software that they had been sold by Oracle. One user responded by saying that Oracle would play hardball and “may even use [the] threat of attorneys.”

In fact, there are entire websites about negotiations with Oracle, with Palisade Compliance saying that “Oracle likes a frenetic pace where contracts are reviewed and dialogues happen under the constant pressure of Oracle’s quarter closes,” describing negotiations with them as “often rushed, filled with tension, and littered with threats from aggressive sales and Oracle auditing personnel.” This is something you can only do when you’ve made it so incredibly difficult to change providers. What’re you gonna do? Have your entire database not work? Pay up.

Oracle also likes to do “audits” of big customers, where it makes sure that every single part of your organization that uses Oracle software is paying for it, and isn’t using it in any way its contract doesn’t allow. For example, Oracle sued healthcare IT company Perry Johnson & Associates in 2020 because the company that built PJ&A’s database systems used Oracle’s database software. The case was settled. This is all to say that Oracle is a big company that sells lots of stuff, and increases the pressure around its quarterly earnings as a means of boosting revenues. If you have a company with computers that might be running Java or Oracle’s software — even if somebody else installed it for you! — you’ll be paying Oracle, one way or another. Oracle even tried to sue Google for using the open source version of Java to build its Android operating system (though it lost). Oracle is a huge, inevitable pain in the ass, and, for the most part, an incredibly profitable one.
Every time a new customer signs on at Oracle, they pledge themselves to the Graveyard Smash and permanent fealty to Larry Ellison’s database empire. As a result, founder Larry Ellison has become one of the richest people in the world — the fifth-richest as of writing this sentence — who owns 40% of Oracle’s stock and, per Martin Peers of The Information, will earn about $2.3 billion in dividends in the next year.

Oracle has also done well to stay out of bullshit hype-cycles. While it quickly spun up vague blockchain and metaverse offerings, its capex stayed relatively flat at around $1 billion to $2.1 billion a fiscal year (which runs from June 1 to May 31), until it burst to $4.511 billion in FY2022 (which began on June 1, 2021, for reference), $8.695 billion in FY2023, and $6.86 billion in FY2024, before increasing a teeny little bit to $21.25 billion in FY2025 as it stocked up on AI GPUs and started selling compute. You may be wondering if that helped at all, and it doesn’t appear to have. Oracle’s net income has stayed in the $2 billion to $3 billion range for over a decade, other than a $2.7 billion spike last quarter from its sale of its shares in Ampere.

You see, things have gotten weird at Oracle, in part because of the weirdness of the Ellisons themselves, and their cozy relationship with the Trump Administration (and Trump itself). Ellison’s massive wealth backed his son David Ellison’s acquisition of Paramount, which put conservative Bari Weiss at the helm of CBS in an attempt to placate and empower the right wing, and which is currently trying to buy Warner Brothers Discovery (though it appears Netflix may have won), all in pursuit of kissing up to a regime steeped in brutality and bigotry that killed two people in Minnesota. And then there’s TikTok:

Oracle will serve as the trusted security partner, responsible for auditing and ensuring compliance with National Security Terms, according to a memo. The company already provides cloud services for TikTok and manages user data in the U.S. Notably, Oracle previously made a bid for TikTok back in 2020.

I know that you’re likely a little scared that an ultra right-wing billionaire has bought another major social network. I know you’re wary of Oracle, a massive and inevitable cloud storage platform owned by a man who looks like H.R. Giger drew Jerry Stiller. I know you’re likely worried about a replay of the Elon Musk Twitter fiasco, where every week it seemed like things would collapse but it never seemed to happen, and then Musk bought an election. What if I told you that things were very different, and far more existentially perilous for Oracle?

You see, Oracle is arguably one of the single-most evil and successful companies in the world, and it got there by being an aggressive vendor of database and ERP software, one that, like a tick with a law degree, cannot be removed without some degree of bloodshed. Perhaps not the highest-margin business in the world, but you know, it worked. Oracle has stuck to the things it’s known for for years and years and done just fine… until AI, that is. Let’s see what AI has done for Oracle’s gross margi- OH MY GOD! The scourge of AI GPUs has taken Oracle’s gross margin from around 79% in 2021 to 68.54% in 2025, with CNBC reporting that FactSet-polled analysts saw it falling to 49% by 2030, which I think is actually being a little optimistic.
Oracle was very early to high-performance computing, becoming the first cloud in the world to have general availability of NVIDIA’s A100 GPUs back in September 2020, and in June 2023 (at the beginning of Oracle’s FY2024), Ellison declared that Oracle would spend “billions” on NVIDIA GPUs, naming AI firm Cohere as one of its customers. In May 2024, Musk and Ellison discussed a massive cloud compute contract — a multi-year, $10 billion deal that fell apart in July 2024 when Musk got impatient — a blow that was softened by Microsoft’s deal to buy compute capacity for OpenAI, for chips to be rented out of a data center in Abilene, Texas that, about six months later, OpenAI would claim was part of a “$500 billion Stargate initiative” announcement between Oracle, SoftBank and OpenAI, one so rushed that Ellison had to borrow a coat to stay warm on the White House lawn, per The Information. “Stargate” is commonly misunderstood as a Trump program, or something that has raised $500 billion, when what it actually is is Oracle raising debt to build data centers for OpenAI.

Instead of staying in its lane as a dystopian datacenter mobster, Oracle entered the negative-to-extremely-low-margin realm of GPU rentals, raising $58 billion in debt and signing $248 billion in data center leases to service a 5-year-long $300 billion contract with OpenAI that it doesn’t have the capacity for and OpenAI doesn’t have the money to pay for.

Oh, and TikTok? The billion-user social network that Oracle sort-of-just bought? There’s one little problem with it: per The Information, ByteDance investors estimate TikTok lost several billion dollars last year on revenues of roughly $20 billion, attributed to its high growth costs and “higher operational and labor costs in overseas markets compared to China.” Now, I know what you’re gonna say: Ellison bought TikTok as a propaganda tool, much like Musk bought Twitter. “The plan isn’t for it to be profitable,” you say. “It’s all about control,” you say. And I say, in response, that you should know exactly how fucked Oracle is.

In its last quarter, Oracle had negative $13 billion in cash flow, and between 2022 and late 2025 it quintupled its PP&E (from $12.8 billion to $67.85 billion), primarily through the acquisition of GPUs for AI compute. Its remaining performance obligations are $523 billion, with $300 billion of that coming from OpenAI in a deal that starts, according to the Wall Street Journal, “in 2027,” with data centers that are so behind in construction that the best Oracle could muster is saying that 96,000 B200 GPUs had been “delivered” to the Stargate Abilene data center in December 2025 — for a data center of 450,000 GPUs that has to be fully operational by the end of 2026 without fail. And what’re the margins on those GPUs? Negative 100%.

Oracle, a business borne of soulless capitalist brutality, has tied itself existentially not just to the success of AI, but to the specific, incredible, impossible success of OpenAI, which will have to muster up $30 billion in less than a year to start paying for it, and another $270 billion or more to pay for the rest… at a time when Oracle doesn’t have the capacity and has taken on brutal debt to build it. For Oracle to survive, OpenAI must find a way to pay it four times the annual revenue of Microsoft Azure ($75 billion), and because OpenAI burns billions of dollars, it’s going to have to raise all of that money at a time of historically low liquidity for venture capital.
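To put the size of that obligation in context, one last bit of division, using only the figures above:

```python
# Sketch: the scale of OpenAI's obligation to Oracle.
contract_total_b = 300  # 5-year OpenAI compute contract, starting "in 2027"
contract_years = 5
azure_fy2025_b = 75     # Microsoft Azure's FY2025 revenue, for scale

print(f"Average owed per year: ${contract_total_b / contract_years:.0f}B")                 # $60B
print(f"Total contract vs. one year of Azure: {contract_total_b / azure_fy2025_b:.0f}x")   # 4x
# Per the piece: ~$30B due in the first year, with $270B+ over the remainder.
```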
Did I mention that Oracle took on $56 billion of debt to build data centers specifically for OpenAI? Or that the banks who invested in these deals don’t seem to be able to sell off the debt? Let me put it really simply:

- Larry Ellison’s wealth is almost entirely tied up in Oracle stock.
- Oracle’s stock is tied to the company “Oracle,” which is currently destroying its margins and annihilating its available cash to buy GPUs to serve a customer that cannot afford to pay it.
- Oracle has taken on ruinous debt that can only be paid if this customer, which cannot afford it and needs to raise money from an already-depleted venture capital pool, actually pays it.
- Oracle’s stock has already been punished for these debts, and that’s before OpenAI fails to pay for its contract.
- Oracle now owns part of one of its largest cloud customers, TikTok, which loses billions of dollars a year, and the US entity says, per Bloomberg, that it will “retrain, test and update the content recommendation algorithm on US user data,” guaranteeing that it’ll fuck up whatever makes it useful, reducing its efficacy for advertisers.
- Larry Ellison’s entire financial future is based on whether OpenAI lives or dies. If it dies, there isn’t another entity in the universe that can actually afford (or has interest in) the scale of the compute Oracle is building.

We are setting up for a very funny and chaotic situation where Oracle simply runs out of money, and in the process blows up Larry Ellison’s fortune. However much influence Ellison might have with the administration, Oracle has burdened itself with debt and $248 billion in data center lease obligations — costs that are inevitable, and are already crushing the life out of the company (and the stock). The only way out is if OpenAI becomes literally the most successful cash-generating company of all time within the next two years, and that’s being generous. This is not a joke. This is not an understatement. Sam Altman holds Larry Ellison’s future in his clammy little hands, and there isn’t really anything anybody can do about it other than hope for the best, because Oracle already took on all that debt and capex.

Forget about politics, forget about the fear in your heart that the darkness always wins, and join me in The Hater’s Guide To Oracle, or My Name’s Larry Ellison, and Welcome To Jackass.
