
Premium: The Hater's Guide to Private Equity

We have a global intelligence crisis, in that a lot of people are being really fucking stupid. As I discussed in this week’s free piece, alleged financial analyst Citrini Research put out a truly awful screed called the “2028 Global Intelligence Crisis” — a slop-filled scare-fiction written and framed with the authority of deeply-founded analysis, so much so that it caused a global selloff in stocks.

At 7,000 words, you’d expect the piece to have some sort of argument or basis in reality, but what it actually says is that “AI will get so cheap that it will replace everything, and then most white collar people won’t have jobs, and then they won’t be able to pay their mortgages, also AI will cause private equity to collapse because AI will write all software.”

This piece is written specifically to spook *and* ingratiate anyone involved in the financial markets with the idea that their investments are bad but investing in AI companies is good, and also that if they don’t get behind whatever this piece is about (which is unclear!), they’ll be subject to a horrifying future where the government creates a subsidy funded by a tax on AI inference (seriously). And, most damningly, its most important points about HOW this all happens are single sentences that read “and then AI becomes more powerful and cheaper too and runs on a device.” Part of the argument is that AI agents will use cryptocurrency to replace Mastercard and Visa.

It’s dogshit. I’m shocked that anybody took it seriously. The fact this moved markets should suggest that we have a fundamentally flawed financial system — and here’s an annotated version with my own comments.

This is the second time our markets have been thrown into the shitter based on AI booster hype. A mere week and a half ago, a software sell-off began because of the completely fanciful and imaginary idea that AI would now write all software.
I really want to be explicit here: AI does not threaten the majority of SaaS businesses, and those selling are jumping at ghost stories.

If I’m reading them correctly, those dumping software stocks believe that AI will replace these businesses because people will be able to code their own software solutions. This is an intellectually bankrupt position, one that shows an alarming (and common) misunderstanding of very basic concepts. It is not just a matter of “enough prompts until it does this” — good (or even functional!) software engineering is technical, infrastructural, and philosophical, and the thing you are “automating” is not just the code that makes a thing run.

Let’s start with the simplest, least-technical way of putting it: even in the best-case scenario, you do not just type “Build Me A Salesforce Competitor” and have it erupt, fully-formed, from your Terminal window. It is not capable of building it, but even if it were, the result would need to actually run on a cloud hosting platform, and have all manner of actual customer data entered into it. Building software is not writing code, hitting enter, and a website appearing — it requires all manner of infrastructural things (such as “how does a customer access it in a consistent and reliable way,” “how do I make sure that this can handle a lot of people at once,” and “is it quick to access,” with the more-complex database systems requiring entirely separate subscriptions just to keep them connected).

Software is a tremendous pain in the ass. You write code, then you have to make sure the code actually runs, and that code in some cases needs to run on specific hardware, and that hardware needs to be set up right, and some things are written in different languages, and those languages sometimes use more memory or less memory, and if you give them the wrong amounts — or forget to close the door on something in your code — everything breaks, sometimes costing you money or introducing security vulnerabilities.
In any case, even for experienced, well-versed software engineers, maintaining software that involves any kind of customer data requires significant investment in compliance — including things like SOC 2 audits if the customer ever has to interact with the system — as well as massive investments in security.

And yet the myth that LLMs are an existential threat to existing software companies has taken root in the market, sending the share prices of legacy incumbents tumbling. A great example is SAP, down 10% in the last month. SAP makes ERP (Enterprise Resource Planning, which I wrote about in the Hater’s Guide To Oracle) software, and has been caught up in the sell-off. SAP’s product is a massive, complex, resource-intensive database-driven system that involves things like accounting, provisioning and HR, and is so heinously complex that you often have to pay SAP just to make it function (and if you’re lucky, it might even do so). If you were to build this kind of system yourself, even with “the magic of Claude Code” (which I will get to shortly), it would be an incredible technological, infrastructural and legal undertaking.

Most software is like this. I’d say all software that people rely on is like this. I am begging you, pleading with you, to think about how much you trust the software on every single thing you use, what you do when a piece of software stops working, and how you feel about the company responsible. If your money or personal information touches it, that company has had to go through all sorts of shit that doesn’t involve the code to bring you the software.

Any company of a reasonable size would likely be committing hundreds of thousands if not millions of dollars in legal and accounting fees to make sure it worked, engineers would have to be hired to maintain it, and you, as the sole customer of this massive ERP system, would have to build every single new feature and integration you want.
Then you’d have to keep it running, this massive thing that involves, in many cases, tons of personally identifiable information. You’d also need to make sure, without fail, that this system that involves money was aware of any and all currencies and how they fluctuate, because that is now your problem. Mess up that part and your system of record could massively over- or underestimate your revenue or inventory, which could destroy your business. If that happens, you won’t have anyone to sue. When bugs happen, you’ll have someone whose job it is to fix them — someone you can fire — but replacing them means finding a new person to fix the mess the last guy made.

And then we get to the fact that building stuff with Claude Code is not that straightforward. Every example you’ve read of somebody being amazed by it involves a toy app or website that’s very similar to the many open source projects and website templates in Anthropic’s training data. Every single piece of SaaS anyone pays for buys both access to the product and a transfer of the inherent risk and chaos of running software that involves people or money.

Claude Code does not actually build unique software. You can say “create me a CRM,” but whatever CRM it pops out will not magically jump onto Amazon Web Services, nor will it magically be efficient, or functional, or compliant, or secure, nor will it be differentiated at all from, I assume, the open source or publicly-available SaaS it was trained on. You still need engineers — if not more of them than you had before. It might tell you it’s completely compliant and that it will run like a hot knife through butter, but LLMs don’t know anything, and as a result you cannot be sure Claude is telling the truth. Is your argument that you’d still have a team of engineers (so they know what the outputs mean), but they’d be working on replacing your SaaS subscription? You’re basically becoming a startup with none of the benefits.
To quote Nik Suresh, an incredibly well-credentialed and respected software engineer (author of I Will Fucking Piledrive You If You Mention AI Again): “...for some engineers, [Claude Code] is a great way to solve certain, tedious problems more quickly, and the responsible ones understand you have to read most of the output, which takes an appreciable fraction of the time it would take to write the code in many cases. Claude doesn’t write terrible code all the time, it’s actually good for many cases because many cases are boring. You just have to read all of it if you aren’t a fucking moron because it periodically makes company-ending decisions.”

Just so you know, “company-ending decisions” could start with your vibe-coded Stripe clone leaking user credit card numbers or social security numbers because you asked it to “just handle all the compliance stuff.” Even if you have very talented engineers, are those engineers talented in the specifics of, say, healthcare data or finance? They’re going to need to be to make sure Claude doesn’t do anything stupid!

So, despite all of this being very obvious, it’s clear that the markets and an alarming number of people in the media simply do not know what they are talking about. The “AI replaces software” story is literally “Anthropic has released a product and now the related industry is selling off” — such as when it launched a cybersecurity tool that could check for vulnerabilities (a product that has existed in some form for nearly a decade), causing a sell-off in cybersecurity stocks like CrowdStrike. You know, the company that had a faulty bit of code cause a global cybersecurity incident that lost the Fortune 500 billions, and led to Delta Air Lines suspending over 1,200 flights across six long days of disruption.

There is no rational basis for anything about this sell-off, other than that our financial media and markets do not appear to understand the very basic things about the stuff they invest in.
Software may seem complex, but (especially in these cases) it’s really quite simple: investors are conflating “an AI model can spit out code” with “an AI model can create the entire experience of what we know as ‘software,’ or is close enough that we have to start freaking out.” This is thanks to the intentionally-deceptive marketing peddled by Anthropic and validated by the media. In a piece from September 2025, Bloomberg reported that Claude Sonnet 4.5 could “code on its own for up to 30 hours straight,” a statement taken directly from Anthropic and repeated by other outlets, which added that it did so “on complex, multi-step tasks,” none of which were explained. The Verge, however, added that Anthropic had apparently “coded a chat app akin to Slack or Teams,” and no, you can’t see it, or know anything about how much it costs or its functionality. Does it run? Is it useful? Does it work in any way? What does it look like? We have absolutely no proof this happened other than Anthropic saying it, but because the media repeated it, it’s now a fact.

Perhaps it’s not a particularly novel statement, but it’s becoming kind of obvious that the people with the money don’t actually know what they’re doing, which will eventually become a problem when they all invest in the wrong thing for the wrong reasons.

SaaS (Software as a Service, which almost always refers to business software) stocks became a hot commodity because they were perpetual growth machines with giant sales teams that existed only to make numbers go up, leading to a flurry of investment based on the assumption that all numbers will always increase forever, and that every market is as big as we want it to be. Not profitable? No problem! You just had to show growth. It was easy to raise money because everybody saw a big, obvious path to liquidity, either from selling to a big firm or taking the company public…

…in theory.
Per Victor Basta, between 2014 and 2017 the number of VC rounds in technology companies halved, with a much smaller drop in funding; a big part of that was the collapse in companies describing themselves as SaaS, which dropped by 40% in the same period. In a 2016 chat with VC David Yuan, Gainsight CEO Nick Mehta added that “the bar got higher and weights shifted in the public markets,” citing that profitability was becoming more important to investors.

Per Mehta, a savior had arrived — private equity, with Thoma Bravo buying Blue Coat Systems in 2011 for $1.3 billion (a company which had been backed by a Canadian teachers’ pension fund!), Vista Equity buying Tibco for $4.3 billion in 2014, and Permira Advisers (along with the Canada Pension Plan Investment Board) buying Informatica for $5.3 billion (with participation from both Salesforce and Microsoft) in 2015, 16 years after its first IPO. In each case, these companies were purchased using debt that immediately got dumped onto the company’s balance sheet — a leveraged buyout.

In simple terms, you buy a company with money that the company you just bought has to pay off. The company in question also has to grow like gangbusters to keep up with both that debt and the private equity firm’s expectations. And instead of being an investor with a board seat who can yell at the CEO, it’s quite literally your company, and you can do whatever you want with (or to) it.

Yuan added that the size of these deals made the acquisitions problematic, as did their debt-heavy structures. Symantec would acquire Blue Coat for $4.65 billion in 2016, for just under a 4x return. Things were a little worse for Tibco. Vista Equity Partners tried to sell it in 2021 amid a surge of other M&A transactions, with the solution — never change, private equity!
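The leveraged buyout math is worth making concrete. Here’s a minimal sketch with entirely invented numbers (the figures are only loosely inspired by the Blue Coat deal sizes mentioned in this piece): because the PE firm only puts in a sliver of equity and loads the rest onto the company as debt, a modest whole-company return becomes a much larger return on the firm’s own money.

```python
# Hypothetical LBO: all numbers invented for illustration.
purchase_price = 1.3e9                # what the company "costs"
equity_check = 0.4e9                  # the PE firm's actual cash in
debt = purchase_price - equity_check  # dumped onto the company's balance sheet

sale_price = 4.65e9                   # hypothetical exit a few years later
remaining_debt = 0.7e9                # assume the company paid some debt down

# The buyer at exit covers whatever debt remains; the PE firm keeps the rest.
equity_proceeds = sale_price - remaining_debt

whole_company_multiple = sale_price / purchase_price  # ~3.6x on the company
equity_multiple = equity_proceeds / equity_check      # ~9.9x on the firm's cash

print(f"{whole_company_multiple:.1f}x on the company, "
      f"{equity_multiple:.1f}x on the PE firm's equity")
```

The same leverage cuts the other way: if the company can’t grow into its debt service, the equity gets wiped out long before the whole company is worthless — which is the bet being described here.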
— being to buy Citrix for $16.5 billion (a 30% premium on its stock price) and merge it with Tibco, magically fixing the problem of “what do we do with Tibco?” by hiding it inside another transaction. Informatica eventually had a $10 billion IPO in 2021, which was flat on its first day of trading, never did much more than hover at its IPO price, then sold to Salesforce in 2025 at an equity value of $8 billion — which seems fine but not great until you realize that, with inflation, the $5.3 billion that Permira invested in 2015 was worth about $7.15 billion in 2025’s money.

In every case, the assumption was very simple: these businesses would grow and own their entire industries, the PE firm would be the reason they did so (by taking them private and filling them full of debt while making egregious growth demands), and the meteoric growth of SaaS would continue in perpetuity.

Yet the year that really broke things was 2021. As everybody returned to the real world, consumer and business spending skyrocketed, leading (per Bloomberg) to a massive surge in revenues that convinced private equity to shove even more cash and debt up the ass of SaaS. Bloomberg is a little nicer than I am, so they’re not just writing “deals were waved through because everybody assumed that software grows forever and nobody actually knew a thing about the technology or why it would grow so fast.” Unsurprisingly, this didn’t turn out to be true. Per The Information, PE firms invested in or bought 1,167 U.S. software companies for $202 billion, and usually hold investments for three to five years. Thankfully, they also included a chart to show how badly this went.

2021 was the year of overvaluation, and (per Jason Lemkin of SaaStr) 60% of unicorns (startups with $1bn+ valuations) hadn’t raised funds in years.
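The Informatica arithmetic above is easy to check. A quick sketch, taking the piece’s own inflation figure (2015’s $5.3 billion ≈ $7.15 billion in 2025 dollars, implying roughly 35% cumulative inflation over the decade) as given:

```python
entry_2015 = 5.3   # $B, the 2015 take-private
exit_2025 = 8.0    # $B, the 2025 Salesforce sale (equity value)

# The piece's adjustment: $5.3B in 2015 is about $7.15B in 2025 dollars.
cumulative_inflation = 7.15 / 5.3   # ≈ 1.35

real_exit = exit_2025 / cumulative_inflation   # the exit in 2015 dollars
real_multiple = real_exit / entry_2015

print(f"Exit ≈ ${real_exit:.2f}B in 2015 dollars, "
      f"about {real_multiple:.2f}x in real terms")
```

A full decade in a leveraged deal for roughly 1.1x in real terms is exactly the “fine but not great” the piece describes.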
The massive accumulated overinvestment, combined with no obvious pathway to an exit, led to people calling these companies “Zombie Unicorns.” The problem, to quote The Information, is that “PE firms don’t want to lock in returns that are lower than what they promised their backers, say some executives at these firms,” and “many enterprise software firms’ revenue growth has slowed.” Per CNBC in November 2025, private equity firms were facing the same zombie problem. Per Jason Lemkin, private equity is sitting on its largest collection of companies held for longer than four years since 2012, with McKinsey estimating that more than 16,000 companies (more than 52% of the total buyout-backed inventory) had been held by private equity for more than four years, the highest on record.

In very simple terms, there are hundreds of billions of dollars’ worth of tech companies sitting in the wings of private equity firms, desperate to be sold, with the only potential buyers being big tech firms, other private equity firms, and public offerings in one of the slowest IPO markets in history.

Investing used to be easy. There were so many ideas for so many companies — companies that could be worth billions of dollars once they’d been fattened up with venture capital and/or private equity. There were tons of acquirers, it was easy to take them public, and all you really had to do was exist and provide capital. Companies didn’t have to be good, they just had to look good enough to sell. This created a venture capital and private equity industry based on symbolic value, one that chased out anyone who thought too hard about whether these companies could actually survive on their own merits. Per PitchBook, since 2022, 70% of VC-backed exits were valued at less than the capital put in, with more than a third of exits in 2024 being startups buying other startups.
Private equity firms are now holding assets for an average of 7 years, and McKinsey added one horrible detail for the overall private equity market, emphasis mine:

You see, private equity is fucking stupid. It doesn’t understand technology, doesn’t understand business, and by setting up its holdings with debt based on the assumption of unrealistic growth, it has created a crisis for both software companies and the greater tech industry.

On February 6, more than $17.7 billion of US tech company loans dropped to “distressed” trading levels (as in, trading as if traders don’t believe they’ll get paid, per Bloomberg), growing the overall pool of distressed tech loans to $46.9 billion, “dominated by firms in SaaS.” These firms included huge investments like Thoma Bravo’s Dayforce (which it purchased, two days before this story ran, for $12.3 billion) and Calabrio (which it acquired for “over” $1 billion in April 2021 and merged with Verint in November 2025).

This isn’t just about the shit they’ve bought, but about the destruction of the concept of “value” in the tech industry writ large. “Value” was not based on revenues, or your product, or anything other than your ability to grow and, ideally, trap as many customers as possible, with the vague sense that there would always be infinitely more money every year to spend on software. Revenue growth came from massive sales teams compensated with heavy commissions and yearly price increases — except things have begun to sour, with renewals now taking twice as long to complete, and overall SaaS revenue growth slowing for years.

To put it simply, much of the investment in software was based on the idea that software companies will always grow forever, and SaaS companies — which have “sticky” recurring revenues — would be the standard-bearer.
When I got into the tech industry in 2008, I was immediately confused by the number of unprofitable or unsustainable companies that were worth crazy amounts of money, and for the most part I’d get laughed at by reporters for being too cynical. For the best part of 20 years, software startups have been seen as eternal growth-engines. All you had to do was find product-market fit, get a few hundred customers locked in, up-sell them on new features, and grow in perpetuity as you conquered a market. The idea was that you could just keep pumping them with cash, hiring as many pre-sales (the technical person who makes the sale), sales, and customer experience (read: the helpful person who also loves to tell you about more stuff) people as you needed to both retain customers and sell them as much as possible.

Innovation was, as you’d expect, judged entirely by revenue growth and net revenue retention. In principle, net revenue retention sounds reasonable: what percentage of the revenue from your existing customers are you keeping (and growing) year-over-year? The problem is that this is a very easy stat to game, especially if you’re using it to raise money, because you can move customer billing periods around to make sure that things continue to look good. Even then, per research by Jacco van der Kooij and Dave Boyce, net revenue retention is dropping quarter over quarter.

The other problem is that the entire process of selling software has become separated from the end-user, which means that products (and sales processes) are oriented around selling that software to the person responsible for buying it, rather than those doomed to use it. In Nik Suresh’s Brainwash An Executive Today, he describes a conversation with the Chief Technology Officer of a company with over 10,000 people, who had asked whether “data observability” — a thing that they did not (and, in their position, would not need to) understand — was a problem, and whether Nik had heard of Monte Carlo.
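To make the gaming concrete, here’s a rough sketch of the standard NRR calculation; the numbers, and the crude “pull a renewal’s billing forward” move, are invented for illustration:

```python
def net_revenue_retention(start_arr, expansion, contraction, churn):
    """NRR = (starting ARR + upsells - downgrades - churned ARR) / starting ARR,
    measured only on the customers you already had at the start of the period."""
    return (start_arr + expansion - contraction - churn) / start_arr

# An ordinary quarter: $10M of existing ARR, some upsells, some churn.
baseline = net_revenue_retention(10_000_000, 1_500_000, 300_000, 400_000)

# Shift $500k of next quarter's renewal upsells into this one, and the
# same underlying business suddenly reports a healthier number.
massaged = net_revenue_retention(10_000_000, 2_000_000, 300_000, 400_000)

print(f"baseline NRR: {baseline:.0%}, massaged NRR: {massaged:.0%}")
```

Nothing about the business changed between those two numbers — only when the revenue was recognized, which is exactly why the stat flatters anyone raising money on it.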
It turned out that the executive in question had no idea what Monte Carlo or data observability was, but because they’d heard about it on LinkedIn, it was now all they could think about.

This is the environment that private equity bought into — a seemingly-eternal growth engine with pliant customers desperate to spend money on a product that didn’t have to be good, just functional enough. These people do not know what they are talking about or why they are buying these companies, other than being able to mumble out shit like “ARR” and “NRR” and “TAM” and “CAC” and “ARPA” in the right order to convince themselves that something is a good idea, without ever thinking about what would happen if it wasn’t. This allowed them to stick to the “big picture,” meaning “numbers that I can look at rather than any practical experience in software development.”

While I guess the concept of private equity isn’t inherently morally repugnant, its current form — which includes venture capital — has dragged the modern state of technology into the fucking toilet. An initial flush of viable businesses, frothy markets, and zero interest rates made it deceptively easy to raise and deploy capital, leading to brainless investing, the death of logical due diligence, and potentially ruinous consequences for everybody involved. Private equity spent decades buying a little bit of just about everything, enriching the already-rich by engaging with the most vile elements of the Rot Economy’s growth-at-all-costs mindset. Its success is predicated on near-perpetual liquidity and growth in both its holdings and the holdings of those who exist only to buy their stock, and on a tech and business media that doesn’t think too hard about the reality of the problems their companies claim to solve. The reckoning that’s coming is one built specifically to target the ignorant hubris that made them rich.
Private equity has yet to be punished by its limited partners and banks for investing in zombie assets, allowing it to pile into the unprofitable data centers underpinning the AI bubble. Companies like Apollo, Blue Owl and Blackstone — all of whom participated in the ugly $10.2 billion acquisition of Zendesk in 2022 (after it rejected another PE offer of $17 billion in 2021) that included $5 billion in debt — have become heavily-leveraged in giant, ugly debt deals covering assets that will be obsolete-to-useless in a few years.

Alongside the fumbling ignorance of private equity sits the $3 trillion private credit industry, an equally-putrid, growth-drunk, and poorly-informed industry run with the same lax attention to detail and Big Brain Number Models that can justify just about any investment they want. Their half-assed due diligence led to billions of dollars of loans being given to outright frauds like First Brands, Tricolor and PosiGen, and, to paraphrase JP Morgan’s Jamie Dimon, there are absolutely more fraudulent cockroaches waiting to emerge.

You may wonder why this matters, as all of this is private credit. Well, they get their money from banks. Big banks. In fact, according to the Federal Reserve Bank of Boston, about 14% ($300 billion) of large banks’ total loan commitments to non-bank financial institutions in 2023 went to private equity and private credit, with Moody’s pegging the number at around $285 billion, plus an additional $340 billion in unused-yet-committed cash waiting in the wings. Oh, and they get their money from you. Pension funds are among the biggest backers of private credit companies, with the New York City Employees’ Retirement System and CalPERS increasing their investments.

Today, I’m going to teach you all about private equity, private credit, and why years of reframing “value” to mean “growth” may genuinely threaten the global banking system, as well as how effectively every company raises money.
An entirely-different system exists for the wealthy to raise and deploy capital, one with flimsy due diligence, a genuine lack of basic industrial knowledge, and hundreds of billions of dollars of crap it can’t sell. These people have been able to raise near-unlimited capital to do basically anything they want because there was always somebody stupid enough to buy whatever they were selling, and they have absolutely no plan for what happens when their system stops working. They’ll loan to anyone or invest in anything that confirms their biases, and those biases are equal parts moronic and malevolent. Now they’re investing teachers’ pensions and insurance premiums in unprofitable and unsustainable data centers, all because they have no idea what a good investment actually looks like.

Welcome to the Hater’s Guide To Private Equity, or “The Stupidest Assholes In The Room.”


On NVIDIA and Analyslop

Hey all! I’m going to start hammering out free pieces again after a brief hiatus, mostly because I found myself trying to boil the ocean with each one, fearing that if I regularly emailed you, you’d unsubscribe. I eventually realized how silly that was, so I’m back, and will be back more regularly. I’ll treat it like a column, which will be both easier to write and a lot more fun.

As ever, if you like this piece and want to support my work, please subscribe to my premium newsletter. It’s $70 a year, or $7 a month, and in return you get a weekly newsletter that’s usually anywhere from 5,000 to 18,000 words, including vast, extremely detailed analyses of NVIDIA, Anthropic and OpenAI’s finances, and the AI bubble writ large. I am regularly several steps ahead in my coverage, and you get an absolute ton of value. In the bottom right hand corner of your screen you’ll see a red circle — click that and select either monthly or annual. Next year I expect to expand to other areas too. It’ll be great. You’re gonna love it.

Before we go any further, I want to remind everybody that I’m not a stock analyst, nor do I give investment advice. I do, however, want to say a few things about NVIDIA and its annual earnings report, which it published on Wednesday, February 25: NVIDIA’s entire future is built on the idea that hyperscalers will buy GPUs at increasingly-higher prices and at increasingly-higher rates every single year. It is completely reliant on maybe four or five companies being willing to shove tens of billions of dollars a quarter directly into Jensen Huang’s wallet. If anything changes here — such as difficulty acquiring debt or investor pressure cutting capex — NVIDIA is in real trouble, as it’s made over $95 billion in commitments to build out for the AI bubble.

Yet the real gem was this part:

Hell yeah dude!
After misleading everybody that it intended to invest $100 billion in OpenAI last year (as I warned everybody months ago, the deal never existed and is now effectively dead), NVIDIA was allegedly “close” to investing $30 billion. One would think that NVIDIA would — after Huang awkwardly tried to claim that the $100 billion was “never a commitment” — say with its full chest how badly it wanted to support OpenAI and how it intended to do so. Especially when you have this note in your 10-K:

What a peculiar world we live in. Apparently NVIDIA is “so close” to a “partnership agreement” too, though it’s important to remember that Altman, Brockman, and Huang went on CNBC to talk about the last deal, and that never came together. All of this adds a little more anxiety to OpenAI’s alleged $100 billion funding round which, as The Information reports, includes Amazon’s alleged $50 billion investment actually being $15 billion, with the next $35 billion contingent on AGI or an IPO. And that $30 billion from NVIDIA is shaping up to be a Klarna-esque three-installment payment plan. A few thoughts:

Anyway, on to the main event. New term: analyslop — when somebody writes a long, specious piece with few facts or actual statements, with the intention of it being read as thorough analysis.

This week, alleged financial analyst Citrini Research (not to be confused with Andrew Left’s Citron Research) put out a truly awful piece called the “2028 Global Intelligence Crisis,” slop-filled scare-fiction written and framed with the authority of deeply-founded analysis, so much so that it caused a global selloff in stocks. This piece — if you haven’t read it, please do so using my annotated version — spends 7,000 or more words telling the dire tale of what would happen if AI made an indeterminately-large number of white collar workers redundant.
It isn’t clear what exactly the AI does, who makes it, or how it works — just that it replaces people, and then bad stuff happens. Citrini insists that this “isn’t bear porn or AI-doomer fan-fiction,” but that’s exactly what it is — mediocre analyslop dressed in the trappings of analysis, sold on a Substack with “research” in the title, specifically written to spook and ingratiate anyone involved in the financial markets. Its goal is to convince you that AI (non-specifically) is scary, that your current stocks are bad, and that AI stocks (unclear which ones those are, by the way) are the future. Also, find out more for $999 a year.

Let me give you an example:

The goal of a paragraph like this is for you to say “wow, that’s what GPUs are doing now!” It isn’t, of course. The majority of CEOs report little or no return on investment from AI, with a study of 6,000 CEOs across the US, UK, Germany and Australia finding that “more than 80% [detected] no discernable impact from AI on either employment or productivity.” Nevertheless, you read “GPU” and “North Dakota” and you think “wow! That’s a place I know, and I know that GPUs power AI!” I do know a GPU cluster in North Dakota — CoreWeave’s cluster with Applied Digital, which has debt so severe that it loses both companies money even if they have the capacity rented out 24/7. But let’s not let facts get in the way of a poorly-written story.

I don’t need to go line-by-line — mostly because I’ll end up writing a legally-actionable threat — but I need you to know that most of this piece’s arguments come down to magical thinking and utterly empty prose. For example, how does AI take over the entire economy? That’s right, it just gets better. No need to discuss anything happening today. Even AI 2027 had the balls to make stuff up about “OpenBrain” or whatever. This piece literally just says stuff, including one particularly-egregious lie:

This is a complete and utter lie. A bald-faced lie.
This is not something that Claude Code can do. The fact that we have major media outlets quoting this piece suggests that those responsible for explaining how things work don’t actually bother to do the work to find out, and it’s both a disgrace and an embarrassment for the tech and business media that these lies continue to be peddled.

I’m now going to quote part of my upcoming premium piece (the Hater’s Guide To Private Equity, out Friday), because I think it’s time we talked about what Claude Code actually does. I’ve worked in or around SaaS since 2012, and I know the industry well. I may not be able to code, but I take the time to speak with software engineers so that I understand what things actually do and how “impressive” they are. Similarly, I make the effort to understand the underlying business models in a way that I’m not sure everybody else is, and if I’m wrong, please show me an analysis of the financial condition of OpenAI or Anthropic from a booster. You won’t find one, because they’re not interested in interacting with reality.

So, despite all of this being very obvious, it’s clear that the markets and an alarming number of people in the media simply do not know what they are talking about, or are intentionally avoiding thinking about it. The “AI replaces software” story is literally “Anthropic has released a product and now the related industry is selling off” — such as when it launched a cybersecurity tool that could check for vulnerabilities (a product that has existed in some form for nearly a decade), causing a sell-off in cybersecurity stocks like CrowdStrike. You know, the company that had a faulty bit of code cause a global cybersecurity incident that lost the Fortune 500 billions, and resulted in Delta Air Lines having to cancel over 1,200 flights over a period of several days.
There is no rational basis for anything about this sell-off other than that our financial media and markets do not appear to understand the very basic things about the stuff they invest in. Software may seem complex, but (especially in these cases) it’s really quite simple: investors are conflating “an AI model can spit out code” with “an AI model can create the entire experience of what we know as ‘software,’ or is close enough that we have to start freaking out.” This is thanks to the intentionally-deceptive marketing peddled by Anthropic and validated by the media. In a piece from September 2025, Bloomberg reported that Claude Sonnet 4.5 could “code on its own for up to 30 hours straight,” a statement directly from Anthropic repeated by other outlets that added that it did so “on complex, multi-step tasks,” none of which were explained. The Verge, however, added that apparently Anthropic “coded a chat app akin to Slack or Teams,” and no, you can’t see it, or know anything about how much it costs or its functionality. Does it run? Is it useful? Does it work in any way? What does it look like? We have absolutely no proof this happened other than Anthropic saying it, but because the media repeated it, it’s now a fact. As I discussed last week, Anthropic’s primary business model is deception, muddying the waters of what’s possible today and what might be possible tomorrow through a mixture of flimsy marketing statements and chief executive Dario Amodei’s doomerist lies about all white collar labor disappearing. Anthropic tells lies of obfuscation and omission. Anthropic exploits bad journalism, ignorance and a lack of critical thinking. As I said earlier, the “wow, Claude Code!” articles are mostly from captured boosters and people that do not actually build software being amazed that it can burp up its training data and do an impression of software engineering.
And even if we believe the idea that Spotify’s best engineers are not writing any code, I have to ask: to what end? Is Spotify shipping more software? Is the software better? Are there more features? Are there fewer bugs? What are the engineers doing with the time they’re saving? A study from METR last year found that engineers using LLM coding tools were actually 19% slower, despite believing they were 24% faster. I also think we need to really think deeply about how, for the second time in a month, the markets and the media have had a miniature shitfit based on blogs that tell lies using fan fiction. As I covered in my annotations of Matt Shumer’s “Something Big Is Happening,” the people that are meant to tell the general public what’s happening in the world appear to be falling for ghost stories that confirm their biases or investment strategies, even if said stories are full of half-truths and outright lies. I am despairing a little. When I see Matt Shumer on CNN or hear from the head of a PE firm about Citrini Research, I begin to wonder whether everybody got where they were not through any actual work but by making the right noises. This is the grifter economy, and the people that should be stopping them are asleep at the wheel. NVIDIA beat estimates and raised expectations, as it has quarter after quarter. People were initially excited, then started reading the 10-K and seeing weird little things that stood out. $68.1 billion in revenue is a lot of money! That’s what you should expect from a company that is the single vendor in the only thing anybody talks about. Hyperscaler revenue accounted for slightly more than 50% of NVIDIA’s data center revenue. As I wrote about last year, NVIDIA’s diversified revenue — that’s the revenue that comes from companies that aren’t in the magnificent 7 — continues to collapse.
While data center revenue was $62.3 billion, 50% ($31.15 billion) was taken up by hyperscalers…and because we don’t get a 10-Q for the fourth quarter, we don’t get a breakdown of how many individual customers made up that quarter’s revenue. Boo! It is both peculiar and worrying that 36% (around $77.7 billion) of its $215.938 billion in FY2026 revenue came from two customers. If I had to guess, they’re likely Foxconn or Quanta Computer, two large Taiwanese ODMs (Original Design Manufacturers) that build the servers for most hyperscalers. If you want to know more, I wrote a long premium piece that goes into it (among the ways in which AI is worse than the dot com bubble). In simple terms, when a hyperscaler buys GPUs, they go straight to one of these ODMs to put them into servers. This isn’t out of the ordinary, but I keep an eye on the ODM revenues (which are published every month) to see if anything shifts, as I think it’ll be one of the first signs that things are collapsing. NVIDIA’s inventories continue to grow, sitting at over $21 billion (up from around $19 billion last quarter). Could be normal! Could mean stuff isn’t shipping. NVIDIA has now agreed to $27 billion in multi-year-long cloud service agreements — literally renting its GPUs back from the people it sells them to — with $7 billion of that expected in its FY2027 (Q1 FY2027 will report in May 2026). For some context, CoreWeave (which reports FY2025 earnings today, February 26) gave guidance last November that it expected its entire annual revenue to be between $5 billion and $5.15 billion. CoreWeave is arguably the largest AI compute vendor outside of the hyperscalers. If there was significant demand, none of this would be necessary.
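The concentration figures above are easy to sanity-check. Here’s a quick back-of-the-envelope sketch using only the numbers cited in this section (not NVIDIA’s actual filings, which break things out differently):

```python
# Sanity-check of the NVIDIA revenue-concentration figures cited above, in $bn.
data_center_rev = 62.3        # quarterly data center revenue
hyperscaler_share = 0.50      # "slightly more than 50%" per the commentary
fy2026_rev = 215.938          # full FY2026 revenue
two_customer_share = 0.36     # share attributed to just two direct customers

hyperscaler_rev = data_center_rev * hyperscaler_share
two_customer_rev = fy2026_rev * two_customer_share

print(f"Hyperscaler data center revenue: ${hyperscaler_rev:.2f}bn")  # $31.15bn
print(f"Two-customer FY2026 revenue: ${two_customer_rev:.1f}bn")     # $77.7bn
```

Both results match the figures in the text, which is the point: this isn’t exotic analysis, just multiplication anyone covering the company could do.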
NVIDIA “invested” $17.5bn in AI model makers and other early-stage AI startups, and made a further $3.5bn in land, power, and shell guarantees to “support the build-out of complex datacenter infrastructures.” In total, it spent $21bn propping up the ecosystem that, in turn, feeds billions of dollars into its coffers. NVIDIA’s long-term supply and capacity obligations soared from $30.8bn to $95.2bn, largely because NVIDIA’s latest chips are extremely complex and require TSMC to make significant investments in hardware and facilities, and it’s unwilling to do that without receiving guarantees that it’ll make its money back. NVIDIA expects these obligations to grow. NVIDIA’s accounts receivable (as in goods that have been shipped but are yet to be paid for) now sits at $38.4 billion, of which 56% ($21.5 billion) is from three customers. This is turning into a very involved and convoluted process! It turns out that it's pretty difficult to actually raise $100 billion. This is a big problem, because OpenAI needs $655 billion in the next five years to pay all its bills, and loses billions of dollars a year. If OpenAI is struggling to raise $100 billion today, I don't see how it's possible it survives. If you're to believe reports, OpenAI made $13.1 billion in revenue in 2025 on $8 billion of losses, but remember, my own reporting from last year said that OpenAI only made around $4.329 billion through September 2025 with $8.67 billion of inference costs alone. It is kind of weird that nobody seems to acknowledge my reporting on this subject. I do not see how OpenAI survives.

- It coded for 30 hours [from which you are meant to intimate the code was useful or good and that these hours were productive].
- It made a Microsoft Teams competitor [that you are meant to assume was full-featured and functional like Teams or Slack, or…functional? And they didn’t even have to prove it by showing you it].
- It was able to write uninterruptedly [which you assume was because it was doing good work that didn’t need interruption].


Premium: The Hater's Guide to Anthropic

In May 2021, Dario Amodei and a crew of other former OpenAI researchers formed Anthropic and dedicated themselves to building the single-most-annoying Large Language Model company of all time. Pardon me, sorry, I mean safest, because that’s the reason Amodei and his crew claimed they left OpenAI: I’m also being a little sarcastic. Anthropic, a “public benefit corporation” (a company that is quasi-legally required to sometimes sort of focus on goals that aren’t profit driven, and in this case, one that chose to incorporate in Delaware as opposed to California, where it would have actual obligations), is the only meaningful competitor to OpenAI, one that went from (allegedly) making about $116 million in March 2025 to making $1.16 billion in February 2026, in the very same month it raised $30 billion from thirty-seven different investors, including a “partial” investment from NVIDIA and Microsoft announced in November 2025 that was meant to be “up to” $15 billion. Anthropic’s models regularly dominate the various LLM model leaderboards, and its Claude Code command-line interface tool (IE: a terminal you type stuff into) has become quite popular with developers, who either claim it writes every single line of their code, or that it’s vaguely useful in some situations. CEO Dario Amodei predicted last March that in six months AI would be writing 90% of code, and when that didn’t happen, he simply made the same prediction again in January, because, and I do not say this lightly, Dario Amodei is full of shit. You see, Anthropic has, for the best part of five years, been framing itself as the trustworthy, safe alternative to OpenAI, focusing more on its paid offerings and selling to businesses (realizing that the software sales cycle usually focuses on dimwitted c-suite executives rather than those who actually use the products), as opposed to building a giant, expensive free product that lots of people use but almost nobody pays for.
Anthropic, separately, has avoided following OpenAI in making gimmicky (and horrendously expensive) image and video generation tools, which I assume is partly due to the cost, but also because neither of those things are likely something that an enterprise actually cares about. Anthropic also caught on early to the idea that coding was the one use case that Large Language Models naturally fit: Anthropic has held the lead in coding LLMs since the launch of June 2024’s Claude Sonnet 3.5, and as a story from The Information from December 2024 explained, this terrified OpenAI: Cursor would, of course, eventually go on to become its own business, raising $3.2 billion in 2025 to compete with Claude Code — a product made by Anthropic, the very company Cursor pays to offer its models through Cursor’s own AI coding product. Cursor is Anthropic’s largest customer, with the second being Microsoft’s Github Copilot. I have heard from multiple sources that Cursor is spending more than 100% of its revenue on API calls, with the majority going to Anthropic and OpenAI, both of whom now compete with Cursor. Anthropic sold itself as the stable, thoughtful, safety-oriented AI lab, with Amodei himself saying in an August 2023 interview that he purposefully avoided the limelight: A couple of months later in October 2023, Amodei joined The Logan Bartlett Show, saying that he “didn’t like the term AGI” because, and I shit you not, “...because we’re closer to the kinds of things that AGI is pointing at,” making it “no longer a useful term.” He said that there was a “future point” where a model could “build Dyson spheres around the sun and calculate the meaning of life,” before rambling incoherently and suggesting that these things were both very close and far away at the same time.
He also predicted that “no sooner than 2025, maybe 2026,” AI would “really invent new science.” This was all part of Anthropic’s use of well-meaning language to tell a story that said “you should be scared” and “only Anthropic will save you.” In July 2023, Amodei spoke before a senate committee about AI oversight and regulation, starting sensibly (IE: if AI does become powerful, we should have regulations to mitigate those problems) and eventually veering aggressively into marketing slop: This is Amodei’s favourite marketing trick — using a vague timeline (2-3 years) to suggest that something vaguely bad that’s also good for Anthropic is just around the corner, but managed correctly, could also be good for society (a revolution in technology and science! But also, havoc!). Only Dario has the answers (regulations that start with “securing the AI supply chain,” meaning “please stop China from competing”). In retrospect, this was the most honest that he’d ever be. In 2024, Amodei would quickly learn that he loved personalizing companies, and that destroying his soul fucking rocked. In October 2024, Amodei put out a 15,000-word-long blog — ugh, AI is coming for my job! — where he’d say that Anthropic needed to “avoid the perception of propaganda” while also saying that “as early as 2026 (but there are also ways it could take much longer),” AI would be smarter than a Nobel Prize winner, autonomously able to complete weeks-long tasks, and be the equivalent of a “country of geniuses in a datacenter.” This piece, like all of his proclamations, had two goals: generating media coverage and investment. Amodei is a deeply dishonest man, couching “predictions” based on nothing in terms like “maybe,” “possibly,” or “as early as,” knowing that the media will simply ignore those words and report what he says as a wise, evidence-based fact.
Amodei (and by extension Anthropic) nakedly manipulates the media by having them repeat these things without analysis or counterpoints — such as that “AI could surpass almost all humans at almost everything shortly after 2027” (which I’ll get back to in a bit). He knows that these things aren’t true. He knows he doesn’t have any proof. And he knows that nobody will ask, and that his bullshit will make for a sexy traffic-grabbing headline. To be clear, that statement was made three months after Amodei’s essay said that AI labs needed to avoid “the perception of propaganda.” Amodei is a con artist that knows he can’t sell Anthropic’s products by explaining what they actually do, and everybody is falling for it. And, almost always, these predictions match up with Anthropic’s endless fundraising. On September 23, 2024, The Information reported that Anthropic was raising a round at a $30-$40 billion valuation, and on October 12, 2024, Amodei pooped out Machines of Loving Grace with the express position that he and Anthropic “had not talked that much about powerful AI’s upsides.” A month later on November 22, 2024, Anthropic would raise another $4 billion from Amazon, a couple of weeks after doing a five-hour-long interview with Lex Fridman in which he’d say that “someday AI would be better at everything.” On November 27, 2024, Amodei would do a fireside chat at Eric Newcomer’s Cerebral Valley AI Summit where he’d say that in 2025, 2026, or 2027 (yes, he was that vague), AI could be as “good as a Nobel Prize winner, polymathic across many fields,” and have “agency [to] act on its own for hours or days,” the latter of which deliberately laid the foundation for one of Anthropic’s greatest lies: that AI can “work uninterrupted” for periods of time, leaving the reader or listener to fill in the (unsaid) gap of “...and actually create useful stuff.” Amodei crested 2024 with an interview with the Financial Times, and let slip what I believe will eventually become Anthropic’s
version of WeWork’s Community-Adjusted EBITDA, by which I mean “a way to lie and suggest profitability when a company isn’t profitable”: Yeah man, if a company made $300 million in revenue and spent $1 billion. No amount of DarioMath about how a model “costs this much and makes this much revenue” changes the fact that profitability is when a company makes more money than it spends. On January 5, 2025, Forbes would report that Anthropic was working on a $60 billion round that would make Amodei, his sister Daniela, and five other cofounders billionaires. Anyway. At Davos on January 21, 2025, Amodei said that he was “more confident than ever” that we’re “very close” to “powerful capabilities,” defined as “systems that are better than almost all humans at almost all tasks,” citing his long, boring essay. A day later, Anthropic would raise another $1 billion from Google. On January 27, 2025, he’d tell Economist editor-in-chief Zanny Minton Beddoes that AI would get “as good and eventually better” at thinking as human beings, and that the ceiling of what models could do was “well above humans.” On February 18, 2025, he’d tell Beddoes that we’d get a model “...that can do everything a human can do at the level of a Nobel laureate across many fields” by 2026 or 2027, and that we’re “on the eve of something that has great challenges” that would “upend the balance of power” because we’d have “10 million people smarter than any human alive…” oh god, I’m not fucking writing it out. I’m sorry. It’s always the same shit. The models are people, we’re so scared. On February 28, 2025, Amodei would join the New York Times’ Hard Fork, saying that he wanted to “slow down authoritarians,” and that “public officials and leaders at companies” would “look back at this period [where humanity would become a “post-powerful AI society that co-exists with powerful intelligences”]” and “feel like a fool,” and that that was the number one goal of these people.
Amodei would also add that he had been in the field for 10 years — something he loves to say! — and that there was a 70-80% chance that we will “get a very large number of AI systems that are much smarter than humans at almost everything” before the end of the decade. Three days later, Anthropic would raise $3.5 billion at a $61.5 billion valuation . Beneath the hype, Anthropic is, like OpenAI, a company making LLMs that can generate code and text, and that can interpret data from images and videos, all while burning billions of dollars and having no path to profitability. Per The Information , Anthropic made $4.5 billion in revenue and lost $5.2 billion generating it, and based on my own reporting from last year , costs appear to scale linearly above revenue. Some will argue that the majority of Anthropic’s losses ($4.1 billion) were from training, and I think it’s time we had a chat about what “training” means, especially as Anthropic plans to spend $100 billion on it in the next four years . Per my piece from last week: In an interview on the Dwarkesh Podcast , Amodei even admitted that if you “never train another model” you “don’t have any demand because you’ll fall behind.” Training is opex, and should be part of gross margins. It’s time we had an honest conversation about Anthropic.  Despite its positioning as the trustworthy, “nice” AI lab, Anthropic is as big, ugly and wasteful as OpenAI, and Dario Amodei is an even bigger bullshit artist than Sam Altman. 
It burns just as much of its revenue on inference (62%, or $2.79 billion on $4.5 billion of revenue, versus OpenAI’s 58%, or $2.5 billion on $4.3 billion of revenue in the first half of 2025, if you use The Information’s numbers), and shows no sign of any “efficiency” or “cost-cutting.” Worse still, Anthropic continually abuses its users through varying rate limits to juice revenues and user numbers — along with Amodei’s gas-leak-esque proclamations — to mislead the media, the general public, and investors about the financial condition of the company. Based on an analysis of many users’ actual token burn on Claude Code, I believe Anthropic is burning anywhere from $3 to $20 to make $1, and that the product that users are using (and the media is raving about) is not one that Anthropic can actually support long-term. I also see signs that Amodei himself is playing fast and loose with financial metrics in a way that will blow up in his face if Anthropic ever files its paperwork to go public. In simpler terms, Anthropic’s alleged “38% gross margins” are, if we are to believe Amodei’s own words, not the result of “revenue minus COGS” but “how much a model costs and how much revenue it’s generated.” Anthropic is also making promises it can’t keep. It’s promising to spend $30 billion on Microsoft Azure (and an additional “up to one gigawatt”), “tens of billions” on Google Cloud, $21 billion on Google TPUs with Broadcom, “$50 billion on American infrastructure,” as much as $3 billion on Hut8’s data center in Louisiana, and an unknowable (yet likely in the billions) amount of money with Amazon Web Services. Not to worry, Dario also adds that if you’re off by a couple of years on your projections of revenue and ability to pay for compute, it’ll be “ruinous.” I think that he’s right.
Anthropic cannot afford to pay its bills, as the ruinous costs of training — which will never, ever stop — and inference will always outpace whatever spikes of revenue it can garner through media campaigns built on deception, fear-mongering, and an exploitation of reporters unwilling to ask or think about the hard questions.  I see no difference between OpenAI’s endless bullshit non-existent deal announcements and what Anthropic has done in the last few months. Anthropic is as craven and deceptive as OpenAI, and Dario Amodei is as willing a con artist as Altman, and I believe is desperately jealous of his success. And after hours and hours of listening to Amodei talk, I think he is one of the most annoying, vacuous, bloviating fuckwits in tech history. He rambles endlessly, stutters more based on how big a lie he’s telling, and will say anything and everything to get on TV and say noxious, fantastical, intentionally-manipulative bullshit to people who should know better but never seem to learn. He stammers, he blithers, he rambles, he continually veers between “this is about to happen” and “actually it’s far away” so that nobody can say he’s a liar, but that’s exactly what I call a person who intentionally deceives people, even if they couch their lies in “maybes” and “possiblies.”  Dario Amodei fucking sucks, and it’s time to stop pretending otherwise. Anthropic has no more soul or ethics than OpenAI — it’s just done a far better job of conning people into believing otherwise. This is the Hater’s Guide To Anthropic, or “DarioWare: Get It Together.”  Thanks to sites like Stack Overflow and Github, as well as the trillions of lines of open source code in circulation, there’s an absolute fuckton of material to train the model on. Software engineers are data perverts (I mean this affectionately), and will try basically anything to speed up, automate or “add efficiency” to their work. Software engineering is a job that most members of the media don’t understand. 
Software engineers never shut the fuck up when they’ve found something new that feels good. Software engineers will spend hours defending the honour of any corporation that courts them. Software engineers will at times overestimate their capabilities, as demonstrated by the METR study that found that developers believed they were 24% faster when using LLMs, when in fact coding models made them 19% slower. This, naturally, makes them quite defensive of the products they use, regardless of whether they’re actually seeing improvements.


Premium: The AI Data Center Financial Crisis

Since the beginning of 2023, big tech has spent over $814 billion in capital expenditures, with a large portion of that going towards meeting the demands of AI companies like OpenAI and Anthropic. Big tech has spent big on GPUs, power infrastructure, and data center construction, using a variety of financing methods to do so, including (but not limited to) leasing. And the way they’re going about structuring these finance deals is growing increasingly bizarre. I’m not merely talking about Meta’s curious arrangement for its facility in Louisiana, though that certainly raised some eyebrows. Last year, Morgan Stanley published a report that claimed hyperscalers were increasingly relying on finance leases to obtain the “powered shell” of a data center, rather than the more common method of operating leases. The key difference here is that finance leases, unlike operating leases, are effectively long-term loans where the borrower is expected to retain ownership of the asset (whether that be a GPU or a building) at the end of the contract. Traditionally, these types of arrangements have been used to finance the bits of a data center that have a comparatively limited useful life — like computer hardware, which grows obsolete with time. The spending to date is, as I’ve written about again and again, astronomical considering the lack of meaningful revenue from generative AI. A year straight of manufacturing consent for Claude Code as the be-all-end-all of software development has produced putrid results for Anthropic — $4.5 billion of revenue and $5.2 billion of losses before interest, taxes, depreciation and amortization according to The Information — with (per WIRED) Claude Code accounting for only around $1.1 billion in annualized revenue in December, or around $92 million in monthly revenue.
This was in a year where Anthropic raised a total of $16.5 billion (with $13 billion of that coming in September 2025), and it’s already working on raising another $25 billion. This might be because it promised to buy $21 billion of Google TPUs from Broadcom, or because Anthropic expects AI model training to cost over $100 billion in the next 3 years. And it just raised another $30 billion — albeit with the caveat that some of said $30 billion came from previously-announced funding agreements with Nvidia and Microsoft, though how much remains a mystery. According to Anthropic’s new funding announcement, Claude Code’s run rate has grown to “over $2.5 billion” as of February 12, 2026 — or around $208 million a month. Based on literally every bit of reporting about Anthropic, costs have likely spiked along with revenue, which hit $14 billion annualized ($1.16 billion in a month) as of that date. I have my doubts, but let’s put them aside for now. Anthropic is also in the midst of one of the most aggressive and dishonest public relations campaigns in history. While its Chief Commercial Officer Paul Smith told CNBC that it was “focused on growing revenue” rather than “spending money,” it’s currently making massive promises — tens of billions on Google Cloud, “$50 billion in American AI infrastructure,” and $30 billion on Azure. And despite Smith saying that Anthropic was less interested in “flashy headlines,” Chief Executive Dario Amodei has said, in the last three weeks, that “almost unimaginable power is potentially imminent,” that AI could replace all software engineers in the next 6-12 months, that AI may (it’s always fucking may) cause “unusually painful disruption to jobs,” and wrote a 19,000-word essay — I guess AI is coming for my job after all!
— where he repeated his noxious line that “we will likely get a century of scientific and economic progress compressed in a decade.” Yet arguably the most dishonest part is this word “training.” When you read “training,” you’re meant to think “oh, it’s training for something, this is an R&D cost,” when “training LLMs” is as consistent a cost as inference (the creation of the output) or any other kind of maintenance. While most people know about pretraining — the shoving of large amounts of data into a model (this is a simplification, I realize) — in reality a lot of the current spate of models use post-training, which covers everything from small tweaks to model behavior to full-blown reinforcement learning where experts reward or punish particular responses to prompts. To be clear, all of this is well-known and documented, but the nomenclature of “training” suggests that it might stop one day, versus the truth: training costs are increasing dramatically, and “training” covers anything from training new models to bug fixes on existing ones. And, more fundamentally, it’s an ongoing cost — something that’s an essential and unavoidable cost of doing business. Training is, for an AI lab like OpenAI and Anthropic, as common (and necessary) a cost as those associated with creating outputs (inference), yet it’s kept entirely out of gross margins: This is inherently deceptive. While one could argue that R&D is not considered in gross margins, training isn’t really R&D — gross margins generally include the raw materials necessary to build something, and training is absolutely part of the raw costs of running an AI model. Direct labor and parts are considered part of the calculation of gross margin, and spending on training — both the data and the process of training itself — is absolutely meaningful, and to leave it out is an act of deception.
Anthropic’s 2025 gross margins were 40% — or 38% if you include free users of Claude — on inference costs of $2.7 (or $2.79) billion, with training costs of around $4.1 billion. What happens if you add training costs into the equation? Let’s work it out! Training is not an up-front cost, and treating it as one only serves to help Anthropic cover for its wretched business model. Anthropic (like OpenAI) can never stop training, ever, and to pretend otherwise is misleading. This is not the cost just to “train new models” but to maintain current ones, build new products around them, and many other things that are direct, impossible-to-avoid components of COGS. They’re manufacturing costs, plain and simple. Anthropic projects to spend $100 billion on training in the next three years, which suggests it will spend — proportional to its current costs — around $32 billion on inference in the same period, on top of $21 billion of TPU purchases, on top of $30 billion on Azure (I assume in that period?), on top of “tens of billions” on Google Cloud. When you actually add these numbers together (assuming “tens of billions” is $15 billion), that’s $200 billion. Anthropic (per The Information’s reporting) tells investors it will make $18 billion in revenue in 2026 and $55 billion in 2027 — year-over-year increases of 400% and 305% respectively — and is already raising $25 billion after having just closed a $30bn deal. How does Anthropic pay its bills? Why does outlet after outlet print these fantastical numbers without doing the maths of “how does Anthropic actually get all this money?” Because even with their ridiculous revenue projections, this company is still burning cash, and when you start to actually do the maths around anything in the AI industry, things become genuinely worrying. You see, every single generative AI company is unprofitable, and appears to be getting less profitable over time.
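To make the “let’s work it out” above concrete, here’s a minimal sketch using only the figures already cited ($4.5 billion of revenue, $2.79 billion of inference, roughly $4.1 billion of training) — the exact result shifts slightly depending on whether you use the $2.7 billion or $2.79 billion inference figure:

```python
# Anthropic 2025, $bn, using the figures cited above.
revenue = 4.5
inference = 2.79   # the $2.7bn figure gives a similar result
training = 4.1     # reported training spend

# Margin as Anthropic reports it (inference-only COGS)...
margin_excl_training = (revenue - inference) / revenue
# ...versus margin if training is treated as COGS too.
margin_incl_training = (revenue - inference - training) / revenue

print(f"Gross margin excluding training: {margin_excl_training:.0%}")  # 38%
print(f"Gross margin including training: {margin_incl_training:.0%}")  # -53%
```

In other words, count training as a cost of goods sold and the much-touted 38% gross margin flips to roughly negative 53% — the company spends about $1.53 in inference and training for every dollar of revenue.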
Both The Information and Wall Street Journal reported the same bizarre statement in November — that Anthropic would “turn a profit more quickly than OpenAI,” with The Information saying Anthropic would be cash flow positive in 2027 and the Journal putting the date at 2028, only for The Information to report in January that 2028 was the more-realistic figure. If you’re wondering how, the answer is “Anthropic will magically become cash flow positive in 2028”: This is also the exact same logic as OpenAI, which will, per The Information in September, also, somehow, magically turn cashflow positive in 2030: Oracle, which has a 5-year-long, $300 billion compute deal with OpenAI that it lacks the capacity to serve and that OpenAI lacks the cash to pay for, also appears to have the same magical plan to become cash flow positive in 2029: Somehow, Oracle’s case is the most legit, in that theoretically at that time it would be done, I assume, paying the $38 billion it’s raising for Stargate Shackelford and Wisconsin, but said assumption also hinges on the idea that OpenAI finds $300 billion somehow. It also relies upon Oracle raising more debt than it currently has — which, even before the AI hype cycle swept over the company, was a lot. As I discussed a few weeks ago in the Hater’s Guide To Oracle, a megawatt of data center IT load generally costs (per Jerome Darling of TD Cowen) around $12-14m in construction (likely more due to skilled labor shortages, supply constraints and rising equipment prices) and $30m a megawatt in GPUs and associated hardware. In plain terms, Oracle (and its associated partners) need around $189 billion to build the 4.5GW of Stargate capacity to make the revenue from the OpenAI deal, meaning that it needs around another $100 billion once it raises $50 billion in combined debt, bonds, and printing new shares by the end of 2026.
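The $189 billion figure falls straight out of the per-megawatt costs above. A quick sketch (using the low end of TD Cowen’s construction estimate; the real number is likely higher, for the reasons given above):

```python
# Build-out cost for 4.5GW of Stargate capacity, per-MW figures cited above.
capacity_mw = 4_500            # 4.5GW of IT load
construction_per_mw = 12e6     # $12m/MW construction (low end of $12-14m)
hardware_per_mw = 30e6         # $30m/MW in GPUs and associated hardware

total = capacity_mw * (construction_per_mw + hardware_per_mw)
print(f"Total build-out cost: ${total / 1e9:.0f} billion")  # $189 billion
```

At the $14m/MW high end, the total climbs past $198 billion — either way, an order of magnitude more than the $18 billion bond sale Oracle has actually raised against it.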
I will admit I feel a little crazy writing this all out, because it’s somehow a fringe belief to do the very basic maths and say “hey, Oracle doesn’t have the capacity and OpenAI doesn’t have the money.” In fact, nobody seems to want to really talk about the cost of AI, because it’s much easier to say “I’m not a numbers person” or “they’ll work it out.”

This is why in today’s newsletter I am going to lay out the stark reality of the AI bubble, and debut a model I’ve created to measure the actual, real costs of an AI data center. While my methodology is complex, my conclusions are simple: running AI data centers is, even when you remove the debt required to stand them up, a mediocre business that is vulnerable to basically any change in circumstances.

Based on hours of discussions with data center professionals, analysts and economists, I have calculated that in most cases, the average AI data center has gross margins of somewhere between 30% and 40% — margins that decay rapidly for every day, week, or month you take to put a data center into operation. This is why Oracle has negative 100% margins on NVIDIA’s GB200 chips — because the burdensome up-front cost of building AI data centers (GPUs, servers, and other associated hardware) leaves you billions of dollars in the hole before you even start serving compute, after which you’re left to contend with taxes, depreciation, financing, and the cost of actually powering the hardware.

Yet things sour further when you face the actual financial realities of these deals — and the debt associated with them.

Based on my current model of the Stargate Abilene data center, Oracle likely plans to make around $11 billion in revenue a year from its 1.2GW of gross capacity (or around 880MW of critical IT).
While that sounds good, when you add things like depreciation, electricity, colocation costs of $1 billion a year from Crusoe, opex, and the myriad other costs, its margins sit at a stinkerific 27.2% — and that’s assuming OpenAI actually pays, on time, in a reliable way.

Things only get worse when you factor in the cost of debt. While Oracle has funded Abilene using a mixture of bonds and existing cashflow, it very clearly has yet to receive the majority of the $25 billion+ in GPUs and associated hardware (with only 96,000 GPUs “delivered”), meaning that it likely bought them out of its $18 billion bond sale from last September. If we run that maths, Oracle is paying a little less than $963 million a year (per the terms of the bond sale) whether or not a single GPU is even turned on, leaving us with a net margin of 22.19%... and this is assuming OpenAI pays every single bill, every single time, and there are absolutely no delays.

These delays are also very, very expensive. Based on my model, if we assume that 100MW of critical IT load is operational (roughly two buildings and 100,000 GB200s) but has yet to start generating revenue, Oracle is burning, without depreciation (EDITOR’S NOTE: sorry! This previously said depreciation was a cash expense and was included in this number, which it wasn’t, and it’s correct in the model!), around $4.69 million a day in cash. I have also confirmed with sources in Abilene that there is no chance that Stargate Abilene is fully operational in 2026.

I will admit I’m quite disappointed that the media at large has mostly ignored this story. Limp, cautious “are we in an AI bubble?” conversations are insufficient to deal with the potential for collapse we’re facing. Today, I’m going to dig into the reality of the costs of AI, and explain in gruesome detail exactly how easily these data centers can rapidly approach insolvency in the event that their tenants fail to pay.
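For a sense of how relentless the debt service alone is (and to be clear, this is just the bond-payment slice, not my full cash-burn model):

```python
# Oracle's bond payments accrue whether or not a single GPU is turned on.
# This is only the debt-service slice of the daily burn, not the full
# model's $4.69m/day figure, which also covers power, colocation and opex.

annual_debt_service = 963_000_000   # per the terms of the bond sale
daily = annual_debt_service / 365
print(f"Debt service alone: ${daily / 1e6:.2f} million a day")  # $2.64 million a day
```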
The chain of pain is real, and today I’m going to explain how easily it breaks.

If Anthropic’s gross margin was 38% in 2025, that means its COGS (cost of goods sold) was $2.79 billion. If we add training, this brings COGS to $6.89 billion, leaving us with -$2.39 billion of gross profit on $4.5 billion in revenue, a negative 53% gross margin. AI startups are all unprofitable, and do not appear to have a path to sustainability.

AI data centers are being built in anticipation of demand that doesn’t exist, and will only exist if AI startups — which are all unprofitable — can afford to pay them. Oracle, which has committed to building 4.5GW of data centers, is burning cash every day that OpenAI takes to set up its GPUs, and when it starts making money, it does so from a starting position of billions and billions of dollars in debt.

Margins are low throughout the entire stack of AI data center operators — from landlords like Applied Digital to compute providers like CoreWeave — thanks to the billions in debt necessary to fund both construction and the IT hardware that makes them run, putting both parties in a hole that can only be filled with revenues that come from either hyperscalers or AI startups. In a very real sense, the AI compute industry is dependent on AI “working out,” because if it doesn’t, every single one of these data centers will become a burning hole in the ground.
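The Anthropic maths above, written out so you can check my arithmetic:

```python
# Anthropic's 2025 gross margin once training is treated as a cost of
# goods sold, in billions of dollars (figures from the piece above).

revenue = 4.5
inference_cogs = 2.79    # implied by the 38% gross margin
training = 4.1

cogs = inference_cogs + training       # $6.89 billion
gross_profit = revenue - cogs          # -$2.39 billion
margin = gross_profit / revenue        # about -53%
print(f"COGS: ${cogs:.2f}bn, gross profit: ${gross_profit:.2f}bn, margin: {margin:.0%}")
```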


Premium: The Hater's Guide To Microsoft

Have you ever looked at something too long and felt like you were sort of seeing through it? Has anybody actually looked at a company this much in a way that wasn’t some sort of obsequious profile of a person who worked there? I don’t mean this as a way to fish for compliments — this experience is just so peculiar, because when you look at them hard enough, you begin to wonder why everybody isn’t just screaming all the time.

Yet I really do enjoy it. When you push aside all the marketing and the interviews and all that and stare at what a company actually does and what its users and employees say, you really get a feel for the guts of a company. The Hater’s Guides are a lot of fun, and I’m learning all sorts of things about the ways in which companies try to hide their nasty little accidents and proclivities. Today, I focus on one of the largest.

In the last year I’ve spoken to over a hundred different tech workers, and the ones I hear from most consistently are the current and former victims of Microsoft, a company with a culture in decline, in large part thanks to its obsession with AI. Every single person I talk to about this company has venom on their tongue, whether they’re a regular user of Microsoft Teams or somebody who was unfortunate enough to work at the company any time in the last decade.

Microsoft exists as a kind of dark presence over business software and digital infrastructure. You inevitably have to interact with one of its products — maybe it’s because somebody you work with uses Teams, maybe it’s because you’re forced to use SharePoint, or perhaps you’re suffering at the hands of PowerBI — because Microsoft is the king of software sales. It exists entirely to seep into the veins of an organization and force every computer to use Microsoft 365, or sit on effectively every PC you use, forcing you to interact with some sort of branded content every time you open your start menu.
This is a direct result of the aggressive monopolies that Microsoft built over effectively every aspect of using a computer, starting by throwing its weight around in the 80s to crowd out potential competitors to MS-DOS and eventually moving into everything including cloud compute, cloud storage, business analytics, video editing, and console gaming, and I’m barely a third of the way through the list of products.

Microsoft uses its money to move into new markets, uses aggressive sales to build long-term contracts with organizations, and then lets its products fester until it’s forced to make them better before everybody leaves, the best example being the recent performance-focused move to “rebuild trust in Windows” in response to the upcoming launch of Valve’s competitor to the Xbox (and Windows gaming in general), the Steam Machine.

Microsoft is a company known for two things: scale and mediocrity. It’s everywhere, its products range from “okay” to “annoying,” and virtually every one of its products is a clone of something else. And nowhere is that mediocrity more obvious than in its CEO.

Since taking over in 2014, CEO Satya Nadella has steered this company out of the darkness caused by aggressive possible chair-thrower Steve Ballmer, transforming its culture from the evils of stack ranking to encouraging a “growth mindset” where you “believe your most basic abilities can be developed through dedication and hard work.” Workers are encouraged to be “learn-it-alls” rather than “know-it-alls,” all part of a weird cult-like pseudo-psychology that doesn’t really ring true if you actually work at the company.

Nadella sells himself as a calm, thoughtful and peaceful man, yet in reality he’s one of the most merciless layoff hogs in known history.
He laid off 18,000 people in 2014, months after becoming CEO; 7,800 people in 2015; 4,700 people in 2016; 3,000 people in 2017; “hundreds” of people in 2018; took a break in 2019; every single one of the workers in its physical stores in 2020, along with everybody who worked at MSN; took a break in 2021; 1,000 people in 2022; 16,000 people in 2023; 15,000 people in 2024; and 15,000 people in 2025.

Despite calling for a “referendum on capitalism” in 2020 and suggesting companies “grade themselves” on the wider economic benefits they bring to society, Nadella has overseen an historic surge in Microsoft’s revenues — from around $83 billion a year when he joined in 2014 to around $300 billion on a trailing 12-month basis — while acting in a way that’s callously indifferent to employees and customers alike.

At the same time, Nadella has overseen Microsoft’s transformation from an asset-light software monopolist that most customers barely tolerate to an asset-heavy behemoth that feeds its own margins into GPUs that only lose it money. And it’s that transformation that is starting to concern investors, and raises the question of whether Microsoft is heading towards a painful crash.

You see, Microsoft is currently trying to pull a fast one on everybody, claiming that its investments in AI are somehow paying off despite the fact that it stopped reporting AI revenue in the first quarter of 2025. In reality, the one segment where it would matter — Microsoft Azure, Microsoft’s cloud platform where the actual AI services are sold — is stagnant, all while Redmond funnels virtually every dollar of revenue directly into more GPUs.

Azure sits within Microsoft’s Intelligent Cloud segment, along with server products and enterprise support. Intelligent Cloud represents around 40% of Microsoft’s total revenue, and has done so consistently since FY2022.
For the sake of clarity, here’s how Microsoft describes Intelligent Cloud in its latest end-of-year 10-K filing: “Our Intelligent Cloud segment consists of our public, private, and hybrid server products and cloud services that power modern business and developers.” It’s a big, diverse thing — and Microsoft doesn’t really break things down further from here — but Microsoft makes it clear in several places that Azure is the main revenue driver in this fairly diverse business segment.

Some bright spark is going to tell me that Microsoft said it has 15 million paid 365 Copilot subscribers (which, I add, sits under its Productivity and Business Processes segment), with reporters specifically saying these were corporate seats, a fact I dispute based on the quote from Microsoft’s latest earnings call: at no point does Microsoft say “corporate seat” or “business seat.” “Enterprise Copilot Chat” is a free addition to multiple different Microsoft 365 products, and “Microsoft 365 Copilot” could also refer to Microsoft’s $18-to-$21-a-month addition to Copilot Business, as well as Microsoft’s enterprise $30-a-month plans. And remember: Microsoft regularly does discounts through its resellers to bulk up these numbers.

When Nadella took over, Microsoft had around $11.7 billion in PP&E (property, plant, and equipment). A little over a decade later, that number has ballooned to $261 billion, with the vast majority added since 2020 (when Microsoft’s PP&E sat at around $41 billion).

Also, as a reminder: Jensen Huang has made it clear that GPUs are going to be upgraded on a yearly cycle, guaranteeing that Microsoft’s armies of GPUs regularly hurtle toward obsolescence. Microsoft, like every big tech company, has played silly games with how it depreciates assets, extending the “useful life” of its GPUs so that they depreciate over six years, rather than four.
And while someone less acquainted with corporate accounting might assume that this move is a prudent, fiscally-conscious tactic to reduce spending by using assets for longer, stretching the intervals between their replacements, in reality it’s a handy tactic to disguise the cost of Microsoft’s profligate spending on the balance sheet.

You might be forgiven for thinking that all of this investment was necessary to grow Azure, which is clearly the most important part of Microsoft’s Intelligent Cloud segment. In Q2 FY2020, Intelligent Cloud revenue sat at $11.9 billion on PP&E of around $40 billion, and as of Microsoft’s last quarter, Intelligent Cloud revenue sat at around $32.9 billion on PP&E that has increased by over 650%.

Good, right? Well, not really. Let’s compare Microsoft’s Intelligent Cloud revenue from the last five years: in that time, Microsoft has gone from spending 38% of its Intelligent Cloud revenue on capex to nearly every penny of it (over 94%) in the last six quarters, over the same two and a half years in which Intelligent Cloud has failed to show any real growth.

Things, I’m afraid, get worse. Microsoft announced in July 2025 — the end of its 2025 fiscal year — that Azure made $75 billion in revenue in FY2025. This was, as the previous link notes, the first time that Microsoft actually broke down how much Azure made, having previously simply lumped it in with the rest of the Intelligent Cloud segment. I’m not sure what to read from that, but it’s still not good: Microsoft spent every single penny of its Azure revenue from that fiscal year on capital expenditures of $88 billion and then some, a little under 117% of all Azure revenue to be precise. If we assume Azure regularly represents 71% of Intelligent Cloud revenue, Microsoft has been spending anywhere from half to three-quarters of Azure’s revenue on capex.
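The FY2025 Azure arithmetic, for anyone who wants to check it:

```python
# Microsoft's FY2025 capex against Azure's first-ever disclosed annual
# revenue figure, in billions of dollars.

azure_revenue = 75
capex = 88

print(f"Capex as a share of Azure revenue: {capex / azure_revenue:.0%}")  # 117%
```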
To simplify: Microsoft is spending lots of money to build out capacity on Microsoft Azure (as part of Intelligent Cloud), and the growth of capex is massively outpacing the meager growth it’s meant to be creating.

You know what’s also been growing? Microsoft’s depreciation charges, which grew from $2.7 billion at the beginning of 2023 to $9.1 billion in Q2 FY2026, though I will add that they dropped from $13 billion in Q1 FY2026, and if I’m honest, I have no idea why! Nevertheless, depreciation continues to erode Microsoft’s on-paper profits, growing (much like capex, as the two are connected!) at a much faster rate than any investment in Azure or Intelligent Cloud.

But worry not, traveler! Microsoft “beat” on earnings last quarter, making a whopping $38.46 billion in net income… with $9.97 billion of that coming from recapitalizing its stake in OpenAI.

Similarly, Microsoft has started bulking up its Remaining Performance Obligations (RPOs). See if you can spot the difference between Q1 and Q2 FY26: in Q1, Microsoft reported $398 billion of RPOs with 40% due within 12 months, or $159.2 billion in upcoming revenue; in Q2, $625 billion of RPOs with only 25% due within 12 months, or $156.25 billion.

So, let’s just lay it out: Microsoft’s upcoming revenue dropped between quarters as every single expenditure increased, despite adding over $200 billion in obligations from OpenAI. A “weighted average duration” of 2.5 years somehow reduced Microsoft’s 12-month RPOs.

But let’s be fair and jump back to Q4 FY2025: 40% of $375 billion is $150 billion. Q3 FY25? 40% of $321 billion, or $128.4 billion. Q2 FY25? $304 billion, 40%, or $121.6 billion.

It appears that Microsoft’s revenue is stagnating, even with the supposed additions of $250 billion in spend from OpenAI and $30 billion from Anthropic, the latter of which was announced in November but doesn’t appear to have manifested in these RPOs at all. In simpler terms, OpenAI and Anthropic do not appear to be spending more as a result of any recent deals, and if they are, that money isn’t arriving for over a year.
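Here's the 12-month RPO revenue across those quarters, worked out in one place:

```python
# Revenue due within 12 months implied by Microsoft's RPO disclosures,
# in billions of dollars: (quarter, total RPOs, share due within 12 months).

quarters = [
    ("Q2 FY25", 304, 0.40),
    ("Q3 FY25", 321, 0.40),
    ("Q4 FY25", 375, 0.40),
    ("Q1 FY26", 398, 0.40),
    ("Q2 FY26", 625, 0.25),
]
for name, rpo, share in quarters:
    print(f"{name}: ${rpo * share:.2f} billion due within 12 months")
```

The RPO total balloons between Q1 and Q2 FY26, yet the revenue actually due within a year goes down.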
Much like the rest of AI, every deal with these companies appears to be entirely on paper, likely because OpenAI will burn at least $115 billion by 2029 , and Anthropic upwards of $30 billion by 2028, when it mysteriously becomes profitable two years before OpenAI “does so” in 2030 .  These numbers are, of course, total bullshit. Neither company can afford even $20 billion of annual cloud spend, let alone multiple tens of billions a year, and that’s before you get to OpenAI’s $300 billion deal with Oracle that everybody has realized ( as I did in September ) requires Oracle to serve non-existent compute to OpenAI and be paid hundreds of billions of dollars that, helpfully, also don’t exist. Yet for Microsoft, the problems are a little more existential.  Last year, I calculated that big tech needed $2 trillion in new revenue by 2030 or investments in AI were a loss , and if anything, I think I slightly underestimated the scale of the problem. As of the end of its most recent fiscal quarter, Microsoft has spent $277 billion or so in capital expenditures since the beginning of FY2022, with the majority of them ($216 billion) happening since the beginning of FY2024. Capex has ballooned to the size of 45.5% of Microsoft’s FY26 revenue so far — and over 109% of its net income.  This is a fucking disaster. While net income is continuing to grow, it (much like every other financial metric) is being vastly outpaced by capital expenditures, none of which can be remotely tied to profits , as every sign suggests that generative AI only loses money. While AI boosters will try and come up with complex explanations as to why this is somehow alright, Microsoft’s problem is fairly simple: it’s now spending 45% of its revenues to build out data centers filled with painfully expensive GPUs that do not appear to be significantly contributing to overall revenue, and appear to have negative margins. 
Those same AI boosters will point at the growth of Intelligent Cloud as proof, so let’s do a thought experiment (even though they are wrong): if Intelligent Cloud’s segment growth is a result of AI compute, then the cost of revenue has vastly increased, and the only reason we’re not seeing it is that the increased costs are hitting depreciation first.

You see, Intelligent Cloud is stalling, and while it might be up by 8.8% on an annualized basis (if we assume each quarter of the year will be around $30 billion, that makes $120 billion, so about an 8.8% year-over-year increase from $106 billion), that’s come at the cost of a massive increase in capex ($88 billion across all of FY2025 versus $72 billion in just the first two quarters of FY2026), and gross margins that have deteriorated from 69.89% in Q3 FY2024 to 68.59% in Q2 FY2026. And while operating margins are up, that’s likely due to Microsoft’s increasing use of contract workers and increased recruitment in cheaper labor markets. As I’ll reveal later, Microsoft has used OpenAI’s billions in inference spend to cover up the collapse of the growth of the Intelligent Cloud segment. OpenAI’s inference spend now represents around 10% of Azure’s revenue.

Microsoft, as I discussed a few weeks ago, is in a bind. It keeps buying GPUs, all while waiting for the GPUs it already has to start generating revenue, and every time a new GPU comes online, its depreciation balloons. Capex for GPUs began in earnest in Q1 FY2023 following October’s shipments of NVIDIA’s H100 GPUs, with reports saying that Microsoft bought 150,000 H100s in 2023 (around $4 billion at $27,000 each) and 485,000 H100s in 2024 ($13 billion).
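Those H100 figures check out, assuming roughly $27,000 per GPU:

```python
# Reported Microsoft H100 purchases at an assumed ~$27,000 per GPU.

PRICE_PER_H100 = 27_000   # dollars; an approximation, as noted above
for year, units in [(2023, 150_000), (2024, 485_000)]:
    total = units * PRICE_PER_H100
    print(f"{year}: {units:,} H100s = ${total:,}")
# 2023: 150,000 H100s = $4,050,000,000   (the "around $4 billion")
# 2024: 485,000 H100s = $13,095,000,000  (the "~$13 billion")
```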
These GPUs have yet to provide much meaningful revenue, let alone any kind of profit, with reports suggesting (based on Oracle leaks) that the gross margins of H100s are around 26% and A100s (an older generation launched in 2020) are 9%, for which the technical term is “dogshit.” Somewhere within that pile of capex also lie orders for H200 GPUs, and as of 2024, likely NVIDIA’s B100 (and maybe B200) Blackwell GPUs too.

You may also notice that those GPU expenses are only some portion of Microsoft’s capex, and that’s because Microsoft also spends billions on finance leases and construction costs. What this means in practical terms is that some of this money is going to GPUs that are obsolete in six years, some of it is going to pay somebody else for leased physical space, and some of it is going into building a bunch of data centers that are only useful for putting GPUs in.

And none of this bullshit is really helping the bottom line! Microsoft’s More Personal Computing segment — including Windows, Xbox, Microsoft 365 Consumer, and Bing — has become an increasingly-smaller part of revenue, representing a mere 17.64% of Microsoft’s revenue in FY26 so far, down from 30.25% a mere four years ago.

We are witnessing the consequences of hubris — those of a monopolist that chased out any real value creators from the organization, replacing them with an increasingly-annoying cadre of Business Idiots like career loser Jay Parikh and scummy, abusive timewaster Mustafa Suleyman. Satya Nadella took over Microsoft with the intention of fixing its culture, only to replace the aggressive, loudmouthed Ballmer brand with a poisonous, passive-aggressive business mantra of “you’ve always got to do more with less.”

Today, I’m going to walk you through the rotting halls of Redmond’s largest son, a bumbling conga line of different businesses that all work exactly as well as Microsoft can get away with.
Welcome to The Hater’s Guide To Microsoft, or Instilling The Oaf Mindset.

(For reference, the 10-K’s full breakdown of the Intelligent Cloud segment:)

Server products and cloud services, including Azure and other cloud services, comprising cloud and AI consumption-based services, GitHub cloud services, Nuance Healthcare cloud services, virtual desktop offerings, and other cloud services; and Server products, comprising SQL Server, Windows Server, Visual Studio, System Center, related Client Access Licenses (“CALs”), and other on-premises offerings.

Enterprise and partner services, including Enterprise Support Services, Industry Solutions, Nuance professional services, Microsoft Partner Network, and Learning Experience.

(And the RPO figures in question:)

Q1 FY26: $398 billion of RPOs, 40% due within 12 months, or $159.2 billion in upcoming revenue.
Q2 FY26: $625 billion of RPOs, 25% due within 12 months, or $156.25 billion in upcoming revenue.


Premium: The Hater's Guide to Oracle

You can’t avoid Oracle. No, really, you can’t. Oracle is everywhere. It sells ERP software – enterprise resource planning, which is a rat king of different services for giant companies covering financial services, procurement (i.e., sourcing and organizing the goods your company needs to run), compliance, project management, and human resources. It sells database software, and even owns the programming language Java as part of its acquisition of Sun Microsystems back in 2010.

Its customers are fucking everyone: hospitals (such as England’s National Health Service), large corporations (like Microsoft), health insurance companies, Walmart, and multiple different governments. Even if you have never even heard of Oracle before, it’s almost entirely certain that your personal data is sitting in an Oracle-designed system somewhere.

Once you let Oracle into your house, it never leaves. Canceling contracts is difficult, to the point that one Redditor notes that some clients agreed, without realizing it, to minimum spend commitments, meaning they couldn’t remove services they didn’t need even when renewing a contract. One user from three years ago told the story of adding two users to their contract for Oracle’s NetSuite Starter Edition (around $1,000 a month in today’s pricing), only for an Oracle account manager to call a day later to demand they upgrade to the more expensive package ($2,500 per month) for every user.

In a thread from a year ago, another user asked for help renegotiating their contract for NetSuite, adding that “[their] company is no where near the state needed to begin an implementation” and “would use a third party partner to implement” software that they had been sold by Oracle.
One user responded by saying that Oracle would play hardball and “may even use [the] threat of attorneys.” In fact, there are entire websites about negotiations with Oracle, with Palisade Compliance saying that “Oracle likes a frenetic pace where contracts are reviewed and dialogues happen under the constant pressure of Oracle’s quarter closes,” describing negotiations with them as “often rushed, filled with tension, and littered with threats from aggressive sales and Oracle auditing personnel.”

This is something you can only do when you’ve made it so incredibly difficult to change providers. What’re you gonna do? Have your entire database not work? Pay up.

Oracle also likes to do “audits” of big customers, where it makes sure that every single part of your organization that uses Oracle software is paying for it, and isn’t using it in a way its contract doesn’t allow. For example, Oracle sued healthcare IT company Perry Johnson & Associates in 2020 because the company that built PJ&A’s database systems used Oracle’s database software. The case was settled.

This is all to say that Oracle is a big company that sells lots of stuff, and ratchets up the pressure around its quarterly earnings as a means of boosting revenues. If you have a company with computers that might be running Java or Oracle’s software — even if somebody else installed it for you! — you’ll be paying Oracle, one way or another. They even sued Google for using the open-source version of Java to build its Android operating system (though they lost).

Oracle is a huge, inevitable pain in the ass, and, for the most part, an incredibly profitable one. Every time a new customer signs on at Oracle, they pledge themselves to the Graveyard Smash and permanent fealty to Larry Ellison’s database empire.
As a result, founder Larry Ellison has become one of the richest people in the world — the fifth-richest as of writing this sentence — owning 40% of Oracle’s stock and, per Martin Peers of The Information, set to earn about $2.3 billion in dividends in the next year.

Oracle has also done well to stay out of bullshit hype-cycles. While it quickly spun up vague blockchain and metaverse offerings, its capex stayed relatively flat at around $1 billion to $2.1 billion a fiscal year (Oracle’s fiscal year runs from June 1 to May 31), until it burst to $4.5 billion in FY2022 (which began on June 1, 2021, for reference), $8.695 billion in FY2023, $6.86 billion in FY2024, and then a teeny little bit more to $21.25 billion in FY2025 as it stocked up on AI GPUs and started selling compute. You may be wondering if that helped at all, and it doesn’t appear to have. Oracle’s net income has stayed in the $2 billion to $3 billion range for over a decade, other than a $2.7 billion spike last quarter from its sale of its shares in Ampere.

You see, things have gotten weird at Oracle, in part because of the weirdness of the Ellisons themselves, and their cozy relationship with the Trump Administration (and Trump himself). Ellison’s massive wealth backed son David Ellison’s acquisition of Paramount, which put conservative Bari Weiss at the helm of CBS in an attempt to placate and empower the right wing, and which is currently trying to buy Warner Brothers Discovery (though it appears Netflix may have won), all in pursuit of kissing up to a regime steeped in brutality and bigotry that killed two people in Minnesota.

And then there’s TikTok:

Oracle will serve as the trusted security partner, responsible for auditing and ensuring compliance with National Security Terms, according to a memo. The company already provides cloud services for TikTok and manages user data in the U.S. Notably, Oracle previously made a bid for TikTok back in 2020.
I know that you’re likely a little scared that an ultra right-wing billionaire has bought another major social network. I know the idea of Oracle, a massive and inevitable cloud storage platform owned by a man who looks like H.R. Giger drew Jerry Stiller, running it is unsettling. I know you’re likely worried about a replay of the Elon Musk Twitter fiasco, where every week it seemed like things would collapse but it never seemed to happen, and then Musk bought an election.

What if I told you that things were very different, and far more existentially perilous for Oracle?

You see, Oracle is arguably one of the single-most evil and successful companies in the world, and it got there by being an aggressive vendor of database and ERP software, one that, like a tick with a law degree, cannot be removed without some degree of bloodshed. Perhaps not the highest-margin business in the world, but you know, it worked. Oracle has stuck to the things it’s known for for years and years and done just fine…

…until AI, that is. Let’s see what AI has done for Oracle’s gross margi- OH MY GOD!

The scourge of AI GPUs has taken Oracle’s gross margin from around 79% in 2021 to 68.54% in 2025, with CNBC reporting that FactSet-polled analysts see it falling to 49% by 2030, which I think is actually being a little optimistic.

Oracle was very early to high-performance computing, becoming the first cloud in the world to offer general availability of NVIDIA’s A100 GPUs back in September 2020, and in June 2023 (at the beginning of Oracle’s FY2024), Ellison declared that Oracle would spend “billions” on NVIDIA GPUs, naming AI firm Cohere as one of its customers.
In May 2024, Musk and Ellison discussed a massive cloud compute contract — a multi-year, $10 billion deal that fell apart in July 2024 when Musk got impatient, a blow that was softened by Microsoft’s deal to buy compute capacity for OpenAI, with chips to be rented out of a data center in Abilene, Texas that, about six months later, OpenAI would claim was part of a “$500 billion Stargate initiative” announcement between Oracle, SoftBank and OpenAI that was so rushed that Ellison had to borrow a coat to stay warm on the White House lawn, per The Information.

“Stargate” is commonly misunderstood as a Trump program, or as something that has raised $500 billion, when what it actually is is Oracle raising debt to build data centers for OpenAI. Instead of staying in its lane as a dystopian datacenter mobster, Oracle entered the negative-to-extremely-low-margin realm of GPU rentals, raising $58 billion in debt and signing $248 billion in data center leases to service a 5-year-long, $300 billion contract with OpenAI that it doesn’t have the capacity for and that OpenAI doesn’t have the money to pay for.

Oh, and TikTok? The billion-user social network that Oracle sort-of-just bought? There’s one little problem with it: per The Information, ByteDance investors estimate TikTok lost several billion dollars last year on revenues of roughly $20 billion, attributed to its high growth costs and, again per The Information, “higher operational and labor costs in overseas markets compared to China.”

Now, I know what you’re gonna say: Ellison bought TikTok as a propaganda tool, much like Musk bought Twitter. “The plan isn’t for it to be profitable,” you say. “It’s all about control,” you say. And I say, in response, that you should know exactly how fucked Oracle is.

In its last quarter, Oracle had negative $13 billion in cash flow, and between 2022 and late 2025 it quintupled its PP&E (from $12.8 billion to $67.85 billion), primarily through the acquisition of GPUs for AI compute.
Its remaining performance obligations are $523 billion, with $300 billion of that coming from OpenAI in a deal that starts, according to the Wall Street Journal, “in 2027,” with data centers so behind in construction that the best Oracle could muster was saying that 96,000 B200 GPUs had been “delivered” to the Stargate Abilene data center in December 2025, for a data center of 450,000 GPUs that has to be fully operational by the end of 2026 without fail.

And what’re the margins on those GPUs? Negative 100%.

Oracle, a business born of soulless capitalist brutality, has tied itself existentially not just to the success of AI, but to the specific, incredible, impossible success of OpenAI, which will have to muster up $30 billion in less than a year to start paying for it, and another $270 billion or more to pay for the rest… at a time when Oracle doesn’t have the capacity and has taken on brutal debt to build it.

For Oracle to survive, OpenAI must find a way to pay it four times the annual revenue of Microsoft Azure ($75 billion), and because OpenAI burns billions of dollars, it’s going to have to raise all of that money at a time of historically low liquidity for venture capital.

Did I mention that Oracle took on $56 billion of debt to build data centers specifically for OpenAI? Or that the banks who invested in these deals don’t seem to be able to sell off the debt?

Let me put it really simply: we are setting up for a very funny and chaotic situation where Oracle simply runs out of money, and in the process blows up Larry Ellison’s fortune. However much influence Ellison might have with the administration, Oracle has burdened itself with debt and $248 billion in data center lease obligations — costs that are inevitable, and are already crushing the life out of the company (and the stock). The only way out is if OpenAI becomes literally the most successful cash-generating company of all time within the next two years, and that’s being generous.
This is not a joke. This is not an understatement. Sam Altman holds Larry Ellison’s future in his clammy little hands, and there isn’t really anything anybody can do about it other than hope for the best, because Oracle already took on all that debt and capex.

Forget about politics, forget about the fear in your heart that the darkness always wins, and join me in The Hater’s Guide To Oracle, or My Name’s Larry Ellison, and Welcome To Jackass.

- Larry Ellison’s wealth is almost entirely tied up in Oracle stock.
- Oracle’s stock is tied to the company “Oracle,” which is currently destroying its margins and annihilating its available cash to buy GPUs to serve a customer that cannot afford to pay it.
- Oracle has taken on ruinous debt that can only be paid if this customer, which cannot afford it and needs to raise money from an already-depleted venture capital pool, actually pays it.
- Oracle’s stock has already been punished for these debts, and that’s before OpenAI fails to pay for its contract.
- Oracle now owns part of one of its largest cloud customers, TikTok, which loses billions of dollars a year, and whose US entity says, per Bloomberg, that it will “retrain, test and update the content recommendation algorithm on US user data,” guaranteeing that it’ll fuck up whatever makes it useful, reducing its efficacy for advertisers.
- Larry Ellison’s entire financial future is based on whether OpenAI lives or dies. If it dies, there isn’t another entity in the universe that can actually afford (or has interest in) the scale of the compute Oracle is building.


Premium: This Is Worse Than The Dot Com Bubble

Soundtrack: Radiohead — Karma Police

I just spent a week at the Consumer Electronics Show, and one word kept coming up: bullshit.

LG, a company known for making home appliances and televisions, demonstrated a robot (named “CLOiD” for some reason) that could “fold laundry” (extremely slowly, in limited circumstances, and even then it sometimes failed) or cook (by which I mean put things in an oven that opened automatically) or find your keys (in a video demo), one that it has no intention of releasing. The media generally gave it an easy ride, with one reporter suggesting that a barely-functioning tech demo somehow “marked a turning point” because LG was now “entering the robotics space” with a product it had no intention of selling.

So, why did LG demo the robot? To con the media and investors, of course! Hundreds of other companies demoed other robots you couldn’t buy, and despite what reports might say, we were not shown “the future of robotics” in any meaningful sense. We got to see what happens when companies run out of ideas and can only copy each other. CES 2026 was the “year of robotics” in the same way that somebody is a sailor because they wore a captain’s hat while sitting in a cardboard box.

Yet the robotics companies were surprisingly ethical compared to the nonsensical tide of LLM-driven wank, from no-name dregs in the basement of the Venetian Expo Center to companies like Lenovo warbling about its “AI super agent.” In fact, fuck it, let’s talk about that.

“AI is evolving and getting new capabilities, sensing our three-dimensional world, understanding how things move and connect,” said Lenovo CEO Yang Yuanqing, leading into a demo of Lenovo Qira, before claiming it “redefines what it means to have technology built around you.” One would think the demo that follows would be an incredible demonstration of futuristic technology.
Instead, a spokesperson walked up, asked Qira to show what it could see (i.e., multimodal capabilities available for years in many models), received a summary of notifications (available in effectively any LLM integration, and incredibly prone to hallucinations), and asked “what to get her kids when she had some free time,” at which point Qira told her, and I quote, that “the Las Vegas Fashion Mall has some Labubus that children will go crazy for,” referring to the kind of tool-based web search that’s been available since 2024.

The presenter noted that Qira can also add reminders — something that has been available for years on most iOS or Android devices — and search for documents, then showed a proof-of-concept wearable that can record and transcribe meetings, a product that I saw no fewer than seven times during my time at CES.

Lenovo rented out the entirety of the Las Vegas Sphere to do a demonstration of a fucking chatbot powered by OpenAI’s models on Microsoft Azure, and everybody acted like it was something new. No, Qira is not a “big bet” on AI — it’s a fucking chatbot forced on anybody buying a Lenovo PC, full of features like “summarize this” or “transcribe this” or “tell me what’s on my calendar,” features peddled by business idiots who have little experience with any real-world applications of just about anything, marketed with the knowledge that the media will do the hard work of explaining why anybody should give a shit.

Want better-looking video or audio from your TV? Get fucked!
You’re getting Nano Banana image generation from Google and other LLM features from Samsung. You can now generate images on your TV using Google’s Nano Banana model — a useless idea peddled by a company that doesn’t know what consumers actually want, varnished as making your TV-based assistant “more helpful and more visually engaging.” As David Katzmaier correctly said, nobody asked for LLMs in their TVs, which let you “click to search” something that’s on screen, a thing no normal person will do.

In fact, most of the show felt like companies doing madlibs with startup decks to try and trick people into thinking they’d done anything other than staple a frontend on top of a Large Language Model. Nowhere was that more obvious than the glut of useless AI-powered “smart” glasses, all of which claim to do translation, or dictation, or run “apps” using clunky, ugly and hard-to-use interfaces, all using the same LLMs, all doing effectively the same thing. These products only exist because Meta decided to blow several billion dollars on launching “AI glasses,” with the slew of copycats framed as being “part of a new category” rather than “a bunch of companies making a bunch of useless bullshit nobody wants or needs.”

These are not the actions of companies that truly fear missing the mark, let alone the judgment of the media, analysts or investors. These are the actions of a tech industry that has escaped any meaningful criticism — let alone regulation! — of its core businesses or new products under the auspices of “giving them a chance” or “being open to new ideas,” and those ideas are always whatever the tech industry just said, even if it’s nonsensical.
When Facebook announced it was changing its name to Meta as a means of pursuing “the successor to the mobile internet,” it didn’t really provide any proof beyond a series of extremely shitty VR apps. But not to worry: Casey Newton of Platformer was there to tell us that Facebook was going to “strive to build a maximalist, interconnected set of experiences straight out of sci-fi — a world known as the metaverse,” adding that the metaverse was “having a moment.” Similarly, Futurum Group’s Dan Newman said in April 2022 that “the metaverse was coming” and that it “would likely continue to be one of the biggest trends for years to come.”

Three years and $70 billion later, the metaverse is dead, and everybody acts as if it didn’t happen. Whoops!

In a sane society, investors, analysts and the media would never trust a single word out of Mark Zuckerberg’s mouth ever again. Instead, the media gleefully covered his mid-2025 “Personal Superintelligence” blog, where he promised everybody would have a “personal superintelligence” to “help you achieve your goals.” Do LLMs do that? No. Can they ever do that? No. Doesn’t matter! This is the tech industry. There is no punishment, no consequence, no critique, no cynicism, and no comeuppance — only celebration and consideration, only growth.

All the while, the largest tech firms have continued growing, always finding new ways (largely through aggressive monopolies and massive sales teams) to make Number Go Up, to the point that the media, analysts and investors have stopped asking any challenging questions, and naturally assumed that they — and the financiers that back them — would never do something really stupid. The tech, business and finance media had been well-trained at this point to understand that progress was always the story, and that failure was somehow “necessary for innovation,” whether or not anything was innovative.

Over time, this created an evolutionary problem.
The successes of companies like Uber — which grew to quasi-profitability after more than a decade of burning billions of dollars — convinced journalists that startups had to burn tons of money to grow. All that it took to convince some members of the media that something was a good idea was $50 million or more in funding, with larger funding rounds making it — for whatever reason — less palatable to critique a company, for fear that you would “bet against a winner.” The assumption was that such a company would go public or get acquired, and nobody wants to be wrong, do they?

This naturally created a world of startup investment and innovation that oriented itself around the growth-at-all-costs nightmare of The Rot Economy. Startups were rewarded not for creating real businesses, or having good ideas, or even creating new categories, but for their ability to play “brainwash a venture capitalist,” either by being “a founder to bet on” or by appealing to the next bazillion-dollar TAM boondoggle. Perhaps they’d find some sort of product-market fit, or grow a large audience by providing a service at an unsustainable cost, but this was all done with the knowledge of an upcoming bailout via IPO or acquisition.

Over the years, venture capital was rewarded for funding “big ideas” and that, for the most part, paid off. Eventually those “big ideas” stopped being “big ideas for necessary companies” and became “big ideas for growing as fast as possible and dumping onto the public markets or other companies afraid that they’d be left behind.”

Taking a company public used to be easy. From 2015 to 2019, there were over 100 IPOs annually, with a consistent flow of M&A giving startups somewhere to sell themselves, leading up to the egregious excess of the frothy M&A and IPO market of 2021 (a year that also saw $643 billion in venture capital investment), which led to 311 IPOs that shed 60% of their value by October 2023.
Years of stupid bets based on the assumption that the markets or big tech would buy any company that remotely scared them piled up. This created the current liquidity crisis in venture capital, where funds raised after 2018 have struggled to return any investor money, making investing in venture capital firms less lucrative, which in turn made raising money from Limited Partners harder, which in turn left less money available for startups, which were now also paying more every year as SaaS companies (some of them startups themselves) gouged their customers with higher prices.

Every single one of these problems comes down to one simple thing: growth. Limited Partners invest in venture capitalists that can show growth, and venture capitalists invest in companies that can show growth, which in turn increases their value, which allows them to be sold for a greater amount of money. The media covers companies based not on what they do but on their potential value, a value that’s largely dictated by the vibes of the company and the amount of money it’s raised from investors. And all of that only makes sense if there’s liquidity, and based on the overall TVPI (the amount of money you made for each dollar invested) of funds raised after 2018, the majority of VC firms have not been able to actually make their investors more than even money in years.

Why? Because they invested in bullshit. It’s that simple. The companies they invested in are dogs that will never go public or sell to another company. While many people believe that venture capital is about making early, risky bets on nascent companies, the truth is that the majority of venture dollars go into late-stage bets. A kinder person would frame this as “doubling down on established companies,” but those of us living in reality see it for what it is — a culture that has more in common with investing in penny stocks than with understanding any business fundamentals.
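Since TVPI carries a lot of the argument above, here's a minimal sketch of the math. The function name and every number are mine, purely for illustration — this is not data from any real fund:

```python
# Illustrative only: TVPI (Total Value to Paid-In), with hypothetical numbers.
def tvpi(distributions, residual_value, paid_in):
    """TVPI = (cash already returned + paper value of remaining holdings) / capital invested."""
    return (distributions + residual_value) / paid_in

# A hypothetical fund that took $100M from LPs, returned $40M in cash,
# and still holds positions marked at $55M:
fund = tvpi(distributions=40, residual_value=55, paid_in=100)
print(fund)  # 0.95 -> LPs are still below even money
```

A TVPI of 1.0 is break-even before fees; the point in the text is that most post-2018 vintages sit at or below that line, and that the "residual value" part of the numerator is whatever the fund marks its unsold holdings at.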
Perhaps I’m a little bit naive, but my perception of venture capital was that it was about discovering nascent technologies and giving them the means to make their ideas a reality. The risk was that these companies were early and thus might die, but those that didn’t die would soar. Instead, Silicon Valley waits for angel and seed investors to take the risk first, reads TechCrunch, watches the (Well Well Well, If It Isn’t The) Technology Brothers, or browses Twitter all day and discovers the next thing to pile into.

The problem with a system like this is that it naturally rewards grifting, and it was inevitable that a kind of technology would come along that exploited a system that had chased out any good sense or independent thought.

Generative AI lowers the barrier of entry for anybody to cobble together a startup that can say all the right things to a venture capitalist. Vibe coding can create a “working prototype” of a product that can’t scale (but can raise money!); the nebulous problems of LLMs — their voracious need for data, the massive data security issues, and so on — offer founders the chance to create slews of nebulous “observability” and “data veracity” companies; and the burdensome cost of running anything LLM-adjacent means that venture capitalists can make huge bets on companies with inflated valuations, allowing them to raise the Net Asset Value of their holdings arbitrarily as other desperate investors pile into later rounds. As a result, AI startups took up 65% of all venture capital funding in Q4 2025.
Venture capital’s fundamental disconnection from value creation (or reality) has led to hundreds of billions of dollars flowing into AI startups with already-negative margins that get worse as their customer bases grow and the cost of inference (creating outputs) increases. At this point it’s obvious that it is impossible to create a foundation lab or LLM-powered service that makes a profit, on top of the fact that it appears that renting out the GPUs for AI services is also unprofitable.

I also need to be clear that this is far, far worse than the dot com bubble.

US venture capital invested $11.49 billion ($23.08 billion in today’s money) in 1997, $14.27 billion ($28.21 billion in today’s money) in 1998, $48.3 billion ($95.50 billion in today’s money) in 1999, and over $100 billion ($197.71 billion) in 2000, for a grand total of $344.49 billion (in today’s money) — a mere $6.174 billion more than the $338.3 billion raised in 2025 alone, with somewhere between 40% and 50% of that (around $168 billion) going into AI investments. In 2024, North American AI startups raised around $106 billion.

According to the New York Times, “48 percent of dot-com companies founded since 1996 were still around in late 2004, more than four years after the Nasdaq’s peak in March 2000.” The ones that folded were predominantly dodgy and nakedly unsustainable eCommerce shops like WebVan ($393m in venture capital), Pets.com ($15m) and Kozmo ($233m), all of which filed to go public, though Kozmo failed to dump itself onto the markets in time.

Yet in a very real sense, the “dot com bubble” that everybody experienced had very little to do with actual technology. Investors in the public markets rushed with their eyes closed and their wallets out to invest in any company that even smelled like the computer, leading to basically any major tech or telecommunications stock trading at a ridiculous multiple of its earnings per share (60x in Microsoft’s case).
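The "margins get worse as the customer base grows" point is just multiplication. A toy sketch, with entirely made-up numbers (no real company's figures are implied), of what it means to lose money on every unit sold:

```python
# Illustrative only: hypothetical negative unit economics for an LLM service.
# If inference costs more per customer than that customer pays, growth scales the loss.
def monthly_loss(customers, revenue_per_customer, inference_cost_per_customer):
    # Positive result = money lost per month.
    return customers * (inference_cost_per_customer - revenue_per_customer)

# Hypothetical $20/month subscription costing $28/month in inference to serve:
small = monthly_loss(100_000, 20, 28)
big = monthly_loss(1_000_000, 20, 28)
print(small, big)  # 800000 8000000 -> 10x the customers, 10x the monthly loss
```

This is the inversion of normal software economics, where serving the marginal customer is nearly free and growth improves margins rather than destroying them.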
The bubble burst when the bullshit dot-com stocks died on their ass and the world realized that the magic of the internet was not a panacea that would fix every business model, and there was no magic moment where a company like WebVan or Pets.com would turn a horribly-unprofitable business into a real one. Similarly, companies like Lucent Technologies stopped being rewarded for doing dodgy, circular deals with companies like Winstar, leading to the collapse of the telecommunications bubble and to millions of miles of dark fiber being sold dirt cheap in 2002. The oversupply of dark fiber was eventually seen as a positive, leading to an eventual surge in demand as billions of people came online toward the end of the 2000s.

Now, I know what you’re thinking. Ed, isn’t that exactly what’s happening here? We’ve got overvalued startups, we’ve got multiple unprofitable, unsustainable AI companies promising to IPO, we’ve got overvalued tech stocks, and we’ve got one of the largest infrastructural buildouts of all time. Tech companies are trading at ridiculous multiples of their earnings per share, but the multiples aren’t as high. That’s good, right?

No. No it isn’t. AI boosters and well-wishers are obsessed with making this comparison because saying “things worked out after the dot com bubble” allows them to rationalize doing stupid, destructive and reckless things.

Even if this was just like the dot com bubble, things would be absolutely fucking catastrophic — the NASDAQ dropped 78% from its peak in March 2000 — but due to the incredible ignorance of both the private and public power brokers of the tech industry, I expect consequences that range from calamitous to catastrophic, dependent almost entirely on how long the bubble takes to burst, and how willing the SEC is to greenlight an IPO.
The AI bubble bursting will be worse, because the investments are larger, the contagion is wider, and the underlying assets — GPUs — are entirely different in their costs, utility and basic value from dark fiber. Furthermore, the basic unit economics of AI — both in its infrastructure and in the AI companies themselves — are orders of magnitude more horrifying than anything we saw in the dot com bubble. In simpler terms, I’m really fucking worried, and I’m sick and tired of hearing people make this comparison.


2025, A Retrospective

I'm not dropping this on the actual newsletter feed because it's a little self-indulgent and I'm not sure 88,000 or so people want an email about it. I have a lot of trouble giving myself credit for anything, and genuinely think I could be doing more, or that I "didn't do that much" because I'm at a computer or on a microphone versus serving customers in person or something or other.

To try and give some sort of scale to the work from the last year, I've written down the highlights. It appears that 2025 was an insane year for me. Here's the rundown:

- Cory Doctorow quoted me at the very front of his new book.
- I recorded over 110 episodes of my tech podcast Better Offline, starting with a 13.5-hour-long pop-up radio show at CES 2025. And yes, it's back next week, featuring David Roth, Adam Conover, Ed Ongweso Jr., Chloe Radcliffe, Robert Evans, Gare Davis, Cory Doctorow and a host of other great guests.
- Better Offline also won the Webby for best business podcast episode for last year's episode The Man That Destroyed Google Search.
- I also had some fantastic interviews, like when I went out to North Carolina to interview Steve Burke of GamersNexus, chatted to author Adam Becker about the technoligarchs, Pablo Torre and David Roth about independent media, and even comedian Andy Richter.
- I wrote over 440,000 words, not including the work I've done on the book or any notes I took to prepare for my show or newsletter.
- The newsletter also grew from 47,000-ish people at the end of last year to around 88,500 people. I want to be at 150,000 this time next year.
- I wrote some of my favourite free newsletters (many of which were turned into episodes of the show):
  - Deep Impact, my analysis of the DeepSeek situation and why it scared the American AI industry (clue: it's cost-related and nothing to do with "national security").
  - Power Cut, an early warning sign that the bubble was bursting as Microsoft pulled out of gigawatts of data center deals.
  - CoreWeave Is A Time Bomb, published March 17 2025, way before most had even bothered to think about this company deeply: a savage analysis of a "neocloud" (a company that only sells AI compute) backed by NVIDIA, which is also a customer, and from which CoreWeave buys billions of dollars of GPUs.
  - The Era of the Business Idiot, probably my favourite piece I wrote this year: the story of how middle management has seized power, breeding out true meritocracy and value-creation in favor of symbolic growth and superficial intelligence. It ties together everything I've ever written.
  - Make Fun Of Them, the piece that restarted my fire after a bit of a low point, where I call for a radical new approach to tech CEOs: mocking them, because they talk like idiots and provide little value to society outside of their dedication to shareholder value.
  - The Hater's Guide To The AI Bubble, a piece that elevated me in a way that I never expected: a thorough and brutal broadside against an industry that has no profits and terrible costs, discussing how generative AI is nothing like Uber or Amazon Web Services, there are no profitable generative AI companies, agents do not and cannot exist, there is no AI SaaS story, and everything rides (and dies) on selling GPUs.
  - AI Is A Money Trap, a piece about how AI companies' ridiculous valuations and unsustainable businesses make exits or IPOs impossible, how data center developers have no exit route, and how US economic growth has become shouldered entirely by big tech.
  - How To Argue With An AI Booster, a comprehensive guide to arguing with AI boosters, addressing both their bad-faith debate style and their specific (and flimsy) arguments as to why generative AI is the future.
  - The Case Against Generative AI, a comprehensive analysis of a financial collapse built on myths, the markets' unhealthy obsession with NVIDIA's growth, and the fact that there is not enough money in the world to fund OpenAI.
  - NVIDIA Isn't Enron, So What Is It? - a lighthearted and in-depth analysis of NVIDIA as a company, a historic rundown of what happened with Lucent, WorldCom and Enron, as well as a guide to how it makes money, how its future relies on endless debt, how millions of GPUs are sitting waiting to be installed, and why it no longer makes sense to buy more GPUs.
  - The Enshittifinancial Crisis, a piece about the fourth stage of enshittification, where companies turn on their shareholders, and how unprofitable, unsustainable AI threatens the future of venture capital, private equity and the markets themselves.
- I published two massive exclusives:
  - How Much Anthropic and Cursor Spend On Amazon Web Services, which is exactly what it sounds like.
  - How Much OpenAI Spends On Inference and Its Revenue Share With Microsoft, which also includes evidence that OpenAI's revenues were at around $4.5 billion by the end of September, a vast difference from the $4.3 billion for the first half of the year published by other outlets. The Financial Times, The Register and TechCrunch covered it, while others aggressively ignored it.
- I launched the premium edition of my newsletter, and published multiple deeply important pieces of research:
  - The Hater's Guide to NVIDIA, the single-most exhaustive rundown of the rickety nature of the company sitting at the top of the stock market: how its future is dependent on massive debt, how AI revenues will never pay back the cost of these GPUs, and how there are likely millions of GPUs sitting in warehouses, as there's no chance that 6 million Blackwell GPUs have actually been installed and turned on. Published November 24 2025, I made this call several weeks before famed short seller Michael Burry would do the same.
  - How Does GPT-5 Work? - an exclusive piece (reported using internal documents from an infrastructure provider) on how GPT-5's router mode actually costs OpenAI more money to run.
  - OpenAI Burned $4.1 Billion More Than We Knew - Where Is Its Money Going? - an analysis of reported cash burn and investments in OpenAI that proved the company burned more than $4 billion more than we knew.
  - OpenAI and Oracle Are Full of Crap - on September 12 2025, months before anybody started worrying about it, I published proof that OpenAI couldn't afford to pay Oracle and Oracle didn't have the capacity to service their farcical $300 billion, five-year-long deal.
  - OpenAI Needs A Trillion Dollars In The Next Four Years - on September 26 2025, I published a thorough review and analysis of OpenAI's agreed-upon compute and data center deals, and proved that it needed at least $1 trillion in the next four years to pull any of it off, several weeks before anyone else did.
  - The Hater's Guide To The AI Bubble Volume 2: a massive omnibus summary of every major AI company's weaknesses - the pathetic revenues, terrible margins and horrifying costs, and how hopeless everything feels.

I also did no less than 50 different interviews, with highlights including:

- My own interview in the New Yorker's legendary "Talk Of The Town" section.
- Profiles with Slate, the Financial Times and FastCompany.
- An interview with MarketWatch about The Hater's Guide to the AI Bubble.
- A panel in Seattle with Cory Doctorow about Enshittification and The Rot Economy.
- A chat with Brooke Gladstone on NPR about the AI bubble.
- Two interviews with the BBC.
- An interview with Van Lathan and Rachel Lindsay on The Ringer's Higher Learning.
- Two episodes of Chapo Trap House.
- Interviews with The Lever, Parker Molloy's The Present Age, Bloomberg's Everybody's Business, The Majority Report, Newsweek's 1600 Podcast, TechCrunch, Defector, the New Yorker (by the legendary Cal Newport), Guy Kawasaki's Remarkable People, both Slate's Death, Sex & Money and the excellent TBD podcast, TrashFuture multiple times, The Times Radio (I think multiple times?) and NPR Marketplace.
- Citations in an astonishing amount of major media outlets, with highlights including The Economist, The Guardian, Charlie Brooker (!) in The Hollywood Reporter, ArsTechnica, CNN, Semafor and ZDNet.

Next year I will be finishing up my book Why Everything Stopped Working (due out in 2027), and continuing to dig into the nightmare of corporate finance I've found myself in the center of. I have no idea what happens next. My fear - and expectation - is that many people still do not realize that there is an AI bubble, or will not accept how significant and dangerous the bubble is, meaning that everybody is going to act like AI is the biggest, most hugest and most special thing in the world right up until they accept that it isn't. I will always cover tech, but I get the sense I'll be looking into other things next year - private equity, for one - that have caught my eye toward the end of the year.

I realize right now everything feels a little intense and bleak, but at this time of year it's always worth remembering to be kinder and more thoughtful toward those close to us. It's cheesy, but it's the best thing you can possibly do. It's easy to feel isolated by the amount of hogs oinking at the prospect of laying you off or replacing you - and it turns out there are far more people that are afraid or outraged than there are executives or AI boosters. Never forget (or forgive them for) what they've done to the computer, and never forget that those scorned by the AI bubble are legion. Join me on r/Betteroffline; you are far from alone.

I intend to spend the next year becoming a better writer, analyst, broadcaster, entertainer and person. I appreciate every single one of you that reads my work, and hope you'll continue to do so in the future.

See you in 2026,
[email protected]


The Enshittifinancial Crisis

Soundtrack: Lynyrd Skynyrd — Free Bird

This piece is over 19,000 words, and took a great deal of writing and research. If you liked it, please subscribe to my premium newsletter. It’s $70 a year, or $7 a month, and in return you get a weekly newsletter that’s usually anywhere from 5,000 to 15,000 words, including vast, extremely detailed analyses of NVIDIA, Anthropic and OpenAI’s finances, and the AI bubble writ large. I am regularly several steps ahead in my coverage, and you get an absolute ton of value. In the bottom right-hand corner of your screen you’ll see a red circle — click that and select either monthly or annual. Next year I expect to expand to other areas too. It’ll be great. You’re gonna love it. If you have any issues signing up for premium, please email me at [email protected].

One time, a good friend of mine told me that the more I learned about finance, the more pissed off I’d get. He was right.

There is an echoing melancholy to this era, as we watch the end of Silicon Valley’s hypergrowth period — the horrifying result of 15+ years of steering the tech industry away from solving actual problems in pursuit of eternal growth. Everything is more expensive, and every tech product has gotten worse, all so that every company can “do AI,” whatever the fuck that means.

We are watching one of the greatest wastes of money in history, all as people are told that there “just isn’t the money” to build things like housing, or provide Americans with universal healthcare, or better schools, or create the means for the average person to accumulate wealth. The money does exist; it just exists for those who want to gamble — private equity firms, “business development companies” that exist to give money to other companies, venture capitalists, and banks that are getting desperate and need an overnight shot of capital from the Federal Reserve’s Overnight Repurchase Facility or Discount Window, two worrying indicators of bank stress I’ll get into later.
No, the money does not exist for you or me or a person. Money is for entities that could potentially funnel more money into the economy, even if the ways that these entities use the money are reckless and foolhardy, because the system is built to keep entities like these alive. We are in an era where the average person is told to pull up their bootstraps, to work harder, to struggle more, because, as Martin Luther King Jr. once said, it’s socialism for the rich and rugged free market capitalism for the poor.

The “free market” is a fucking con. When you or I run out of money, our things are taken from us, we receive increasingly-panicked letters, we get phone calls and texts and emails and demands, we are told that all will be lost if we don’t “work it out,” because the financial system is not about an exchange of value but whether or not you can enter into the currently agreed-upon con.

By letting neoliberalism and the scourge of the free markets rule, modern society created the conditions for what I call The Enshittifinancial Crisis — the place at which my friend Cory Doctorow’s Enshittification Theory meets my own Rot Economy Thesis in a fourth stage of Enshittification. The New Yorker recently summarized the original three stages, and I’ll walk you through it.

Facebook was a huge, free platform, much like Instagram, that offered fast and easy access to everybody you knew. It acquired Instagram in 2012 to kill off a likely competitor, and over time would start making both products worse — clickbait notifications, a mandatory algorithmic feed that deliberately emotionally manipulated people and stoked political division, eventually becoming full of AI slop and videos, all so that Meta could continue to sell billions of dollars of ads a quarter.
Per Kyle Chayka of the New Yorker, “Facebook’s feed, now choked with A.I.-generated garbage and short-form videos, is well into the third act of enshittification.” The third stage is critical, in that it’s when the company also turns on its business customers. A Marketing Brew story from September of last year told the tale of multiple advertisers who found their campaigns switching to different audiences, wasting their money and getting questionable results. A New York Times story from 2021 described companies losing upwards of 70% of their revenue during a Facebook ads outage, another from 2018 described how Meta (then Facebook) deliberately hid issues with its measurement of engagement on videos from advertisers for over a year, and more recently, Meta’s ads tools started switching out top-performing ads with AI-generated ones, in one case targeting men aged 30 to 45 with an AI-generated grandma, all without warning the advertiser.

Meta doesn’t give a shit, because investors and analysts don’t give a shit. I could say “sell-side analysts” here — the ones that are trying to get you to buy a stock — but based on every analyst report I’ve read from a major bank or hedge fund, I truly think everybody is complicit.

In November 2025, Reuters revealed that Meta projected in late 2024 that 10% of its annual revenue ($16 billion) would come from advertisements for scams or banned goods, mere weeks after Meta announced a ridiculous $27 billion data center debt package, one that used deep accountancy magic to keep it off of its balance sheet despite Meta guaranteeing the entirety of the loan. One would think this would horrify investors. One would be wrong. Morgan Stanley said a few weeks ago that it is “one of the handful of companies that can leverage its leading data, distribution and investments in AI,” and raised its target to $750, with a $1000-a-share bull case.
Wedbush raised Meta’s price target to $920, and Bank of America staunchly held firm at…$810. I can find no analyst commentary on Meta making sixteen billion dollars on fraud, because it doesn’t matter to them, because this is the Rot Economy, and all that matters is number go up.

Reality — such as whether there’s any revenue in AI, or whether it’s a good idea that Meta is spending over $70 billion this year in capital expenditures on a product that has generated no revenue (and please, fucking spare me the bullshit around “Meta’s AI ads play,” that whole story is nonsense) — doesn’t matter to analysts, because stocks are thoroughly, inextricably enshittified, and analysts don’t even realize it’s happening.

The stages of enshittification usually involve some sort of devil’s deal. We have now entered Enshittification Stage 4, where businesses turn on shareholders. Analysts and investors have become trapped in the same kind of loathsome platform play as consumers and businesses, and face exactly the same kinds of punishment through the devaluation of the stock itself. Where platforms once prioritized profits over the health and happiness of users or business customers, they are now prioritizing stock value over literally anything, and have — through the remarkable growth of tech stocks in particular — created a placated and thoroughly whipped investor and analyst sect that never asks questions and always celebrates whatever the next big thing is meant to be.
The value of a “stock” is not based on whether the business is healthy, or its future certain, but on the potential for its price to grow, and analysts have, thanks to an incredible bull run of tech stocks spanning over a decade, been able to say “I bet software will be big” for most of that time, going on CNBC or Bloomberg and blandly repeating whatever it is that a tech CEO just said, all without any worries about “responsibility” or “the truth.” This is because big tech stocks — and many other big stocks, if I’m honest — have made their lives easy as long as they don’t ask questions. Number always seems to be going up for software companies, and all you need to do is provide a vociferous defense of the “next big thing,” and come up with a smart-sounding model that justifies eternal growth.

This is entirely disconnected from the products themselves, which don’t matter as long as Number Go Up. If net income is high and the company estimates it will continue to grow, then the company can do whatever the fuck it wants with the product it sells or the things that it buys. Software Has Eaten The World in the sense that Andreessen got his wish, with investors now caring more about the “intrinsic value” of software companies than the businesses or products themselves. And because that’s happening, investors aren’t bothering to think too hard about the tech itself, or the deteriorating products underlying tech companies, because “these guys have always worked it out” and “these companies have always managed to keep growing.” As a result, nobody really looks too deep.
Minute changes to accounting in earnings filings are ignored, egregious amounts of debt are waved off, and hundreds of billions of dollars of capital expenditures are seen as “the new AI revolution” versus “a huge waste of money.” By incentivizing the Rot Economy — making stocks disconnected from the value of the company beyond net income and future earnings guidance — companies have found ways to enshittify their own stocks, and shareholders will be the ones who suffer, all thanks to the very downstream pressure that they’ve chosen to ignore for decades.

You see, while one might (correctly) read the deterioration of products like Facebook and Google Search as a sign of desperation, it’s important to also see it as the companies themselves orienting around what they believe analysts and investors want to see. You can also interpret this as weakness, but I see it another way: stock manipulation, and a deliberate attempt to reshape what “value” means in the eyes of customers and investors. If the true value of a stock is meant to be based on the value of its business, cash flow, earnings and future growth, a company deliberately changing its products is an intentional interference with value itself, as are any and all deceptive accounting practices used to boost valuations.

But the real problem is that analysts do not…well…analyze, not, at least, if it goes against the market consensus. That’s why Goldman Sachs and JP Morgan and Futurum and Gartner and Forrester and McKinsey and Morgan Stanley all said that the metaverse was inevitable — because they do not actually care about the underlying business itself, just its ability to grow on paper.

Need proof that none of these people give a fuck about actual value? Mark Zuckerberg burned $77 billion on the metaverse, creating little revenue or shareholder value, without any real explanation as to where the money went.
The street didn’t give a shit because Meta’s existing ads business continued to grow, same as it didn’t give a shit that Mark Zuckerberg burned $70 billion on capex, even though we really don’t know where that went either. In fact, we really have no idea where all this AI spending is going. These companies don’t tell us anything. They don’t tell us how many GPUs they have, or where those GPUs are, or how many of them are installed, or what their capacity is, or how much money they cost to run, or how much money they make. Why would they? Analysts don’t even look at earnings beyond making sure they beat on estimates. They’ve been trained for 20 years to take a puddle-deep look at the numbers to make sure things look okay, look around their peers and make sure nobody else is saying something bad, and go on and collect fees. The same goes for hedge funds and banks propping up these stocks rather than asking meaningful questions or demanding meaningful answers.

In the last two years, every major hyperscaler has extended the “useful life” of its servers from 3 years to either 5.5 or 6 years — and in simple terms, this allowed them to incur a smaller depreciation expense each quarter as a result, boosting net income. Those who are meant to be critical — analysts and investors sinking money into these stocks — had effectively no reaction, despite the fact that Meta used (per the Wall Street Journal) this adjustment to reduce its expenses by $2.3 billion in the first three quarters of this year. This is quite literally disconnected from reality, and done based on internal accounting that we are not party to. Every single tech firm buying GPUs did this and benefited to the tune of billions of dollars in decreased expenses, and analysts thought it was fine and dandy because number went up.
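To show how much accounting leverage that one change provides, here is a minimal straight-line depreciation sketch. The fleet cost is a hypothetical round number, not any company's actual figure:

```python
# Straight-line depreciation: the asset's cost is spread evenly over
# its "useful life." Stretching that life from 3 to 6 years halves the
# annual expense, flattering net income with no change to the hardware.

def annual_depreciation(cost: float, useful_life_years: float) -> float:
    """Yearly straight-line depreciation expense, assuming no salvage value."""
    return cost / useful_life_years

fleet_cost = 12_000_000_000  # hypothetical $12B of servers and GPUs

at_3_years = annual_depreciation(fleet_cost, 3)  # $4.0B expensed per year
at_6_years = annual_depreciation(fleet_cost, 6)  # $2.0B expensed per year

# The difference flows straight into pre-tax income.
paper_boost = at_3_years - at_6_years

print(f"3-year life: ${at_3_years / 1e9:.1f}B/year in depreciation")
print(f"6-year life: ${at_6_years / 1e9:.1f}B/year in depreciation")
print(f"Pre-tax income boost: ${paper_boost / 1e9:.1f}B/year")
```

Same hardware, same cash already out the door — the only thing that changes is the schedule on which the cost hits the income statement.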
Shareholders are now subordinate to the shares themselves, reacting in the way that the shares demand they do, grateful for whatever the companies behind the shares give them, and analysts, investors and even the media spend far more energy fighting the doubters than they do showing these companies scrutiny. Much like a user of an enshittified platform, investors and analysts are frogs in a pot, the experience of owning a stock deteriorating ever since Jack Welch and GE taught corporations that the markets run on the kind of simplistic mindset built for grifter exploitation. And much like those platforms, corporations have found as many ways as possible to abuse shareholders, seeing what they can get away with, seeing how far they can push things as long as the numbers look right, because analysts are no longer looking for sensible ideas.

Let me give you an example I’ve used before. Back in November 1998, Winstar Communications signed a “$2 billion equipment and finance agreement with Lucent Technologies” where Winstar would borrow money from Lucent to buy stuff from Lucent, all to create $100 million in revenue over 5 years. In December 1999, Barron’s wrote a piece called “In 1999 Tech Ruled,” toasting the year’s winners. Airnet? Bankrupt. Winstar? Horribly bankrupt. While Ciena survived, it had spent over a billion dollars to acquire other companies (all stock, of course), only to see its revenue dwindle basically overnight from $1.6 billion to $300 million as the optical cable industry collapsed.
One could have worked out that Winstar was a dog, or that all of these companies were dogs, just by looking at the numbers, such as “how much they made versus how much they were spending.” Instead, analysts, the media and banks chose to pump up these stocks because the numbers kept getting bigger, and when the collapse happened, rationalizations were immediately created — there were a few bad apples (Enron, Winstar, WorldCom), “the fiber was useful” and thus laying it was worthwhile, and otherwise everything was fine. The problem, in everybody else’s mind, was that everybody had got a bit distracted and some companies that weren’t good would die. All of that lost money was only a problem because it didn’t pay off. This was a misplaced gamble, and it taught tech executives one powerful lesson: earnings must be good, without fail, by any means necessary, and otherwise nothing else matters to Wall Street.

It’s all about incentives. A sell-side analyst that tells you not to buy something is a problem. A journalist that is skeptical or critical of an industry in the midst of a growth or hype cycle is considered a “hater” — don’t I fucking know it. Analysts that do not sing the same tune as everybody else are marginalized, mocked and aggressively policed. And I don’t fucking care. Stop being fucking cowards. By not being skeptical or critical you are going to lead regular people into the jaws of another collapse.

The dot com bubble was actually a great time to start reevaluating how and why we value stocks — to say “hey, wait, that $2 billion deal will only make $100 million in revenue?” or “this company spends $5 for every $1 it makes!” — but nobody, it appears, remained particularly suspicious of the tech industry, or of a stock market that was increasingly orienting itself around conning shareholders.
And because shareholders, analysts and the media alike refused to retain a single shred of suspicion leaving the dot com era, the mania never actually subsided. Financial publications still found themselves dedicated to explaining why the latest hype cycle was real. Journalists still found themselves told by editors that they had to cover the latest fad, even if it was nonsensical or clearly rotten. Analysts still grabbed their swords and rushed to protect the very companies that have spent decades misleading them. Much like we spent years saying that Facebook was a “good deal” because it was free, analysts and investors said tech stocks were “great to hold” because they kept growing, even if the reason they “kept growing” was a series of interlocking monopolies, difficult-to-leave platforms and impossible-to-fight traction and pricing, all of which have an eventual sell-by date.

I realize I’m pearl-clutching over the amoral status of capitalism and the stock market, but hear me out: what if we’re actually in a 15-to-20-year-long knife-catching competition? What if all anybody has done is look at cashflow, net income, future growth guidance, and called it a day? A lack of scrutiny has allowed these companies to do effectively anything they want, free of worrisome questions like “will this ever make a profit?” What if we basically don’t know what the fuck is going on? What if all of this was utterly senseless?

As I wrote last year, the tech industry has run out of hypergrowth ideas, facing something I call “the Rot Com bubble.” In simple terms, they’re only “doing AI” because there do not appear to be any other viable ideas to continue the Rot Economy’s eternal growth-at-all-costs dance.
Yet because growth hasn’t slowed yet, analysts, the media and other investors are quick to claim that AI is “paying off,” even though nobody has ever said how much AI revenue is being generated. In the case of Salesforce, the best it can offer is “nearly $1.4 billion ARR,” which sounds really big until you realize a company with $10.9 billion in revenue is boasting about making less than $116 million in revenue in a month. Nevertheless, because Salesforce set a new revenue target of $60 billion by 2030, the stock jumped 4%. It doesn’t matter that most Agentforce customers don’t pay for the service, or that AI isn’t really making much money, or really anything, other than Number Go Up.

The era we live in is one of abject desperation, to the point that analysts and investors — and shareholders by extension — will take any abuse from management. They will allow companies to spend as much money as they want in whatever ways they want, as long as it continues the charade of “number go up.”

Let me spell it out a little more, using the latest earnings of various hyperscalers as an example. How much is all this AI spending actually earning them? We have no idea, because analysts and investors are in an abusive relationship with tech stocks. It is fundamentally insane that Microsoft, Meta, Amazon and Google have spent $776 billion in capital expenditures in the space of three years, and even more so that analysts and investors, when faced with such egregious numbers, simply sit back and say “they’re building the infrastructure of the future, baby!” Analysts and traders and investors and reporters do not think too hard about the underlying numbers, because doing so immediately makes you run head-first into a number of worrying questions such as “where did all that money go?” and “will any of this pay off?” and “how many GPUs do they actually own?” Analysts have, on some level, become the fractional marketing team for the stocks they’re investing in.
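A note on why that Salesforce ARR figure flatters: annual recurring revenue is a run rate, roughly the latest month's recurring revenue multiplied by twelve, not money already earned. A quick sketch of the arithmetic, using the figures as cited above:

```python
# ARR ("annual recurring revenue") is a run-rate metric: roughly the
# latest month's recurring revenue times 12, not cash already banked.

def implied_monthly_revenue(arr: float) -> float:
    """Back out the monthly recurring revenue implied by an ARR figure."""
    return arr / 12

agentforce_arr = 1_400_000_000  # "nearly $1.4 billion ARR", so an upper bound

monthly = implied_monthly_revenue(agentforce_arr)
print(f"Implied monthly revenue: at most ${monthly / 1e6:.1f}M")
```

In other words, the headline number is twelve times the monthly reality, which is why a "nearly $1.4 billion" headline and a roughly $116-million-a-month business are the same thing.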
When Oracle announced its $300 billion deal with OpenAI in September — one that Oracle does not have the capacity to fill and OpenAI does not have the money to pay for — analysts heaved and stammered like horny teenagers seeing their first boob. These are the same people that retail and institutional investors rely upon for advice on what stocks to buy, all acting with the disregard for the truth that comes from years of never facing a consequence. Three months later, and Oracle has lost basically all of the stock bump it saw from the OpenAI deal, meaning that any retail investor that YOLO’d into the trade because, say, analysts from major institutions said it was a good idea and news outlets acted like this deal was real, already got their ass kicked. And please, spare me the “oh they shouldn’t trade off of analysts” bullshit. That’s the kind of victim-blaming that allows these revered fuckwits to continue farting out these meaningless calls.

In reality, we’re in an era of naked, blatant, shameless stock manipulation, both privately and publicly, because a “stock” no longer refers to a unit of ownership in a company so much as it is a chip at a casino where the house constantly changes the rules. Perhaps you’re able to occasionally catch the house showing its hand, and perhaps the house meant for you to see it. Either way, you are always behind, because the people responsible for buying and selling stocks at scale under the auspices of “knowing what’s going on” don’t seem to know what they’re talking about, or don’t care to find out.

Let’s walk through the latest surge of blatant stock manipulation, and how the media and analysts helped it happen. Oracle announces its unfillable, unpayable $300 billion deal with OpenAI, leading to a 30%+ bump in stock price. Analysts, who should ostensibly be able to count, call it “momentous” and say they’re “in shock.” On September 22, 2025, CEO Safra Catz steps down, and nobody seems to think that’s weird or suspicious.
Two months later, Oracle’s stock is down 40%, with investors worried about Oracle’s growing capex, which is surprising, I suppose, if you didn’t think about how Oracle would build the fucking data centers. Basically anyone who traded into this got burned.

NVIDIA announced a “strategic partnership” to invest “up to $100 billion” and build 10GW of data centers with OpenAI, with the first gigawatt to be deployed in the second half of 2026. Where would the data centers go? How would OpenAI afford to build them? How would OpenAI build a gigawatt in less than a year? Don’t ask questions, pig! NVIDIA’s stock bumped from $175.30 to $181 in the space of a day. The media wrote about the story as if the deal was done, with CNBC claiming that “the initial $10 billion tranche [was] expected to close within a month or so once the transaction has been finalized.” I read at least ten stories that said that “NVIDIA had invested $100 billion.” Analysts would say that NVIDIA was “locking in OpenAI” to “remain the backbone of the next-gen AI infrastructure,” that “demand for NVIDIA GPUs is effectively baked into the development of frontier AI models,” that the deal “[strengthened] the partnership between the two companies…[and] validates NVIDIA’s long-term growth numbers with so much volume and compute capacity.” Others would say that NVIDIA was “enabling OpenAI to meet surging demand.” Three analysts — Rasgon at Bernstein, Luria at D.A. Davidson and Wagner at Aptus Capital — all raised circular deal concerns, but they were the minority, and those concerns were still often buried under buoyant optimism about the prospects of the company.

One eensy weensy problem though, everyone! This was a “letter of intent” — it said so in the announcement! — and on NVIDIA’s November earnings, it said that it “entered into a letter of intent with an opportunity to invest in OpenAI.” It turns out the deal didn’t exist and everybody fell for it! NVIDIA hasn’t sent a dime and likely won’t.
A letter of intent is a “concept of a plan.” Back in October, Reuters reported that Samsung and SK Hynix had "signed letters of intent to supply memory chips for OpenAI's data centers," with South Korea's presidential office saying that chip demand was expected to reach "900,000 wafers a month," with "much of that from Samsung and SK Hynix," which was quickly extrapolated to mean around 40% of global DRAM output. Stocks in both companies, to quote Reuters, “soared,” with Samsung climbing 4% and SK Hynix more than 12% to an all-time high. Analyst Jeff Kim of KB Securities said that “there have been worries about high bandwidth memory prices falling next year on intensifying competition, but such worries will be easily resolved by the strategic partnership,” adding that “Since Stargate is a key project led by President Trump, there also is a possibility the partnership will have a positive impact on South Korea's trade negotiations with the U.S.”

Donald Trump is not “leading Stargate.” Stargate is a name used to refer to data centers built by OpenAI. KB Securities has around $43 billion of assets under management. This is the level of analysis you get from these analysts! This is how much they know!

On SK Hynix's October 29 2025 earnings call, weeks after the announcement, its CEO, Kim Woo-Hyun, was asked a question about High Bandwidth Memory growth by SK Kim from Daiwa Securities, and his answer contained the call's only mention of OpenAI. Otherwise, SK Hynix has not added any guidance that would suggest that its DRAM sales will spike beyond overall growth, other than mentioning it had "completed year 2026 supply discussions with key customers." There is no mention of OpenAI in any earnings presentation. On Samsung's October 30 2025 earnings call, Samsung mentioned the term "DRAM" 18 times, and mentioned neither OpenAI nor any letters of intent.
In its Q3 2025 earnings presentation, Samsung mentions it will "prioritize the expansion of the HBM4 [high bandwidth memory 4] business with differentiated performance to address increasing AI demand." Analysts do not appear to have noticed a lack of revenue from an apparent deal for 40% of the world’s RAM! Oh well! Pobody’s nerfect! Both Samsung and SK Hynix’s stocks have continued to rise since, and you’d be forgiven for thinking this deal had something to do with it, even though it didn’t.

AMD announced that it had entered a “multi-year, multi-generation agreement” with OpenAI to build 6GW of data centers, with “the first 1GW deployment set to begin in the second half of 2026,” calling the agreement “definitive” with terms that allowed OpenAI to buy up to 10% of AMD’s stock, vesting over “specific milestones” that started with the first gigawatt of data center development. Said data centers would also use AMD’s yet-to-be-released MI450 GPUs. The deal would, per Reuters, bring in “tens of billions of dollars of revenue.” Where would those data centers go? How would OpenAI pay for them? Would the chips be ready in time? Silence, worm! How dare you ask questions? How dare you? Why are you asking questions? NUMBER GO UP!

AMD’s shares surged by 34%, with analyst Dan Ives of Wedbush saying that this was a “major valuation moment” for AMD. As an aside, Ives said that NVIDIA would benefit from the metaverse in 2021, and told CBS News on November 22 2021 that “the metaverse [was] real and Wall Street [was] looking for winners.” One would think that AMD’s November earnings — a month after the announcement — might be a barn-burner full of remaining performance obligations from OpenAI.
In fact, CEO Lisa Su said that “[AMD expected] this partnership will significantly accelerate [its] data center AI business, with the potential to generate well over $100 billion in revenue over the next few years.” AMD’s 10-Q filing, meanwhile, suggests no revenue from OpenAI at all. AMD raised guidance by 35% over the next five years. AMD's trailing 12-month revenue is $32 billion. "Tens of billions of dollars" would surely lead to more than a 35% boost (an increase of $11.2 billion or so) over the next five years? Guess all of that was for nothing. No follow-up from the media, no questions from analysts, just a shrug and we all move on. Anyway, AMD’s stock is now down from a high of $259 at the end of October to around $214 as of writing this sentence. Everybody who traded in based on analyst and media comments got fucked.

So, back on September 5, Broadcom said on its earnings call that it had a $10 billion order from a mystery customer, which analysts quickly assumed was OpenAI, leading to the stock popping 9%, and gradually increasing to a high of $369 or so on September 10, before declining a little until October 13, when Broadcom announced its ridiculous 10 gigawatt deal with OpenAI, claiming that it would deploy 10GW of OpenAI-designed chips, with the first racks deploying in the second half of 2026 and the entire deployment completed by end of 2029. The same day, its president of semiconductor solutions Charlie Kawwas added that said mystery customer was actually somebody else. Nevertheless, Broadcom's stock popped by 9% on the news about the 10GW deal, with CNBC adding that "the companies have been working together for 18 months." Because it's OpenAI, nobody sat and thought about whether somebody at Broadcom saying "well, OpenAI has yet to order these chips" was a problem.
In fact, the answer to “how does OpenAI afford this?” appeared to be “they’d afford it” when it came to analysts. Not to worry: OpenAI’s solution was far simpler. It didn’t order any chips. On Broadcom's November earnings call, Broadcom revealed that the $10 billion order was actually from Anthropic, another LLM startup that burns billions of dollars, which was buying Google's TPUs, and which also booked another $11 billion in orders. Analysts somehow believed that Anthropic is “positioned to spend heavily” despite being another venture-backed welfare recipient of the same flavor as OpenAI.

Oh, right, that 10GW OpenAI deal. Broadcom CEO Hock Tan said that he did “not expect much in 2026” from the deal, and guidance did not change to reflect it. Broadcom climbed to a high of $412 leading up to its earnings, and I imagine it did so based on people trading on the belief that OpenAI and Broadcom were doing a deal together, which does not appear to be happening. While there’s an alleged $73 billion backlog, every dollar from Anthropic is questionable.

Can we do anything to tell these phantom deals apart from real ones? Actually, yes we can. Whenever a company says “letter of intent” — as NVIDIA and SK Hynix/Samsung did — it’s important to immediately stop taking the deal seriously until you get the word “contract” involved. Not “agreement” or “deal” or “announcement,” but “contract,” because contracts are the only thing that actually matters. Similarly, it’s time for everybody — analysts, the media, members of congress, the fucking pope, I don’t care — to start treating these companies with suspicion, and to start demanding timelines. NVIDIA and Microsoft announced their $15 billion investment in Anthropic over a month ago. Where’s the money? Why does the agreement say “up to $10 billion” for NVIDIA and “up to $5 billion” from Microsoft? These subtle details suggest that the deal is not going to be for $15 billion, and the lack of activity suggests it might not happen at all.
These deals are announced with the intention of suggesting there is more revenue and money in generative AI than actually exists. Furthermore, it is irresponsible and actively harmful for analysts and the media to continually act as if these deals will actually get paid when you consider the financial conditions of these companies. As part of its alleged funding announcement with NVIDIA and Microsoft, Anthropic agreed to purchase $30 billion of Azure compute. It also agreed to spend "tens of billions of dollars" with Google Cloud. It ordered $10 billion in chips from Broadcom earlier in the year, and apparently placed another $11 billion order in its latest fiscal quarter. How does it pay for those? It will allegedly burn $2.8 billion this year (I believe it burned much, much more) and raised $16.5 billion in funding (before Microsoft and NVIDIA’s involvement, which we cannot confirm has actually happened).

How are investors tolerating Broadcom not directly stating “the future financial condition of this company is questionable”? Has Broadcom created a reserve for this deal? If not, why not? Anthropic will make no more than $5 billion this year, and has raised $17.5 billion (with a further $2.5 billion coming in the form of debt). How can it foreseeably afford to pay $10 billion, or $11 billion, or $21 billion, considering its already massive losses and all those other obligations mentioned? Will Jensen Huang hand over $10 billion so that Anthropic can hand it to Broadcom? I realize the counter-argument is that companies aren’t responsible for their counterparties’ financial health, but my argument is that it’s the responsibility of any public company to give a realistic view of its financial health, which includes noting if a chunk of its revenue is from a startup that can’t afford to pay for its orders. There is no counter to that! Anthropic cannot afford to pay Broadcom $10 billion right now!
Nevertheless, the problem is that in any bubble, being really stupid and ignorant works right up until it doesn’t, and however harsh the dot com bubble might have been, it wasn’t harsh enough, and those who were responsible were left unpunished and unashamed, guaranteeing that this cycle would happen again. I want to be really, abundantly clear about what’s happening: every single stock you see “growing because of AI” outside of those selling RAM and GPUs is actually growing because of something else. Microsoft, Amazon, Google and Meta all have other products that are making them money. AI is not doing it, and because analysts and investors do not think about things for two seconds, they have allowed themselves to be beaten down and turned into supplicants for public stocks. Investors have allowed themselves to be played, and the results will be worse than the dot com bubble bursting by several orders of magnitude.

I’m gonna be really simplistic for a second. I am skeptical of AI because everybody loses money. I believe every AI company is unprofitable, with margins that are getting increasingly worse as they scale, and as a result that none of them will be able to either get acquired or go public. This means that venture capitalists that have sunk money into AI startups are going to be sitting on a bunch of assets under management (AUM) — the same assets they collect fees on — that will eventually crater or go to zero, because there will be no way for any liquidity event to occur. This is at a time of historically-low liquidity for venture capitalists, with Pitchbook estimating there will only be $100.8 billion in venture capital funds available at the end of 2025.

Venture capitalists raise money from limited partners, who invest in venture capital with the hope of returns that outpace investing in the public markets. Venture capital vastly overinvested during 2021 and 2022. This was also a problem in private equity.
In simple terms, this means these funds are sitting on tons of stock that they cannot shift, and the longer it takes for a company to either go public or be acquired, the more likely it is the VC or PE firm will have to mark down its value.  This is so bad that, according to Carta, as of August 2024, less than 10% of VC funds raised in 2021 had made any distributions to their investors. In a piece from September, Carta revealed that “about 15% of funds” from 2023 had generated any disbursements as of Q2 2025, and the median net internal rate of return was 0.1%, meaning that, at best, most investors got their money back and absolutely nothing else. In fact, investing in venture capital has kinda fucking sucked. According to Carta, “As of the end of Q2, most VC funds across all recent vintages had a TVPI somewhere between 0.8x and 2x. But there are some areas where standout TVPIs are surfacing.” TVPI means Total Value to Paid-In Capital, or the amount of money you made (realized and unrealized) for each dollar invested. This chart may seem confusing, but it tells you that, for the most part, VCs have struggled to provide even-money returns since 2017. A “decent” TVPI is 2.5x, and as you’ll see, things have effectively collapsed since 2021. Companies are not going public or being acquired at the same rate, meaning that investor capital is increasingly locked up, meaning that limited partners are still waiting for a payoff from the last bubble, let alone this one. Carta would update the piece in December 2025, and things would somehow get worse. TVPI soured further, suggesting a further lack of exits across the board. The only slight improvement was that the median IRR rose to 0.5% for funds from 2021 and 0.1% for funds from 2022.  In simple terms, we are looking at years of locked-up capital leaving venture capital cash-starved and a little desperate. The worst part?
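For anyone unfamiliar with the metric, here’s a minimal sketch of the TVPI arithmetic, using hypothetical fund numbers rather than Carta’s data, showing why a figure near 1x means LPs have made essentially nothing:

```python
# TVPI (Total Value to Paid-In capital) for a hypothetical fund.
# TVPI = (cash distributed to LPs + remaining unrealized value) / capital paid in.
def tvpi(paid_in: float, distributed: float, residual_value: float) -> float:
    return (distributed + residual_value) / paid_in

# Hypothetical 2021-vintage fund: $100M paid in, only $5M actually
# distributed, $95M of paper value still locked up in private companies.
print(tvpi(100e6, 5e6, 95e6))    # 1.0 -- "even money," and almost all unrealized
# A "decent" outcome per the text would be 2.5x:
print(tvpi(100e6, 150e6, 100e6))  # 2.5
```

Note that a 1.0x TVPI made up mostly of residual value is far worse than it looks: the LPs haven’t been paid, they’re just being told their locked-up shares are worth what they put in.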
All of this is happening during a generational increase in the amounts that startups need to raise thanks to the ruinous costs of generative AI, and the negative margins of AI-powered services. To quote myself : None of these companies are profitable, nor do they have any path to an acquisition or IPO. Why? Because even the most advanced AI software company is ultimately prompting Anthropic or OpenAI’s models, meaning that their only real intellectual property is those prompts and their staff, and whatever they can build around the models they don’t control, which has been obvious from the meager “acquisitions” we’ve seen so far.  Windsurf, which was allegedly being sold to OpenAI, ended up selling its assets to Cognition in July , with Google paying $2.4 billion for its co-founders and a “licensing agreement,” similar to its acquisition of Character.Ai , where it paid $2.7 billion to rehire Noam Shazeer , license its tech, and pay off the stock of its remaining staff. This is also exactly what Microsoft did with Inflection AI and its co-founder Mustafa Suleyman . OpenAI’s acquisitions of Statsig ($1.1bn), Io Products ($6.5bn) and Neptune ($400m) were all-stock. Every other acquisition — Wiz, Confluent, Informatica, and so on ( CRN has a great list here ) — is either somebody trying to pretend that (for example) Wiz is related to AI, or trying to say that a data streaming platform is AI-related because AI needs that, which may be true, but doesn’t mean that any AI startups are actually selling. And they’re not, which is a problem, as 41% of US venture dollars in 2025 have gone into AI as of August, and according to Axios, the global number was around 51% . A crisis is brewing. 
Nerdlawyer, back in October, wrote about the explosive growth of secondary markets :  In simpler terms, there are now Hot Potato Funds, where either another limited partner buys another one’s allocation, the companies themselves buy back their stock, or the stock is resold to other private investors.  While this piece frames this as a positive, the reality is far grimmer. Venture capitalists are sitting on piles of immovable equity in companies worth far less than they invested at, and the answer, it appears, is to find somebody else to buy the dead weight.  According to Newcomer , only 1117 venture funds closed in 2025 (down from 2100 in 2024), and 43% of dollars raised went to the largest venture funds, per The New York Times and PitchBook, suggesting limited partners are becoming less-interested in pumping cash into the system at a time when AI startups are demanding more capital than has ever been raised. How long can the venture capital industry keep handing out $100 million to $500 million to multiple startups a year? Because all signs suggest that the current pace of funding must continue in perpetuity , as nobody appears to have worked out that generative AI is inherently unprofitable, and thus every single company is on the Silicon Valley Welfare System until everybody gives up, or the system itself cannot sustain the pressure. I’ve read too many people make off-handed comments about this “being like the dot com boom” and saying that “lots of startups might die but what’s left over will be good,” and I hate them for both their flippancy and ignorance.  None of the current stack of AI companies can survive on their own, meaning that the venture capital industry is holding them up. If even one of these companies falters and dies, the entire narrative will die. If that happens, it will be harder for AI companies to raise, and even harder to sell an AI company to someone else. 
This is a punishment for a decade-plus of hubris, where companies were invested in without ever considering a path to profitability. Venture capital has made the same mistake again and again, believing that because Uber, or Facebook, or Airbnb, or any number of companies founded nearly twenty years ago were unprofitable (with paths to profitability in all three cases, mind), it was totally okay to keep pumping up companies that had no path to profitability, which eventually became “had no apparent business model” (see: the metaverse, web3), which eventually became “have negative margins so severe and valuations so high that we will need an IPO at a market cap higher than Netflix.” This is Silicon Valley’s Rot Economy — the desperate, growth-at-all-costs attachment to startups where you “really like the founder,” where “the market could be huge” (who knows if it is!), where you just didn’t need to worry about profitability because IPOs and exits were easy.  Venture capital also used to be easy, because we were still in the era of hypergrowth. You could be a stupid asshole that doesn’t know anything, but there were so many good deals, and the more well-known you were, the more likely those deals would be brought to you first, guaranteeing a bigger payout, guaranteeing more LP capital, guaranteeing more opportunities that were of a higher quality because you were a big name. It was easier to make a valuable company, easier to get funded, and easier to sell, because the goal was always “get funded, grow as large an audience as possible, then go public or get acquired.” As a result, venture capital encouraged growth-at-all-costs thinking. In 2010, Ben Horowitz said that “the only thing worse for an entrepreneur than start-up hell (bankruptcy) is start-up purgatory”: This poisonous theory paid off, in that startups got used to building high-growth, low-margin companies that would easily sell to other companies or the markets themselves.  Until it didn’t, of course.
Per Nerdlawyer , IPOs have collapsed as an exit route, along with easy-to-raise capital.  Per PitchBook, since 2022, 70% of VC-backed exits were valued at less than the capital put in , with more than a third of them being startups buying other startups in 2024. The money is drying up as the value of VCs’ assets is decreasing , at a time when VCs need more money than ever , because everybody is heavily leveraged in the single-most-expensive funding climate in history. And as we hit this historic liquidity crisis, the two largest companies — OpenAI and Anthropic — are becoming drains on the system that, in a very real sense, are participating in a massive redistribution of capital reserved for startups to one of a few public companies. No, really!  OpenAI is trying to raise as much as $100 billion in funding so it can continue to pass money to one of a few public companies — $38 billion to Amazon Web Services over seven years, $22.4 billion to CoreWeave over five years, and $250 billion over an indeterminate period on Microsoft Azure . If successful, OpenAI’s venture telethon will raise more money than has ever been raised in a single round, draining funds that actual startups need. Anthropic has agreed to $70 billion in compute and chip deals across Google, Amazon and Broadcom, and that’s not including the Hut8 compute deal that Google is backing . This money will come from what remains of venture capital, private equity and hyperscaler generosity.  Yet elsewhere, even the money that goes to regular startups is ultimately being sent to hyperscalers. That AI startup that needs to keep raising $100 million in a single round isn’t sending that cash to other startups — it’s mostly going to OpenAI (Microsoft, Amazon, CoreWeave, Google), Anthropic (Google, Microsoft, Amazon), or one of the large hyperscalers for Azure, AWS or Google Cloud.  Silicon Valley didn’t birth the next big tech firm. 
It incubated yet another hyperscaler-level parasite, except instead of just spending money on hyperscaler services (and raising money to do so), both Anthropic and OpenAI actively drain the venture capital system as well, as they both burn billions of dollars.  By creating something that’s incredibly expensive to run, they naturally create startups more dependent on the venture capital system, and the venture capital system has no idea what to do other than say “just grow, baby!” Both OpenAI and Anthropic’s models might be getting cheaper on a per-million-token basis, but they use more tokens, increasing the cost of inference, which in turn increases the costs of startups doing business, which in turn means OpenAI, Anthropic, and all connected startups lose more money, which increases the burn on venture capital. This is a doom-spiral, one that can only be reversed through the most magical and aggressive turnaround we will have seen in history, and it will have to happen next year, without fail.  It won’t.  So why did venture do this? Folks, we haven’t seen values this big in a long time. These are the biggest numbers we’ve ever seen. They’re simply tremendous. OpenAI is maybe worth $830 billion, can you believe that? They lose so much money but folks we don’t worry about that, because they’re growing so fast. We love that Clammy Sam Altman — they call him “Clamuel” — tells everybody he’s giving them one billion dollars. Data centers are going to have the biggest deals we’ve ever seen, even [ tchhh sound through teeth ] if we have to work with Dario. You see, right now AI startups are big, exciting news for the limited partners funding LLM firms.  Things feel exciting because the value of the assets under management (AUM) is going up, which is nothing dodgy, just how VCs value things, and because fees are charged on AUM, marking up those AI stakes is how VCs get paid.
Investing early in OpenAI allows a VC — or even an asset manager like Blackstone, which invested in 2024 — to say it has a big holding and a big increase in its AUM.  We are currently in the sowing stage. Right now, AI stocks make VCs who bet on them two years ago look like geniuses on paper. If you got in early on OpenAI, Anthropic, Cursor, Cognition, Perplexity or any other company that loves to burn several dollars per dollar of revenue, you have a big, beautiful number, the biggest you’ve ever seen, and your limited partners need to pay you a fee just to manage it. Venture capital hasn’t seen valuations like this in a long time, and on paper, it feels like a lot of VCs got in on companies worth billions of dollars. On paper, Cognition is worth $10.2 billion, Perplexity $18 billion, Cursor $29.3 billion, Lovable $6.6 billion, Cohere $6.8 billion, Replit $3 billion, and Glean $7.2 billion — massive valuations for companies that all basically build products that OpenAI or Anthropic or Amazon or Google or any number of Chinese companies are already working to clone. They are all losing tons of money and have no path to profitability.  But right now the numbers are simply tremendous. I’ve heard venture capitalists tell me that there are times when they have to agree to invest with little to no information, or else lose the opportunity to another sucker investor. I’ve heard venture capitalists say they don’t have any insight into finances. Venture capitalists would, of course, claim I’m insane, saying that the “growth is obviously there” while pointing to whatever startup has made $100 million ARR ($8.3 million a month), all while not discussing the underlying operating expenses. The idea, I believe, is that the current spate of AI spending is only set to increase next year, and that will…somehow lead to fixing margins?
Venture capitalists staunchly refuse to learn anything other than “invest in growth and then profit from growth,” even if “profiting from growth” doesn’t seem to be happening anymore. In reality, venture capital shouldn’t have touched LLMs with a fifteen-foot pole, because the margins were obviously, blatantly bad from the very beginning. We knew in the middle of 2024 that OpenAI would lose $5 billion. A sane venture capital climate would have fucking panicked, but this one chose to double, triple and quadruple down. I believe that massive valuation drawdowns are a certainty. There are losses coming. Venture capitalists, I have to ask you: what happens if OpenAI dies? Do you think that this will make investors interested in funding or acquiring other AI startups? How much longer are we going to do this? When will venture capital realize it’s setting itself up for disaster? And what, exactly, is the plan? OpenAI and Anthropic will suck the lakes dry like an NVIDIA GPU named after Nancy Reagan. How is this meant to continue, and what will be left when it stops? The answer is simple: there won’t be money for venture capital for a while. Those AI holdings are going to be worth, at best, 50% of their current marks, if they retain any value at all. Once one of these startups dies, a panic will ensue, sending venture capitalists scrambling to get their holdings acquired, until there’s little or no investor interest left. Why would LPs ever trust venture capital after this? Why would anybody? Based on the past four years, it doesn’t appear that venture capital is actually good at investing money — it just got lucky, year after year, until there were few ideas left that could sell for hundreds of millions or billions of dollars.  Venture capital believed it knew better as it turned its back on basic business fundamentals, starting with Clubhouse, crypto, the metaverse, and now generative AI. Yet they’re far from the only fuckwits on the dickhead express.
Per Bloomberg, there were at least $178.5 billion in data-center credit deals in the US in 2025, rivaling the $215.4 billion invested in US venture capital in 2024 and the $197.2 billion invested in US VC through August 7 2025, and over $100 billion more than the $60.69 billion of data center credit deals done in 2024. I’m very worried, and I’m going to tell you why, using a company called CoreWeave that I’ve been actively warning people about since March. CoreWeave is something called a “neocloud”: a company that sells AI compute by renting out NVIDIA GPUs, and, as I explained a few months ago, it does so by building data centers backed by endless debt:  CoreWeave is one of the largest providers of AI compute in the world, and its business model is indicative of how most data center companies make money. To explain my concerns, I’m going to use this chart from CoreWeave’s Q2 2025 earnings presentation. First, CoreWeave signs contracts — such as its $14 billion deal with Meta and $22.4 billion deal with OpenAI — before it has the physical infrastructure to service them. It then raises debt using this contract as collateral, orders the GPUs from NVIDIA, which arrive after three months and then take another three months to install, at which point monthly client payments begin. To really simplify this: data center developers are raising money months, sometimes as much as a year, before they ever expect to make a penny. In fact, I can find no consistent answer to “how long does a data center take to build,” and the answer here is pretty important, because that’s how the money is gonna get made from these things. You may notice that “monthly payments” begin at 6 to 30 months, a curious and broad blob of time.
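The timeline above can be sketched as a toy model. The three-month delivery and install figures and the $22.4 billion OpenAI contract come from the text; the debt figure is hypothetical, and this is an illustration of the structure, not a model of any specific deal:

```python
# A toy model of the neocloud financing timeline described above.
from dataclasses import dataclass

@dataclass
class DataCenterDeal:
    contract_value: float            # customer contract used as debt collateral
    debt_raised: float               # hypothetical: debt raised against it
    gpu_delivery_months: int = 3     # GPUs arrive ~3 months after ordering
    install_months: int = 3          # ~3 more months to install
    construction_delay_months: int = 0  # weather, permits, power...

    def months_until_first_payment(self) -> int:
        # Customers don't pay until compute is actually available, so every
        # month of delay is a month of servicing debt with zero revenue.
        return (self.gpu_delivery_months + self.install_months
                + self.construction_delay_months)

deal = DataCenterDeal(contract_value=22.4e9, debt_raised=7e9)
print(deal.months_until_first_payment())  # 6 -- the best case
deal.construction_delay_months = 24
print(deal.months_until_first_payment())  # 30 -- the far end of the "blob"
```

That 6-to-30-month spread in the chart is exactly the `construction_delay_months` term: the only variable nobody controls, sitting between borrowed money and the first dollar of revenue.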
You see, data centers are extremely difficult to build, and the concept of an “AI data center” is barely a few years old, with the concept of hundreds of megawatts in one data center campus made up entirely of AI GPUs barely two years old, which means basically everybody building one is doing so for the first time, and even experienced developers are running into problems. For example, Core Scientific, CoreWeave’s weird partner organization it tried and failed to buy, has been trying to convert its Denton, Texas cryptocurrency mining data center into an AI data center since November 2024, specifically so that CoreWeave can rent it to Microsoft for OpenAI. This hasn’t gone well, with the Wall Street Journal reporting a few weeks ago that Denton has been wracked with “several months” of delays thanks to rainstorms preventing contractors from pouring concrete. The cluster is apparently going to have 260MW of capacity. What this means for CoreWeave is that it can’t start getting paid by OpenAI, because, per its contract, customers don’t have to start paying until the compute is actually available. This is a very important detail to know for literally any data center development you’ve ever seen. As of its latest Q3 2025 earnings filing, CoreWeave is sitting on $1.1 billion in deferred revenue (income for services not yet rendered), up from $951 million in Q2 2025 and $436 million in Q1 2025. This means deposits have been made, but the contracts have yet to be serviced. Now, I’m a curious little critter, so I went and found the 921-page $2.6 billion DDTL 3.0 loan agreement between CoreWeave and banks including Morgan Stanley, MUFG Bank and Goldman Sachs, and in doing so learned the following: I apologize; that suggests that CoreWeave isn’t already in trouble.
Buried inside NVIDIA’s latest earnings (page 17) there was a little clue:  Credit where credit is due — eagle-eyed analyst JustDario caught this in November — but in CoreWeave’s condensed consolidated balance sheets, there sits a $477.5 million line-item under “restricted cash and cash equivalents, non-current.” Though this might not be the NVIDIA escrow — this number shifted from $617m in Q1 to $340m in Q2 — it lines up all too precisely…and who else would NVIDIA be guaranteeing?  In any case, CoreWeave is likely getting the best deals in data center debt outside of Oracle. It has top-tier financiers (who I will get to shortly), the full backing of NVIDIA (which is an investor, a customer, and an apparent financial backstop), and the ability to raise debt quickly. CoreWeave’s deals are likely indicative of how data center financing takes place, and those top-tier financiers? They’ve been in basically every deal. In fact… So, I went and dug through a pile of 26 prominent data center loan deals, including the proposed $38 billion debt package that Oracle and Vantage Data Center Partners are raising for Stargate Shackelford and Wisconsin; Stargate Abilene; Stargate New Mexico; SoftBank’s $15 billion bridge loan (which I included for a reason that will become obvious shortly); and multiple CoreWeave loans, and found a few commonalities: I realize there are far more data center deals than these, but I wanted to show you exactly how centralized these deals are.  The largest deals — the $38 billion Stargate TX/WI deal and $18 billion Stargate New Mexico deal — both involved Goldman Sachs, BNP Paribas, SMBC and MUFG, and all four of those companies have, at some point, funded CoreWeave. In fact, everybody appears to have funded CoreWeave at some point — Citibank, Credit Agricole, Societe Generale, Wells Fargo, Carlyle, Blackstone, BlackRock, Barclays, Magnetar, and Jefferies, to name a few.
Of the 40 banks and financial institutions I researched, 24 have, at some point, loaned to or organized debt for CoreWeave. Of those institutions, Blackstone, Deutsche Bank, JP Morgan Chase, Morgan Stanley, MUFG and Wells Fargo have done so multiple times.  CoreWeave is a deeply unprofitable company saddled with incredible debt and deteriorating margins, with one of its largest clients paying net 360, and, as I’ve said, is arguably the best-financed data center company in the world.  What I’m getting at is that most data center deals are likely much worse than the terms that CoreWeave faces, and are likely financed in a similar way , where a client is signed for data center capacity that doesn’t exist, such as when Nebius raised $4.3 billion through a share sale and convertible notes (read: loans) to handle its $17.4 billion data center contract with Microsoft , and guess what? Goldman Sachs acted as lead underwriter on the deal, with assistance from Bank of America, CitiGroup, and Morgan Stanley, all three of which have invested in CoreWeave. AI data centers are expensive, require debt due to the massive cost of construction and GPUs, and all take at least a year, if not two to start generating revenue, at which point they also begin losing money because it seems that renting out AI GPUs is really unprofitable .  Every single major bank and financial institution has piled hundreds of millions if not billions of dollars into building data centers that take forever to even start generating money, at which point they only seem to lose it. Worse still, NVIDIA sells GPUs on a one-year upgrade cycle, meaning that all of those data centers being built right now are being filled with Blackwell chips, and by the time they turn on, NVIDIA will be selling its next-generation Vera Rubin chips. 
Now, you’ve probably heard that Vera Rubin will use the same racks (Oberon) as Blackwell, which is true to an extent , but won’t be true for long, as NVIDIA intends to shift to Kyber racks in 2027 , hoping to build 1MW IT racks (which will involve entire racks-full of power supplies!), meaning that all of those data centers you see today — whenever they get built! — will be full of racks incompatible with the next generation of GPUs. This will also decrease the value of the assets inside the data centers, which will in turn decrease the value of the assets held by the firms investing. Stargate Abilene? The one invested in by JP Morgan, Blue Owl, Primary Digital Infrastructure and Societe Generale? The one that’s heavily delayed and won’t be ready until the end of 2026 at earliest? Full to the brim with two-year-old GB200 racks !  By the beginning of 2027, Stargate Abilene will be obsolete, as will any and all data centers filled with Blackwell GPUs, as will any and all data centers being built today. Every single one takes 1-3 years and hundreds of millions (or billions) in debt, every single one faces the same kinds of construction delays, and better yet, almost all of them will turn on in roughly the same time frame. Now, I ain’t no economist, but I do know that “supply and demand” has an effect on pricing. What do you believe happens to the price of renting a Blackwell GPU when all of these data centers come on? Do you think it becomes more valuable? Or less?   And while we’re on the subject, what do you think happens if there isn’t sufficient demand?  Right now, OpenAI makes up a large chunk of the global sale of compute — at least $8.67 billion of Azure revenue through September 2025, $22.4 billion of CoreWeave’s backlog, $38 billion of Amazon’s backlog, and so on and so forth — and made, based on my reporting, just over $4.5 billion in that period . 
It cannot afford to pay anybody, and nowhere is that more obvious than when it negotiated year-long payment terms with CoreWeave.  Otherwise, when you remove the contracts signed by hyperscalers and OpenAI (which I do not believe has paid anybody other than Microsoft yet), based on my analysis, there was less than a billion dollars of AI compute revenue in 2025, or 0.5831% of the money spent on data centers.  Hyperscaler revenue is also immediately questionable, with Microsoft’s deal with Nebius (per its 6-K filing) set to default in the event that Nebius cannot provide the capacity it sold out of its unfinished Vineland, New Jersey data center, which is being built by DataOne, a company that has never built an AI data center, whose CEO has his LinkedIn location set to “United Arab Emirates,” with funding from a concrete firm that is also a vendor on the construction project. I also believe Microsoft is setting Nebius up to fail. Based on discussions with sources with direct knowledge of plans for the Vineland, New Jersey data center, Nebius has agreed to timelines that involve having 18,000 NVIDIA B200 and B300 GPUs by the end of January for a total of 50MW, with another 18,000 B300s due by the end of May. When I spoke with experts in the field about how viable these plans are, two laughed, and one told me to fuck off. If Nebius fails to build the capacity, Microsoft can walk away, much like OpenAI can walk away from Stargate in the event that Oracle fails to build it on time (as reported by The Information in April), and I believe that this is the case for literally any data center provider that’s building a data center for any signed-up tenant. This is another layer of risk to data center development that nobody bothers to discuss, because everybody loves seeing these big, beautiful numbers. Except the numbers might have become a little too beautiful for some.
A few weeks ago, the Financial Times reported that Blue Owl Capital had pulled out of the $10 billion Michigan Stargate data center project, citing “concerns about its rising debt and artificial intelligence spending.” To quote the FT, “Blue Owl had been in discussions with lenders and Oracle about investing in the planned 1 gigawatt data centre being built to serve OpenAI in Saline Township, Michigan.” What debt, you ask? Well, Blue Owl — formerly the loosest legs in data center financing — was in CoreWeave’s $600 million and $750 million debt deals for its planned Virginia data center with Chirisa Technology Parks, as well as a $4 billion CoreWeave data center project in Lancaster, Pennsylvania, Stargate Abilene and Stargate New Mexico, Meta’s $30 billion Hyperion data center, and a $1.3 billion data center deal in Australia through Stack Infrastructure, a company it owns through its acquisition of IPI Partners.  To be clear, Blue Owl “pulling out” is not the same as backing out of a regular deal. It’s a BDC (Business Development Company) that invests both its own money and rallies together various banks, in this case SMBC, BNP Paribas, MUFG and Goldman Sachs (all part of Stargate New Mexico).  Blue Owl is incredibly well-connected and experienced in putting together these kinds of deals, and very likely went to the many banks it’s worked with over the years, who apparently had “concerns about its rising debt,” much of it issued by them! While rumours suggest that Blackstone may “step in,” the pool of banks that will actually back a $10 billion deal is fairly narrow, and “stepping in” would require billions of dollars and serious legal logistics. So, why are things looking shaky? Well, remember that thing about how this data center would be leased to Oracle? Oracle had free cash flow of negative thirteen billion dollars on revenues of $16 billion, and its most recent earnings only “beat” estimates thanks to the sale of its $2.68 billion stake in Ampere.
Its debt is exploding (with over a billion dollars in interest payments in its last quarter), its GPU gross margins are 14% (which does not mean profitable), its latest NVIDIA GB200 GPUs have a negative 100% gross margin, and it has $248 billion in upcoming data center leases yet to begin.  All, for the most part, to handle compute for one customer: OpenAI, which needs to raise $100 billion, I guess. We’ve already got some signs of concern within the banking world around data center exposure.  In November, the FT reported that Deutsche Bank — which backed CoreWeave multiple times, as well as several data centers — was “exploring ways to hedge its exposure to data centers after extending billions of dollars in debt,” including shorting a “basket of AI-related stocks” or buying default protection on some of its debt using synthetic risk transfers, which are deals where a bank sells the full or partial credit risk of a loan (or loans) to outside investors while keeping the loans on its books, paying those investors a regular fee for taking the risk (this is a simplification). In December, Fortune reported that Morgan Stanley (CoreWeave three times, IPI Partners, Hyperion, the SoftBank bridge loan) was also considering synthetic risk transfers on “loans to businesses involved in AI infrastructure.” Back in April, SMBC sold synthetic risk transfers tied to “private debt BDCs” — and while this predates the large data center deals done by Blue Owl, SMBC has overseen multiple Blue Owl deals in the past. In December, SMBC closed another SRT, selling off risk from “Australian and Asian project finance loans,” though I can’t confirm whether any of them were data center related. In December, Goldman Sachs paused a planned mortgage-bond sale for data center operator CyrusOne, with the intent to revive it in the first quarter of 2026.
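As a toy illustration of the SRT mechanics described above (every number here is hypothetical, and real SRTs are far more structured than this):

```python
# Highly simplified synthetic risk transfer: the bank keeps the loans on its
# books, pays investors a fee, and the investors absorb the first slice of
# any losses. All figures hypothetical.
loan_book = 1_000_000_000   # $1B of data center loans stay on the bank's balance sheet
first_loss_share = 0.10     # investors cover the first 10% of losses
annual_fee_rate = 0.08      # fee the bank pays on the protected tranche

annual_fee = loan_book * first_loss_share * annual_fee_rate
print(f"bank pays investors ${annual_fee / 1e6:.0f}M/year for protection")

def bank_loss(total_loss: float) -> float:
    # Investors eat losses up to the tranche size; the bank keeps the rest.
    protected = min(total_loss, loan_book * first_loss_share)
    return total_loss - protected

print(bank_loss(60_000_000))    # a $60M loss is fully absorbed by investors
print(bank_loss(250_000_000))   # losses past the tranche stay with the bank
```

The point is that the bank is paying real money, every year, specifically to be insulated from the first losses on these loans, which tells you what it thinks of them.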
Oracle’s credit risk reached a 16-year high in the middle of December, with credit default swaps (basically, bets that Oracle will default on its debts, an unlikely yet no-longer-impossible event) climbing to their highest price since the great financial crisis.  While Morgan Stanley and Deutsche Bank’s SRTs are yet to close, it’s still notable that two of the largest players in data center financing feel the need to hedge their bets. So, what exactly are they hedging against? Simple! That tenants won’t arrive and debts won’t get paid.  I also believe they’re going to need bigger hedges, because I don’t think there is enough actual demand for AI to fill the data centers being built, and I think most data center loans end up underwater within the next two years. I realize we’ve taken a great deal of words to get here, but every single part was necessary to explain what I think happens next. Let’s start by quoting my premium newsletter from a few weeks ago: You see, every little link in the chain of pain is necessary to understand things.  In really simple terms, I believe that almost every investment in a data center or AI startup may go to zero.  Let me explain. If we assume that 50% of the $171.5 billion in data center debt (so $85.75 billion) is in GPUs, that’s around 3.2GW of data center capacity, based on my model of NVIDIA’s approximate split of sales between different AI GPUs from my premium piece last week. The likelihood of the majority of these projects being A) completed within the next year and B) completed on budget is very, very small. Every delay increases the likelihood of default, as each of these projects is heavily debt-based. The customers of these projects are either hyperscalers (who are only “doing AI” because they have no other hypergrowth ideas and because Wall Street currently approves) or AI startups, all of whom are unprofitable.
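Spelled out as arithmetic (the 50% GPU share and 3.2GW capacity are my estimates from above; the per-GW figure simply falls out of the division, as does the revenue share from earlier):

```python
# Back-of-envelope: what the data center debt figures above actually imply.
total_debt = 171.5e9   # data center credit deals cited above
gpu_share = 0.50       # my estimate: half the spend goes to GPUs
gpu_spend = total_debt * gpu_share
capacity_gw = 3.2      # my estimate of the capacity that buys

print(f"GPU spend: ${gpu_spend / 1e9:.2f}B")  # $85.75B
print(f"implied GPU cost per GW: ${gpu_spend / capacity_gw / 1e9:.1f}B")
# And the non-hyperscaler AI compute revenue share from earlier in the piece:
print(f"AI compute revenue vs data center spend: {1e9 / total_debt:.4%}")  # 0.5831%
```

That last line is the whole story: just over half a cent of non-hyperscaler, non-OpenAI compute revenue for every dollar of data center debt.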
While there are potentially hedge funds or other companies looking for “private AI” integrations, I think this is a very, very small market. On top of that, AI compute itself may not be profitable, and because, by my estimate, everybody has spent about $85 billion on filling data centers with the same GPUs, the aggregate price of renting out GPUs will decline. Already the average price of Blackwell GPUs has declined to an average of $4.41 an hour according to Silicon Data , and that’s before the majority of Blackwell-powered GPUs come online. Yet the customer base shrinks from there, because the majority of AI startups aren’t actually renting GPUs — they build products on top of models built by OpenAI or Anthropic, who have made it clear they’re buying capacity from either hyperscalers or, in OpenAI’s case, getting Oracle or CoreWeave to build it for them. Why? Because building your own model is incredibly capital-intensive, and it’s hard to tell if the results will be worth it. Now, let’s assume — I don’t actually believe it will, but let’s try anyway — that all of that 3.2GW of capacity comes online. How much compute does an AI company use? OpenAI claims it has 2GW of capacity as of the end of 2025 , and is allegedly approaching 900 million weekly active users . I don’t think there are any AI companies with even 10% of that userbase, but even if there were, OpenAI spent $8.67 billion on inference through the end of September. Who can afford to pay even 10% of that a year? Or 5%?  Yet in reality, OpenAI is likely more indicative of the overall compute spend of the entire AI industry. As I’ve said, most companies are powered not by their own GPU-driven models, but by renting them from other providers.  
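Here’s a rough sketch of the unit economics behind that pricing concern. The $4.41/hour rate is Silicon Data’s figure from above; the utilization, hardware cost and operating cost numbers are hypothetical assumptions for illustration only:

```python
# Rough per-GPU rental economics. Hourly rate from Silicon Data (cited above);
# utilization, hardware cost and opex are hypothetical assumptions.
HOURS_PER_YEAR = 8760

def annual_gross(rate_per_hour: float, utilization: float) -> float:
    # Gross revenue from one rented GPU at a given hourly rate and utilization.
    return rate_per_hour * HOURS_PER_YEAR * utilization

revenue = annual_gross(4.41, utilization=0.60)
print(round(revenue))  # roughly $23K of gross revenue per GPU per year

# Against an assumed ~$50K GPU amortized over 4 years plus an assumed
# ~$8K/year of power, cooling, networking and staff:
annual_cost = 50_000 / 4 + 8_000
print(round(revenue - annual_cost))  # a thin margin that shrinks as rates fall
```

Under these assumptions the margin is already thin, and every new Blackwell-filled data center that comes online pushes the hourly rate, and therefore the `revenue` term, down while the debt service stays fixed.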
OpenAI and Anthropic, the two largest consumers of AI compute, spent a combined $11.33 billion on Azure and AWS respectively through the first nine months of this year, which suggests two things: In fact, it would take sinking every single dollar of venture capital — over $200 billion — into AI compute every single year, and then some, just to provide the revenue to justify these deals. In the space of a year, Microsoft Azure made $75 billion , Google Cloud $43 billion and Amazon Web Services $100 billion . Need more proof? Still don’t believe me? Then skip to page 18 of NVIDIA’s most-recent earnings : If there’s such incredible, surging demand, why exactly is NVIDIA spending six fucking billion dollars a year in 2026 and 2027 on cloud compute ? NVIDIA doesn’t need the compute — it just shut down its AWS rival DGX Cloud ! It looks far more like NVIDIA is propping up an industry with non-existent demand. I’m afraid there is no secret AWS-sized spend waiting in the wings for the right moment to pounce. There is no secret demand wave, nor is there any capacity crunch that is holding back incredible swaths of revenue. Oracle’s $523 billion in remaining performance obligations are made up of OpenAI, Meta, and fucking NVIDIA . For AI data centers to make sense, most startups would have to start becoming direct users of AI compute , while also spending more on cloud compute services than they’ve ever spent. The largest consumers of AI compute are both unprofitable, unsustainable monstrosities. Eventually, reality will dawn on one or more of these banks. Projects will get delayed thanks to weather, or budgetary issues, or when customers walk away ( as just happened to data center REIT Fermi ). Loan payments will start going unpaid. Elsewhere, AI startups will keep asking for money, again and again, and for a while they’ll keep raising, until the valuations get too high, or VC coffers get too low. 
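As a rough sanity check on how small that spend is next to the cloud majors’ revenue, here’s a sketch using the figures above; the naive four-thirds annualization of the nine-month spend is my assumption.

```python
# Comparing the two largest AI compute buyers' spend against total cloud
# revenue. Dollar figures are from the text; the annualization is naive.

openai_anthropic_9mo_bn = 11.33                  # combined Azure + AWS spend, $bn
annualized_bn = openai_anthropic_9mo_bn * 4 / 3  # crude full-year estimate

cloud_revenue_bn = {"Azure": 75, "Google Cloud": 43, "AWS": 100}
total_cloud_bn = sum(cloud_revenue_bn.values())  # $218bn

share = annualized_bn / total_cloud_bn
print(f"Annualized spend: ${annualized_bn:.1f}bn, or {share:.1%} of cloud revenue")
```

Even annualized, the two biggest customers in generative AI amount to roughly 7% of the big three clouds’ combined revenue.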
You’re probably gonna say at this point that Anthropic or OpenAI might go public, which will infuse capital into the system, and I want to give you a preview of what to look forward to, courtesy of AI labs MiniMax and Zhipu (as reported by The Information), which just filed to go public in Hong Kong.  Anyway, I’m sure these numbers are great- oh my GOD ! In the first half of this year, Zhipu had a net loss of $334 million on $27 million in revenue , and guess what, 85% of that revenue came from enterprise customers. Meanwhile, MiniMax made $53.4 million in revenue in the first nine months of the year, and burned $211 million to earn it. It is time to wake up. These are the real-life costs of running an AI company. OpenAI and Anthropic are going to be even worse. This is why nobody wants to take AI companies public. This is why nobody wants to talk about the actual costs of AI. This is why nobody wants you to know the hourly cost of running a GPU, and this is why OpenAI and Anthropic both burn billions of dollars — the margins fucking stink , every product is unprofitable , and none of these companies can afford their bills based on their actual cashflow. Generative AI is not a functional industry, and once the money works that out, everything burns. Though many AI data centers boast of having tenancy agreements, remember that these agreements are either with AI startups that will run out of money or hyperscalers with legal teams numbering in the thousands. Every single deal that Microsoft, Amazon, Meta, Google or NVIDIA signs is riddled with outs specifically hedging against this scenario, and there won’t be a damn thing that anybody can do if hyperscalers decide to walk away. Before then, NVIDIA’s bubble is likely to burst. 
As I discussed a few weeks ago, NVIDIA claims to have shipped six million Blackwell GPUs , and while it may be employing very dodgy maths (claiming each Blackwell GPU is actually two GPUs because each one has two chips ), my modeling of its last three quarters suggests that NVIDIA shipped around 5.33GW’s worth of GPUs — and based on reading about every single data center I can find, it doesn’t appear that many have been built and powered on. Worse still, NVIDIA’s diversified revenue is collapsing. In Q1FY26, two customers represented 16% and 14% of revenue, in Q2FY26 two customers represented 23% and 16% of revenue, and in Q3FY26 four customers represented 22%, 15%, 13% and 11% of total revenue, with all that money going toward either GPUs or networking gear. I go into detail here , but I put it in a chart to show you why this is bad: In simpler terms, NVIDIA’s revenue is no longer coming from a diverse swath of customers. In Q1FY26, NVIDIA had $30.84 billion of diversified revenue, Q2 $28.51 billion, and Q3 $22.23 billion.  NVIDIA GPUs are astronomically expensive — $4.5 million for a GB300 rack of 72 B300 GPUs, for example — and filling data centers full of them requires debt unless you’re a hyperscaler. While I can’t say for sure, I believe NVIDIA’s diversified revenue collapse is a sign that smaller data center projects are starting to have issues getting funded, and/or hyperscalers are pulling back on their GPU purchases.  To look through the eyes of an AI booster — all I’m seeing is blue and yellow, as usual! — one might say that these big customers are covering the loss of revenue, but the reality is that these big projects are run on debt issued by banks that are becoming increasingly-worried about nobody paying them back. 
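The diversified-revenue numbers above fall straight out of NVIDIA’s customer-concentration disclosures: total quarterly revenue minus the share attributed to its >10% direct customers. A quick sketch (quarterly revenue totals are from NVIDIA’s filings):

```python
# "Diversified" revenue = total revenue x (1 - sum of >10% customer shares).
# Customer shares are NVIDIA's 10-Q disclosures as cited in the text.

quarters = {
    # quarter: (total revenue in $bn, >10% direct customer shares)
    "Q1FY26": (44.06, [0.16, 0.14]),
    "Q2FY26": (46.74, [0.23, 0.16]),
    "Q3FY26": (57.00, [0.22, 0.15, 0.13, 0.11]),
}

for quarter, (total_bn, big_shares) in quarters.items():
    diversified_bn = total_bn * (1 - sum(big_shares))
    print(f"{quarter}: ${diversified_bn:.2f}bn diversified of ${total_bn:.2f}bn")
```

Which reproduces the $30.84 billion, $28.51 billion and $22.23 billion figures above, and makes the quarter-on-quarter slide easy to see.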
The mistake that every investor, commentator, analyst and member of the media makes about NVIDIA is believing that its sales are an expression of demand for AI compute, when it’s really more of a statement about the availability of debt from banks and private credit.  Similarly, the continued existence of AI startups is an expression of the desperation of venture capital, and the continuing flow of massive funding rounds is a sign that they see no other avenues for growth.  Eventually, data centers are going to go unbuilt, and data center debt packages will begin to fall apart. Remember, Oracle’s $38 billion data center deal is actually yet to close , much like Stargate New Mexico is yet to close. These deals, while seeming like they’re trending positively, are both incredibly important to the future of the AI bubble, and any failure will spook an already-nervous market. Only one link in the chain needs to break. Every part of the AI bubble — this fucking charade — is unprofitable, save for NVIDIA and the construction firms erecting future laser tag arenas full of negative-margin GPUs. What happens if the debt stops flowing to data centers? How will NVIDIA sell those 20 million Blackwell and Vera Rubin GPUs ? What happens if venture capitalists start running low on funds, and can’t keep feeding hundreds of millions of dollars to AI startups so that they can feed them to Anthropic or OpenAI?  What happens to OpenAI and Anthropic’s already negative-margin businesses when their customers run out of money? What happens to Oracle or CoreWeave’s work-in-progress data centers if OpenAI can’t pay its bills? What happens to Anthropic’s $21 billion of Broadcom orders, or its tens of billions in Google Cloud spend? In the last year, I estimate I’ve been asked the question “what if you’re wrong?” over 25 times. Every single time the question comes with an undercurrent of venom — the suggestion that I’m being an asshole for daring to question the wondrous AI bubble. 
Every single person who has asked this has been poorly-read — both in terms of my work and the surrounding economics and technological possibilities of Large Language Models — and believes they’re defending technology, when in reality they’re defending growth , and the Rot Economy’s growth-at-all-costs mindset.  In many cases they are not excited about technology , but the prospects of being first in line to lick an already-sparkling boot. This has never been about progress or productivity. If it was, we’d actually see progress, or productivity boosts, or anything other than the frothiest debt and venture markets of all time. Large Language Models do not create novel concepts, they are inconsistent and unreliable, and even the “good” things they do vary wildly, as you’d expect from a giant probability machine. LLMs are not good enough for people to pay regular software prices at any scale, and the consequence of this will be that every single dollar spent on GPUs has been for exactly one purpose: manipulating the value of their stocks. AI does not deliver the business returns and may have negative gross margins. It is inconsistent, ugly, unreliable, expensive and environmentally ruinous, pissing off a large chunk of consumers and underwhelming most of the rest, other than those convinced they’re smart for using it or those who have resigned themselves to giving up at the sight of a confidence game sold by a tech industry that stopped making products primarily focused on solving the problems of consumers or businesses some time ago.  You may say that I’m wrong because Google, Microsoft, Meta and Amazon continue to have healthy net revenues and revenue growth, but as I previously said, these companies are not sharing AI revenues and their existing businesses are still growing due to the massive monopolies they’ve built.  And I want to plead with AI boosters and bullish analysts alike: you are being had. 
Satya Nadella, Sam Altman, Dario Amodei, Jensen Huang, Mark Zuckerberg, Larry Ellison, Safra Catz, Elon Musk, Clay Magouyrk, Mike Sicilia, Michael Truell, Aravind Srinivas — all of them are laughing at you behind your back, because they know that you are never going to ask the obvious questions that would defeat my arguments, and know that you will never, ever push back on them. The enshittification of the shareholder has the downstream effect of an enshittification of the media and Wall Street analysts writ large. These companies own you. They treat you with disdain and condescension, because they know you’ll let them. They know that no sell-side analyst will ever ask them “when will you be profitable?” or “how much are you spending?” or if you do ask, they know you will experience temporary amnesia and forget whatever answer they give, because these are the incentives of an enshittified stock market, where stocks are not extrapolations of shareholder value but chips in a fucking casino where the house always wins and changes the rules every three months. They have changed the meaning of “stock” to mean “what the market will reward,” and when you allow companies to start dictating the terms of what will be rewarded — as neoliberalism, Friedman, Reagan, Nixon, NAFTA, Thatcher, and every other growth-first policy has, orienting everything exclusively around growth — companies eventually cut off any powers that may curtail any reevaluation of the fundamental terms of capitalism, and the incentives within.  Focusing on growth-at-all-costs thinking naturally encourages, enables, and empowers grifters, because all they ever have to promise is “more” — more users, more debt, more venture, more features, more everything .  
The very institutions that are meant to hold companies accountable — analysts and the media — are far more desperate to trade scoops for interviews, to pull punches, to find ways to explain why a company is right rather than understand what the company is doing, and this is something pushed not by writers, but by editors that want to make sure they stay on the right side of the largest companies. And if I’m right, OpenAI’s death will kill off most if not all other AI startups, Anthropic included. Every investor that invested in AI will take massive losses. Every startup that builds on the back of their models will see their company fold, if it hasn’t already due to the massive costs and upcoming price increases. The majority of GPU-based data centers — which really have no other revenue stream — will be left inert, likely powered down, waiting for the day that somebody works it all out, which they won’t, because literally everybody has these things now and I truly believe they’ve tried everything. I don’t “hate on AI” because I am a hater, I hate on it because it fucking sucks and what I’m worried about happening seems to be happening. The tech industry has run out of hypergrowth ideas, and in its desperation hitched itself to the least-profitable hardware and software in history, then spent three straight years lying about what was possible to the media, analysts and shareholders. And they were allowed to lie , because everybody lapped it the fuck up. They didn’t need to worry about convincing anybody. Financiers, editors, analysts and investors were already drafting reasons why they were excited about something they didn’t really understand or believe in, other than the fact it promised more.  This is what happens when you make everything about growth: everybody becomes stupid, ready to be conned, ready to hear what the next big growth thing is because asking nasty questions gets you fucking fired. 
And what’s left is a tech industry that doesn’t build technology, but growth-focused startups.  Look at Silicon Valley. Do you see these fucking people ever building a new kind of computer? Do you believe these men are fit to even imagine a future? These men care about the status quo, they want to always have more software to sell or ways to increase advertising revenue so that the stock number goes up so they receive more money in the form of stock compensation. They are not concerned with actual business value, honest exchange of value, or societal value. They exist only for shareholder value, which is how they are incentivized by their board of directors.  And really, if you’re still defending AI — does it matter to any of you that this software fucking sucks? If you think it’s good you don’t know much about software! It does not respond precisely at any point to a user or programmer’s intent. That’s bad software. I don’t care that you have heard developers really like it, because that doesn’t fix the underlying economic and social poison in AI. I don’t care that it sort of replaced search for you. I don’t care if you “know a team of engineers that use it.” Every single AI app is subsidized, its price is fake, you are being lied to, and none of this is real. When the collapse happens, do not let a single person that waved off the economics have a moment’s peace. Do not let anybody who sat in front of Dario Amodei or Sam Altman and squealed with delight at whatever vacuous talking points they burped out forget that they didn’t push them, they didn’t ask hard questions, they didn’t worry or wonder or feel any concern for investors or the general public. Do not let a single analyst that called AI skeptics “luddites” or equated them to flat Earthers hear the end of it. Do not let anybody who claimed that we “lost control of AI” or “ blackmailed developers ” go without their complimentary “Fell For It Again” badge. 
When it happens, I promise I won’t be too insufferable, but I will be calling for accountability for anybody who boosted AI 2027 , who sat in front of Sam Altman or Dario Amodei and refused to ask real questions, and for anyone who collected anything resembling “detailed notes” about me or any other AI skeptic. If you think I’m talking about you, I probably am, and I have a question: why didn’t you approach the AI companies with as much skepticism as you did the skeptics? I also promise you, if I’m wrong , I’ll happily explain how and why, and I’ll do so at length, too. I will have links and citations, I’ll do podcast episodes. I will make a good faith effort to explain every single failing, because my concern is the truth, and I would love everybody else to follow suit. Do you think any booster will have the same courtesy? Do you think they care about the truth? Or do they just want to get a fish biscuit from Sam Altman or Jensen Huang?  Pathetic.   It’s times like this where it’s necessary to make the point that there is absolutely “enough money” to end hunger or build enough affordable housing or have universal healthcare, but they would be “too expensive” or “not profitable enough,” despite having a blatant and obvious economic benefit in that more people would have happier, better lives and — if you must see the world in purely reptilian terms — enable many more people to have disposable income and the means of entering the economy on even terms. By contrast, investments in AI do not appear to be driving much economic growth at all, other than in the revenue driven to NVIDIA from selling these GPUs, and the construction of data centers themselves. 
Had Microsoft, Google, Meta and Amazon sunk $776 billion into building housing and renting it out, the world would be uneven, we would have horrible new landlords, and it would still be a great deal better than one where nearly a trillion dollars is being wasted propping up a broken, doomed industry, all because the people in charge are fucking idiots obsessed with growth.  The future, I believe, spells chaos, and I am trying to rise to the occasion. My work has transformed from being critical of the tech industry to a larger critique of the global financial system. I’ve had to learn accountancy, the mechanics of venture and private equity, and all sorts of annoying debt-related language, all so that I can sufficiently explain what’s going on. I see several worrying signs I have yet to fully understand. The Discount Window — where banks go when they need quick liquidity as a last resort — has seen a steady increase of loans on its books since September 2024 , suggesting that financial institutions are facing liquidity issues, and the last few times that this has happened, financial crises followed.  There is also a brewing bullshit crisis in Private Equity, which is heavily invested in data centers.  In September, auto parts maker First Brands collapsed in a puff of fraud, with billions of dollars “ vanishing ” after it double-pledged the same collateral to multiple loans, hid off-balance-sheet liabilities, falsified invoices, and even leased some of the parts it sold. This wasn’t a case where smaller lenders were swindled, either — global investment banks UBS and Jefferies both lost hundreds of millions of dollars , along with asset manager BlackRock through associated funds.  Subprime auto lender Tricolor collapsed in similar circumstances , burning JPMorgan , Jefferies, and Zions Bancorporation, who also loaned money to First Brands. 
A similar situation is currently brewing with Solar company PosiGen, which recently filed for bankruptcy after, you guessed it, double-pledging collateral for loans. One of its equity financing backers is Magnetar Capital , who invested in CoreWeave. What appears to be happening is simple: large financial institutions are issuing debt without doing the necessary due diligence or considering the future financial health of the companies involved. Private Equity firms are also heavily-leveraged, saddling acquisitions with debt, and playing silly games where they “volatility launder” — deliberately choosing not to regularly revalue assets held to make returns (or the value of assets) look better to their investors .  I don’t really know what this means right now, but I am worried that these data center loans have been entered into under similarly-questionable circumstances. Every single data center deal is based on the phony logic that AI will somehow become profitable one day, and if there’s even one First Brands situation, the entire thing collapses. I realize this is the longest thing I’ve ever written ( or should I say written so far? ), and I want to end it on a positive note, because hundreds of thousands of people now read and listen to my work, and it’s important to note how much support I’ve received and how awesome it is seeing people pick up my work and run with it. I want to be clear that there is very little that separates you from the people running these companies, or many analysts. I have taught myself everything I know from scratch, and I believe you can too, and I hope I have been (and will continue to be) able to teach you everything I know, which is why everything I write is so long. Well, that and I’m working out what I’m going to say as I write it. The AI bubble is an inflation of capital and egos, of people emboldened and outright horny over the prospect of millions of people’s livelihoods being automated away. 
It is a global event where we’ve realized how the global elite are just as stupid and ignorant as anybody you’d meet on the street — Business Idiots that couldn’t think their way out of a paper bag, empowered by other Business Idiots that desperately need to believe that everything will grow forever. I have had a tremendous amount of help in the last year — from my editor Matt Hughes , Robert and Sophie at Cool Zone Media, Better Offline producer Matt Osowski, Kakashii and JustDario (two pseudonymous analysts that know more about LLMs and finance than most people I read), Kasey Kagawa , Ed Ongweso Jr ., Rob Smith , Bryce Elder and Tabby Kinder of the Financial Times, all of whom have been generous with their time, energy and support. A special shoutout to Caleb Wilson ( Kill The Computer ) and Arif Hasan ( Wide Left ), my cohosts on our NFL podcast 60 Minute Drill .  And I’ve heard from thousands of you about how frustrated you are, and how none of this makes sense, and how crazy you feel seeing AI get shoved into every product, how insane it makes you feel when somebody tells you that LLMs are amazing when their actual outputs fucking suck. We are all being lied to, we all feel gaslit and manipulated and punished for not pledging ourselves to Sam Altman’s graveyard smash, but I believe we are right . In the last year, my work has gone from being relatively popular to being cited by multiple major international news organizations, hedge funds, and internal investor analyses. I was profiled by the Financial Times , went on the BBC twice , and watched as my Subreddit, r/BetterOffline , grew to around 80,000 visitors a week and became one of the 20 largest podcast Subreddits, which is a bigger deal than it sounds. I believe there are millions of people that are tired of the state of the tech industry, and disgusted at what these people have done to the computer. 
I believe that they outnumber the boosters, the analysts and the hype-fiends that have propped up this era. I believe that a better world is possible by creating a meaningful consensus around making the powerful prove themselves to us rather than proving it for them. I am honoured that you read me, and even more so if you read this far. I’ll see you in 2026. Meta’s business is both supporting and profiting from organized crime, and at 10% of its revenue, it’s also kind of dependent on it. Meta is using deliberate and insidious accounting tricks to act like a data center that it is paying to build and will be the sole tenant of is somehow an “off balance sheet” operation. In Stage 1, things are good for users: the platform is free, things are easy-to-use, and thus it’s really simple for you and your friends to adopt and become dependent on it. In Stage 2, things become bad for consumers, but good for business customers: the platform begins forcing users to do “profitable” things — like show them more adverts by making search results worse — all while making it difficult to migrate to another one, either through locking in your data or the tacit knowledge that moving platforms is hard, and your friends are usually in one place. Businesses sink tons of money into the platform, knowing that users are unlikely to leave, and make good money buying ads against a populace that increasingly stays because it has to as there are no other options. In Stage 3, things become bad for consumers and businesses, but good for shareholders: the platforms begin to deteriorate to the point that usability is pushed to the brink, and businesses — who are now dependent on the platform because monopolies have pushed out every alternative platform to advertise or reach consumers — begin to see their product crumble, all in favour of shareholder capital, which only cares about stock value, net income and buybacks. 
According to its latest quarterly filings, Microsoft spent $34.9 billion on capital expenditures , Amazon $34.2 billion , Meta $19.37 billion , and Google $24 billion . The common mantra is that these companies are “spending all this money on GPUs,” but that doesn’t match up with NVIDIA’s revenues. NVIDIA’s last quarterly earnings said that four direct customers made up more than 10% of revenue — 22% ($12.54bn), 15% ($8.55bn), 13% ($7.41bn) and 11% ($6.27bn) out of $57 billion.  While this sort of lines up with capex spend, it doesn’t if you shift back a quarter, when Microsoft spent $21.4 billion , Meta $17.01 billion , Amazon $31.4 billion and Google $22.4 billion , with the vast majority on “technical infrastructure.”  In the same quarter, NVIDIA had only two customers that accounted for more than 10% — one 23% ($10.7bn) and one 16% ($7.47bn) out of $46.7 billion. Another quarter back, and Microsoft spent $22.6 billion , Meta $13.69 billion , Google $17.2 billion and Amazon $22.4 billion . In the same quarter, NVIDIA had two customers accounting for more than 10% of revenue — 16% ($7.49bn) and 14% ($6.168bn). Where, exactly, is all this money going? In Microsoft’s latest earnings (Q1FY26), it said that $19.39 billion went to “additions to property and equipment,” with “roughly half of [its total capex] spend on short-lived assets, primarily GPUs and CPUs.” A quarter (Q4FY2025) back, additions to property and equipment were $16.74 billion, with “roughly half…[spent] on long-lived assets that will support monetization over the next 15 years and beyond.”  Let’s assume that Microsoft is NVIDIA’s biggest customer every single quarter — customer A, spending $12.5 billion (out of $34.9 billion), $10.7 billion (out of $21.4 billion) and $7.049 billion (out of $22.6 billion) a quarter. Assuming that Microsoft is only buying NVIDIA’s Blackwell GPUs (forgive the model numbers, but it’s based on my own modeling. 
Let’s say 40% B200s, 30% GB200s, 10% B300s and 20% GB300s), that works out to about 457MW of IT load for Q1FY26, 391MW for Q4FY25 and (adjusting to include more H200s, as the B300/GB300s were not shipping yet) 263MW for Q3FY25.  Has Microsoft built 1.11GW of data centers in that time? Apparently! It claims it added 2GW in the last year , but Satya Nadella claimed in November that Microsoft had chips in inventory it couldn’t install due to a lack of power.  In any case, where did the remaining $22.4 billion, $11.9 billion and $15.5 billion in capex flow? We know there are finance leases. What for? More GPUs? What is the actual output of these expenditures? OpenAI appears to have net 360 payment terms from CoreWeave — meaning it can pay literally a year from invoice .  Per CoreWeave’s Q3 earnings (page 19), “...on occasion, the Company has granted payment terms up to net 360 days.” Per CoreWeave’s loan agreement (page 12), under “contract realization ratio,” “the sum of Projected Contracted Cash Flows applicable for the corresponding three-month period as determined on a net 360 basis.” CoreWeave is required to maintain something called a “contract realization ratio” of 0.85x — meaning that CoreWeave has to make at least 85 cents of every expected dollar or it is in default on its loan. This is important to note because it means that if, say, OpenAI decides not to pay up in a year, CoreWeave will be in real trouble. Blue Owl was present in every single Stargate deal, other than the $38 billion package being raised by Vantage. It also was involved in a $1.3 billion Australian data center debt package by virtue of owning Stack Infrastructure . Remember that name.  
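The contract realization ratio covenant described above is simple to express in code. This is an illustrative sketch of the mechanics as described — the names and structure are mine, not CoreWeave’s actual loan agreement:

```python
# Illustrative model of a "contract realization ratio" covenant: realized
# cash versus projected contracted cash over a period, with a 0.85x floor.

MIN_RATIO = 0.85  # covenant floor described in the loan agreement

def contract_realization_ratio(realized_cash: float, projected_cash: float) -> float:
    """Cents actually collected per projected contracted dollar."""
    return realized_cash / projected_cash

def covenant_breached(realized_cash: float, projected_cash: float) -> bool:
    return contract_realization_ratio(realized_cash, projected_cash) < MIN_RATIO

# A tenant on net-360 terms paying only 80 cents per projected dollar
# trips the covenant; 90 cents does not.
print(covenant_breached(realized_cash=0.80, projected_cash=1.00))  # True
print(covenant_breached(realized_cash=0.90, projected_cash=1.00))  # False
```

Which is why the net-360 terms matter: if OpenAI simply waits a year to pay, the realized side of that ratio craters.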
MUFG (Mitsubishi UFJ Financial Group) was present in 17 out of 26 of the deals, including three separate CoreWeave financings, Stargate New Mexico ($18 billion), the $38 billion Stargate TX/WI deal for Oracle , SoftBank’s bridge loan , and a $5 billion “green loan” package for Vantage Data Centers (who are the ones building the Stargate TX/WI data centers). JP Morgan Chase was involved in eight deals, but they were some of the largest — CoreWeave’s October 2024 financing, DDTL 3.0 and November financing , the funding behind Stargate Abilene , the $38 billion Oracle deal, and Blue Owl’s acquisition of IPI Partners’ Data Centers in 2024 . They also were part of SoftBank’s bridge loan. Deutsche Bank was involved in SoftBank’s bridge loan, but also several smaller deals: a $212 million data center in Seoul , CoreWeave’s 2024 debt, CoreWeave’s November financing , and a data center in Latin America. It also was part of a $610 million data center project in Virginia , as well as a €1 billion data center project in Germany (invested in with NVIDIA). BNP Paribas? Seven deals: CoreWeave’s DDTL 3.0, Stargate New Mexico, Stargate WI/TX, the acquisition of IPI Partners by Blue Owl, the $212m deal in Seoul, and a data center in Chile . Morgan Stanley? Eight, including CoreWeave’s October 2024, DDTL 3 and November loans, Stargate New Mexico, Stargate WI/TX, EQT’s EdgeConnex financing deal , and, of course, SoftBank’s bridge loan. SMBC (Sumitomo Mitsui Banking Corporation) ? Seven deals, all notable — CoreWeave’s DDTL 3.0 and November financing, Stargate New Mexico, Stargate TX/WI, a data center in Rowan MD (also involving MUFG, TD Securities and HSBC), as well as the data centers in Chile and Latin America. Oh, and SoftBank’s bridge loan. 
The enshittified stock market, pumped not by actual cashflow or productivity but by signals read by analysts and investors trained over decades to push consumer investors to invest in magnificent 7 stocks that represent as much as 40% of the value of the S&P 500 , their values pumped by analysts and the media misleading investors into believing that their revenue growth is anything to do with AI.

Venture capital’s liquidity crisis, one peaking at a time when AI startups have become more capital-intensive than at any other point in history.

Ballooning, centralized data center debt, funded based on customer contracts or built for demand that doesn’t exist, funding massive data centers of GPUs that immediately become commoditized as a result of the hysteria.

The market for AI compute is very, very small. If you assume that Anthropic spent the same on Google Cloud as it did on AWS ($2.66 billion, for a total of $5.32 billion), and add CoreWeave’s revenue ($5 billion, most of which came from either OpenAI (via Microsoft) or NVIDIA), there doesn’t appear to be an AI compute market, outside of serving these two companies.

The market for AI compute is not actually growing. In the last two years, no new major consumers of AI compute have emerged. Every company that has signed a large compute deal has either been OpenAI, Anthropic or a hyperscaler. Even if Cursor were to dump its entire $2.3 billion in funding into AI compute, that would still not be enough.
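The tally behind that claim is short, with the mirrored Google Cloud spend flagged as an assumption:

```python
# Summing the identifiable AI compute market from the figures above.
# Anthropic's Google Cloud spend is assumed equal to its AWS spend.

anthropic_aws_bn = 2.66
anthropic_gcp_bn = 2.66      # assumption, not a disclosed figure
coreweave_revenue_bn = 5.0   # mostly OpenAI (via Microsoft) or NVIDIA

market_bn = anthropic_aws_bn + anthropic_gcp_bn + coreweave_revenue_bn
print(f"Identifiable AI compute market: ~${market_bn:.2f}bn")
```

Roughly $10 billion, nearly all of it traceable to two unprofitable companies.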


Premium - How The AI Bubble Bursts In 2026

Hello and welcome to the final premium edition of Where's Your Ed At for the year. Since kicking off premium, we've had some incredible bangers that I recommend you revisit (or subscribe and read in the meantime!): I pride myself on providing a ton of value in these pieces, and I really hope if you're on the fence about subscribing you'll give me a look. Last week has been a remarkably grim one for the AI industry, replete with terrible news and "positive stories" that still leave a vile taste in investors' mouths. Let's recount: There are a few common threads between all of these stories: And the other key thread is the year 2026. Next year is meant to be the year that everything changes. It was meant to be the year that OpenAI had a gigawatt of data centers built with Broadcom and AMD , and when Stargate Abilene's 8 buildings were fully built and energized . 2026 is meant to be the year that OpenAI opened Stargate UAE , too. Here in reality , absolutely none of this is happening, and I believe that 2026 is the year when everything begins to collapse. In today's piece, I'm going to line up the sharp objects sitting right next to an increasingly-wobbling AI bubble, and explain why everything hinges on a looming cash crunch for OpenAI, AI data centers, those funding AI data centers, and venture capital itself. The Hater's Guide To NVIDIA , a comprehensive guide to the largest and weirdest company on the stock market, which was several weeks ahead of most on the "GPUs in warehouses" story. Big Tech Needs $2 Trillion In AI Revenue By 2030 or They Wasted Their Capex , a mathematical breakdown of how big tech has to make so much money before 2030 or it will have wasted every penny building AI data centers. Oracle and OpenAI Are Full Of Crap , where I broke down how Oracle doesn't have the capacity and OpenAI doesn't have the money to pay for their $300 billion compute deal, predicting the current state of affairs with Oracle's data centers months in advance. 
The Ways The AI Bubble Will Burst , a detailed piece about how the collapse of AI data center funding will eventually lead to the collapse of AI startup funding, creating a " chain of pain " that eventually leads to nobody buying GPUs and the end of this era. Disney is investing $1 billion in OpenAI in a deal where OpenAI will " bring beloved characters from Disney's brands to Sora ," including a three-year licensing deal. One might think that a licensing deal is weird, given that Disney is investing, and one would be right! Apparently OpenAI is "paying" to license Disney's characters entirely in stock warrants , and Disney has the opportunity to buy an undisclosed amount of future stock. Amazon is in discussions to invest $10 billion in OpenAI at a valuation of over $500 billion, per The Information , and plans to use Amazon's Trainium AI server chips (its in-house competitor to NVIDIA's GPUs that some startups, per Business Insider , claim have "performance challenges" and "underperformed" NVIDIA's years-old H100 chips), apparently. Any excitement you might have over this deal should be tempered by the fact that OpenAI and Amazon Web Services signed a $38 billion deal back in November , meaning that this is likely a situation where Amazon would hand money to OpenAI, which would then hand the money right back to Amazon, and that's assuming any real money actually changes hands. Though this is just one source, I've heard tell that Amazon, at times, sells Trainium at a loss to get customers. Then again, I think this might be the case with all AI compute. Bloomberg reported that Oracle has pushed back the completion date of multiple data centers being built for OpenAI, "largely due to labor and material shortages." Oracle responded , saying that "there have been no delays to any sites required to meet our contractual commitments, and all milestones remain on track." It isn't clear what data centers these are, but a clue might be... 
...that Blue Owl has pulled out of funding a $10 billion deal for a data center for Oracle/OpenAI in Michigan, per The Financial Times . This is a very, very, very bad sign. Blue Owl is arguably the loosest, friendliest lender in the data center space, and while Oracle claims another partner is allegedly talking to Blackstone, one has to wonder whether Blackstone is lining up to fund "the deal that Blue Owl couldn't handle." Blue Owl is the pre-eminent lender in data center financing. It backed Meta's $30 billion Hyperion data center project with $3 billion of its own capital , it sunk $3 billion into OpenAI's Stargate New Mexico deal , and an indeterminate amount in Stargate Abilene, likely   somewhere between $2.5 billion and $5 billion , on top of a $7.1 billion loan provided to Blue Owl and developer Crusoe to finish the project , on top of another $5 billion joint venture with Chrisa and Powerhouse to build a data center for rickety, nasty AI compute company CoreWeave . So why did this deal fall apart? Well, according to the Financial Times, "lenders pushed for stricter leasing and debt terms amid shifting market sentiment around enormous AI spending including Oracle’s own commitments and rising debt levels." If only somebody could have warned them , somehow . Though I'll get into more detail after the premium break, both Oracle and Broadcom reported earnings, and both saw their stocks get dumped like a deadbeat boyfriend with a bad attitude and credit card debt. In Oracle's case it was the same old story — lots of debt, decaying margins and negative cash flow, along with a bunch of commitments. Did I mention that Oracle has $248 billion in upcoming data center lease commitments ? More than double those made by Microsoft? In Broadcom's case, things were a little weirder. 
While it beat on estimates, it partly did so, per The Coastal Journal , by playing funny non-GAAP (generally accepted accounting practices) games with things like how it handles stock compensation and the amortizations to raise its "adjusted" earnings per share, boosting non-GAAP revenues by $4.4 billion. The other problem was related to OpenAI. Back in October, Broadcom and OpenAI announced a "strategic collaboration" for "10 gigawatts of customer AI accelerators ," with "Broadcom to deploy racks of AI accelerator and network systems targeted to start in the second half of 2026, to complete by 2029." I'll get into the nitty gritty later, but CEO Hock Tan said that Broadcom " did not expect much [revenue]" in 2026 from the deal. CoreWeave's Denton Data Center has become a nightmare, with, per the Wall Street Journal , heavy rains and winds causing "a roughly 60-day delay" that prevented contractors from pouring concrete for the data center, pushing the completion date back by "several months" on top of "additional delays caused by revisions to design" for a data center specifically built to lease to OpenAI. OpenAI doesn't have cash. The Disney licensing deal? Paid for in stock. The AWS contract? Amazon has to give OpenAI $10 billion to pay for it, because OpenAI doesn't have the cash. Broadcom's deal with OpenAI? "not much" revenue in 2026, probably because OpenAI doesn't have the cash. The Money For Data Centers Is Running Out. Blue Owl is the loosest lender in the universe, and if it’s having trouble raising money, everybody will very soon. Investors are aggressively dumping Oracle because it keeps trying to build more data centers for OpenAI, a company that does not have the money to pay for its compute. AI Is Wearing Out Its Welcome, and the AI Bubble Narrative Is Impossible To Ignore It used to be (back in September, at least) that you could announce a big, stupid deal with OpenAI and see a 40% stock bump . 
Now the markets are suddenly thinking "huh, how is it gonna pay that?" Oracle's stock also got dumped because it increased capital expenditures in its latest quarter to $12 billion, on analyst expectations of $8.4 billion .


Premium: Mythbusters - AI Edition

I keep trying to think of a cool or interesting introduction to this newsletter, and keep coming back to how fucking weird everything is getting.

Two days ago, cloud stalwart Oracle crapped its pants in public, missing analyst revenue estimates and revealing it spent (to quote Matt Zeitlin of Heatmap News) more than $4 billion more in that quarter than analysts expected on capital expenditures, for a total of $12 billion. The "good" news? Oracle has remaining performance obligations (RPOs) of $523 billion. For those that aren't fluent in financese, this is future contracted revenue that hasn't been paid for, or even delivered.

So we've got — per Kakashii on Twitter — $68 billion of new compute deals signed in the quarter, with $20 billion from Meta (announced in October), and a few other mystery clients that could include the ByteDance/TikTok deal.

But wait. Hold the fort — what was that? NVIDIA? NVIDIA? The accelerated computing company? The largest company on the stock market? That NVIDIA? Why is NVIDIA buying cloud compute? The Information reported back in September that NVIDIA was "stepping back from its nascent cloud computing business," intending to use it "for its own researchers." Well, I sure hope those researchers need compute! NVIDIA has, according to its November 10-Q, agreed to $26 billion in cloud compute deals, spending $6 billion a year in each of Fiscal Years 2027 and 2028, $5 billion in FY2029, $4 billion in FY2030, and $4 billion in FY2031.

AI boosters damn near ripped their jorts jumping for joy at the sight of this burst of new performance obligations, yet it seems that the reason NVIDIA CEO Jensen Huang said back in October that AI compute demand had gone up "substantially" in the last six months was that NVIDIA had stepped in to increase it.
It signed a deal to buy $6.3 billion of unused capacity from CoreWeave, another to buy $1.5 billion from Lambda, and now apparently needs to buy even more compute from Oracle, despite Huang saying in November that cloud GPUs are "sold out," which traditionally means you "can't rent them."

We are in the dynasty of bullshit, a deceptive epoch where analysts and journalists who are ostensibly burdened with telling the truth feel the need to continue pushing the Gospel According To Jensen. When all of this collapses there must be a reckoning with how little effort was made to truly investigate the things that executives are saying on the television, in press releases, in earnings filings and even on social media, all because the market consensus demanded that The Number Must Continue Going Up.

The AI era is one of mythology, where billions in GPUs are bought to create supply for imaginary demand, where software is sold based on things it cannot reliably do, where companies that burn billions of dollars are rewarded with glitzy headlines and not an ounce of cynicism, and where those that have pushed back against it have been treated with more skepticism and ire than those who would benefit the most from the propagation of propaganda and outright lies.

So today I'm giving you Mythbusters — AI Edition. This is the spiritual successor to How To Argue With An AI Booster, where I address the technical, financial and philosophical myths that underpin the endless sales of GPUs and the ever-increasing valuation of OpenAI.

This is going to be fun, because I truly believe that both the financial and tech press take this all a little too seriously, in the sense that everything is so dull. With a handful of exceptions (The Register being the best example), most publications treat financial reporting as something that must be inherently separate from any kind of analysis or criticism.
That's why, if a publication calls bullshit on something insane, that call is almost always segmented away in its own little piece. If you asked me why I thought this is the case, I'd say it's probably because (excluding those cases of genuine malfeasance and fraud, like Enron and WorldCom and Nortel) we haven't seen anything as egregiously offensive or dishonest as what's emerged from the AI bubble. And so, reporters are accustomed to a level of civility that, frankly, is no longer warranted.

I also think the total lack of levity or self-awareness leads to less-effective analysis. For example, lots of people are freaking out about Disney investing $1 billion for an equity stake in OpenAI, all while licensing its characters to be used in Sora, and I really think you can simmer the deal down to two points:

Oh, and while I'm here, let's talk about TIME naming the "Architects of AI" its person (people) of the year. Who fuckin' cares! Marc Benioff, one of the biggest AI boosters in the world, owns TIME, and it has already run no less than three other pieces of booster propaganda, covering everything from "researchers finding that AIs can scheme, deceive or blackmail," to the supposed existence of an "AI arms race," to "coding tools like Cursor and Claude Code becoming so powerful that engineers across top AI companies are using them for virtually every aspect of their work."

Are any of these points true? No! But that doesn't stop them being printed! Number must go up! AI bubble must inflate! No fact check! No investigation! Just print! Print AI Now! Make AI Go Big Now! Jensen Sell GPU! Ahhhhhhhhhhh!

Okay, alright, let's go into it. Let's bust some myths. That sounded better in my head.

Wow, $1 billion? That's going to pay for a whole month of OpenAI's inference!
Regardless of how many guardrails OpenAI puts on Sora (currently number 21 among free apps on the App Store), there is nothing that will stop degenerates from making a video of Goofy flying a plane into a building, or Donald Duck recreating Frank's entrance from Blue Velvet, or Darth Vader saying every slur imaginable, which already happened when Disney launched a generative AI Vader in Fortnite.


NVIDIA Isn't Enron - So What Is It?

At the end of November, NVIDIA put out an internal memo (that was leaked to Barron's reporter Tae Kim, who is a huge NVIDIA fan and knows the company very well, so take from that what you will) that sought to get ahead of a few things that had been bubbling up in the news, a lot of which I covered in my Hater's Guide To NVIDIA (which includes a generous free intro).

Long story short, people have a few concerns about NVIDIA, and guess what, you shouldn't have any concerns, because NVIDIA's very secret, not-to-be-leaked-immediately document spent thousands of words very specifically explaining how NVIDIA was fine and, most importantly, nothing like Enron. Anyway, all of this is fine and normal. Companies do this all the time, especially successful ones, and there is nothing to be worried about here, because after reading all seven pages of the document, we can all agree that NVIDIA is nothing like Enron.

No, really! NVIDIA is nothing like Enron, and it's kind of weird that you're saying that it is! Why would you say anything about Enron? NVIDIA didn't say anything about Enron. Okay, well now NVIDIA said something about Enron, but that's because fools and vagabonds kept suggesting that NVIDIA was like Enron, and very normally, NVIDIA has decided it was time to set the record straight.

And I agree! I truly agree. NVIDIA is nothing like Enron. Putting aside how I might feel about the ethics or underlying economics of generative AI, NVIDIA is an incredibly successful business with incredible profits, one that holds an effective monopoly through CUDA (explained here), the underlying software layer for running software (specifically generative AI) on its GPUs, and not much else with any kind of revenue potential.

And yes, while I believe that one day this will all be seen as one of the most egregious wastes of capital of all time, for the time being, Jensen Huang may be one of the most successful salespeople in business history.
Nevertheless, people have somewhat run away with the idea that NVIDIA is Enron, in part because of the weird, circular deals it's built with Neoclouds — dedicated AI-focused cloud companies like CoreWeave, Lambda and Nebius — who run data centers full of GPUs sold by NVIDIA, which they then use as collateral for loans to buy more GPUs from NVIDIA. Yet as dodgy and weird and unsustainable as this is, it isn't illegal, and it certainly isn't Enron, because, as NVIDIA has been trying to tell you, it is nothing like Enron!

Now, you may be a little confused — I get it! — that NVIDIA is bringing up Enron at all. Nobody seriously thought that NVIDIA was like Enron before (though JustDario, who has been questioning its accounting practices for years, is a little suspicious), because Enron was one of the largest criminal enterprises in history, and NVIDIA is, at worst, I believe, a big, dodgy entity that is doing whatever it can to survive.

Wait, what's that? You still think NVIDIA is Enron? What's it going to take to convince you? I just told you NVIDIA isn't Enron! NVIDIA itself has shown it's not Enron, and I'm not sure why you keep bringing up Enron all the time! Stop being an asshole. NVIDIA is not Enron!

Look, NVIDIA's own memo said that "NVIDIA does not resemble historical accounting frauds because NVIDIA's underlying business is economically sound, [its] reporting is complete and transparent, and [it] cares about [its] reputation for integrity."

Now, I know what you're thinking. Why is the largest company on the stock market having to reassure us about its underlying business economics and reporting? One might immediately begin to think — Streisand Effect style — that there might be something up with NVIDIA's underlying business. But nevertheless, NVIDIA really is nothing like Enron.

But you know what? I'm good. I'm fine. NVIDIA, grab your coat, we're going out, let's forget any of this ever happened. Wait, what was that?
First, unlike Enron, NVIDIA does not use Special Purpose Entities to hide debt and inflate revenue. NVIDIA has one guarantee for which the maximum exposure is disclosed in Note 9 ($860M) and mitigated by $470M escrow. The fair value of the guarantee is accrued and disclosed as having an insignificant value. NVIDIA neither controls nor provides most of the financing for the companies in which NVIDIA invests.

Oh, okay! I wasn't even thinking about that at all, I was literally just saying how you were nothing like Enron, we're good. Let's go home-

Second, the article claims that NVIDIA resembles WorldCom but provides no support for the analogy. WorldCom overstated earnings by capitalizing operating expenses as capital expenditures. We are not aware of any claims that NVIDIA has improperly capitalized operating expenses. Several commentators allege that customers have overstated earnings by extending GPU depreciation schedules beyond economic useful life. Rebutting this claim, some companies have increased useful life estimates to reflect the fact that GPUs remain useful and profitable for longer than originally anticipated; in many cases, for six years or more. We provide additional context on the depreciation topic below.

I… okay, NVIDIA is also not like WorldCom either. I wasn't even thinking about WorldCom. I haven't thought of them in a while. Per Adam Berger of Ebsco:

…NVIDIA, are you doing something WorldCommy? Why are you bringing up WorldCom?

To be clear, WorldCom was doing capital-F fraud, and its CEO Bernie Ebbers went to prison after an internal team of auditors led by WorldCom VP of internal auditing Cynthia Cooper reported $3.8 billion in "misallocated expenses and phony accounting entries."

So, yeah, NVIDIA, you were really specific about saying you didn't capitalize operating expenses as capital expenditures. You're… not doing that, I guess? That's great. Great stuff. I had literally never thought you had done that before.
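For what it's worth, the depreciation mechanics here are simple arithmetic, and they're the whole reason useful-life estimates matter. A toy straight-line sketch (all figures hypothetical, not any real company's books):

```python
# Toy illustration (all figures hypothetical): straight-line depreciation
# of a GPU fleet, and how stretching the useful-life estimate changes the
# annual expense hitting the income statement.

def annual_depreciation(cost: float, useful_life_years: int) -> float:
    """Straight-line depreciation: spread the cost evenly over the asset's life."""
    return cost / useful_life_years

fleet_cost = 12_000_000_000  # hypothetical $12B of GPUs

four_year = annual_depreciation(fleet_cost, 4)  # $3.0B/year expense
six_year = annual_depreciation(fleet_cost, 6)   # $2.0B/year expense

# Same hardware, same cash out the door -- but moving from a four-year to
# a six-year schedule cuts the annual expense by a third, which flows
# straight into higher reported pre-tax earnings.
print(f"4-year schedule: ${four_year / 1e9:.1f}B/year")
print(f"6-year schedule: ${six_year / 1e9:.1f}B/year")
print(f"Annual earnings boost: ${(four_year - six_year) / 1e9:.1f}B")
```

Which is why "GPUs remain useful and profitable for six years or more" is doing a lot of load-bearing work in that paragraph: whether it's true decides whether billions of dollars of expense exist this year or get deferred.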
I genuinely agree that NVIDIA is nothing like WorldCom. Anyway, also glad to hear about the depreciation stuff, looking forward to reading-

Third, unlike Lucent, NVIDIA does not rely on vendor financing arrangements to grow revenue. In typical vendor financing arrangements, customers pay for products over years. NVIDIA's DSO was 53 in Q3. NVIDIA discloses our standard payment terms, with payment generally due shortly after delivery of products. We do not disclose any vendor financing arrangements. Our customers are subject to strict credit evaluation to ensure collectability. NVIDIA would disclose any receivable longer than one year in long-term other assets. The $632M "Other" balance as of Q3 does not include extended receivables; even if it did, the amount would be immaterial to revenue.

Erm… Alright man, if anyone asks about whether you're like famed dot-com crashout Lucent Technologies, I'll be sure to correct them. After all, Lucent's situation was really different — well… sort of.

Lucent was a giant telecommunications equipment company, one that was, for a time, extremely successful (really, really successful, in fact), turned around by the now-infamous Carly Fiorina. From a 2010 profile in CNN:

NVIDIA, this sounds great — why wouldn't you want to be compared to Lucen- Oh.

So, to put it simply, Lucent was classifying debt as an asset (we're getting into technicalities here, but it was effectively counting money from loans as revenue, which is dodgy and bad and accountants hate it), and did something called "vendor financing," which means you lend somebody money to buy something from you. It turns out Lucent did a lot of this.

Okay, NVIDIA, I hate to say this, but I kind of get why somebody might say you're doing Lucent stuff. After all, rumour has it that your deal with OpenAI — a company that burns billions of dollars a year — will involve it leasing your GPUs, which sure sounds like you're doing vendor financing...
-we do not disclose any vendor financing arrangements-

Fine! Fine. Anyway, Lucent really fucked up big time, indulging in the dark art of circular vendor financing. In 1998 it signed its largest deal — a $2 billion "equipment and finance agreement" — with telecommunications company Winstar, which promised to bring in "$100 million in new business over the next five years" and build a giant wireless broadband network, along with expanding Winstar's optical networking. To quote The Wall Street Journal:

In December 1999, WIRED would say that Winstar's "small white dish antennas…[heralded] a new era and new mind-set in telecommunications," and included this awesome quote about Lucent from CEO and founder Will Rouhana:

Fuck yeah! But that's not the only great part of this piece:

Annualized revenues, very nice. We love annualized revenues, don't we folks? A company making about $25 million a month, a year after taking on $2 billion in financing from Lucent. Weirdly, Winstar's Wikipedia page says that revenues were $445.6 million for the year ending 1999 — or around $37.1 million a month.

Winstar loved raising money — two years later in November 2000, it would raise $1.02 billion, for example — and it raised a remarkable $5.6 billion between February 1999 and July 2001, according to the Wall Street Journal. $900 million of that came in December 1999 from an investment from Microsoft and "several investment firms," with analyst Greg Miller of Jefferies & Co saying:

Another fun thing happened in November 2000 too.
Lucent would admit it had overstated its fourth-quarter profits by improperly recording $125 million in sales, reducing that quarter's results from "profitable" to "break-even."

Things would eventually collapse when Winstar couldn't pay its debts, filing for Chapter 11 bankruptcy protection on April 18, 2001 after failing to pay $75 million in interest payments to Lucent, which had cut access to the remaining $400 million of its $1 billion loan to Winstar as a result. Winstar would file a $10 billion lawsuit in bankruptcy court in Delaware the very same day, claiming that Lucent breached its contract and forced Winstar into bankruptcy by, well, not offering to give it more money that it couldn't pay off.

Elsewhere, things had begun to unravel for Lucent. A January 2001 story from the New York Times told a strange story of Lucent, a company that had made over $33 billion in revenue in its previous fiscal year, asking to defer the final tranche of payment ($20 million) for an acquisition due to "accounting and financial reporting considerations." Why? Because Lucent needed to keep that money on the books to boost its earnings, as its stock was in the toilet and it was about to announce it was laying off 10,000 people and a quarterly loss of $1.02 billion.

Over the course of the next few years, Lucent would sell off various entities, and by the end of September 2005 it would have 30,500 staff and a stock price of $2.99 — down from 157,000 employees and a high of $75 a share at the end of 1999. According to VC Tomasz Tunguz, Lucent had $8.1 billion of vendor financing deals at its height.

Lucent was still a real company selling real things, but it had massively overextended itself in an attempt to meet demand that didn't really exist, and when Lucent realized that, it decided to create demand itself to please the markets.
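The vendor-financing mechanics are worth making concrete. Here's a toy sketch (all figures hypothetical, not Lucent's or anyone else's actual accounting) of why lending your own customers the money to buy your products flatters your books right up until they default:

```python
# Toy sketch (all figures hypothetical) of circular vendor financing,
# Lucent/Winstar style: the vendor lends a customer money, the customer
# spends it on the vendor's own gear, and the vendor books the sale as
# revenue while carrying the loan as an asset.

from dataclasses import dataclass


@dataclass
class VendorBooks:
    cash: float
    revenue: float = 0.0
    loans_receivable: float = 0.0  # money owed back by customers

    def lend_and_sell(self, amount: float) -> None:
        """Lend `amount` to a customer who immediately buys our equipment."""
        self.cash -= amount              # cash goes out as a loan...
        self.loans_receivable += amount  # ...recorded as an asset...
        self.cash += amount              # ...and comes straight back as a "sale"
        self.revenue += amount           # ...booked as revenue

    def customer_defaults(self) -> None:
        """The customer can't pay: the loan 'asset' is written off."""
        self.loans_receivable = 0.0


vendor = VendorBooks(cash=5_000_000_000)
vendor.lend_and_sell(2_000_000_000)
# On paper: $2B of revenue, cash unchanged, plus a $2B loan "asset."
print(vendor.cash, vendor.revenue, vendor.loans_receivable)

vendor.customer_defaults()
# After default: the revenue stays booked, but the asset evaporates.
# The vendor effectively paid for its own sales.
print(vendor.cash, vendor.revenue, vendor.loans_receivable)
```

The revenue looks real from day one; the risk only shows up when the loan gets written off, which is roughly what happened to Lucent when Winstar collapsed.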
To quote MIT Tech Review (and author Lisa Endlich), it believed that setting and meeting the expectations of Wall Street "subsumed all other goals," and that Lucent "had little choice but to ride the wave."

To be clear, NVIDIA is quite different from Lucent. It has plenty of money, and the circular deals it does with CoreWeave and Lambda don't involve the same levels of risk. NVIDIA is not (to my knowledge) backstopping CoreWeave's business or providing it with loans, though NVIDIA has agreed to buy $6.3 billion of compute as the "buyer of last resort" for any unsold capacity. NVIDIA can actually afford this, and it isn't illegal, though it is obviously propping up a company with flagging demand. NVIDIA also doesn't appear to be taking on masses of debt to fund its empire, with over $56 billion in cash on hand and a mere $8.4 billion in long-term debt.

Okay, phew. We got through this, man. NVIDIA is nothing like Lucent either. Okay, maybe it's got some similarities — but it's different! No worries at all. I know I'm relaxed. You still seem nervous, NVIDIA. I promise you, if anyone asks me if you're like Lucent I'll tell them you're not. I'll be sure to tell them you're nothing like that. Are you okay, dude? When did you last sleep?

Inventory growth indicates waning demand

Claim: Growing inventory in Q3 (+32% QoQ) suggests that demand is weak and chips are accumulating unsold, or customers are accepting delivery without payment capability, causing inventory to convert to receivables rather than cash.

Woah, woah, woah, slow down. Who has been saying this? Oh, everybody? Did Michael Burry scare you? Did you watch The Big Short and say "ah, fuck, Christian Bale is going to get me! I can't believe he played drums to Pantera! Ahh!"

Anyway, now you've woken up everybody else in the house and they're all wondering why you're talking about receivables. Shouldn't that be fine?
NVIDIA is a big business, and it's totally reasonable to believe that a company planning to sell $63 billion of GPUs in the next quarter would have ballooning receivables ($33 billion, up from $27 billion last quarter) and growing inventory ($19.78 billion, up from $14.96 billion last quarter). It's a big, asset-heavy business, which means NVIDIA's clients likely get decent payment terms, giving them time to raise debt or move cash around to get NVIDIA paid. Everybody calm down! Like my buddy NVIDIA, who is nothing like Enron by the way, just said:

Response: First, growing inventory does not necessarily indicate weak demand. In addition to finished goods, inventory includes significant raw materials and work-in-progress. Companies with sophisticated supply chains typically build inventory in advance of new product launches to avoid stockouts. NVIDIA's current supply levels are consistent with historical trends and anticipate strong future growth. Second, growing inventory does not indicate customers are accepting delivery without payment capability. NVIDIA recognizes revenue upon shipping a product and deeming collectability probable. The shipment reduces inventory, which is not related to customer payments. Our customers are subject to strict credit evaluation to ensure collectability. Payment is due shortly after product delivery; some customers prepay. NVIDIA's DSO actually decreased sequentially from 54 days to 53 days.

Haha, nice dude, you're totally right, it's pretty common for companies, especially large ones, to deliver something before they receive the cash. It happens, I'm being sincere. Sounds like companies are paying! Great!

But, you know, just, can you be a little more specific? Like about the whole "shipping things before they're paid" thing.

NVIDIA recognizes revenue upon shipping a product and deeming collectability probable-

Alright, yeah, thought I heard you right the first time. What does "deeming collectability probable" mean?
You could've just said "we get paid 95% of the time within 2 months" or whatever. Unless it's not 95%? Or 90%? How often is it? Most companies don't break this down, by the way, but then again, most companies are not NVIDIA, the largest company on the stock market, and if I'm honest, nobody else has recently had to put out anything that said "I'm not like Enron," and I want to be clear that NVIDIA is not like Enron. For real, Enron was a criminal enterprise. It broke the law, it committed real-deal, actual fraud, and NVIDIA is nothing like Enron. In fact, before NVIDIA put out a letter saying how it was nothing like Enron, I would have staunchly defended the company against the Enron allegations, because I truly do not think NVIDIA is committing fraud.

That being said, it is very strange that NVIDIA wants somebody to think about how it's nothing like Enron. This was, technically, an internal memo, and thus there is a chance it was written only for internal NVIDIANs worried about the value of their stock, though we know it was definitely written to try and deflect Michael Burry's criticism, as well as that of a random Substacker who clearly had AI help him write a right-adjacent piece that made all sorts of insane and made-up statements (including several about Arrow Electronics that did not happen) — and no, I won't link it, it's straight up misinformation.

Nevertheless, I think it's fair to ask: why does NVIDIA need you to know that it's nothing like Enron? Did it do something like Enron? Is there a chance that I, or you, may mistakenly say "hey, is NVIDIA doing Enron?"

Heeeeeeyyyy NVIDIA. How're you feeling? Yeah, haha, you had a rough night. You were saying all this crazy stuff about Enron last night, are you doing okay? No, no, I get it, you're nothing like Enron, you said that a lot last night.
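A quick aside, since the memo keeps leaning on DSO: days sales outstanding is just receivables divided by revenue for the period, scaled to days. A sanity check using the disclosed ~$33 billion in receivables and a roughly $57 billion quarterly revenue figure (that revenue number is my own approximation, not from the memo):

```python
# Days sales outstanding (DSO): roughly how long customers take to pay.
#   DSO = accounts receivable / revenue for the period * days in period
# The ~$33B receivables figure comes from the piece; the ~$57B quarterly
# revenue figure is my approximation, so treat the output as a rough check.

def days_sales_outstanding(receivables: float, revenue: float, days: int = 91) -> float:
    return receivables / revenue * days

dso = days_sales_outstanding(33e9, 57e9)
print(f"Implied DSO: {dso:.0f} days")  # lands right around the memo's 53 days
```

So the 53-day figure itself is plausible arithmetic; the question the memo dodges is what "deeming collectability probable" means in practice, not whether the ratio adds up.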
So, while you were asleep — yeah it's been sixteen hours dude, you were pretty messed up, you brought up Lucent then puked in my sink — I did some digging and like, I get it, you are definitely not like Enron, Enron was breaking the law. NVIDIA is definitely not doing that.

But… you did kind of use Special Purpose Vehicles recently? I'm sorry, I know, you're not like Enron! You're investing $2 billion in Elon Musk's special purpose vehicle that will then use that money to raise debt to buy GPUs from NVIDIA that will then be rented to Elon Musk. This is very different to what Enron did! I am with you dude, don't let the haters keep you down! No, I don't think a t-shirt that says "NVIDIA is not like Enron for these specific reasons" helps.

Wait, wait, okay, look. One thing. You had this theoretical deal lined up with Sam Altman and OpenAI to invest $100 billion — and yes, you said in your latest earnings that "it was actually a Letter of Intent with the opportunity to invest," which doesn't mean anything, got it — and the plan was that you would "lease the GPUs to OpenAI." Now how would you go about doing that, NVIDIA? You'd probably need to do exactly the same deal as you just did with xAI. Right? Because you can't very well rent these GPUs directly to Elon Musk, you need to sell them to somebody so that you can book the revenue, you were telling me that's how you make money. I dunno, it's either that or vendor financing.

Oh, you mentioned that already-

-unlike Lucent, NVIDIA does not rely on vendor financing arrangements to grow revenue. In typical vendor financing arrangements, customers pay for products over years. NVIDIA's DSO was 53 in Q3. NVIDIA discloses our standard payment terms, with payment generally due shortly after delivery of products.
We do not disclose any vendor financing arrangements-

Let me stop you right there a second, you were on about this last night before you scared my cats when you were crying about something to do with "two nanometer."

First of all, why are you bringing up typical vendor financing agreements? Do you have atypical ones?

Also, I'm jazzed to hear you "disclose your standard payment terms," but uh, standard payment terms for what exactly? Where can I find those? For every contract?

Also, you are straight up saying you don't disclose any vendor financing arrangements, which is not the same as "not having any vendor financing arrangements." I "do not disclose" when I go to the bathroom, but I absolutely do use the toilet. Let's not pretend like you don't have a history of helping get your buddies funding. You have deals with both Lambda and CoreWeave to guarantee that they will have compute revenue, which they in turn use to raise debt, which is used to buy more of your GPUs. You have learned how to feed debt into yourself quite well, I'm genuinely impressed.

This is great stuff, I'm having the time of my life with how not like Enron you are, and I'm serious that I 100% do not believe you are like Enron. But… what exactly are you doing, man? What're you going to do about what Wall Street wants?

Enron was a criminal enterprise! NVIDIA is not. More than likely NVIDIA is doing relatively boring vendor financing stuff and getting people to pay them on 50-60 day timescales — probably net 60, and, like it said, it gets paid upfront sometimes.

NVIDIA truly isn't like Enron — after all, Meta is the one getting into ENERGY TRADING — to the point that I think it's time to explain to you what exactly happened with Enron. Or, at least as much as is possible within the confines of a newsletter that isn't exclusively about Enron…

The collapse of Enron wasn't just — in retrospect — a large business that ultimately failed.
If that was all it was, Enron wouldn’t command the same space in our heads as other failures from that era, like WorldCom (which I mentioned earlier) and Nortel (which I’ll get to later), both of which were similarly considered giants in their fields. It’s also not just about the fact that Enron failed because of proven business and accounting malfeasance. WorldCom entered bankruptcy due to similar circumstances (though, rather than being liquidated, it was acquired as part of Verizon’s acquisition of MCI — the name of a company that had previously merged with WorldCom, and which WorldCom renamed itself to after bankruptcy), and unlike Enron, it isn’t the subject of flashy Academy-nominated films, or even a Broadway production.  It’s not the size of Enron that makes its downfall so intriguing. Nor, for that matter, is it the fact that Enron did a lot of legally and ethically dubious stuff to bring about its downfall.  No, what makes Enron special is the sheer gravity of its malfeasance, the rotten culture at the heart of the company that encouraged said malfeasance, and the creative ways Enron’s leaders crafted an image of success around what was, at its heart, a dog of a company.  Enron was born in 1985 on the foundations of two older, much less interesting businesses. The first, Houston Natural Gas (HNG), started life as a utility provider, pumping natural gas from the oilfields of Texas to customers throughout the region, before later exiting the utility business to focus on other opportunities. The other, InterNorth, was based in Omaha, Nebraska, and was in the same business — pipelines.  In the mid-1980s, HNG was the subject of a hostile takeover attempt by Coastal Corporation (which, until 2001, operated a chain of refineries and gas stations throughout much of the US mainland). Unable to fend it off by itself, HNG merged with InterNorth, with the combined corporation renamed Enron.
The CEO of this new entity was Ken Lay, an economist by trade who spent most of his career in the energy sector, and who enjoyed deep political connections with the Bush family. He co-chaired George H. W. Bush’s failed 1992 re-election campaign, and allowed Enron’s corporate jet to ferry Bush Sr. and Barbara Bush back and forth to Washington. Center for Public Integrity Director Charles Lewis said that “there was no company in America closer to George W. Bush than Enron.” George W. Bush (the second one) even had a nickname for Lay: Kenny Boy. Anyway, in 1987, Enron hired McKinsey — the world’s most evil management consultancy firm — to help the company create a futures market for natural gas. What that means isn’t particularly important to the story, but essentially, a futures contract is an agreement to buy or sell an asset in the future at a fixed price.  It’s a way of hedging against risk, whether that be from something like price or currency fluctuations, or from default. If you’re buying oil, for example, buying a futures contract for oil to be delivered in six months’ time at a predetermined price means that if the market price spikes in the meantime, your costs won’t spiral.  That bit isn’t terribly important. What does matter is that while working with McKinsey, Lay met someone called Jeff Skilling — a young engineer-turned-consultant who impressed Lay so deeply that he decided to poach him from McKinsey in 1990 and give him the role of chairman and CEO of Enron Finance Group.  Anyway, Skilling continued to impress Lay, who gave him greater and greater responsibility, eventually crowning him Chief Operating Officer (COO) of Enron.  With Skilling in a key leadership position, he was able to shape the organization’s culture. He appreciated those who took risks — even if those risks, when viewed with impartial eyes, were deemed reckless, or even criminal.
He introduced the practice of stack-ranking (also known as “rank and yank”) to Enron, which had previously been pioneered by Jack Welch at GE (see The Shareholder Supremacy from last year). Here, employees were graded on a scale, and those at the bottom of the scale were terminated. Managers had to place at least 10% (other reports say closer to 15%) of employees in the lowest bracket, which created an almost Darwinian drive to survive.  Staffers worked brutal hours. They cut corners. They did some really, really dodgy shit. None of this bothered Skilling in the slightest.  How dodgy, you ask? Well, in 2000 and 2001, California suffered a series of electricity blackouts. This shouldn’t have happened, because California’s total energy demand (at the time) was 28GW and its production capacity was 45GW.  California also shares a transmission grid with other states (and, for what it’s worth, the Canadian provinces of Alberta and British Columbia, as well as part of Baja California in Mexico), meaning that in the event of a shortage, it could simply draw capacity from elsewhere. So, how did it happen?  Well, remember, Enron traded electricity like a commodity, and as a result, it was incentivized to get the highest possible price for that commodity. So, it took power plants offline during peak hours, and exported power to other states when there was real domestic demand.  How does a company like Enron shut down a power station? Well, it just asked.  In one taped phone conversation released after the company’s collapse, an Enron employee named Bill called an official at a Las Vegas power plant (California shares the same grid with Nevada) and asked him to “get a little creative, and come up with a reason to go down. Anything you want to do over there? Any cleaning, anything like that?
" This power crisis had dramatic consequences — for the people of California, who faced outages and price hikes; for Governor Gray Davis, who was recalled by voters and later replaced by Arnold Schwarzenegger; for PG&E, which entered Chapter 11 bankruptcy that year ; and for Southern California Edison, which was pushed to the brink of bankruptcy as a result. This kind of stuff could only happen in an organization whose culture actively rewarded bad behavior .  In fact, Skilling was seemingly determined to elevate the dodgiest of characters to the highest positions within the company, and few were more-ethically-dubious than Andy Fastow, who Skilling mentored like a protegé, and who would later become Enron’s Chief Financial Officer.  Even before vaulting to the top of Enron’s nasty little empire, Fastow was able to shape its accounting practices, with the company adopting mark-to-market accounting practices in 1991 .  Mark-to-market sounds complicated, but it’s really simple. When listing assets on a balance sheet, you don’t use the acquisition cost, but rather the fair-market value of that asset. So, if I buy a baseball card for a dollar, and I see that it’s currently selling for $10 on eBay, I’d say that said asset is worth $10, not the dollar I paid for it, even though I haven’t actually sold it yet.  This sounds simple — reasonable, even — but the problem is that the way you determine the value of that asset matters, and mark-to-market accounting allows companies and individuals to exercise some…creativity.  Sure, for publicly-traded companies (where the price of a share is verifiable, open knowledge), it’s not too bad, but for assets with limited liquidity, limited buyers, or where the price has to be engineered somehow, you have a lot of latitude for fraud.  Let’s go back to the baseball card example. How do you know it’s actually worth $10, and not $1? 
What if the “fair value” isn’t something you can check on eBay, but something somebody told me in person it’s worth? What’s to stop me from lying and saying that the card is actually worth $100, or $1000? Well, other than the fact I’d be committing fraud. What if I have ten $1 baseball cards, and I give my friend $10 and tell him to buy one of the cards using the $10 bill I just handed him, allowing me to say that I’ve realized a $9 profit on one of my $1 cards, and that my other cards are worth $90 and not $9?  And then, what if I use the phony valuation of my remaining cards to get a $50 loan, using the cards as collateral, even though the collateral isn’t even one-fifth of the value of the loan?  You get the idea. While a lot of the things people can do to alter the mark-to-market value of an asset are illegal (and would be covered under generic fraud laws), it doesn’t change the fact that mark-to-market accounting allows for some shenanigans to take place. Another trait of mark-to-market accounting, as employed by Enron, was that it would count all the long-term potential revenue from a deal as quarterly revenue — even if that revenue would be delivered over the course of a decades-long contract, or if the contract would be terminated before its intended expiration date.  It would also realize potential revenue as actual revenue, even before money changed hands, and when the conclusion of the deal wasn’t a certainty. For example, in 1999, Enron sold a stake in four electricity-generating barges in Nigeria (essentially floating power stations) to Merrill Lynch, which allowed the company to register $12m in profit.  That sale was never a real one, in substance — Enron sold pieces to Merrill Lynch, which — I’m not kidding — quickly sold them back to a Special Purpose Vehicle called “LJM2,” controlled by Andrew Fastow. You’re gonna hear that name again.
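The baseball-card arithmetic above is worth spelling out, because it is exactly the shape of a wash trade. A minimal sketch in Python — all names and figures are my own illustration, not from any real filing:

```python
# Hypothetical sketch of the baseball-card example: a "wash trade" used
# to engineer a mark-to-market valuation. Figures are illustrative only.

def mark_to_market(cards_held: int, last_sale_price: float) -> float:
    """Value every remaining card at the most recent observed sale price."""
    return cards_held * last_sale_price

# I hold ten cards that cost $1 each.
cards, cost_basis = 10, 1.00

# I hand a friend $10 and he "buys" one card with my own money.
cards -= 1
last_sale_price = 10.00                          # the engineered price
realized_profit = last_sale_price - cost_basis   # a $9 "profit"

# The other nine cards are now "worth" $90 on paper...
paper_value = mark_to_market(cards, last_sale_price)

# ...even though their honest value is still $9.
honest_value = cards * cost_basis

print(realized_profit, paper_value, honest_value)  # 9.0 90.0 9.0
```

The $50 loan then borrows against `paper_value` rather than `honest_value` — which is the whole trick.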
Although the Merrill Lynch bankers who participated in the deal were eventually convicted of conspiracy and fraud charges (long after the collapse of Enron), their convictions were later quashed on appeal.  But still, for a moment, it gave a jolt to Enron’s quarterly earnings.  Anyway, Enron was incredibly creative when it came to how it valued its assets. Take, for example, fiber optic cables. As the Dot Com bubble swelled, Enron saw an opportunity, and wanted to be able to trade and control the supply of bandwidth, just as it did with more conventional commodities (like oil and gas).  It built, bought, and leased fiber-optic cables throughout the country, and then, using exaggerated estimates of their value and potential long-term revenue, released glowing financial reports that made the company look a lot healthier and more successful than it actually was.  Enron also loved to create special-purpose entities that existed either to generate revenue that didn’t exist, or to hold toxic assets that would otherwise need to be disclosed (with Enron then using its holdings in said entities to boost its balance sheet), or to disguise its debt.  One, Whitewing, was created and capitalized by Enron (and an outside investor), and pretty much exclusively bought assets from Enron — which allowed the company to recognize sales and profits on its balance sheets, even if they were fundamentally contrived.  Another set of entities — known as LJM, named after the first initials of Andy Fastow’s wife and two children, and which I mentioned earlier — did the same thing, allowing the company to hide risky or failing investments, to limit its perceived debt, and to generate artificial profits and revenues. LJM2 was, creatively, the second version of the idea.
Even though the assets that LJM held were, ultimately, dogshit, the distance that LJM provided, combined with Enron’s use of mark-to-market accounting, allowed the company to turn a multi-billion-dollar collective failure into a resounding and (on paper) profitable triumph.  So, how did this happen, and how did it go on for so long?  Well, first, Enron was, at its peak, worth $70bn. Its failure would be a failure for its investors and shareholders, and nobody — besides the press, that is — wanted to ask tough questions.  It had auditors, but they were paid handsomely to turn a blind eye to the criminal malfeasance at the heart of the company. Auditor Arthur Andersen surrendered its license in 2002, bringing an end to the company — and resulting in 85,000 employees losing their jobs.  Well, it’s not so much that it turned a blind eye as that it turned on a big paper shredder, shredding tons — and I’m using that as a measure of weight, not figuratively — of documents as Enron started to implode, for which it was later convicted of obstruction of justice.  I’ve talked about Enron’s culture, but I’d be remiss if I didn’t mention that Enron’s highest-performers and its leadership received hefty bonuses in company equity, motivating them to keep the charade going. Enron’s pension scheme, I should add, was basically entirely Enron stock, and employees were regularly encouraged to buy more, with Kenneth Lay telling employees weeks before the company’s collapse that “the company is fundamentally sound” and to “hang on to their stock.”  Additionally, per the terms of the Enron pension plan, employees were prevented from shifting their holdings into other pension funds, or other investments, until they turned 50. When the company collapsed, those people lost everything, even those who didn’t know anything about Enron’s criminality.
George Maddox, a retired former Enron employee who had his entire retirement tied up in 14,000 Enron shares (worth, at the time, more than $1.3 million), was “forced to spend his golden years making ends meet by mowing pastures and living in a run-down East Texas farmhouse.”  The US Government brought criminal charges against Enron’s top leadership. Ken Lay was convicted of four counts of fraud and making false statements, but died while on vacation in Aspen before sentencing. May he burn in Hell. Skilling was convicted on 24 counts of fraud and conspiracy and sentenced to 24 years in jail. This was reduced in 2013 on appeal to 14 years, and he was released to a halfway house in 2018, and then freed in 2019. He’s since tried to re-enter the energy sector — with one venture combining energy trading and, I kid you not, blockchain technology — although nothing really came of it.  Andy Fastow pled guilty to two counts — one of manipulation of financial statements, and one of self-dealing — and received ten years in prison. This was later reduced to six years, including two years of probation, in part because he cooperated with the investigations against other Enron executives. He is now a public speaker and a tech investor in an AI company, KeenCorp.  His wife, Lea, who also worked at Enron, received twelve months for conspiracy to commit wire fraud and money laundering and for submitting false tax returns. She was released from custody in July 2005.  Enron’s implosion was entirely self-inflicted and horrifyingly, painfully criminal; yet it had plenty of collateral damage — to the US economy, to those companies that had lent it money, to its employees who lost their jobs and their life savings and their retirements, and to those employees at companies most entangled with Enron, like those at auditing firm Arthur Andersen. This isn’t unique among corporate failures. WorldCom had some dodgy accounting practices. Nortel too.
Both companies failed, both companies wrecked the lives of their employees, and the failure of these companies had systemic economic consequences (especially in Canada, where Nortel, at its peak, accounted for one-third of the market cap of all companies on the Toronto Stock Exchange). The reason Enron remains lodged in our imagination — and why NVIDIA is so vociferously opposed to being compared with Enron — is the extent to which Enron manipulated reality to appear stronger and more successful than it was, and how long it was able to get away with it.  While the memory of Enron may have faded — it happened over two decades ago, after all — we haven’t forgotten the instincts that it gave us. It’s why our noses twitch when we see special-purpose vehicles being used to buy GPUs, and why we gag when we see mark-to-market accounting.  It’s entirely possible that everything NVIDIA is doing is above board. Great! But that doesn’t do anything for the deep pit of dread in my stomach.  A few weeks ago, I published the Hater’s Guide to NVIDIA, and included within it a guide to what this company does. If you’re looking at this through the cold, unthinking lens of late-stage capitalism, this all sounds really good! I’ve basically described a company that has an effective monopoly on the one thing required for a high-growth (if we’re talking exclusively about capex spending) industry to exist.  Moreover, that monopoly is all but assured, thanks to NVIDIA’s CUDA moat, its first-mover advantage, and the actual capabilities of the products themselves — thereby allowing the company to charge a pretty penny to customers.  And those customers? If we temporarily forget about the likes of Nebius and CoreWeave (oh, how I wish I could forget about CoreWeave permanently), we’re talking about the biggest companies on the planet. Ones that, surely, will have no problems paying their bills.
Back in February 2023, I wrote about The Rot Economy, and how everything in tech had become oriented around growth — even if it meant making products harder to use as a means of increasing user engagement or funnelling users toward more-profitable parts of an app.  Back in June 2024, I wrote about the Rot-Com Bubble, and my greater theory that the tech industry has run out of hypergrowth ideas: In simple terms, big tech — Amazon, Google, Microsoft and Meta, but also a number of other companies — no longer has the “next big thing,” and jumped on AI out of an abundance of desperation.  Hell, look at Oracle. This company started off by selling databases and ERP systems to big companies, then trapping said companies by making it really, really difficult to migrate to cheaper (and better) solutions, and then bleeding said companies with onerous licensing terms (including some where you pay by the number of CPU cores that run the application). It doesn’t do anything new, or exciting, or impressive, and even when presented with the opportunity to do things that are useful or innovative (like when it bought Sun Microsystems), it turns away. I imagine that, deep down, it recognizes that its current model just isn’t viable in the long term, and so it needs something else.  When you haven’t thought about innovation… well… ever, it’s hard to start. Generative AI, on the face of it, probably seemed like a godsend to Larry Ellison.  We also live in an era where nobody knows what big tech CEOs do other than make nearly $100 million a year, meaning that somebody like Satya Nadella can get called a “thoughtful leader with striking humility” for pushing Copilot AI into every single part of your Microsoft experience, even Notepad, a place that no human being would want it, and accelerating capital expenditures from $28 billion across the entirety of FY 2023 to $34.9 billion in its latest quarter. In simpler terms, spending money makes a CEO look busy.
And at a time when there were no other potential growth avenues, AI was a convenient way to make everybody look busy. Every department can “have an AI strategy,” and every useless manager and executive can yell, as ServiceNow’s CEO did back in 2022, “let me make it clear to everybody here, everything you do: AI, AI, AI, AI, AI.” I should also add that ChatGPT was the first real, meaningful hit that the American tech industry had produced in a long, long time — the last being, if I’m honest, Uber, and that’s if we allow “successful yet not particularly good businesses” into the pile.  If we insist on things like “profitability” and “sustainability,” US tech hasn’t done so great. Snowflake runs at a loss, Snap runs at a loss, and while Uber has turned things around somewhat, it’s hardly created the next cloud computing or smartphone.  Putting aside finances, the last major “hit” was probably Venmo or Zelle, and maybe, if I’m feeling generous, smart speakers like the Amazon Echo and Apple HomePod. Much like Uber, none of these were “the next big thing,” which would be fine, except big tech needs more growth forever right now, pig! This is why Google, Amazon and Meta all do 20 different things — although rarely for any length of time, with these “things” often having a shelf life shorter than a can of peaches — because The Rot Economy’s growth-at-all-costs mindset exists only to please the markets, and the markets demanded growth. ChatGPT was different. Not only did it do something new, it did so in a way that made it relatively easy to get people to try it and “see the potential.” It was also really easy to convince people it would become something bigger and better, because that’s what tech does.
To quote Bender and Hanna, AI is a “marketing term” — a squishy way of evoking futuristic visions of autonomous computers that can do anything and everything for us, and because both consumers and analysts have been primed to believe and trust the tech industry, everybody believed that whatever ChatGPT was would be the Next Big Thing. And said “Next Big Thing” is powered by Large Language Models, which require GPUs sold by one company — NVIDIA.  AI became a very useful thing to do. If a company wanted to seem futuristic and attract investors, it could now “integrate AI.” If a hyperscaler wanted to seem enterprising and like it was “building for the future,” it could buy a bunch of GPUs, or invest in its own silicon, or, as Google, Microsoft, Amazon and Meta have done, shove AI into every imaginable crevice of the app.  Investors could invest in AI companies, retail investors (IE: regular people) could invest in AI stocks, tech reporters could write about something new in AI, LinkedIn perverts could write long screeds about AI, the markets could become obsessed with AI… …and yeah, you can kind of see how things got out of control. Everybody now had something to do. An excuse to do AI, regardless of whether it made sense, because everybody else was doing it. ChatGPT quickly became one of the most popular websites on the internet — all while OpenAI burned billions of dollars — and because the media effectively published every single thought that Sam Altman had (such as that GPT-4 would “automate away some jobs and create others” and that he was a “little bit scared of it”), AI, as an idea, technology, symbolic stock trope, marketing tool and myth, became so powerful that it could do anything, replace anyone, and be worth anything, even the future of your company.
Amongst the hype, there was an assumption related to scaling laws (summarized well by Charlie Meyer): In simple terms, the scaling-laws paper suggested that shoving in more training data and using more compute power would exponentially increase the ability of a model to do stuff. And to make a model that did more stuff, you needed more GPUs and more data centers. Did it matter that there was compelling evidence in 2022 (Gary Marcus was right!) that there were limits to scaling laws, and that we would hit the point of diminishing returns? Amidst all this, NVIDIA has sold over $200 billion of GPUs since the beginning of 2023, becoming the largest company on the stock market and trading at over $170 as I write this sentence, only a few years after trading at $19.52 a share.  You see, Meta, Google, Microsoft and Amazon all wanted to be “part of the future,” so they sank a lot of their money into NVIDIA, making up 42% of its revenue in its fiscal year 2025. Though there are some arguments about exactly how much of big tech’s billowing capital expenditures are spent on GPUs, some estimate that somewhere between 41% and more than 50% of a data center’s capex is spent on them. If you’re wondering what the payoff is, well, you’re in good company. I estimate that there’s only around $61 billion in total generative AI revenue, and that includes every hyperscaler and neocloud. Large Language Models are limited, AI agents are a pipedream and simply do not work, AI-powered products are unreliable, coding LLMs make developers slower, and the cost of inference — the way in which a model produces its output — keeps going up.  So, because so much money has now been piled into building AI infrastructure, and big tech has promised to spend hundreds of billions of dollars more in the next year, big tech has found itself in a bit of a hole. How big a hole?
Well, by the end of the year, Microsoft, Amazon, Google and Meta will have spent over $400bn in capital expenditures, much of it focused on building AI infrastructure, on top of $228.4 billion in capital expenditures in 2024 and around $148bn in capital expenditures in 2023, for a total of around $776bn in the space of three years — and they intend to spend $400 billion or more in 2026. As a result, based on my analysis, big tech needs to make $2 trillion in brand new revenue, specifically from AI, by 2030, or all of this was for nothing. I go into detail in my premium piece, but I’m going to give you a short explanation here. Sadly, you’re going to have to learn stuff. I know! I’m sorry. Introducing a term: depreciation. From my October 31 newsletter: Nobody seems to be able to come to a consensus about how long this should be. In Microsoft’s case, depreciation for its servers is spread over six years — a convenient change it made in August 2022, a few months before the launch of ChatGPT. This means that Microsoft can spread the cost of the tens of thousands of A100 GPUs bought in 2020, or the 450,000 H100 GPUs it bought in 2024, across six years, regardless of whether those are the years they will be either A) generating revenue or B) still functional.  CoreWeave, for what it’s worth, says the same thing — but largely because it’s betting that it’ll still be able to find users for older silicon after its initial contracts with companies like OpenAI expire. The problem, as the aforementioned CNBC article points out, is that this is pretty much untested ground.  Whereas we know how long, say, a truck or a piece of heavy machinery lasts, and how long it can deliver value to an organization, we don’t know the same thing about the kind of data center GPUs that hyperscalers are spending tens of billions of dollars on each year. Any kind of depreciation schedule is based on, at best, assumptions, and at worst, hope.
The assumption that the cards won’t degrade with heavy usage. The assumption that future generations of GPUs won’t be so powerful and impressive that they render the previous ones more obsolete than expected, much as the first jet-powered planes of the 1950s did to those manufactured just a decade prior. The assumption that there will, in fact, be a market for older cards, and that there’ll be a way to lease them profitably. What if those assumptions are wrong? What if that hope is, ultimately, irrational?  Mihir Kshirsagar of the Center for Information Technology Policy framed the problem well: This is why Michael Burry brought it up recently — because spreading out these costs allows big tech to make their net income (IE: profits) look better. In simple terms, by spreading out costs over six years rather than three, hyperscalers are able to reduce a line item that eats into their earnings, which makes their companies look better to the markets. So, why does this create an artificial time limit? In really, really simple terms:  So, now that you know this, there’s a fairly obvious question to ask: why are they still buying GPUs? Also…where the fuck are they going? As I covered in the Hater’s Guide To NVIDIA: While I’m not going to copy-paste my whole (premium) piece, I was only able to find, at most, a few hundred thousand Blackwell GPUs — many of which aren’t even online!
— including OpenAI’s Stargate Abilene (allegedly 400,000, though only two buildings have been handed over); a theoretical 131,000-GPU cluster owned by Oracle announced in March 2025; 5,000 Blackwell GPUs at the University of Texas, Austin; “more than 1,500” in a Lambda data center in Columbus, Ohio; the Department of Energy’s still-in-development 100,000-GPU supercluster, as well as “10,000 NVIDIA Blackwell GPUs” that are “expected to be available in 2026” in its “Equinox” cluster; 50,000 going into the still-unbuilt Musk-run Colossus 2 supercluster; CoreWeave’s “largest GB200 Blackwell cluster” of 2,496 Blackwell GPUs; “tens of thousands” of them deployed globally by Microsoft (including 4,600 Blackwell Ultra GPUs); and 260,000 GPUs for five AI data centers for the South Korean government …and I am still having trouble finding one million of these things that are actually allocated anywhere, let alone in a data center, let alone one with sufficient power. I do not know where these six million Blackwell GPUs have gone, but they certainly haven’t gone into data centers that are powered and turned on. In fact, power has become one of the biggest issues with building these things, in that it’s really difficult (and maybe impossible!) to get the amount of power these things need.  In really simple terms: there isn’t enough capacity for those six million Blackwell GPUs, in part because the data centers aren’t built, and in part because there isn’t enough power for the ones that are. Microsoft CEO Satya Nadella recently said on a podcast that his company “[didn’t] have the warm shells to plug into,” meaning buildings with sufficient power, and strongly suggested Microsoft “may actually have a bunch of chips sitting in inventory that [he] couldn’t plug in.” The news that HPE’s (Hewlett Packard Enterprise) AI server business underperformed, and by a significant margin, only raises more questions about where these chips are going.
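The capex totals and the depreciation trick described a little earlier are both simple arithmetic, and it’s worth seeing the mechanism laid bare. A minimal sketch — the capex figures are the ones quoted in this piece, while the $60bn GPU purchase is a hypothetical number of my own, purely for illustration:

```python
# Two bits of arithmetic from this piece, spelled out.

# Three years of big tech capital expenditures, per the totals above:
capex_total = 400e9 + 228.4e9 + 148e9
print(round(capex_total / 1e9))  # 776  ("around $776bn")

# Straight-line depreciation: the same purchase spread over different
# useful lives. $60bn is a hypothetical figure, not any company's actual spend.
def annual_depreciation(cost: float, useful_life_years: int) -> float:
    """Straight-line: an equal slice of the cost hits earnings each year."""
    return cost / useful_life_years

gpu_spend = 60e9
three_year = annual_depreciation(gpu_spend, 3)  # $20bn/year against earnings
six_year = annual_depreciation(gpu_spend, 6)    # $10bn/year against earnings

# Stretching the schedule from three years to six halves the annual expense,
# flattering net income whether or not the GPUs still work (or earn) in year six.
print(three_year - six_year)  # 10000000000.0
```

The same purchase, the same cash out the door — only the timing of when it dents reported profit changes.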
So why, pray tell, is Jensen Huang of NVIDIA saying that he has 20 million Blackwell and Vera Rubin GPUs ordered through the end of 2026? Where are they going to go? I truly don’t know!  AI bulls will tell you about the “insatiable demand for AI” and that these massive amounts of orders are proof of something or other, and you know what, I’ll give them that — people sure are buying a lot of NVIDIA GPUs! I just don’t know why. Nobody has made a profit from AI, and those making revenue aren’t really making much.  For example, my reporting on OpenAI from a few weeks ago suggests that the company only made $4.329 billion in revenue through the end of September, extrapolated from the 20% revenue share that Microsoft receives from the company. As some people have argued with the figures, claiming they are either A) delayed or B) not inclusive of the revenue that OpenAI is paid from Microsoft as part of Bing’s AI integration and sales of OpenAI’s models via Microsoft Azure, I wanted to be clear about two things: In the same period, it spent $8.67 billion on inference (the process in which an LLM creates an output). This is the biggest company in the generative AI space, with 800 million weekly active users and the mandate of heaven in the eyes of the media. Anthropic, its largest competitor, alleges it will make $833 million in revenue in December 2025, and based on my estimates will end up having $5 billion in revenue by the end of the year. Based on my reporting from October, Anthropic spent $2.66 billion on Amazon Web Services through the end of September, meaning that it (based on my own analysis of reported revenues) spent 104% of its $2.55 billion in revenue up until that point just on AWS, and likely spent just as much on Google Cloud.
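Both of those numbers fall out of basic arithmetic. A sketch using the figures quoted above — note that the roughly $866m “Microsoft’s share” figure is implied by this piece’s numbers (20% of $4.329 billion), not something separately reported:

```python
# Back-of-the-envelope versions of the two calculations above.

# If Microsoft receives a 20% share of OpenAI's revenue, whatever Microsoft
# booked implies OpenAI's total. The ~$866m figure is derived from the
# article's $4.329bn estimate, not independently reported.
msft_booked_share = 865.8e6
implied_openai_revenue = msft_booked_share / 0.20
print(round(implied_openai_revenue / 1e9, 3))  # 4.329

# Anthropic: $2.66bn spent on AWS against $2.55bn in revenue through the
# end of September — i.e. more than every dollar it brought in.
aws_spend, anthropic_revenue = 2.66e9, 2.55e9
print(round(aws_spend / anthropic_revenue * 100))  # 104
```

That 104% is before Anthropic’s reported Google Cloud spend, payroll, or anything else — AWS alone exceeds revenue.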
While everybody wants to tell the story of Anthropic’s “efficiency” and “only burning $2.8 billion this year,” one has to ask why a company that is allegedly “reducing costs” had to raise $13 billion in September 2025, after raising $3.5 billion in March 2025, and after raising $4 billion in November 2024. Am I really meant to read stories about Anthropic breaking even in 2028 with a straight face? Especially as other stories say Anthropic will be cash flow positive “as soon as 2027.” These are the two largest companies in the generative AI space, and by extension the two largest consumers of GPU compute. Both companies burn billions of dollars, and require an infinite amount of venture capital to keep them alive at a time when the Saudi Public Investment Fund is struggling and the US venture capital system is set to run out of cash in the next year and a half. The two largest sources of actual revenue for selling AI compute are subsidized by venture capital and debt. What happens if those sources dry up? And, in all seriousness, who else is buying AI compute? What are they doing with it? Hyperscalers (other than Microsoft, which chose to stop reporting its AI revenue back in January, when it claimed $13 billion, or about $1 billion a month, in revenue) don’t disclose anything about their AI revenue, which in turn means we have no real idea about how much real, actual money is coming in to justify these GPUs.  CoreWeave made $1.36 billion in revenue (and lost $110 million doing so) in its last quarter — and if that’s indicative of the kind of actual, real demand for AI compute, I think it’s time to start panicking about whether all of this was for nothing.
CoreWeave has a backlog of over $50 billion in compute, but $22 billion of that is OpenAI (a company that burns billions of dollars a year and lives on venture subsidies), $14 billion of that is Meta (which has yet to work out how to make any kind of real money from generative AI, and no, its “generative AI ads” are not the future, sorry), and the rest is likely a mixture of Microsoft and NVIDIA, which agreed to buy $6.3 billion of any unused compute from CoreWeave through 2032. Sorry, I forgot Google, which is renting capacity from CoreWeave to rent to OpenAI. I also forgot to mention that CoreWeave’s backlog problem stems from data center construction delays. That, and CoreWeave has $14 billion in debt, mostly from buying GPUs, which it was able to raise by using those GPUs as collateral and by having contracts from customers willing to pay it, such as NVIDIA, which is also the company selling it the GPUs. So, just to be abundantly clear: CoreWeave has bought all those GPUs to rent to OpenAI, Microsoft (for OpenAI), Meta, Google (for OpenAI), and NVIDIA, which is the company that benefits from CoreWeave’s continued ability to buy GPUs. Otherwise, where’s the fucking business, exactly? Who are the customers? Who are the people renting these GPUs, and for what purpose are they being rented? How much money is being made renting those GPUs? You can sit and waffle on about the supposedly glorious “AI revolution” all you want, but where’s the money, exactly? And why, exactly, are we buying more GPUs? What are they doing? To whom are they being rented? For what purpose? And why isn’t it creating the kind of revenue that is actually worth sharing? Is it because the revenue sucks? Is it because it’s unprofitable to provide it? And why, at this point in history, do we not know? Hundreds of billions of dollars have made NVIDIA the biggest company on the stock market and we still do not know why people are buying these fucking things.
NVIDIA is currently making hundreds of billions in revenue selling GPUs to companies that either plug them in and start losing money or, I assume, put them in a warehouse for safe keeping. This brings me to my core anxiety: why, exactly, are companies pre-ordering GPUs? What benefit is there in doing so? Blackwell does not appear to be “more efficient” in a way that actually makes anybody a profit, and we’re potentially years from seeing these GPUs in operation in data centers at the scale they’re being shipped — so why would anybody be buying more?  I doubt these are new customers — they’re likely hyperscalers, neoclouds like CoreWeave and resellers like Dell and SuperMicro — because the only companies that can actually afford to buy them are those with massive amounts of cash or debt, to the point that even Google , Amazon , Meta and Oracle are taking on massive amounts of new debt, all without a plan to make a profit. NVIDIA’s largest customers are increasingly unable to afford its GPUs, which appear to be increasing in price with every subsequent generation. NVIDIA’s GPUs are so expensive that the only way you can buy them is by already having billions of dollars or being able to raise billions of dollars, which means, in a very real sense, that NVIDIA is dependent not on its customers , but on its customers’ credit ratings and financial backers. To make matters worse, the key reason that one would buy a GPU is to either run services using it or rent it to somebody else, and the two largest parties spending money on these services are OpenAI and Anthropic, both of whom lose billions of dollars, and are thus dependent on venture capital and debt (remember, OpenAI has a $4 billion line of credit , and Anthropic a $2.5 billion one too ). In simple terms, NVIDIA’s customers rely on debt to buy its GPUs, and NVIDIA’s customers’ customers rely on debt to pay to rent them.  Yet it gets worse from there. Who, after all, are the biggest customers renting AI compute? 
That’s right, AI startups, all of which are deeply unprofitable. Cursor — Anthropic’s largest customer and now its biggest competitor in the AI coding sphere — raised $2.3 billion in November after raising $900 million in June. Perplexity, one of the most “popular” AI companies, raised $200 million in September after raising $100 million in July after seeming to fail to raise $500 million in May (I’ve not seen any proof this round closed) after raising $500 million in December 2024. Cognition raised $400 million in September after raising $300 million in March. Cohere raised $100 million in September a month after it raised $500 million. Venture capital feeds money to these startups, which pay either OpenAI or Anthropic to use their models, or in some cases pay hyperscalers or neoclouds like CoreWeave or Lambda to rent NVIDIA GPUs. OpenAI and Anthropic then raise venture capital or debt to pay hyperscalers or neoclouds to rent NVIDIA GPUs. Hyperscalers and neoclouds then use either debt or existing cash flow (in the case of hyperscalers, though not for long!) to buy more NVIDIA GPUs. Only one company actually makes a profit here: NVIDIA. At some point, a link in this debt-backed chain breaks, because very little cash flow exists to prop it up. At some point, venture capitalists will be forced to stop funnelling money into unprofitable, unsustainable AI companies, which will make those companies unable to funnel money into the pockets of those buying GPUs, which will make it harder for those companies buying GPUs to justify (or raise debt for) buying more GPUs. And if I’m honest, none of NVIDIA’s success really makes any sense. Who is buying so many GPUs? Where are they going? Why are inventories increasing? Is it really just pre-buying parts for future orders? Why are accounts receivable climbing, and how much product is NVIDIA shipping before it gets paid?
While these are both explainable as “this is a big company and that’s how big companies do business” (which is true!), why do receivables not seem to be coming down?  And how long, realistically, can the largest company on the stock market continue to grow revenues selling assets that only seem to lose its customers money? I worry about NVIDIA, not because I believe there’s a massive scandal, but because so much rides on its success, and its success rides on the back of dwindling amounts of venture capital and debt, because nobody is actually making money to pay for these GPUs.   In fact, I’m not even saying it goes tits up. Hell, it might even have another good quarter or two. It really comes down to how long people are willing to be stupid and how long Jensen Huang is able to call hyperscalers at three in the morning and say “buy one billion dollars of GPUs, pig.”  No, really! I think much of the US stock market’s growth is held up by how long everybody is willing to be gaslit by Jensen Huang into believing that they need more GPUs. At this point it’s barely about AI anymore, as AI revenue — real, actual cash made from selling services run on GPUs — doesn’t even cover its own costs, let alone create the cash flow necessary to buy $70,000 GPUs thousands at a time. It’s not like any actual innovation or progress is driving this bullshit!  In any case, the markets crave a healthy NVIDIA, as so many hundreds of billions of dollars of NVIDIA stock sit in the hands of retail investors and people’s 401ks, and its endless growth has helped paper over the pallid growth of the US stock market and, by extension, the decay of the tech industry’s ability to innovate. Once this pops — and it will pop, because there is simply not enough money to do this forever — there must be a referendum on those that chose to ignore the naked instability of this era, and the endless lies that inflated the AI bubble. Until then, everybody is betting billions on the idea that Wile E. 
Coyote won’t look down. Let’s start with a horrible fact: it takes about 2.5 years of construction time and $50 billion per gigawatt of data center capacity . One way or another, these GPUs are depreciating in value, either through death (or reduced efficacy through wear and tear) or becoming obsolete, which is very likely as NVIDIA has committed to releasing a new GPU every year . At some point, Wall Street is going to need to see some sort of return on this investment, and right now that return is “negative dollars.”  I break it down in my premium piece, but I estimate that big tech needs to make $2 for every $1 of capex . This revenue must also be brand spanking new, as this capex is only for AI. Meta, Amazon, Google and Microsoft are already years and hundreds of billions of dollars in , and are yet to see a dollar of profit , creating a $1.21 trillion hole just to justify the expenses (so around $605 billion in capex all told, at the time I calculated it). You might argue that there’s a scenario where, say, an A100 GPU is “useful” past the 3 or 6 year shelf life. Even if that were the case, the average rental price of an A100 GPU is 99 cents an hour . This is a four or five-year-old GPU, and customers are paying for it like they would a five-year-old piece of hardware. The same fate awaits H100 GPUs too. Every year, NVIDIA releases a new GPU, lowering the value of all the other GPUs in the process, making it harder to fill in the holes created by all the other GPUs. This whole time, nobody appears to have found a way to make a profit, meaning that the hole created by these GPUs remains unfilled, all while big tech firms buy more GPUs, creating more holes to fill. Big tech keeps buying more GPUs despite the old GPUs failing to pay for themselves. To fix this problem, big tech is buying more GPUs.  
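To put numbers on the depreciation problem, here's a sketch of the two figures above: the revenue "hole" implied by my $2-per-$1-of-capex estimate, and the best-case annual rental income from an aging A100. The 100% utilization assumption is mine, and it is wildly generous:

```python
# The revenue "hole" implied by AI capex (figures from the piece, in billions).
capex_to_date = 605        # AI-focused capex across Meta, Amazon, Google and Microsoft
required_multiple = 2      # my estimate: $2 of new revenue per $1 of capex
revenue_hole = capex_to_date * required_multiple
print(f"New revenue needed to justify spend: ${revenue_hole / 1000:.2f} trillion")  # $1.21 trillion

# Best case for an aging A100 at the going rate of $0.99/hour,
# assuming (very generously) 100% utilization, around the clock:
hourly_rate = 0.99
annual_revenue = hourly_rate * 24 * 365
print(f"Max annual rental revenue per A100: ${annual_revenue:,.0f}")  # ~$8,672
```

That second number is the absolute ceiling for a GPU that cost five figures to buy and still costs money to power and cool.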
Newer generation GPUs — like NVIDIA’s Blackwell and Vera Rubin — require entirely new data center architecture, meaning that one has to either build a brand new data center or retrofit an old one.  Big tech is spending billions of dollars to make sure it’s able to turn on these new GPUs, at which point you may think that they’ll make a profit.  Even when they’re turned on, these things don’t make money. The Information reports that Oracle’s Blackwell GPUs have a negative 100% gross margin .  How exactly are these bloody things meant to make more money than they cost in the next six years, let alone three? They don’t make a profit now and have no path to doing so in the future! I feel like I’m going INSANE! This is accrual accounting, meaning that these numbers are revenue booked in the quarter I reported them. Any comments about quarter-long delays in payments are incorrect. Microsoft’s revenue share payments to OpenAI are pathetic — totalling, based on documents reviewed by this publication, $69.1 million in CY (calendar year) Q3 2025.


Premium: The Ways The AI Bubble Might Burst

[Editor's Note: this piece previously said "Blackstone" instead of "Blackrock," which has now been fixed.] I've been struggling to think about what to write this week, if only because I've written so much recently and because, if I'm honest, things aren't really making a lot of sense. NVIDIA claims to have shipped six million Blackwell GPUs in the last four quarters — as I went into in my last premium piece — working out to somewhere between 10GW and 12GW of power (based on the power draw of B100 and B200 GPUs and GB200 and GB300 racks), which...does not make sense based on the amount of actual data center capacity brought online. Similarly, Anthropic claims to be approaching $10 billion in annualized revenue — so around $833 million in a month — which would make it competitive with OpenAI's projected $13 billion in revenue, though I should add that based on my reporting extrapolating OpenAI's revenues from Microsoft's revenue share , I estimate the company will miss that projection by several billion dollars, especially now that Google's Gemini 3 launch has put OpenAI on a " Code Red, " shortly after an internal memo revealed that Gemini 3 could “create some temporary economic headwinds for [OpenAI]." Which leads me to another question: why? Gemini 3 is "better," in the same way that every single new AI model is some indeterminate level of "better." Nano Banana Pro is, to Simon Willison, " the best available image generation model. " But I can't find a clear, definitive answer as to why A) this is "so much better," B) why everybody is freaking out about Gemini 3, and C) why this would have created "headwinds" for OpenAI, headwinds so severe that it has had to rush out a model called Garlic "as soon as possible" according to The Information : Right, sure, cool, another model. Again, why is Gemini 3 so much better and making OpenAI worried about "economic headwinds"? 
Could this simply be a convenient excuse to cover over, as Alex Heath reported a few weeks ago , ChatGPT's slowing download and usage growth ? Experts I've talked to arrived at two conclusions: I don't know about garlic or shallotpeat or whatever , but one has to wonder at some point what it is that OpenAI is doing all day : So, OpenAI's big plan is to improve ChatGPT , make the image generation better , make people like the models better , improve rankings , make it faster, and make it answer more stuff. I think it's fair to ask: what the fuck has OpenAI been doing this whole time if it isn't "make the model better" and "make people like ChatGPT more"? I guess the company shoved Sora 2 out the door — which is already off the top 30 free Android apps in the US and at 17 on the US free iPhone apps rankings as of writing this sentence after everybody freaked out about it hitting number one . All that attention, and for what? Indeed, signs seem to be pointing towards reduced demand for these services. As The Information reported a few days ago ... Microsoft, of course, disputed this, and said... Well, I don't think Microsoft has any problems selling compute to OpenAI — which paid it $8.67 billion just for inference between January and September — as I doubt there is any "sales team" having to sell compute to OpenAI. But I also want to be clear that Microsoft added a word: "aggregate." The Information never used that word, and indeed nobody seems to have bothered to ask what "aggregate" means. I do, however, know that Microsoft has had trouble selling stuff. As I reported a few months ago, in August 2025 Redmond only had 8 million active paying licenses for Microsoft 365 Copilot out of the more-than-440 million people paying for Microsoft 365 . In fact, here's a rundown of how well AI is going for Microsoft: Yet things are getting weird. Remember that OpenAI-NVIDIA deal? 
The supposedly "sealed" one where NVIDIA would invest $100 billion in OpenAI , with each tranche of $10 billion gated behind a gigawatt of compute? The one that never really seemed to have any fundament to it, but people reported as closed anyway? Well, per NVIDIA's most-recent 10-Q (emphasis mine): A letter of intent "with an opportunity" means jack diddly squat. My evidence? NVIDIA's follow-up mention of its investment in Anthropic: This deal, as ever, was reported as effectively done , with NVIDIA investing $10 billion and Microsoft $5 billion, saying the word "will" as if the money had been wired, despite the "closing conditions" and the words "up to" suggesting NVIDIA hasn't really agreed how much it will really invest. A few weeks later, the Financial Times would report that Anthropic is trying to go public   as early as 2026 and that Microsoft and NVIDIA's money would "form part of a funding round expected to value the group between $300bn and $350bn." For some reason, Anthropic is hailed as some sort of "efficient" competitor to OpenAI, at least based on what both The Information and Wall Street Journal have said, yet it appears to be raising and burning just as much as OpenAI . Why did a company that's allegedly “reducing costs” have to raise $13 billion in September 2025 after raising $3.5 billion in March 2025 , and after raising $4 billion in November 2024 ? Am I really meant to read stories about Anthropic hitting break even in 2028 with a straight face? Especially as other stories say Anthropic will be cash flow positive “ as soon as 2027 .” And if this company is so efficient and so good with money , why does it need another $15 billion, likely only a few months after it raised $13 billion? 
Though I doubt the $15 billion round closes this year, if it does, it would mean that Anthropic would have raised $31.5 billion in 2025 — which is, assuming the remaining $22.5 billion comes from SoftBank, not far from the $40.8 billion OpenAI would have raised this year. In the event that SoftBank doesn't fund that money in 2025, Anthropic will have raised a little under $2 billion less ($16.5 billion) than OpenAI ($18.3 billion, consisting of $10 billion in June   split between $7.5 billion from SoftBank and $2.5 billion from other investors, and an $8.3 billion round in August ) this year. I think it's likely that Anthropic is just as disastrous a business as OpenAI, and I'm genuinely surprised that nobody has done the simple maths here, though at this point I think we're in the era of "not thinking too hard because when you do so everything feels crazy.” Which is why I'm about to think harder than ever! I feel like I'm asked multiple times a day both how and when the bubble will burst, and the truth is that it could be weeks or months or another year , because so little of this is based on actual, real stuff. While our markets are supported by NVIDIA's eternal growth engine, said growth engine isn't supported by revenues or real growth or really much of anything beyond vibes. As a result, it's hard to say exactly what the catalyst might be, or indeed what the bubble bursting might look like. Today, I'm going to sit down and give you the scenarios — the systemic shocks — that would potentially start the unravelling of this era, as well as explain what a bubble bursting might actually look like, both for private and public companies. This is the spiritual successor to August's AI Bubble 2027 , except I'm going to have a little more fun and write out a few scenarios that range from likely to possible , and try and give you an enjoyable romp through the potential apocalypses waiting for us in 2026. 
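The fundraising comparison above is just addition, but since nobody seems to actually do it, here it is laid out (all figures in billions, taken from the rounds listed in this piece; the SoftBank scenario is the one described above):

```python
# 2025 fundraising totals for Anthropic and OpenAI (billions of dollars).
anthropic_closed = 3.5 + 13                       # March round + September round
anthropic_if_round_closes = anthropic_closed + 15 # if the reported $15B round closes in 2025
openai_closed = 10 + 8.3                          # June round + August round
openai_if_softbank_funds = openai_closed + 22.5   # remaining SoftBank commitment

print(anthropic_closed)                            # 16.5
print(anthropic_if_round_closes)                   # 31.5
print(round(openai_if_softbank_funds, 1))          # 40.8
print(round(openai_closed - anthropic_closed, 1))  # 1.8: "a little under $2 billion"
```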
- Gemini 3 is good/better at the stuff tested on benchmarks compared to what OpenAI has.
- OpenAI's growth and usage was decelerating before this happened, and this just allows OpenAI to point to something.
- Microsoft's chips effort is falling behind, with its "Maia" AI chip delayed to 2026, and according to The Information, "when it finally goes into mass production next year, it’s expected to fall well short of the performance of Nvidia’s flagship Blackwell chip."
- According to The Information in late October 2025, "more customers have been using Microsoft’s suite of AI copilots, but many of them aren’t paying for it."
- In October, Australia's Competition and Consumer Commission sued Microsoft for "allegedly misleading 2.7 million Australians over Microsoft 365 subscriptions" by making it seem like they had to pay extra and integrate Copilot into their subscription rather than buy the, and I quote, "undisclosed third option, the Microsoft 365 Personal or Family Classic plans, which allowed subscribers to retain the features of their existing plan, without Copilot, at the previous lower price." This is what a company does when it can't sell shit. Google did the same thing with its workspace accounts earlier in the year. This should be illegal!
- According to The Information in September 2025, Microsoft had to "partly" replace OpenAI's models with Anthropic's for some of its Copilot software. Microsoft has, at this point, sunk over ten billion dollars into OpenAI, and part of its return for doing so was exclusively being able to use its models. Cool!
- According to The Information in September 2025, Microsoft has had to push discounts for Office 365 Copilot as customers had "found Copilot adoption slow due to high cost and unproven ROI." In late 2024, customers had paused purchasing further Copilot assistants due to performance and cost issues.


Premium: The Hater's Guide To NVIDIA

This piece has a generous 3000+ word introduction, because I want as many people to understand NVIDIA as possible. The (thousands of) words after the premium break get into arduous detail, but I’ve written this so that, ideally, most people can pick up the details early on and understand this clusterfuck. Please do subscribe to the premium! I really appreciate it. I've reached a point with this whole era where there are many, many things that don't make sense, and I know I'm not alone. I've been sick since Friday last week, and thus I have had plenty of time to sit and think about stuff. And by "stuff" I mean the largest company on the stock market: NVIDIA.  Look, I'm not an accountant, nor am I a "finance expert." I learned all of this stuff myself. I learn a great deal by coming to things from the perspective of being a dumbass , a valuable intellectual framework of "I need to make sure I understand each bit and explain it as simply as possible." In this piece, I'm going to try and explain both what this company is, how we got here, and ask questions that I, from the perspective of a dumbass, have about the company, and at least try and answer them. Let's start with a very simple point: for a company of such remarkable size, very few people — myself included, at times! —  seem to actually understand NVIDIA. NVIDIA is a company that sells all sorts of stuff, but the only reason you're hearing about it as a normal person is that NVIDIA's stock has become a load-bearing entity in the US stock market. This has happened because NVIDIA sells "GPUs" — graphics processing units — that power the large language model services that are behind the whole AI boom, either through "inference" (the process of creating an output from an AI model) or "training" (feeding data into the model to make its outputs better). NVIDIA also sells other things, which I’ll get to later, but it doesn’t really matter to the bigger picture. 
Back in 2006, NVIDIA launched CUDA, a software layer that lets you run (some) software on (specifically) NVIDIA graphics cards, and over time this has grown into a massive advantage for the company. The thing is, GPUs are great for parallel processing - essentially spreading a task across multiple processor cores (by which I mean thousands of them) at the same time - which means that certain tasks run faster than they would on, say, a CPU. While not every task benefits from parallel processing, or from having several thousand cores available at the same time, the kind of math that underpins LLMs is one such example. CUDA is proprietary to NVIDIA, and while there are alternatives (both closed- and open-source), none of them have the same maturity and breadth. Pair that with the fact that NVIDIA’s been focused on the data center market for longer than, say, AMD, and it’s easy to understand why it makes so much money. There really isn’t anyone who can do the same thing as NVIDIA, both in terms of software and hardware, and certainly not at the scale necessary to feed the hungry tech firms that demand these GPUs. Anyway, back in 2019 NVIDIA acquired a company called Mellanox for $6.9 billion, beating out other would-be suitors, including Microsoft and Intel. Mellanox was a manufacturer of high-performance networking gear, and this acquisition gave NVIDIA a stronger value proposition for data center customers. It wanted to sell GPUs — lots of them — to data center customers, and now it could also sell the high-speed networking technology required to make them work in tandem. This is relevant because it created the terms under which NVIDIA could start selling billions (and eventually tens of billions) of dollars of specialized GPUs for AI workloads.
As pseudonymous finance account JustDario connected (both Dario and Kakashii have been immensely generous with their time explaining some of the underlying structures of NVIDIA, and are worth reading, though at times we diverge on a few points), mere months after the Mellanox acquisition, Microsoft announced its $1 billion investment in OpenAI to build "Azure AI supercomputing technologies." Though it took until November 2022 for ChatGPT to really start the fires, in March 2020, NVIDIA began the AI bubble with the launch of its "Ampere" architecture, and the A100, which provided "the greatest generational performance leap of NVIDIA's eight generations of GPUs," built for "data analytics, scientific computing and cloud graphics." The most important part, however, was the launch of NVIDIA's "Superpod": Per the press release: One might be fooled into thinking this was Huang suggesting we could now build smaller, more efficient data centers, when he was actually saying we should build way bigger ones that had way more compute power and took up way more space. The "Superpod" concept — groups of GPU servers networked together to work on specific operations — is the "thing" that is driving NVIDIA's sales. To "make AI happen," a company must buy thousands of these things and put them in data centers, and you'd be a god damn idiot not to do this, and yes, it requires so much more money than you used to spend. At the time, a DGX A100 — a server that housed eight A100 GPUs (starting at around $10,000 at launch per GPU, increasing with the amount of on-board RAM, as is the case across the board) — started at $199,000. The next generation, launched in 2022, housed eight H100 GPUs (starting at $25,000 per GPU; the next-generation "Hopper" chips were apparently 30 times more powerful than the A100) and retailed from $300,000. You'll be shocked to hear the next generation Blackwell SuperPods started at $500,000 when launched in 2024.
A single B200 GPU costs at least $30,000 . Because nobody else has really caught up with CUDA, NVIDIA has a functional monopoly ( edit: I wrote monopsony in a previous version, sorry), and yes, you can have a situation where a market has a monopoly, even if there is, at least in theory, competition. Once a particular brand — and particular way of writing software for a particular kind of hardware — takes hold, there's an implicit cost of changing to another, on top of the fact that AMD and others have yet to come up with something particularly competitive. Anyway, the reason that I'm writing all of this out is because I want you to understand why everybody is paying NVIDIA such extremely large amounts of money. Every year, NVIDIA comes up with a new GPU, and that GPU is much, much more expensive, and NVIDIA makes so much more money, because everybody has to build out AI infrastructure full of whatever the latest NVIDIA GPUs are, and those GPUs are so much more expensive every single year. With Blackwell — the third generation of AI-specialized GPUs — came a problem, in that these things were so much more power-hungry, and required entirely new ways of building data centers, along with different cooling and servers to put them in, much of which was sold by NVIDIA. While you could kind of build around your current data centers to put A100s and H100s into production, Blackwell was...less cooperative, and ran much hotter. To quote NVIDIA Employee Number 4 David Rosenthal : In simple terms, Blackwell runs hot, so much hotter than Ampere (A100) or Hopper (H100) GPUs that it requires entirely different ways to cool it, meaning your current data center needs to be ripped apart to fit them. Huang has confirmed that Vera Rubin, the next generation of GPUs, will have the same architecture as Blackwell . I would bet money that it's also much more expensive. Anyway, all of this has been so good for NVIDIA. 
As the single vendor for the most important component in the entire AI boom, it has set the terms for how much you pay and how you build any and all AI infrastructure. While there are companies like Supermicro and Dell who buy NVIDIA GPUs and ship them in servers to customers, that's just fine for NVIDIA CEO Jensen Huang, as that's somebody else selling his GPUs for him. NVIDIA has been printing money, quarter after quarter, going from a meager $7.192 billion in total revenue in the third (calendar year) quarter of 2023 to an astonishing $50 billion in just data center revenue (that's where the GPUs are) in its most recent quarter, for a total of $57 billion in revenue, and the company projects to make $63 billion to $67 billion in the next quarter. Now, I'm going to stop you here, because this bit is really important, really simple, yet nobody thinks about it much: NVIDIA makes so much money, and it makes it from a much smaller customer base than most companies, because there are only so many entities that can buy thousands of chips that cost $50,000 or more each. $35 billion, $39 billion, $44 billion, $46 billion and $57 billion are very large amounts of money, and the entities pumping those numbers into the stratosphere are collectively having to spend hundreds of billions of dollars to make it happen. So, let me give you a theoretical example. I swear I'm going somewhere with this. You, a genius, have decided you are about to join the vaunted ranks of "AI data center ownership." You decide to build a "small" AI data center — 25MW (megawatts, which, in this example, refers to the combined power draw of the tech inside the data center). That can't be that much, right? OpenAI is building a 1.2GW one out in Abilene, Texas. How much could this tiny little thing cost? Okay, well, let's start with those racks. You're gonna need to give Jensen Huang $600 million right away, as you need 200 GB200 racks.
You're also gonna need a way to make them network together, because otherwise they aren't going to be able to handle all those big IT loads , so that's gonna be another $80 million or more, and you're going to need storage and servers to sync all of this up, which is, let's say, another $35 million. So we're at $715 million. Should be fine, right? Everybody's cool and everybody's normal. This is just a small data center after all. Oops, forgot cooling and power delivery stuff — that's another $5 million. $720 million. Okay. Anyway, sadly data centers require something called a "building." Construction costs for a data center are somewhere from $8 million to $12 million per megawatt , so, crap, okay. That's $250 million, but probably more like $300 million. We're now up to $1.02 billion, and we haven't even got the power yet. Okay, sick. Do you have one billion dollars? You don't? No worries! Private credit — money loaned by non-banking entities — has been feeding more than $50 billion dollars a quarter into the hungry mouths of anybody who desires to build a data center . You need $1.02 billion. You get $1.5 billion, because, you know, "stuff happens." Don't worry about those pesky high interest rates — you're about to be printing big money, AI style! Now you're done raising all that cash, it'll now only take anywhere from 6 to 18 months for site selection, permitting, design, development, construction, and energy procurement . You're also going to need about 20 acres of land for that 100,000 square foot data center . You may wonder why 100,000 square feet needs that much space, and that's because all of the power and cooling equipment takes up an astonishing amount of room. So, yeah, after two years and over a billion dollars, you too can own a data center with NVIDIA GPUs that turn on, and at that point, you will offer a service that is functionally identical to everybody else buying GPUs from NVIDIA. 
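Adding up the hypothetical build above, as a quick sketch (all figures are the ballpark estimates from this example, in millions of dollars):

```python
# Cost model for the hypothetical 25MW data center (millions of dollars).
gpu_racks = 600            # 200 GB200 racks at roughly $3M each
networking = 80            # high-speed interconnect so the racks can work in tandem
storage_and_servers = 35
cooling_and_power_gear = 5

it_cost = gpu_racks + networking + storage_and_servers + cooling_and_power_gear
print(f"IT equipment: ${it_cost}M")  # $720M

# Construction runs $8M-$12M per megawatt; assume the high end for 25MW.
construction = 25 * 12
total = it_cost + construction
print(f"Total, before power procurement: ${total / 1000:.2f}B")  # $1.02B
```

Note what dominates: the GPUs and their networking are roughly 70% of the bill, and that's before a single watt of power has been procured.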
Your competitors are Amazon, Google and Microsoft, followed by neoclouds — AI compute companies selling the same thing as you, except they're directly backed by NVIDIA and, frequently, by the big hyperscalers with brands that most people have heard of, like AWS and Azure. Oh, also, this stuff costs an indeterminately large amount of money to run. You may wonder why I can't tell you how much, and that's because nobody wants to actually discuss the cost of running GPUs, the thing that underpins our entire stock market. There are good reasons, too. One does not just run "a GPU" — it's a GPU in a server of other GPUs with associated hardware, all drawing power in varying amounts, all running in sync with networking gear that also draws power, with varying amounts of user demand and shifts in the costs of power from the power company. But what we can say is that the up-front cost of buying these GPUs and their associated crap is such that it's unclear if they will ever generate a profit, because these GPUs run hot, all the time, and that causes some amount of them to die. Here are some thoughts I have had: The NVIDIA situation is one of the most insane things I've seen in my life. The single-largest, single-most-valuable, single-most-profitable company on the stock market has got there through selling ultra-expensive hardware that takes hundreds of millions or billions of dollars (and years of construction in some cases) to start using, at which point it...doesn't make much revenue and doesn't seem to make a profit. Said hardware is funded by a mixture of cashflow from healthy businesses (see: Microsoft) or massive amounts of debt (see: everybody who is not a hyperscaler, and, at this point, some hyperscalers). The response to the continued proof that generative AI is not making money is to buy more GPUs, and it doesn't appear anybody has ever worked out why. This problem has been obvious for a long time, too.
Today I'm going to explain to you — simply, but at length — why I am deeply concerned, and how deeply insane this situation has become. A 25MW data center costs about $1 billion, with $600 million of that being GPUs — 200 GB200 racks, to be specific. It needs about 20 acres — 100,000 square feet for the data center, roughly. NVIDIA sells about $50 billion of GPUs and associated hardware in a quarter, so let's say that $40 billion of that is just the GPUs and $10 billion is everything else (primarily networking gear), so around 13,333 GB200 racks. I realize that NVIDIA sells far more than that (GB300 racks, singular GPUs, and so on).

Deep-pocketed hyperscalers like Microsoft, Google, Meta and Amazon represented 41.32% of NVIDIA's revenue in the middle of 2025, funneling free cash flow directly into Jensen Huang's pockets... ...for now. Amazon ($15 billion), Google ($25 billion), Meta ($30 billion) and Oracle ($18 billion) have all had to raise massive amounts of debt to continue to fund AI-focused capital expenditures, with more than half of that (per Rubenstein) spent on GPUs. Otherwise, basically anybody buying GPUs at any scale has to fund doing so with either venture capital (money raised in exchange for part of the company) or debt.

NVIDIA, at this point, is around 8% of the value of the S&P 500 — the 500 leading companies on the US stock market, meaning they meet certain criteria of size, liquidity (cash availability) and profitability. Its continued health — and representative value as a stock, which is not necessarily based on its actual numbers or health, but in this case kind of is? — has led the stock market to remarkable gains. It is not enough for NVIDIA to simply be a profitable company. It must continue beating the last quarter's revenue, again and again and again and again, forever. If that sounds dramatic, I assure you it is the truth.
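The rack arithmetic above can be sketched in a couple of lines. The ~$3 million per-rack price is derived from the piece's own "$600 million for 200 GB200 racks" figure; NVIDIA's actual product mix and pricing vary:

```python
# Rough rack arithmetic from the figures above: $600M buys 200 GB200 racks,
# so roughly $3M per rack, and $40B of quarterly GPU revenue then maps to
# about 13,333 rack-equivalents. Illustrative only.
price_per_rack = 600e6 / 200              # ~$3M per GB200 rack
racks_per_quarter = 40e9 / price_per_rack
print(round(racks_per_quarter))           # 13333
```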
NVIDIA's continued success — and its ability to continue delivering outsized beats of Wall Street's revenue estimates — depends on:

- The willingness of a few very large, cash-rich companies (Microsoft, Meta, Amazon and Google) to continue buying successive generations of NVIDIA GPUs forever.
- The ability of said companies to continue buying successive generations of GPUs forever.
- The ability of other, less-cash-rich companies like Oracle to continue being able to raise debt to buy massive amounts of GPUs — such as the $40 billion of GPUs that Oracle is buying for Stargate Abilene — forever. This is becoming a problem.
- The ability of unprofitable, debt-ridden companies like CoreWeave — AI "neoclouds" that use the GPUs they purchase from NVIDIA as collateral for loans to buy more GPUs — to continue raising that debt to buy more GPUs.
- The ability of anybody who buys these GPUs to actually install them and use them, which requires massive amounts of construction... and more power than is currently available, even to the most well-funded and conspicuous projects.

In simple terms, its success depends on the debt markets continuing to prop up its revenues, because there is not really enough free cash in the world to continue pumping it into NVIDIA at this rate. And after all of this, large language models, the only way to make any real money on any of these GPUs, must prove they can actually produce a profit. Per my article from September, I can find no compelling evidence (outside of boosters speciously claiming otherwise) that it's profitable to sell access to GPUs. Based on my calculations, there's likely little more than $61 billion of actual AI revenue in 2025 across every single AI company and hyperscaler. Note that I said "revenue." Absolutely nobody is making a profit.

0 views

Premium: The Hater's Guide To The AI Bubble Vol. 2

We're approaching the most ridiculous part of the AI bubble, with each day bringing us a new, disgraceful and weird headline. As I reported earlier in the week, OpenAI spent $12.4 billion on inference between 2024 and September 2025, and its revenue share with Microsoft heavily suggests it made at least $2.469 billion in 2024 (when reports had OpenAI at $3.7 billion for 2024), with the only missing revenue to my knowledge being the 20% Microsoft shares with OpenAI when it sells OpenAI models on Azure, and whatever cut Microsoft gives OpenAI from Bing.

Nevertheless, the gap between reported figures and what the documents I've seen said is dramatic. Despite reports that OpenAI made, in the first half of 2025, $4.3 billion in revenue on $2.5 billion of "cost of revenue," what I've seen shows that OpenAI spent $5.022 billion on inference (the process of creating an output using a model) in that period, and made at least $2.2735 billion. I, of course, am hedging aggressively, but I can find no explanation for the gaps. I also can't find an explanation for why Sam Altman said that OpenAI was "profitable on inference" in August 2025, nor how OpenAI will hit "$20 billion in annualized revenue" by end of 2025, nor how OpenAI will do "well more" than $13 billion this year. Perhaps there's a chance that for some 30 day period of this year OpenAI hits $1.66 billion in revenue (AKA $20 billion annualized), but even that would leave it short of its stated target revenue.

The very same day I ran that piece, somebody posted a clip of Microsoft CEO Satya Nadella, who had this to say when asked about recent revenue projections from AI labs:

I don't know, Satya: not fucking make shit up? Not embellish? Is it too much to ask that these companies make projections that adhere to reality, rather than whatever an investor would want to hear? Or, indeed, projections that perpetuate a myth of inevitability, but fly in the face of reality?
I get that in any investment scenario you want to sell a story, but the idea that the CEO of a company with a $3.8 trillion market cap is sitting around saying "what do you expect them to do, tell the truth? They need money for compute!" is fucking disgraceful. No, I do not believe a company should make overblown revenue projections, nor do I think it's good for the CEO of Microsoft to encourage the practice. I also seriously have to ask why Nadella believes that this is happening, and, indeed, who he might be specifically talking about, as Microsoft has particularly good insights into OpenAI's current and future financial health.

However, because Nadella was talking in generalities, this could refer to Anthropic, and it kinda makes sense, because Anthropic was just the subject of near-identical articles about its costs from both The Information and The Wall Street Journal, with The Information saying that Anthropic "projected a positive free cash flow as soon as 2027," and the Wall Street Journal saying that Anthropic "anticipates breaking even by 2028," with both pieces featuring the cash burn projections of both OpenAI and Anthropic based on "documents" or "investor projections" shared this summer. Both pieces focus on free cash flow, both pieces focus on revenue, and both pieces say that OpenAI is spending way more than Anthropic, and that Anthropic is on the path to profitability. The Information also includes a graph involving Anthropic's current and projected gross margins, with the company somehow hitting 75% gross margins by 2028.

How does any of this happen? Nobody seems to know! Per The Journal: …hhhhooowwwww????? I'm serious! How? The Information tries to answer: Is…that the case? Are there any kind of numbers to back this up?
Because Business Insider just ran a piece covering documents involving startups claiming that Amazon's chips had "performance challenges," were "plagued by frequent service disruptions," and "underperformed" NVIDIA H100 GPUs on latency, making them "less competitive" in terms of speed and cost. One startup "found Nvidia's older A100 GPUs to be as much as three times more cost-efficient than AWS's Inferentia 2 chips for certain workloads," and a research group called AI Singapore "determined that AWS's G6 servers, equipped with NVIDIA GPUs, offered better cost performance than Inferentia 2 across multiple use cases."

I'm not trying to dunk on The Wall Street Journal or The Information, as both are reporting what is in front of them, I just kind of wish somebody there would say "huh, is this true?" or "will they actually do that?" a little more loudly, perhaps using previously-written reporting. For example, in January 2024 The Information reported that Anthropic's gross margin in December 2023 was between 50% and 55%, CNBC stated in September 2024 that Anthropic's "aggregate" gross margin would be 38%, and then it turned out that Anthropic's 2024 gross margins were actually negative 109% (or negative 94% if you just focus on paying customers) according to The Information's November 2025 reporting.

In fact, Anthropic's gross margin appears to be a moving target. In July 2025, The Information was told by sources that "Anthropic recently told investors its gross profit margin from selling its AI models and Claude chatbot directly to customers was roughly 60% and is moving toward 70%," only to publish a few months later (in their November piece) that Anthropic's 2025 gross margin would be…47%, and would hit 63% in 2026. Huh? I'm not bagging on these outlets.
Everybody reports from the documents they get or what their sources tell them, and any piece you write comes with the risk that things could change, as they regularly do in running any kind of business. That being said, the gulf between "38%" and "negative 109%" gross margins is pretty fucking large, and suggests that whatever Anthropic is sharing with investors (I assume) is either so rapidly changing that giving a number is foolish, or made up on the spot as a means of pretending you have a functional business.

I'll put it a little more simply: it appears that much of the AI bubble is inflated on vibes, and I'm a little worried that the media is being too helpful. These companies have yet to prove themselves in any tangible way, and it's time for somebody to give a frank evaluation of where we stand. If I'm honest, a lot of this piece will be venting, because I am frustrated. When all of this collapses there will, I guarantee, be multiple startups that have outright lied to the media, and done so, in some cases, in ways that are equal parts obvious and brazen. My own work has received significantly more skepticism than OpenAI or Anthropic, two companies allegedly worth billions of dollars that appear to change their story with an aloof confidence borne of the knowledge that nobody reads or thinks too deeply about what their CEOs have to say, other than "wow, Anthropic said a new number!"

So I'm going to do my best to write about every single major AI company in one go. I am going to pull together everything I can find and give a frank evaluation of what they do, where they stand, their revenues, their funding situation, and, well, however else I feel about them. And honestly, I think we're approaching the end. The Information recently published one of the grimmest quotes I've seen in the bubble so far:

Hey, what was that? What was that about "growing concerns regarding the costs and benefits of AI"? What "capital shift"?
The fucking companies are telling you, to your face, that they know there's not a sustainable business model or great use case, and you are printing it and giving it the god damn thumbs up. How can you not be a hater at this point? This industry is loathsome, its products ranging from useless to niche at best, its costs unsustainable, and its future full of fire and brimstone.

This is the Hater's Guide To The AI Bubble Volume 2 — a premium sequel to the Hater's Guide from earlier this year — where I will finally bring some clarity to a hype cycle that has yet to prove its worth, breaking down, industry-by-industry and company-by-company, the financial picture, relative success and potential future of the companies that matter. Let's get to it.

0 views

Exclusive: Here's How Much OpenAI Spends On Inference and Its Revenue Share With Microsoft

As with my Anthropic exclusive from a few weeks ago, though this feels like a natural premium piece, I decided it was better to publish on my free one so that you could all enjoy it. If you liked or found this piece valuable, please subscribe to my premium newsletter — here's $10 off the first year of an annual subscription. I have put out over a hundred thousand words of coverage in the last three months, most of which is on my premium, and I'd really appreciate your support. I also did an episode of my podcast Better Offline about this.

Before publishing, I discussed the data with a Financial Times reporter. Microsoft and OpenAI both declined to comment to the FT. If you ever want to share something with me in confidence, my signal is ezitron.76, and I'd love to hear from you. What I'll describe today will be a little more direct than usual, because I believe the significance of the information requires me to be as specific as possible.

Based on documents viewed by this publication, I am able to report OpenAI's inference spend on Microsoft Azure, in addition to its payments to Microsoft as part of its 20% revenue share agreement, which was reported in October 2024 by The Information. In simpler terms, Microsoft receives 20% of OpenAI's revenue. I do not have OpenAI's training spend, nor do I have information on the entire extent of OpenAI's revenues, as it appears that Microsoft shares some percentage of its revenue from Bing, as well as 20% of the revenue it receives from selling OpenAI's models. According to The Verge:

Nevertheless, I am going to report what I've been told. One small note — for the sake of clarity, every time I mention a year going forward, I'll be referring to the calendar year, and not Microsoft's financial year (which ends in June). The numbers in this post differ from those that have been reported publicly.
For example, previous reports had said that OpenAI had spent $2.5 billion on "cost of revenue" — which I believe means OpenAI's inference costs — in the first half of CY2025. According to the documents viewed by this newsletter, OpenAI spent $5.022 billion on inference alone with Microsoft Azure in the first half of CY2025. This is a pattern that has continued through the end of September. By that point in CY2025 — three months later — OpenAI had spent $8.67 billion on inference.

OpenAI's inference costs have risen consistently over the last 18 months, too. For example, OpenAI spent $3.76 billion on inference in CY2024, meaning that OpenAI has already more than doubled its inference costs in CY2025 through September. Based on its reported revenues of $3.7 billion in CY2024 and $4.3 billion in revenue for the first half of CY2025, it seems that OpenAI's inference costs easily eclipsed its revenues.

Yet, as mentioned previously, I am also able to shed light on OpenAI's revenues, as these documents also reveal the amounts that Microsoft takes as part of its 20% revenue share with OpenAI. Concerningly, extrapolating OpenAI's revenues from this revenue share does not produce numbers that match those previously reported.

According to the documents, Microsoft received $493.8 million in revenue share payments in CY2024 from OpenAI — implying revenues for CY2024 of at least $2.469 billion, or around $1.23 billion less than the $3.7 billion that has been previously reported. Similarly, for the first half of CY2025, Microsoft received $454.7 million as part of its revenue share agreement, implying OpenAI's revenues for that six-month period were at least $2.273 billion, or around $2 billion less than the $4.3 billion previously reported. Through September, Microsoft's revenue share payments totalled $865.9 million, implying OpenAI's revenues are at least $4.329 billion. According to Sam Altman, OpenAI's revenue is "well more" than $13 billion.
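The extrapolation above is simple arithmetic: if Microsoft takes 20% of OpenAI's revenue, then whatever Microsoft received implies revenue of at least five times that amount. A sketch using the payment figures from the documents:

```python
# Extrapolating OpenAI's minimum revenue from Microsoft's 20% revenue
# share: if Microsoft received X in payments, OpenAI made at least
# X / 0.20 (i.e., X * 5). Figures are in millions of USD.
def implied_revenue(share_millions: float) -> float:
    return round(share_millions * 5, 1)  # 1 / 0.20 == 5

assert implied_revenue(493.8) == 2469.0   # CY2024: at least ~$2.469B
assert implied_revenue(454.7) == 2273.5   # H1 CY2025: at least ~$2.273B
assert implied_revenue(865.9) == 4329.5   # through Q3 CY2025: ~$4.329B
```

This is a floor, not a total: any revenue Microsoft owes OpenAI (Azure model sales, Bing) sits outside these payments.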
I am not sure how to reconcile that statement with the documents I have viewed. The following numbers are calendar years. I will add that, where I have them, I will include OpenAI's leaked or reported revenues. In some cases, the numbers match up. In others they do not. Though I do not know for certain, the only way to reconcile this would be some sort of creative means of measuring "annualized" or "recurring" revenue. I am confident in saying that I have read every single story about OpenAI's revenue ever written, and at no point does OpenAI (or the documents reporting anything) explain how the company defines "annualized" or "annual recurring revenue."

I must be clear that the following is me speaking in generalities, and not about OpenAI specifically, but you can get really creative with annualized revenue or annual recurring revenue. You can say 30 days, 28 days, and you can even choose a period of time that isn't a calendar month — so, say, the best 30 days of your company's existence across two different months. I have no idea how OpenAI defines this metric, and default to saying that "annualized" or "ARR" means the stated figure divided by 12.

The Financial Times reported on February 9 2024 that OpenAI's revenues had "surpassed $2 billion on an annualised basis" in December 2023, working out to $166.6 million in a month. The Information reported on June 12 2024 that OpenAI had "more than doubled its annualized revenue to $3.4 billion in the last six months or so," working out to around $283 million in a month, likely referring to this period. On September 27 2024, the New York Times reported that "OpenAI's monthly revenue hit $300 million in August…and the company expects about $3.7 billion in annual sales [in 2024]," according to a financial professional's review of documents.
On June 9, 2025, an OpenAI spokesperson told CNBC that it had hit "$10 billion annual recurring revenue," excluding licensing revenue from OpenAI's 20% revenue share and "large, one-time deals." $10bn in annualized revenue works out to around $833 million in a month. These numbers are inclusive of OpenAI's revenue share payments to Microsoft and OpenAI's inference spend. There could potentially be royalty payments made to OpenAI as part of its deal to receive 20% of Microsoft's sales of OpenAI's models, or other revenue related to its revenue share with Bing.

Due to the sensitivity and significance of this information, I am taking a far more blunt approach with this piece. Based on the information in this piece, OpenAI's costs and revenues are potentially dramatically different to what we believed. The Information reported in October 2024 that OpenAI's revenue could be $4 billion, and inference costs $2 billion, based on documents "which include financial statements and forecasts," and specifically added the following:

I do not know how to reconcile this with what I am reporting today. In the first half of CY2024, based on the information in the documents, OpenAI's inference costs were $1.295 billion, and its revenues at least $934 million. Indeed, it is tough to reconcile what I am reporting with much of what has been reported about OpenAI's costs and revenues.

OpenAI's inference spend with Microsoft Azure between CY2024 and Q3 CY2025 was $12.43 billion. That is an astonishing figure, one that dramatically dwarfs any and all reporting, which, based on my analysis, suggested that OpenAI spent $2 billion on inference in 2024 and $2.5 billion through H1 CY2025. In other words, inference costs are nearly triple those reported elsewhere. Similarly, OpenAI's extrapolated revenues are dramatically different to those reported.
While we do not have a final tally for 2024, the indicators presented in the documents viewed contrast starkly with the reported predictions from that year. Both reports of OpenAI's 2024 revenues (CNBC, The Information) are from the same year and are projections of potential final totals, though The Information's story about OpenAI's H1 CY2025 revenues said that "OpenAI generated $4.3 billion in revenue in the first half of 2025, about 16% more than it generated all of last year," which would bring us to $3.612 billion in revenue, or $1.145 billion more than is implied by OpenAI's revenue share numbers paid to Microsoft.

I do not have an answer for inference, other than I believe that OpenAI is spending far more money on inference than we were led to believe, and that the current numbers reported do not resemble those in the documents. Based on these numbers, it appears that OpenAI may be the single-most cash intensive startup of all time, and that the cost of running large language models may not be something that can be supported by revenues. Even if revenues were to match those that had been reported, OpenAI's inference spend on Azure consumes them, and appears to scale above — and ahead of — revenue.

I also cannot reconcile these numbers with the reporting that OpenAI will have a cash burn of $9 billion in CY2025. On inference alone, OpenAI has already spent $8.67 billion through Q3 CY2025. Similarly, I cannot see a path for OpenAI to hit its projected $13 billion in revenue by the end of 2025, nor can I see on what basis Mr. Altman could state that OpenAI will make "well more" than $13 billion this year.

I cannot and will not speak to the financial health of OpenAI in this piece, but I will say this: these numbers are materially different to what has been reported, and the significance of OpenAI's inference spend alone makes me wonder about the larger cost picture for generative AI.
If it costs this much to run inference for OpenAI, I believe it costs this much for any generative AI firm to run on OpenAI's models. If it does not, OpenAI's costs are dramatically higher than the prices it is charging its customers, which makes me wonder whether price increases could be necessary to begin making more money, or at the very least losing less. Similarly, if OpenAI's costs are this high, it makes me wonder about the margins of any frontier model developer. Here is the quarterly breakdown from the documents:

Q1 CY2024
- Inference: $546.8 million
- Microsoft Revenue Share: $77.3 million
- Implied OpenAI Revenue: at least $386.5 million

Q2 CY2024
- Inference: $748.3 million
- Microsoft Revenue Share: $109.5 million
- Implied OpenAI Revenue: at least $547.5 million

Q3 CY2024
- Inference: $1.005 billion
- Microsoft Revenue Share: $139.2 million
- Implied OpenAI Revenue: at least $696 million

Q4 CY2024
- Inference: $1.467 billion
- Microsoft Revenue Share: $167.8 million
- Implied OpenAI Revenue: at least $839 million

Total inference spend for CY2024: $3.767 billion
Total implied revenue for CY2024: at least $2.469 billion
Reported (projected) revenue for CY2024: $3.7 billion, per CNBC in September 2024. The Information also reported that expected revenue could be as high as $4 billion in a piece from October 2024.
Reported inference costs for CY2024: $2 billion, per The Information.

Q1 CY2025
- Inference: $2.075 billion
- Microsoft Revenue Share: $206.4 million
- Implied OpenAI Revenue: $1.032 billion

Q2 CY2025
- Inference: $2.947 billion
- Microsoft Revenue Share: $248.3 million
- Implied OpenAI Revenue: $1.2415 billion

H1 CY2025 Inference: $5.022 billion
H1 CY2025 Revenue: at least $2.273 billion
Reported H1 CY2025 Revenue: $4.3 billion (per The Information)
Reported H1 CY2025 "Cost of Revenue": $2.5 billion (per The Information)

Q3 CY2025
- Inference: $3.648 billion
- Microsoft Revenue Share: $411.1 million
- Implied OpenAI Revenue: at least $2.056 billion
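As a sanity check, the quarterly figures above do reconcile with the piece's reported totals. A short sketch that verifies the sums (billions of USD, rounded as in the piece):

```python
# Sanity check on the quarterly figures reported above: they should sum
# to the reported totals (all figures in billions of USD).
cy2024_inference = [0.5468, 0.7483, 1.005, 1.467]   # four quarters of CY2024
cy2024_share = [0.0773, 0.1095, 0.1392, 0.1678]     # Microsoft's 20% cut
assert round(sum(cy2024_inference), 3) == 3.767     # ~$3.767B total inference
assert round(sum(cy2024_share), 4) == 0.4938        # $493.8M total share

h1_2025 = [2.075, 2.947]                            # Q1-Q2 CY2025 inference
assert round(sum(h1_2025), 3) == 5.022              # $5.022B for H1 CY2025
assert round(sum(h1_2025) + 3.648, 2) == 8.67       # $8.67B through Q3
print("quarterly figures reconcile with the reported totals")
```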

0 views

Premium: OpenAI Burned $4.1 Billion More Than We Knew - Where Is Its Money Going?

Soundtrack: Queens of the Stone Age - Song For The Dead

Editor's Note: The original piece had a mathematical error around burn rate; it's been fixed. Also, welcome to another premium issue! Please do subscribe — this is a massive, 7,000-or-so word piece, and that's the kind of depth you get every single week for your subscription.

A few days ago, Sam Altman said that OpenAI's revenues were "well more" than $13bn in 2025, a statement I question given that, based on other outlets' reporting, OpenAI only made $4.3bn through the first half of 2025, and likely around a billion a month since, which I estimate means the company made around $8bn by the end of September. This is an estimate. If I receive information to the contrary, I'll report it.

Nevertheless, OpenAI is also burning a lot of money. In recent public disclosures (as reported by The Register), Microsoft noted that it had funding commitments to OpenAI of $13bn, of which $11.6bn had been funded by September 30 2025. These disclosures also revealed that OpenAI lost $12bn in the last quarter — Microsoft's Fiscal Year Q1 2026, representing July through September 2025. To be clear, this is actual, real accounting, rather than the figures leaked to reporters. It's not that leaks are necessarily a problem — it's just that anything appearing on any kind of SEC filing generally has to pass a very, very high bar. There is absolutely nothing about these numbers that suggests that OpenAI is "profitable on inference," as Sam Altman told a group of reporters at a dinner in the middle of August. Let me get specific.
The Information reported that through the first half of 2025, OpenAI spent $6.7bn on research and development, "which likely include[s] servers to develop new artificial intelligence." The common refrain here is that OpenAI "is spending so much on training that it's eating the rest of its margins," but if that were the case here, it would mean that OpenAI spent the equivalent of six months' training in the space of three. I think the more likely answer is that OpenAI is spending massive amounts of money on staff, sales and marketing ($2bn alone in the first half of the year), real estate, lobbying, data, and, of course, inference.

According to The Information, OpenAI had $9.6bn in cash at the end of June 2025. Assuming that OpenAI lost $12bn at the end of calendar year Q3 2025, and made — I'm being generous — around $3.3bn (or $1.1bn a month) within that quarter, this would suggest OpenAI's operations cost it over $15bn in the space of three months. Where, exactly, is this money going? And how do the published numbers actually make sense when you reconcile them with Microsoft's disclosures?

In the space of three months, OpenAI's costs — if we are to believe what was leaked to The Information (and, to be clear, I respect their reporting) — went from a net loss of $13.5bn in six months to, I assume, a net loss of $12bn in three months. Though there are likely losses related to stock-based compensation, this only represented a cost of $2.5bn in the first half of 2025. The Information also reported that OpenAI "spent more than $2.5 billion on its cost of revenue," suggesting inference costs of…around that?

I don't know. I really don't know. But something isn't right, and today I'm going to dig into it. In this newsletter I'm going to reveal how OpenAI's reported revenues and costs don't line up, and that there's $4.1 billion of cash burn that has yet to be reported elsewhere.
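The back-of-envelope above is simply loss plus revenue for the quarter. A sketch, using this piece's estimates (the $1.1bn/month revenue figure is my generous guess, not a disclosed number):

```python
# If OpenAI lost ~$12bn in calendar Q3 2025 while making roughly $1.1bn
# a month, its operations cost roughly loss + revenue for the quarter.
# Inputs are this piece's estimates, not figures from any filing.
quarterly_loss = 12.0             # $bn, per Microsoft's disclosure
quarterly_revenue = 1.1 * 3       # $bn, a generous estimate
quarterly_costs = quarterly_loss + quarterly_revenue
print(f"~${quarterly_costs:.1f}bn in three months")  # over $15bn
```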

2 views

Big Tech Needs $2 Trillion In AI Revenue By 2030 or They Wasted Their Capex

As I've established again and again, we are in an AI bubble, and no, I cannot tell you when the bubble will pop, because we're in the stupidest financial era since the great financial crisis — though, I hope, not quite as severe in its eventual apocalyptic circumstances. By the end of the year, Microsoft, Amazon, Google and Meta will have spent over $400bn in capital expenditures, much of it focused on building AI infrastructure, on top of $228.4bn in capital expenditures in 2024 and around $148bn in capital expenditures in 2023, for a total of around $776bn in the space of three years. At some point, all of these bills will have to come due.

You see, big tech has been given incredible grace by the markets, never having to actually show that their revenue growth is coming from selling AI or AI-related services. Only Microsoft ever bothered, piping up in October 2024 to say it was making $833 million a month ($10bn ARR) from AI and then $1.08 billion a month in January 2025 ($13bn ARR), and then choosing to never report it again. As reported by The Information, $10bn of Microsoft's Azure revenue this year will come from OpenAI's spend on compute, which, as also reported by The Information, is paid at "...a heavily discounted rental rate that essentially only covers Microsoft's costs for operating the servers."

It's absolutely astonishing that such egregious expenditures have never brought with them any scrutiny of the actual return on investment, or any kind of demands for disclosure of the resulting revenue. As a result, big tech has used their already-successful products and existing growth to pretend that something is actually happening other than Satya Nadella standing with his hands on his hips and talking about his favourite ways to use Copilot, a product so unpopular that only eight million active Microsoft 365 customers are paying for it out of over 440 million users.
This stuff is so unpopular that the world's biggest and most powerful software company — one with a virtual monopoly on the office productivity market — had to use dark patterns to get people to pay for it.

Earlier in the week, OpenAI announced that it had "successfully converted to a more traditional corporate structure," giving Microsoft a 27% position in the new entity worth $130bn, with the Wall Street Journal vaguely saying that Microsoft will also have "the ability to get more ownership as the for-profit becomes more valuable." Said deal also brought with it a commitment to spend $250bn on Microsoft Azure, which Microsoft has booked as "remaining performance obligations," in the same way that Oracle stuffed its RPOs with $300bn from OpenAI — a company that cannot afford to pay either company even a tenth of those obligations and is on the hook for over a trillion dollars in the next four years.

But OpenAI isn't the only one with a bill coming due. As we speak, the markets are still in the thrall of an egregious, hype-stuffed bubble, with the hogs of Wall Street braying and oinking their loudest as Jensen Huang claims — without any real breakdown as to who is buying them — that NVIDIA has over $500bn in bookings for its AI chips, with little worry about whether there's enough money to actually pay for all of those GPUs or, more operatively, whether anybody plugging them in is making any profits off of them.

To be clear, everybody is losing money on AI. Every single startup, every single hyperscaler, everybody who isn't selling GPUs or servers with GPUs inside them is losing money on AI. No matter how many headlines or analyst emissions you consume, the reality is that big tech has sunk over half a trillion dollars into this bullshit over two or three years, and they are only losing money. So, at what point does all of this become worth it? Actually, let me reframe the question: how does any of this become worthwhile?
Today, I'm going to try and answer that question, and have ultimately come to a brutal conclusion: due to the onerous costs of building data centers, buying GPUs and running AI services, big tech has to add $2 trillion in AI revenue in the next four years. Honestly, I think they might need more. No, really.

Big tech has already spent $605 billion in capital expenditures since 2023, with a chunk of that dedicated to 5-year-old (A100) and 4-year-old (H100) GPUs, and the rest dedicated to buying Blackwell chips that The Information reports have gross margins of negative 100%: Big tech's lack of tangible revenue (let alone profits) from selling AI services only compounds the problem, meaning every dollar of capex burned on AI is currently putting these companies further in the hole.

Yet there's also another problem: GPUs are uniquely expensive to purchase, run and maintain, requiring billions of dollars of data center construction and labor before you can even make a dollar. Worse still, their value decays every single year, in part thanks to the physics of heat and electricity, and NVIDIA releasing a new chip every single year.
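The headline capex arithmetic in this piece tallies up like this (combined Microsoft, Amazon, Google and Meta figures as quoted above; the 2025 number is a projection, and all are approximate):

```python
# Tallying the hyperscaler capex figures quoted in this piece
# (billions of USD; 2025 is a year-end projection).
capex_by_year = {2023: 148, 2024: 228.4, 2025: 400}
total = sum(capex_by_year.values())
print(f"~${total:.0f}B over three years")  # ~$776B
```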

0 views

This Is How Much Anthropic and Cursor Spend On Amazon Web Services

So, I originally planned for this to be on my premium newsletter, but decided it was better to publish on my free one so that you could all enjoy it. If you liked it, please consider subscribing to support my work. Here’s $10 off the first year of annual . I’ve also recorded an episode about this on my podcast Better Offline ( RSS feed , Apple , Spotify , iHeartRadio ), it’s a little different but both handle the same information, just subscribe and it'll pop up.  Over the last two years I have written again and again about the ruinous costs of running generative AI services, and today I’m coming to you with real proof. Based on discussions with sources with direct knowledge of their AWS billing, I am able to disclose the amounts that AI firms are spending, specifically Anthropic and AI coding company Cursor, its largest customer . I can exclusively reveal today Anthropic’s spending on Amazon Web Services for the entirety of 2024, and for every month in 2025 up until September, and that that Anthropic’s spend on compute far exceeds that previously reported.  Furthermore, I can confirm that through September, Anthropic has spent more than 100% of its estimated revenue (based on reporting in the last year) on Amazon Web Services, spending $2.66 billion on compute on an estimated $2.55 billion in revenue. Additionally, Cursor’s Amazon Web Services bills more than doubled from $6.2 million in May 2025 to $12.6 million in June 2025, exacerbating a cash crunch that began when Anthropic introduced Priority Service Tiers, an aggressive rent-seeking measure that begun what I call the Subprime AI Crisis , where model providers begin jacking up the prices on their previously subsidized rates. Although Cursor obtains the majority of its compute from Anthropic — with AWS contributing a relatively small amount, and likely also taking care of other parts of its business — the data seen reveals an overall direction of travel, where the costs of compute only keep on going up .  
Let’s get to it. In February of this year, The information reported that Anthropic burned $5.6 billion in 2024, and made somewhere between $400 million and $600 million in revenue: While I don’t know about prepayment for services, I can confirm from a source with direct knowledge of billing that Anthropic spent $1.35 billion on Amazon Web Services in 2024, and has already spent $2.66 billion on Amazon Web Services through the end of September. Assuming that Anthropic made $600 million in revenue, this means that Anthropic spent $6.2 billion in 2024, leaving $4.85 billion in costs unaccounted for.  The Information’s piece also brings up another point: Before I go any further, I want to be clear that The Information’s reporting is sound, and I trust that their source (I have no idea who they are or what information was provided) was operating in good faith with good data. However, Anthropic is telling people it spent $1.5 billion on just training when it has an Amazon Web Services bill of $1.35 billion, which heavily suggests that its actual compute costs are significantly higher than we thought, because, to quote SemiAnalysis, “ a large share of Anthropic’s spending is going to Google Cloud .”  I am guessing, because I do not know, but with $4.85 billion of other expenses to account for, it’s reasonable to believe Anthropic spent an amount similar to its AWS spend on Google Cloud. I do not have any information to confirm this, but given the discrepancies mentioned above, this is an explanation that makes sense. I also will add that there is some sort of undisclosed cut that Amazon gets of Anthropic’s revenue, though it’s unclear how much. According to The Information , “Anthropic previously told some investors it paid a substantially higher percentage to Amazon [than OpenAI’s 20% revenue share with Microsoft] when companies purchase Anthropic models through Amazon.” I cannot confirm whether a similar revenue share agreement exists between Anthropic and Google. 
This also makes me wonder exactly where Anthropic’s money is going. Anthropic has, based on what I can find, raised $32 billion in the last two years, starting out 2023 with a $4 billion investment from Amazon from September 2023 (bringing the total to $37.5 billion), where Amazon was named its “primary cloud provider” nearly eight months after Anthropic announced Google was Anthropic’s “cloud provider.,” which Google responded to a month later by investing another $2 billion on October 27 2023 , “involving a $500 million upfront investment and an additional $1.5 billion to be invested over time,” bringing its total funding from 2023 to $6 billion. In 2024, it would raise several more rounds — one in January for $750 million, another in March for $884.1 million, another in May for $452.3 million, and another $4 billion from Amazon in November 2024 , which also saw it name AWS as Anthropic’s “primary cloud and training partner,” bringing its 2024 funding total to $6 billion. In 2025 so far, it’s raised a $1 billion round from Google , a $3.5 billion venture round in March, opened a $2.5 billion credit facility in May, and completed a $13 billion venture round in September, valuing the company at $183 billion . This brings its total 2025 funding to $20 billion.  While I do not have Anthropic’s 2023 numbers, its spend on AWS in 2024 — around $1.35 billion — leaves (as I’ve mentioned) $4.85 billion in costs that are unaccounted for. The Information reports that costs for Anthropic’s 521 research and development staff reached $160 million in 2024 , leaving 394 other employees unaccounted for (for 915 employees total), and also adding that Anthropic expects its headcount to increase to 1900 people by the end of 2025. 
The Information also adds that Anthropic “expects to stop burning cash in 2027.” This leaves two unanswered questions: An optimist might argue that Anthropic is just growing its pile of cash so it’s got a warchest to burn through in the future, but I have my doubts. In a memo revealed by WIRED , Anthropic CEO Dario Amodei stated that “if [Anthropic wanted] to stay on the frontier, [it would] gain a very large benefit from having access to this capital,” with “this capital” referring to money from the Middle East.  Anthropic and Amodei’s sudden willingness to take large swaths of capital from the Gulf States does not suggest that it’s not at least a little desperate for capital, especially given Anthropic has, according to Bloomberg , “recently held early funding talks with Abu Dhabi-based investment firm MGX” a month after raising $13 billion . In my opinion — and this is just my gut instinct — I believe that it is either significantly more expensive to run Anthropic than we know, or Anthropic’s leaked (and stated) revenue numbers are worse than we believe. I do not know one way or another, and will only report what I know. So, I’m going to do this a little differently than you’d expect, in that I’m going to lay out how much these companies spent, and draw throughlines from that spend to its reported revenue numbers and product announcements or events that may have caused its compute costs to increase. I’ve only got Cursor’s numbers from January through September 2025, but I have Anthropic’s AWS spend for both the entirety of 2024 and through September 2025. So, this term is one of the most abused terms in the world of software, but in this case , I am sticking to the idea that it means “month times 12.” So, if a company made $10m in January, you would say that its annualized revenue is $120m. 
Obviously, there’s a lot of (when you think about it, really obvious) problems with this kind of reporting — and thus, you only ever see it when it comes to pre-IPO firms — but that’s besides the point. I give you this explanation because, when contrasting Anthropic’s AWS spend with its revenues, I’ve had to work back from whatever annualized revenues were reported for that month.  Anthropic’s 2024 revenues are a little bit of a mystery, but, as mentioned above, The Information says it might be between $400 million and $600 million. Here’s its monthly AWS spend.  I’m gonna be nice here and say that Anthropic made $600 million in 2024 — the higher end of The Information’s reporting — meaning that it spent around 226% of its revenue ($1.359 billion) on Amazon Web Services. [Editor's note: this copy originally had incorrect maths on the %. Fixed now.] Thanks to my own analysis and reporting from outlets like The Information and Reuters, we have a pretty good idea of Anthropic’s revenues for much of the year. That said, July, August, and September get a little weirder, because we’re relying on “almosts” and “approachings,” as I’ll explain as we go. I’m also gonna do an analysis on a month-by-month basis, because it’s necessary to evaluate these numbers in context.  In this month, Anthropic’s reported revenue was somewhere from $875 million to $1 billion annualized , meaning either $72.91 million or $83 million for the month of January. In February, as reported by The Information , Anthropic hit $1.4 billion annualized revenue, or around $116 million each month. In March, as reported by Reuters , Anthropic hit $2 billion in annualized revenue, or $166 million in revenue. Because February is a short month, and the launch took place on February 24 2025, I’m considering the launches of Claude 3.7 Sonnet and Claude Code’s research preview to be a cost burden in the month of March. And man, what a burden! 
Costs increased by $59.1 million, primarily across compute categories, but with a large ($2 million since January) increase in monthly costs for S3 storage. I estimate, based on a 22.4% compound growth rate, that Anthropic hit around $2.44 billion in annualized revenue in April, or $204 million in revenue. Interestingly, this was the month where Anthropic launched its $100 and $200 dollar a month “Max” plan s, and it doesn’t seem to have dramatically increased its costs. Then again, Max is also the gateway to things like Claude Code, which I’ll get to shortly. In May, as reported by CNBC , Anthropic hit $3 billion in annualized revenue, or $250 million in monthly average revenue. This was a big month for Anthropic, with two huge launches on May 22 2025 — its new, “more powerful” models Claude Sonnet and Opus 4, as well as the general availability of its AI coding environment Claude Code. Eight days later, on May 30 2025, a page on Anthropic's API documentation appeared for the first time: " Service Tiers ": Accessing the priority tier requires you to make an up-front commitment to Anthropic , and said commitment is based on a number of months (1, 3, 6 or 12) and the number of input and output tokens you estimate you will use each minute.  As I’ll get into in my June analysis, Anthropic’s Service Tiers exist specifically for it to “guarantee” your company won’t face rate limits or any other service interruptions, requiring a minimum spend, minimum token throughput, and for you to pay higher rates when writing to the cache — which is, as I’ll explain, a big part of running an AI coding product like Cursor. Now, the jump in costs — $65.1 million or so between April and May — likely comes as a result of the final training for Sonnet and Opus 4, as well as, I imagine, some sort of testing to make sure Claude Code was ready to go. In June, as reported by The Information, Anthropic hit $4 billion in annualized revenue, or $333 million. 
Anthropic’s revenue spiked by $83 million this month, and so did its costs by $34.7 million.  I have, for a while, talked about the Subprime AI Crisis , where big tech and companies like Anthropic, after offering subsidized pricing to entice in customers, raise the rates on their customers to start covering more of their costs, leading to a cascade where businesses are forced to raise their prices to handle their new, exploding costs. And I was god damn right. Or, at least, it sure looks like I am. I’m hedging, forgive me. I cannot say for certain, but I see a pattern.  It’s likely the June 2025 spike in revenue came from the introduction of service tiers, which specifically target prompt caching, increasing the amount of tokens you’re charged for as an enterprise customer based on the term of the contract, and your forecast usage. Per my reporting in July :  Cursor, as Anthropic’s largest client (the second largest being Github Copilot), represents a material part of its revenue, and its surging popularity meant it was sending more and more revenue Anthropic’s way.  Anysphere, the company that develops Cursor, hit $500 million annualized revenue ($41.6 million) by the end of May , which Anthropic chose to celebrate by increasing its costs. On June 16 2025, Cursor launched a $200-a-month “Ultra” plan , as well as dramatic changes to its $20-a-month Pro pricing that, instead of offering 500 “fast” responses using models from Anthropic and OpenAI, now effectively provided you with “at least” whatever you paid a month (so $20-a-month got at least $20 of credit), massively increasing the costs for users , with one calling the changes a “rug pull” after spending $71 in a single day . As I’ll get to later in the piece, Cursor’s costs exploded from $6.19 million in May 2025 to $12.67 million in June 2025, and I believe this is a direct result of Anthropic’s sudden and aggressive cost increases.  
Similarly, Replit, another AI coding startup, moved to “Effort-Based Pricing” on June 18 2025 . I have not got any information around its AWS spend. I’ll get into this a bit later, but I find this whole situation disgusting. In July, as reported by Bloomberg , Anthropic hit $5 billion in annualized revenue, or $416 million. While July wasn’t a huge month for announcements, it was allegedly the month that Claude Code was generating “nearly $400 million in annualized revenue,” or $33.3 million ( according to The Information , who says Anthropic was “approaching” $5 billion in annualized revenue - which likely means LESS than that - but I’m going to go with the full $5 billion annualized for sake of fairness.  There’s roughly an $83 million bump in Anthropic’s revenue between June and July 2025, and I think Claude Code and its new rates are a big part of it. What’s fascinating is that cloud costs didn’t increase too much — by only $1.8 million, to be specific. In August, according to Anthropic, its run-rate “ reached over $5 billion ,” or in or around $416 million. I am not giving it anything more than $5 billion, especially considering in July Bloomberg’s reporting said “about $5 billion.” Costs grew by $60.5 this month, potentially due to the launch of Claude Opus 4.1 , Anthropic’s more aggressively expensive model, though revenues do not appear to have grown much along the way. Yet what’s very interesting is that Anthropic — starting August 28 — launched weekly rate limits on its Claude Pro and Max plans. I wonder why? Oh fuck! Look at that massive cost explosion! Anyway, according to Reuters, Anthropic’s run rate is “approaching $7 billion” in October , and for the sake of fairness , I am going to just say it has $7 billion annualized, though I believe this number to be lower. “Approaching” can mean a lot of different things — $6.1 billion, $6.5 billion — and because I already anticipate a lot of accusations of “FUD,” I’m going to err on the side of generosity. 
If we assume a $6.5 billion annualized rate, that would make this month’s revenue $541.6 million, or 95.8% of its AWS spend.   Nevertheless, Anthropic’s costs exploded in the space of a month by $135.2 million (35%) - likely due to the fact that users, as I reported in mid-July, were costing it thousands or tens of thousands of dollars in compute , a problem it still faces to this day, with VibeRank showing a user currently spending $51,291 in a calendar month on a $200-a-month subscription . If there were other costs, they likely had something to do with the training runs for the launches of Sonnet 4.5 on September 29 2025 and Haiku 4.5 in October 2025 . While these costs only speak to one part of its cloud stack — Anthropic has an unknowable amount of cloud spend on Google Cloud, and the data I have only covers AWS — it is simply remarkable how much this company spends on AWS, and how rapidly its costs seem to escalate as it grows. Though things improved slightly over time — in that Anthropic is no longer burning over 200% of its revenue on AWS alone — these costs have still dramatically escalated, and done so in an aggressive and arbitrary manner.  So, I wanted to visualize this part of the story, because I think it’s important to see the various different scenarios. THE NUMBERS I AM USING ARE ESTIMATES CALCULATED BASED ON 25%, 50% and 100% OF THE AMOUNTS THAT ANTHROPIC HAS SPENT ON AMAZON WEB SERVICES THROUGH SEPTEMBER.  I apologize for all the noise, I just want it to be crystal clear what you see next.   As you can see, all it takes is for Anthropic to spend (I am estimating) around 25% of its Amazon Web Services bills (for a total of around $3.33 billion in compute costs through the end of September) to savage any and all revenue ($2.55 billion) it’s making.  
Assuming Anthropic spends half of its  AWS spend on Google Cloud, this number climbs to $3.99 billion, and if you assume - and to be clear, this is an estimate - that it spends around the same on both Google Cloud and AWS, Anthropic has spent $5.3 billion on compute through the end of September. I can’t tell you which it is, just that we know for certain that Anthropic is spending money on Google Cloud, and because Google owns 14% of the company — rivalling estimates saying Amazon owns around 15-19% — it’s fair to assume that there’s a significant spend. I have sat with these numbers for a great deal of time, and I can’t find any evidence that Anthropic has any path to profitability outside of aggressively increasing the prices on their customers to the point that its services will become untenable for consumers and enterprise customers alike. As you can see from these estimated and reported revenues, Anthropic’s AWS costs appear to increase in a near-linear fashion with its revenues, meaning that the current pricing — including rent-seeking measures like Priority Service Tiers — isn’t working to meet the burden of its costs. We do not know its Google Cloud spend, but I’d be shocked if it was anything less than 50% of its AWS bill. If that’s the case, Anthropic is in real trouble - the cost of the services underlying its business increase the more money they make. It’s becoming increasingly apparent that Large Language Models are not a profitable business. While I cannot speak to Amazon Web Services’ actual costs, it’s making $2.66 billion from Anthropic, which is the second largest foundation model company in the world.  Is that really worth $105 billion in capital expenditures ? Is that really worth building a giant 1200 acre data center in Indiana with 2.2GW of electricity? What’s the plan, exactly? Let Anthropic burn money for the foreseeable future until it dies, and then pick up the pieces? Wait until Wall Street gets mad at you and then pull the plug? 
Who knows.  But let’s change gears and talk about Cursor — Anthropic’s largest client and, at this point, a victim of circumstance. Amazon sells Anthropic’s models through Amazon Bedrock , and I believe that AI startups are compelled to spend some of their AI model compute costs through Amazon Web Services. Cursor also sends money directly to Anthropic and OpenAI, meaning that these costs are only one piece of its overall compute costs. In any case, it’s very clear that Cursor buys some degree of its Anthropic model spend through Amazon. I’ll also add that Tom Dotan of Newcomer reported a few months ago that an investor told him that “Cursor is spending 100% of its revenue on Anthropic.” Unlike Anthropic, we lack thorough reporting of the month-by-month breakdown of Cursor’s revenues. I will, however, mention them in the month I have them. For the sake of readability — and because we really don’t have much information on Cursor’s revenues beyond a few months — I’m going to stick to a bullet point list.  As discussed above, Cursor announced (along with their price change and $200-a-month plan) several multi-year partnerships with xAI, Anthropic, OpenAI and Google, suggesting that it has direct agreements with Anthropic itself versus one with AWS to guarantee “this volume of compute at a predictable price.”  Based on its spend with AWS, I do not see a strong “minimum” spend that would suggest that they have a similar deal with Amazon — likely because Amazon handles more than its infrastructure than just compute, but incentivizes it to spend on Anthropic’s models through AWS by offering discounts, something I’ve confirmed with a source.  In any case, here’s what Cursor spent on AWS. When I wrote that Anthropic and OpenAI had begun the Subprime AI Crisis back in July, I assumed that the increase in costs was burdensome, but having the information from its AWS bills, it seems that Anthropic’s actions directly caused Cursor’s costs to explode by over 100%.  
While I can’t definitively say “this is exactly what did it,” the timelines match up exactly, the costs have never come down, Amazon offers provisioned throughput , and, more than likely, Cursor needs to keep a standard of uptime similar to that of Anthropic’s own direct API access. If this is what happened, it’s deeply shameful.  Cursor, Anthropic’s largest customer , in the very same month it hit $500 million in annualized revenue, immediately had its AWS and Anthropic-related costs explode to the point that it had to dramatically reduce the value of its product just as it hit the apex of its revenue growth.  It’s very difficult to see Service Tiers as anything other than an aggressive rent-seeking maneuver. Yet another undiscussed part of the story is that the launch of Claude 4 Opus and Sonnet — and the subsequent launch of Service Tiers — coincided with the launch of Claude Code , a product that directly competes with Cursor, without the burden of having to pay itself for the cost of models or, indeed, having to deal with its own “Service Tiers.” Anthropic may have increased the prices on its largest client at the time it was launching a competitor, and I believe that this is what awaits any product built on top of OpenAI or Anthropic’s models.  I realize this has been a long, number-stuffed article, but the long-and-short of it is simple: Anthropic is burning all of its revenue on compute, and Anthropic will willingly increase the prices on its customers if it’ll help it burn less money, even though that doesn’t seem to be working. What I believe happened to Cursor will likely happen to every AI-native company, because in a very real sense, Anthropic’s products are a wrapper for its own models, except it only has to pay the (unprofitable) costs of running them on Amazon Web Services and Google Cloud. As a result, both OpenAI and Anthropic can (and may very well!) devour the market of any company that builds on top of their models.  
OpenAI may have given Cursor free access to its GPT-5 models in August, but a month later on September 15 2025 it debuted massive upgrades to its competitive “Codex” platform.  Any product built on top of an AI model that shows any kind of success can be cloned immediately by OpenAI and Anthropic, and I believe that we’re going to see multiple price increases on AI-native companies in the next few months. After all, OpenAI already has its own priority processing product, which it launched shortly after Anthropic’s in June . The ultimate problem is that there really are no winners in this situation. If Anthropic kills Cursor through aggressive rent-seeking, that directly eats into its own revenues. If Anthropic lets Cursor succeed, that’s revenue , but it’s also clearly unprofitable revenue . Everybody loses, but nobody loses more than Cursor’s (and other AI companies’) customers.  I’ve come away from this piece with a feeling of dread. Anthropic’s costs are out of control, and as things get more desperate, it appears to be lashing out at its customers, both companies like Cursor and Claude Code customers facing weekly rate limits on their more-powerful models who are chided for using a product they pay for. Again, I cannot say for certain, but the spike in costs is clear, and it feels like more than a coincidence to me.  There is no period of time that I can see in the just under two years of data I’ve been party to that suggests that Anthropic has any means of — or any success doing — cost-cutting, and the only thing this company seems capable of doing is increasing the amount of money it burns on a monthly basis.  Based on what I have been party to, the more successful Anthropic becomes, the more its services cost. 
The cost of inference is clearly increasing for customers , but based on its escalating monthly costs, the cost of inference appears to be high for Anthropic too, though it’s impossible to tell how much of its compute is based on training versus running inference. In any case, these costs seem to increase with the amount of money Anthropic makes, meaning that the current pricing of both subscriptions and API access seems unprofitable, and must increase dramatically — from my calculations, a 100% price increase might work, but good luck retaining every single customer and their customers too! — for this company to ever become sustainable.  I don’t think that people would pay those prices. If anything, I think what we’re seeing in these numbers is a company bleeding out from costs that escalate the more that its user base grows. This is just my opinion, of course.  I’m tired of watching these companies burn billions of dollars to destroy our environment and steal from everybody. I’m tired that so many people have tried to pretend there’s a justification for burning billions of dollars every year, clinging to empty tropes about how this is just like Uber or Amazon Web Services , when Anthropic has built something far more mediocre.  Mr. Amodei, I am sure you will read this piece, and I can make time to chat in person on my show Better Offline. Perhaps this Friday? I even have some studio time on the books.  I do not have all the answers! I am going to do my best to go through the information I’ve obtained and give you a thorough review and analysis. This information provides a revealing — though incomplete — insight into the costs of running Anthropic and Cursor, but does not include other costs, like salaries and compute obtained from other providers. I cannot tell you (and do not have insight into) Anthropic’s actual private moves. 
Any conclusions or speculation I make in this article will be based on my interpretations of the information I’ve received, as well as other publicly-available information. I have used estimates of Anthropic’s revenue based on reporting across the last ten months. Any estimates I make are detailed and they are brief.  These costs are inclusive of every product bought on Amazon Web Services, including EC2, storage and database services (as well as literally everything else they pay for). Anthropic works with both Amazon Web Services and Google Cloud for compute. I do not have any information about its Google Cloud spend. The reason I bring this up is that Anthropic’s revenue is already being eaten up by its AWS spend. It’s likely billions more in the hole from Google Cloud and other operational expenses. I have confirmed with sources that every single number I give around Anthropic and Cursor’s AWS spend is the final cash paid to Amazon after any discounts or credits. While I cannot disclose the identity of my source, I am 100% confident in these numbers, and have verified their veracity with other sources. Where is the rest of Anthropic’s money going? How will it “stop burning cash” when its operational costs explode as its revenue increases? January 2024 - $52.9 million February 2024 - $60.9 million March 2024 - $74.3 million April 2024 - $101.1 million May 2024 - $100.1 million June 2024 - $101.8 million July 2024 - $118.9 million August 2024 - $128.8 million September 2024 - $127.8 million October 2024 - $169.6 million November 2024 - $146.5 million December 2024 - $176.1 million January 2025 - $1.459 million This, apparently, is the month that Cursor hit $100 million annualized revenue — or $8.3 million, meaning it spent 17.5% of its revenue on AWS. 
February 2025 - $2.47 million March 2025 - $4.39 million April 2025 - $4.74 million Cursor hit $200 million annualized ($16.6 million) at the end of March 2025 , according to The Information, working out to spending 28% of its revenue on AWS.   May 2025 - $6.19 million June 2025 - $12.67 million So, Bloomberg reported that Cursor hit $500 million on June 5 2025 , along with raising a $900 million funding round. Great news! Turns out it’d need to start handing a lot of that to Anthropic. This was, as I’ve discussed above, the month when Anthropic forced it to adopt “Service Tiers”. I go into detail about the situation here , but the long and short of it is that Anthropic increased the amount of tokens you burned by writing stuff to the cache (think of it like RAM in a computer), and AI coding startups are very cache heavy, meaning that Cursor immediately took on what I believed would be massive new costs. As I discuss in what I just linked, this led Cursor to aggressively change its product, thereby vastly increasing its customers’ costs if they wanted to use the same service. That same month, Cursor’s AWS costs — which I believe are the minority of its cloud compute costs — exploded by 104% (or by $6.48 million), and never returned to their previous levels. It’s conceivable that this surge is due to the compute-heavy nature of the latest Claude 4 models released that month — or, perhaps, Cursor sending more of its users to other models that it runs on Bedrock.  July 2025 - $15.5 million As you can see, Cursor’s costs continue to balloon in July, and I am guessing it’s because of the Service Tiers situation — which, I believe, indirectly resulted in Cursor pushing more users to models that it runs on Amazon’s infrastructure. August 2025 - $9.67 million So, I can only guess as to why there was a drop here. User churn? It could be the launch of GPT-5 on Cursor , which gave users a week of free access to OpenAI’s new models. 
What’s also interesting is that this was the month when Cursor announced that its previously free “auto” model (where Cursor would select the best available premium model or its own model) would now bill at “ competitive token rates ,” by which I mean it went from charging nothing to $1.25 per million input and $6 per million output tokens. This change would take effect on September 15 2025. On August 10 2025 , Tom Dotan of Newcomer reported that Cursor was “well above” $500 million in annualized revenue based on commentary from two sources. September 2025 - $12.91 million Per the above, this is the month when Cursor started charging for its “auto” model.

1 views