
2025, A Retrospective

I'm not dropping this on the actual newsletter feed because it's a little self-indulgent and I'm not sure 88,000 or so people want an email about it. I have a lot of trouble giving myself credit for anything, and genuinely think I could be doing more or that I "didn't do that much" because I'm at a computer or on a microphone versus serving customers in person or something or other.

To try and give some sort of scale to the work from the last year, I've written down the highlights. It appears that 2025 was an insane year for me. Here's the rundown:

- Cory Doctorow quoted me at the very front of his new book.
- I recorded over 110 episodes of my tech podcast Better Offline, starting with a 13.5-hour-long pop-up radio show at CES 2025. And yes, it's back next week, featuring David Roth, Adam Conover, Ed Ongweso Jr., Chloe Radcliffe, Robert Evans, Gare Davis, Cory Doctorow and a host of other great guests.
- Better Offline also won the Webby for best business podcast episode for last year's episode The Man That Destroyed Google Search.
- I also had some fantastic interviews, like when I went out to North Carolina to interview Steve Burke of GamersNexus, chatted to author Adam Becker about the technoligarchs, Pablo Torre and David Roth about independent media, and even comedian Andy Richter.
- I wrote over 440,000 words, not including the work I've done on the book or any notes I took to prepare for my show or newsletter.
- The newsletter also grew from 47,000-ish people at the end of last year to around 88,500 people. I want to be at 150,000 this time next year.
- I wrote some of my favourite free newsletters (many of which were turned into episodes of the show):
  - Deep Impact, my analysis of the DeepSeek situation and why it scared the American AI industry (clue: it's cost-related and nothing to do with "national security").
  - Power Cut, an early warning sign that the bubble was bursting as Microsoft pulled out of gigawatts of data center deals.
  - CoreWeave Is A Time Bomb, published March 17 2025, way before most had even bothered to think about this company deeply, a savage analysis of a "neocloud" - a company that only sells AI compute - backed by NVIDIA, which is also a customer, and from which CoreWeave also buys billions of dollars of GPUs.
  - The Era of the Business Idiot, probably my favourite piece I wrote this year, the story of how middle management has seized power, breeding out true meritocracy and value-creation in favor of symbolic growth and superficial intelligence. It ties together everything I've ever written.
  - Make Fun Of Them, the piece that restarted my fire after a bit of a low point, where I call for a radical new approach to tech CEOs: mocking them, because they talk like idiots and provide little value to society outside of their dedication to shareholder value.
  - The Hater's Guide To The AI Bubble, a piece that elevated me in a way that I never expected, a thorough and brutal broadside against an industry that has no profits and terrible costs, discussing how generative AI is nothing like Uber or Amazon Web Services, how there are no profitable generative AI companies, how agents do not and cannot exist, how there is no AI SaaS story, and how everything rides - and dies - on selling GPUs.
  - AI Is A Money Trap, a piece about how AI companies' ridiculous valuations and unsustainable businesses make exits or IPOs impossible, how data center developers have no exit route, and how US economic growth has become shouldered entirely by big tech.
  - How To Argue With An AI Booster, a comprehensive guide to arguing with AI boosters, addressing both their bad-faith debate style and their specific (and flimsy) arguments as to why generative AI is the future.
  - The Case Against Generative AI, a comprehensive analysis of a financial collapse built on myths, the markets' unhealthy obsession with NVIDIA's growth, and the fact that there is not enough money in the world to fund OpenAI.
  - NVIDIA Isn't Enron, So What Is It? - a lighthearted and in-depth analysis of NVIDIA as a company, a historic rundown of what happened with Lucent, WorldCom and Enron, as well as a guide to how it makes money, how its future relies on endless debt, how millions of GPUs are sitting waiting to be installed, and why it no longer makes sense to buy more GPUs.
  - The Enshittifinancial Crisis, a piece about the fourth stage of enshittification, where companies turn on their shareholders, and how unprofitable, unsustainable AI threatens the future of venture capital, private equity and the markets themselves.
- I published two massive exclusives:
  - How Much Anthropic and Cursor Spend On Amazon Web Services, which is exactly what it sounds like.
  - How Much OpenAI Spends On Inference and Its Revenue Share With Microsoft, which also includes evidence that OpenAI's revenues were at around $4.5 billion by the end of September, a vast difference from the $4.3 billion for the first half of the year published by other outlets. The Financial Times, The Register and TechCrunch covered it, while others aggressively ignored it.
- I launched the premium edition of my newsletter, and published multiple deeply important pieces of research:
  - The Hater's Guide to NVIDIA, the single-most exhaustive rundown of the rickety nature of the company sitting at the top of the stock market – how its future is dependent on massive debt, how AI revenues will never pay back the cost of these GPUs, and how there are likely millions of GPUs sitting in warehouses, as there's no chance that 6 million Blackwell GPUs have actually been installed and turned on. Published November 24 2025, I made this call several weeks before famed short seller Michael Burry would do the same.
  - How Does GPT-5 Work? - an exclusive piece (reported using internal documents from an infrastructure provider) on how GPT-5's router mode actually costs OpenAI more money to run.
  - OpenAI Burned $4.1 Billion More Than We Knew - Where Is Its Money Going? - an analysis of reported cash burn and investments in OpenAI that proved the company burned more than $4 billion more than we knew.
  - OpenAI and Oracle Are Full of Crap - on September 12 2025, months before anybody started worrying about it, I published proof that OpenAI couldn't afford to pay Oracle and Oracle didn't have the capacity to service their farcical $300 billion, 5-year-long deal.
  - OpenAI Needs A Trillion Dollars In The Next Four Years - on September 26 2025, I published a thorough review and analysis of OpenAI's agreed-upon compute and data center deals, and proved that it needed at least $1 trillion in the next four years to pull any of it off, several weeks before anyone else did.
  - The Hater's Guide To The AI Bubble Volume 2: a massive omnibus summary of every major AI company's weaknesses - the pathetic revenues, terrible margins and horrifying costs, and how hopeless everything feels.

I also did no less than 50 different interviews, with highlights including:

- My own interview in the New Yorker's legendary "Talk Of The Town" section.
- Profiles with Slate, the Financial Times and FastCompany.
- An interview with MarketWatch about The Hater's Guide to the AI Bubble.
- A panel in Seattle with Cory Doctorow about Enshittification and The Rot Economy.
- A chat with Brooke Gladstone on NPR about the AI bubble.
- Two interviews with the BBC.
- An interview with Van Lathan and Rachel Lindsay on The Ringer's Higher Learning.
- Two episodes of Chapo Trap House.
- Interviews with The Lever, Parker Molloy's The Present Age, Bloomberg's Everybody's Business, The Majority Report, Newsweek's 1600 Podcast, TechCrunch, Defector, the New Yorker (by the legendary Cal Newport), Guy Kawasaki's Remarkable People, both Slate's Death, Sex & Money and the excellent TBD podcast, TrashFuture multiple times, The Times Radio (I think multiple times?) and NPR Marketplace.
- Citations in an astonishing amount of major media outlets, with highlights including The Economist, The Guardian, Charlie Brooker (!) in The Hollywood Reporter, ArsTechnica, CNN, Semafor and ZDNet.

Next year I will be finishing up my book Why Everything Stopped Working (due out in 2027), and continuing to dig into the nightmare of corporate finance I've found myself in the center of.

I have no idea what happens next. My fear - and expectation - is that many people still do not realize that there is an AI bubble, or will not accept how significant and dangerous the bubble is, meaning that everybody is going to act like AI is the biggest, most hugest and most special thing in the world right up until they accept that it isn't. I will always cover tech, but I get the sense I'll be looking into other things next year - private equity, for one - that have caught my eye toward the end of the year.

I realize right now everything feels a little intense and bleak, but at this time of year it's always worth remembering to be kinder and more thoughtful toward those close to us. It's cheesy, but it's the best thing you can possibly do. It's easy to feel isolated by the amount of hogs oinking at the prospect of laying you off or replacing you - and it turns out there are far more people that are afraid or outraged than there are executives or AI boosters. Never forget (or forgive them for) what they've done to the computer, and never forget that those scorned by the AI bubble are legion. Join me on r/Betteroffline, you are far from alone.

I intend to spend the next year becoming a better writer, analyst, broadcaster, entertainer and person. I appreciate every single one of you that reads my work, and hope you'll continue to do so in the future.

See you in 2026,
[email protected]


The Enshittifinancial Crisis

Soundtrack: Lynyrd Skynyrd — Free Bird

This piece is over 19,000 words, and took me a great deal of writing and research. If you liked it, please subscribe to my premium newsletter. It’s $70 a year, or $7 a month, and in return you get a weekly newsletter that’s usually anywhere from 5,000 to 15,000 words, including vast, extremely detailed analyses of NVIDIA, Anthropic and OpenAI’s finances, and the AI bubble writ large. I am regularly several steps ahead in my coverage, and you get an absolute ton of value. In the bottom right-hand corner of your screen you’ll see a red circle — click that and select either monthly or annual. Next year I expect to expand to other areas too. It’ll be great. You’re gonna love it. If you have any issues signing up for premium, please email me at [email protected].

One time, a good friend of mine told me that the more I learned about finance, the more pissed off I’d get. He was right.

There is an echoing melancholy to this era, as we watch the end of Silicon Valley’s hypergrowth era, the horrifying result of 15+ years of steering the tech industry away from solving actual problems in pursuit of eternal growth. Everything is more expensive, and every tech product has gotten worse, all so that every company can “do AI,” whatever the fuck that means.

We are watching one of the greatest wastes of money in history, all as people are told that there “just isn’t the money” to build things like housing, or provide Americans with universal healthcare, or better schools, or create the means for the average person to accumulate wealth. The money does exist, it just exists for those who want to gamble — private equity firms, “business development companies” that exist to give money to other companies, venture capitalists, and banks that are getting desperate and need an overnight shot of capital from the Federal Reserve’s Overnight Repurchase Facility or Discount Window, two worrying indicators of bank stress I’ll get into later.

No, the money does not exist for you or me or a person. Money is for entities that could potentially funnel more money into the economy, even if the ways that these entities use the money are reckless and foolhardy, because the system’s intent on keeping entities alive incentivizes it. We are in an era where the average person is told to pull up their bootstraps, to work harder, to struggle more, because, as Martin Luther King Jr. once said, it’s socialism for the rich and rugged free market capitalism for the poor.

The “free market” is a fucking con. When you or I run out of money, our things are taken from us, we receive increasingly-panicked letters, we get phone calls and texts and emails and demands, we are told that all will be lost if we don’t “work it out,” because the financial system is not about an exchange of value but whether or not you can enter into the currently agreed-upon con.

By letting neoliberalism and the scourge of the free markets rule, modern society created the conditions for what I call The Enshittifinancial Crisis — the place at which my friend Cory Doctorow’s Enshittification Theory meets my own Rot Economy Thesis in a fourth stage of Enshittification. Per The New Yorker:

I’ll walk you through it. Facebook was a huge, free platform, much like Instagram, that offered fast and easy access to everybody you knew.
It acquired Instagram in 2012 to kill off a likely competitor, and over time would start making both products worse — clickbait notifications, a mandatory algorithmic feed that deliberately emotionally manipulated people and stoked political division, eventually becoming full of AI slop and videos, all so that Meta could continue to sell billions of dollars of ads a quarter. Per Kyle Chayka of the New Yorker, “Facebook’s feed, now choked with A.I.-generated garbage and short-form videos, is well into the third act of enshittification.”

The third stage is critical, in that it’s when the company also turns on its business customers. A Marketing Brew story from September of last year told the tale of multiple advertisers who found their campaigns switching to different audiences, wasting their money and getting questionable results. A New York Times story from 2021 described companies losing upwards of 70% of their revenue during a Facebook ads outage, another from 2018 described how Meta (then Facebook) deliberately hid issues with its measurement of engagement on videos from advertisers for over a year, and more recently, Meta’s ads tools started switching out top-performing ads with AI-generated ones, in one case targeting men aged 30 to 45 with an AI-generated grandma, all without warning the advertiser.

Meta doesn’t give a shit, because investors and analysts don’t give a shit. I could say “sell-side analysts” here — the ones that are trying to get you to buy a stock — but based on every analyst report I’ve read from a major bank or hedge fund, I truly think everybody is complicit.

In November 2025, Reuters revealed that Meta projected in late 2024 that 10% of its annual revenue ($16 billion) would come from advertisements for scams or banned goods, mere weeks after Meta announced a ridiculous $27 billion data center debt package, one that used deep accountancy magic to keep it off of its balance sheet despite Meta guaranteeing the entirety of the loan. One would think this would horrify investors for two reasons:

One would be wrong. Morgan Stanley said a few weeks ago that it is “one of the handful of companies that can leverage its leading data, distribution and investments in AI,” and raised its target to $750, with a $1000-a-share bull case. Wedbush raised Meta’s price to $920, and Bank of America staunchly held firm at…$810. I can find no analyst commentary on Meta making sixteen billion dollars on fraud, because it doesn’t matter to them, because this is the Rot Economy, and all that matters is number go up.

Reality — such as whether there’s any revenue in AI, or whether it’s a good idea that Meta is spending over $70 billion this year on capital expenditures on a product that has generated no revenue (and please, fucking spare me the bullshit around “Meta’s AI ads play,” that whole story is nonsense) — doesn’t matter to analysts, because stocks are thoroughly, inextricably enshittified, and analysts don’t even realize it’s happening.

The stages of enshittification usually involve some sort of devil’s deal. We have now entered Enshittification Stage 4, where businesses turn on shareholders. Analysts and investors have become trapped in the same kind of loathsome platform play as consumers and businesses, and face exactly the same kinds of punishment through the devaluation of the stock itself.
Where platforms have prioritized profits over the health and happiness of users or business customers, they are now prioritizing stock value over literally anything, and have — through the remarkable growth of tech stocks in particular — created a placated and thoroughly whipped investor and analyst sect that never asks questions and always celebrates whatever the next big thing is meant to be.

The value of a “stock” is not based on whether the business is healthy, or its future certain, but on the potential for its price to grow, and analysts have, thanks to an incredible bull run of tech stocks going on over a decade, been able to say “I bet software will be big” for most of the time, going on CNBC or Bloomberg and blandly repeating whatever it is that a tech CEO just said, all without any worries about “responsibility” or “the truth.”

This is because big tech stocks — and many other big stocks, if I’m honest — have made their lives easy as long as they don’t ask questions. Number always seems to be going up for software companies, and all you need to do is provide a vociferous defense of the “next big thing,” and come up with a smart-sounding model that justifies eternal growth.

This is entirely disconnected from the products themselves, which don’t matter as long as Number Go Up. If net income is high and the company estimates it will continue to grow, then the company can do whatever the fuck it wants with the product it sells or the things that it buys. Software Has Eaten The World in the sense that Andreessen got his wish, with investors now caring more about the “intrinsic value” of software companies rather than the businesses or products themselves.

And because that’s happening, investors aren’t bothering to think too hard about the tech itself, or the deteriorating products underlying tech companies, because “these guys have always worked it out” and “these companies have always managed to keep growing.” As a result, nobody really looks too deep. Minute changes to accounting in earnings filings are ignored, egregious amounts of debt are waved off, and hundreds of billions of dollars of capital expenditures are seen as “the new AI revolution” versus “a huge waste of money.”

By incentivizing the Rot Economy — making stocks disconnected from the value of the company beyond net income and future earnings guidance — companies have found ways to enshittify their own stocks, and shareholders will be the ones who suffer, all thanks to the very downstream pressure that they’ve chosen to ignore for decades.

You see, while one might (correctly) see that the deterioration of products like Facebook and Google Search was a sign of desperation, it’s important to also see it as the companies themselves orienting around what they believe analysts and investors want to see. You can also interpret this as weakness, but I see it another way: stock manipulation, and a deliberate attempt to reshape what “value” means in the eyes of customers and investors.

If the true value of a stock is meant to be based on the value of its business, cash flow, earnings and future growth, a company deliberately changing its products is an intentional interference with value itself, as are any and all deceptive accounting practices used to boost valuations. But the real problem is that analysts do not…well…analyze, not, at least, if it goes against the market consensus.
That’s why Goldman Sachs and JP Morgan and Futurum and Gartner and Forrester and McKinsey and Morgan Stanley all said that the metaverse was inevitable — because they do not actually care about the underlying business itself, just its ability to grow on paper.

Need proof that none of these people give a fuck about actual value? Mark Zuckerberg burned $77 billion on the metaverse, creating little revenue or shareholder value and also burning all that money without any real explanation as to where it went. The street didn’t give a shit because Meta’s existing ads business continued to grow, same as it didn’t give a shit that Mark Zuckerberg burned $70 billion on capex, even though we really don’t know where that went either.

In fact, we really have no idea where all this AI spending is going. These companies don’t tell us anything. They don’t tell us how many GPUs they have, or where those GPUs are, or how many of them are installed, or what their capacity is, or how much money they cost to run, or how much money they make. Why would we? Analysts don’t even look at earnings beyond making sure they beat on estimates. They’ve been trained for 20 years to take a puddle-deep look at the numbers to make sure things look okay, look around their peers and make sure nobody else is saying something bad, and go on and collect fees. The same goes for hedge funds and banks propping up these stocks rather than asking meaningful questions or demanding meaningful answers.

In the last two years, every major hyperscaler has extended the “useful life” of its servers from 3 years to either 5.5 or 6 years — and in simple terms, this allowed them to incur a smaller depreciation expense each quarter as a result, boosting net income. Those who are meant to be critical — analysts and investors sinking money into these stocks — had effectively no reaction, despite the fact that Meta used (per the Wall Street Journal) this adjustment to reduce its expenses by $2.3 billion in the first three quarters of this year. This is quite literally disconnected from reality, and done based on internal accounting that we are not party to. Every single tech firm buying GPUs did this and benefited to the tune of billions of dollars in decreased expenses, and analysts thought it was fine and dandy because number went up.

Shareholders are now subordinate to the shares themselves, reacting in the way that the shares demand they do, being happy for what the companies behind the shares give them, and analysts, investors and even the media spend far more energy fighting the doubters than they do showing these companies scrutiny. Much like a user of an enshittified platform, investors and analysts are frogs in a pot, the experience of owning a stock deteriorating since Jack Welch and GE taught corporations that the markets are run with the kind of simplistic mindset built for grifter exploitation. And much like those platforms, corporations have found as many ways as possible to abuse shareholders, seeing what they can get away with, seeing how far they can push things as long as the numbers look right, because analysts are no longer looking for sensible ideas.

Let me give you an example I’ve used before. Back in November 1998, Winstar Communications signed a “$2 billion equipment and finance agreement with Lucent Technologies” where Winstar would borrow money from Lucent to buy stuff from Lucent, all to create $100 million in revenue over 5 years.
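Rough arithmetic on that arrangement, as a minimal sketch using only the two figures quoted above (the $2 billion facility and the $100 million of revenue over five years); the even yearly split is my own simplifying assumption:

```python
# Back-of-the-envelope look at the Winstar/Lucent vendor-financing arrangement,
# using only the figures cited in the text. The even per-year split is assumed.

facility = 2_000_000_000      # credit extended so the customer could buy the vendor's gear
deal_revenue = 100_000_000    # revenue attributed to the deal over five years
years = 5

revenue_per_year = deal_revenue / years              # $20M a year
exposure_per_revenue_dollar = facility / deal_revenue

print(f"Revenue per year: ${revenue_per_year:,.0f}")
print(f"Credit exposure per dollar of deal revenue: ${exposure_per_revenue_dollar:,.0f}")

# => $20,000,000 a year, and $20 of lender exposure for every $1 of revenue booked.
# If the borrower goes under (Winstar filed for bankruptcy in 2001), the vendor is
# left holding the receivable while the revenue the deal "created" stops entirely.
```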
In December 1999, Barron’s wrote a piece called “In 1999 Tech Ruled”:

Airnet? Bankrupt. WinStar? Horribly bankrupt. While Ciena survived, it had spent over a billion dollars to acquire other companies (all stock, of course), only to see its revenue dwindle basically overnight from $1.6bn to $300 million as the optical cable industry collapsed.

One would have been able to work out that Winstar was a dog — or that all of these companies were dogs — just by looking at the numbers, such as “how much they made versus how much they were spending.” Instead, analysts, the media and banks chose to pump up these stocks because the numbers kept getting bigger, and when the collapse happened, rationalizations were immediately created — there were a few bad apples (Enron, Winstar, WorldCom), “the fiber was useful” and thus laying it was worthwhile, and otherwise everything was fine. The problem, in everybody else’s mind, was that everybody had got a bit distracted and some companies that weren’t good would die. All of that lost money was only a problem because it didn’t pay off.

This was a misplaced gamble, and it taught tech executives one powerful lesson: earnings must be good, without fail, by any means necessary, and otherwise nothing else matters to Wall Street.

It’s all about incentives. A sell-side analyst that tells you not to buy something is a problem. A journalist that is skeptical or critical of an industry in the midst of a growth or hype cycle is considered a “hater” — don’t I fucking know it. Analysts that do not sing the same tune as everybody else are marginalized, mocked and aggressively policed.

And I don’t fucking care. Stop being fucking cowards. By not being skeptical or critical you are going to lead regular people into the jaws of another collapse.

The dot com bubble was actually a great time to start reevaluating how and why we value stocks — to say “hey, wait, that $2 billion deal will only make $100 million in revenue?” or “this company spends $5 for every $1 it makes!” — but nobody, it appears, remained particularly suspicious of the tech industry, or a stock market that was increasingly orienting itself around conning shareholders. And because shareholders, analysts and the media alike refused to retain a single shred of suspicion leaving the dot com era, the mania never actually subsided. Financial publications still found themselves dedicated to explaining why the latest hype cycle was real. Journalists still found themselves told by editors that they had to cover the latest fad, even if it was nonsensical or clearly rotten. Analysts still grabbed their swords and rushed to protect the very companies that have spent decades misleading them.

Much like we spent years saying that Facebook was a “good deal” because it was free, analysts and investors spent years saying tech stocks were “great to hold” because they kept growing, even if the reason they “kept growing” was a series of interlocking monopolies, difficult-to-leave platforms and impossible-to-fight traction and pricing, all of which have an eventual sell-by date.

I realize I’m pearl-clutching over the amoral status of capitalism and the stock market, but hear me out: what if we’re actually in a 15-to-20-year-long knife-catching competition? What if all anybody has done is look at cashflow, net income, future growth guidance, and called it a day? A lack of scrutiny has allowed these companies to do effectively anything they want, bereft of worrisome questions like "will this ever make a profit?"
What if we basically don’t know what the fuck is going on? What if all of this was utterly senseless?

As I wrote last year, the tech industry has run out of hypergrowth ideas, facing something I call “the Rot Com bubble.” In simple terms, they’re only “doing AI” because there do not appear to be any other viable ideas to continue the Rot Economy’s eternal growth-at-all-costs dance.

Yet because growth hasn’t slowed yet, analysts, the media and other investors are quick to claim that AI is “paying off,” even if nobody has ever said how much AI revenue is being generated — or, in the case of Salesforce, the best it can say is “nearly $1.4 billion ARR,” which sounds really big until you realize a company with $10.9 billion in revenue is boasting about making less than $116 million in revenue in a month. Nevertheless, because Salesforce set a new revenue target of $60 billion by 2030, the stock jumped 4%. It doesn’t matter that most Agentforce customers don’t pay for the service, or that AI isn’t really making much money, or really anything, other than Number Go Up.

The era we live in is one of abject desperation, to the point that analysts and investors — and shareholders by extension — will take any abuse from management. They will allow companies to spend as much money as they want in whatever ways they want, as long as it continues the charade of “number go up.”

Let me spell it out a little more, using the latest earnings of various hyperscalers as an example.

We have no idea, because analysts and investors are in an abusive relationship with tech stocks. It is fundamentally insane that Microsoft, Meta, Amazon and Google have spent $776 billion in capital expenditures in the space of three years, and even more so that analysts and investors, when faced with such egregious numbers, simply sit back and say “they’re building the infrastructure of the future, baby!”

Analysts and traders and investors and reporters do not think too hard about the underlying numbers, because doing so immediately makes you run head-first into a number of worrying questions such as “where did all that money go?” and “will any of this pay off?” and “how many GPUs do they actually own?” Analysts have, on some level, become the fractional marketing team for the stocks they’re investing in.

When Oracle announced its $300 billion deal with OpenAI in September — one that Oracle does not have the capacity to fill and OpenAI does not have the money to pay for — analysts heaved and stammered like horny teenagers seeing their first boob:

These are the same people that retail and institutional investors rely upon for advice on what stocks to buy, all acting with the disregard for the truth that comes from years of never facing a consequence. Three months later, Oracle has lost basically all of the stock bump it saw from the OpenAI deal, meaning that any retail investor that YOLO’d into the trade because, say, analysts from major institutions said it was a good idea and news outlets acted like this deal was real, already got their ass kicked.

And please, spare me the “oh they shouldn’t trade off of analysts” bullshit. That’s the kind of victim-blaming that allows these revered fuckwits to continue farting out these meaningless calls.

In reality, we’re in an era of naked, blatant, shameless stock manipulation, both privately and publicly, because a “stock” no longer refers to a unit of ownership in a company so much as it is a chip at a casino where the house constantly changes the rules.
Perhaps you’re able to occasionally catch the house showing its hand, and perhaps the house meant for you to see it. Either way, you are always behind, because the people responsible for buying and selling stocks at scale under the auspices of “knowing what’s going on” don’t seem to know what they’re talking about, or don’t care to find out.

Let’s walk through the latest surge of blatant stock manipulation, and how the media and analysts helped it happen.

Oracle announces its unfillable, unpayable $300 billion deal with OpenAI, leading to a 30%+ bump in stock price. Analysts, who should ostensibly be able to count, call it “momentous” and say they’re “in shock.” On September 22 2025, CEO Safra Catz steps down, and nobody seems to think that’s weird or suspicious. Two months later, Oracle’s stock is down 40%, with investors worried about Oracle’s growing capex, which is surprising I suppose if you didn’t think about how Oracle would build the fucking data centers. Basically anyone who traded into this got burned.

NVIDIA announced a “strategic partnership” to invest “up to $100 billion” and build 10GW of data centers with OpenAI, with the first gigawatt to be deployed in the second half of 2026. Where would the data centers go? How would OpenAI afford to build them? How would OpenAI build a gigawatt in less than a year? Don’t ask questions, pig! NVIDIA’s stock bumped from $175.30 to $181 in the space of a day.

The media wrote about the story as if the deal was done, with CNBC claiming that “the initial $10 billion tranche [was] expected to close within a month or so once the transaction has been finalized.” I read at least ten stories that said that “NVIDIA had invested $100 billion.” Analysts would say that NVIDIA was “locking in OpenAI” to “remain the backbone of the next-gen AI infrastructure,” that “demand for NVIDIA GPUs is effectively baked into the development of frontier AI models,” that the deal “[strengthened] the partnership between the two companies…[and] validates NVIDIA’s long-term growth numbers with so much volume and compute capacity.” Others would say that NVIDIA was “enabling OpenAI to meet surging demand.” Three analysts — Rasgon at Bernstein, Luria at D.A. Davidson and Wagner at Aptus Capital — all raised circular deal concerns, but they were the minority, and those concerns were still often buried under buoyant optimism about the prospects of the company.

One eensy weensy problem though, everyone! This was a “letter of intent” — it said so in the announcement! — and on NVIDIA’s November earnings, it said that it “entered into a letter of intent with an opportunity to invest in OpenAI.” It turns out the deal didn’t exist and everybody fell for it! NVIDIA hasn’t sent a dime and likely won’t. A letter of intent is a “concept of a plan.”

Back in October, Reuters reported that Samsung and SK Hynix had "signed letters of intent to supply memory chips for OpenAI's data centers," with South Korea's presidential office saying that said chip demand was expected to reach "900,000 wafers a month," with "much of that from Samsung and SK Hynix," which was quickly extrapolated to mean around 40% of global DRAM output. Stocks in both companies, to quote Reuters, “soared,” with Samsung climbing 4% and SK Hynix more than 12% to an all-time high.
Analyst Jeff Kim of KB Securities said that “there have been worries about high bandwidth memory prices falling next year on intensifying competition, but such worries will be easily resolved by the strategic partnership,” adding that “Since Stargate is a key project led by President Trump, there also is a possibility the partnership will have a positive impact on South Korea's trade negotiations with the U.S.”

Donald Trump is not “leading Stargate.” Stargate is a name used to refer to data centers built by OpenAI. KB Securities has around $43 billion of assets under management. This is the level of analysis you get from these analysts! This is how much they know!

On SK Hynix's October 29 2025 earnings call, weeks after the announcement, its CEO, Kim Woo-Hyun, was asked a question about High Bandwidth Memory growth by SK Kim from Daiwa Securities:

This is the only mention of OpenAI. Otherwise, SK Hynix has not added any guidance that would suggest that its DRAM sales will spike beyond overall growth, other than mentioning it had "completed year 2026 supply discussions with key customers." There is no mention of OpenAI in any earnings presentation.

On Samsung's October 30 2025 earnings call, Samsung mentioned the term "DRAM" 18 times, and neither mentioned OpenAI nor any letters of intent. In its Q3 2025 earnings presentation, Samsung mentions it will "prioritize the expansion of the HBM4 [high bandwidth memory 4] business with differentiated performance to address increasing AI demand."

Analysts do not appear to have noticed a lack of revenue from an apparent deal for 40% of the world’s RAM! Oh well! Pobody’s nerfect! Both Samsung and SK Hynix’s stocks have continued to rise since, and you’d be forgiven for thinking this deal had something to do with it, even though it didn’t.

AMD announced that it had entered a “multi-year, multi-generation agreement” with OpenAI to build 6 GW of data centers, with “the first 1GW deployment set to begin in the second half of 2026,” calling the agreement “definitive” with terms that allowed OpenAI to buy up to 10% of AMD’s stock, vesting over “specific milestones” that started with the first gigawatt of data center development. Said data centers would also use AMD’s yet-to-be-released MI450 GPUs. The deal would, per Reuters, bring in “tens of billions of dollars of revenue.”

Where would those data centers go? How would OpenAI pay for them? Would the chips be ready in time? Silence, worm! How dare you ask questions? How dare you? Why are you asking questions? NUMBER GO UP!

AMD’s shares surged by 34%, with analyst Dan Ives of Wedbush saying that this was a “major valuation moment” for AMD. As an aside, Ives said that NVIDIA would benefit from the metaverse in 2021, and told CBS News on November 22 2021 that “the metaverse [was] real and Wall Street [was] looking for winners.”

One would think that AMD’s November earnings — a month after the announcement — might be a barn-burner full of remaining performance obligations from OpenAI. In fact, CEO Lisa Su said that “[AMD expected] this partnership will significantly accelerate [its] data center AI business, with the potential to generate well over $100 billion in revenue over the next few years.” Here’s how AMD’s 10-Q filing referred to it:

…so, no revenue from OpenAI at all, I guess? AMD raised guidance by 35% over the next five years. AMD's trailing 12-month revenue is $32 billion.

"Tens of billions of dollars" would surely lead to more than a 35% boost (an increase of $11.2 billion or so) in the next five years? Guess all of that was for nothing. No follow-up from the media, no questions from analysts, just a shrug and we all move on. Anyway, AMD’s stock is now down from a high of $259 at the end of October to around $214 as of writing this sentence. Everybody who traded in based on analyst and media comments got fucked.

So, back on September 5, Broadcom said on its earnings call that it had a $10 billion order from a mystery customer, which analysts quickly assumed was OpenAI, leading to the stock popping 9% and gradually increasing to a high of $369 or so on September 10, before declining a little until October 13, when Broadcom announced its ridiculous 10 gigawatt deal with OpenAI, claiming that it would deploy 10GW of OpenAI-designed chips, with the first racks deploying in the second half of 2026 and the entire deployment completed by the end of 2029. The same day, its president of semiconductor solutions Charlie Kawwas added that said mystery customer was actually somebody else:

Nevertheless, Broadcom's stock popped by 9% on the news about the 10GW deal, with CNBC adding that "the companies have been working together for 18 months." Because it's OpenAI, nobody sat and thought about whether somebody at Broadcom saying "well, OpenAI has yet to order these chips" was a problem. In fact, the answer to “how does OpenAI afford this?” appeared to be “they’d afford it” when it came to analysts:

Not to worry, OpenAI’s solution was far simpler: it didn’t order any chips. During Broadcom's November earnings call, Broadcom revealed that the $10 billion order was actually from Anthropic — another LLM startup that burns billions of dollars, one that was buying Google's TPUs and had also booked another $11 billion in orders. Analysts somehow believed that Anthropic is “positioned to spend heavily” despite being another venture-backed welfare recipient in the same flavor as OpenAI.

Oh, right, that 10GW OpenAI deal. Broadcom CEO Hock Tan said that he did “not expect much in 2026” from the deal, and guidance did not change to reflect it. Broadcom climbed to a high of $412 leading up to its earnings, and I imagine it did so based on people trading on the belief that OpenAI and Broadcom were doing a deal together, which does not appear to be happening. While there’s an alleged $73 billion backlog, every dollar from Anthropic is questionable.

Actually, yes we can. Whenever a company says “letter of intent” — as NVIDIA and SK Hynix/Samsung did — it’s important to immediately stop taking the deal seriously until you get the word “contract” involved. Not “agreement” or “deal” or “announcement,” but “contract,” because contracts are the only thing that actually matters.

Similarly, it’s time for everybody — analysts, the media, members of congress, the fucking pope, I don’t care — to start treating these companies with suspicion, and to start demanding timelines. NVIDIA and Microsoft announced their $15 billion investment in Anthropic over a month ago. Where’s the money? Why does the agreement say “up to $10 billion” for NVIDIA and “up to $5 billion” from Microsoft? These subtle details suggest that the deal is not going to be for $15 billion, and the lack of activity suggests it might not happen at all.

These deals are announced with the intention of suggesting there is more revenue and money in generative AI than actually exists.
Furthermore, it is irresponsible and actively harmful for analysts and the media to continually act as if these deals will actually get paid when you consider the financial conditions of these companies.

As part of its alleged funding announcement with NVIDIA and Microsoft, Anthropic agreed to purchase $30 billion of Azure compute. It also agreed to spend "tens of billions of dollars" with Google Cloud. It ordered $10 billion in chips from Broadcom earlier in the year, and apparently placed another $11 billion order in its latest fiscal quarter. How does it pay for those? It allegedly will burn $2.8 billion this year (I believe it burned much, much more) and raised $16.5 billion in funding (before Microsoft and NVIDIA’s involvement, which we cannot confirm has actually happened).

How are investors tolerating Broadcom not directly stating “the future financial condition of this company is questionable”? Has Broadcom created a reserve for this deal? If not, why not? Anthropic will make no more than $5 billion this year, and has raised $17.5bn (with a further $2.5bn coming in the form of debt). How can it foreseeably afford to pay $10 billion, or $11 billion, or $21 billion, considering its already massive losses and all those other obligations mentioned (I tally them up against its known resources in the sketch below)? Will Jensen Huang hand over $10 billion so that Anthropic can hand it to Broadcom?

I realize the counter-argument is that companies aren’t responsible for their counterparties’ financial health, but my argument is that it’s the responsibility of any public company to give a realistic view of its financial health, which includes noting if a chunk of its revenue is from a startup that can’t afford to pay for its orders. There is no counter to that! Anthropic cannot afford to pay Broadcom $10 billion right now!

Nevertheless, the problem is that in any bubble, being really stupid and ignorant works right up until it doesn’t, and however harsh the dot com bubble might have been, it wasn’t harsh enough, and those who were responsible were left unpunished and unashamed, guaranteeing that this cycle would happen again.

I want to be really, abundantly clear about what’s happening: every single stock you see “growing because of AI” outside of those selling RAM and GPUs is actually growing because of something else. Microsoft, Amazon, Google and Meta all have other products that are making them money. AI is not doing it, and because analysts and investors do not think about things for two seconds, they have allowed themselves to be beaten down and turned into supplicants for public stocks. Investors have allowed themselves to be played, and the results will be worse than the dot com bubble bursting by several echelons.

I’m gonna be really simplistic for a second. I am skeptical of AI because everybody loses money. I believe every AI company is unprofitable with margins that are getting increasingly worse as they scale, and as a result that none of them will be able to either get acquired or go public. This means that venture capitalists that have sunk money into AI stocks are going to be sitting on a bunch of assets under management (AUM) — the same assets they collect fees on — that will eventually crater or go to zero, because there will be no way for any liquidity event to occur. This is at a time of historically-low liquidity for venture capitalists, with PitchBook estimating there will only be $100.8 billion in venture capital funds available at the end of 2025.
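As promised above, here is a rough tally of Anthropic's stated commitments against its known resources, a sketch using only the figures quoted in this piece. The "tens of billions" for Google Cloud is left out because no number exists, payment timing is unknown, and the NVIDIA and Microsoft money is unconfirmed, so treat this purely as a scale comparison:

```python
# Scale comparison only: every figure below is one quoted in this piece.
# The "tens of billions of dollars" Google Cloud commitment is omitted for
# lack of a number, and payment timing is unknown for all of it.

commitments = {
    "Azure compute purchase": 30e9,
    "Broadcom chip order (earlier this year)": 10e9,
    "Broadcom chip order (latest fiscal quarter)": 11e9,
}

resources = {
    "Revenue this year (upper bound)": 5e9,
    "Funding raised to date": 17.5e9,
    "Further debt": 2.5e9,
    "NVIDIA 'up to' investment (unconfirmed)": 10e9,
    "Microsoft 'up to' investment (unconfirmed)": 5e9,
}

total_commitments = sum(commitments.values())   # $51bn
total_resources = sum(resources.values())       # $40bn

print(f"Stated commitments: ${total_commitments / 1e9:.1f}bn")
print(f"Known resources:    ${total_resources / 1e9:.1f}bn")
print(f"Gap before a single payroll or inference bill: ${(total_commitments - total_resources) / 1e9:.1f}bn")

# Even counting unconfirmed money and gross revenue (not margin), the
# commitments outrun the resources, which is the point being made above.
```

Which raises the question of where money like this is supposed to come from in the first place.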
Venture capitalists raise money from limited partners, who invest in venture capital with the hope of returns that outpace investing in the public markets. Venture capital vastly overinvested during 2021 and 2022. This was also a problem in private equity. In simple terms, this means these funds are sitting on tons of stock that they cannot shift, and the longer it takes for a company to either go public or get acquired, the more likely it is the VC or PE firm will have to mark down its value.

This is so bad that according to Carta, as of August 2024, less than 10% of VC funds raised in 2021 had made any distributions to their investors. In a piece from September, Carta revealed that only “about 15% of funds” from 2023 had generated any disbursements as of Q2 2025, and the median net internal rate of return was 0.1%, meaning that, at best, most investors got their money back and absolutely nothing else. In fact, investing in venture capital has kinda fucking sucked.

According to Carta, “As of the end of Q2, most VC funds across all recent vintages had a TVPI somewhere between 0.8x and 2x. But there are some areas where standout TVPIs are surfacing.” TVPI means Total Value To Paid-in Capital, or the amount of money you made for each dollar invested. This chart may seem confusing, but it tells you that, for the most part, VCs have struggled to provide even-money returns since 2017. A “decent” TVPI is 2.5x, and as you’ll see, things have effectively collapsed since 2021. Companies are not going public or being acquired at the same rate, meaning that investor capital is increasingly locked up, meaning that limited partners are still waiting for a payoff from the last bubble, let alone this one.

Carta would update the piece in December 2025, and things would somehow get worse. TVPI soured further, suggesting a further lack of exits across the board. The only slight improvement was that the median IRR rose to 0.5% for funds from 2021 and 0.1% for funds from 2022. In simple terms, we are looking at years of locked-up capital leaving venture capital cash-starved and a little desperate.

The worst part? All of this is happening during a generational increase in the amounts that startups need to raise thanks to the ruinous costs of generative AI, and the negative margins of AI-powered services. To quote myself:

None of these companies are profitable, nor do they have any path to an acquisition or IPO. Why? Because even the most advanced AI software company is ultimately prompting Anthropic or OpenAI’s models, meaning that their only real intellectual property is those prompts and their staff, and whatever they can build around the models they don’t control, which has been obvious from the meager “acquisitions” we’ve seen so far.

Windsurf, which was allegedly being sold to OpenAI, ended up selling its assets to Cognition in July, with Google paying $2.4 billion for its co-founders and a “licensing agreement,” similar to its acquisition of Character.Ai, where it paid $2.7 billion to rehire Noam Shazeer, license its tech, and pay off the stock of its remaining staff. This is also exactly what Microsoft did with Inflection AI and its co-founder Mustafa Suleyman. OpenAI’s acquisitions of Statsig ($1.1bn), Io Products ($6.5bn) and Neptune ($400m) were all-stock.
Every other acquisition — Wiz, Confluent, Informatica, and so on (CRN has a great list here) — is either somebody trying to pretend that (for example) Wiz is related to AI, or trying to say that a data streaming platform is AI-related because AI needs that, which may be true, but doesn’t mean that any AI startups are actually selling. And they’re not, which is a problem, as 41% of US venture dollars in 2025 have gone into AI as of August, and according to Axios, the global number was around 51%. A crisis is brewing.

Nerdlawyer, back in October, wrote about the explosive growth of secondary markets:

In simpler terms, there are now Hot Potato Funds, where either another limited partner buys another one’s allocation, the companies themselves buy back their stock, or the stock is resold to other private investors. While this piece frames this as a positive, the reality is far grimmer. Venture capitalists are sitting on piles of immovable equity in companies worth far less than they invested at, and the answer, it appears, is to find somebody else to buy the dead weight.

According to Newcomer, only 1,117 venture funds closed in 2025 (down from 2,100 in 2024), and 43% of dollars raised went to the largest venture funds, per The New York Times and PitchBook, suggesting limited partners are becoming less interested in pumping cash into the system at a time when AI startups are demanding more capital than has ever been raised.

How long can the venture capital industry keep handing out $100 million to $500 million to multiple startups a year? Because all signs suggest that the current pace of funding must continue in perpetuity, as nobody appears to have worked out that generative AI is inherently unprofitable, and thus every single company is on the Silicon Valley Welfare System until everybody gives up, or the system itself cannot sustain the pressure.

I’ve read too many people make off-handed comments about this “being like the dot com boom” and saying that “lots of startups might die but what’s left over will be good,” and I hate them for both their flippancy and ignorance. None of the current stack of AI companies can survive on their own, meaning that the venture capital industry is holding them up. If even one of these companies falters and dies, the entire narrative will die. If that happens, it will be harder for AI companies to raise, and even harder to sell an AI company to someone else.

This is a punishment for a decade-plus of hubris, where companies were invested in without ever considering a path to profitability. Venture capital has made the same mistake again and again, believing that because Uber, or Facebook, or Airbnb, or any number of companies founded nearly twenty years ago were unprofitable (with paths to profitability in all three cases, mind), it was totally okay to keep pumping up companies that had no path to profitability, which eventually became “had no apparent business model” (see: the metaverse, web3), which eventually became “have negative margins so severe and valuations so high that we will need an IPO at a market cap higher than Netflix.”

This is Silicon Valley’s Rot Economy — the desperate, growth-at-all-costs attachment to startups where you “really like the founder,” where “the market could be huge” (who knows if it is!), where you just don’t need to worry about profitability because IPOs and exits were easy. Venture capital also used to be easy, because we were still in the era of hypergrowth.
You could be a stupid asshole that doesn’t know anything, but there were so many good deals, and the more well-known you were, the more likely you’d be brought them first, guaranteeing a bigger payout, guaranteeing more LP capital, guaranteeing more opportunities that were of a higher quality because you were a big name. It was easier to make a valuable company, easier to get funded, and easier to sell, because the goal was always “get funded, grow as large an audience as possible, or go public/get acquired.” As a result, venture capital encouraged growth-at-all-costs thinking.

In 2010, Ben Horowitz said that “the only thing worse for an entrepreneur than start-up hell (bankruptcy) is start-up purgatory”:

This poisonous theory paid off, in that startups got used to building high-growth, low-margin companies that would easily sell to other companies or the markets themselves. Until it didn’t, of course. Per Nerdlawyer, IPOs have collapsed as an exit route, along with easy-to-raise capital. Per PitchBook, since 2022, 70% of VC-backed exits were valued at less than the capital put in, with more than a third of them being startups buying other startups in 2024.

The money is drying up as the value of VCs’ assets is decreasing, at a time when VCs need more money than ever, because everybody is heavily leveraged in the single-most-expensive funding climate in history. And as we hit this historic liquidity crisis, the two largest companies — OpenAI and Anthropic — are becoming drains on the system that, in a very real sense, are participating in a massive redistribution of capital reserved for startups to one of a few public companies. No, really!

OpenAI is trying to raise as much as $100 billion in funding so it can continue to pass money to one of a few public companies — $38 billion to Amazon Web Services over seven years, $22.4 billion to CoreWeave over five years, and $250 billion over an indeterminate period on Microsoft Azure. If successful, OpenAI’s venture telethon will raise more money than has ever been raised in a single round, draining funds that actual startups need. Anthropic has agreed to $70 billion in compute and chip deals across Google, Amazon and Broadcom, and that’s not including the Hut8 compute deal that Google is backing. This money will come from what remains of venture capital, private equity and hyperscaler generosity.

Yet elsewhere, even the money that goes to regular startups is ultimately being sent to hyperscalers. That AI startup that needs to keep raising $100 million in a single round isn’t sending that cash to other startups — it’s mostly going to OpenAI (Microsoft, Amazon, CoreWeave, Google), Anthropic (Google, Microsoft, Amazon), or one of the large hyperscalers for Azure, AWS or Google Cloud.

Silicon Valley didn’t birth the next big tech firm. It incubated yet another hyperscaler-level parasite, except instead of just spending money on hyperscaler services (and raising money to do so), both Anthropic and OpenAI actively drain the venture capital system as well, as they both burn billions of dollars.
By creating something that’s incredibly expensive to run, they naturally create startups more dependent on the venture capital system, and the venture capital system has no idea what to do other than say “just grow, baby!” Both OpenAI and Anthropic’s models might be getting cheaper on a per-million-token basis, but they use more tokens, increasing the cost of inference, which in turn increases the costs of startups doing business, which in turn means OpenAI, Anthropic, and all connected startups lose more money, which increases the burn on venture capital (a toy example of this arithmetic follows a few paragraphs below). This is a doom-spiral, one that can only be reversed through the most magical and aggressive turnaround we will have seen in history, and it will have to happen next year, without fail.

It won’t.

So why did venture do this? Folks, we haven’t seen values this big in a long time. These are the biggest numbers we’ve ever seen. They’re simply tremendous. OpenAI is maybe worth $830 billion dollars, can you believe that? They lose so much money but folks we don’t worry about that, because they’re growing so fast. We love that Clammy Sam Altman — they call him “Clamuel” — tells everybody he’s giving them one billion dollars. Data centers are going to have the biggest deals we’ve ever seen, even [tchhh sound through teeth] if we have to work with Dario.

You see, right now AI startups are big, exciting news for the limited partners funding LLM firms. Things feel exciting because the value of the assets under management (AUM) is going up — which is nothing dodgy, it’s just how VCs value things — and since fees are charged on AUM, marking up those AI stakes is how VCs get paid. Investing early in OpenAI allows a VC — or even an asset manager like Blackstone, which invested in 2024 — to say it has a big holding and a big increase in its AUM. We are currently in the sowing stage.

Nevertheless, AI stocks make VCs who bet on them two years ago look like geniuses on paper. If you got in early on OpenAI, Anthropic, Cursor, Cognition, Perplexity or any other company that loves to burn several dollars per dollar of revenue, you have a big, beautiful number, the biggest you’ve ever seen, and your limited partners need to pay you a fee just to manage it.

Venture capital hasn’t seen valuations like this in a long time, and on paper, it feels like a lot of VCs got in on companies worth billions of dollars. On paper, Cognition is worth $10.2 billion, Perplexity $18 billion, Cursor $29.3 billion, Lovable $6.6 billion, Cohere $6.8 billion, Replit $3 billion, and Glean $7.2 billion — massive valuations for companies that all basically make products that OpenAI or Anthropic or Amazon or Google or any number of Chinese companies are already working to clone. They are all losing tons of money and have no path to profitability. But right now the numbers are simply tremendous.

I’ve heard venture capitalists tell me that there are times when they have to agree to invest with little to no information, or risk losing the opportunity to another sucker investor. I’ve heard venture capitalists say they don’t have any insight into finances. Venture capitalists would, of course, claim I’m insane, saying that the “growth is obviously there” while pointing to whatever startup has made $100 million ARR ($8.3 million in a month), all while not discussing the underlying operating expenses. The idea, I believe, is that the current spate of AI spending is only set to increase next year, and that will…somehow lead to fixing margins?
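Here's the toy example promised above of how cheaper tokens can still mean more expensive inference; every number is invented purely for illustration:

```python
# Toy illustration of the inference-cost spiral described earlier.
# Every number is invented; the point is the shape, not the figures.

def cost_per_task(price_per_million_tokens: float, tokens_per_task: float) -> float:
    """Cost of serving one user request, in dollars."""
    return price_per_million_tokens * tokens_per_task / 1_000_000

# Last year: a simple completion.
old = cost_per_task(price_per_million_tokens=10.00, tokens_per_task=2_000)

# This year: the per-token price halves, but "reasoning" and agent-style loops
# mean the same task now chews through 10x the tokens.
new = cost_per_task(price_per_million_tokens=5.00, tokens_per_task=20_000)

print(f"Old cost per task: ${old:.4f}")   # $0.0200
print(f"New cost per task: ${new:.4f}")   # $0.1000 -- five times more expensive

# Cheaper tokens, pricier tasks: the startup's unit costs rise, so it raises more
# venture money, most of which flows straight to its model provider and the clouds.
```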
Venture capitalists staunchly refuse to learn anything other than "invest in growth and then profit from growth," even if "profiting from growth" doesn't seem to be happening anymore. In reality, venture capital shouldn't have touched LLMs with a fifteen-foot pole, because the margins were obviously, blatantly bad from the very beginning. We knew OpenAI would lose $5 billion in the middle of 2024. A sane venture capital climate would have fucking panicked, but instead chose to double, triple and quadruple down.

I believe that massive valuation drawdowns are a certainty. There are losses coming. Venture capitalists, I have to ask you: what happens if OpenAI dies? Do you think that this will make investors interested in funding or acquiring other AI startups? How much longer are we going to do this? When will venture capital realize it's setting itself up for disaster? And what, exactly, is the plan? OpenAI and Anthropic will suck the lakes dry like an NVIDIA GPU named after Nancy Reagan. How is this meant to continue, and what will be left when it stops?

The answer is simple: there won't be money for venture capital for a while. Those AI holdings are going to be worth, at best, 50% of what they're marked at today, if they retain any value at all. Once one of these startups dies, a panic will ensue, sending venture capitalists scrambling to get their holdings acquired, until there's little or no investor interest left. Why would LPs ever trust venture capital after this? Why would anybody? Because based on the past four years, it doesn't appear that venture capital is actually good at investing money — it just got lucky, year after year, until it ran out of ideas that could sell for hundreds of millions or billions of dollars. Venture capital believed it knew better as it turned its back on basic business fundamentals, starting with Clubhouse, crypto, the metaverse, and now generative AI.

Yet they're far from the only fuckwits on the dickhead express. Per Bloomberg, there were at least $178.5 billion in data-center credit deals in the US in 2025, rivaling the $215.4 billion invested in US venture capital in 2024 and the $197.2 billion invested in US VC through August 7, 2025, and over $100 billion more than the $60.69 billion of data center credit deals done in 2024. I'm very worried, and I'm going to tell you why, using a company called CoreWeave that I've been actively warning people about since March.

CoreWeave is something called a "neocloud." It's a company that sells AI compute by renting out NVIDIA GPUs, and as I explained a few months ago, it builds those data centers on the back of endless debt: CoreWeave is one of the largest providers of AI compute in the world, and its business model is indicative of how most data center companies make money, and to explain my concerns, I'm going to use this chart from CoreWeave's Q2 2025 earnings presentation. First, CoreWeave signs contracts — such as its $14 billion deal with Meta and $22.4 billion deal with OpenAI — before it has the physical infrastructure to service them. It then raises debt using this contract as collateral, orders the GPUs from NVIDIA, which arrive after three months and then take another three months to install, at which point monthly client payments begin. To really simplify this: data center developers are raising money months, sometimes up to a year, before they ever expect to make a penny.
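To put rough numbers on that timeline, here's a minimal sketch. The three-month GPU delivery and three-month install figures come from the description above; the two-month debt raise and twelve-month shell build are my assumptions, not CoreWeave's actual schedule:

```python
# Assumed durations except where noted: the 3-month GPU delivery and 3-month install come from
# the description above; the debt raise and shell build times are illustrative guesses.
def months_until_first_payment(debt_raise=2, shell_build=12, gpu_delivery=3, gpu_install=3):
    # GPUs get ordered once the debt closes, but they can only be installed once the building
    # and power are ready, so the install clock starts at whichever finishes later.
    capacity_live = debt_raise + max(shell_build, gpu_delivery) + gpu_install
    return capacity_live + 1  # the first monthly payment lands roughly a month after capacity goes live

print(months_until_first_payment())                 # ~18 months of carrying debt before any revenue
print(months_until_first_payment(shell_build=24))   # a delayed build pushes that to ~30 months
```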
In fact, I can find no consistent answer to "how long a data center takes to build," and the answer here is pretty important, because that's how the money is gonna get made from these things. You may notice that "monthly payments" begin at 6 to 30 months, a curious and broad blob of time. You see, data centers are extremely difficult to build, and the concept of an "AI data center" is barely a few years old, with the concept of a data center campus containing hundreds of megawatts of nothing but AI GPUs barely two years old, which means basically everybody building one is doing so for the first time, and even experienced developers are running into problems.

For example, Core Scientific, CoreWeave's weird partner organization that it tried and failed to buy, has been trying to convert its Denton, Texas cryptocurrency mining data center into an AI data center since November 2024, specifically so that CoreWeave can rent it to Microsoft for OpenAI. This hasn't gone well, with the Wall Street Journal reporting a few weeks ago that Denton has been wracked with "several months" of delays thanks to rainstorms preventing contractors from pouring concrete. The cluster is apparently going to have 260MW of capacity. What this means for CoreWeave is that it can't start getting paid by OpenAI, because, per its contract, customers don't have to start paying until the compute is actually available. This is a very important detail to know for literally any data center development you've ever seen.

As of its latest Q3 2025 earnings filing, CoreWeave is sitting on $1.1 billion in deferred revenue (income for services not yet rendered), up from $951 million in Q2 2025 and $436 million in Q1 2025. This means deposits have been made, but the contract has yet to be serviced. Now, I'm a curious little critter, so I went and found the 921-page $2.6 billion DDTL 3.0 loan agreement between CoreWeave and banks including Morgan Stanley, MUFG Bank and Goldman Sachs, and in doing so learned the following: I apologize, that suggests that CoreWeave isn't already in trouble. Buried inside NVIDIA's latest earnings (page 17) there was a little clue: Credit where credit is due — eagle-eyed analyst JustDario caught this in November — but in CoreWeave's condensed consolidated balance sheets, there sits a $477.5 million line-item under "restricted cash and cash equivalents, non-current." Though this might not be the NVIDIA escrow — this number shifted from $617m in Q1 to $340m in Q2 — it lines up all too precisely…and who else would NVIDIA be guaranteeing?

In any case, CoreWeave is likely getting the best deals in data center debt outside of Oracle. It has top-tier financiers (who I will get to shortly), the full backing of NVIDIA (which is at once an investor, a customer and an apparent financial backstop), and the ability to raise debt quickly. CoreWeave's deals are likely indicative of how data center financing takes place, and those top-tier financiers? They've been in basically every deal. In fact… So, I went and dug through a pile of 26 prominent data center loan deals, including the proposed $38 billion debt package that Oracle and Vantage Data Center Partners are raising for Stargate Shackelford and Wisconsin, Stargate Abilene, Stargate New Mexico, SoftBank's $15 billion bridge loan (which I included for a reason that will become obvious shortly) and multiple CoreWeave loans, and found a few commonalities: I realize there are far more data center deals than these, but I wanted to show you exactly how centralized these deals are.
The largest deals — the $38 billion Stargate TX/WI deal and the $18 billion Stargate New Mexico deal — both involved Goldman Sachs, BNP Paribas, SMBC and MUFG, and all four of those companies have, at some point, funded CoreWeave. In fact, everybody appears to have funded CoreWeave at some point — Citibank, Credit Agricole, Societe Generale, Wells Fargo, Carlyle, Blackstone, BlackRock, Barclays, Magnetar, and Jefferies, to name a few. Of the 40 banks and financial institutions I researched, 24 have, at some point, loaned to or organized debt for CoreWeave. Of those institutions, Blackstone, Deutsche Bank, JP Morgan Chase, Morgan Stanley, MUFG and Wells Fargo have done so multiple times.

CoreWeave is a deeply unprofitable company saddled with incredible debt and deteriorating margins, with one of its largest clients paying net 360, and yet, as I've said, it is arguably the best-financed data center company in the world. What I'm getting at is that most data center deals are likely much worse than the terms that CoreWeave faces, and are likely financed in a similar way, where a client is signed up for data center capacity that doesn't exist, such as when Nebius raised $4.3 billion through a share sale and convertible notes (read: loans) to handle its $17.4 billion data center contract with Microsoft, and guess what? Goldman Sachs acted as lead underwriter on the deal, with assistance from Bank of America, Citigroup, and Morgan Stanley, all three of which have invested in CoreWeave.

AI data centers are expensive, require debt due to the massive cost of construction and GPUs, and all take at least a year, if not two, to start generating revenue, at which point they also begin losing money, because it seems that renting out AI GPUs is really unprofitable. Every single major bank and financial institution has piled hundreds of millions if not billions of dollars into building data centers that take forever to even start generating money, at which point they only seem to lose it.

Worse still, NVIDIA sells GPUs on a one-year upgrade cycle, meaning that all of those data centers being built right now are being filled with Blackwell chips, and by the time they turn on, NVIDIA will be selling its next-generation Vera Rubin chips. Now, you've probably heard that Vera Rubin will use the same racks (Oberon) as Blackwell, which is true to an extent, but won't be true for long, as NVIDIA intends to shift to Kyber racks in 2027, hoping to build 1MW IT racks (which will involve entire racks full of power supplies!), meaning that all of those data centers you see today — whenever they get built! — will be full of racks incompatible with the next generation of GPUs. This will also decrease the value of the assets inside the data centers, which will in turn decrease the value of the assets held by the firms investing.

Stargate Abilene? The one invested in by JP Morgan, Blue Owl, Primary Digital Infrastructure and Societe Generale? The one that's heavily delayed and won't be ready until the end of 2026 at the earliest? Full to the brim with two-year-old GB200 racks! By the beginning of 2027, Stargate Abilene will be obsolete, as will any and all data centers filled with Blackwell GPUs, as will any and all data centers being built today. Every single one takes 1-3 years and hundreds of millions (or billions) in debt, every single one faces the same kinds of construction delays, and better yet, almost all of them will turn on in roughly the same time frame.
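One way to see why that matters for lenders: a rack bought at financial close is already partway through its competitive life by the time a delayed site energizes. The numbers below are assumptions for illustration (a roughly $4.5 million top-end Blackwell rack and a four-year useful life decaying in a straight line), not anyone's actual depreciation schedule:

```python
# Illustrative only: assumed rack price and useful life, straight-line decay for simplicity.
RACK_PRICE = 4_500_000        # assumption: roughly the cost of a top-end Blackwell NVL72-class rack
USEFUL_LIFE_MONTHS = 48       # assumption: ~4 years before the hardware is effectively obsolete

def remaining_value(months_since_purchase, price=RACK_PRICE, life=USEFUL_LIFE_MONTHS):
    """Straight-line decay to zero; real resale curves are arguably steeper once a new generation ships."""
    return max(0.0, price * (1 - months_since_purchase / life))

for months in (0, 12, 24, 30):
    print(f"{months:>2} months after purchase: ${remaining_value(months):,.0f} of value left")
# A rack bought at financial close but energized 24-30 months later has burned through half
# or more of its useful life before it earns its first dollar.
```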
Now, I ain't no economist, but I do know that "supply and demand" has an effect on pricing. What do you believe happens to the price of renting a Blackwell GPU when all of these data centers come online? Do you think it becomes more valuable? Or less? And while we're on the subject, what do you think happens if there isn't sufficient demand?

Right now, OpenAI makes up a large chunk of the global sale of compute — at least $8.67 billion of Azure revenue through September 2025, $22.4 billion of CoreWeave's backlog, $38 billion of Amazon's backlog, and so on and so forth — and made, based on my reporting, just over $4.5 billion in that period. It cannot afford to pay anybody, and nowhere is that more obvious than in the year-long payment terms it negotiated with CoreWeave. Otherwise, when you remove the contracts signed by hyperscalers and OpenAI (which I do not believe has paid anybody other than Microsoft yet), based on my analysis, there was less than a billion dollars of AI compute revenue in 2025, or 0.5831% of the money spent on data centers.

Hyperscaler revenue is also immediately questionable, with Microsoft's deal with Nebius (per its 6-K filing) set to default in the event that Nebius cannot provide the capacity it sold out of its unfinished Vineland, New Jersey data center, which is being built by DataOne, a company that has never built an AI data center, whose CEO has his LinkedIn location set to "United Arab Emirates," and whose funding comes from a concrete firm that is also a vendor on the construction project. I also believe Microsoft is setting Nebius up to fail. Based on discussions with sources with direct knowledge of plans for the Vineland, New Jersey data center, Nebius has agreed to timelines that involve having 18,000 NVIDIA B200 and B300 GPUs by the end of January for a total of 50MW, with another 18,000 B300s due by the end of May. When I spoke with experts in the field about how viable these plans are, two laughed, and one told me to fuck off.

If Nebius fails to build the capacity, Microsoft can walk away, much like OpenAI can walk away from Stargate in the event that Oracle fails to build it on time (as reported by The Information in April), and I believe that this is the case for literally any data center provider that's building a data center for any signed-up tenant. This is another layer of risk to data center development that nobody bothers to discuss, because everybody loves seeing these big, beautiful numbers. Except the numbers might have become a little too beautiful for some.

A few weeks ago, the Financial Times reported that Blue Owl Capital had pulled out of the $10 billion Michigan Stargate Data Center project, citing "concerns about its rising debt and artificial intelligence spending." To quote the FT, "Blue Owl had been in discussions with lenders and Oracle about investing in the planned 1 gigawatt data centre being built to serve OpenAI in Saline Township, Michigan." What debt, you ask? Well, Blue Owl — formerly the loosest legs in data center financing — was in CoreWeave's $600 million and $750 million debt deals for its planned Virginia data center with Chirisa Technology Parks, as well as a $4 billion CoreWeave data center project in Lancaster, Pennsylvania, Stargate Abilene and Stargate New Mexico, Meta's $30 billion Hyperion data center, and a $1.3 billion data center deal in Australia through Stack Infrastructure, a company it owns through its acquisition of IPI Partners.
To be clear, Blue Owl "pulling out" is not the same as a lone investor passing on a regular deal. It's a BDC — a Business Development Company — that both invests its own money and rallies together various banks, in this case SMBC, BNP Paribas, MUFG and Goldman Sachs (all part of Stargate New Mexico). Blue Owl is incredibly well-connected and experienced in putting together these kinds of deals, and very likely went to the many banks it's worked with over the years, who apparently had "concerns about its rising debt," much of it issued by them! While rumours suggest that Blackstone may "step in," the pool of banks that will actually back a $10 billion deal is fairly narrow, and "stepping in" would require billions of dollars and plenty of legal logistics.

So, why are things looking shaky? Well, remember that thing about how this data center would be leased to Oracle? Oracle had free cash flow of negative $13 billion on revenues of $16 billion, and its most recent earnings only "beat" estimates thanks to the sale of its $2.68 billion stake in Ampere. Its debt is exploding (with over a billion dollars in interest payments in its last quarter), its GPU gross margins are 14% (which does not mean profitable), its latest NVIDIA GB200 GPUs have a negative 100% gross margin, and it has $248 billion in upcoming data center leases yet to begin. All, for the most part, to handle compute for one customer: OpenAI, which needs to raise $100 billion, I guess.

We've already got some signs of concern within the banking world around data center exposure. In November, the FT reported that Deutsche Bank — which backed CoreWeave multiple times and several data centers — was "exploring ways to hedge its exposure to data centers after extending billions of dollars in debt," including shorting a "basket of AI-related stocks" or buying default protection on some of its debt using synthetic risk transfers, where a bank sells the full or partial credit risk of a loan (or loans) to outside investors while keeping the loans on its books, paying those investors a regular fee for taking on the risk (this is a simplification). In December, Fortune reported that Morgan Stanley (CoreWeave three times, IPI Partners, Hyperion, SoftBank Bridge Loan) was also considering synthetic risk transfers on "loans to businesses involved in AI infrastructure." Back in April, SMBC sold synthetic risk transfers tied to "private debt BDCs" — and while this predates the large data center deals done by Blue Owl, SMBC has overseen multiple Blue Owl deals in the past. In December, SMBC closed another SRT, selling off risk from "Australian and Asian project finance loans," though I can't confirm if any of them were data center related. In December, Goldman Sachs paused a planned mortgage-bond sale for data center operator CyrusOne, with the intent to revive it in the first quarter of 2026. Oracle's credit risk reached a 16-year high in the middle of December, with credit default swaps (basically, bets that Oracle will default on its debts, an unlikely yet no-longer-impossible event) climbing to their highest price since the great financial crisis.

While Morgan Stanley and Deutsche Bank's SRTs are yet to close, it's still notable that two of the largest players in data center financing feel the need to hedge their bets. So, what exactly are they hedging against? Simple! That tenants won't arrive and debts won't get paid.
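Since synthetic risk transfers will keep coming up, here's a stripped-down sketch of the mechanics described above. Every number is invented, and real SRTs involve tranching, triggers and pricing far messier than this:

```python
# Stripped-down illustration of a synthetic risk transfer (SRT). All numbers are invented.
loan_portfolio = 5_000_000_000      # data center loans that stay on the bank's balance sheet
first_loss_tranche = 0.08           # investors agree to absorb the first 8% of losses...
annual_fee_rate = 0.10              # ...in exchange for a fee on the slice they protect

protected_amount = loan_portfolio * first_loss_tranche
annual_fee_paid_by_bank = protected_amount * annual_fee_rate

def bank_loss(portfolio_loss):
    """The bank only starts eating losses once the first-loss tranche is exhausted."""
    return max(0.0, portfolio_loss - protected_amount)

print(f"Bank pays ~${annual_fee_paid_by_bank:,.0f}/year for protection on ${protected_amount:,.0f}")
print(f"Bank's hit if 3% of the book sours:  ${bank_loss(loan_portfolio * 0.03):,.0f}")
print(f"Bank's hit if 12% of the book sours: ${bank_loss(loan_portfolio * 0.12):,.0f}")
# The loans never leave the bank's books; it is paying someone else to take the first hit.
```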
I also believe they're going to need bigger hedges, because I don't think there is enough actual demand for AI to fill the data centers being built, and I think most data center loans end up underwater within the next two years. I realize we've taken a great deal of words to get here, but every single part was necessary to explain what I think happens next. Let's start by quoting my premium newsletter from a few weeks ago: You see, every little link in the chain of pain is necessary to understand things. In really simple terms, I believe that almost every investment in a data center or AI startup may go to zero. Let me explain.

If we assume that 50% of the $171.5 billion in data center debt (so $85.75 billion) is in GPUs, that's around 3.2GW of data center capacity, based on my model of NVIDIA's approximate split of sales between different AI GPUs from my premium piece last week. The likelihood of the majority of these projects being A) completed within the next year and B) completed on budget is very, very small. Every delay increases the likelihood of default, as each of these projects is heavily debt-based. The customers of these projects are either hyperscalers (who are only "doing AI" because they have no other hypergrowth ideas and because Wall Street currently approves) or AI startups, all of whom are unprofitable. While there are potentially hedge funds or other companies looking for "private AI" integrations, I think this is a very, very small market.

On top of that, AI compute itself may not be profitable, and because, by my estimate, everybody has spent about $85 billion on filling data centers with the same GPUs, the aggregate price of renting out GPUs will decline. The price of renting a Blackwell GPU has already declined to an average of $4.41 an hour according to Silicon Data, and that's before the majority of Blackwell capacity comes online. Yet the customer base shrinks from there, because the majority of AI startups aren't actually renting GPUs — they build products on top of models built by OpenAI or Anthropic, who have made it clear they're buying capacity from either hyperscalers or, in OpenAI's case, getting Oracle or CoreWeave to build it for them. Why? Because building your own model is incredibly capital-intensive, and it's hard to tell if the results will be worth it.

Now, let's assume — I don't actually believe it will, but let's try anyway — that all of that 3.2GW of capacity comes online. How much compute does an AI company use? OpenAI claims it has 2GW of capacity as of the end of 2025, and is allegedly approaching 900 million weekly active users. I don't think there are any AI companies with even 10% of that userbase, but even if there were, OpenAI spent $8.67 billion on inference through the end of September. Who can afford to pay even 10% of that a year? Or 5%? Yet in reality, OpenAI is likely more indicative of the overall compute spend of the entire AI industry. As I've said, most companies are powered not by their own GPU-driven models, but by renting access to models from other providers. OpenAI and Anthropic spent a combined $11.33 billion on compute (Azure and AWS respectively) through the first nine months of this year, and they are the two largest consumers of AI compute, which suggests two things: In fact, it would take funneling every single dollar of venture capital — over $200 billion — into AI compute every single year, and then some, just to provide the revenue to justify these deals.
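For anyone who wants to check the 3.2GW arithmetic, here's a rough reverse-engineering of it. The roughly $27 million per megawatt of GPU IT load is an assumed blended figure chosen to line up with the estimate above, not the actual model from the premium piece:

```python
# Rough reverse-engineering of the figure above. The blended dollars-per-megawatt number is an
# assumption, not the actual per-GPU model referenced in the piece.
data_center_debt = 171.5e9
gpu_share = 0.50
gpu_spend = data_center_debt * gpu_share          # ~$85.75B assumed to be GPUs

assumed_cost_per_mw_of_it_load = 26.8e6           # assumption: blended $/MW across Blackwell rack types
implied_mw = gpu_spend / assumed_cost_per_mw_of_it_load

print(f"GPU spend: ${gpu_spend / 1e9:.2f}B -> ~{implied_mw / 1000:.1f} GW of GPU IT load")
# Roughly 3.2GW, nearly all of it arriving on the market within the same window.
```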
In the space of a year, Microsoft Azure made $75 billion , Google Cloud $43 billion and Amazon Web Services $100 billion .  Need more proof? Still don’t believe me? Then skip to page 18 of NVIDIA’s most-recent earnings : If there’s such incredible, surging demand, why exactly is NVIDIA spending six fucking billion dollars a year in 2026 and 2027 on cloud compute ? NVIDIA doesn’t need the compute — it just shut down its AWS rival DGX Cloud ! It looks far more like NVIDIA is propping up an industry with non-existent demand. I’m afraid there is no secret AWS-sized spend waiting in the wings for the right moment to pounce. There is no secret demand wave, nor is there any capacity crunch that is holding back incredible swaths of revenue. Oracle’s $523 billion in remaining performance obligations are made up of OpenAI, Meta, and fucking NVIDIA .  For AI data centers to make sense, most startups would have to start becoming direct users of AI compute , while also spending more on cloud compute services than they’ve ever spent. The largest consumers of AI compute are both unprofitable, unsustainable monstrosities.  Eventually, reality will dawn on one or more of these banks. Projects will get delayed thanks to weather, or budgetary issues, or when customers walk away ( as just happened to data center REIT Fermi ). Loan payments will start going unpaid. Elsewhere, AI startups will keep asking for money, again and again, and for a while they’ll keep raising, until the valuations get too high, or VC coffers get too low.  You’re probably gonna say at this point that Anthropic or OpenAI might go public, which will infuse capital into the system, and I want to give you a preview of what to look forward to, courtesy of AI labs MiniMax and Zhipu (as reported by The Information), which just filed to go public in Hong Kong.  Anyway, I’m sure these numbers are great- oh my GOD ! In the first half of this year, Zhipu had a net loss of $334 million on $27 million in revenue , and guess what, 85% of that revenue came from enterprise customers. Meanwhile, MiniMax made $53.4 million in revenue in the first nine months of the year, and burned $211 million to earn it. It is time to wake up. These are the real-life costs of running an AI company. OpenAI and Anthropic are going to be even worse. This is why nobody wants to take AI companies public. This is why nobody wants to talk about the actual costs of AI. This is why nobody wants you to know the hourly cost of running a GPU, and this is why OpenAI and Anthropic both burn billions of dollars — the margins fucking stink , every product is unprofitable , and none of these companies can afford their bills based on their actual cashflow. Generative AI is not a functional industry, and once the money works that out, everything burns. Though many AI data centers boast of having tenancy agreements, remember that these agreements are either with AI startups that will run out of money or hyperscalers with legal teams numbering in the thousands. Every single deal that Microsoft, Amazon, Meta, Google or NVIDIA signs is riddled with outs specifically hedging against this scenario, and there won’t be a damn thing that anybody can do if hyperscalers decide to walk away. Before then, NVIDIA’s bubble is likely to burst. 
As I discussed a few weeks ago, NVIDIA claims to have shipped six million Blackwell GPUs, and while it may be employing very dodgy maths (claiming each Blackwell GPU is actually two GPUs because each one has two chips), my modeling of its last three quarters suggests that NVIDIA shipped around 5.33GW worth of GPUs — and based on reading about every single data center I can find, it doesn't appear that many have been built and powered on.

Worse still, NVIDIA's diversified revenue is collapsing. In Q1FY26, two customers represented 16% and 14% of revenue; in Q2FY26, two customers represented 23% and 16% of revenue; and in Q3FY26, four customers represented 22%, 15%, 13% and 11% of total revenue, with all that money going toward either GPUs or networking gear. I go into detail here, but I put it in a chart to show you why this is bad: In simpler terms, NVIDIA's revenue is no longer coming from a diverse swath of customers. In Q1FY26, NVIDIA had $30.84 billion of diversified revenue, Q2 $28.51 billion, and Q3 $22.23 billion.

NVIDIA GPUs are astronomically expensive — $4.5 million for a GB300 rack of 72 B300 GPUs, for example — and filling data centers full of them requires debt unless you're a hyperscaler. While I can't say for sure, I believe NVIDIA's diversified revenue collapse is a sign that smaller data center projects are starting to have issues getting funded, and/or hyperscalers are pulling back on their GPU purchases. To look through the eyes of an AI booster — all I'm seeing is blue and yellow, as usual! — one might say that these big customers are covering the lost revenue, but the reality is that these big projects are run on debt issued by banks that are becoming increasingly worried about nobody paying them back.

The mistake that every investor, commentator, analyst and member of the media makes about NVIDIA is believing that its sales are an expression of demand for AI compute, when they're really more of a statement about the availability of debt from banks and private credit. Similarly, the continued existence of AI startups is an expression of the desperation of venture capital, and the continuing flow of massive funding rounds is a sign that venture capitalists see no other avenues for growth.

Eventually, data centers are going to go unbuilt, and data center debt packages will begin to fall apart. Remember, Oracle's $38 billion data center deal is actually yet to close, much like Stargate New Mexico is yet to close. These deals, while seeming like they're trending positively, are both incredibly important to the future of the AI bubble, and any failure will spook an already-nervous market. Only one link in the chain needs to break. Every part of the AI bubble — this fucking charade — is unprofitable, save for NVIDIA and the construction firms erecting future laser tag arenas full of negative-margin GPUs.

What happens if the debt stops flowing to data centers? How will NVIDIA sell those 20 million Blackwell and Vera Rubin GPUs? What happens if venture capitalists start running low on funds, and can't keep feeding hundreds of millions of dollars to AI startups so that those startups can feed the money to Anthropic or OpenAI? What happens to OpenAI and Anthropic's already negative-margin businesses when their customers run out of money? What happens to Oracle or CoreWeave's work-in-progress data centers if OpenAI can't pay its bills? What happens to Anthropic's $21 billion of Broadcom orders, or its tens of billions of Google Cloud spend?
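The "diversified revenue" figures above are just total quarterly revenue minus the customers NVIDIA discloses as being over 10% of revenue. Here's a minimal sketch that reproduces them; the Q1FY26 total (roughly $44.1 billion) comes from NVIDIA's own filing, while the other totals appear later in this piece:

```python
# Diversified revenue = total quarterly revenue minus the disclosed >10% customers.
quarters = {
    "Q1 FY26": {"total": 44.06e9, "big_customer_shares": [0.16, 0.14]},
    "Q2 FY26": {"total": 46.7e9,  "big_customer_shares": [0.23, 0.16]},
    "Q3 FY26": {"total": 57.0e9,  "big_customer_shares": [0.22, 0.15, 0.13, 0.11]},
}

for name, q in quarters.items():
    concentrated = sum(q["big_customer_shares"]) * q["total"]
    diversified = q["total"] - concentrated
    print(f"{name}: ~${diversified / 1e9:.1f}B from everyone else "
          f"({sum(q['big_customer_shares']) * 100:.0f}% concentrated in disclosed customers)")
# Roughly $30.8B, $28.5B and $22.2B -- the same downward slope described above.
```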
In the last year, I estimate I've been asked the question "what if you're wrong?" over 25 times. Every single time the question comes with an undercurrent of venom — the suggestion that I'm being an asshole for daring to question the wondrous AI bubble. Every single person who has asked this has been poorly read — both in terms of my work and the surrounding economics and technological possibilities of Large Language Models — and believes they're defending technology, when in reality they're defending growth, and the Rot Economy's growth-at-all-costs mindset. In many cases they are not excited about technology, but about the prospect of being first in line to lick an already-sparkling boot.

This has never been about progress or productivity. If it were, we'd actually see progress, or productivity boosts, or anything other than the frothiest debt and venture markets of all time. Large Language Models do not create novel concepts; they are inconsistent and unreliable, and even the "good" things they do vary wildly thanks to the dramatic variance of a giant probability machine. LLMs are not good enough for people to pay regular software prices at any scale, and the consequence of this is that every single dollar spent on GPUs has served exactly one purpose: manipulating the value of these companies' stocks. AI does not have the business returns to show for itself, and it may have negative gross margins. It is inconsistent, ugly, unreliable, expensive and environmentally ruinous, pissing off a large chunk of consumers and underwhelming most of the rest, other than those convinced they're smart for using it or those who have resigned themselves to giving up at the sight of a confidence game sold by a tech industry that stopped making products primarily focused on solving the problems of consumers or businesses some time ago.

You may say that I'm wrong because Google, Microsoft, Meta and Amazon continue to have healthy net income and revenue growth, but as I previously said, these companies are not disclosing their AI revenues, and their existing businesses are still growing due to the massive monopolies they've built.

And I want to make a plea to AI boosters and bullish analysts alike: you are being had. Satya Nadella, Sam Altman, Dario Amodei, Jensen Huang, Mark Zuckerberg, Larry Ellison, Safra Catz, Elon Musk, Clay Magouyrk, Mike Sicilia, Michael Truell, Aravind Srinivas — all of them are laughing at you behind your back, because they know that you are never going to ask the obvious questions that would defeat my arguments, and know that you will never, ever push back on them. The enshittification of the shareholder has the downstream effect of an enshittification of the media and Wall Street analysts writ large. These companies own you. They treat you with disdain and condescension, because they know you'll let them. They know that no sell-side analyst will ever ask them "when will you be profitable?" or "how much are you spending?", and that if you do ask, you will experience temporary amnesia and forget whatever answer they give, because these are the incentives of an enshittified stock market, where stocks are not extrapolations of shareholder value but chips in a fucking casino where the house always wins and changes the rules every three months.
They have changed the meaning of "stock" to mean "what the market will reward," and when you allow companies to start dictating the terms of what will be rewarded — as neoliberalism, Friedman, Reagan, Nixon, NAFTA, Thatcher, and every policy like them have, orienting everything exclusively around growth — companies eventually cut off any power that might force a reevaluation of the fundamental terms of capitalism, and the incentives within it.

Focusing on growth-at-all-costs thinking naturally encourages, enables, and empowers grifters, because all they ever have to promise is "more" — more users, more debt, more venture, more features, more everything. The very institutions that are meant to hold companies accountable — analysts and the media — are far more desperate to trade scoops for interviews, to pull punches, to find ways to explain why a company is right rather than understand what the company is doing, and this is something pushed not by writers, but by editors that want to make sure they stay on the right side of the largest companies.

And if I'm right, OpenAI's death will kill off most if not all other AI startups, Anthropic included. Every investor that invested in AI will take massive losses. Every startup that builds on the back of these companies' models will fold, if it hasn't already folded due to the massive costs and upcoming price increases. The majority of GPU-based data centers — which really have no other revenue stream — will be left inert, likely powered down, waiting for the day that somebody works it all out, which they won't, because literally everybody has these things now and I truly believe they've tried everything.

I don't "hate on AI" because I am a hater; I hate on it because it fucking sucks, and what I'm worried about happening seems to be happening. The tech industry has run out of hypergrowth ideas, and in its desperation hitched itself to the least-profitable hardware and software in history, then spent three straight years lying to the media, analysts and shareholders about what was possible. And they were allowed to lie, because everybody lapped it the fuck up. They didn't need to worry about convincing anybody. Financiers, editors, analysts and investors were already drafting reasons why they were excited about something they didn't really understand or believe in, other than the fact it promised more. This is what happens when you make everything about growth: everybody becomes stupid, ready to be conned, ready to hear what the next big growth thing is, because asking nasty questions gets you fucking fired. And what's left is a tech industry that doesn't build technology, but growth-focused startups.

Look at Silicon Valley. Do you see these fucking people ever building a new kind of computer? Do you believe these men are fit to even imagine a future? These men care about the status quo; they always want more software to sell, or more ways to increase advertising revenue, so that the stock number goes up and they receive more money in the form of stock compensation. They are not concerned with actual business value, honest exchange of value, or societal value. Their only concern is shareholder value, which is how they are incentivized by their boards of directors.

And really, if you're still defending AI: does it matter to any of you that this software fucking sucks? If you think it's good, you don't know much about software! It does not, at any point, respond precisely to a user or programmer's intent. That's bad software.
I don't care that you have heard developers really like it, because that doesn't fix the underlying economic and social poison in AI. I don't care that it sort of replaced search for you. I don't care if you "know a team of engineers that use it." Every single AI app is subsidized, its price is fake, you are being lied to, and none of this is real.

When the collapse happens, do not let a single person that waved off the economics have a moment's peace. Do not let anybody who sat in front of Dario Amodei or Sam Altman and squealed with delight at whatever vacuous talking points they burped out forget that they didn't push them, they didn't ask hard questions, they didn't worry or wonder or feel any concern for investors or the general public. Do not let a single analyst that called AI skeptics "luddites" or equated them to flat Earthers hear the end of it. Do not let anybody who claimed that we "lost control of AI" or that AI "blackmailed developers" go without their complimentary "Fell For It Again" badge.

When it happens, I promise I won't be too insufferable, but I will be calling for accountability for anybody who boosted AI 2027, who sat in front of Sam Altman or Dario Amodei and refused to ask real questions, and for anyone who collected anything resembling "detailed notes" about me or any other AI skeptic. If you think I'm talking about you, I probably am, and I have a question: why didn't you approach the AI companies with as much skepticism as you did the skeptics? I also promise you, if I'm wrong, I'll happily explain how and why, and I'll do so at length, too. I will have links and citations; I'll do podcast episodes. I will make a good faith effort to explain every single failing, because my concern is the truth, and I would love everybody else to follow suit. Do you think any booster will extend the same courtesy? Do you think they care about the truth? Or do they just want to get a fish biscuit from Sam Altman or Jensen Huang?

Pathetic.

It's times like this when it's necessary to make the point that there is absolutely "enough money" to end hunger, build enough affordable housing, or have universal healthcare, but those things are deemed "too expensive" or "not profitable enough," despite having a blatant and obvious economic benefit: more people would have happier, better lives and — if you must see the world in purely reptilian terms — many more people would have disposable income and the means of entering the economy on even terms. By contrast, investments in AI do not appear to be driving much economic growth at all, other than in the revenue driven to NVIDIA from selling these GPUs, and the construction of the data centers themselves. Had Microsoft, Google, Meta and Amazon sunk $776 billion into building housing and renting it out, the results would be uneven, we would have horrible new landlords, and it would still be a great deal better than a world where nearly a trillion dollars is being wasted propping up a broken, doomed industry, all because the people in charge are fucking idiots obsessed with growth.

The future, I believe, spells chaos, and I am trying to rise to the occasion. My work has transformed from being critical of the tech industry to a larger critique of the global financial system. I've had to learn accountancy, the mechanics of venture and private equity, and all sorts of annoying debt-related language, all so that I can sufficiently explain what's going on. I see several worrying signs I have yet to fully understand.
The Discount Window — where banks go when they need quick liquidity as a last resort — has seen a steady increase in loans on its books since September 2024, suggesting that financial institutions are facing liquidity issues, and the last few times that this has happened, financial crises followed. There is also a brewing bullshit crisis in Private Equity, which is heavily invested in data centers.

In September, auto parts maker First Brands collapsed in a puff of fraud, with billions of dollars "vanishing" after it double-pledged the same collateral to multiple loans, hid off-balance sheet liabilities, falsified invoices, and even leased some of the parts it sold. This wasn't a case where smaller lenders were swindled, either — global investment banks UBS and Jefferies both lost hundreds of millions of dollars, along with asset manager BlackRock through associated funds. Subprime auto lender Tricolor collapsed in similar circumstances, burning JPMorgan, Jefferies, and Zions Bancorporation, which also loaned money to First Brands. A similar situation is currently brewing with solar company PosiGen, which recently filed for bankruptcy after, you guessed it, double-pledging collateral for loans. One of its equity financing backers is Magnetar Capital, which invested in CoreWeave.

What appears to be happening is simple: large financial institutions are issuing debt without doing the necessary due diligence or considering the future financial health of the companies involved. Private Equity firms are also heavily leveraged, saddling acquisitions with debt, and playing silly games where they "volatility launder" — deliberately choosing not to regularly revalue the assets they hold, to make returns (or the value of those assets) look better to their investors. I don't really know what this means right now, but I am worried that these data center loans have been entered into under similarly questionable circumstances. Every single data center deal is based on the phony logic that AI will somehow become profitable one day, and if there's even one First Brands situation, the entire thing collapses.

I realize this is the longest thing I've ever written (or should I say, so far?), and I want to end it on a positive note, because hundreds of thousands of people now read and listen to my work, and it's important to note how much support I've received and how awesome it is seeing people pick up my work and run with it. I want to be clear that there is very little that separates you from the people running these companies, or many analysts. I have taught myself everything I know from scratch, and I believe you can too, and I hope I have been able to, and will be able to, teach you everything I know, which is why everything I write is so long. Well, that and I'm working out what I'm going to say as I write it.

The AI bubble is an inflation of capital and egos, of people emboldened and outright horny over the prospect of millions of people's livelihoods being automated away. It is a global event where we've realized how the global elite are just as stupid and ignorant as anybody you'd meet on the street — Business Idiots that couldn't think their way out of a paper bag, empowered by other Business Idiots that desperately need to believe that everything will grow forever.
I have had a tremendous amount of help in the last year — from my editor Matt Hughes, Robert and Sophie at Cool Zone Media, Better Offline producer Matt Osowski, Kakashii and JustDario (two pseudonymous analysts that know more about LLMs and finance than most people I read), Kasey Kagawa, Ed Ongweso Jr., Rob Smith, Bryce Elder and Tabby Kinder of the Financial Times, all of whom have been generous with their time, energy and support. A special shoutout to Caleb Wilson (Kill The Computer) and Arif Hasan (Wide Left), my cohosts on our NFL podcast 60 Minute Drill.

And I've heard from thousands of you about how frustrated you are, and how none of this makes sense, and how crazy you feel seeing AI get shoved into every product, and how insane it makes you feel when somebody tells you that LLMs are amazing when their actual outputs fucking suck. We are all being lied to, we all feel gaslit and manipulated and punished for not pledging ourselves to Sam Altman's graveyard smash, but I believe we are right.

In the last year, my work has gone from being relatively popular to being cited by multiple major international news organizations, hedge funds, and internal investor analyses. I was profiled by the Financial Times, went on the BBC twice, and watched as my Subreddit, r/BetterOffline, grew to around 80,000 visitors a week and became one of the 20 largest podcast Subreddits, which is a bigger deal than it sounds. I believe there are millions of people that are tired of the state of the tech industry, and disgusted at what these people have done to the computer. I believe that they outnumber the boosters, the analysts and the hype-fiends that have propped up this era. I believe that a better world is possible by creating a meaningful consensus around making the powerful prove themselves to us, rather than us doing the proving for them. I am honoured that you read me, and even more so if you read this far. I'll see you in 2026.

Meta's business is both supporting and profiting from organized crime, and at 10% of its revenue, it's also kind of dependent on it.

Meta is using deliberate and insidious accounting tricks to act like a data center that it is paying to build and will be the sole tenant of is somehow an "off balance sheet" operation.

In Stage 1, things are good for users: the platform is free, things are easy to use, and thus it's really simple for you and your friends to adopt and become dependent on it.

In Stage 2, things become bad for consumers, but good for business customers: the platform begins forcing users to do "profitable" things — like showing them more adverts by making search results worse — all while making it difficult to migrate to another one, either through locking in your data or the tacit knowledge that moving platforms is hard, and your friends are usually in one place. Businesses sink tons of money into the platform, knowing that users are unlikely to leave, and make good money buying ads against a populace that increasingly stays because it has to, as there are no other options.

In Stage 3, things become bad for consumers and businesses, but good for shareholders: the platforms begin to deteriorate to the point that usability is pushed to the brink, and businesses — who are now dependent on the platform because monopolies have pushed out every alternative platform to advertise or reach consumers — begin to see their product crumble, all in favour of shareholder capital, which only cares about stock value, net income and buybacks.
According to its latest quarterly filings, Microsoft spent $34.9 billion on capital expenditures, Amazon $34.2 billion, Meta $19.37 billion, and Google $24 billion. The common mantra is that these companies are "spending all this money on GPUs," but that doesn't match up with NVIDIA's revenues. NVIDIA's last quarterly earnings said that four direct customers made up more than 10% of revenue — 22% ($12.54bn), 15% ($8.55bn), 13% ($7.41bn) and 11% ($6.27bn) out of $57 billion. While this sort of lines up with capex spend, it doesn't if you shift back a quarter, when Microsoft spent $21.4 billion, Meta $17.01 billion, Amazon $31.4 billion and Google $22.4 billion, with the vast majority on "technical infrastructure." In the same quarter, NVIDIA had only two customers that accounted for more than 10% — one at 23% ($10.7bn) and one at 16% ($7.47bn) out of $46.7 billion. Another quarter back, and Microsoft spent $22.6 billion, Meta $13.69 billion, Google $17.2 billion and Amazon $22.4 billion. In the same quarter, NVIDIA had two customers accounting for more than 10% of revenue — 16% ($7.05bn) and 14% ($6.168bn).

Where, exactly, is all this money going? In Microsoft's latest earnings (Q1FY26), it said that $19.39 billion went to "additions to property and equipment," with "roughly half of [its total capex] spend on short-lived assets, primarily GPUs and CPUs." A quarter back (Q4FY2025), additions to property and equipment were $16.74 billion, with "roughly half…[spent] on long-lived assets that will support monetization over the next 15 years and beyond." Let's assume that Microsoft is NVIDIA's biggest customer every single quarter — customer A, spending $12.5 billion (out of $34.9 billion), $10.7 billion (out of $21.4 billion) and $7.049 billion (out of $22.6 billion) a quarter. Assuming that Microsoft is only buying NVIDIA's Blackwell GPUs (forgive the model numbers, but it's based on my own modeling; let's say 40% B200s, 30% GB200s, 10% B300s and 20% GB300s), that works out to about 457MW of IT load for Q1FY26, 391MW for Q4FY25 and (adjusting to include more H200s, as the B300/GB300s were not shipping yet) 263MW for Q3FY25. A toy version of this capex-to-megawatts conversion is sketched below.

Has Microsoft built 1.11GW of data centers in that time? Apparently! It claims it added 2GW in the last year, but Satya Nadella claimed in November that Microsoft had chips in inventory it couldn't install due to a lack of power. In any case, where did the remaining $22.4 billion, $11.9 billion and $15.5 billion in capex flow? We know there are finance leases. What for? More GPUs? What is the actual output of these expenditures?

OpenAI appears to have net 360 payment terms from CoreWeave — meaning it can pay literally a year from invoice. Per CoreWeave's Q3 earnings (page 19), "...on occasion, the Company has granted payment terms up to net 360 days." Per CoreWeave's loan agreement (page 12), under "contract realization ratio," "the sum of Projected Contracted Cash Flows applicable for the corresponding three-month period as determined on a net 360 basis." CoreWeave is required to maintain something called a "contract realization ratio" of 0.85x — meaning that CoreWeave has to make at least 85 cents of every expected dollar or it is in default on its loan. This is important to note because it means that if, say, OpenAI decides not to pay up in a year, CoreWeave will be in real trouble.

Blue Owl was present in every single Stargate deal, other than the $38 billion package being raised by Vantage.
It also was involved in a $1.3 billion Australian data center debt package by virtue of owning Stack Infrastructure . Remember that name.  MUFG (Mitsubishi UFJ Financial Group) was present in 17 out of 26 of the deals, including three separate CoreWeave financings, Stargate New Mexico ($18 billion), the $38 billion Stargate TX/WI deal for Oracle , SoftBank’s bridge loan , and a $5 billion “green loan” package for Vantage Data Centers (who are the ones building the Stargate TX/WI data centers). JP Morgan Chase was involved in eight deals, but they were some of the largest — CoreWeave’s October 2024 financing, DDTL 3.0 and November financing , the funding behind Stargate Abilene , the $38 billion Oracle deal, and Blue Owl’s acquisition of IPI Partners’ Data Centers in 2024 . They also were part of SoftBank’s bridge loan. Deutsche Bank was involved in SoftBank’s bridge loan, but also three smaller deals: a $212 million data center in Seoul , CoreWeave’s 2024 debt, CoreWeave’s November financing , and a data center in Latin America. It also was part of a $610 million data center project in Virginia , as well as a €1 billion data center project in Germany (invested in with NVIDIA). BNP Paribas? Seven deals: CoreWeave’s DDTL 3.0, Stargate New Mexico, Stargate WI/TX, the acquisition of IPI Partners by Blue Owl, the $212m deal in Seoul, and a data center in Chile . Morgan Stanley? Eight, including CoreWeave’s October 2024, DDTL 3 and November loans, Stargate New Mexico, Stargate WI/TX, EQT’s EdgeConnex financing deal , and, of course, SoftBank’s bridge loan. SMBC (Sumitomo Mitsui Banking Corporation) ? Seven deals, all notable — CoreWeave’s DDTL 3.0 and November financing, Stargate New Mexico, Stargate TX/WI, a data center in Rowan MD (also involving MUFG, TD Securities and HSBC), as well as the data centers in Chile and Latin America. Oh, and SoftBank’s bridge loan. The enshittified stock market, pumped not by actual cashflow or productivity but by signals read by analysts and investors trained over decades to push consumer investors to invest in magnificent 7 stocks that represent as much as 40% of the value of the S&P 500 , their values pumped by analysts and the media misleading investors into believing that their revenue growth is anything to do with AI. Venture capital’s liquidity crisis, one peaking at a time when AI startups have become more capital-intensive than any other point in history. Ballooning, centralized data center debt, funded based on customer contracts or built for demand that doesn’t exist, funding massive data centers of GPUs that immediately become commoditized as a result of the hysteria. The market for AI compute is very, very small. If you assume that Anthropic spent the same on Google Cloud as it did on AWS ($2.66 billion, for a total of $5.32 billion), and add CoreWeave’s revenue ($5 billion, most of which was either OpenAI (via Microsoft) or NVIDIA), there doesn’t appear to be an AI compute market, outside of serving these two companies. The market for AI compute is not actually growing. In the last two years, no new major consumers of AI compute have emerged. Every company that has signed a large compute deal has either been OpenAI, Anthropic or a hyperscaler. Even if Cursor were to dump its entire $2.3 billion in funding into AI compute, that would still not be enough.
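As promised above, here's a toy version of the capex-to-megawatts conversion used in the Microsoft discussion. The 40/30/10/20 spend split comes from the piece; the all-in dollars-per-megawatt figure for each rack family is an assumption (hardware plus networking at rough street pricing), not the actual model behind the ~457MW figure:

```python
# Toy capex-to-megawatts conversion. The spend split is from the piece; per-family $/MW figures are assumptions.
quarterly_gpu_spend = 12.5e9                 # Q1 FY26 "customer A" spend attributed to Microsoft
spend_split = {"B200": 0.40, "GB200": 0.30, "B300": 0.10, "GB300": 0.20}
assumed_cost_per_mw = {"B200": 24e6, "GB200": 30e6, "B300": 28e6, "GB300": 35e6}

total_mw = sum(
    quarterly_gpu_spend * share / assumed_cost_per_mw[family]
    for family, share in spend_split.items()
)
print(f"Implied IT load: ~{total_mw:.0f} MW")   # lands around 450MW, the same ballpark as ~457MW above
```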


Premium - How The AI Bubble Bursts In 2026

Hello and welcome to the final premium edition of Where's Your Ed At for the year. Since kicking off premium, we've had some incredible bangers that I recommend you revisit (or subscribe and read in the meantime!): I pride myself on providing a ton of value in these pieces, and I really hope if you're on the fence about subscribing you'll give me a look. Last week has been a remarkably grim one for the AI industry, resplendent with some terrible news and "positive stories" that still leave investors with a vile taste in their mouth. Let's recount: There are a few common threads between all of these stories: And the other key thread is the year 2026. Next year is meant to be the year that everything changes. It was meant to be the year that OpenAI had a gigawatt of data centers built with Broadcom and AMD , and when Stargate Abilene's 8 buildings were fully built and energized . 2026 is meant to be the year that OpenAI opened Stargate UAE , too. Here in reality , absolutely none of this is happening, and I believe that 2026 is the year when everything begins to collapse. In today's piece, I'm going to line up the sharp objects sitting right next to an increasingly-wobbling AI bubble, and why everything hinges on a looming cash crunch for OpenAI, AI data centers, those funding AI data centers, and venture capital itself. The Hater's Guide To NVIDIA , a comprehensive guide to the largest and weirdest company on the stock market, which was several weeks ahead of most on the "GPUs in warehouses" story. Big Tech Needs $2 Trillion In AI Revenue By 2030 or They Wasted Their Capex , a mathematical breakdown of how big tech has to make so much money before 2030 or it will have wasted every penny building AI data centers. Oracle and OpenAI Are Full Of Crap , where I broke down how Oracle doesn't have the capacity and OpenAI doesn't have the money to pay for their $300 billion compute deal, predicting the current state of affairs with Oracle's data centers months in advance. The Ways The AI Bubble Will Burst , a detailed piece about how the collapse of AI data center funding will eventually lead to the collapse of AI startup funding, creating a " chain of pain " that eventually leads to nobody buying GPUs and the end of this era. Disney is investing $1 billion in OpenAI in a deal where OpenAI will " bring beloved characters from Disney's brands to Sora ," including a three-year licensing deal. One might think that a licensing deal is weird, given that Disney is investing, and one would be right! Apparently OpenAI is "paying" to license Disney's characters entirely in stock warrants , and Disney has the opportunity to buy an undisclosed amount of future stock. Amazon is in discussions to invest $10 billion in OpenAI at a valuation of over $500 billion, per The Information , and plans to use Amazon's Trainium AI server chips (its in-house competitor to NVIDIA's GPUs that some startups, per Business Insider , claim have "performance challenges" and "underperformed" NVIDIA's years-old H100 chips), apparently. Any excitement you might have over this deal should be tempered by the fact that OpenAI and Amazon Web Services signed a $38 billion deal back in November , meaning that this is likely a situation where Amazon would hand money to OpenAI, which would then hand the money right back to Amazon, and that's assuming any real money actually changes hands. Though this is just one source, I've heard tell that Amazon, at times, sells Trainium at a loss to get customers. 
Then again, I think this might be the case with all AI compute. Bloomberg reported that Oracle has pushed back the completion date of multiple data centers being built for OpenAI, "largely due to labor and material shortages." Oracle responded, saying that "there have been no delays to any sites required to meet our contractual commitments, and all milestones remain on track." It isn't clear which data centers these are, but a clue might be...

...that Blue Owl has pulled out of funding a $10 billion deal for a data center for Oracle/OpenAI in Michigan, per The Financial Times. This is a very, very, very bad sign. Blue Owl is arguably the loosest, friendliest lender in the data center space, and while Oracle claims another partner is talking to Blackstone, one has to wonder whether Blackstone is lining up to fund "the deal that Blue Owl couldn't handle." Blue Owl is the pre-eminent lender in data center financing. It backed Meta's $30 billion Hyperion data center project with $3 billion of its own capital, it sunk $3 billion into OpenAI's Stargate New Mexico deal, and it put an indeterminate amount into Stargate Abilene, likely somewhere between $2.5 billion and $5 billion, on top of a $7.1 billion loan provided to Blue Owl and developer Crusoe to finish the project, on top of another $5 billion joint venture with Chirisa and Powerhouse to build a data center for rickety, nasty AI compute company CoreWeave. So why did this deal fall apart? Well, according to the Financial Times, "lenders pushed for stricter leasing and debt terms amid shifting market sentiment around enormous AI spending including Oracle's own commitments and rising debt levels." If only somebody could have warned them, somehow.

Though I'll get into more detail after the premium break, both Oracle and Broadcom reported earnings, and both saw their stocks get dumped like a deadbeat boyfriend with a bad attitude and credit card debt. In Oracle's case it was the same old story — lots of debt, decaying margins and negative cash flow, along with a bunch of commitments. Did I mention that Oracle has $248 billion in upcoming data center lease commitments? More than double those made by Microsoft? In Broadcom's case, things were a little weirder. While it beat on estimates, it partly did so, per The Coastal Journal, by playing funny non-GAAP (generally accepted accounting principles) games with things like how it handles stock compensation and amortization to raise its "adjusted" earnings per share, boosting non-GAAP revenues by $4.4 billion.

The other problem was related to OpenAI. Back in October, Broadcom and OpenAI announced a "strategic collaboration" for "10 gigawatts of customer AI accelerators," with "Broadcom to deploy racks of AI accelerator and network systems targeted to start in the second half of 2026, to complete by 2029." I'll get into the nitty gritty later, but CEO Hock Tan said that Broadcom "did not expect much [revenue]" in 2026 from the deal.

CoreWeave's Denton Data Center has become a nightmare, with, per the Wall Street Journal, heavy rains and winds causing "a roughly 60-day delay" that prevented contractors from pouring concrete for the data center, pushing the completion date back by "several months," on top of "additional delays caused by revisions to design," for a data center specifically built to lease to OpenAI.

OpenAI doesn't have cash. The Disney licensing deal? Paid for in stock. The AWS contract?
Amazon has to give OpenAI $10 billion to pay for it, because OpenAI doesn't have the cash. Broadcom's deal with OpenAI? "not much" revenue in 2026, probably because OpenAI doesn't have the cash. The Money For Data Centers Is Running Out. Blue Owl is the loosest lender in the universe, and if it’s having trouble raising money, everybody will very soon. Investors are aggressively dumping Oracle because it keeps trying to build more data centers for OpenAI, a company that does not have the money to pay for its compute. AI Is Wearing Out Its Welcome, and the AI Bubble Narrative Is Impossible To Ignore It used to be (back in September, at least) that you could announce a big, stupid deal with OpenAI and see a 40% stock bump . Now the markets are suddenly thinking "huh, how is it gonna pay that?" Oracle's stock also got dumped because it increased capital expenditures in its latest quarter to $12 billion, on analyst expectations of $8.4 billion .


Premium: Mythbusters - AI Edition

I keep trying to think of a cool or interesting introduction to this newsletter, and keep coming back to how fucking weird everything is getting. Two days ago, cloud stalwart Oracle crapped its pants in public, missing on analyst revenue estimates and revealing it spent (to quote Matt Zeitlin of Heatmap News) more than $4 billion more in that quarter than analysts expected on capital expenditures, for a total of $12 billion. The "good" news? Oracle has remaining performance obligations (RPOs) of $523 billion . For those that aren’t fluent in financese, this is future contracted revenue that hasn’t been paid for, or even delivered: So we've got — per Kakashii on Twitter — $68 billion of new compute deals signed in the quarter, with $20 billion from Meta ( announced in October ), and a few other mystery clients that could include the ByteDance/TikTok deal . But wait. Hold the fort — what was that? NVIDIA? NVIDIA? The accelerated computing company? The largest company on the stock market? That NVIDIA? Why is NVIDIA buying cloud compute? The Information reported back in September that NVIDIA was "stepping back from its nascent cloud computing business," intending to use it "for its own researchers." Well, I sure hope those researchers need compute! NVIDIA has, according to its November 10-Q , agreed to $26 billion in cloud compute deals , spending $6 billion in a year each in Fiscal Years 2027 and 2028, $5 billion in FY2029, $4 billion in 2030, and $4 billion in 2031. AI boosters damn near ripped their jorts jumping for joy at the sight of this burst of new performance obligations, yet it seems that the reason that NVIDIA CEO Jensen Huang said back in October that AI compute demand had gone up "substantially" in the last six months was because NVIDIA had stepped in to increase it . It signed a deal to buy $6.3 billion of unused capacity from CoreWeave , another to buy $1.5 billion from Lambda , and now apparently needs to buy even more compute from Oracle, despite Huang saying in November that cloud GPUs are "sold out" , which traditionally means you "can't rent them." We are in the dynasty of bullshit, a deceptive epoch where analysts and journalists who are ostensibly burdened with telling the truth feel the need to continue pushing the Gospel According To Jensen. When all of this collapses there must be a reckoning with how little effort was made to truly investigate the things that executives are saying on the television, in press releases, in earnings filings and even on social media, all because the market consensus demanded that The Number Must Continue Going Up. The AI era is one of mythology, where billions in GPUs are bought to create supply for imaginary demand, where software is sold based on things it cannot reliably do, where companies that burn billions of dollars are rewarded with glitzy headlines and not an ounce of cynicism, and where those that have pushed back against it have been treated with more skepticism and ire than those who would benefit the most from the propagation of propaganda and outright lies. So today I'm giving you Mythbusters — AI Edition. This is the spiritual successor to How To Argue With An AI Booster , where I address the technical, financial and philosophical myths that underpin the endless sales of GPUs and ever-increasing valuation of OpenAI. This is going to be fun , because I truly believe that both the financial and tech press take this all a little too seriously, in the sense that everything is so dull. 
With a handful of exceptions (The Register being the best example), most publications treat financial reporting as something that must be inherently separate from any kind of analysis or criticism. And so, that's why, if a publication calls bullshit on something insane, that call is almost always segmented away in its own little piece. If you asked me why I think this is the case, I'd say it's probably because (excluding those cases of genuine malfeasance and fraud, like Enron and WorldCom and Nortel) we haven't seen anything as egregiously offensive or dishonest as what's emerged from the AI bubble. And so, reporters are accustomed to a level of civility that, frankly, isn't warranted.

I also think the total lack of levity or self-awareness leads to less-effective analysis, too. For example, lots of people are freaking out about Disney investing $1 billion for an equity stake in OpenAI, all while licensing its characters to be used in Sora, and I really think you can simmer the deal down to two points:

Wow, $1 billion? That's going to pay for a whole month of OpenAI's inference! (There's a quick sketch of that math below.)

Regardless of how many guardrails OpenAI puts on Sora (which is currently No. 21 among free apps on the App Store), there is nothing that will stop degenerates from making a video of Goofy flying a plane into a building, or Donald Duck recreating Frank's entrance from Blue Velvet, or Darth Vader saying every slur imaginable, which already happened when Disney launched a generative AI Vader in Fortnite.

Oh, and while I'm here, let's talk about TIME naming the "Architects of AI" its person (people) of the year. Who fuckin' cares! Marc Benioff, one of the biggest AI boosters in the world, owns TIME, and has already run no less than three other pieces of booster propaganda, including everything from "researchers finding that AIs can scheme, deceive or blackmail," to the supposed existence of an "AI arms race," to "coding tools like Cursor and Claude Code becoming so powerful that engineers across top AI companies are using them for virtually every aspect of their work." Are any of these points true? No! But that doesn't stop them being printed! Number must go up! AI bubble must inflate! No fact check! No investigation! Just print! Print AI Now! Make AI Go Big Now! Jensen Sell GPU! Ahhhhhhhhhhh!

Okay, alright, let's go into it. Let's bust some myths. That sounded better in my head.
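On that first point, the math is quick enough to sketch. Assuming the roughly $8.67 billion that OpenAI spent on inference through the end of September (a figure cited later in this newsletter) covers nine months, a month of inference really does run about a billion dollars. A minimal, illustrative sketch:

```python
# Rough check of the "a whole month of inference" quip. The $8.67B figure is the
# January-September inference spend cited elsewhere in this newsletter; the
# nine-month divisor is just that assumed reporting window.
inference_spend_jan_to_sep = 8.67e9
months = 9

monthly_inference_cost = inference_spend_jan_to_sep / months
print(f"Implied monthly inference bill: ~${monthly_inference_cost/1e9:.2f}B")
# ~$0.96B a month, so Disney's $1 billion buys OpenAI roughly one month of inference.
```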


NVIDIA Isn't Enron - So What Is It?

At the end of November, NVIDIA put out an internal memo ( that was leaked to Barron's reporter Tae Kim, who is a huge NVIDIA fan and knows the company very well , so take from that what you will) that sought to get ahead of a few things that had been bubbling up in the news, a lot of which I covered in my Hater’s Guide To NVIDIA (which includes a generous free intro).  Long story short, people have a few concerns about NVIDIA, and guess what, you shouldn’t have any concerns, because NVIDIA’s very secret, not-to-be-leaked-immediately document spent thousands of words very specifically explaining how NVIDIA was fine and, most importantly, nothing like Enron . Anyway, all of this is fine and normal . Companies do this all the time, especially successful ones, and there is nothing to be worried about here , because after reading all seven pages of the document, we can all agree that NVIDIA is nothing like Enron.  No, really! NVIDIA is nothing like Enron, and it’s kind of weird that you’re saying that it is! Why would you say anything about Enron? NVIDIA didn’t say anything about Enron. Okay, well now NVIDIA said something about Enron, but that’s because fools and vagabonds kept suggesting that NVIDIA was like Enron, and very normally, NVIDIA has decided it was time to set the record straight.  And I agree! I truly agree. NVIDIA is nothing like Enron. Putting aside how I might feel about the ethics or underlying economics of generative AI, NVIDIA is an incredibly successful business that has incredible profits, holds an effective monopoly on CUDA ( explained here ), which powers the underlying software layer to running software on GPUs, specifically generative AI, and not really much else that has any kind of revenue potential.  And yes, while I believe that one day this will all be seen as one of the most egregious wastes of capital of all time, for the time being, Jensen Huang may be one of the most successful salespeople in business history.  Nevertheless, people have somewhat run away with the idea that NVIDIA is Enron , in part because of the weird, circular deals it’s built with Neoclouds — dedicated AI-focused cloud companies — like CoreWeave, Lambda and Nebius , who run data centers full of GPUs sold by NVIDIA, which they then use as collateral for loans to buy more GPUs from NVIDIA .  Yet as dodgy and weird and unsustainable as this is, it isn’t illegal , and it certainly isn’t Enron, because, as NVIDIA has been trying to tell you, it is nothing like Enron! Now, you may be a little confused — I get it! — that NVIDIA is bringing up Enron at all. Nobody seriously thought that NVIDIA was like Enron before (though JustDario, who has been questioning its accounting practices for years , is a little suspicious), because Enron was one of the largest criminal enterprises in history, and NVIDIA is at worst, I believe, a big, dodgy entity that is doing whatever it can to survive. Wait, what’s that? You still think NVIDIA is Enron ? What’s it going to take to convince you? I just told you NVIDIA isn’t Enron! NVIDIA itself has shown it’s not Enron, and I’m not sure why you keep bringing up Enron all the time! Stop being an asshole. NVIDIA is not Enron! Look, NVIDIA’s own memo said that “NVIDIA does not resemble historical accounting frauds because NVIDIA's underlying business is economically sound, [its] reporting is complete and transparent, and [it] cares about [its] reputation for integrity.” Now, I know what you’re thinking. 
Why is the largest company on the stock market having to reassure us about its underlying business economics and reporting? One might immediately begin to think — Streisand Effect style — that there might be something up with NVIDIA’s underlying business. But nevertheless, NVIDIA really is nothing like Enron.  But you know what? I’m good. I’m fine. NVIDIA, grab your coat, we’re going out, let’s forget any of this ever happened. Wait, what was that? First, unlike Enron, NVIDIA does not use Special Purpose Entities to hide debt and inflate revenue. NVIDIA has one guarantee for which the maximum exposure is disclosed in Note 9 ($860M) and mitigated by $470M escrow. The fair value of the guarantee is accrued and disclosed as having an insignificant value. NVIDIA neither controls nor provides most of the financing for the companies in which NVIDIA invests. Oh, okay! I wasn’t even thinking about that at all, I was literally just saying how you were nothing like Enron , we’re good. Let’s go home- Second, the article claims that NVIDIA resembles WorldCom but provides no support for the analogy. WorldCom overstated earnings by capitalizing operating expenses as capital expenditures. We are not aware of any claims that NVIDIA has improperly capitalized operating expenses. Several commentators allege that customers have overstated earnings by extending GPU depreciation schedules beyond economic useful life. Rebutting this claim, some companies have increased useful life estimates to reflect the fact that GPUs remain useful and profitable for longer than originally anticipated; in many cases, for six years or more. We provide additional context on the depreciation topic below. I…okay, NVIDIA is also not like WorldCom either. I wasn’t even thinking about WorldCom. I haven’t thought of them in a while.  Per Adam Berger of Ebsco :   …NVIDIA, are you doing something WorldCommy? Why are you bringing up WorldCom?  To be clear, WorldCom was doing capital F fraud , and its CEO Bernie Ebbers went to prison after an internal team of auditors led by WorldCom VP of internal auditing Cynthia Cooper reported $3.8 billion in “misallocated expenses and phony accounting entries.”  So, yeah, NVIDIA, you were really specific about saying you didn’t capitalize operating expenses as capital expenditures. You’re…not doing that, I guess? That’s great. Great stuff. I had literally never thought you had done that before. I genuinely agree that NVIDIA is nothing like WorldCom.  Anyway, also glad to hear about the depreciation stuff, looking forward to reading- Third, unlike Lucent, NVIDIA does not rely on vendor financing arrangements to grow revenue. In typical vendor financing arrangements, customers pay for products over years. NVIDIA's DSO was 53 in Q3. NVIDIA discloses our standard payment terms, with payment generally due shortly after delivery of products. We do not disclose any vendor financing arrangements. Our customers are subject to strict credit evaluation to ensure collectability. NVIDIA would disclose any receivable longer than one year in long-term other assets. The $632M "Other" balance as of Q3 does not include extended receivables; even if it did, the amount would be immaterial to revenue. Erm… Alright man, if anyone asks about whether you’re like famed dot-com crashout Lucent Technologies, I’ll be sure to correct them. After all, Lucent’s situation was really different — well…sort of. 
Lucent was a giant telecommunications equipment company, one that was, for a time, extremely successful (really, really successful, in fact), turned around by the now-infamous Carly Fiorina. From a 2010 profile in CNN: NVIDIA, this sounds great — why wouldn't you want to be compared to Lucen- Oh. So, to put it simply, Lucent was classifying debt as an asset (we're getting into technicalities here, and it sort of was, but it was really counting money from loans as revenue, which is dodgy and bad and accountants hate it), and did something called "vendor financing," which means you lend somebody money to buy something from you. It turns out Lucent did a lot of this. Okay, NVIDIA, I hate to say this, but I kind of get why somebody might say you're doing Lucent stuff. After all, rumour has it that your deal with OpenAI — a company that burns billions of dollars a year — will involve it leasing your GPUs, which sure sounds like you're doing vendor financing... -we do not disclose any vendor financing arrangements- Fine! Fine. Anyway, Lucent really fucked up big time, indulging in the dark art of circular vendor financing. In 1998 it signed its largest deal — a $2 billion "equipment and finance agreement" — with telecommunications company Winstar, which promised to bring in "$100 million in new business over the next five years" and build a giant wireless broadband network, along with expanding Winstar's optical networking. To quote The Wall Street Journal: In December 1999, WIRED would say that Winstar's "small white dish antennas…[heralded] a new era and new mind-set in telecommunications," and included this awesome quote about Lucent from CEO and founder Will Rouhana: Fuck yeah! But that's not the only great part of this piece: Annualized revenues, very nice. We love annualized revenues, don't we, folks? A company making about $25 million a month, a year after taking on $2 billion in financing from Lucent. Weirdly, Winstar's Wikipedia page says that revenues were $445.6 million for the year ending 1999 — or around $37.1 million a month. Winstar loved raising money — two years later in November 2000, it would raise $1.02 billion, for example — and it raised a remarkable $5.6 billion between February 1999 and July 2001, according to the Wall Street Journal. $900 million of that came in December 1999 from an investment from Microsoft and "several investment firms," with analyst Greg Miller of Jefferies & Co saying: Another fun thing happened in November 2000 too. Lucent would admit it had overstated its fourth-quarter profits by improperly recording $125 million in sales, reducing that quarter's results from "profitable" to "break-even." Things would eventually collapse when Winstar couldn't pay its debts: it filed for Chapter 11 bankruptcy protection on April 18, 2001 after failing to pay $75 million in interest payments to Lucent, which had cut off access to the remaining $400 million of its $1 billion loan to Winstar as a result. Winstar would file a $10 billion lawsuit in bankruptcy court in Delaware the very same day, claiming that Lucent breached its contract and forced Winstar into bankruptcy by, well, not offering to give it more money that it couldn't pay off. Elsewhere, things had begun to unravel for Lucent.
A January 2001 story from the New York Times told a strange story of Lucent, a company that had made over $33 billion in revenue in its previous fiscal year, asking to defer the final tranche of payment — $20 million — for an acquisition due to “accounting and financial reporting considerations.” Why? Because Lucent needed to keep that money on the books to boost its earnings, as its stock was in the toilet, and was about to announce it was laying off 10,000 people and a quarterly loss of $1.02 billion .  Over the course of the next few years, Lucent would sell off various entities , and by the end of September 2005 it would have 30,500 staff and have a stock price of $2.99 — down from a high of $75 a share at the end of 1999 and 157,000 employees. According to VC Tomasz Tunguz, Lucent had $8.1 billion of vendor financing deals at its height . Lucent was still a real company selling real things, but had massively overextended itself in an attempt to meet demand that didn’t really exist, and when Lucent realized that, it decided to create demand itself to please the markets. To quote MIT Tech Review (and author Lisa Endlich), it believed that “setting and meeting [the expectations of Wall Street] “subsumed all other goals,” and that “Lucent had little choice but to ride the wave.”  To be clear, NVIDIA is quite different from Lucent. It has plenty of money, and the circular deals it does with CoreWeave and Lambda don’t involve the same levels of risk. NVIDIA is not (to my knowledge) backstopping CoreWeave’s business or providing it with loans , though NVIDIA has agreed to buy $6.3 billion of compute as the “buyer of last resort” of any unsold capacity . NVIDIA can actually afford this, and it isn’t illegal , though it is obviously propping up a company with flagging demand. NVIDIA also doesn’t appear to be taking on masses of debt to fund its empire, with over $56 billion in cash on hand and a mere $8.4 billion in long term debt .   Okay, phew. We got through this man. NVIDIA is nothing like Lucent either . Okay, maybe it’s got some similarities — but it’s different! No worries at all. I know I’m relaxed. You still seem nervous, NVIDIA. I promise you, if anyone asks me if you’re like Lucent I’ll tell them you’re not. I’ll be sure to tell them you’re nothing like that. Are you okay, dude? When did you last sleep?  Inventory growth indicates waning demand Claim: Growing inventory in Q3 (+32% QoQ) suggests that demand is weak and chips are accumulating unsold, or customers are accepting delivery without payment capability, causing inventory to convert to receivables rather than cash. Woah, woah, woah, slow down. Who has been saying this? Oh, everybody ? Did Michael Burry scare you? Did you watch The Big Short and say “ah, fuck, Christian Bale is going to get me! I can’t believe he played drums to Pantera ! Ahh!”  Anyway, now you’ve woken up everybody else in the house and they’re all wondering why you’re talking about receivables. Shouldn’t that be fine? NVIDIA is a big business, and it’s totally reasonable to believe that a company planning to sell $63 billion of GPUs in the next quarter would have ballooning receivables ( $33 billion, up from $27 billion last quarter ) and growing inventory ( $19.78 billion, up from $14.96 billion the last quarter ). It’s a big, asset-heavy business, which means NVIDIA’s clients likely get decent payment terms to raise debt or move cash around to get them paid.  Everybody calm down! 
Like my buddy NVIDIA, who is nothing like Enron by the way, just said: Response: First, growing inventory does not necessarily indicate weak demand. In addition to finished goods, inventory includes significant raw materials and work-in-progress. Companies with sophisticated supply chains typically build inventory in advance of new product launches to avoid stockouts. NVIDIA's current supply levels are consistent with historical trends and anticipate strong future growth. Second, growing inventory does not indicate customers are accepting delivery without payment capability. NVIDIA recognizes revenue upon shipping a product and deeming collectability probable. The shipment reduces inventory, which is not related to customer payments. Our customers are subject to strict credit evaluation to ensure collectability. Payment is due shortly after product delivery; some customers prepay. NVIDIA's DSO actually decreased sequentially from 54 days to 53 days. Haha, nice dude, you’re totally right, it’s pretty common for companies, especially large ones, to deliver something before they receive the cash, it happens , I’m being sincere. Sounds like companies are paying! Great!  But, you know, just, can you be a little more specific? Like about the whole “shipping things before they’re paid” thing.  NVIDIA recognizes revenue upon shipping a product and deeming collectability probable- Alright, yeah, thought I heard you right the first time. What does “deeming collectability probable” mean? You could’ve just said “we get paid 95% of the time within 2 months” or whatever. Unless it’s not 95%? Or 90%? How often is it? Most companies don’t break this down by the way, but then again, most companies are not NVIDIA, the largest company on the stock market, and if I’m honest, nobody else has recently had to put out anything that said “I’m not like Enron,” and I want to be clear that NVIDIA is not like Enron. For real, Enron was a criminal enterprise. It broke the law, it committed real deal, actual fraud, and NVIDIA is nothing like Enron. In fact, before NVIDIA put out a letter saying how it was nothing like Enron I would have staunchly defended the company against the Enron allegations, because I truly do not think NVIDIA is committing fraud. That being said, it is very strange that NVIDIA wants somebody to think about how it’s nothing like Enron. This was, technically, an internal memo, and thus there is a chance its existence was built for only internal NVIDIANs worried about the value of their stock, and we know it was definitely written to try and deflect Michael Burry’s criticism, as well as that of a random Substacker who clearly had AI help him write a right-adjacent piece that made all sorts of insane and made up statements (including several about Arrow Electronics that did not happen) — and no, I won’t link it, it’s straight up misinformation.  Nevertheless, I think it’s fair to ask: why does NVIDIA need you to know that it’s nothing like Enron? Did it do something like Enron? Is there a chance that I, or you, may mistakenly say “hey, is NVIDIA doing Enron?”  Heeeeeeyyyy NVIDIA. How’re you feeling? Yeah, haha, you had a rough night. You were saying all this crazy stuff about Enron last night, are you doing okay? No, no, I get it, you’re nothing like Enron, you said that a lot last night. 
So, while you were asleep — yeah it’s been sixteen hours dude, you were pretty messed up, you brought up Lucent then puked in my sink — I did some digging and like, I get it, you are definitely not like Enron, Enron was breaking the law . NVIDIA is definitely not doing that. But…you did kind of use Special Purpose Vehicles recently? I’m sorry, I know, you’re not like Enron! You’re investing $2 billion in Elon Musk’s special purpose vehicle that will then use that money to raise debt to buy GPUs from NVIDIA that will then be rented to Elon Musk . This is very different to what Enron did! I am with you dude , don’t let the haters keep you down! No, I don’t think a t-shirt that says “NVIDIA is not like Enron for these specific reasons” helps.  Wait, wait, okay, look. One thing. You had this theoretical deal lined up with Sam Altman and OpenAI to invest $100 billion — and yes, you said in your latest earnings that "it was actually a Letter of Intent with the opportunity to invest," which doesn’t mean anything, got it — and the plan was that you would “ lease the GPUs to OpenAI .” Now how would you go about doing that NVIDIA? You’d probably need to do exactly the same deal as you just did with xAI. Right? Because you can’t very well rent these GPUs directly to Elon Musk , you need to sell them to somebody so that you can book the revenue, you were telling me that’s how you make money. I dunno, it’s either that or vendor financing.  Oh, you mentioned that already- -unlike Lucent, NVIDIA does not rely on vendor financing arrangements to grow revenue. In typical vendor financing arrangements, customers pay for products over years. NVIDIA's DSO was 53 in Q3. NVIDIA discloses our standard payment terms, with payment generally due shortly after delivery of products. We do not disclose any vendor financing arrangements- Let me stop you right there a second, you were on about this last night before you scared my cats when you were crying about something to do with “two nanometer.”  First of all, why are you bringing up typical vendor financing agreements? Do you have atypical ones?  Also I’m jazzed to hear you “disclose your standard payment terms,” but uh, standard payment terms for what exactly? Where can I find those? For every contract?  Also, you are straight up saying you don’t disclose any vendor financing arrangements , that’s not the same as “not having any vendor financing arrangements.” I “do not disclose” when I go to the bathroom but I absolutely do use the toilet. Let’s not pretend like you don’t have a history in helping get your buddies funding. You have deals with both Lambda and CoreWeave to guarantee that they will have compute revenue, which they in turn use to raise debt, which is used to buy more of your GPUs. You have learned how to feed debt into yourself quite well, I’m genuinely impressed .  This is great stuff, I’m having the time of my life with how not like Enron you are, and I’m serious that I 100% do not believe you are like Enron. But…what exactly are you doing man? What’re you going to do about what Wall Street wants?  Enron was a criminal enterprise! NVIDIA is not. More than likely NVIDIA is doing relatively boring vendor financing stuff and getting people to pay them on 50-60 day time scales — probably net 60, and, like it said, it gets paid upfront sometimes.  NVIDIA truly isn’t like Enron — after all, Meta is the one getting into ENERGY TRADING — to the point that I think it’s time to explain to you what exactly happened with Enron. 
Or, at least as much as is possible within the confines of a newsletter that isn't exclusively about Enron… The collapse of Enron wasn't just — in retrospect — a large business that ultimately failed. If that was all it was, Enron wouldn't command the same space in our heads as other failures from that era, like WorldCom (which I mentioned earlier) and Nortel (which I'll get to later), both of whom were similarly considered giants in their fields. It's also not just about the fact that Enron failed because of proven business and accounting malfeasance. WorldCom entered bankruptcy due to similar circumstances (though, rather than being liquidated, it was acquired as part of Verizon's acquisition of MCI, the name of a company that had previously merged with WorldCom and which WorldCom took as its own name after bankruptcy), and unlike Enron, isn't the subject of flashy Academy Award-nominated films, or even a Broadway production. It's not the size of Enron that makes its downfall so intriguing. Nor, for that matter, is it the fact that Enron did a lot of legally and ethically dubious stuff to bring about its downfall. No, what makes Enron special is the sheer gravity of its malfeasance, the rotten culture at the heart of the company that encouraged said malfeasance, and the creative ways Enron's leaders crafted an image of success around what was, at its heart, a dog of a company. Enron was born in 1985 on the foundations of two older, much less interesting businesses. The first, Houston Natural Gas (HNG), started life as a utility provider, pumping natural gas from the oilfields of Texas to customers throughout the region, before later exiting the industry to focus on other opportunities. The other, InterNorth, was based in Omaha, Nebraska and was in the same business — pipelines. In the mid-1980s, HNG was the subject of a hostile takeover attempt by Coastal Corporation (which, until 2001, operated a chain of refineries and gas stations throughout much of the US mainland). Unable to fend it off by itself, HNG merged with InterNorth, with the combined corporation renamed Enron. The CEO of this new entity was Ken Lay, an economist by trade who spent most of his career in the energy sector, and who enjoyed deep political connections with the Bush family. He co-chaired George H. W. Bush's failed 1992 re-election campaign, and allowed Enron's corporate jet to ferry Bush Sr. and Barbara Bush back and forth to Washington. Center for Public Integrity Director Charles Lewis said that "there was no company in America closer to George W. Bush than Enron." George W. Bush (the second one) even had a nickname for Lay. Kenny Boy. Anyway, in 1987, Enron hired McKinsey — the world's most evil management consultancy firm — to help the company create a futures market for natural gas. What that means isn't particularly important to the story, but essentially, a futures contract is where a company agrees to buy or sell an asset in the future at a fixed price. It's a way of hedging against risk, whether that be from something like price or currency fluctuations, or from default. If you're buying oil in dollars, for example, buying a futures contract for oil to be delivered in six months' time at a predetermined price means that if your currency weakens against the dollar, your costs won't spiral. That bit isn't terribly important.
What does matter is that, while working with McKinsey, Lay met someone called Jeff Skilling — a young engineer-turned-consultant who impressed Lay deeply, so much so that Lay decided to poach him from McKinsey in 1990 and give him the role of chairman and CEO of Enron Finance Group. Anyway, Skilling continued to impress Lay, who gave him greater and greater responsibility, eventually crowning him Chief Operating Officer (COO) of Enron. Once in a key leadership position, Skilling was able to shape the organization's culture. He appreciated those who took risks — even if those risks, when viewed with impartial eyes, were deemed reckless, or even criminal. He introduced the practice of stack-ranking (also known as "rank and yank") to Enron, which had previously been pioneered by Jack Welch at GE (see The Shareholder Supremacy from last year). Here, employees were graded on a scale, and those at the bottom of the scale were terminated. Managers had to place at least 10% (other reports say closer to 15%) of employees in the lowest bracket, which created an almost Darwinian drive to survive. Staffers worked brutal hours. They cut corners. They did some really, really dodgy shit. None of this bothered Skilling in the slightest. How dodgy, you ask? Well, in 2000 and 2001, California suffered a series of electricity blackouts. This shouldn't have happened, because California's total energy demand (at the time) was 28GW and its production capacity was 45GW. California also shares a transmission grid with other states (and, for what it's worth, the Canadian provinces of Alberta and British Columbia, as well as part of Baja California in Mexico), meaning that in the event of a shortage, it could simply draw capacity from elsewhere. So, how did it happen? Well, remember, Enron traded electricity like a commodity, and as a result, it was incentivized to get the highest possible price for that commodity. So, it took power plants offline during peak hours, and exported power to other states when there was real domestic demand. How does a company like Enron shut down a power station? Well, it just asked. In one taped phone conversation released after the company's collapse, an Enron employee called Bill called an official at a Las Vegas power plant (California shares the same grid with Nevada) and asked him to "get a little creative, and come up with a reason to go down. Anything you want to do over there? Any cleaning, anything like that?" This power crisis had dramatic consequences — for the people of California, who faced outages and price hikes; for Governor Gray Davis, who was recalled by voters and later replaced by Arnold Schwarzenegger; for PG&E, which entered Chapter 11 bankruptcy that year; and for Southern California Edison, which was pushed to the brink of bankruptcy as a result. This kind of stuff could only happen in an organization whose culture actively rewarded bad behavior. In fact, Skilling was seemingly determined to elevate the dodgiest of characters to the highest positions within the company, and few were more ethically dubious than Andy Fastow, whom Skilling mentored like a protégé, and who would later become Enron's Chief Financial Officer. Even before vaulting to the top of Enron's nasty little empire, Fastow was able to shape its accounting practices, with the company adopting mark-to-market accounting in 1991. Mark-to-market sounds complicated, but it's really simple.
When listing assets on a balance sheet, you don’t use the acquisition cost, but rather the fair-market value of that asset. So, if I buy a baseball card for a dollar, and I see that it’s currently selling for $10 on eBay, I’d say that said asset is worth $10, not the dollar I paid for it, even though I haven’t actually sold it yet.  This sounds simple — reasonable, even — but the problem is that the way you determine the value of that asset matters, and mark-to-market accounting allows companies and individuals to exercise some…creativity.  Sure, for publicly-traded companies (where the price of a share is verifiable, open knowledge), it’s not too bad, but for assets with limited liquidity, limited buyers, or where the price has to be engineered somehow, you have a lot of latitude for fraud.  Let’s go back to the baseball card example. How do you know it’s actually worth $10, and not $1? What if the “fair value” isn’t something you can check on eBay, but what somebody told me in-person it’s worth? What’s to stop me from lying and saying that the card is actually worth $100, or $1000? Well, other than the fact I’d be committing fraud. What if I have ten $1 baseball cards, and I give my friend $10 and tell him to buy one of the cards using the $10 bill I just handed him, allowing me to say that I’ve realized a $9 profit on one of my $1 cards, and my other cards are worth $90 and not $9?  And then, what if I use the phony valuation of my remaining cards to get a $50 loan, using the cards as collateral, even though the collateral isn’t even one-fifth of the value of the loan?  You get the idea. While a lot of the things people can do to alter the mark-to-market value of an asset are illegal (and would be covered under generic fraud laws), it doesn’t change the fact that mark-to-market accounting allows for some shenanigans to take place. Another trait of mark-to-market accounting, as employed by Enron, is that it would count all the long-term potential revenue from a deal as quarterly revenue — even if that revenue would be delivered over the course of a decades-long contract, or if the contract would be terminated before its intended expiration date.  It would also realize potential revenue as actual revenue, even before money changed hands, and when the conclusion of the deal wasn’t a certainty. For example, in 1999, Enron sold a stake in four electricity-generating barges in Nigeria (essentially floating power stations) to Merrill Lynch , which allowed the company to register $12m in profit.  That sale ultimately didn’t happen, though that didn’t stop Enron from selling pieces to Merrill Lynch, which — I’m not kidding — Merrill Lynch quickly sold back to a Special Purpose Vehicle called “LJM2” controlled by Andrew Fastow. You’re gonna hear that name again. Although the Merrill Lynch bankers who participated in the deal were eventually convicted of conspiracy and fraud charges (long after the collapse of Enron), their convictions were later quashed on appeal.   But still, for a moment, it gave a jolt to Enron’s quarterly earnings.  Anyway, Enron was incredibly creative when it came to how it valued its assets. Take, for example, fiber optic cables. As the Dot Com bubble swelled, Enron saw an opportunity, and wanted to be able to trade and control the supply of bandwidth, just like it does with other more conventional commodities (like oil and gas) .  
It built, bought, and leased fiber-optic cables throughout the country, and then, using exaggerated estimates of their value and potential long-term revenue, released glowing financial reports that made the company look a lot healthier and more successful than it actually was. Enron also loved to create special-purpose entities that existed either to generate revenue that didn't exist, or to hold toxic assets that would otherwise need to be disclosed (with Enron then using its holdings in said entities to boost its balance sheet), or to disguise its debt. One, Whitewing, was created and capitalized by Enron (and an outside investor), and pretty much exclusively bought assets from Enron — which allowed the company to recognize sales and profits on its books, even if they were fundamentally contrived. Another set of entities — known as LJM, named after the first initials of Andy Fastow's wife and two children, and which I mentioned earlier — did the same thing, allowing the company to hide risky or failing investments, to limit its perceived debt, and to generate artificial profits and revenues. LJM2 was, creatively, the second version of the idea. Even though the assets that LJM held were, ultimately, dogshit, the distance that LJM provided, combined with Enron's use of mark-to-market accounting, allowed the company to turn a multi-billion-dollar collective failure into a resounding and (on paper) profitable triumph. So, how did this happen, and how did it go on for so long? Well, first, Enron was, at its peak, worth $70bn. Its failure would be a failure for its investors and shareholders, and nobody — besides the press, that is — wanted to ask tough questions. It had auditors, but they were paid handsomely, turning a blind eye to the criminal malfeasance at the heart of the company. Auditor Arthur Andersen surrendered its license in 2002, bringing an end to the company — and resulting in 85,000 employees losing their jobs. Well, it wasn't so much that it turned a blind eye as that it turned on a big paper shredder, shredding tons — and I'm using that as a measure of weight, not figuratively — of documents as Enron started to implode, a crime for which it was later convicted: obstruction of justice. I've talked about Enron's culture, but I'd be remiss if I didn't mention that Enron's highest-performers and its leadership received hefty bonuses in company equity, motivating them to keep the charade going. Enron's pension scheme, I should add, was basically entirely Enron stock, and employees were regularly encouraged to buy more, with Kenneth Lay telling employees weeks before the company's collapse that "the company is fundamentally sound" and to "hang on to their stock." Additionally, per the terms of the Enron pension plan, employees were prevented from shifting their holdings into other pension funds, or other investments, until they turned 50. When the company collapsed, those people lost everything, even those who didn't know anything about Enron's criminality. George Maddox, a retired Enron employee who had his entire retirement tied up in 14,000 Enron shares (worth at the time more than $1.3 million), was "forced to spend his golden years making ends meet by mowing pastures and living in a run-down East Texas farmhouse." The US Government brought criminal charges against Enron's top leadership. Ken Lay was convicted of four counts of fraud and making false statements, but died while on vacation in Aspen before sentencing. May he burn in Hell.
Skilling was convicted on 24 counts of fraud and conspiracy and sentenced to 24 years in jail. This was reduced in 2013 on appeal to 14 years, and he was released to a halfway house in 2018, and then freed in 2019. He's since tried to re-enter the energy sector — with one venture combining energy trading and, I kid you not, blockchain technology — although nothing really came of it. Andy Fastow pled guilty to two counts — one of manipulation of financial statements, and one of self-dealing — and received ten years in prison. This was later reduced to six years, including two years of probation, in part because he cooperated with the investigations against other Enron executives. He is now a public speaker and a tech investor in an AI company, KeenCorp. His wife, Lea, who also worked at Enron, received twelve months for conspiracy to commit wire fraud and money laundering and for submitting false tax returns. She was released from custody in July 2005. Enron's implosion was entirely self-inflicted and horrifyingly, painfully criminal, yet it had plenty of collateral damage — to the US economy, to those companies that had lent it money, to its employees who lost their jobs and their life savings and their retirements, and to those employees at companies most entangled with Enron, like those at auditing firm Arthur Andersen. This isn't unique among corporate failures. WorldCom had some dodgy accounting practices. Nortel too. Both companies failed, both companies wrecked the lives of their employees, and the failure of these companies had systemic economic consequences (especially in Canada, where Nortel, at its peak, accounted for one-third of the market cap of all companies on the Toronto Stock Exchange). The reason why Enron remains captured in our imagination — and why NVIDIA is so vociferously opposed to being compared with Enron — is the extent to which Enron manipulated reality to appear stronger and more successful than it was, and how long it was able to get away with it. While the memory of Enron may have faded — it happened over two decades ago, after all — we haven't forgotten the instincts that it gave us. It's why our noses twitch when we see special-purpose vehicles being used to buy GPUs, and why we gag when we see mark-to-market accounting. It's entirely possible that everything NVIDIA is doing is above board. Great! But that doesn't do anything for the deep pit of dread in my stomach. A few weeks ago, I published the Hater's Guide to NVIDIA, and included within it a guide to what this company does. If you're looking at this through the cold, unthinking lens of late-stage capitalism, this all sounds really good! I've basically described a company that has an essential monopoly in the one thing required for a high-growth (if we're talking exclusively about capex spending) industry to exist. Moreover, that monopoly is all but assured, thanks to NVIDIA's CUDA moat, its first-mover advantage, and the actual capabilities of the products themselves — thereby allowing the company to charge a pretty penny to customers. And those customers? If we temporarily forget about the likes of Nebius and CoreWeave (oh, how I wish I could forget about CoreWeave permanently), we're talking about the biggest companies on the planet. Ones that, surely, will have no problems paying their bills.
Back in February 2023, I wrote about The Rot Economy, and how everything in tech had become oriented around growth — even if it meant making products harder to use as a means of increasing user engagement or funnelling users toward more-profitable parts of an app. Back in June 2024, I wrote about the Rot-Com Bubble, and my greater theory that the tech industry has run out of hypergrowth ideas. In simple terms, big tech — Amazon, Google, Microsoft and Meta, but also a number of other companies — no longer has the "next big thing," and jumped on AI out of an abundance of desperation. Hell, look at Oracle. This company started off by selling databases and ERP systems to big companies, and then trapping said companies by making it really, really difficult to migrate to cheaper (and better) solutions, and then bleeding said companies with onerous licensing terms (including some where you pay by the number of CPU cores that use the application). It doesn't do anything new, or exciting, or impressive, and even when presented with the opportunity to do things that are useful or innovative (like when it bought Sun Microsystems), it turns away. I imagine that, deep down, it recognizes that its current model just isn't viable in the long term, and so, it needs something else. When you haven't thought about innovation… well… ever, it's hard to start. Generative AI, on the face of it, probably seemed like a godsend to Larry Ellison. We also live in an era where nobody knows what big tech CEOs do other than make nearly $100 million a year, meaning that somebody like Satya Nadella can get called a "thoughtful leader with striking humility" for pushing Copilot AI into every single part of your Microsoft experience, even Notepad, a place that no human being would want it, and accelerating capital expenditures from $28 billion across the entirety of FY 2023 to $34.9 billion in its latest quarter. In simpler terms, spending money makes a CEO look busy. And at a time when there were no other potential growth avenues, AI was a convenient way to make everybody look busy. Every department can "have an AI strategy," and every useless manager and executive can yell, as ServiceNow's CEO did back in 2022, "let me make it clear to everybody here, everything you do: AI, AI, AI, AI, AI." I should also add that ChatGPT was the first real, meaningful hit that the American tech industry had produced in a long, long time — the last being, if I'm honest, Uber, and that's if we allow "successful yet not particularly good businesses" into the pile. If we insist on things like "profitability" and "sustainability," US tech hasn't done so great. Snowflake runs at a loss, Snap runs at a loss, and while Uber has turned things around somewhat, it's hardly created the next cloud computing or smartphone. Putting aside finances, the last major "hit" was probably Venmo or Zelle, and maybe, if I'm feeling generous, smart speakers like the Amazon Echo and Apple HomePod. Much like Uber, none of these were "the next big thing," which would be fine except big tech needs more growth forever right now, pig! This is why Google, Amazon and Meta all do 20 different things — although rarely for any length of time, with these "things" often having a shelf life shorter than a can of peaches — because The Rot Economy's growth-at-all-costs mindset exists only to please the markets, and the markets demanded growth. ChatGPT was different.
Not only did it do something new, it did so in a way that was relatively easy to get people to try and "see the potential" of. It was also really easy to convince people it would become something bigger and better, because that's what tech does. To quote Bender and Hanna, AI is a "marketing term" — a squishy way of evoking futuristic visions of autonomous computers that can do anything and everything for us, and because both consumers and analysts have been primed to believe and trust the tech industry, everybody believed that whatever ChatGPT was would be the Next Big Thing. And said "Next Big Thing" is powered by Large Language Models, which require GPUs sold by one company — NVIDIA. AI became a very useful thing to do. If a company wanted to seem futuristic and attract investors, it could now "integrate AI." If a hyperscaler wanted to seem enterprising and like it was "building for the future," it could buy a bunch of GPUs, or invest in its own silicon, or, as Google, Microsoft, Amazon and Meta have done, shove AI into every imaginable crevice of its apps. Investors could invest in AI companies, retail investors (IE: regular people) could invest in AI stocks, tech reporters could write about something new in AI, LinkedIn perverts could write long screeds about AI, the markets could become obsessed with AI… …and yeah, you can kind of see how things got out of control. Everybody now had something to do. An excuse to do AI, regardless of whether it made sense, because everybody else was doing it. ChatGPT quickly became one of the most popular websites on the internet — all while OpenAI burned billions of dollars — and because the media effectively published every single thought that Sam Altman had (such as that GPT-4 would "automate away some jobs and create others" and that he was a "little bit scared of it"), AI, as an idea, technology, symbolic stock trope, marketing tool and myth, became so powerful that it could do anything, replace anyone, and be worth anything, even the future of your company. Amongst the hype, there was an assumption related to scaling laws (summarized well by Charlie Meyer): In simple terms, the paper suggested that shoving more training data and using more compute power would exponentially increase the ability of a model to do stuff. And to make a model that did more stuff, you needed more GPUs and more data centers. Did it matter that there was compelling evidence in 2022 (Gary Marcus was right!) that there were limits to scaling laws, and that we would hit the point of diminishing returns? It did not. Amidst all this, NVIDIA has sold over $200 billion of GPUs since the beginning of 2023, becoming the largest company on the stock market and trading at over $170 as of writing this sentence, only a few years after being worth $19.52 a share. You see, Meta, Google, Microsoft and Amazon all wanted to be "part of the future," so they sunk a lot of their money into NVIDIA, together making up 42% of its revenue in its fiscal year 2025. Though there are some arguments about exactly how much of big tech's billowing capital expenditures are spent on GPUs, some estimate that somewhere between 41% and more than 50% of a data center's capex is spent on them. If you're wondering what the payoff is, well, you're in good company. I estimate that there's only around $61 billion in total generative AI revenue, and that includes every hyperscaler and neocloud.
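For anyone wondering what "limits to scaling laws" means in practice: the scaling-law papers themselves describe a power law, which has diminishing returns baked in. Here's a minimal sketch using roughly the constants fitted in the Chinchilla paper (Hoffmann et al., 2022); treat the specific numbers as illustrative rather than gospel.

```python
# A sketch of a Chinchilla-style scaling law: loss falls as a power law in
# parameters (N) and training tokens (D), so each 10x costs as much as ever
# but buys a smaller improvement. Constants are roughly the published fits;
# they're here to show the shape of the curve, not to model any real system.
E, A, B = 1.69, 406.4, 410.7
alpha, beta = 0.34, 0.28

def predicted_loss(n_params: float, n_tokens: float) -> float:
    return E + A / n_params**alpha + B / n_tokens**beta

previous = None
for n, d in [(1e9, 20e9), (1e10, 200e9), (1e11, 2e12), (1e12, 20e12)]:
    loss = predicted_loss(n, d)
    delta = "" if previous is None else f" (improvement: {previous - loss:.2f})"
    print(f"{n:.0e} params, {d:.0e} tokens -> loss {loss:.2f}{delta}")
    previous = loss
# Each 10x of parameters and data yields roughly half the improvement of the
# previous 10x: diminishing returns, even if the law holds perfectly.
```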
Large Language Models are limited, AI agents are a pipedream and simply do not work, AI-powered products are unreliable and coding LLMs make developers slower, and the cost of inference — the way in which a model produces its output — keeps going up. So, because so much money has now been piled into building AI infrastructure, and because big tech has promised to spend hundreds of billions of dollars more in the next year, big tech has found itself in a bit of a hole. How big a hole? Well, by the end of the year, Microsoft, Amazon, Google and Meta will have spent over $400 billion in capital expenditures, much of it focused on building AI infrastructure, on top of $228.4 billion in capital expenditures in 2024 and around $148 billion in capital expenditures in 2023, for a total of around $776 billion in the space of three years, and they intend to spend $400 billion or more in 2026. As a result, based on my analysis, big tech needs to make $2 trillion in brand new revenue, specifically from AI, by 2030, or all of this was for nothing. I go into detail in my premium piece, but I'm going to give you a short explanation here. Sadly you're going to have to learn stuff. I know! I'm sorry. Introducing a term: depreciation. From my October 31 newsletter: Nobody seems to be able to come to a consensus about how long this should be. In Microsoft's case, depreciation for its servers is spread over six years — a convenient change it made in August 2022, a few months before the launch of ChatGPT. This means that Microsoft can spread the cost of the tens of thousands of A100 GPUs bought in 2020, or the 450,000 H100 GPUs it bought in 2024, across six years, regardless of whether those are the years they will be either A) generating revenue or B) still functional. CoreWeave, for what it's worth, says the same thing — but largely because it's betting that it'll still be able to find users for older silicon after its initial contracts with companies like OpenAI expire. The problem, as the linked CNBC article points out, is that this is pretty much untested ground. Whereas we know how long, say, a truck or a piece of heavy machinery can last, and how long it can deliver value to an organization, we don't know the same thing about the kind of data center GPUs that hyperscalers are spending tens of billions of dollars on each year. Any kind of depreciation schedule is based on, at best, assumptions, and at worst, hope. The assumption that the cards won't degrade with heavy usage. The assumption that future generations of GPUs won't be so powerful and impressive that they'll render the previous ones more obsolete than expected, kind of like how the first jet-powered planes of the 1950s did to those manufactured just one decade prior. The assumption that there will, in fact, be a market for older cards, and that there'll be a way to lease them profitably. What if those assumptions are wrong? What if that hope is, ultimately, irrational? Mihir Kshirsagar of the Center for Information Technology Policy framed the problem well: This is why Michael Burry brought it up recently — because spreading out these costs allows big tech to make their net income (IE: profits) look better. In simple terms, by spreading out costs over six years rather than three, hyperscalers are able to reduce a line item that eats into their earnings, which makes their companies look better to the markets. So, why does this create an artificial time limit?
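Before answering that, here's the arithmetic as a minimal sketch. The $10 billion purchase below is a made-up round number; the six-year schedule is the one Microsoft discloses for its servers, and three years is the skeptics' counter-case.

```python
# Illustrative only: how the choice of depreciation schedule changes reported
# earnings. Straight-line depreciation spreads a purchase evenly over its
# assumed useful life; the purchase price here is hypothetical.
gpu_capex = 10_000_000_000  # hypothetical $10B GPU purchase

for useful_life_years in (3, 6):
    annual_expense = gpu_capex / useful_life_years
    print(f"{useful_life_years}-year schedule: ${annual_expense/1e9:.2f}B "
          f"of depreciation hits the income statement each year")

# Same cash out the door either way, but the six-year schedule roughly halves
# the annual expense, flattering near-term profits. It only works out if the
# GPUs are still earning real revenue in years four through six.
```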
In really, really simple terms:  So, now that you know this, there’s a fairly obvious question to ask: why are they still buying GPUs? Also…where the fuck are they going? As I covered in the Hater’s Guide To NVIDIA : While I’m not going to copy-paste my whole (premium) piece, I was only able to find, at most, a few hundred thousand Blackwell GPUs — many of which aren’t even online! — including OpenAI’s Stargate Abilene (allegedly 400,000, though only two buildings are handed over); a theoretical 131,000 GPU cluster owned by Oracle announced in March 2025 ; 5000 Blackwell GPUs at the University of Texas, Austin ; “more than 1500” in a Lambda data center in Columbus, Ohio ; The Department of Energy’s still-in-development 100,000 GPU supercluster, as well as “10,000 NVIDIA Blackwell GPUs” that are “expected to be available in 2026 in its “Equinox” cluster ; 50,000 going into the still-unbuilt Musk-run Colossus 2 supercluster ; CoreWeave’s “largest GB200 Blackwell cluster” of 2496 Blackwell GPUs ; “tens of thousands” of them deployed globally by Microsoft ( including 4600 Blackwell Ultra GPUs ); 260,000 GPUs for five AI data centers for the South Korean government …and I am still having trouble finding one million of these things that are actually allocated anywhere , let alone in a data center, let alone one with sufficient power. I do not know where these six million Blackwell GPUs have gone, but they certainly haven’t gone into data centers that are powered and turned on. In fact, power has become one of the biggest issues with building these things, in that it’s really difficult (and maybe impossible!) to get the amount of power these things need.   In really simple terms: there isn’t enough power or built data centers for those six million Blackwell GPUs, in part because the data centers aren’t built, and in part because there isn’t enough power for the ones that are. Microsoft CEO Satya Nadella recently said on a podcast that his company “[didn’t] have the warm shells to plug into,” meaning buildings with sufficient power, and heavily suggested Microsoft “may actually have a bunch of chips sitting in inventory that [he] couldn’t plug in.” The news that HPE’s (Hewlett Packard Enterprise) AI server business underperformed, and by a significant margin, only raises more questions about where these chips are going .  So why, pray tell, is Jensen Huang of NVIDIA saying that he has 20 million Blackwell and Vera Rubin GPUs ordered through the end of 2026 ? Where are they going to go? I truly don’t know!  AI bulls will tell you about the “insatiable demand for AI” and that these massive amounts of orders are proof of something or rather, and you know what, I’ll give them that — people sure are buying a lot of NVIDIA GPUs! I just don’t know why . Nobody has made a profit from AI, and those making revenue aren’t really making much.  For example, my reporting on OpenAI from a few weeks ago suggests that the company only made $4.329 billion in revenue through the end of September, extrapolated from the 20% revenue share that Microsoft receives from the company. As some people have argued with the figures, claiming they are either A) delayed or B) not inclusive of the revenue that OpenAI is paid from Microsoft as part of Bing’s AI integration and sales of OpenAI’s models via Microsoft Azure, I wanted to be clear of two things: In the same period, it spent $8.67 billion on inference (the process in which an LLM creates an output). 
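To make that extrapolation concrete, here's the arithmetic as a quick sketch. The roughly $866 million revenue-share figure isn't separately sourced; it's simply what 20% of $4.329 billion works out to, used to show the math is self-consistent.

```python
# The extrapolation described above: if Microsoft books 20% of OpenAI's revenue,
# OpenAI's revenue is whatever Microsoft booked, divided by 0.20. The share
# payment below is implied by the piece's own numbers, not an independent figure.
revenue_share_rate = 0.20
share_paid_to_microsoft = 865.8e6     # implied: 20% of $4.329B
inference_spend = 8.67e9              # inference figure through end of September

openai_revenue = share_paid_to_microsoft / revenue_share_rate
print(f"Extrapolated OpenAI revenue through September: ${openai_revenue/1e9:.3f}B")
print(f"Inference spend over the same period: ${inference_spend/1e9:.2f}B")
print(f"Inference cost per $1 of revenue: ${inference_spend/openai_revenue:.2f}")
# Roughly $2 spent on inference for every $1 of revenue, before salaries,
# training runs, sales, marketing, or anything else.
```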
OpenAI is the biggest company in the generative AI space, with 800 million weekly active users and the mandate of heaven in the eyes of the media. Anthropic, its largest competitor, alleges it will make $833 million in revenue in December 2025 , and based on my estimates will end up having $5 billion in revenue by the end of the year. Based on my reporting from October, Anthropic spent $2.66 billion on Amazon Web Services through the end of September, meaning that it (based on my own analysis of reported revenues) spent 104% of its $2.55 billion in revenue up until that point just on AWS , and likely spent just as much on Google Cloud.

While everybody wants to tell the story of Anthropic’s “efficiency” and “ only burning $2.8 billion this year ,” one has to ask why a company that is allegedly “reducing costs” had to raise $13 billion in September 2025 after raising $3.5 billion in March 2025 , and after raising $4 billion in November 2024 ? Am I really meant to read stories about Anthropic hitting break-even in 2028 with a straight face? Especially as other stories say Anthropic will be cash flow positive “ as soon as 2027 .”

These are the two largest companies in the generative AI space, and by extension the two largest consumers of GPU compute. Both companies burn billions of dollars, and require an infinite amount of venture capital to stay alive at a time when the Saudi Public Investment Fund is struggling and the US venture capital system is set to run out of cash in the next year and a half . The two largest sources of actual revenue for selling AI compute are subsidized by venture capital and debt. What happens if these sources dry up? And, in all seriousness, who else is buying AI compute? What are they doing with it? Hyperscalers (other than Microsoft, which chose to stop reporting its AI revenue back in January, when it claimed $13 billion, or about $1 billion a month, in revenue ) don’t disclose anything about their AI revenue, which in turn means we have no real idea about how much real, actual money is coming in to justify these GPUs.

CoreWeave made $1.36 billion in revenue (and lost $110 million doing so) in its last quarter — and if that’s indicative of the kind of actual, real demand for AI compute, I think it’s time to start panicking about whether all of this was for nothing. CoreWeave has a backlog of over $50 billion in compute , but $22 billion of that is OpenAI (a company that burns billions of dollars a year and lives on venture subsidies), $14 billion of that is Meta (which has yet to work out how to make any kind of real money from generative AI, and no, its “ generative AI ads ” are not the future, sorry), and the rest is likely a mixture of Microsoft and NVIDIA, which agreed to buy $6.3 billion of any unused compute from CoreWeave through 2032 . Sorry, I also forgot Google, which is renting capacity from CoreWeave to rent to OpenAI . I also forgot to mention that CoreWeave’s backlog problem stems from data center construction delays . That, and CoreWeave has $14 billion in debt, mostly from buying GPUs, which it was able to raise by using GPUs as collateral and by pointing to contracts from customers willing to pay it, such as NVIDIA, which is also selling it the GPUs.

So, just to be abundantly clear: CoreWeave has bought all those GPUs to rent to OpenAI, Microsoft (for OpenAI), Meta, Google (OpenAI), and NVIDIA, which is the company that benefits from CoreWeave’s continued ability to buy GPUs. Otherwise, where’s the fucking business, exactly?
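To put the Anthropic arithmetic above in one place, here's a minimal sketch (the AWS figure is from my reporting, the revenue figure is my own estimate):

```python
# The Anthropic arithmetic from above: AWS spend through September versus
# revenue over the same period, per my reporting and my revenue estimates.

aws_spend_through_september = 2.66e9   # reported AWS bill, January through September 2025
revenue_through_september = 2.55e9     # my estimate of revenue over the same period

ratio = aws_spend_through_september / revenue_through_september
print(f"AWS spend as a share of revenue: {ratio:.0%}")  # ~104%

# That's before Google Cloud (likely a similar amount), payroll, training,
# marketing, or anything else, which is why the "efficiency" story rings hollow.
```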
Who are the customers? Who are the people renting these GPUs, and for what purpose are they being rented? How much money is the rental of those GPUs actually bringing in? You can sit and waffle on about the supposedly glorious “AI revolution” all you want, but where’s the money, exactly? And why, exactly, are we buying more GPUs? What are they doing? To whom are they being rented? For what purpose? And why isn’t it creating the kind of revenue that is actually worth sharing? Is it because the revenue sucks? Is it because it’s unprofitable to provide it? And why, at this point in history, do we not know? Hundreds of billions of dollars have made NVIDIA the biggest company on the stock market, and we still do not know why people are buying these fucking things. NVIDIA is currently making hundreds of billions in revenue selling GPUs to companies that either plug them in and start losing money or, I assume, put them in a warehouse for safekeeping.

This brings me to my core anxiety: why, exactly, are companies pre-ordering GPUs? What benefit is there in doing so? Blackwell does not appear to be “more efficient” in a way that actually makes anybody a profit, and we’re potentially years from seeing these GPUs in operation in data centers at the scale they’re being shipped — so why would anybody be buying more? I doubt these are new customers — they’re likely hyperscalers, neoclouds like CoreWeave and resellers like Dell and SuperMicro — because the only companies that can actually afford to buy them are those with massive amounts of cash or debt, to the point that even Google , Amazon , Meta and Oracle are taking on massive amounts of new debt, all without a plan to make a profit. NVIDIA’s largest customers are increasingly unable to afford its GPUs, which appear to be increasing in price with every subsequent generation. NVIDIA’s GPUs are so expensive that the only way you can buy them is by already having billions of dollars or being able to raise billions of dollars, which means, in a very real sense, that NVIDIA is dependent not on its customers , but on its customers’ credit ratings and financial backers.

To make matters worse, the key reason that one would buy a GPU is to either run services using it or rent it to somebody else, and the two largest parties spending money on these services are OpenAI and Anthropic, both of whom lose billions of dollars, and are thus dependent on venture capital and debt (remember, OpenAI has a $4 billion line of credit , and Anthropic a $2.5 billion one too ). In simple terms, NVIDIA’s customers rely on debt to buy its GPUs, and NVIDIA’s customers’ customers rely on debt to pay to rent them.

Yet it gets worse from there. Who, after all, are the biggest customers renting AI compute? That’s right, AI startups, all of which are deeply unprofitable. Cursor — Anthropic’s largest customer and now its biggest competitor in the AI coding sphere — raised $2.3 billion in November after raising $900 million in June . Perplexity, one of the most “popular” AI companies, raised $200 million in September after raising $100 million in July after seeming to fail to raise $500 million in May (I’ve not seen any proof this round closed) after raising $500 million in December 2024 . Cognition raised $400 million in September after raising $300 million in March . Cohere raised $100 million in September a month after it raised $500 million .
Venture capital feeds money to these startups, which pay either OpenAI or Anthropic to use their models, or in some cases pay hyperscalers or neoclouds like CoreWeave or Lambda to rent NVIDIA GPUs. OpenAI and Anthropic then raise venture capital or debt to pay hyperscalers or neoclouds to rent NVIDIA GPUs. Hyperscalers and neoclouds then use either debt or existing cash flow (in the case of hyperscalers, though not for long!) to buy more NVIDIA GPUs. Only one company actually makes a profit here: NVIDIA.

At some point, a link in this debt-backed chain breaks, because very little cash flow exists to prop it up. At some point, venture capitalists will be forced to stop funnelling money into unprofitable, unsustainable AI companies, which will make those companies unable to funnel money into the pockets of those buying GPUs, which will make it harder for those companies buying GPUs to justify (or raise debt for) buying more GPUs.

And if I’m honest, none of NVIDIA’s success really makes any sense. Who is buying so many GPUs? Where are they going? Why are inventories increasing ? Is it really just pre-buying parts for future orders? Why are accounts receivable climbing , and how much product is NVIDIA shipping before it gets paid? While these are both explainable as “this is a big company and that’s how big companies do business” (which is true!), why do receivables not seem to be coming down? And how long, realistically, can the largest company on the stock market continue to grow revenues selling assets that only seem to lose its customers money?

I worry about NVIDIA, not because I believe there’s a massive scandal, but because so much rides on its success, and its success rides on the back of dwindling amounts of venture capital and debt, because nobody is actually making money to pay for these GPUs. In fact, I’m not even saying it goes tits up. Hell, it might even have another good quarter or two. It really comes down to how long people are willing to be stupid and how long Jensen Huang is able to call hyperscalers at three in the morning and say “buy one billion dollars of GPUs, pig.” No, really! I think much of the US stock market’s growth is held up by how long everybody is willing to be gaslit by Jensen Huang into believing that they need more GPUs. At this point it’s barely about AI anymore, as AI revenue — real, actual cash made from selling services run on GPUs — doesn’t even cover its own costs, let alone create the cash flow necessary to buy $70,000 GPUs thousands at a time. It’s not like any actual innovation or progress is driving this bullshit!

In any case, the markets crave a healthy NVIDIA, as so many hundreds of billions of dollars of NVIDIA stock sit in the hands of retail investors and people’s 401ks, and its endless growth has helped paper over the pallid growth of the US stock market and, by extension, the decay of the tech industry’s ability to innovate. Once this pops — and it will pop, because there is simply not enough money to do this forever — there must be a referendum on those that chose to ignore the naked instability of this era, and the endless lies that inflated the AI bubble. Until then, everybody is betting billions on the idea that Wile E. Coyote won’t look down.

Let’s start with a horrible fact: it takes about 2.5 years of construction time and $50 billion per gigawatt of data center capacity .
One way or another, these GPUs are depreciating in value, either through death (or reduced efficacy through wear and tear) or becoming obsolete, which is very likely as NVIDIA has committed to releasing a new GPU every year . At some point, Wall Street is going to need to see some sort of return on this investment, and right now that return is “negative dollars.”  I break it down in my premium piece, but I estimate that big tech needs to make $2 for every $1 of capex . This revenue must also be brand spanking new, as this capex is only for AI. Meta, Amazon, Google and Microsoft are already years and hundreds of billions of dollars in , and are yet to see a dollar of profit , creating a $1.21 trillion hole just to justify the expenses (so around $605 billion in capex all told, at the time I calculated it). You might argue that there’s a scenario where, say, an A100 GPU is “useful” past the 3 or 6 year shelf life. Even if that were the case, the average rental price of an A100 GPU is 99 cents an hour . This is a four or five-year-old GPU, and customers are paying for it like they would a five-year-old piece of hardware. The same fate awaits H100 GPUs too. Every year, NVIDIA releases a new GPU, lowering the value of all the other GPUs in the process, making it harder to fill in the holes created by all the other GPUs. This whole time, nobody appears to have found a way to make a profit, meaning that the hole created by these GPUs remains unfilled, all while big tech firms buy more GPUs, creating more holes to fill. Big tech keeps buying more GPUs despite the old GPUs failing to pay for themselves. To fix this problem, big tech is buying more GPUs.  Newer generation GPUs — like NVIDIA’s Blackwell and Vera Rubin — require entirely new data center architecture, meaning that one has to either build a brand new data center or retrofit an old one.  Big tech is spending billions of dollars to make sure it’s able to turn on these new GPUs, at which point you may think that they’ll make a profit.  Even when they’re turned on, these things don’t make money. The Information reports that Oracle’s Blackwell GPUs have a negative 100% gross margin .  How exactly are these bloody things meant to make more money than they cost in the next six years, let alone three? They don’t make a profit now and have no path to doing so in the future! I feel like I’m going INSANE! This is accrual accounting, meaning that these numbers are revenue booked in the quarter I reported them. Any comments about quarter-long delays in payments are incorrect. Microsoft’s revenue share payments to OpenAI are pathetic — totalling, based on documents reviewed by this publication, $69.1 million in CY (calendar year) Q3 2025.
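To recap the capex arithmetic from earlier in this piece in a minimal sketch (both the two-to-one multiple and the capex figure are my own estimates from the premium piece):

```python
# The revenue-hole arithmetic: my estimate is that big tech needs roughly $2 of
# brand-new AI revenue for every $1 of AI-focused capex to justify the spend.

ai_capex_to_date = 605e9          # approximate AI-focused capex at the time I calculated it
required_revenue_multiple = 2.0   # my estimate from the premium piece

revenue_hole = ai_capex_to_date * required_revenue_multiple
print(f"New AI revenue needed: ${revenue_hole / 1e12:.2f} trillion")  # ~$1.21 trillion

# Add the $400bn or more of capex promised for 2026 and the target keeps
# growing, which is how you get to roughly $2 trillion by 2030.
```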


Premium: The Ways The AI Bubble Might Burst

[Editor's Note: this piece previously said "Blackstone" instead of "BlackRock," which has now been fixed.]

I've been struggling to think about what to write this week, if only because I've written so much recently and because, if I'm honest, things aren't really making a lot of sense. NVIDIA claims to have shipped six million Blackwell GPUs in the last four quarters — as I went into in my last premium piece — working out to somewhere between 10GW and 12GW of power (based on the power draw of B100 and B200 GPUs and GB200 and GB300 racks), which...does not make sense based on the amount of actual data center capacity brought online. Similarly, Anthropic claims to be approaching $10 billion in annualized revenue — so around $833 million in a month — which would make it competitive with OpenAI's projected $13 billion in revenue, though I should add that based on my reporting extrapolating OpenAI's revenues from Microsoft's revenue share , I estimate the company will miss that projection by several billion dollars, especially now that Google's Gemini 3 launch has put OpenAI on a "Code Red," shortly after an internal memo revealed that Gemini 3 could “create some temporary economic headwinds for [OpenAI]."

Which leads me to another question: why? Gemini 3 is "better," in the same way that every single new AI model is some indeterminate level of "better." Nano Banana Pro is, to Simon Willison, "the best available image generation model." But I can't find a clear, definitive answer as to A) why this is "so much better," B) why everybody is freaking out about Gemini 3, and C) why this would have created "headwinds" for OpenAI, headwinds so severe that it has had to rush out a model called Garlic "as soon as possible" according to The Information : Right, sure, cool, another model. Again, why is Gemini 3 so much better and making OpenAI worried about "economic headwinds"? Could this simply be a convenient excuse to cover over, as Alex Heath reported a few weeks ago , ChatGPT's slowing download and usage growth ? Experts I've talked to arrived at two conclusions: I don't know about garlic or shallotpeat or whatever , but one has to wonder at some point what it is that OpenAI is doing all day : So, OpenAI's big plan is to improve ChatGPT , make the image generation better , make people like the models better , improve rankings , make it faster, and make it answer more stuff. I think it's fair to ask: what the fuck has OpenAI been doing this whole time if it isn't "make the model better" and "make people like ChatGPT more"? I guess the company shoved Sora 2 out the door — which is already off the top 30 free Android apps in the US and at 17 on the US free iPhone apps rankings as of writing this sentence, after everybody freaked out about it hitting number one . All that attention, and for what?

Indeed, signs seem to be pointing towards reduced demand for these services. As The Information reported a few days ago ... Microsoft, of course, disputed this, and said... Well, I don't think Microsoft has any problems selling compute to OpenAI — which paid it $8.67 billion just for inference between January and September — as I doubt there is any "sales team" having to sell compute to OpenAI. But I also want to be clear that Microsoft added a word: "aggregate." The Information never used that word, and indeed nobody seems to have bothered to ask what "aggregate" means. I do, however, know that Microsoft has had trouble selling stuff.
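Before getting to that, and circling back to the shipment figure at the top of this section, here's a back-of-the-envelope version of the 10GW to 12GW estimate. The per-GPU facility draw here is an assumption (a GPU plus its share of rack, networking and cooling overhead), not an NVIDIA spec:

```python
# A back-of-the-envelope version of the 10GW to 12GW figure. The per-GPU
# facility draw is an assumption, not a spec sheet number.

blackwell_gpus_shipped = 6_000_000
all_in_kw_per_gpu_low = 1.7    # assumed: roughly a GB200-class rack's draw split across its GPUs
all_in_kw_per_gpu_high = 2.0   # assumed: a more conservative all-in figure

low_gw = blackwell_gpus_shipped * all_in_kw_per_gpu_low / 1e6
high_gw = blackwell_gpus_shipped * all_in_kw_per_gpu_high / 1e6
print(f"Implied facility demand: {low_gw:.0f}GW to {high_gw:.0f}GW")
# That is several years' worth of new, fully built and powered data center
# capacity, which is the part that does not add up.
```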
As I reported a few months ago, in August 2025 Redmond only had 8 million active paying licenses for Microsoft 365 Copilot out of the more-than-440 million people paying for Microsoft 365 . In fact, here's a rundown of how well AI is going for Microsoft: Yet things are getting weird. Remember that OpenAI-NVIDIA deal? The supposedly "sealed" one where NVIDIA would invest $100 billion in OpenAI , with each tranche of $10 billion gated behind a gigawatt of compute? The one that never really seemed to have any fundament to it, but people reported as closed anyway? Well, per NVIDIA's most-recent 10-Q (emphasis mine): A letter of intent "with an opportunity" means jack diddly squat. My evidence? NVIDIA's follow-up mention of its investment in Anthropic: This deal, as ever, was reported as effectively done , with NVIDIA investing $10 billion and Microsoft $5 billion, saying the word "will" as if the money had been wired, despite the "closing conditions" and the words "up to" suggesting NVIDIA hasn't really agreed how much it will really invest. A few weeks later, the Financial Times would report that Anthropic is trying to go public   as early as 2026 and that Microsoft and NVIDIA's money would "form part of a funding round expected to value the group between $300bn and $350bn." For some reason, Anthropic is hailed as some sort of "efficient" competitor to OpenAI, at least based on what both The Information and Wall Street Journal have said, yet it appears to be raising and burning just as much as OpenAI . Why did a company that's allegedly “reducing costs” have to raise $13 billion in September 2025 after raising $3.5 billion in March 2025 , and after raising $4 billion in November 2024 ? Am I really meant to read stories about Anthropic hitting break even in 2028 with a straight face? Especially as other stories say Anthropic will be cash flow positive “ as soon as 2027 .” And if this company is so efficient and so good with money , why does it need another $15 billion, likely only a few months after it raised $13 billion? Though I doubt the $15 billion round closes this year, if it does, it would mean that Anthropic would have raised $31.5 billion in 2025 — which is, assuming the remaining $22.5 billion comes from SoftBank, not far from the $40.8 billion OpenAI would have raised this year. In the event that SoftBank doesn't fund that money in 2025, Anthropic will have raised a little under $2 billion less ($16.5 billion) than OpenAI ($18.3 billion, consisting of $10 billion in June   split between $7.5 billion from SoftBank and $2.5 billion from other investors, and an $8.3 billion round in August ) this year. I think it's likely that Anthropic is just as disastrous a business as OpenAI, and I'm genuinely surprised that nobody has done the simple maths here, though at this point I think we're in the era of "not thinking too hard because when you do so everything feels crazy.” Which is why I'm about to think harder than ever! I feel like I'm asked multiple times a day both how and when the bubble will burst, and the truth is that it could be weeks or months or another year , because so little of this is based on actual, real stuff. While our markets are supported by NVIDIA's eternal growth engine, said growth engine isn't supported by revenues or real growth or really much of anything beyond vibes. As a result, it's hard to say exactly what the catalyst might be, or indeed what the bubble bursting might look like. 
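Before getting to the scenarios, here's the fundraising arithmetic above in one place. The $15 billion Anthropic round and SoftBank's remaining $22.5 billion are not closed, hence the two scenarios for each company:

```python
# The fundraising arithmetic from above. All figures are the rounds cited in
# this piece; the unclosed amounts are treated as scenarios, not facts.

anthropic_2025 = 3.5e9 + 13e9                     # March + September rounds
anthropic_2025_if_new_round_closes = anthropic_2025 + 15e9

openai_2025_funded = 7.5e9 + 2.5e9 + 8.3e9        # June tranche (SoftBank + others) + August round
openai_2025_if_softbank_delivers = openai_2025_funded + 22.5e9

print(f"Anthropic 2025 (closed): ${anthropic_2025 / 1e9:.1f}bn")                              # $16.5bn
print(f"Anthropic 2025 (if $15bn closes): ${anthropic_2025_if_new_round_closes / 1e9:.1f}bn") # $31.5bn
print(f"OpenAI 2025 (funded): ${openai_2025_funded / 1e9:.1f}bn")                             # $18.3bn
print(f"OpenAI 2025 (if SoftBank delivers): ${openai_2025_if_softbank_delivers / 1e9:.1f}bn") # $40.8bn
```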
Today, I'm going to sit down and give you the scenarios — the systemic shocks — that would potentially start the unravelling of this era, as well as explain what a bubble bursting might actually look like, both for private and public companies. This is the spiritual successor to August's AI Bubble 2027 , except I'm going to have a little more fun and write out a few scenarios that range from likely to possible , and try and give you an enjoyable romp through the potential apocalypses waiting for us in 2026.

Gemini 3 is good/better at the stuff tested on benchmarks compared to what OpenAI has. OpenAI's growth and usage were decelerating before this happened, and this just allows OpenAI to point to something.

Its chip effort is falling behind , with its "Maya" AI chip delayed to 2026, and according to The Information, "when it finally goes into mass production next year, it’s expected to fall well short of the performance of Nvidia’s flagship Blackwell chip."

According to The Information in late October 2025 , "more customers have been using Microsoft’s suite of AI copilots, but many of them aren’t paying for it."

In October , Australia's Competition and Consumer Commission sued Microsoft for "allegedly misleading 2.7 million Australians over Microsoft 365 subscriptions," by making it seem like they had to pay extra and integrate Copilot into their subscription rather than buy the, and I quote, "undisclosed third option, the Microsoft 365 Personal or Family Classic plans, which allowed subscribers to retain the features of their existing plan, without Copilot, at the previous lower price." This is what a company does when it can't sell shit. Google did the same thing with its workspace accounts earlier in the year . This should be illegal!

According to The Information in September 2025 , Microsoft had to "partly" replace OpenAI's models with Anthropic's for some of its Copilot software. Microsoft has, at this point, sunk over ten billion dollars into OpenAI, and part of its return for doing so was exclusively being able to use its models. Cool!

According to The Information in September 2025 , Microsoft has had to push discounts for Office 365 Copilot as customers had "found Copilot adoption slow due to high cost and unproven ROI." In late 2024 , customers had paused purchasing further Copilot assistants due to performance and cost issues.


Premium: The Hater's Guide To NVIDIA

This piece has a generous 3000+ word introduction, because I want as many people to understand NVIDIA as possible. The (thousands of) words after the premium break get into arduous detail, but I’ve written this so that, ideally, most people can pick up the details early on and understand this clusterfuck. Please do subscribe to the premium! I really appreciate it.

I've reached a point with this whole era where there are many, many things that don't make sense, and I know I'm not alone. I've been sick since Friday last week, and thus I have had plenty of time to sit and think about stuff. And by "stuff" I mean the largest company on the stock market: NVIDIA. Look, I'm not an accountant, nor am I a "finance expert." I learned all of this stuff myself. I learn a great deal by coming to things from the perspective of being a dumbass , a valuable intellectual framework of "I need to make sure I understand each bit and explain it as simply as possible." In this piece, I'm going to try and explain what this company is and how we got here, then ask questions that I, from the perspective of a dumbass, have about the company, and at least try and answer them.

Let's start with a very simple point: for a company of such remarkable size, very few people — myself included, at times! — seem to actually understand NVIDIA. NVIDIA is a company that sells all sorts of stuff, but the only reason you're hearing about it as a normal person is that NVIDIA's stock has become a load-bearing entity in the US stock market. This has happened because NVIDIA sells "GPUs" — graphics processing units — that power the large language model services that are behind the whole AI boom, either through "inference" (the process of creating an output from an AI model) or "training" (feeding data into the model to make its outputs better). NVIDIA also sells other things, which I’ll get to later, but they don’t really matter to the bigger picture.

Back in 2006, NVIDIA launched CUDA , a software layer that lets you run (some) software on (specifically) NVIDIA graphics cards, and over time this has grown into a massive advantage for the company. The thing is, GPUs are great for parallel processing - essentially spreading a task across multiple (by which I mean thousands of) processor cores at the same time - which means that certain tasks run faster than they would on, say, a CPU. While not every task benefits from parallel processing, or from having several thousand cores available at the same time, the kind of math that underpins LLMs is one such example. CUDA is proprietary to NVIDIA, and while there are alternatives (both closed- and open-source), none of them have the same maturity and breadth. Pair that with the fact that NVIDIA’s been focused on the data center market for longer than, say, AMD, and it’s easy to understand why it makes so much money. There really isn’t anyone who can do the same thing as NVIDIA, both in terms of software and hardware, and certainly not at the scale necessary to feed the hungry tech firms that demand these GPUs.

Anyway, back in 2019 NVIDIA acquired a company called Mellanox for $6.9 billion, beating off other would-be suitors, including Microsoft and Intel. Mellanox was a manufacturer of high-performance networking gear, and this acquisition would give NVIDIA a stronger value proposition for data center customers. It wanted to sell GPUs — lots of them — to data center customers, and now it could also sell the high-speed networking technology required to make them work in tandem.
This is relevant because it created the terms under which NVIDIA could start selling billions (and eventually tens of billions) of dollars' worth of specialized GPUs for AI workloads. As pseudonymous finance account JustDario has pointed out (both Dario and Kakashii have been immensely generous with their time explaining some of the underlying structures of NVIDIA, and are worth reading, though at times we diverge on a few points), mere months after the Mellanox acquisition, Microsoft announced its $1 billion investment in OpenAI to build "Azure AI supercomputing technologies."

Though it took until November 2022 for ChatGPT to really start the fires, in March 2020 , NVIDIA began the AI bubble with the launch of its "Ampere" architecture, and the A100, which provided "the greatest generational performance leap of NVIDIA's eight generations of GPUs," built for "data analytics, scientific computing and cloud graphics." The most important part, however, was the launch of NVIDIA's "Superpod": Per the press release: One might be fooled into thinking this was Huang suggesting we could now build smaller, more efficient data centers, when he was actually saying we should build way bigger ones that had way more compute power and took up way more space. The "Superpod" concept — groups of GPU servers networked together to work on specific operations — is the "thing" that is driving NVIDIA's sales. To "make AI happen," a company must buy thousands of these things and put them in data centers and you'd be a god damn idiot to not do this and yes, it requires so much more money than you used to spend.

At the time, a DGX A100 — a server that housed eight A100 GPUs (starting at around $10,000 per GPU at launch, increasing with the amount of on-board RAM, as is the case across the board) — started at $199,000. The next generation SuperPod, launched in 2022, was made up of eight H100 GPUs (starting at $25,000 per GPU, the next generation "Hopper" chips were apparently 30 times more powerful than the A100), and retailed from $300,000. You'll be shocked to hear the next generation Blackwell SuperPods started at $500,000 when launched in 2024 . A single B200 GPU costs at least $30,000 .

Because nobody else has really caught up with CUDA, NVIDIA has a functional monopoly (edit: I wrote monopsony in a previous version, sorry), and yes, you can have a situation where a market has a monopoly, even if there is, at least in theory, competition. Once a particular brand — and particular way of writing software for a particular kind of hardware — takes hold, there's an implicit cost of changing to another, on top of the fact that AMD and others have yet to come up with something particularly competitive.

Anyway, the reason that I'm writing all of this out is because I want you to understand why everybody is paying NVIDIA such extremely large amounts of money. Every year, NVIDIA comes up with a new GPU, and that GPU is much, much more expensive, and NVIDIA makes so much more money, because everybody has to build out AI infrastructure full of whatever the latest NVIDIA GPUs are, and those GPUs are so much more expensive every single year. With Blackwell — the third generation of AI-specialized GPUs — came a problem, in that these things were so much more power-hungry, and required entirely new ways of building data centers, along with different cooling and servers to put them in, much of which was sold by NVIDIA.
While you could kind of build around your current data centers to put A100s and H100s into production, Blackwell was...less cooperative, and ran much hotter. To quote NVIDIA Employee Number 4 David Rosenthal : In simple terms, Blackwell runs hot, so much hotter than Ampere (A100) or Hopper (H100) GPUs that it requires entirely different ways to cool it, meaning your current data center needs to be ripped apart to fit them. Huang has confirmed that Vera Rubin, the next generation of GPUs, will have the same architecture as Blackwell . I would bet money that it's also much more expensive.

Anyway, all of this has been so good for NVIDIA. As the single vendor for the most important component in the entire AI boom, it has set the terms for how much you pay and how you build any and all AI infrastructure. While there are companies like Supermicro and Dell who buy NVIDIA GPUs and ship them in servers to customers, that's just fine for NVIDIA CEO Jensen Huang, as that's somebody else selling his GPUs for him. NVIDIA has been printing money, quarter after quarter, going from a meager $7.192 billion in total revenue in the third (calendar year) quarter of 2023 to an astonishing $50 billion in just data center revenue (that's where the GPUs are) in its most recent quarter , for a total of $57 billion in revenue , and the company projects to make $63 billion to $67 billion in the next quarter.

Now, I'm going to stop you here, because this bit is really important, really simple, yet nobody thinks about it much: NVIDIA makes so much money, and it makes it from a much smaller customer base than most companies, because there are only so many entities that can buy thousands of chips that cost $50,000 or more each. $35 billion, $39 billion, $44 billion, $46 billion and $57 billion are very large amounts of money, and the entities pumping those numbers into the stratosphere are collectively having to spend hundreds of billions of dollars to make it happen.

So, let me give you a theoretical example. I swear I'm going somewhere with this. You, a genius, have decided you are about to join the vaunted ranks of "AI data center ownership." You decide to build a "small" AI data center — 25MW (megawatts, which, in this example, refers to the combined power draw of the tech inside the data center). That can't be that much, right? OpenAI is building a 1.2GW one out in Abilene, Texas . How much could this tiny little thing cost? Okay, well, let's start with those racks. You're gonna need to give Jensen Huang $600 million right away, as you need 200 GB200 racks. You're also gonna need a way to make them network together, because otherwise they aren't going to be able to handle all those big IT loads , so that's gonna be another $80 million or more, and you're going to need storage and servers to sync all of this up, which is, let's say, another $35 million. So we're at $715 million. Should be fine, right? Everybody's cool and everybody's normal. This is just a small data center after all. Oops, forgot cooling and power delivery stuff — that's another $5 million. $720 million. Okay. Anyway, sadly data centers require something called a "building." Construction costs for a data center are somewhere from $8 million to $12 million per megawatt , so, crap, okay. That's $250 million, but probably more like $300 million. We're now up to $1.02 billion, and we haven't even got the power yet. Okay, sick. Do you have one billion dollars? You don't? No worries!
Private credit — money loaned by non-banking entities — has been feeding more than $50 billion a quarter into the hungry mouths of anybody who desires to build a data center . You need $1.02 billion. You get $1.5 billion, because, you know, "stuff happens." Don't worry about those pesky high interest rates — you're about to be printing big money, AI style! Now that you're done raising all that cash, it'll only take anywhere from 6 to 18 months for site selection, permitting, design, development, construction, and energy procurement . You're also going to need about 20 acres of land for that 100,000 square foot data center . You may wonder why 100,000 square feet needs that much space, and that's because all of the power and cooling equipment takes up an astonishing amount of room.

So, yeah, after two years and over a billion dollars, you too can own a data center with NVIDIA GPUs that turn on, and at that point, you will offer a service that is functionally identical to everybody else buying GPUs from NVIDIA. Your competitors are Amazon, Google and Microsoft, followed by neoclouds — AI cloud companies selling the same thing as you, except they're directly backed by NVIDIA, and frequently, the big hyperscaler companies with brands that most people have heard of, like AWS and Azure. Oh, also, this stuff costs an indeterminately-large amount of money to run. You may wonder why I can't tell you how much, and that's because nobody wants to actually discuss the cost of running GPUs, the thing that underpins our entire stock market. There are good reasons, too. One does not just run "a GPU" — it's a GPU in a server of other GPUs with associated hardware, all drawing power in varying amounts, all running in sync with networking gear that also draws power, with varying amounts of user demand and shifts in the costs of power from the power company. But what we can say is that the up-front cost of buying these GPUs and their associated crap is such that it's unclear if they ever will generate a profit, because these GPUs run hot , all the time , and that causes some amount of them to die.

Here are some thoughts I have had: The NVIDIA situation is one of the most insane things I've seen in my life. The single-largest, single-most-valuable, single-most-profitable company on the stock market has got there through selling ultra-expensive hardware that takes hundreds of millions or billions of dollars (and years of construction in some cases) to start using, at which point it...doesn't make much revenue and doesn't seem to make a profit. Said hardware is funded by a mixture of cash flow from healthy businesses (see: Microsoft) or massive amounts of debt (see: everybody who is not a hyperscaler, and, at this point, some hyperscalers). The response to the continued proof that generative AI is not making money is to buy more GPUs, and it doesn't appear anybody has ever worked out why. This problem has been obvious for a long time, too. Today I'm going to explain to you — simply, but at length — why I am deeply concerned, and how deeply insane this situation has become.

A 25MW data center costs about $1 billion, with $600 million of that being GPUs — 200 GB200 racks, to be specific. It needs about 20 acres — 100,000 square feet for the data center, roughly. NVIDIA sells about $50 billion of GPUs and associated hardware in a quarter, so let's say that $40 billion of that is just the GPUs and $10 billion is everything else (primarily networking gear), so around 13,333 GB200 racks.
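Here's the 25MW walkthrough and the racks-per-quarter estimate as a minimal sketch, using the figures cited above and taking construction at roughly $12 million per megawatt:

```python
# The 25MW walkthrough and the racks-per-quarter estimate from above, using the
# figures as cited in the piece (construction taken at roughly $12M per MW).

gb200_racks = 200
rack_cost = 3_000_000                 # ~$600m for 200 racks
networking = 80_000_000
storage_and_servers = 35_000_000
cooling_and_power_gear = 5_000_000
construction = 25 * 12_000_000        # 25MW at roughly $12M per megawatt

total = (gb200_racks * rack_cost + networking + storage_and_servers
         + cooling_and_power_gear + construction)
print(f"25MW build, before power procurement: ~${total / 1e9:.2f}bn")  # ~$1.02bn

# And the quarterly NVIDIA estimate: if ~$40bn of a ~$50bn data center quarter
# is GPUs, that is roughly 13,333 GB200 racks' worth of silicon.
print(f"Racks implied per quarter: {40_000_000_000 // rack_cost:,}")    # 13,333
```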
I realize that NVIDIA sells far more than that (GB300 racks, singular GPUs, and so on). Deep-pocketed hyperscalers like Microsoft, Google, Meta and Amazon represented 41.32% of NVIDIA's revenue in the middle of 2025 , funneling free cash flow directly into Jensen Huang's pockets... ...for now. Amazon ( $15 billion ), Google ( $25 billion ), Meta ( $30 billion ) and Oracle ( $18 billion ) have all had to raise massive amounts of debt to continue to fund AI-focused capital expenditures, with more than half of that ( per Rubenstein ) spent on GPUs. Otherwise, basically anybody buying GPUs at any scale has to fund doing so with either venture capital (money raised in exchange for part of the company) or debt.

NVIDIA, at this point, is around 8% of the value of the S&P 500 (the 500 leading companies on the US stock market, meaning they meet certain criteria of size, liquidity (cash availability) and profitability). Its continued health — and representative value as a stock, which is not necessarily based on its actual numbers or health, but in this case kind of is? — has led the stock market to remarkable gains. It is not enough for NVIDIA to simply be a profitable company. It must continue beating the last quarter's revenue, again and again and again and again, forever . If that sounds dramatic, I assure you it is the truth.

NVIDIA's continued success — and its ability to continue delivering outsized beats of Wall Street's revenue estimates — depends on:

The willingness of a few very large, cash-rich companies (Microsoft, Meta, Amazon and Google) to continue buying successive generations of NVIDIA GPUs forever.

The ability of said companies to continue buying successive generations of GPUs forever.

The ability of other, less-cash-rich companies like Oracle to continue being able to raise debt to buy massive amounts of GPUs — such as the $40 billion of GPUs that Oracle is buying for Stargate Abilene — forever. This is becoming a problem.

The ability of unprofitable, debt-ridden companies like CoreWeave, AI "neoclouds" that use the GPUs they purchase from NVIDIA as collateral for loans to buy more GPUs , to continue raising that debt to buy more GPUs.

The ability of anybody who buys these GPUs to actually install them and use them, which requires massive amounts of construction... and more power than is currently available, even to the most well-funded and conspicuous projects .

In simple terms, its success depends on the debt markets continuing to prop up its revenues, because there is not really enough free cash in the world to continue pumping it into NVIDIA at this rate. And after all of this, large language models, the only way to make any real money on any of these GPUs , must prove they can actually produce a profit. Per my article from September, I can find no compelling evidence (outside of boosters speciously claiming otherwise) that it's profitable to sell access to GPUs. Based on my calculations, there's likely little more than $61 billion of actual AI revenue in 2025 across every single AI company and hyperscaler. Note that I said "revenue." Absolutely nobody is making a profit.


Premium: The Hater's Guide To The AI Bubble Vol. 2

We’re approaching the most ridiculous part of the AI bubble, with each day bringing us a new, disgraceful and weird headline. As I reported earlier in the week, OpenAI spent $12.4 billion on inference between 2024 and September 2025 , and its revenue share with Microsoft heavily suggests it made at least $2.469 billion in 2024 ( when reports had OpenAI at $3.7 billion for 2024 ), with the only missing revenue to my knowledge being the 20% Microsoft shares with OpenAI when it sells OpenAI models on Azure, and whatever cut Microsoft gives OpenAI from Bing.

Nevertheless, the gap between reported figures and what the documents I’ve seen said is dramatic. Despite reports that OpenAI made, in the first half of 2025, $4.3 billion in revenue on $2.5 billion of “cost of revenue,” what I’ve seen shows that OpenAI spent $5.022 billion on inference (the process of creating an output using a model) in that period, and made at least $2.2735 billion. I, of course, am hedging aggressively, but I can find no explanation for the gaps. I also can’t find an explanation for why Sam Altman said that OpenAI was “profitable on inference” in August 2025 , nor how OpenAI will hit “$20 billion in annualized revenue” by end of 2025 , nor how OpenAI will do “well more” than $13 billion this year . Perhaps there’s a chance that for some 30-day period of this year OpenAI hits $1.66 billion in revenue (AKA $20 billion annualized), but even that would leave it short of its stated target revenue.

The very same day I ran that piece, somebody posted a clip of Microsoft CEO Satya Nadella , who had this to say when asked about recent revenue projections from AI labs: I don’t know, Satya, how about not fucking making shit up? Not embellishing? Is it too much to ask that these companies make projections that adhere to reality, rather than whatever an investor would want to hear? Or, indeed, projections that perpetuate a myth of inevitability, but fly in the face of reality? I get that in any investment scenario you want to sell a story, but the idea that the CEO of a company with a $3.8 trillion market cap is sitting around saying “what do you expect them to do, tell the truth? They need money for compute!” is fucking disgraceful. No, I do not believe a company should make overblown revenue projections, nor do I think it’s good for the CEO of Microsoft to encourage the practice. I also seriously have to ask why Nadella believes that this is happening, and, indeed, who he might be specifically talking about, as Microsoft has particularly good insights into OpenAI’s current and future financial health .

However, because Nadella was talking in generalities, this could refer to Anthropic, and it kinda makes sense, because Anthropic just received near-identical articles about its costs from both The Information and The Wall Street Journal , with The Information saying that Anthropic “projected a positive free cash flow as soon as 2027,” and the Wall Street Journal saying that Anthropic “anticipates breaking even by 2028,” with both pieces featuring the cash burn projections of both OpenAI and Anthropic based on “documents” or “investor projections” shared this summer. Both pieces focus on free cash flow, both pieces focus on revenue, and both pieces say that OpenAI is spending way more than Anthropic, and that Anthropic is on the path to profitability. The Information also includes a graph involving Anthropic’s current and projected gross margins, with the company somehow hitting 75% gross margins by 2028.

How does any of this happen?
Nobody seems to know! Per The Journal: …hhhhooowwwww????? I’m serious! How? The Information tries to answer: Is…that the case? Are there any kind of numbers to back this up? Because Business Insider just ran a piece covering documents involving startups claiming that Amazon’s chips had "performance challenges,” were “plagued by frequent service disruptions,” and “underperformed” NVIDIA H100 GPUs on latency, making them “less competitive” in terms of speed and cost. One startup “found Nvidia's older A100 GPUs to be as much as three times more cost-efficient than AWS's Inferentia 2 chips for certain workloads,” and a research group called AI Singapore “determined that AWS’s G6 servers, equipped with NVIDIA GPUs, offered better cost performance than Inferentia 2 across multiple use cases.”

I’m not trying to dunk on The Wall Street Journal or The Information, as both are reporting what is in front of them, I just kind of wish somebody there would say “huh, is this true?” or “will they actually do that?” a little more loudly, perhaps using previously-written reporting. For example, The Information reported in January 2024 that Anthropic’s gross margin in December 2023 was between 50% and 55% , CNBC stated in September 2024 that Anthropic’s “aggregate” gross margin would be 38%, and then it turned out that Anthropic’s 2024 gross margins were actually negative 109% (or negative 94% if you just focus on paying customers) according to The Information’s November 2025 reporting . In fact, Anthropic’s gross margin appears to be a moving target. In July 2025, The Information was told by sources that “Anthropic recently told investors its gross profit margin from selling its AI models and Claude chatbot directly to customers was roughly 60% and is moving toward 70%,” only to publish a few months later (in their November piece) that Anthropic’s 2025 gross margin would be…47%, and would hit 63% in 2026. Huh?

I’m not bagging on these outlets. Everybody reports from the documents they get or what their sources tell them, and any piece you write comes with the risk that things could change, as they regularly do in running any kind of business. That being said, the gulf between “38%” and “negative 109%” gross margins is pretty fucking large, and suggests that whatever Anthropic is sharing with investors (I assume) is either so rapidly changing that giving a number is foolish, or made up on the spot as a means of pretending you have a functional business. I’ll put it a little more simply: it appears that much of the AI bubble is inflated on vibes, and I’m a little worried that the media is being too helpful. These companies are yet to prove themselves in any tangible way, and it’s time for somebody to give a frank evaluation of where we stand.

If I’m honest, a lot of this piece will be venting, because I am frustrated. When all of this collapses there will, I guarantee, be multiple startups that have outright lied to the media, and done so, in some cases, in ways that are equal parts obvious and brazen. My own work has received significantly more skepticism than OpenAI or Anthropic, two companies worth alleged billions of dollars that appear to change their story with an aloof confidence borne of the knowledge that nobody read or thought too deeply about what it is that their CEOs have to say, other than “wow, Anthropic said a new number !” So I’m going to do my best to write about every single major AI company in one go.
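As an aside, because these margin figures will keep coming up: gross margin is just revenue minus cost of revenue, divided by revenue. A hypothetical sketch of what the two reported stories imply:

```python
# Gross margin is (revenue - cost of revenue) / revenue. The numbers below are
# hypothetical, purely to illustrate what the reported figures imply.

def gross_margin(revenue: float, cost_of_revenue: float) -> float:
    return (revenue - cost_of_revenue) / revenue

# A 38% margin means roughly 62 cents of cost per dollar of revenue...
print(f"{gross_margin(1.00, 0.62):.0%}")   # 38%

# ...while a negative 109% margin means roughly $2.09 of cost per dollar of revenue.
print(f"{gross_margin(1.00, 2.09):.0%}")   # -109%

# The gap between those two stories is not a rounding error; it describes a
# completely different business.
```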
I am going to pull together everything I can find and give a frank evaluation of what they do, where they stand, their revenues, their funding situation, and, well, whatever else I feel about them. And honestly, I think we’re approaching the end. The Information recently published one of the grimmest quotes I’ve seen in the bubble so far: Hey, what was that? What was that about “growing concerns regarding the costs and benefits of AI”? What “capital shift”? The fucking companies are telling you, to your face, that they know there’s not a sustainable business model or great use case, and you are printing it and giving it the god damn thumbs up.

How can you not be a hater at this point? This industry is loathsome, its products ranging from useless to niche at best, its costs unsustainable, and its futures full of fire and brimstone. This is the Hater’s Guide To The AI Bubble Volume 2 — a premium sequel to the Hater’s Guide from earlier this year — where I will finally bring some clarity to a hype cycle that has yet to prove its worth, breaking down, industry by industry and company by company, the financial picture, relative success and potential future of the companies that matter. Let’s get to it.


Exclusive: Here's How Much OpenAI Spends On Inference and Its Revenue Share With Microsoft

As with my Anthropic exclusive from a few weeks ago , though this feels like a natural premium piece, I decided it was better to publish on my free one so that you could all enjoy it. If you liked or found this piece valuable, please subscribe to my premium newsletter — here’s $10 off the first year of an annual subscription . I have put out over a hundred thousand words of coverage in the last three months, most of which is on my premium, and I’d really appreciate your support. I also did an episode of my podcast Better Offline about this. Before publishing, I discussed the data with a Financial Times reporter. Microsoft and OpenAI both declined to comment to the FT. If you ever want to share something with me in confidence, my Signal is ezitron.76, and I’d love to hear from you.

What I’ll describe today will be a little more direct than usual, because I believe the significance of the information requires me to be as specific as possible. Based on documents viewed by this publication, I am able to report OpenAI’s inference spend on Microsoft Azure, in addition to its payments to Microsoft as part of its 20% revenue share agreement, which was reported in October 2024 by The Information . In simpler terms, Microsoft receives 20% of OpenAI’s revenue. I do not have OpenAI’s training spend, nor do I have information on the entire extent of OpenAI’s revenues, as it appears that Microsoft shares some percentage of its revenue from Bing, as well as 20% of the revenue it receives from selling OpenAI’s models. According to The Verge : Nevertheless, I am going to report what I’ve been told. One small note — for the sake of clarity, every time I mention a year going forward, I’ll be referring to the calendar year, and not Microsoft’s financial year (which ends in June).

The numbers in this post differ from those that have been reported publicly. For example, previous reports had said that OpenAI had spent $2.5 billion on “cost of revenue” - which I believe represents OpenAI’s inference costs - in the first half of CY2025 . According to the documents viewed by this newsletter, OpenAI spent $5.02 billion on inference alone with Microsoft Azure in the first half of calendar year 2025 (CY2025). This is a pattern that has continued through the end of September. By that point in CY2025 — three months later — OpenAI had spent $8.67 billion on inference. OpenAI’s inference costs have risen consistently over the last 18 months, too. For example, OpenAI spent $3.76 billion on inference in CY2024, meaning that OpenAI has already more than doubled its inference costs in CY2025 through September. Based on its reported revenues of $3.7 billion in CY2024 and $4.3 billion in revenue for the first half of CY2025 , it seems that OpenAI’s inference costs easily eclipsed its revenues.

Yet, as mentioned previously, I am also able to shed light on OpenAI’s revenues, as these documents also reveal the amounts that Microsoft takes as part of its 20% revenue share with OpenAI. Concerningly, extrapolating OpenAI’s revenues from this revenue share does not produce numbers that match those previously reported. According to the documents, Microsoft received $493.8 million in revenue share payments in CY2024 from OpenAI — implying revenues for CY2024 of at least $2.469 billion, or around $1.23 billion less than the $3.7 billion that has been previously reported .
Similarly, for the first half of CY2025, Microsoft received $454.7 million as part of its revenue share agreement, implying OpenAI’s revenues for that six-month period were at least $2.273 billion, or around $2 billion less than the $4.3 billion previously reported . Through September, Microsoft’s revenue share payments totalled $865.9 million, implying OpenAI’s revenues are at least $4.329 billion. According to Sam Altman, OpenAI’s revenue is “well more” than $13 billion . I am not sure how to reconcile that statement with the documents I have viewed.

The following numbers are calendar years. I will add that, where I have them, I will include OpenAI’s leaked or reported revenues. In some cases, the numbers match up. In others they do not. Though I do not know for certain, the only way to reconcile this would be some sort of creative means of measuring “annualized” or “recurring” revenue. I am confident in saying that I have read every single story about OpenAI’s revenue ever written, and at no point does OpenAI (or the outlets reporting on it) explain how the company defines “annualized” or “annual recurring revenue.” I must be clear that the following is me speaking in generalities, and not about OpenAI specifically, but you can get really creative with annualized revenue or annual recurring revenue. You can say 30 days, 28 days, and you can even choose a period of time that isn’t a calendar month — so, say, the best 30 days of your company’s existence across two different months. I have no idea how OpenAI defines this metric, and default to treating an “annualized” or “ARR” figure as twelve times a single month’s revenue, meaning a month works out to the headline number divided by 12.

The Financial Times reported on February 9, 2024 that OpenAI’s revenues had “surpassed $2 billion on an annualised basis” in December 2023, working out to $166.6 million in a month. The Information reported on June 12, 2024 that OpenAI had “more than doubled its annualized revenue to $3.4 billion in the last six months or so,” working out to around $283 million in a month, likely referring to this period. On September 27, 2024, the New York Times reported that “OpenAI’s monthly revenue hit $300 million in August…and the company expects about $3.7 billion in annual sales [in 2024],” according to a financial professional’s review of documents. On June 9, 2025, an OpenAI spokesperson told CNBC that it had hit “$10 billion annual recurring revenue,” excluding licensing revenue from OpenAI’s 20% revenue share and “large, one-time deals.” $10bn annualized revenue works out to around $833 million in a month. These numbers are inclusive of OpenAI’s revenue share payments to Microsoft and OpenAI’s inference spend. There could potentially be royalty payments made to OpenAI as part of its deal to receive 20% of Microsoft’s sales of OpenAI’s models, or other revenue related to its revenue share with Bing.

Due to the sensitivity and significance of this information, I am taking a far more blunt approach with this piece. Based on the information in this piece, OpenAI’s costs and revenues are potentially dramatically different from what we believed. The Information reported in October 2024 that OpenAI’s revenue could be $4 billion, and inference costs $2 billion, based on documents “which include financial statements and forecasts,” and specifically added the following: I do not know how to reconcile this with what I am reporting today. In the first half of CY2024, based on the information in the documents, OpenAI’s inference costs were $1.295 billion, and its revenues at least $934 million.
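For clarity, these are the two conversions I'm using throughout this piece, sketched out with the figures above (the annualized-to-monthly conversion is my stated default assumption, not OpenAI's definition):

```python
# The two conversions used throughout this piece: implied revenue from the 20%
# revenue share, and a rough monthly figure from an "annualized" headline.

def implied_revenue(revenue_share_paid_to_microsoft: float) -> float:
    """Microsoft receives 20% of OpenAI's revenue, so the floor is payment / 0.20."""
    return revenue_share_paid_to_microsoft / 0.20

def monthly_from_annualized(annualized: float) -> float:
    """My default assumption: 'annualized' simply means a month's revenue times 12."""
    return annualized / 12

print(f"CY2024: ${implied_revenue(493.8e6) / 1e9:.2f}bn")            # at least ~$2.469bn
print(f"H1 CY2025: ${implied_revenue(454.7e6) / 1e9:.2f}bn")         # at least ~$2.273bn
print(f"Through Q3 CY2025: ${implied_revenue(865.9e6) / 1e9:.2f}bn") # at least ~$4.329bn
print(f"'$10bn ARR' as a month: ${monthly_from_annualized(10e9) / 1e6:.0f}m")  # ~$833m
```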
Indeed, it is tough to reconcile what I am reporting with much of what has been reported about OpenAI’s costs and revenues. OpenAI’s inference spend with Microsoft Azure between CY2024 and Q3 CY2025 was $12.43 billion. That is an astonishing figure, one that dramatically dwarfs any and all reporting, which, based on my analysis, suggested that OpenAI spent $2 billion on inference in 2024 and $2.5 billion through H1 CY2025. In other words, inference costs are nearly triple those reported elsewhere. Similarly, OpenAI’s extrapolated revenues are dramatically different from those reported. While we do not have a final tally for 2024, the indicators presented in the documents viewed contrast starkly with the reported predictions from that year. Both reports of OpenAI’s 2024 revenues ( CNBC , The Information ) are from the same year and are projections of potential final totals, though The Information’s story about OpenAI’s H1 CY2025 revenues said that “OpenAI generated $4.3 billion in revenue in the first half of 2025, about 16% more than it generated all of last year,” which would bring us to $3.612 billion in revenue, or $1.145 billion more than is implied by OpenAI’s revenue share payments to Microsoft.

I do not have an answer for inference, other than I believe that OpenAI is spending far more money on inference than we were led to believe, and that the current numbers reported do not resemble those in the documents. Based on these numbers, it appears that OpenAI may be the single most cash-intensive startup of all time, and that the cost of running large language models may not be something that can be supported by revenues. Even if revenues were to match those that had been reported, OpenAI’s inference spend on Azure consumes them, and appears to scale ahead of revenue. I also cannot reconcile these numbers with the reporting that OpenAI will have a cash burn of $9 billion in CY2025 . On inference alone, OpenAI has already spent $8.67 billion through Q3 CY2025. Similarly, I cannot see a path for OpenAI to hit its projected $13 billion in revenue by the end of 2025, nor can I see on what basis Mr. Altman could state that OpenAI will make “well more” than $13 billion this year .

I cannot and will not speak to the financial health of OpenAI in this piece, but I will say this: these numbers are materially different from what has been reported, and the significance of OpenAI’s inference spend alone makes me wonder about the larger cost picture for generative AI. If it costs this much to run inference for OpenAI, I believe it costs this much for any generative AI firm to run on OpenAI’s models. If it does not, OpenAI’s costs are dramatically higher than the prices it is charging its customers, which makes me wonder whether price increases could be necessary to begin making more money, or at the very least losing less. Similarly, if OpenAI’s costs are this high, it makes me wonder about the margins of any frontier model developer.
Q1 CY2024: Inference: $546.8 million. Microsoft revenue share: $77.3 million. Implied OpenAI revenue: at least $386.5 million.
Q2 CY2024: Inference: $748.3 million. Microsoft revenue share: $109.5 million. Implied OpenAI revenue: at least $547.5 million.
Q3 CY2024: Inference: $1.005 billion. Microsoft revenue share: $139.2 million. Implied OpenAI revenue: at least $696 million.
Q4 CY2024: Inference: $1.467 billion. Microsoft revenue share: $167.8 million. Implied OpenAI revenue: at least $839 million.
Total inference spend for CY2024: $3.767 billion.
Total implied revenue for CY2024: at least $2.469 billion.
Reported (projected) revenue for CY2024: $3.7 billion, per CNBC in September 2024. The Information also reported that expected revenue could be as high as $4 billion in a piece from October 2024.
Reported inference costs for CY2024: $2 billion, per The Information.
Q1 CY2025: Inference: $2.075 billion. Microsoft revenue share: $206.4 million. Implied OpenAI revenue: at least $1.032 billion.
Q2 CY2025: Inference: $2.947 billion. Microsoft revenue share: $248.3 million. Implied OpenAI revenue: at least $1.2415 billion.
H1 CY2025 inference: $5.022 billion.
H1 CY2025 revenue: at least $2.273 billion.
Reported H1 CY2025 revenue: $4.3 billion (per The Information).
Reported H1 CY2025 "cost of revenue": $2.5 billion (per The Information).
Q3 CY2025: Inference: $3.648 billion. Microsoft revenue share: $411.1 million. Implied OpenAI revenue: at least $2.056 billion.
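The quarterly floors above all come from one piece of arithmetic: divide each Microsoft revenue share payment by 20%. A quick sketch, using only the payment figures already quoted in this piece:

```python
# Microsoft's 20% revenue share implies a floor on OpenAI's revenue: payment / 0.20.
# The payments below are the quarterly figures quoted above, in $M.
REV_SHARE_RATE = 0.20

payments = {
    "Q1 CY2024": 77.3,  "Q2 CY2024": 109.5, "Q3 CY2024": 139.2, "Q4 CY2024": 167.8,
    "Q1 CY2025": 206.4, "Q2 CY2025": 248.3, "Q3 CY2025": 411.1,
}

implied = {quarter: payment / REV_SHARE_RATE for quarter, payment in payments.items()}
for quarter, revenue in implied.items():
    print(f"{quarter}: implied revenue of at least ${revenue:,.1f}M")

cy2024_total = sum(v for q, v in implied.items() if "CY2024" in q)
h1_2025_total = implied["Q1 CY2025"] + implied["Q2 CY2025"]
print(f"CY2024 implied revenue: at least ${cy2024_total:,.1f}M")      # ~$2,469M
print(f"H1 CY2025 implied revenue: at least ${h1_2025_total:,.1f}M")  # ~$2,273.5M
```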


Premium: OpenAI Burned $4.1 Billion More Than We Knew - Where Is Its Money Going?

Soundtrack: Queens of the Stone Age - Song For The Dead

Editor's Note: The original piece had a mathematical error around burn rate, it's been fixed. Also, welcome to another premium issue! Please do subscribe, this is a massive, 7000-or-so word piece, and that's the kind of depth you get every single week for your subscription.

A few days ago, Sam Altman said that OpenAI's revenues were "well more" than $13bn in 2025, a statement I question given that, based on other outlets' reporting, OpenAI only made $4.3bn through the first half of 2025, and likely around a billion a month since, which I estimate means the company made around $8bn by the end of September. This is an estimate. If I receive information to the contrary, I'll report it.

Nevertheless, OpenAI is also burning a lot of money. In recent public disclosures (as reported by The Register), Microsoft noted that it had funding commitments to OpenAI of $13bn, of which $11.6bn had been funded by September 30 2025.

These disclosures also revealed that OpenAI lost $12bn in the last quarter — Microsoft's Fiscal Year Q1 2026, representing July through September 2025. To be clear, this is actual, real accounting, rather than the figures leaked to reporters. It's not that leaks are necessarily a problem — it's just that anything appearing on any kind of SEC filing generally has to pass a very, very high bar.

There is absolutely nothing about these numbers that suggests that OpenAI is "profitable on inference," as Sam Altman told a group of reporters at a dinner in the middle of August. Let me get specific.

The Information reported that through the first half of 2025, OpenAI spent $6.7bn on research and development, "which likely include[s] servers to develop new artificial intelligence." The common refrain here is that OpenAI "is spending so much on training that it's eating the rest of its margins," but if that were the case here, it would mean that OpenAI spent the equivalent of six months' training in the space of three. I think the more likely answer is that OpenAI is spending massive amounts of money on staff, sales and marketing ($2bn alone in the first half of the year), real estate, lobbying, data, and, of course, inference.

According to The Information, OpenAI had $9.6bn in cash at the end of June 2025. Assuming that OpenAI lost $12bn in calendar Q3 2025, and made — I'm being generous — around $3.3bn (or $1.1bn a month) within that quarter, this would suggest OpenAI's operations cost it over $15bn in the space of three months. Where, exactly, is this money going? And how do the published numbers actually make sense when you reconcile them with Microsoft's disclosures?

In the space of three months, OpenAI — if we are to believe what was leaked to The Information (and, to be clear, I respect their reporting) — went from a net loss of $13.5bn in six months to, I assume, a net loss of $12bn in three months.

Though there are likely losses related to stock-based compensation, this only represented a cost of $2.5bn in the first half of 2025. The Information also reported that OpenAI "spent more than $2.5 billion on its cost of revenue," suggesting inference costs of…around that?

I don't know. I really don't know. But something isn't right, and today I'm going to dig into it. In this newsletter I'm going to reveal how OpenAI's reported revenues and costs don't line up - and that there's $4.1 billion of cash burn that has yet to be reported elsewhere.
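To make the back-of-the-envelope above explicit, here is the same arithmetic as a sketch. The inputs are the figures cited in this piece (Microsoft's disclosure and The Information's reporting); the quarterly revenue is the same generous assumption as above.

```python
# Rough reconstruction of the quarter described above, in $bn.
q3_net_loss = 12.0        # loss attributed to OpenAI in Microsoft's FY Q1 2026 disclosure
q3_revenue = 3.3          # generous assumption: roughly $1.1bn a month for July-September
cash_end_of_june = 9.6    # per The Information

# If the quarter's loss was $12bn on roughly $3.3bn of revenue, the quarter's costs were about:
# (this ignores non-cash items like stock-based compensation, which the piece notes separately)
implied_q3_costs = q3_net_loss + q3_revenue
print(f"Implied Q3 CY2025 costs: ~${implied_q3_costs:.1f}bn")   # ~$15.3bn

# That loss alone exceeds the $9.6bn of cash reported at the end of June, which is why the
# outside funding (the $11.6bn Microsoft had funded by September 30) matters so much.
print(f"Q3 loss minus June cash pile: ~${q3_net_loss - cash_end_of_june:.1f}bn uncovered")
```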


Big Tech Needs $2 Trillion In AI Revenue By 2030 or They Wasted Their Capex

As I've established again and again, we are in an AI bubble, and no, I cannot tell you when the bubble will pop, because we're in the stupidest financial era since the great financial crisis — though, I hope, not quite as severe in its eventual apocalyptic circumstances.

By the end of the year, Microsoft, Amazon, Google and Meta will have spent over $400bn in capital expenditures, much of it focused on building AI infrastructure, on top of $228.4bn in capital expenditures in 2024 and around $148bn in capital expenditures in 2023, for a total of around $776bn in the space of three years. At some point, all of these bills will have to come due.

You see, big tech has been given incredible grace by the markets, never having to actually show that their revenue growth is coming from selling AI or AI-related services. Only Microsoft ever bothered, piping up in October 2024 to say it was making $833 million a month ($10bn ARR) from AI and then $1.08 billion a month in January 2025 ($13bn ARR), and then choosing to never report it again.

As reported by The Information, $10bn of Microsoft's Azure revenue this year will come from OpenAI's spend on compute, which, as also reported by The Information, is paid at "...a heavily discounted rental rate that essentially only covers Microsoft's costs for operating the servers."

It's absolutely astonishing that such egregious expenditures have never brought with them any scrutiny of the actual return on investment, or any kind of demands for disclosure of the resulting revenue. As a result, big tech has used their already-successful products and existing growth to pretend that something is actually happening other than Satya Nadella standing with his hands on his hips and talking about his favourite ways to use Copilot, a product so unpopular that only eight million active Microsoft 365 customers are paying for it out of over 440 million users. This stuff is so unpopular that the world's biggest and most powerful software company — and one with a virtual monopoly on the office productivity market — had to use dark patterns to get people to pay for it.

Earlier in the week, OpenAI announced that it had "successfully converted to a more traditional corporate structure," giving Microsoft a 27% position in the new entity worth $130bn, with the Wall Street Journal vaguely saying that Microsoft will also have "the ability to get more ownership as the for-profit becomes more valuable."

Said deal also brought with it a commitment to spend $250bn on Microsoft Azure, which Microsoft has booked as "remaining performance obligations" in the same way that Oracle stuffed its RPOs with $300bn from OpenAI, a company that cannot afford to pay either company even a tenth of those obligations and is on the hook for over a trillion dollars in the next four years. But OpenAI isn't the only one with a bill coming due.

As we speak, the markets are still in the thrall of an egregious, hype-stuffed bubble, with the hogs of Wall Street braying and oinking their loudest as Jensen Huang claims — without any real breakdown as to who is buying them — that NVIDIA has over $500bn in bookings for its AI chips, with little worry about whether there's enough money to actually pay for all of those GPUs or, more operatively, whether anybody plugging them in is making any profits off of them. To be clear, everybody is losing money on AI.
Every single startup, every single hyperscaler, everybody who isn’t selling GPUs or servers with GPUs inside them is losing money on AI. No matter how many headlines or analyst emissions you consume, the reality is that big tech has sunk over half a trillion dollars into this bullshit for two or three years, and they are only losing money.  So, at what point does all of this become worth it?  Actually, let me reframe the question: how does any of this become worthwhile? Today, I’m going to try and answer the question, and have ultimately come to a brutal conclusion: due to the onerous costs of building data centers, buying GPUs and running AI services, big tech has to add $2 Trillion in AI revenue in the next four years. Honestly, I think they might need more. No, really. Big tech has already spent $605 billion in capital expenditures since 2023, with a chunk of that dedicated to 5-year-old (A100) and 4-year-old (H100) GPUs, and the rest dedicated to buying Blackwell chips that The Information reports have gross margins of negative 100% : Big tech’s lack of tangible revenue (let alone profits) from selling AI services only compounds the problem, meaning every dollar of capex burned on AI is currently putting these companies further in the hole.  Yet there’s also another problem - that GPUs are uniquely expensive to purchase, run and maintain, requiring billions of dollars of data center construction and labor before you can even make a dollar. Worse still, their value decays every single year, in part thanks to the physics of heat and electricity, and NVIDIA releasing a new chip every single year .
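For reference, the capex arithmetic from the top of this piece, added up:

```python
# Combined capital expenditures of Microsoft, Amazon, Google and Meta, in $bn,
# using the figures cited above (2025 is the projected year-end total).
capex_by_year = {2023: 148.0, 2024: 228.4, 2025: 400.0}

total = sum(capex_by_year.values())
print(f"2023-2025 combined capex: ~${total:.0f}bn")   # ~$776bn
```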


This Is How Much Anthropic and Cursor Spend On Amazon Web Services

So, I originally planned for this to be on my premium newsletter, but decided it was better to publish on my free one so that you could all enjoy it. If you liked it, please consider subscribing to support my work. Here's $10 off the first year of annual. I've also recorded an episode about this on my podcast Better Offline (RSS feed, Apple, Spotify, iHeartRadio), it's a little different but both handle the same information, just subscribe and it'll pop up.

Over the last two years I have written again and again about the ruinous costs of running generative AI services, and today I'm coming to you with real proof. Based on discussions with sources with direct knowledge of their AWS billing, I am able to disclose the amounts that AI firms are spending, specifically Anthropic and AI coding company Cursor, its largest customer. I can exclusively reveal today Anthropic's spending on Amazon Web Services for the entirety of 2024, and for every month in 2025 up until September, and that Anthropic's spend on compute far exceeds that previously reported.

Furthermore, I can confirm that through September, Anthropic has spent more than 100% of its estimated revenue (based on reporting in the last year) on Amazon Web Services, spending $2.66 billion on compute on an estimated $2.55 billion in revenue.

Additionally, Cursor's Amazon Web Services bills more than doubled from $6.2 million in May 2025 to $12.6 million in June 2025, exacerbating a cash crunch that began when Anthropic introduced Priority Service Tiers, an aggressive rent-seeking measure that began what I call the Subprime AI Crisis, where model providers begin jacking up the prices on their previously subsidized rates. Although Cursor obtains the majority of its compute from Anthropic — with AWS contributing a relatively small amount, and likely also taking care of other parts of its business — the data seen reveals an overall direction of travel, where the costs of compute only keep on going up.

Let's get to it. In February of this year, The Information reported that Anthropic burned $5.6 billion in 2024, and made somewhere between $400 million and $600 million in revenue:

While I don't know about prepayment for services, I can confirm from a source with direct knowledge of billing that Anthropic spent $1.35 billion on Amazon Web Services in 2024, and has already spent $2.66 billion on Amazon Web Services through the end of September. Assuming that Anthropic made $600 million in revenue, this means that Anthropic spent $6.2 billion in 2024, leaving $4.85 billion in costs unaccounted for.

The Information's piece also brings up another point:

Before I go any further, I want to be clear that The Information's reporting is sound, and I trust that their source (I have no idea who they are or what information was provided) was operating in good faith with good data. However, Anthropic is telling people it spent $1.5 billion on just training when it has an Amazon Web Services bill of $1.35 billion, which heavily suggests that its actual compute costs are significantly higher than we thought, because, to quote SemiAnalysis, "a large share of Anthropic's spending is going to Google Cloud."

I am guessing, because I do not know, but with $4.85 billion of other expenses to account for, it's reasonable to believe Anthropic spent an amount similar to its AWS spend on Google Cloud. I do not have any information to confirm this, but given the discrepancies mentioned above, this is an explanation that makes sense.
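Here's the 2024 arithmetic laid out as a sketch. The burn and revenue figures are The Information's, the AWS figure is from my sourcing, and the revenue used is the top of the reported range.

```python
# Anthropic's 2024 picture, in $bn, using the figures discussed above.
burn_2024 = 5.6       # reported cash burn
revenue_2024 = 0.6    # generous: the high end of the $400-600M range
aws_2024 = 1.35       # AWS spend, per my source

# Burn is spend minus revenue, so total spend is burn plus revenue.
total_spend = burn_2024 + revenue_2024
unaccounted = total_spend - aws_2024

print(f"Implied total 2024 spend: ${total_spend:.2f}bn")          # $6.20bn
print(f"Costs not explained by AWS alone: ${unaccounted:.2f}bn")  # $4.85bn
```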
I also will add that there is some sort of undisclosed cut that Amazon gets of Anthropic’s revenue, though it’s unclear how much. According to The Information , “Anthropic previously told some investors it paid a substantially higher percentage to Amazon [than OpenAI’s 20% revenue share with Microsoft] when companies purchase Anthropic models through Amazon.” I cannot confirm whether a similar revenue share agreement exists between Anthropic and Google. This also makes me wonder exactly where Anthropic’s money is going. Anthropic has, based on what I can find, raised $32 billion in the last two years, starting out 2023 with a $4 billion investment from Amazon from September 2023 (bringing the total to $37.5 billion), where Amazon was named its “primary cloud provider” nearly eight months after Anthropic announced Google was Anthropic’s “cloud provider.,” which Google responded to a month later by investing another $2 billion on October 27 2023 , “involving a $500 million upfront investment and an additional $1.5 billion to be invested over time,” bringing its total funding from 2023 to $6 billion. In 2024, it would raise several more rounds — one in January for $750 million, another in March for $884.1 million, another in May for $452.3 million, and another $4 billion from Amazon in November 2024 , which also saw it name AWS as Anthropic’s “primary cloud and training partner,” bringing its 2024 funding total to $6 billion. In 2025 so far, it’s raised a $1 billion round from Google , a $3.5 billion venture round in March, opened a $2.5 billion credit facility in May, and completed a $13 billion venture round in September, valuing the company at $183 billion . This brings its total 2025 funding to $20 billion.  While I do not have Anthropic’s 2023 numbers, its spend on AWS in 2024 — around $1.35 billion — leaves (as I’ve mentioned) $4.85 billion in costs that are unaccounted for. The Information reports that costs for Anthropic’s 521 research and development staff reached $160 million in 2024 , leaving 394 other employees unaccounted for (for 915 employees total), and also adding that Anthropic expects its headcount to increase to 1900 people by the end of 2025. The Information also adds that Anthropic “expects to stop burning cash in 2027.” This leaves two unanswered questions: An optimist might argue that Anthropic is just growing its pile of cash so it’s got a warchest to burn through in the future, but I have my doubts. In a memo revealed by WIRED , Anthropic CEO Dario Amodei stated that “if [Anthropic wanted] to stay on the frontier, [it would] gain a very large benefit from having access to this capital,” with “this capital” referring to money from the Middle East.  Anthropic and Amodei’s sudden willingness to take large swaths of capital from the Gulf States does not suggest that it’s not at least a little desperate for capital, especially given Anthropic has, according to Bloomberg , “recently held early funding talks with Abu Dhabi-based investment firm MGX” a month after raising $13 billion . In my opinion — and this is just my gut instinct — I believe that it is either significantly more expensive to run Anthropic than we know, or Anthropic’s leaked (and stated) revenue numbers are worse than we believe. I do not know one way or another, and will only report what I know. 
So, I'm going to do this a little differently than you'd expect, in that I'm going to lay out how much these companies spent, and draw throughlines from that spend to their reported revenue numbers and product announcements or events that may have caused their compute costs to increase. I've only got Cursor's numbers from January through September 2025, but I have Anthropic's AWS spend for both the entirety of 2024 and through September 2025.

So, "annualized revenue" is one of the most abused terms in the world of software, but in this case, I am sticking to the idea that it means "month times 12." So, if a company made $10m in January, you would say that its annualized revenue is $120m. Obviously, there are a lot of (when you think about it, really obvious) problems with this kind of reporting — and thus, you only ever see it when it comes to pre-IPO firms — but that's beside the point. I give you this explanation because, when contrasting Anthropic's AWS spend with its revenues, I've had to work back from whatever annualized revenues were reported for that month.

Anthropic's 2024 revenues are a little bit of a mystery, but, as mentioned above, The Information says it might be between $400 million and $600 million. Here's its monthly AWS spend.

I'm gonna be nice here and say that Anthropic made $600 million in 2024 — the higher end of The Information's reporting — meaning that it spent around 226% of its revenue on Amazon Web Services ($1.359 billion). [Editor's note: this copy originally had incorrect maths on the %. Fixed now.]

Thanks to my own analysis and reporting from outlets like The Information and Reuters, we have a pretty good idea of Anthropic's revenues for much of the year. That said, July, August, and September get a little weirder, because we're relying on "almosts" and "approachings," as I'll explain as we go. I'm also gonna do an analysis on a month-by-month basis, because it's necessary to evaluate these numbers in context.

In January 2025, Anthropic's reported revenue was somewhere from $875 million to $1 billion annualized, meaning either $72.91 million or $83 million for the month of January. In February, as reported by The Information, Anthropic hit $1.4 billion annualized revenue, or around $116 million each month. In March, as reported by Reuters, Anthropic hit $2 billion in annualized revenue, or $166 million in revenue. Because February is a short month, and the launch took place on February 24 2025, I'm considering the launches of Claude 3.7 Sonnet and Claude Code's research preview to be a cost burden in the month of March. And man, what a burden! Costs increased by $59.1 million, primarily across compute categories, but with a large ($2 million since January) increase in monthly costs for S3 storage.

I estimate, based on a 22.4% compound growth rate, that Anthropic hit around $2.44 billion in annualized revenue in April, or $204 million in revenue. Interestingly, this was the month where Anthropic launched its $100- and $200-a-month "Max" plans, and it doesn't seem to have dramatically increased its costs. Then again, Max is also the gateway to things like Claude Code, which I'll get to shortly.

In May, as reported by CNBC, Anthropic hit $3 billion in annualized revenue, or $250 million in monthly average revenue. This was a big month for Anthropic, with two huge launches on May 22 2025 — its new, "more powerful" models Claude Sonnet and Opus 4, as well as the general availability of its AI coding environment Claude Code.
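To be explicit about the conversion used for every one of these monthly figures, here's a minimal sketch, using the January and full-year-2024 numbers above:

```python
# "Annualized" figures are converted back to monthly figures by dividing by 12.
def monthly_from_annualized(annualized_usd_m: float) -> float:
    return annualized_usd_m / 12

# January 2025: reported range of $875M to $1B annualized.
print(f"${monthly_from_annualized(875):.1f}M to ${monthly_from_annualized(1000):.1f}M per month")

# Full-year 2024: $1.359bn of AWS spend against a generous $600M of revenue.
aws_2024_m, revenue_2024_m = 1359.0, 600.0
print(f"2024 AWS spend as a share of revenue: {aws_2024_m / revenue_2024_m:.1%}")  # ~226%
```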
Eight days later, on May 30 2025, a page on Anthropic's API documentation appeared for the first time: " Service Tiers ": Accessing the priority tier requires you to make an up-front commitment to Anthropic , and said commitment is based on a number of months (1, 3, 6 or 12) and the number of input and output tokens you estimate you will use each minute.  As I’ll get into in my June analysis, Anthropic’s Service Tiers exist specifically for it to “guarantee” your company won’t face rate limits or any other service interruptions, requiring a minimum spend, minimum token throughput, and for you to pay higher rates when writing to the cache — which is, as I’ll explain, a big part of running an AI coding product like Cursor. Now, the jump in costs — $65.1 million or so between April and May — likely comes as a result of the final training for Sonnet and Opus 4, as well as, I imagine, some sort of testing to make sure Claude Code was ready to go. In June, as reported by The Information, Anthropic hit $4 billion in annualized revenue, or $333 million. Anthropic’s revenue spiked by $83 million this month, and so did its costs by $34.7 million.  I have, for a while, talked about the Subprime AI Crisis , where big tech and companies like Anthropic, after offering subsidized pricing to entice in customers, raise the rates on their customers to start covering more of their costs, leading to a cascade where businesses are forced to raise their prices to handle their new, exploding costs. And I was god damn right. Or, at least, it sure looks like I am. I’m hedging, forgive me. I cannot say for certain, but I see a pattern.  It’s likely the June 2025 spike in revenue came from the introduction of service tiers, which specifically target prompt caching, increasing the amount of tokens you’re charged for as an enterprise customer based on the term of the contract, and your forecast usage. Per my reporting in July :  Cursor, as Anthropic’s largest client (the second largest being Github Copilot), represents a material part of its revenue, and its surging popularity meant it was sending more and more revenue Anthropic’s way.  Anysphere, the company that develops Cursor, hit $500 million annualized revenue ($41.6 million) by the end of May , which Anthropic chose to celebrate by increasing its costs. On June 16 2025, Cursor launched a $200-a-month “Ultra” plan , as well as dramatic changes to its $20-a-month Pro pricing that, instead of offering 500 “fast” responses using models from Anthropic and OpenAI, now effectively provided you with “at least” whatever you paid a month (so $20-a-month got at least $20 of credit), massively increasing the costs for users , with one calling the changes a “rug pull” after spending $71 in a single day . As I’ll get to later in the piece, Cursor’s costs exploded from $6.19 million in May 2025 to $12.67 million in June 2025, and I believe this is a direct result of Anthropic’s sudden and aggressive cost increases.  Similarly, Replit, another AI coding startup, moved to “Effort-Based Pricing” on June 18 2025 . I have not got any information around its AWS spend. I’ll get into this a bit later, but I find this whole situation disgusting. In July, as reported by Bloomberg , Anthropic hit $5 billion in annualized revenue, or $416 million. 
While July wasn't a huge month for announcements, it was allegedly the month that Claude Code was generating "nearly $400 million in annualized revenue," or $33.3 million (according to The Information, who says Anthropic was "approaching" $5 billion in annualized revenue - which likely means LESS than that - but I'm going to go with the full $5 billion annualized for the sake of fairness).

There's roughly an $83 million bump in Anthropic's revenue between June and July 2025, and I think Claude Code and its new rates are a big part of it. What's fascinating is that cloud costs didn't increase too much — by only $1.8 million, to be specific.

In August, according to Anthropic, its run-rate "reached over $5 billion," or around $416 million. I am not giving it anything more than $5 billion, especially considering in July Bloomberg's reporting said "about $5 billion." Costs grew by $60.5 million this month, potentially due to the launch of Claude Opus 4.1, Anthropic's more aggressively expensive model, though revenues do not appear to have grown much along the way. Yet what's very interesting is that Anthropic — starting August 28 — launched weekly rate limits on its Claude Pro and Max plans. I wonder why?

Oh fuck! Look at that massive cost explosion!

Anyway, according to Reuters, Anthropic's run rate is "approaching $7 billion" in October, and for the sake of fairness I won't lowball it, though I believe this number to be lower. "Approaching" can mean a lot of different things — $6.1 billion, $6.5 billion — and because I already anticipate a lot of accusations of "FUD," I'm going to err on the side of generosity. If we assume a $6.5 billion annualized rate for September, that would make this month's revenue $541.6 million, with Anthropic's AWS spend coming to 95.8% of it.

Nevertheless, Anthropic's costs exploded in the space of a month by $135.2 million (35%) - likely due to the fact that users, as I reported in mid-July, were costing it thousands or tens of thousands of dollars in compute, a problem it still faces to this day, with VibeRank showing a user currently spending $51,291 in a calendar month on a $200-a-month subscription. If there were other costs, they likely had something to do with the training runs for the launches of Sonnet 4.5 on September 29 2025 and Haiku 4.5 in October 2025.

While these costs only speak to one part of its cloud stack — Anthropic has an unknowable amount of cloud spend on Google Cloud, and the data I have only covers AWS — it is simply remarkable how much this company spends on AWS, and how rapidly its costs seem to escalate as it grows. Though things improved slightly over time — in that Anthropic is no longer burning over 200% of its revenue on AWS alone — these costs have still dramatically escalated, and done so in an aggressive and arbitrary manner.

So, I wanted to visualize this part of the story, because I think it's important to see the various different scenarios. THE NUMBERS I AM USING ARE ESTIMATES CALCULATED BASED ON 25%, 50% and 100% OF THE AMOUNTS THAT ANTHROPIC HAS SPENT ON AMAZON WEB SERVICES THROUGH SEPTEMBER. I apologize for all the noise, I just want it to be crystal clear what you see next.

As you can see, all it takes is for Anthropic to spend (I am estimating) a further 25% of its Amazon Web Services bill on Google Cloud (for a total of around $3.33 billion in compute costs through the end of September) to savage any and all revenue ($2.55 billion) it's making.
Assuming Anthropic spends the equivalent of half its AWS bill on Google Cloud, this number climbs to $3.99 billion, and if you assume - and to be clear, this is an estimate - that it spends around the same on both Google Cloud and AWS, Anthropic has spent $5.3 billion on compute through the end of September. I can't tell you which it is, just that we know for certain that Anthropic is spending money on Google Cloud, and because Google owns 14% of the company — rivalling estimates saying Amazon owns around 15-19% — it's fair to assume that there's a significant spend.

I have sat with these numbers for a great deal of time, and I can't find any evidence that Anthropic has any path to profitability outside of aggressively increasing the prices on its customers to the point that its services will become untenable for consumers and enterprise customers alike. As you can see from these estimated and reported revenues, Anthropic's AWS costs appear to increase in a near-linear fashion with its revenues, meaning that the current pricing — including rent-seeking measures like Priority Service Tiers — isn't working to meet the burden of its costs. We do not know its Google Cloud spend, but I'd be shocked if it was anything less than 50% of its AWS bill. If that's the case, Anthropic is in real trouble - the cost of the services underlying its business increases the more money it makes.

It's becoming increasingly apparent that Large Language Models are not a profitable business. While I cannot speak to Amazon Web Services' actual costs, it's making $2.66 billion from Anthropic, the second-largest foundation model company in the world. Is that really worth $105 billion in capital expenditures? Is that really worth building a giant 1,200-acre data center in Indiana with 2.2GW of electricity? What's the plan, exactly? Let Anthropic burn money for the foreseeable future until it dies, and then pick up the pieces? Wait until Wall Street gets mad at you and then pull the plug? Who knows.

But let's change gears and talk about Cursor — Anthropic's largest client and, at this point, a victim of circumstance. Amazon sells Anthropic's models through Amazon Bedrock, and I believe that AI startups are compelled to spend some of their AI model compute costs through Amazon Web Services. Cursor also sends money directly to Anthropic and OpenAI, meaning that these costs are only one piece of its overall compute costs. In any case, it's very clear that Cursor buys some degree of its Anthropic model spend through Amazon. I'll also add that Tom Dotan of Newcomer reported a few months ago that an investor told him that "Cursor is spending 100% of its revenue on Anthropic."

Unlike with Anthropic, we lack thorough reporting of the month-by-month breakdown of Cursor's revenues. I will, however, mention them in the months where I have them. For the sake of readability — and because we really don't have much information on Cursor's revenues beyond a few months — I'm going to stick to a bullet point list.
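Before moving on to Cursor, here is the scenario arithmetic from above in one place. These are estimates built on the one hard number I have (the AWS spend), not reported Google Cloud figures.

```python
# Scenarios: Google Cloud spend assumed at 25%, 50% and 100% of Anthropic's known
# AWS spend through September ($2.66bn), compared against estimated revenue ($2.55bn).
aws_spend = 2.66    # $bn, through September 2025
est_revenue = 2.55  # $bn, estimated from reported annualized figures

for gcp_share in (0.25, 0.50, 1.00):
    total_compute = aws_spend * (1 + gcp_share)
    print(f"Google Cloud at {gcp_share:.0%} of AWS: ~${total_compute:.2f}bn of compute "
          f"against ~${est_revenue:.2f}bn of revenue")
```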
As discussed above, Cursor announced (along with their price change and $200-a-month plan) several multi-year partnerships with xAI, Anthropic, OpenAI and Google, suggesting that it has direct agreements with Anthropic itself versus one with AWS to guarantee “this volume of compute at a predictable price.”  Based on its spend with AWS, I do not see a strong “minimum” spend that would suggest that they have a similar deal with Amazon — likely because Amazon handles more than its infrastructure than just compute, but incentivizes it to spend on Anthropic’s models through AWS by offering discounts, something I’ve confirmed with a source.  In any case, here’s what Cursor spent on AWS. When I wrote that Anthropic and OpenAI had begun the Subprime AI Crisis back in July, I assumed that the increase in costs was burdensome, but having the information from its AWS bills, it seems that Anthropic’s actions directly caused Cursor’s costs to explode by over 100%.  While I can’t definitively say “this is exactly what did it,” the timelines match up exactly, the costs have never come down, Amazon offers provisioned throughput , and, more than likely, Cursor needs to keep a standard of uptime similar to that of Anthropic’s own direct API access. If this is what happened, it’s deeply shameful.  Cursor, Anthropic’s largest customer , in the very same month it hit $500 million in annualized revenue, immediately had its AWS and Anthropic-related costs explode to the point that it had to dramatically reduce the value of its product just as it hit the apex of its revenue growth.  It’s very difficult to see Service Tiers as anything other than an aggressive rent-seeking maneuver. Yet another undiscussed part of the story is that the launch of Claude 4 Opus and Sonnet — and the subsequent launch of Service Tiers — coincided with the launch of Claude Code , a product that directly competes with Cursor, without the burden of having to pay itself for the cost of models or, indeed, having to deal with its own “Service Tiers.” Anthropic may have increased the prices on its largest client at the time it was launching a competitor, and I believe that this is what awaits any product built on top of OpenAI or Anthropic’s models.  I realize this has been a long, number-stuffed article, but the long-and-short of it is simple: Anthropic is burning all of its revenue on compute, and Anthropic will willingly increase the prices on its customers if it’ll help it burn less money, even though that doesn’t seem to be working. What I believe happened to Cursor will likely happen to every AI-native company, because in a very real sense, Anthropic’s products are a wrapper for its own models, except it only has to pay the (unprofitable) costs of running them on Amazon Web Services and Google Cloud. As a result, both OpenAI and Anthropic can (and may very well!) devour the market of any company that builds on top of their models.  OpenAI may have given Cursor free access to its GPT-5 models in August, but a month later on September 15 2025 it debuted massive upgrades to its competitive “Codex” platform.  Any product built on top of an AI model that shows any kind of success can be cloned immediately by OpenAI and Anthropic, and I believe that we’re going to see multiple price increases on AI-native companies in the next few months. After all, OpenAI already has its own priority processing product, which it launched shortly after Anthropic’s in June . The ultimate problem is that there really are no winners in this situation. 
If Anthropic kills Cursor through aggressive rent-seeking, that directly eats into its own revenues. If Anthropic lets Cursor succeed, that’s revenue , but it’s also clearly unprofitable revenue . Everybody loses, but nobody loses more than Cursor’s (and other AI companies’) customers.  I’ve come away from this piece with a feeling of dread. Anthropic’s costs are out of control, and as things get more desperate, it appears to be lashing out at its customers, both companies like Cursor and Claude Code customers facing weekly rate limits on their more-powerful models who are chided for using a product they pay for. Again, I cannot say for certain, but the spike in costs is clear, and it feels like more than a coincidence to me.  There is no period of time that I can see in the just under two years of data I’ve been party to that suggests that Anthropic has any means of — or any success doing — cost-cutting, and the only thing this company seems capable of doing is increasing the amount of money it burns on a monthly basis.  Based on what I have been party to, the more successful Anthropic becomes, the more its services cost. The cost of inference is clearly increasing for customers , but based on its escalating monthly costs, the cost of inference appears to be high for Anthropic too, though it’s impossible to tell how much of its compute is based on training versus running inference. In any case, these costs seem to increase with the amount of money Anthropic makes, meaning that the current pricing of both subscriptions and API access seems unprofitable, and must increase dramatically — from my calculations, a 100% price increase might work, but good luck retaining every single customer and their customers too! — for this company to ever become sustainable.  I don’t think that people would pay those prices. If anything, I think what we’re seeing in these numbers is a company bleeding out from costs that escalate the more that its user base grows. This is just my opinion, of course.  I’m tired of watching these companies burn billions of dollars to destroy our environment and steal from everybody. I’m tired that so many people have tried to pretend there’s a justification for burning billions of dollars every year, clinging to empty tropes about how this is just like Uber or Amazon Web Services , when Anthropic has built something far more mediocre.  Mr. Amodei, I am sure you will read this piece, and I can make time to chat in person on my show Better Offline. Perhaps this Friday? I even have some studio time on the books.  I do not have all the answers! I am going to do my best to go through the information I’ve obtained and give you a thorough review and analysis. This information provides a revealing — though incomplete — insight into the costs of running Anthropic and Cursor, but does not include other costs, like salaries and compute obtained from other providers. I cannot tell you (and do not have insight into) Anthropic’s actual private moves. Any conclusions or speculation I make in this article will be based on my interpretations of the information I’ve received, as well as other publicly-available information. I have used estimates of Anthropic’s revenue based on reporting across the last ten months. Any estimates I make are detailed and they are brief.  These costs are inclusive of every product bought on Amazon Web Services, including EC2, storage and database services (as well as literally everything else they pay for). 
Anthropic works with both Amazon Web Services and Google Cloud for compute. I do not have any information about its Google Cloud spend. The reason I bring this up is that Anthropic's revenue is already being eaten up by its AWS spend. It's likely billions more in the hole from Google Cloud and other operational expenses.

I have confirmed with sources that every single number I give around Anthropic and Cursor's AWS spend is the final cash paid to Amazon after any discounts or credits. While I cannot disclose the identity of my source, I am 100% confident in these numbers, and have verified their veracity with other sources.

Where is the rest of Anthropic's money going? How will it "stop burning cash" when its operational costs explode as its revenue increases?

Anthropic's monthly AWS spend in 2024:
January 2024 - $52.9 million
February 2024 - $60.9 million
March 2024 - $74.3 million
April 2024 - $101.1 million
May 2024 - $100.1 million
June 2024 - $101.8 million
July 2024 - $118.9 million
August 2024 - $128.8 million
September 2024 - $127.8 million
October 2024 - $169.6 million
November 2024 - $146.5 million
December 2024 - $176.1 million

Cursor's monthly AWS spend in 2025:
January 2025 - $1.459 million. This, apparently, is the month that Cursor hit $100 million annualized revenue — or $8.3 million, meaning it spent 17.5% of its revenue on AWS.
February 2025 - $2.47 million
March 2025 - $4.39 million
April 2025 - $4.74 million. Cursor hit $200 million annualized ($16.6 million) at the end of March 2025, according to The Information, working out to spending 28% of its revenue on AWS.
May 2025 - $6.19 million
June 2025 - $12.67 million. So, Bloomberg reported that Cursor hit $500 million on June 5 2025, along with raising a $900 million funding round. Great news! Turns out it'd need to start handing a lot of that to Anthropic. This was, as I've discussed above, the month when Anthropic forced it to adopt "Service Tiers". I go into detail about the situation here, but the long and short of it is that Anthropic increased the amount of tokens you burned by writing stuff to the cache (think of it like RAM in a computer), and AI coding startups are very cache heavy, meaning that Cursor immediately took on what I believed would be massive new costs. As I discuss in what I just linked, this led Cursor to aggressively change its product, thereby vastly increasing its customers' costs if they wanted to use the same service. That same month, Cursor's AWS costs — which I believe are the minority of its cloud compute costs — exploded by 104% (or by $6.48 million), and never returned to their previous levels. It's conceivable that this surge is due to the compute-heavy nature of the latest Claude 4 models released that month — or, perhaps, Cursor sending more of its users to other models that it runs on Bedrock.
July 2025 - $15.5 million. As you can see, Cursor's costs continued to balloon in July, and I am guessing it's because of the Service Tiers situation — which, I believe, indirectly resulted in Cursor pushing more users to models that it runs on Amazon's infrastructure.
August 2025 - $9.67 million. So, I can only guess as to why there was a drop here. User churn? It could be the launch of GPT-5 on Cursor, which gave users a week of free access to OpenAI's new models.
What’s also interesting is that this was the month when Cursor announced that its previously free “auto” model (where Cursor would select the best available premium model or its own model) would now bill at “ competitive token rates ,” by which I mean it went from charging nothing to $1.25 per million input and $6 per million output tokens. This change would take effect on September 15 2025. On August 10 2025 , Tom Dotan of Newcomer reported that Cursor was “well above” $500 million in annualized revenue based on commentary from two sources. September 2025 - $12.91 million Per the above, this is the month when Cursor started charging for its “auto” model.
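For completeness, here are Cursor's monthly AWS bills from the list above with the month-over-month change worked out, including the June jump discussed throughout this piece:

```python
# Cursor's monthly AWS spend in 2025, in $M, as listed above.
cursor_aws = {
    "Jan": 1.459, "Feb": 2.47, "Mar": 4.39, "Apr": 4.74, "May": 6.19,
    "Jun": 12.67, "Jul": 15.5, "Aug": 9.67, "Sep": 12.91,
}

months = list(cursor_aws)
for prev, cur in zip(months, months[1:]):
    change = cursor_aws[cur] - cursor_aws[prev]
    pct = change / cursor_aws[prev]
    print(f"{prev} -> {cur}: {change:+.2f}M ({pct:+.1%})")
# The May -> June line is the roughly $6.48M, ~104% jump that coincides with Service Tiers.
```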


OpenAI Needs $400 Billion In The Next 12 Months

Hello readers! This premium edition features a generous free intro because I like to try and get some of the info out there, but the real indepth stuff is below the cut. Nevertheless, I deeply appreciate anyone subscribing. On Monday I will have my biggest scoop ever, and it'll go out on the free newsletter because of its scale. This is possible because of people supporting me on the premium. Thanks so much for reading. One of the only consistent critiques of my work is that I’m angry, irate, that I am taking myself too seriously, that I’m swearing too much, and that my arguments would be “better received” if I “calmed down.” Look at where being timid or deferential has got us. Broadcom and OpenAI have announced another 10GW of custom chips and supposed capacity which will supposedly get fully deployed by the end of 2029, and still the media neutrally reports these things as not simply doable, but rational. To be clear, building a gigawatt of data center capacity costs at least $32.5 billion (though Jensen Huang says the computing hardware alone costs $50 billion , which excludes the buildings themselves and the supporting power infrastructure, and Barclays Bank says $50 billion to $60 billion ) and takes two and a half years.  OpenAI has now promised 33GW of capacity across AMD , NVIDIA , Broadcom and the seven data centers built under Stargate , though one of those — in Lordstown, Ohio — is not actually a data center, with my source being “SoftBank,” speaking to WKBN in Lordstown Ohio , which said it will “not be a full-blown data center,” and instead be “at the center of cutting-edge technology that will encompass storage containers that will hold the infrastructure for AI and data storage.” This wasn’t hard to find, by the way! I googled “SoftBank Lordstown” and up it came, ready for me to read with my eyes. Putting all of that aside, I think it’s time that everybody started taking this situation far more seriously, by which I mean acknowledging the sheer recklessness and naked market manipulation taking place.  But let’s make it really simple, and write out what’s meant to happen in the next year: In my most conservative estimate, these data centers will cost over $100 billion, and to be clear, a lot of that money needs to already be in OpenAI’s hands to get the data centers built. Or, some other dupe has to a.) have the money, and b.) be willing to front it.  All of this is a fucking joke. I’m sorry, I know some of you will read this, cowering from your screen like a B-movie vampire that just saw a crucifix, but it is a joke, and it is a fucking stupid joke, the only thing stupider being that any number of respectable media outlets are saying these things like they’ll actually happen. There is not enough time to build these things. If there was enough time, there wouldn’t be enough money. If there was enough money, there wouldn’t be enough transformers, electrical-grade steel, or specialised talent to run the power to the data centers. Fuck! Piss! Shit! Swearing doesn’t change the fact that I’m right — none of what OpenAI, NVIDIA, Broadcom, and AMD are saying is possible, and it’s fair to ask why they’re saying it. I mean, we know. Number must go up, deal must go through, and Jensen Huang wouldn’t go on CNBC and say “yeah man if I’m honest I’ve got no fucking clue how Sam Altman is going to pay me, other than with the $10 billion I’m handing him in a month. 
Anyway, NVIDIA's accounts receivable keep increasing every quarter for a normal reason, don't worry about it."

But in all seriousness, we now have three publicly-traded tech firms that have all agreed to join Sam Altman's No IT Loads Refused Cash Dump, all promising to do things on insane timelines that they — as executives of giant hardware manufacturers, or human beings with warm bodies and pulses and sciatica — must know are impossible to meet.

What is the media meant to do? What are we, as regular people, meant to do? These stocks keep pumping based on completely nonsensical ideas, and we're all meant to sit around pretending things are normal and good. They're not! At some point somebody's going to start paying people actual, real dollars at a scale that OpenAI has never truly had to reckon with. In this piece, I'm going to spell out in no uncertain terms exactly what OpenAI has to do in the next year to fulfil its destiny — having a bunch of capacity that cost ungodly amounts of money to serve demand that never arrives.

Yes, yes, I know, you're going to tell me that OpenAI has 800 million weekly active users, and putting aside the fact that OpenAI's own research (see page 10, footnote 20) says it double-counts users who are logged out if they use different devices, OpenAI is saying it wants to build 250 gigawatts of capacity by 2033, which will cost it $10 trillion, or one-third of the entire US economy last year. Who the fuck for?

One thing that's important to note: In February, Goldman Sachs estimated that the global data center capacity was around 55GW. In essence, OpenAI says it wants to add nearly five times that capacity — something that has grown organically over the past thirty or so years — by itself, and in eight years. And yes, it'll cost one-third of America's output in 2024. This is not a sensible proposition.

Even if you think that OpenAI's growth is impressive — it went from 700 million to 800 million weekly active users in the last two months — that is not the kind of growth that says "build capacity assuming that literally every single human being on Earth uses this all the time."

Anyway, what exactly is OpenAI doing? Why does it need all this capacity? Even if it hits its $13 billion revenue projection for this year (it's only at $5.3 billion or so as of the end of August, and for OpenAI to hit its targets it'll need to make $1.5bn+ a month very soon), does it really think it's going to effectively 10x the entire company from here? What possible sign is there of that happening other than a conga-line of different executives willing to stake their reputations on blatant lies peddled by a man best known for needing, at any given moment, another billion dollars?

According to The Information, OpenAI spent $6.7 billion on research and development in the first six months of 2025, and according to Epoch AI, most of the $5 billion it spent on research and development in 2024 was spent on research, experimental, or derisking runs (basically running tests before doing the final training run) and models it would never release, with only $480 million going to training actual models that people will use.

I should also add that GPT 4.5 was a dud, and even Altman called it giant, expensive, and said it "wouldn't crush benchmarks." I'm sorry, but what exactly is it that OpenAI has released in the last year-and-a-half that was worth burning $11.7 billion for? GPT 5? That was a huge letdown! Sora 2?
The giant plagiarism machine that it’s already had to neuter ? What is it that any of you believe that OpenAI is going to do with these fictional data centers?  The problem with ChatGPT isn’t just that it hallucinates — it’s that you can’t really say exactly what it can do, because you can’t really trust that it can do anything. Sure, it’ll get a few things right a lot of the time, but what task is it able to do every time that you actually need?  Say the answer is “something that took me an hour now takes me five minutes.” Cool! How many of those do you get? Again, OpenAI wants to build 250 gigawatts of data centers, and will need around ten trillion dollars to do it . “It’s going to be really good” is no longer enough. And no, I’m sorry, they are not building AGI. He just told Politico a few weeks ago that if we didn’t have “models that are extraordinarily capable and do things that we ourselves cannot do” by 2030 he would be “very surprised.”  Wow! What a stunning and confident statement. Let’s give this guy the ten trillion dollars he needs! And he’s gonna need it soon if he wants to build 250 gigawatts of capacity by 2033. But let’s get a little more specific. Based on my calculations, in the next six months, OpenAI needs at least $50 billion to build a gigawatt of data centers for Broadcom — and to hit its goal of 10 gigawatts of data centers by end of 2029, at least another $200 billion in the next 12 months, not including at least $50 billion to build a gigawatt of data centers for NVIDIA , $40 billion to pay for its 2026 compute , at least $50 billion to buy chips and build a gigawatt of data centers for AMD , at least $500 million to build its consumer device ( and they can’t seem to work out what to build ), and at least a billion dollars to hand off to ARM for a CPU to go with the new chips from Broadcom . That’s $391.5 billion dollars! That’s $23.5 billion more than the $368 billion of global venture capital raised in 2024 ! That’s nearly 11 times Uber’s total ( $35.8 billion ) lifetime funding, or 5.7 times the $67.6 billion in capital expenditures that Amazon spent building Amazon Web Services !  On top of all of this are OpenAI’s other costs. According to The Information , OpenAI spent $2 billion alone on Sales and Marketing in the first half of 2025, and likely spends billions of dollars on salaries, meaning that it’ll likely need at least another $10 billion on top. As this is a vague cost, I’m going with a rounded $400 billion number, though I believe it’s actually going to be more. And to be clear, to complete these deals by the end of 2026, OpenAI needs large swaths of this money by February 2026.   I know, I know, you’re going to say that OpenAI will simply “raise debt” and “work it out,” but OpenAI has less than a year to do that, because OpenAI has promised in its own announcements that all of these things would happen by the end of December 2026, and even if they’re going to happen in 2027, data centers require actual money to begin construction, and Broadcom, NVIDIA and AMD are going to actually require cash for those chips before they ship them. Even if OpenAI finds multiple consortiums of paypigs to take on the tens of billions of dollars of data center funding, there are limits, and based on OpenAI’s aggressive (and insane) timelines, they will need to raise multiple different versions of the largest known data center deals of all time, multiple times a year, every single year.  Say that happens. 
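For reference, here are the line items behind that $391.5 billion figure, added up. The amounts are the ones cited above; nothing new is introduced.

```python
# OpenAI's estimated cash needs over the next year, in $bn, per the figures above.
line_items = {
    "Broadcom: first 1GW of data centers": 50.0,
    "Further build toward 10GW by end of 2029": 200.0,
    "NVIDIA: 1GW of data centers": 50.0,
    "2026 compute contracts": 40.0,
    "AMD: chips plus 1GW of data centers": 50.0,
    "Consumer device": 0.5,
    "ARM CPU": 1.0,
}

total = sum(line_items.values())
print(f"Total: ${total:.1f}bn")                                       # $391.5bn
print(f"Versus 2024 global VC (~$368bn): ${total - 368:.1f}bn more")   # ~$23.5bn
```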
OpenAI will still need to pay those compute contracts with Oracle, CoreWeave, Microsoft (I believe its Azure credits have run out) and Google (via CoreWeave) with actual, real cash — $40 billion worth — when it's already burning $9.2 billion in the first half of 2025 on compute against revenues of $4.3 billion. OpenAI will still need to pay its staff, its storage, its sales and marketing department that cost it $2 billion in the first half of 2025, all while converting its non-profit into a for-profit by the end of the year, or it loses $20 billion in funding from SoftBank. Also, if it doesn't convert to a for-profit by October 2026, its $6.6 billion funding round from 2024 converts to debt.

The burden that OpenAI is putting on the financial system is remarkable, and actively dangerous. It would absorb, at this rate, the capital expenditures of multiple hyperscalers, requiring multiple $30 billion debt financing operations a year, and for it to hit its goal of 250 gigawatts by the end of 2033, it will likely have to outpace the capital expenditures of any other company in the world.

OpenAI is an out-of-control monstrosity that is going to harm every party that depends upon it completing its plans. For it to succeed, it will have to absorb over a trillion dollars a year — and for it to hit its target, it will likely have to eclipse the $1.7 trillion in global private equity deal volume in 2024, and become a significant part of global trade ($33 trillion in 2025). There isn't enough money to do this without diverting most of the money that exists to doing it, and even if that were to happen, there isn't enough time to do any of the stuff that has been promised in anything approaching the timelines promised, because OpenAI is making this up as it goes along and somehow everybody is believing it.

At some point, OpenAI is going to have to actually do the things it has promised to do, and the global financial system is incapable of supporting them. And to be clear, OpenAI cannot really do any of the things it's promised. Just take a look at the Oracle deal! None of this bullshit is happening, and it's time to be honest about what's actually going on. OpenAI is not building "the AI industry," as this is capacity for one company that burns billions of dollars and has absolutely no path to profitability.

This is a giant, selfish waste of money and time, one that will collapse the second that somebody's confidence wavers. I realize that it's tempting to write "Sam Altman is building a giant data center empire," but what Sam Altman is actually doing is lying. He is lying to everybody. He is saying that he will build 250GW of data centers in the space of eight years, an impossible feat, requiring more money than anybody would ever give him in volumes and intervals that are impossible for anybody to raise.

Sam Altman's singular talent is finding people willing to believe his shit or join him in an economy-supporting confidence game, and the recklessness of continuing to do so will only harm retail investors — regular people beguiled by the bullshit machine and bullshit masters making billions promising they'll make trillions. To prove it, I'm going to write down everything that will need to take place in the next twelve months for this to happen, and illustrate the timelines of everything involved.

In the second half of 2026, OpenAI and Broadcom will tape out and successfully complete an AI inference chip, then manufacture enough of them to fill a 1GW data center.
That data center will be built in an as-yet-unknown location, and will have at least 1GW of power, but more realistically it will need 1.2GW to 1.3GW of power, because for every 1GW of IT load, you need extra power capacity in reserve for the hottest day of the year, when the cooling system works hardest and power transmission losses are highest. OpenAI does not appear to have a site for this data center, and thus has not broken ground on it.

In the second half of 2026, AMD and OpenAI will begin "the first 1 gigawatt deployment of AMD Instinct MI450 GPUs." This will take place in an as-yet-unnamed data center location, which, to be completed by that time, would have needed to start construction and early procurement of power at least a year ago, if not more.

In the second half of 2026, OpenAI and NVIDIA will deploy the first gigawatt of NVIDIA's Vera Rubin GPU systems as part of their $100 billion deal. These GPUs will be deployed in a data center of some sort, which remains unnamed, but for them to meet this timeline they will need to have started construction at least a year ago.

Oracle needs 4.5 gigawatts of IT load capacity to provide OpenAI the compute for its $300 billion, five-year-long deal. Despite Oracle co-CEO Clay Magouyrk saying "of course OpenAI can pay $60 billion a year," OpenAI cannot actually afford to pay $60 billion a year. It's on course to lose  Even if it could, Oracle needs 4.5GW of capacity. Stargate Abilene is meant to be completed by the end of 2026 (six months behind schedule), but (as I reported last week) only appears to have 200MW of the 1.5+GW of actual, real power it needs right now, and won't have enough by the end of the year. Even if Abilene was completed on time, Oracle only has one other data center location planned — a 1.4GW data center plot in Shackelford, Texas that has only just begun construction, and will only have a single building by the second half of 2026.
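As a closing sanity check on the scale involved, here is what the per-gigawatt construction costs cited earlier imply for the capacity OpenAI has promised. These are rough estimates, not quotes from any deal.

```python
# Per-gigawatt build costs cited in this piece: at least $32.5bn, with estimates
# running to $50-60bn once hardware is included. Capacity figures are OpenAI's promises.
cost_per_gw_low, cost_per_gw_high = 32.5, 60.0   # $bn per GW

for label, gigawatts in [("33GW promised across AMD, NVIDIA, Broadcom and Stargate", 33),
                         ("250GW targeted by 2033", 250)]:
    low, high = gigawatts * cost_per_gw_low, gigawatts * cost_per_gw_high
    print(f"{label}: ~${low / 1000:.1f}T to ~${high / 1000:.1f}T")
```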


The AI Bubble's Impossible Promises

Readers: I've done a very generous "free" portion of this newsletter, but I do recommend paying for premium to get the in-depth analysis underpinning the intro. That being said, I want as many people as possible to get the general feel for this piece. Things are insane, and it's time to be realistic about what the future actually looks like. We're in a bubble. Everybody says we're in a bubble. You can't say we're not in a bubble anymore without sounding insane, because everybody is now talking about how OpenAI has promised everybody $1 trillion — something you could have read about two weeks ago on my premium newsletter. Yet we live in a chaotic, insane world, where we can watch the news and hear hand-wringing over the fact that we're in a bubble, read article after CEO after article after CEO after analyst after investor saying we're in a bubble, yet the market continues to rip ever-upward on increasingly more-insane ideas, in part thanks to analysts that continue to ignore the very signs that they're relied upon to read. AMD and OpenAI signed a very strange deal where AMD will give OpenAI the chance to buy up to 160 million shares at a cent apiece, in tranches of indeterminate size, one for every gigawatt of data centers OpenAI builds using AMD's chips, with OpenAI agreeing to buy "six gigawatts of GPUs." This is a peculiar way to measure GPUs, which are traditionally counted in units and priced per GPU, but nevertheless, these chips are going to be a mixture of AMD's Instinct MI450 GPUs — which we don't know the specs of! — and its current-generation MI350 GPUs, making the actual scale of these purchases a little difficult to grasp, though the Wall Street Journal says it would "result in tens of billions of dollars in new revenue" for AMD. This AMD deal is weird, but one that's rigged in favour of Lisa Su and AMD. OpenAI doesn't get a dollar at any point - it has to work out how to buy those GPUs and figure out how to build six further gigawatts of data centers on top of the 10GW of data centers it promised to build for NVIDIA and the seven-to-ten gigawatts that are allegedly being built for Stargate, bringing it to a total of somewhere between 23 and 26 gigawatts of data center capacity. Hell, while we're on the subject, has anyone thought about how difficult and expensive it is to build a data center? Everybody is very casual with how they talk about Sam Altman's theoretical promises of trillions of dollars of data center infrastructure, and I'm not sure anybody realizes how difficult even the very basics of this plan will be. Nevertheless, everybody is happily publishing stories about how Stargate Abilene, Texas — OpenAI's massive data center with Oracle — is "open," by which they mean two buildings, and I'm not even confident both of them are providing compute to OpenAI yet. There are six more of them that need to get built for this thing to start rocking at 1.2GW — even though it's only 1.1GW according to my sources in Abilene. But, hey, sorry — one minute — while we're on that subject, did anybody visiting Abilene in the last week or so ever ask whether they'll have enough power there? Don't worry, you don't need to look.
I'm sure you were just about to, but I did the hard work for you and read up on it, and it turns out that Stargate Abilene only has 200MW of power — a 200MW substation that, according to my sources, has only been built within the last couple of months, with 350MW of gas turbine generators that connect to a natural gas power plant that might get built by the end of the year. Said turbines are extremely expensive, feature volatile pricing (for context, natural gas price volatility fell in Q2 2025…to 69% annualized) and even more volatile environmental consequences, and are, while permitted for it (this will download the PDF of the permit), impractical and expensive to use long-term. Analyst James van Geelen, founder of Citrini Research, recently said on Bloomberg's Odd Lots podcast that these are "not the really good natural gas turbines" because the really good ones would take seven years to deliver due to a natural gas turbine shortage. But they're going to have to do. According to sources in Abilene, developer Lancium has only recently broken ground on the 1GW substation and five transformers OpenAI's going to need to build out there, and based on my conversations with numerous analysts and researchers, it does not appear that Stargate Abilene will have sufficient power before the year 2027. Then there's the question of whether 1GW of power actually gets you 1GW of compute. This is something you never see addressed in the coverage of OpenAI's various construction commitments, but it's an important point to make. Analyst Daniel Bizo, Research Director at the Uptime Institute, explained that 1 gigawatt of power is only sufficient to power (roughly) 700 megawatts of data center capacity. We'll get into the finer details of that later in this newsletter, but if we assume that ratio is accurate, we're left with a troubling problem. That figure represents a 1.43 PUE — Power Usage Effectiveness — and if we apply that to Stargate Abilene, we see that it needs at least 1.7GW of power, and currently only has 200MW. Stargate Abilene does not have sufficient power to run at even half of its supposed IT load of 1.2GW, and at its present capacity — assuming that the gas turbines function at full power — can only hope to run 370MW to 460MW of IT load. I've seen article after article about the gas turbines and their use of fracked gas — a disgusting and wasteful act typical of OpenAI — but nobody appears to have asked "how much power does a 1.2GW data center require?" and then chased it with "how much power does Stargate Abilene have?" The answer is not enough, and the significance of said "not enough" is remarkable. Today, I'm going to tell you, at length, how impossible the future of generative AI is. Gigawatt data centers are a ridiculous pipe dream, one that runs face-first into the walls of reality. The world's governments and media have been far too cavalier with the term "gigawatt," casually breezing by the fact that Altman's plans require 17 or more nuclear reactors' worth of power, as if building power is quick and easy and cheap and just happens. I believe that many of you think that this is an issue of permitting — of simply throwing enough money at the problem — when we are in the midst of a shortage in the electrical-grade steel and transformers required to expand America's (and the world's) power grid.
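To make that ratio concrete, here's a minimal sketch of the arithmetic in Python, using the roughly 1.43 PUE figure Bizo describes and the Abilene capacity numbers above; the function names and the code are mine, purely for illustration, and the only inputs are figures already cited in this piece.

```python
# A minimal sketch of the PUE arithmetic. The ~1.43 PUE and the Abilene figures
# (1.2GW target IT load, 200MW substation, 350MW of gas turbines) come from the
# reporting cited above; everything else is simple division.

def grid_power_needed(it_load_mw: float, pue: float = 1.43) -> float:
    """Grid power required to support a given IT load at a given PUE."""
    return it_load_mw * pue

def it_load_supported(grid_power_mw: float, pue: float = 1.43) -> float:
    """IT load a given amount of grid power can actually support."""
    return grid_power_mw / pue

target_it_load_mw = 1200           # Stargate Abilene's supposed IT load
available_power_mw = 200 + 350     # substation plus gas turbines, per my sources

print(grid_power_needed(target_it_load_mw))    # ~1716 MW, i.e. the "at least 1.7GW"
print(it_load_supported(available_power_mw))   # ~385 MW, inside the 370MW-460MW range
```

In other words, even with every turbine running flat out, the site can support roughly a third of its supposed IT load.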
I realize it's easy to get blinded by the constant drumbeat of "gargoyle-like tycoon cabal builds 1GW data center" and feel that they will simply overwhelm the problem with money, but no, I'm afraid that isn't the case at all, and all of this is so silly, so ridiculous, so cartoonishly bad that it threatens even the seemingly-infinite wealth of Elon Musk, with xAI burning over a billion dollars a month and planning to spend tens of billions of dollars building the Colossus 2 data center, dragging two billion dollars from SpaceX in his desperate quest to burn as much money as possible for no reason. This is the age of hubris — a time in which we are going to watch stupid, powerful and rich men fuck up their legacies by finding a technology so vulgar in its costs and mythical outcomes that it drives the avaricious insane and makes fools of them. Or perhaps this is what happens when somebody believes they've found the ultimate con — the ability to become both the customer and the business, which is exactly what NVIDIA is doing to fund the chips behind Colossus 2. According to Bloomberg, NVIDIA is creating a company — a "special purpose vehicle" — that it will invest $2 billion in, along with several other backers. Once that's done, the special purpose vehicle will then use that equity to raise debt from banks, buy GPUs from NVIDIA, and then rent those GPUs to Elon Musk for five years. Hell, why make it so complex? NVIDIA invested money in a company specifically built to buy chips from it, which then promptly handed the money back to NVIDIA along with a bunch of other money, and then whatever happened next is somebody else's problem. Actually, wait — how long do GPUs last, exactly? Four years for training? Three years? The A100 GPU started shipping in May 2020, and the H100 (and the Hopper GPU generation) entered full production in September 2022, meaning that we're hurtling at speed toward the time in which we're going to see a remarkable number of chips start wearing down, which should be a concern for companies like Microsoft, which bought 150,000 Hopper GPUs in 2023 and 485,000 of them in 2024. Alright, let me just be blunt: the entire economy of debt around GPUs is insane. Assuming these things don't die within five years (their warranties generally end in three), their value absolutely will, as NVIDIA has committed to releasing a new AI chip every single year, likely with significant increases to power and power efficiency. At the end of the five-year period, the Special Purpose Vehicle will be the proud owner of five-year-old chips that nobody is going to want to rent at the price that Elon Musk has been paying for the last five years. Don't believe me? Take a look at the rental prices for H100 GPUs that went from $8-an-hour in 2023 to $2-an-hour in 2024, or the Silicon Data Indexes (aggregated realtime indexes of hourly prices) that show H100 rentals at around $2.14-an-hour and A100 rentals at a dollar-an-hour, with Vast.AI offering them at as little as $0.67 an hour. This is, by the way, a problem that faces literally every data center being built in the world, and I feel insane talking about it. It feels like nobody is talking about how impossible and ridiculous all of this is. It's one thing that OpenAI has promised one trillion dollars to people — it's another that large swaths of that will be spent on hardware that will, by the end of these agreements, be half-obsolete and generating less revenue than ever. Think about it.
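To put rough numbers on what that price collapse does to anyone holding GPUs as collateral, here's a back-of-the-envelope sketch using the hourly H100 rates cited above. The 70% utilization figure is my own assumption for illustration; nothing here comes from NVIDIA, the special purpose vehicle, or anyone's actual books.

```python
# Back-of-the-envelope: what the H100 rental price collapse means for per-GPU
# revenue. Hourly rates are the ones cited above; the 70% utilization is an
# assumption for illustration only.

HOURS_PER_YEAR = 24 * 365
UTILIZATION = 0.70  # assumed, not sourced

def annual_rental_revenue(hourly_rate: float, utilization: float = UTILIZATION) -> float:
    """Yearly rental revenue for a single GPU at a given hourly rate."""
    return hourly_rate * HOURS_PER_YEAR * utilization

rev_2023 = annual_rental_revenue(8.00)   # ~$49,000 per GPU per year at $8-an-hour
rev_2024 = annual_rental_revenue(2.14)   # ~$13,100 per GPU per year at $2.14-an-hour

print(rev_2023, rev_2024, rev_2024 / rev_2023)  # per-GPU revenue falls by roughly 73%
```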
Let's assume we live in a fantasy land where OpenAI is somehow able to pay Oracle $300 billion over 5 years — which, although the costs will almost certainly grow over time, and some of the payments are front-loaded, averages out to $5bn each month, a truly insane number that's in excess of what Netflix makes in revenue in a month. Said money is paying for access to Blackwell GPUs, which will, by then, be at least two generations behind, with NVIDIA's Vera Rubin GPUs due next year. What happens to that GPU infrastructure? Why would OpenAI continue to pay the same rental rate for five-year-old Blackwell GPUs? All of these ludicrous investments are going into building data centers full of what will, at that point, be old tech. Let me put it in simple terms: imagine you, for some reason, rented an M1 Mac when it was released in 2020, and your rental was done in 2025, when we're onto the M4 series. Would you expect somebody to rent it at the same price? Or would they say "hey, wait a minute, for that price I could rent one of the newer generation ones." And you'd be bloody right! Now, I realize that $70,000 data center GPUs are a little different to laptops, but that only makes their decline in value more profound, especially considering the billions of dollars of infrastructure built around them. And that's the problem. Private equity firms are sinking $50 billion or more a quarter into theoretical data center projects full of what will be years-old GPU technology, despite the fact that there's no real demand for generative AI compute, and that's before you get to the grimmest fact of all: that even if you can build these data centers, it will take years and billions of dollars to deliver the power, if it's even possible to do so. Harvard economist Jason Furman estimates that data centers and software accounted for 92% of GDP growth in the first half of this year, in line with my conversation with economist Paul Kedrosky from a few months ago. All of this money is being sunk into infrastructure for an "AI revolution" that doesn't exist, as every single AI company is unprofitable, with pathetic revenues ($61 billion or so if you include CoreWeave and Lambda, both of which are being handed money by NVIDIA), impossible-to-control costs that have only ever increased, and no ability to replace labor at scale (and especially not software engineers). OpenAI needs more than a trillion dollars to pay its massive cloud compute bills and build 27 gigawatts of data centers, and to get there, it needs to start making incredible amounts of money, a job that's been mostly handed to Fidji Simo, OpenAI's new CEO of Applications, who is solely responsible for turning a company that loses billions of dollars into one that makes $200 billion in 2030 with $38 billion in profit. She's been set up to fail, and I'm going to explain why. In fact, today I'm going to explain to you how impossible all of this is — not just expensive, not just silly, but actively impossible within any of the timelines set. Stargate will not have the power it needs before the middle of 2026 — the beginning of Oracle's fiscal year 2027, when OpenAI has to pay it $30 billion for compute or, according to The Information, can choose to walk away if the capacity isn't complete.
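For what it's worth, the averaging arithmetic on that Oracle commitment is simple enough to sketch out, using the reported $300 billion over five years and the $4.3 billion first-half 2025 revenue figure that appears elsewhere in these pieces; the flat split is a simplification of a payment schedule that reportedly isn't flat at all.

```python
# The Oracle deal, averaged out. $300 billion over five years is the reported
# commitment; $4.3 billion is OpenAI's reported H1 2025 revenue. A flat split
# is a simplification, since the real schedule ramps over time.

oracle_total = 300e9
deal_years = 5

monthly_average = oracle_total / (deal_years * 12)   # $5 billion a month
annual_average = oracle_total / deal_years           # $60 billion a year

openai_h1_2025_revenue = 4.3e9
annualized_revenue = openai_h1_2025_revenue * 2      # crude full-year run rate

print(f"${monthly_average / 1e9:.1f}B a month, ${annual_average / 1e9:.0f}B a year")
print(f"{annual_average / annualized_revenue:.1f}x OpenAI's 2025 revenue run rate")
```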
And based on my research, analysis and discussions with power and data center analysts, gigawatt data centers are, by and large, a pipe dream, with their associated power infrastructure taking two to four years to build, and that's if everything goes smoothly. OpenAI cannot build a gigawatt of data centers for AMD by the "second half of 2026." It hasn't even announced the financing, let alone where the data center might be, and until it does that, it's impossible to plan the power, which in and of itself takes months before you even start building. Every promise you're reading in the news is impossible. Nobody has even built a gigawatt data center, and more than likely nobody ever will. Stargate Abilene isn't going to be ready in 2026, won't have sufficient power until at best 2027, and based on the conversations I've had, it's very unlikely it will build that gigawatt substation before the year 2028. In fact, let me put it a little simpler: all of those data center deals you've seen announced are basically bullshit. Even if they get the permits and the money, there are massive physical challenges that cannot be resolved by simply throwing money at them. Today I'm going to tell you a story of chaos, hubris and fantastical thinking. I want you to come away from this with a full picture of how ridiculous the promises are, and that's before you get to the cold hard reality that AI fucking sucks.


OpenAI Is Just Another Boring, Desperate AI Startup

What is OpenAI? I realize you might say "a foundation model lab" or "the company that runs ChatGPT," but that doesn't really give the full picture of everything it's promised, or claimed, or leaked that it was or would be. No, really, if you believe its leaks to the press: OpenAI is a social media company, this week launching Sora 2, a social feed entirely made up of generative video. OpenAI is a workplace productivity company, allegedly working on its own productivity suite to compete with Microsoft. OpenAI is a jobs portal, announcing in September it was "developing an AI-powered hiring platform," which it will launch by mid-2026. OpenAI is an ads company, and is apparently trying to hire an ads chief, with the (alleged) intent to start showing ads in ChatGPT "by 2026." OpenAI is a company that would sell AI compute like Microsoft Azure or Amazon Web Services, or at least is considering being one, with CFO Sarah Friar telling Bloomberg in August that it is not "actively looking" at such an effort today but will "think about it as a business down the line, for sure." OpenAI is a fabless semiconductor design company, launching its own AI chips in, again, 2026 with Broadcom, but only for internal use. OpenAI is a consumer hardware company, preparing to launch a device by the end of 2026 or early 2027 and hiring a bunch of Apple people to work on it, as well as considering — again, it's just leaking random stuff at this point to pump up its value — a smart speaker, a voice recorder and AR glasses. OpenAI is also working on its own browser, I guess. To be clear, many of these are ideas that OpenAI has leaked specifically so the media can continue to pump up its valuation and continue to raise the money it needs — at least $1 trillion over the next four or five years, and I don't believe the theoretical (or actual) costs of many of the things I've listed are included. OpenAI wants you to believe it is everything, because in reality it's a company bereft of strategy, focus or vision. The GPT-5 upgrade for ChatGPT was a dud — an industry-wide embarrassment for arguably the most-hyped product in AI history, one that (as I revealed a few months ago) costs more to operate than its predecessor, not because of any inherent capability upgrade, but because of how it actually processes the prompts its user provides — and now it's unclear what it is that this company does. Does it make hardware? Software? Ads? Is it going to lease you GPUs to use for your own AI projects? Is it going to certify you as an AI expert? Notice how I've listed a whole bunch of stuff that isn't ChatGPT, which will, if you look at The Information's reporting of its projections, remain the vast majority of its revenue until 2027, at which point "agents" and "new products including free user monetization" will magically kick in. In reality, OpenAI is an extremely boring (and bad!) software business. It makes the majority of its revenue selling subscriptions to ChatGPT, and apparently had 20 million paid subscribers (as of April) and 5 million business subscribers (as of August, though 500,000 of them are Cal State University seats paid at $2.50 a month). It also loses incredibly large amounts of money. Yes, I realize that OpenAI also sells access to its API, but as you can see from the chart above, it is making a teeny tiny sliver of revenue from it in 2025, though I will also add that this chart has a little bit of green for "agent" revenue, which means it's very likely bullshit. Operator, OpenAI's so-called agent, is barely functional, and I have no idea how anyone would even begin to charge money for it outside of "please try my broken product." In any case, API sales appear to be a very, very small part of OpenAI's revenue stream, and that heavily suggests a lack of customer interest in integrating its models at scale. Worse still, this effectively turns OpenAI into an AI startup. Think about it: if OpenAI can't make the majority of its money through "innovating" in the development of large language models (LLMs), then it's just another company plugging LLMs into its software. While ChatGPT may be a very popular product, it is, by definition (and in its name!) a GPT wrapper, with the few differences being that OpenAI pays its own immediate costs, has the people necessary to continue improving its own models, and also continually makes promises to convince people it's anything other than just another AI startup. In fact, the only real difference is the amount of money backing it. Otherwise, OpenAI could be literally any foundation model company, and with a lack of real innovation within those models, it's just another startup trying to find ways to monetize generative AI, an industry that only ever seems to lose money.
As a result, we should start evaluating OpenAI as just another AI startup, as its promises do not appear to mesh with any coherent strategy, other than "we need $1 trillion." There does not seem to be much of a plan on a day-to-day basis, nor does there seem to be one about what OpenAI should be, other than that OpenAI will be a consumer hardware, consumer software, enterprise SaaS and data center operator, as well as running a social network. As I've discussed many times, LLMs are inherently flawed due to their probabilistic nature. "Hallucinations" — when a model authoritatively states something is true when it isn't (or takes an action that seems the most likely course of action, even if it isn't the right one) — are, according to OpenAI's own research, a "mathematically inevitable" feature of the technology, meaning that there is no fixing their most glaring, obvious problem, even with "perfect data." I'd wager the reason OpenAI is so eager to build out so much capacity while leaking so many diverse business lines is an attempt to get away from a dark truth: that when you peel away the hype, ChatGPT is a wrapper, every product it makes is a wrapper, and OpenAI is pretty fucking terrible at making products. Today I'm going to walk you through a fairly unique position: that OpenAI is just another boring AI startup lacking any meaningful product roadmap or strategy, using the press as a tool to pump its bags while very rarely delivering on what it's promised. It is a company with massive amounts of cash, industrial backing, and brand recognition, and otherwise is, much like its customers, desperately trying to work out how to make money selling products built on top of Large Language Models. OpenAI lives and dies on its mythology as the center of innovation in the world of AI, yet reality is so much more mediocre. Its revenue growth is slowing, its products are commoditized, its models are hardly state-of-the-art, the overall generative AI industry has lost its sheen, and its killer app is a mythology that has converted a handful of very rich people and very few others. OpenAI spent, according to The Information, 150% ($6.7 billion in costs) of its H1 2025 revenue ($4.3 billion) on research and development, producing the deeply-underwhelming GPT-5 and Sora 2, an app that I estimate costs it upwards of $5 for each video generation, based on Azure's published rates for the first Sora model, though it's my belief that these rates are unprofitable, all so that it can gain a few more users. To be clear, R&D is good, and useful, and in my experience, the companies that spend deeply on this tend to be the ones that do well. The reason why Huawei has managed to outpace its American rivals in several key areas — like automotive technology and telecommunications — is because it spends around a quarter of its revenue on developing new technologies and entering new markets, rather than on stock buybacks and dividends. The difference is that said R&D spending is both sustainable and useful, and has led to Huawei becoming a much stronger business, even as it languishes on a Commerce Department entity list that effectively cuts it off from US-made or US-origin parts or IP. Considering that OpenAI's R&D spending was 38.28% of its cash-on-hand by the end of the period (totalling $17.5bn, which we'll get to later), and what we've seen as a result, it's hard to describe it as either sustainable or useful.
OpenAI isn't innovative, it's exploitative, a giant multi-billion-dollar grift attempting to hide how deeply unexciting it is, and how nonsensical it is to continue backing it. Sam Altman is an excellent operator, capable of spreading his mediocre, half-baked mantras about how 2025 was the year AI got smarter than us, or how we'll be building 1GW data centers each week (something that, by my estimations, takes 2.5 years), taking advantage of how many people in the media, markets and global governments don't know a fucking thing about anything. OpenAI is also getting desperate. Beneath the surface of the media hype and trillion-dollar promises is a company struggling to maintain relevance, its entire existence built on top of hype and mythology. And at this rate, I believe it's going to miss its 2025 revenue projections, all while burning billions more than anyone has anticipated.


The Case Against Generative AI

Soundtrack: Queens of the Stone Age - First It Giveth Before we go any further: This is, for the third time this year, the longest newsletter I've ever written, weighing in somewhere around 18,500 words. I've written it specifically to be read at your leisure — dip in and out where you'd like — but also in one go.  This is my comprehensive case that yes, we’re in a bubble, one that will inevitably (and violently) collapse in the near future. I'll also be cutting this into a four-part episode starting tomorrow on my podcast Better Offline . I deeply appreciate your time. If you like this newsletter, please think about subscribing to the premium, which I write weekly. Thanks for reading. Alright, let’s do this one last time . In 2022, a (kind-of) company called OpenAI surprised the world with a website called ChatGPT that could generate text that sort-of sounded like a person using a technology called Large Language Models (LLMs), which can also be used to generate images, video and computer code.  Large Language Models require entire clusters of servers connected with high-speed networking, all containing this thing called a GPU — graphics processing units. These are different to the GPUs in your Xbox, or laptop, or gaming PC. They cost much, much more, and they’re good at doing the processes of inference (the creation of the output of any LLM) and training (feeding masses of training data to models, or feeding them information about what a good output might look like, so they can later identify a thing or replicate it). These models showed some immediate promise in their ability to articulate concepts or generate video, visuals, audio, text and code. They also immediately had one glaring, obvious problem: because they’re probabilistic, these models can’t actually be relied upon to do the same thing every single time. So, if you generated a picture of a person that you wanted to, for example, use in a story book, every time you created a new page, using the same prompt to describe the protagonist, that person would look different — and that difference could be minor (something that a reader should shrug off), or it could make that character look like a completely different person. Moreover, the probabilistic nature of generative AI meant that whenever you asked it a question, it would guess as to the answer, not because it knew the answer, but rather because it was guessing on the right word to add in a sentence based on previous training data. As a result, these models would frequently make mistakes — something which we later referred to as “hallucinations.”  And that’s not even mentioning the cost of training these models, the cost of running them, the vast amounts of computational power they required, the fact that the legality of using material scraped from books and the web without the owner’s permission was (and remains) legally dubious, or the fact that nobody seemed to know how to use these models to actually create profitable businesses.  These problems were overshadowed by something flashy, and new, and something that investors — and the tech media — believed would eventually automate the single thing that’s proven most resistant to automation: namely, knowledge work and the creative economy.  This newness and hype and these expectations sent the market into a frenzy, with every hyperscaler immediately creating the most aggressive market for one supplier I’ve ever seen. 
NVIDIA has sold over $200 billion of GPUs since the beginning of 2023 , becoming the largest company on the stock market and trading at over $170 as of writing this sentence only a few years after being worth $19.52 a share . While I’ve talked about some of the propelling factors behind the AI wave — automation and novelty — that’s not a complete picture. A huge reason why everybody decided to “do AI” was because the software industry’s growth was slowing , with SaaS (Software As A Service) company valuations stalling or dropping , resulting in  the terrifying prospect of companies having to “ under promise and over deliver ” and “be efficient.” Things that normal companies — those whose valuations aren’t contingent on ever-increasing, ever-constant growth — don’t have to worry about, because they’re normal companies.  Suddenly, there was the promise of a new technology — Large Language Models — that were getting exponentially more powerful, which was mostly a lie but hard to disprove because “powerful” can mean basically anything, and the definition of “powerful” depended entirely on whoever you asked at any given time, and what that person’s motivations were.  The media also immediately started tripping on its own feet, mistakenly claiming OpenAI’s GPT-4 model tricked a Taskrabbit into solving a CAPTCHA ( it didn’t — this never happened), or saying that “ people who don’t know how to code already [used] bots to produce full-fledged games, ” and if you’re wondering what “full-fledged” means, it means “pong” and a cobbled-together rolling demo of SkyRoads, a game from 1993 . The media (and investors) helped peddle the narrative that AI was always getting better, could do basically anything, and that any problems you saw today would be inevitably solved in a few short months, or years, or, well, at some point I guess.  LLMs were touted as a digital panacea, and the companies building them offered traditional software companies the chance to plug these models into their software using an API, thus allowing them to ride the same generative AI wave that every other company was riding.  The model companies similarly started going after individual and business customers, offering software and subscriptions that promised the world, though this mostly boiled down to chatbots that could generate stuff, and then doubled down with the promise of “agents” — a marketing term that’s meant to make you think “autonomous digital worker” but really means “ broken digital product .” Throughout this era, investors and the media spoke with a sense of inevitability that they never really backed up with data. It was an era based on confidently-asserted “vibes.” Everything was always getting better and more powerful, even though there was never much proof that this was truly disruptive technology, other than in its ability to disrupt apps you were using with AI — making them worse by, for example, suggesting questions on every Facebook post that you could ask Meta AI, but which Meta AI couldn’t answer. “AI” was omnipresent, and it eventually grew to mean everything and nothing. OpenAI would see its every move lorded over like a gifted child, its CEO Sam Altman called the “ Oppenheimer of Our Age ,” even if it wasn’t really obvious why everyone was impressed. GPT-4 felt like something a bit different, but was it actually meaningful?  
The thing is, Artificial Intelligence is built and sold on not just faith, but a series of myths that the AI boosters expect us to believe with the same certainty that we treat things like gravity, or the boiling point of water.  Can large language models actually replace coders? Not really, no, and I’ll get into why later in the piece. Can Sora — OpenAI’s video creation tool — replace actors or animators? No, not at all, but it still fills the air full of tension because you can immediately see who is pre-registered to replace everyone that works for them.  AI is apparently replacing workers, but nobody appears to be able to prove it! But every few weeks a story runs where everybody tries to pretend that AI is replacing workers with some poorly-sourced and incomprehensible study , never actually saying “someone’s job got replaced by AI” because it isn’t happening at scale, and because if you provide real-world examples, people can actually check. To be clear, some people have lost jobs to AI, just not the white collar workers, software engineers, or really any of the career paths that the mainstream media and AI investors would have you believe.  Brian Merchant has done excellent work covering how LLMs have devoured the work of translators , using cheap, “almost good” automation to lower already-stagnant wages in a field that was already hurting before the advent of generative AI, with some having to abandon the field, and others pushed into bankruptcy. I’ve heard the same for art directors, SEO experts, and copy editors, and Christopher Mims of the Wall Street Journal covered these last year .  These are all fields with something in common: shitty bosses with little regard for their customers who have been eagerly waiting for the opportunity to slash contract labor. To quote Merchant, “the drumbeat, marketing, and pop culture of ‘powerful AI’ encourages and permits management to replace or degrade jobs they might not otherwise have.”  Across the board, the people being “replaced” by AI are the victims of lazy, incompetent cost-cutters who don’t care if they ship poorly-translated text. To quote Merchant again, “[AI hype] has created the cover necessary to justify slashing rates and accepting “good enough” automation output for video games and media products.” Yet the jobs crisis facing translators speaks to the larger flaws of the Large Language Model era, and why other careers aren’t seeing this kind of disruption. Generative AI creates outputs , and by extension defines all labor as some kind of output created from a request. In the case of translation, it’s possible for a company to get by with a shitty version, because many customers see translation as “what do these words say,” even though ( as one worker told Merchant ) translation is about conveying meaning. Nevertheless, “translation” work had already started to condense to a world where humans would at times clean up machine-generated text, and the same worker warned that the same might come for other industries. Yet the problem is that translation is a heavily output-driven industry, one where (idiot) bosses can say “oh yeah that’s fine” because they ran an output back through Google Translate and it seemed fine in their native tongue. The problems of a poor translation are obvious, but the customers of translation are, it seems, often capable of getting by with a shitty product. The problem is that most jobs are not output-driven at all, and what we’re buying from a human being is a person’s ability to think.   
Every CEO talking about AI replacing workers is an example of the real problem: that most companies are run by people who don’t understand or experience the problems they’re solving, don’t do any real work, don’t face any real problems, and thus can never be trusted to solve them. The Era of the Business Idiot is the result of letting management consultants and neoliberal “free market” sociopaths take over everything, leaving us with companies run by people who don’t know how the companies make money, just that they must always make more. When you’re a big, stupid asshole, every job that you see is condensed to its outputs, and not the stuff that leads up to the output, or the small nuances and conscious decisions that make an output good as opposed to simply acceptable, or even bad.  What does a software engineer do? They write code! What does a writer do? They write words! What does a hairdresser do? They cut hair!  Yet that’s not actually the case.  As I’ll get into later, a software engineer does far more than just code, and when they write code they’re not just saying “what would solve this problem?” with a big smile on their face — they’re taking into account their years of experience, what code does, what code could do , all the things that might break as a result, and all of the things that you can’t really tell from just looking at code , like whether there’s a reason things are made in a particular way. A good coder doesn’t just hammer at the keyboard with the aim of doing a particular task. They factor in questions like: How does this functionality fit into the code that’s already here? Or, if someone has to update this code in the future, how do I make it easy for them to understand what I’ve written and to make changes without breaking a bunch of other stuff? A writer doesn’t just “write words.” They jostle ideas and ideals and emotions and thoughts and facts and feelings into a condensed piece of text, explaining both what’s happening and why it’s happening from their perspective, finding nuanced ways to convey large topics, none of which is the result of a single (or many) prompts but the ever-shifting sand of a writer’s brain.  Good writing is a fight between a bunch of different factors: structure, style, intent, audience, and prioritizing the things that you (or your client) care about in the text. It’s often emotive — or at the very least, driven or inspired by a given emotion — which is something that an AI simply can’t replicate in a way that’s authentic and believable.  And a hairdresser doesn’t just cut hair, but cuts your hair, which may be wiry, dry, oily, long, short, healthy, unhealthy, on a scalp with particular issues, at a time of year when perhaps you want to change length, at a time that fits you, in “the way you like” which may be impossible to actually write down but they get it just right. And they make conversation, making you feel at ease while they snip and clip away at your tresses, with you having to trust that they’ll get it right.  This is the true nature of labor that executives fail to comprehend at scale: that the things we do are not units of work, but extrapolations of experience, emotion, and context that cannot be condensed in written meaning. Business Idiots see our labor as the result of a smart manager saying “do this,” rather than human ingenuity interpreting both a request and the shit the manager didn’t say. What does a CEO do? 
Uhhh, um, well, a Harvard study says they spend 25% of their time on "people and relationships," 25% on "functional and business unit reviews," 16% on "organization and culture," and 21% on "strategy," with a few percent here and there for things like "professional development." That's who runs the vast majority of companies: people that describe their work predominantly as "looking at stuff," "talking to people" and "thinking about what we do next." The most highly-paid jobs in the world are impossible to describe, their labor described in a mish-mash of LinkedInspiration, yet everybody else's labor is an output that can be automated. As a result, Large Language Models seem like magic. When you see everything as an outcome — an outcome you may or may not understand, and definitely don't understand the process behind, let alone care about — you kind of already see your workers as LLMs. You create a stratification of the workforce that goes beyond the normal organizational chart, with senior executives — those closer to the class level of CEO — acting as those who have risen above the doldrums of doing things to the level of "decisionmaking," a fuzzy term that can mean everything from "making nuanced decisions with input from multiple different subject-matter experts" to, as ServiceNow CEO Bill McDermott did in 2022, "[making] it clear to everybody [in a boardroom of other executives], everything you do: AI, AI, AI, AI, AI." The same extends to some members of the business and tech media that have, for the most part, gotten by without having to think too hard about the actual things the companies are saying. I realize this sounds a little mean, and I must be clear it doesn't mean that these people know nothing, just that it's been possible to scoot through the world without thinking too hard about whether or not something is true. When Salesforce said back in 2024 that its "Einstein Trust Layer" and AI would be "transformational for jobs," the media dutifully wrote it down and published it without a second thought. It fully trusted Marc Benioff when he said that Agentforce agents would replace human workers, and then again when he said that AI agents are doing "30% to 50% of all the work in Salesforce itself," even though that's an unproven and nakedly ridiculous statement. Salesforce's CFO said earlier this year that AI wouldn't boost sales growth in 2025. One would think this would change how they're covered, or how seriously one takes Marc Benioff. It hasn't, because nobody is paying attention. In fact, nobody seems to be doing their job. This is how the core myths of generative AI were built: by executives saying stuff and the media publishing it without thinking too hard. AI is replacing workers! AI is writing entire computer programs! AI is getting exponentially more powerful! What does "powerful" mean? That the models are getting better on benchmarks that are rigged in their favor, but because nobody fucking explains it, regular people are regularly told that AI is "powerful." The only thing "powerful" about generative AI is its mythology.
The world's executives, entirely disconnected from labor and actual production, are doing the only thing they know how to do — spend a bunch of money and say vague stuff about "AI being the future." There are people — journalists, investors, and analysts — that have built entire careers on filling in the gaps for the powerful as they splurge billions of dollars and repeat with increasing desperation that "the future is here" as absolutely nothing happens. You've likely seen a few ridiculous headlines recently. One of the most recent, and most absurd, is that OpenAI will pay Oracle $300 billion over five years, closely followed by the claim that NVIDIA will "invest" "$100 billion" in OpenAI to build 10GW of AI data centers, though the deal is structured in a way that means that OpenAI is paid "progressively as each gigawatt is deployed," and OpenAI will be leasing the chips (rather than buying them outright). I must be clear that these deals are intentionally made to continue the myth of generative AI, to pump NVIDIA, and to make sure OpenAI insiders can sell $10.3 billion of shares. OpenAI cannot afford the $300 billion, and NVIDIA hasn't sent OpenAI a cent and won't do so if it can't build the data centers, which OpenAI most assuredly can't afford to do. NVIDIA needs this myth to continue, because in truth, all of these data centers are being built for demand that doesn't exist, or that — if it exists — doesn't necessarily translate into business customers paying huge amounts for access to OpenAI's generative AI services. NVIDIA, OpenAI, CoreWeave and other AI-related companies hope that by announcing theoretical billions of dollars (or hundreds of billions of dollars) of these strange, vague and impossible-seeming deals, they can keep pretending that demand is there, because why else would they build all of these data centers, right? That, and the entire stock market rests on NVIDIA's back. It accounts for 7% to 8% of the value of the S&P 500, and Jensen Huang needs to keep selling GPUs. I intend to explain later on how all of this works, and how brittle it really is. The intention of these deals is simple: to make you think "this much money can't be wrong." It can. These people need you to believe this is inevitable, but they are being proven wrong, again and again, and today I'm going to continue doing so. Underpinning these stories about huge amounts of money and endless opportunity lies a dark secret — that none of this is working, and all of this money has been invested in a technology that doesn't make much revenue and loves to burn millions or billions or hundreds of billions of dollars. Over half a trillion dollars has gone into an entire industry without a single profitable company developing models or products built on top of models. By my estimates, there is around $44 billion of revenue in generative AI this year (when you add in Anthropic and OpenAI's revenues to the pot, along with the other stragglers), and most of that number has been gathered through reporting from outlets like The Information, because none of these companies share their revenues, all of them lose shit tons of money, and their actual revenues are really, really small. Only one member of the Magnificent Seven (outside of NVIDIA) has ever disclosed its AI revenue — Microsoft, which stopped reporting in January 2025, when it reported "$13 billion in annualized revenue," so around $1.083 billion a month. Microsoft is a sales MACHINE.
It is built specifically to create or exploit software markets, suffocating competitors by using its scale to drive down prices, and to leverage the ecosystem that it’s created over the past few decades. $1 billion a month in revenue is chump change for an organization that makes over $27 billion a quarter in PROFITS .  Don’t worry Satya, I’ll come back to you later. “But Ed, the early days!” Worry not — I’ve got that covered .  This is nothing like any other era of tech. There has never been this kind of cash-rush, even in the fiber boom . Over a decade, Amazon spent about one-tenth of the capex that the Magnificent Seven spent in two years on generative AI building AWS — something that now powers a vast chunk of the web, and has long been Amazon’s most profitable business unit .  Generative AI is nothing like Uber , with OpenAI and Anthropic’s true costs coming in at about $159 billion in the past two years, approaching five times Uber’s $30 billion all-time burn. And that’s before the bullshit with NVIDIA and Oracle. Microsoft last reported AI revenue in January . It’s October this week. Why did it stop reporting this number, you think? Is it because the numbers are so good it couldn’t possibly let people know? As a general rule, publicly traded companies — especially those where the leadership are compensated primarily in equity — tend to brag about their successes, in part because said bragging boosts the value of the thing that the leadership gets paid in. There’s no benefit to being shy. Oracle literally made a regulatory filing to boast it had a $30 billion customer , which turned out to be OpenAI, who eventually agreed (publicly) to spend $300 billion in compute over five years .  Which is to say that Microsoft clearly doesn’t have any good news to share, and as I’ll reveal later, they can’t even get 3% of their 440 million Microsoft 365 subscribers to pay for Microsoft 365 Copilot.  If Microsoft can’t sell this shit, nobody can.  Anyway, I’m nearly done, sorry, you see, I’m writing this whole thing as if you’re brand new and walking up to this relatively unprepared, so I need to introduce another company.  In 2020, a splinter group jumped off of OpenAI, funded by Amazon and Google to do much the same thing as OpenAI but pretend to be nicer about it until they have to raise from the Middle East . Anthropic has always been better at coding for some reason, and people really like its Claude models.  Both OpenAI and Anthropic have become the only two companies in generative AI to make any real progress, either in terms of recognition or in sheer commercial terms, accounting for the majority of the revenue in the AI industry.  In a very real sense, the AI industry’s revenue is OpenAI and Anthropic. In the year where Microsoft recorded $13bn in AI revenues, $10 billion came from OpenAI’s  spending on Microsoft Azure. Anthropic burned $5.3 billion last year — with the vast majority of that going towards compute . Outside of these two companies, there’s barely enough revenue to justify a single data center. Where we sit today is a time of immense tension. Mark Zuckerberg says we’re in a bubble , Sam Altman says we’re in a bubble , Alibaba Chairman and billionaire Joe Tsai says we’re in a bubble , Apollo says we’re in a bubble , nobody is making money and nobody knows why they’re actually doing this anymore, just that they must do it immediately.  And they have yet to make the case that generative AI warranted any of these expenditures.  
That was undoubtedly the longest introduction to a newsletter I've ever written, and I took my time because this post demands a level of foreshadowing and exposition, and because I want it to make sense to anyone who reads it — whether they've read my newsletter for years, or whether they're only just now investigating their suspicions that generative AI may not be all it's cracked up to be. Today I will make the case that generative AI's fundamental growth story is flawed, and explain why we're in the midst of an egregious bubble. This industry is sold by keeping things vague, and knowing that most people don't dig much deeper than a headline, a problem I simply do not have. This industry is effectively in service of two companies — OpenAI and NVIDIA — who pump headlines out through endless contracts between themselves and their subsidiaries or investments to give the illusion of activity. OpenAI is now, at this point, on the hook for over a trillion dollars, an egregious sum for a company that has already forecast billions in losses, with no clear explanation as to how it'll afford any of this beyond "we need more money" and the vague hope that there's another SoftBank or Microsoft waiting in the wings to swoop in and save the day. I'm going to walk you through where I see this industry today, and why I see no future for it beyond a fiery apocalypse. Everybody (reasonably!) harps on about hallucinations — which, to remind you, is when a model authoritatively states something that isn't true — but the truth is far more complex, and far worse than it seems. You cannot rely on a large language model to do what you want. Even the most highly-tuned models on the most expensive and intricate platform can't actually be relied upon to do exactly what you want. A "hallucination" isn't just when these models say something that isn't true. It's when they decide to do something wrong because it seems the most likely thing to do, or when a coding model decides to go on a wild goose chase, failing the user and burning a ton of money in the process. The advent of "reasoning" models — those engineered to 'think' through problems in a way reminiscent of a human — and the expansion of what people are (trying) to use LLMs for demand that the definition of an AI hallucination be widened, referring not merely to factual errors, but to fundamental errors in understanding the user's request or intent, or what constitutes a task, in part because these models cannot think and do not know anything. However successful a model might be in generating something good *once*, it will also often generate something bad, or it'll generate the right thing but in an inefficient and over-verbose fashion. You do not know what you're going to get each time, and hallucinations multiply with the complexity of the thing you're asking for, or when a task contains multiple steps (which is a fatal blow to the idea of "agents"). You can add as many levels of intrigue and "reasoning" as you want, but Large Language Models cannot be trusted to do something correctly, or even consistently, every time. Model companies have successfully convinced everybody that the issue is that users are prompting the models wrong, and that people need to be "trained to use AI," but what they're doing is training people to explain away the inconsistencies of Large Language Models, and to assume individual responsibility for what is an innate flaw in how large language models work.
Large Language Models are also uniquely expensive. Many mistakenly try and claim this is like the dot com boom or Uber, but the basic unit economics of generative AI are insane. Providers must purchase tens or hundreds of thousands of GPUs costing $50,000 apiece, and hundreds of millions or billions of dollars of infrastructure for large clusters. And that's without mentioning things like staffing, construction, power, or water. Then you turn them on and start losing money. Despite hundreds of billions of dollars of GPUs sold, nobody seems to make any money, other than NVIDIA, the company that makes them, and resellers like Dell and Supermicro, who buy the GPUs, put them in servers, and sell them to other people. This arrangement works out great for Jensen Huang, and terribly for everybody else. I am going to explain the insanity of the situation we find ourselves in, and why I continue to do this work undeterred. The bubble has entered its most pornographic, aggressive and destructive stage, where the more obvious it becomes that they're cooked, the more ridiculous the generative AI industry will act — a dark juxtaposition against every new study that says "generative AI does not work" or new story about ChatGPT's uncanny ability to activate mental illness in people. So, let's start simple: NVIDIA is a hardware company that sells GPUs, including the consumer GPUs that you'd see in a modern gaming PC, but when you read someone say "GPU" within the context of AI, they mean enterprise-focused GPUs like the A100, H100, H200, and more modern GPUs like the Blackwell-series B200 and GB200 (which combines two GPUs with an NVIDIA CPU). These GPUs cost anywhere from $30,000 to $50,000 (or as high as $70,000 for the newer Blackwell GPUs), and require tens of thousands of dollars more of infrastructure — networking to "cluster" server racks of GPUs together to provide compute, and massive cooling systems to deal with the massive amounts of heat they produce, as well as the servers themselves that they run on, which typically use top-of-the-line data center CPUs, and contain vast quantities of high-speed memory and storage. While the GPU itself is likely the most expensive single item within an AI server, the other costs — and I'm not even factoring in the actual physical building that the server lives in, or the water or electricity that it uses — add up. I've mentioned NVIDIA because it has a virtual monopoly in this space. Generative AI effectively requires NVIDIA GPUs, in part because it's the only company really making the kinds of high-powered cards that generative AI demands, and because NVIDIA created something called CUDA — a collection of software tools that lets programmers write software that runs on GPUs, which were traditionally used primarily for rendering graphics in games. While there are open-source alternatives, as well as alternatives from Intel (with its Arc GPUs) and AMD (NVIDIA's main rival in the consumer space), these aren't nearly as mature or feature-rich. Due to the complexities of AI models, one cannot just stand up a few of these things either — you need clusters of thousands, tens of thousands, or hundreds of thousands of them for it to be worthwhile, making any investment in GPUs run into the hundreds of millions or billions of dollars, especially considering they require completely different data center architecture to make them run.
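To give a sense of the order of magnitude here, this is a rough sketch of cluster hardware costs using the per-GPU prices quoted above; the 1.4x multiplier for servers, networking and cooling hardware is an illustrative assumption of mine, not a sourced figure, and it ignores the building, the power and the water entirely.

```python
# Order-of-magnitude cluster capex, using the per-GPU prices quoted above.
# The 1.4x overhead multiplier (servers, networking, cooling hardware) is an
# assumption for illustration; the building, power and water aren't included.

def cluster_capex(gpu_count: int, gpu_price: float = 50_000,
                  overhead_multiplier: float = 1.4) -> float:
    """Approximate hardware cost of a GPU cluster in dollars."""
    return gpu_count * gpu_price * overhead_multiplier

# 485,000 is Microsoft's reported 2024 Hopper purchase, mentioned elsewhere in this piece.
for gpus in (10_000, 100_000, 485_000):
    print(f"{gpus:>7,} GPUs -> ${cluster_capex(gpus) / 1e9:.1f} billion")
```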
A common request — like asking a generative AI model to parse through thousands of lines of code and make a change or an addition — may use multiple of these $50,000 GPUs at the same time, and so if you aspire to serve thousands, or millions, of concurrent users, you need to spend big. Really big. It's these factors — the vendor lock-in, the ecosystem, and the fact that generative AI only works when you're buying GPUs at scale — that underpin the rise of NVIDIA. But beyond the economic and technical factors, there are human ones, too. To understand the AI bubble is to understand why CEOs do the things they do. Because an executive's job is so vague, they can telegraph the value of their "labor" by spending money on initiatives and making partnerships. AI gave hyperscalers the excuse to spend hundreds of billions of dollars on data centers and buy a bunch of GPUs to go in them, because that, to the markets, looks like they're doing something. By virtue of spending a lot of money in a frighteningly short amount of time, Satya Nadella received multiple glossy profiles, all without having to prove that AI can really do anything, be it doing a job or making Microsoft money. Nevertheless, AI allowed CEOs to look busy, and once the markets and journalists had agreed on the consensus opinion that "AI would be big," all that these executives had to do was buy GPUs and "do AI." We are in the midst of the rollout of one of the darkest forms of software in history, described by many as an unwanted guest invading their products, their social media feeds, and their bosses' empty minds, and resting in the hands of monsters. Every story of its success feels bereft of any real triumph, with every literal description of its abilities involving multiple caveats about the mistakes it makes or the incredible costs of running it. Generative AI exists for two reasons: to cost money, and to make executives look busy. It was meant to be the new enterprise software and the new iPhone and the new Netflix all at once, a panacea where software guys pay one hardware guy for GPUs to unlock the incredible value creation of the future. Generative AI was always set up to fail, because it was meant to be everything, was talked about like it was everything, and is still sold like it's everything, yet for all the fucking hype, it all comes down to two companies: OpenAI, and, of course, NVIDIA. NVIDIA was, for a while, living high on the hog. All CEO Jensen Huang had to do every three months was say "check out these numbers" and the markets and business journalists would squeal with glee, even as he said stuff like "the more you buy the more you save," a line that in part tips its hat to the (very real and sensible) idea of accelerated computing, but that, framed within the context of the cash inferno that's generative AI, seems ludicrous. Huang's showmanship worked really well for NVIDIA for a while, because for a while the growth was easy. Everybody was buying GPUs. Meta, Microsoft, Amazon, and Google (and to a lesser extent Apple and Tesla) make up 42% of NVIDIA's revenue, creating, at least for the first four, a degree of shared mania where everybody justified buying tens of billions of dollars of GPUs a year by saying "the other guy is doing it!" This is one of the major reasons the AI bubble is happening: people conflated NVIDIA's incredible sales with "interest in AI," rather than with everybody simply buying GPUs. Don't worry, I'll explain the revenue side a little bit later. We're here for the long haul.
Anyway, NVIDIA is facing a problem — that the only thing that grows forever is cancer.

On September 9, 2025, the Wall Street Journal said that NVIDIA’s “wow” factor was fading, going from beating analyst estimates by nearly 21% in its Fiscal Year Q2 2024 earnings to scraping by with a mere 1.52% beat in its most recent earnings — something that, for any other company, would be a good thing, but that, framed against the delusional expectations generative AI has inspired, looks nothing short of ominous. Per the Wall Street Journal:

In any other scenario, 56% year-over-year growth would lead to an abundance of Dom Perignon and Huang signing hundreds of boobs, but this is NVIDIA, and that’s just not good enough. Back in February 2024, NVIDIA was booking 265% year-over-year growth, but in its February 2025 earnings, NVIDIA only grew by a measly 78% year-over-year.

It isn’t so much that NVIDIA isn’t growing, but that growing year-over-year at the rates people expect is insane. Life was a lot easier when NVIDIA went from $6.05 billion in revenue in Q4 FY2023 to $22 billion in revenue in Q4 FY2024, but for it to grow even 55% year-over-year from Q2 FY2026 ($46.7 billion) to Q2 FY2027 would require it to make $72.385 billion in revenue in the space of three months, mostly from selling GPUs (which make up around 88% of its revenue). This would put NVIDIA in the ballpark of Microsoft ($76 billion in the last quarter) and within the neighborhood of Apple ($94 billion in the last quarter), predominantly making money in an industry that a year-and-a-half ago barely made the company $6 billion in a quarter.

And the market needs NVIDIA to perform, as the company makes up 8% of the value of the S&P 500. It’s not enough for it to be wildly profitable, or to have a monopoly on selling GPUs, or to have effectively 10x’d its stock in a few years. It must continue to grow at the fastest rate of anything ever, making more and more money selling these GPUs to a small group of companies that immediately start losing money once they plug them in.

While a few members of the Magnificent Seven could be depended on to funnel tens of billions of dollars into a furnace each quarter, there were limits, even for companies like Microsoft, which had bought over 485,000 GPUs in 2024 alone.

To take a step back: companies like Microsoft, Google and Amazon make their money by either selling access to Large Language Models that people incorporate into their products, or by renting out servers full of GPUs to run inference (as said previously, the process of generating an output from a model or series of models) or train AI models for companies that develop and market models themselves, namely Anthropic and OpenAI.

The latter revenue stream is where Jensen Huang found a solution to that eternal growth problem: the neocloud, namely CoreWeave, Lambda and Nebius. These businesses are fairly straightforward. They own (or lease) data centers that they then fill with servers full of NVIDIA GPUs, which they rent on an hourly basis to customers, either on a per-GPU basis or in large batches for larger customers, who guarantee they’ll use a certain amount of compute and sign up to long-term (i.e. more than an hour at a time) commitments.
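Before we get to the neoclouds properly, it is worth making that growth treadmill concrete. The only inputs here are the quarterly figures quoted a few paragraphs up:

```python
# NVIDIA's growth treadmill, using the quarterly figures quoted above.

q2_fy2026_revenue = 46.7   # billions, reported
required_growth = 0.55     # the ~55% year-over-year growth the market expects

q2_fy2027_needed = q2_fy2026_revenue * (1 + required_growth)
print(f"Q2 FY2027 revenue needed: ${q2_fy2027_needed:.3f} billion")  # ~$72.385 billion

# For scale: recent quarters for the two biggest companies on earth.
microsoft_quarter = 76.0   # billions
apple_quarter = 94.0       # billions
print(f"vs Microsoft's quarter: {q2_fy2027_needed / microsoft_quarter:.0%}")
print(f"vs Apple's quarter:     {q2_fy2027_needed / apple_quarter:.0%}")
# One quarter of mostly-GPU sales would need to be roughly 95% of Microsoft's and
# 77% of Apple's latest quarters, in a market that barely existed two years ago.
```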
A neocloud is a specialist cloud compute company that exists only to provide access to GPUs for AI, unlike Amazon Web Services, Microsoft Azure and Google Cloud, all of which have healthy businesses selling other kinds of compute, with AI (as I’ll get into later) failing to provide much of a return on investment.  It’s not just the fact that these companies are more specialized than, say, Amazon’s AWS or Microsoft Azure. As you’ve gathered from the name, these are new, young, and in almost all cases, incredibly precarious businesses — each with financial circumstances that would make a Greek finance minister blush.  That’s because setting up a neocloud is expensive . Even if the company in question already has data centers — as CoreWeave did with its cryptocurrency mining operation — AI requires completely new data center infrastructure to house and cool the GPUs , and those GPUs also need paying for, and then there’s the other stuff I mentioned earlier, like power, water, and the other bits of the computer (the CPU, the motherboard, the memory and storage, and the housing).  As a result, these neoclouds are forced to raise billions of dollars in debt, which they collateralize using the GPUs they already have , along with contracts from customers, which they use to buy more GPUs. CoreWeave, for example, has $25 billion in debt on estimated revenues of $5.35 billion , losing hundreds of millions of dollars a quarter. You know who also invests in these neoclouds? NVIDIA! NVIDIA is also one of CoreWeave’s largest customers (accounting for 15% of its revenue in 2024), and just signed a deal to buy $6.3 billion of any capacity that CoreWeave can’t otherwise sell to someone else through 2032 , an extension of a $1.3 billion 2023 deal reported by the Information . It was the anchor investor ($250 million) in CoreWeave’s IPO , too. NVIDIA is currently doing the same thing with Lambda, another neocloud that NVIDIA invested in, which also  plans to go public next year. NVIDIA is also one of Lambda’s largest customers, signing a deal with it this summer to rent 10,000 GPUs for $1.3 billion over four years . In the UK, NVIDIA has just invested $700 million in Nscale , a former crypto miner that has never built an AI data center , and that has, despite having no experience, committed $1 billion (and/or 100,000 GPUs) to an OpenAI data center in Norway . On Thursday, September 25, Nscale announced it had closed another funding round, with NVIDIA listed as a main backer — although it’s unclear how much money it put in . It would be safe to assume it’s another few hundred million.  NVIDIA also invested in Nebius , an outgrowth of Russian conglomerate Yandex, and Nebius provides, through a partnership with NVIDIA, tens of thousands of dollars’ worth of compute credits to companies in NVIDIA’s Inception startup program. NVIDIA’s plan is simple: fund these neoclouds, let these neoclouds load themselves up with debt, at which point they buy GPUs from NVIDIA, which can then be used as collateral for loans, along with contracts from customers, allowing the neoclouds to buy even more GPUs. It’s like that Robinhood infinite money glitch… …except, that is, for one small problem. There don’t appear to be that many customers. As I went into recently on my premium newsletter , NVIDIA funds and sustains Neoclouds as a way of funnelling revenue to itself, as well as partners like Supermicro and Dell , resellers that take NVIDIA GPUs and put them in servers to sell pre-built to customers. 
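To give a sense of how lopsided CoreWeave’s balance sheet is relative to its business, here is the ratio spelled out, using just the two figures cited above:

```python
# CoreWeave's debt load versus its expected revenue, per the figures above.

coreweave_debt = 25.0          # billions of dollars of debt
coreweave_revenue_2025 = 5.35  # billions of dollars of estimated 2025 revenue

print(f"Debt-to-revenue ratio: {coreweave_debt / coreweave_revenue_2025:.1f}x")
# Roughly 4.7x its entire expected annual revenue in debt, before you account for
# the fact that it loses hundreds of millions of dollars a quarter.
```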
Dell and Supermicro, the two resellers I just mentioned, made up 39% of NVIDIA’s revenues last quarter.

Yet when you remove revenue from the hyperscalers and their partners — Microsoft, Amazon, Google, OpenAI and NVIDIA itself — from the revenues of these neoclouds, there’s barely $1 billion in revenue combined across CoreWeave, Nebius and Lambda. CoreWeave’s $5.35 billion revenue is predominantly made up of its contracts with NVIDIA, Microsoft (offering compute for OpenAI), Google (hiring CoreWeave to offer compute for OpenAI), and OpenAI itself, which has promised CoreWeave $22.4 billion in business over the next few years.

This is all a lot of stuff, so I’ll make it really simple: there is no real money in offering AI compute, but that isn’t Jensen Huang’s problem, so he simply has NVIDIA hand money to these companies so that they have contracts to point to when they raise debt to buy more NVIDIA GPUs. Neoclouds are effectively giant private equity vehicles that exist to raise money to buy GPUs from NVIDIA, or for hyperscalers to move money around so that they don’t increase their capital expenditures and can, as Microsoft did earlier in the year, simply walk away from deals they don’t like. Nebius’ “$17.4 billion deal” with Microsoft even included a clause in its 6-K filing that Microsoft can terminate the deal in the event the capacity isn’t built by the delivery dates, and Nebius has already used the contract to raise $3 billion to… build the data center to provide compute for the contract.

Here, let me break down the numbers: From my analysis, it appears that CoreWeave, despite expectations to make that $5.35 billion this year, has only around $500 million of non-Magnificent Seven or OpenAI AI revenue in 2025, with Lambda estimated to have around $100 million in AI revenue, and Nebius around $250 million without Microsoft’s share, and that’s being generous.

In simpler terms, the Magnificent Seven is the AI bubble, and the AI bubble exists to buy more GPUs, because (as I’ll show) there’s no real money or growth coming out of this, other than in the amount that private credit is investing — “$50 billion a quarter, for the low end, for the past three quarters.”

I dunno man, let’s start simple: $50 billion a quarter of data center funding is going into an industry that has less revenue than Genshin Impact. That feels pretty bad. Who’s gonna use these data centers? How are they going to even make money on them? Private equity firms don’t typically hold onto assets, they sell them or take them public. Doesn’t seem great to me!

Anyway, if AI was truly the next big growth vehicle, neoclouds would be swimming in diverse global revenue streams. Instead, they’re heavily centralized around the same few names, one of which (NVIDIA) directly benefits from their existence — not because they’re companies doing real business, but because they’re entities that can accrue debt and spend money on GPUs. These neoclouds are entirely dependent on a continual flow of private credit from firms like Goldman Sachs (Nebius, CoreWeave, Lambda for its IPO), JPMorgan (Lambda, Crusoe, CoreWeave), and Blackstone (Lambda, CoreWeave), who have in a very real sense created an entire debt-based infrastructure to feed billions of dollars directly to NVIDIA, all in the name of an AI revolution that’s yet to arrive.

The fact that the rest of the neocloud revenue stream is effectively either a hyperscaler or OpenAI is also concerning.
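Putting those estimates side by side (all three are my estimates from above, and they are generous), the mismatch with the private credit flowing in looks like this:

```python
# Neocloud revenue that doesn't come from the Magnificent Seven, OpenAI or NVIDIA,
# versus the private credit pouring into data centers. Figures are the estimates above.

outside_revenue = {
    "CoreWeave": 0.50,  # billions, non-Mag-Seven / non-OpenAI AI revenue, 2025 (estimate)
    "Lambda": 0.10,     # billions (estimate)
    "Nebius": 0.25,     # billions, excluding Microsoft's share (estimate)
}
private_credit_per_quarter = 50.0  # billions, the low-end figure cited above

total_outside = sum(outside_revenue.values())
print(f"Combined outside revenue: ${total_outside:.2f} billion a year")
print(f"Private credit inflow:    ${private_credit_per_quarter:.0f} billion a quarter")
print(f"Roughly {private_credit_per_quarter * 4 / total_outside:.0f}x more money goes in "
      f"each year than comes out from actual third-party customers")
```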
Hyperscalers are, at this point, the majority of data center capital expenditures, and have yet to prove any kind of success from building out this capacity, outside, of course, of Microsoft’s investment in OpenAI, which has succeeded in generating revenue while burning billions of dollars.

Hyperscaler revenue is also capricious, but even if it isn’t, why are there no other major customers? Why, across all of these companies, does there not seem to be one major customer who isn’t OpenAI? The answer is obvious: nobody that wants it can afford it, and those who can afford it don’t need it.

It’s also unclear what exactly hyperscalers are doing with this compute, because it sure isn’t “making money.” While Microsoft makes $10 billion in revenue from renting compute to OpenAI via Microsoft Azure, it does so at cost — reportedly charging OpenAI $1.30-per-hour for each A100 GPU it rents, a loss of $2.20 an hour per GPU — meaning that it is likely losing money on this compute, especially as SemiAnalysis puts the total cost per hour per GPU at around $1.46 once you include a hyperscaler’s cost of capital and debt, though it’s unclear whether that figure is for an H100 or A100 GPU.

In any case, how do these neoclouds pay for their debt if the hyperscalers give up, or NVIDIA doesn’t send them money, or, more likely, private credit begins to notice that there’s no real revenue growth outside of circular compute deals with the neoclouds’ largest supplier, investor and customer? They don’t! In fact, I have serious concerns that they can’t even build the capacity necessary to fulfil these deals, but nobody seems to worry about that.

No, really! It appears to be taking Oracle and Crusoe around 2.5 years per gigawatt of compute capacity. How exactly are any of these neoclouds (or Oracle itself) able to expand to actually capture this revenue? Who knows! But I assume somebody is going to say “OpenAI!”

Here’s an insane statistic for you: OpenAI will account for — in both its own revenue (projected $13 billion) and its own compute costs ($16 billion, according to The Information, although that figure is likely out of date, and seemingly only includes the compute it’ll use, not the compute it has committed to build, and thus has spent money on) — about 50% of all AI revenues in 2025. That figure takes into account the $400m ARR for ServiceNow, Adobe, and Salesforce; the $35bn in revenue for the Magnificent Seven from AI (not profit, and based on figures from the previous year); revenue from neoclouds like CoreWeave, Nebius, and Lambda; and the estimated revenue from the entire generative AI industry (including Anthropic and other smaller players, like Perplexity and Anysphere) for a total of $55bn. OpenAI is the generative AI industry — and it’s a dog of a company.

As a reminder, OpenAI has leaked that it’ll burn $115 billion in the next four years, and based on my estimates, it needs to raise more than $290 billion in the next four years based on its $300 billion deal with Oracle alone. That alone is a very, very bad sign, especially as we’re three years and $500 billion or more into this hype cycle with few signs of life outside of, well, OpenAI promising people money.

Credit to Anthony Restaino for this horrifying graphic:

This is not what a healthy, stable industry looks like. Alright, well, things can’t be that bad on the software side.
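Before we get to the software side, here is the arithmetic behind that “about 50%” claim, using only the components listed above:

```python
# OpenAI's share of 2025 "AI revenue," using the components cited above (in billions).

openai_revenue = 13.0        # OpenAI's projected 2025 revenue
openai_compute_spend = 16.0  # OpenAI's compute costs per The Information (likely stale)
industry_total = 55.0        # the estimated total for the whole generative AI industry

openai_footprint = openai_revenue + openai_compute_spend
print(f"OpenAI's revenue plus compute spend: ${openai_footprint:.0f} billion")
print(f"Share of the ~$55bn total: {openai_footprint / industry_total:.0%}")
# (13 + 16) / 55 comes out around 53%: one money-losing company is roughly half the "market."
```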
As I covered on my premium newsletter a few weeks ago, everybody is losing money on generative AI, in part because the cost of running AI models is increasing, and in part because the software itself doesn’t do enough to warrant the costs associated with running it — costs that are already subsidized and unprofitable for the model providers.

Outside of OpenAI (and to a lesser extent Anthropic), nobody seems to be making much revenue, with the most “successful” company being Anysphere, makers of AI coding tool Cursor, which hit $500 million “annualized” (so $41.6 million in one month) a few months ago, just before Anthropic and OpenAI jacked up the prices for “priority processing” on enterprise queries, raising its operating costs as a result. In any case, that’s some piss-poor revenue for an industry that’s meant to be the future of software. Smartwatches are projected to make $32 billion this year, and, as mentioned, the Magnificent Seven expects to make $35 billion or so in revenue from AI this year. Even Anthropic and OpenAI seem a little lethargic, both burning billions of dollars while making, by my estimates, no more than $2 billion and $6.26 billion in 2025 so far, despite projections of $5 billion and $13 billion respectively.

Outside of these two, AI startups are floundering, struggling to stay alive and raising money in several-hundred-million-dollar bursts as their negative-gross-margin businesses burn through cash. As I dug into a few months ago, I could find only 12 AI-powered companies making more than $8.3 million a month, with two of them slightly improving their revenues, specifically AI search company Perplexity (which has now hit $150 million ARR, or $12.5 million in a month) and AI coding startup Replit (which also hit $150 million ARR in September).

Both of these companies burn ridiculous amounts of money. Perplexity burned 164% of its revenue on Amazon Web Services, OpenAI and Anthropic last year, and while Replit hasn’t leaked its costs, The Information reports its gross margins in July were 23% — a figure that doesn’t include the costs of its free users, which you simply have to account for with LLMs, as free users are capable of costing you a hell of a lot of money. Problematically, your paid users can also cost you more than they bring in. In fact, every user loses you money in generative AI, because it’s impossible to do cost control in a consistent manner.

A few months ago, I did a piece about Anthropic losing money on every single Claude Code subscriber, and I’ll walk you through it in a very simplified fashion: Anthropic is, to be clear, the second-largest model developer, and has some of the best AI talent in the industry. It has a better handle on its infrastructure than anyone outside of big tech and OpenAI. It still cannot seem to fix this problem, even with weekly rate limits.

While one could assume that Anthropic is simply letting people run wild, my theory is far simpler: even the model developers have no real way of limiting user activity, likely due to the architecture of generative AI. I know it sounds insane, but at the most advanced level, model providers are still prompting their models, and whatever rate limits may be in place appear to, at times, get completely ignored, and there doesn’t seem to be anything they can do to stop it. No, really. Anthropic counts amongst its capitalist apex predators one lone Chinese man who spent $50,000 of its compute in the space of a month fucking around with Claude Code.
Even if Anthropic was profitable — it isn’t, and will burn billions this year — a customer paying $200-a-month who runs up $50,000 in costs immediately devours the margin of any user running the service that day, if not that week or month. Even if Anthropic’s costs are half the published rates, one guy amounted to 125 users’ monthly revenue. That’s not a real business! That’s a bad business with out-of-control costs, and it doesn’t appear anybody has these costs under control.

A few weeks ago, Replit — an unprofitable AI coding company — released a product called “Agent 3,” which promised to be “10x more autonomous” and offer “infinitely more possibilities,” “[testing] and [fixing] its code, constantly improving your application behind the scenes in a reflection loop.” In reality, this means you’d go and tell the model to build something and it would “go do it,” and you’ll be shocked to hear that these models can’t be relied upon to “go and do” anything. Please note that this was launched a few months after Replit raised its prices, shifting to obfuscated “effort-based” pricing that would charge “the full scope of the agent’s work.”

Agent 3 has been a disaster. Users found tasks that previously cost a few dollars were spiralling into the hundreds of dollars, with The Register reporting one customer found themselves with a $1000 bill after a week:

Another user complained that “costs skyrocketed, without any concrete results”:

As I previously reported, in late May/early June, both OpenAI and Anthropic cranked up the pricing on their enterprise customers, leading Replit and Cursor to shift their prices. This abuse has now trickled down to their customers. Replit has now released an update that lets you choose how autonomous you want Agent 3 to be, which is a tacit admission that you can’t trust coding LLMs to build software. Replit’s users are still pissed off, complaining that Replit is charging them for activity when the agent doesn’t do anything, a consistent problem across its Reddit. While Reddit is not the full summation of all users across every company, it’s a fairly good barometer of user sentiment, and man, are users pissy.

Traditionally, Silicon Valley startups have relied upon the same model: grow really fast, burn a bunch of money, then “turn the profit lever.” AI does not have a “profit lever,” because the raw costs of providing access to AI models are so high (and they’re only increasing) that the basic economics of how the tech industry sells software don’t make sense.

I’ll reiterate something I wrote a few weeks ago:

In simpler terms, it is very, very difficult to imagine what one user — free or otherwise — might cost, and thus it’s hard to charge them on a monthly basis, or tell them what a service might cost them on average. This is a huge problem with AI coding environments.

According to The Information, Claude Code was driving “nearly $400 million in annualized revenue, roughly doubling from a few weeks ago” on July 31, 2025. That annualized revenue works out to about $33 million a month for a company that predicts it will make at least $416 million a month by the end of the year, and for a product that has become the most popular coding environment in the world, from the second-largest and best-funded AI company in the world.
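Those two numbers, the $50,000 user and the $400 million “annualized” figure, are worth working through. Everything below comes from the figures above:

```python
# The Claude Code math, using only the figures cited above.

subscription = 200      # dollars per month for the top Claude Code tier
one_user_cost = 50_000  # what one power user reportedly ran up in a month

# Even if Anthropic's real cost is half the published rates:
effective_cost = one_user_cost / 2
print(f"One user consumed the monthly revenue of {effective_cost / subscription:.0f} subscribers")

# And the headline revenue figure:
claude_code_annualized = 400.0          # millions, per The Information
anthropic_target_monthly = 5_000 / 12   # millions/month, if Anthropic exits the year at a ~$5bn rate
print(f"Claude Code: roughly ${claude_code_annualized / 12:.0f} million a month")
print(f"Anthropic's year-end target: roughly ${anthropic_target_monthly:.1f} million a month")
```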
…is that it? Is that all that’s happening here?

$33 million a month, all of it unprofitable, after it felt — at least based on social media chatter and discussions with multiple different software engineers — that Claude Code had become synonymous with anything to do with LLMs. To be clear, Anthropic’s Sonnet and Opus models are consistently some of the most popular for programming on Openrouter, an aggregator of LLM usage, and Anthropic has been consistently named as “the best at coding.”

Some bright spark out there is going to say that Microsoft’s Github Copilot has 1.8 million paying subscribers, and guess what, that’s true, and in fact, I reported it! Here’s another fun fact: the Wall Street Journal reported that Microsoft loses “on average more than $20-a-month-per-user,” with “...some users [costing] the company as much as $80.” And that’s for the most popular product!

If you believe the New York Times or other outlets that simply copy and paste whatever Dario Amodei says, you’d think that the reason software engineers are having trouble finding work is because their jobs are being replaced by AI. This grotesque, abusive, manipulative and offensive lie has been propagated throughout the entire business and tech media without anybody sitting down and asking whether it’s true, or even getting a good understanding of what it is that LLMs can actually do with code.

Members of the media, I am begging you, stop doing this. I get it, every asshole is willing to give a quote saying that “coding is dead,” and every executive is willing to burp out some nonsense about replacing all of their engineers, but I am fucking begging you to either use these things yourself, or speak to people that do.

I am not a coder. I cannot write or read code. Nevertheless, I am capable of learning, and have spoken to numerous software engineers in the last few months, and basically reached a consensus of “this is kind of useful, sometimes.” However, one very silly man once said that I don’t speak to people who use these tools, so I went and spoke to three notable, experienced software engineers, and asked them to give me the straight truth about what coding LLMs can do.

In simple terms, LLMs are capable of writing code, but can’t do software engineering, because software engineering is the process of understanding, maintaining and executing code to produce functional software, and LLMs do not “learn,” cannot “adapt,” and (to paraphrase Carl Brown) break down the more of your code and variables you ask them to look at at once. It’s very easy to believe that software engineering is just writing code, but the reality is that software engineers maintain software, which includes writing and analyzing code among a vast array of different personalities and programs and problems.

Good software engineering harkens back to Brian Merchant’s interviews with translators — while some may believe that translators simply tell you what words mean, true translation is communicating the meaning of a sentence, which is cultural, contextual, regional, and personal, and often requires the exercise of creativity and novel thinking. My editor, Matthew Hughes, gave an example of this in his newsletter:

Similarly, coding is not just “a series of text that programs a computer,” but a series of interconnected characters that refers to other software in other places, that must also function now, and that must explain, on some level, to someone who has never, ever seen the code before, why it was done this way.
This is, by the way, why we have yet to see any tangible proof that AI is replacing software engineers… because it can’t.

Of all the fields supposedly at risk from “AI disruption,” coding feels (or felt) the most tangible, if only because the answer to “can you write code with LLMs” wasn’t an immediate, unilateral no. The media has also been quick to say that AI “writes software,” which is true in the same way that ChatGPT “writes novels.” In reality, LLMs can generate code, and do some software engineering-adjacent tasks, but, like all Large Language Models, break down and go totally insane, hallucinating more as the tasks get more complex. And, as I pointed out earlier, software engineering is not just coding. It involves thinking about problems, finding solutions to novel challenges, and designing stuff in a way that can be read and maintained by others, and that’s (ideally) scalable and secure.

The whole fucking point of an “AI” is that you hand shit off to it! That’s what they’ve been selling it as! That’s why Jensen Huang told kids to stop learning to code, as with AI, there’s no point. And it was all a lie. Generative AI can’t do the job of a software engineer, and it fails while also costing abominable amounts of money. Coding LLMs seem like magic at first, because they (to quote a conversation with Carl Brown) make the easy things easier, but they also make the harder things harder. They don’t even speed up engineers — they actually make them slower! Yet coding is basically the only obvious use case for LLMs.

I’m sure you’re gonna say “but I bet the enterprise is doing well!” and you are so very, very wrong. Before I go any further, let’s establish some facts:

All of this is to say that Microsoft has one of the largest commercial software empires in history, thousands (if not tens of thousands) of salespeople, and thousands of companies that literally sell Microsoft services for a living. And it can’t sell AI.

A source that has seen materials related to sales has confirmed that, as of August 2025, Microsoft has around eight million active licensed users of Microsoft 365 Copilot, amounting to a 1.81% conversion rate across the 440 million Microsoft 365 subscribers. If each of these users paid annually at the full rate of $30-a-month, that would amount to about $2.88 billion in annual revenue for a product category that makes $33 billion a fucking quarter.

And I must be clear, I am 100% sure these users aren’t all paying $30 a month. The Information reported a few weeks ago that Microsoft has been “reducing the software’s price with more generous discounts on the AI features, according to customers and salespeople,” heavily suggesting discounts had already been happening. Enterprise software is traditionally sold at a discount anyway — or, put a different way, with bulk pricing for those who sign up a bunch of users at once. In fact, I’ve found evidence that it’s been doing this for a while, with a 15% discount on annual Microsoft 365 Copilot subscriptions for orders of 10-to-300 seats mentioned by an IT consultant back in late 2024, and another that’s currently running through September 30, 2025 through Microsoft’s Cloud Solution Provider program, with up to 2400 licenses discounted if you pay upfront for the year. Microsoft seems to do this a lot, as I found another example of an offer that ran from January 1, 2025 through March 31, 2025.
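Both the conversion rate and the ceiling on revenue fall straight out of the figures above, so here is the arithmetic in full. The assumption that everybody pays full list price is deliberately generous:

```python
# Microsoft 365 Copilot, best-case revenue math using the figures above.

active_copilot_users = 8_000_000   # active licensed users, per the source
m365_subscribers = 440_000_000     # total Microsoft 365 subscribers
list_price_per_month = 30          # dollars, full list price (generously assumed for everyone)

conversion = active_copilot_users / m365_subscribers
annual_revenue = active_copilot_users * list_price_per_month * 12

print(f"Conversion rate: {conversion:.1%}")                             # ~1.8%
print(f"Best-case annual revenue: ${annual_revenue / 1e9:.2f} billion")  # ~$2.88 billion
print("...for a product category that makes $33 billion a quarter.")
```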
An “active” user is someone who has taken one action on Copilot in any Microsoft 365 app in the space of 28 days. Now, I know.

That word, “active.” Maybe you’re thinking “Ed, this is like the gym model! There are unused licenses that Microsoft is still getting paid for!” Fine! Let’s assume that Microsoft also has, based on research that suggests this is the case for all software companies, another 50% — four million — of paid Copilot licenses that aren’t being used. That makes 12 million users, which is still a putrid 2.72% conversion rate.

So, why aren’t people paying for Copilot? Let’s hear from someone who talked to The Information:

Microsoft 365 Copilot has been such a disaster that Microsoft will now integrate Anthropic’s models in an attempt to make it better. Oh, one other thing: sources also confirm GPU utilization for Microsoft 365’s enterprise Copilot is barely scratching 60%. I’m also hearing that SharePoint — another popular enterprise app from Microsoft with 250 million users — had less than 300,000 weekly active users of its AI copilot features in August.

So, The Information reported a few months ago that Microsoft’s projected AI revenues would be $13 billion, with $10 billion of that from OpenAI, leaving about $3 billion of total revenue across Microsoft 365 Copilot and any other foreseeable feature that Microsoft sells with “AI” on it. This heavily suggests that Microsoft is making somewhere between $1.5 billion and $2 billion on Azure AI or Microsoft 365 Copilot, though I suppose there are other places it could be making AI revenue too. Right? I guess. In any case, Microsoft’s net income (read: profit) in its last quarterly earnings was $27.2 billion.

One of the comfortable lies that people tell themselves is that the AI bubble is similar to the fiber boom, or the dot com bubble, or Uber, or that we’re in the “growth stage,” or that “this is what software companies do” — they spend a bunch of money, then “pull the profit lever.” This is nothing like anything you’ve seen before, because this is the dumbest shit that the tech industry has ever done.

AI data centers are nothing like fiber, because there are very few actual use cases for these GPUs outside of AI, and none of them are remotely hyperscale revenue drivers. As I discussed a month or so ago, data center development accounted for more of America’s GDP growth than all consumer spending combined, and there really isn’t any demand for AI in general, let alone at the scale that these hundreds of billions of dollars are being sunk into. The conservative estimate of capital expenditures related to data centers is around $400 billion, but given the $50 billion a quarter in private credit, I’m going to guess it breaks $500 billion, all to build capacity for an industry yet to prove itself.

And this NVIDIA-OpenAI “$100 billion funding” news should only fill you full of dread, but also it isn’t fucking finalized, stop reporting it as if it’s done, I swear to god-

Anyway, according to CNBC, “the initial $10 billion tranche is locked in at a $500 billion valuation and expected to close within a month or so once the transaction has been finalized,” with “successive $10 billion rounds” planned, “each to be priced at the company’s then-current valuation as new capacity comes online.”

At no point is anyone asking how, exactly, OpenAI builds data centers to fill full of these GPUs. In fact, I am genuinely shocked (and a little disgusted!) by how poorly this story has been told.
Let’s go point by point: To be clear, when I say OpenAI needs at least $300 billion over the next four years, that’s if you believe its projections, which you shouldn’t.

Let’s walk through its (alleged) numbers, while plagiarizing myself. According to The Information, here’s the breakdown (these are projections): OpenAI’s current reported burn is $116 billion through 2030, which means there is no way that these projections include $300 billion in compute costs, even when you factor in revenue. There is simply no space in these projections to absorb that $300 billion, and from what I can tell, by 2029, OpenAI will have actually burned more than $290 billion, assuming that it survives that long, which I do not believe it will.

Don’t worry, though. OpenAI is about to make some crazy money. Here are the projections that CFO Sarah Friar signed off on:

Just so we are clear, OpenAI intends to 10x its revenue in the space of four years, selling software and access to models in an industry with about $60 billion of revenue in 2025. How will it do this? It doesn’t say. I don’t know OpenAI CFO Sarah Friar, but I do know that signing off on these numbers is, at the very least, ethically questionable. Putting aside the ridiculousness of OpenAI’s deals, or its funding requirements, Friar has willfully allowed Sam Altman and OpenAI to state goals that defy reality or good sense, all to take advantage of investors and public markets that have completely lost the plot.

I need to be blunter: OpenAI has signed multiple different deals and contracts for amounts it cannot afford to pay, that it cannot hope to raise the money to pay for, that defy the amounts of venture capital and private credit available, all to sustain a company that will burn $300 billion and has no path to profitability of any kind.

So, as I said above, CNBC reported on September 23, 2025 that the NVIDIA deal will be delivered in $10 billion tranches, the first of which is “expected to close within a month,” and the rest delivered “as new capacity comes online.” This is, apparently, all part of a plan to build 10GW of data center capacity with NVIDIA. A few key points:

So, let’s start simple: data centers take forever to build. As I said previously, based on current reports, it’s taking Oracle and Crusoe around 2.5 years per gigawatt of data center capacity, and nowhere in these reports does one reporter take a second to say “hey, what data centers are you talking about?” or “hey, didn’t Sam Altman say back in July that he was building 10GW of data center capacity with Oracle?” But wait, now Oracle and OpenAI have made another announcement that says they’re only doing 7GW, but they’re “ahead of schedule” on 10GW? Wait, is NVIDIA’s 10GW the same 10GW as Oracle and OpenAI are working on? Is it different? Nobody seems to know or care!

Anyway, I cannot be clear enough about how unlikely it is that (as NVIDIA has said) “the first gigawatt of NVIDIA systems will be deployed in the second half of 2026,” and that’s if it has bought the land and got the permits and ordered the construction, none of which has happened yet.

But let’s get really specific on costs! Crusoe’s 1.2GW of compute for OpenAI is a $15 billion joint venture, which means a gigawatt of compute runs about $12.5 billion. Abilene’s eight buildings are each meant to hold 50,000 NVIDIA GB200 GPUs and their associated networking infrastructure, so let’s say a gigawatt is around 333,333 Blackwell GPUs.
Though this math is a little funky due to NVIDIA promising to install its new Rubin GPUs in these theoretical data centers, that means these data centers will require a little under $200 billion worth of GPUs. By my maths, that’s $325 billion in total.

I’m so tired of this. A number of you have sent me the following image with some sort of comment about how “this is how it’ll work,” and you are wrong, because this is neither how it works nor how it will work, nor accurate on any level. In the current relationship, NVIDIA is not sending OpenAI $100 billion, nor will it send it that much money, because 90% of OpenAI’s funding is gated behind building 9 or 10 gigawatts of data center capacity. In the current relationship, OpenAI does not have the money to pay Oracle. Also, can Oracle even afford to give that much money to NVIDIA? It had negative free cash flow last quarter, already has $104 billion in debt, and its biggest new customer cannot afford a single fucking thing it’s promised. The only company in this diagram that actually can afford to do any of this shit is NVIDIA, and even then it only has $56 billion cash on hand.

In any case, as I went over on Friday, OpenAI has promised about a trillion dollars between compute contracts across Oracle, Microsoft, Google and CoreWeave, 17 gigawatts of promised data centers in America between NVIDIA and “Stargate,” several more gigawatts of international data centers, custom chips from Broadcom, and its own company operations. How exactly does this get paid for? Nobody seems to ask these questions! Why am I the asshole doing this? Don’t we have tech analysts that are meant to analyse shit? AHhhhh-

Every time I sit down to write about this subject the newsletters seem to get longer, because people are so painfully attached to the norms and tropes of the past. This post is, already, 17,500 words — a record for this newsletter — and I’ve still not finished editing and expanding it.

What we’re witnessing is one of the most egregious wastes of capital in history, sold by career charlatans with their reputations laundered by a tech and business media afraid to criticize the powerful, and by analysts that don’t seem to want to tell their investors the truth. There are no historic comparisons here — even Britain’s abominable 1800s railway bubble, which absorbed half of the country’s national income, created valuable infrastructure for trains, a vehicle that can move people to and from places.

GPUs are not trains, nor are they cars, or even CPUs. They are not adaptable to many other kinds of work, nor are they “the infrastructure of the future of tech,” because they’re already quite old, and with everybody focused on buying them, you’d absolutely have seen one other use case by now that actually mattered. GPUs are expensive, power-hungry, environmentally destructive and require their own kinds of cooling and server infrastructure, making every GPU data center an environmental and fiscal bubble unto itself. And, whereas the Victorian train infrastructure still exists in the UK — though it has been upgraded over the years — a GPU has a limited useful lifespan. These are cards that can — and will — break after a period of extended usage, whether that period is five years or longer, and they’ll inevitably be superseded by something better and more powerful, meaning that the resale value of that GPU will only go down, with a price depreciation akin to that of a new car.
I am telling you, as I have been telling you for years, again and again and again, that the demand is not there for generative AI, and the demand is never, ever arriving. The only reason anyone humours any of this crap is the endless hoarding of GPUs to build capacity for a revolution that will never arrive. Well, that and OpenAI, a company built and sold on lies about ChatGPT’s capabilities. ChatGPT’s popularity — and OpenAI’s hunger for endless amounts of compute — have created the illusion of demand due to the sheer amount of capacity required to keep its services operational, all so it can burn $8 billion or more in 2025 and, if my estimates are right, nearly a trillion dollars by 2030.

This NVIDIA deal is a farce — an obvious attempt by the largest company on the American stock market to prop up the one significant revenue-generator in the entire industry, knowing that time is running out for it to create new avenues for eternal growth. I’d argue that NVIDIA’s deal also shows the complete contempt that these companies have for the media. There are no details about how this deal works beyond the initial $10 billion, there’s no land purchased, no data center construction started, and yet the media slurps it down without a second thought.

I am but one man, and I am fucking peculiar. I did not learn financial analysis in school, but I appear to be one of the few people doing even the most basic analysis of these deals, and while I’m having a great time doing so, I am also exceedingly frustrated at how little effort is being put into prying apart these deals.

I realize how ridiculous all of this sounds. I get it. There’s so much money being promised to so many people, market rallies built off the back of massive deals, and I get that the assumption is that this much money can’t be wrong, that this many people wouldn’t just say stuff without intending to follow through, or without considering whether their company could afford it. I know it’s hard to conceive that hundreds of billions of dollars could be invested in something for no apparent reason, but it’s happening, right god damn now, in front of your eyes, and I am going to be merciless toward anyone who attempts to write a “how could we have seen this coming?” piece.

Generative AI has never been reliable, has always been unprofitable, and has always been unsustainable, and I’ve been saying so since February 2024. The economics have never made sense, something I’ve said repeatedly since April 2024, and when I wrote “How Does OpenAI Survive?” in July 2024, I had multiple people suggest I was being alarmist. Here’s some alarmism for you: the longer it takes for OpenAI to die, the more damage it will cause to the tech industry.

On Friday, when I put out my piece on OpenAI needing a trillion dollars, I asked analyst Gil Luria if the capital was there to build the 17 gigawatts that OpenAI had allegedly planned to build. He said the following:

That doesn’t sound good! Anyway, as I discussed earlier, venture capital could run out in six quarters, with investor and researcher Jon Sakoda estimating that there will only be around $164 billion of dry powder (available capital) in US VC firms by the end of 2025.
In July, The French Tech Journal reported (using Pitchbook data) that global venture capital deal activity reached its lowest first-half total since 2018, with $139.4 billion in deal value in the first half of 2025, down from $183.4 billion in the first half of 2024, meaning that any further expansion or demands for venture capital from OpenAI will likely sap the dwindling funds available to other startups.

Things get worse when you narrow things to US venture capital. In a piece from April, EY reported that VC-backed investment in US companies hit $80 billion in Q1 2025, but “one $40 billion deal” accounted for half of the investment — OpenAI’s $40 billion round, of which only $10 billion has actually closed, and that didn’t happen until fucking June. Without the imaginary money from OpenAI, US venture would have declined by 36%.

The longer that OpenAI survives, the longer it will sap the remaining billions from the tech ecosystem, and I expect it to extend its tendrils into private credit too. The $325 billion it needs just to fulfil its NVIDIA contract, albeit over four years, is an egregious sum that I believe exceeds the available private capital in the world. Let’s get specific, and check out the top 10 private equity firms’ available capital!

Assuming that all of this capital is currently available, the top 10 private equity firms in the world have around $477 billion of available capital. We can, of course, include investment banks — Goldman Sachs had around $520 billion of cash on hand at the end of its last quarter, and JPMorgan over $1.7 trillion — but JPMorgan has only dedicated $50 billion in direct lending commitments as of February 2025, and while Goldman Sachs expanded its direct private credit lending by $15 billion back in June, that appears to be an extension of its “more than $20 billion” direct lending close from mid-2024. Include both of those, and that brings us up to — if we assume that all of these funds are available — $562 billion in capital and about $164 billion in US venture available to spend, and that’s meant to go to more places than just OpenAI. Sure, sure, there’s more than just the top 10 private equity firms and there’s venture money outside of the US, but what could it be? Like, another $150 billion?

You see, OpenAI needs to buy those GPUs, and it needs to build those data centers, and it needs to pay its thousands of staff and its marketing and sales costs too. While OpenAI likely wouldn’t be the one raising the money for the data centers — and honestly, I’m not sure who would do it at this point — somebody is going to need to build TWENTY GIGAWATTS OF DATA CENTERS if we’re to believe both Oracle and NVIDIA.

You may argue that venture funds and private credit can raise more, and you’re right! But at this point, there have been few meaningful acquisitions of AI companies, and zero exits from the billions of dollars put into data centers. Even OpenAI admits in its own announcement about new Stargate sites that this will be a “$400 billion investment over 3 years.” Where the fuck is that money coming from? Is OpenAI really going to absorb massive chunks of all available private credit and venture capital for the next few years?
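Here is the tally behind that $562 billion figure, using only the commitments listed above and generously assuming every dollar of it is actually deployable:

```python
# The available-capital tally, using only the figures cited above (in billions).

top10_pe_dry_powder = 477         # top 10 private equity firms' available capital
jpmorgan_direct_lending = 50      # JPMorgan's dedicated direct lending commitment
goldman_direct_lending = 20 + 15  # the "more than $20bn" close plus the $15bn June expansion

total_private = top10_pe_dry_powder + jpmorgan_direct_lending + goldman_direct_lending
us_vc_dry_powder = 164            # estimated US VC dry powder by the end of 2025

print(f"Private capital (generously counted): ${total_private} billion")  # $562 billion
print(f"Plus US venture dry powder:           ${us_vc_dry_powder} billion")
print(f"Against just the NVIDIA buildout:     $325 billion")
# And that same pool is supposed to fund every other startup, buyout and data center too.
```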
And no, god, stop saying the US government will bail this out. It would have to bail out hundreds of billions of dollars — there is no scenario where it’s anything less than that — and I’ve already been over this.

While the US government has spent equivalent sums in the past to support private business (the total $440 billion disbursed during the Great Recession’s TARP program, where the Treasury bought toxic assets from investment banks to stop them from imploding à la Lehman, springs to mind), it’s hard to imagine any case where OpenAI is seen as being as vital to the global financial system — and the economic health of the US — as the banking system.

Sure, we spent around $1tn — if we’re being specific, $953bn — on the Paycheck Protection Program during the Covid era, but that was to keep people employed at a time when the economy outside of Zoom and Walmart had, for all intents and purposes, ceased to exist. There was an urgency that doesn’t apply here. If OpenAI goes tits up, SoftBank loses some money — nothing new there — and Satya Nadella has to explain why he spent tens of billions of dollars on a bunch of data centers filled with $50,000 GPUs that are, at this point, ornamental. And while there will be — and have been — disastrous economic consequences, they won’t be as systemically catastrophic as those of the pandemic, or the global financial crisis. To be clear, it’ll be bad, but not as bad.

And there’s also the problem of moral hazard — if the government steps in, what’s to stop big tech chasing its next fruitless rainbow? — and optics. If people resented bailing out the banks after they acted like profligate gamblers and lost, how will they feel about bailing out fucking Sam Altman and Jensen Huang?

I do apologize for the length of this piece, but the significance of this bubble requires depth. There is little demand, little real money, and little reason to continue, and the sheer lack of responsibility and willingness to kneel before the powerful fills me full of angry bile. I understand many journalists are not in a position where they can just write “this shit sounds stupid,” but we have entered a deeply stupid era, and by continuing to perpetuate the myth of AI, the media guarantees that retail investors and regular people’s 401Ks will suffer.

It is now inevitable that this bubble bursts. Deutsche Bank has said the AI boom is unsustainable outside of tech spending “remaining parabolic,” which it says “is highly unlikely,” and Bain & Company has said that $2 trillion in new revenue is needed to fund AI’s scaling, and even that math is completely fucked as it talks about “AI-related savings”:

Even when stared in the face by a ridiculous idea — $2 trillion of new revenue in a global software market that’s expected to be around $817 billion in 2025 — Bain still oinks out some nonsense about the “savings from applying AI in sales, marketing, customer support and R&D,” yet another myth perpetuated, I assume, to placate the fucking morons sinking billions into this.

Every single “vibe coding is the future,” “the power of AI,” and “AI job loss” story written perpetuates a myth that will only lead to more regular people getting hurt when the bubble bursts. Every article written about OpenAI or NVIDIA or Oracle that doesn’t explicitly state that the money doesn’t exist, that the revenues are impossible, that one of the companies involved burns billions of dollars and has no path to profitability, is an act of irresponsible make-believe and mythos.

I am nobody. I am not a financier. I am not anybody special. I just write a lot, and read a lot, and can do the most basic maths in the world.
I am not trying to be anything other than myself, nor do I have an agenda, other than the fact that I like doing this and I hate how this story is being told. I never planned for this newsletter to get this big, and now that it has, I’m going to keep doing the same thing every week.

I also believe that the way to stop this happening again is to have a thorough and well-sourced explanation of everything as it happens, ripping down the narratives as they’re spun and making it clear who benefits from them, and how, and why they’re choosing to do so. When things collapse, we need to be clear about how many times people chose to look the other way, or to find good-faith ways to interpret bad-faith announcements and leaks.

So, how could we have seen this coming? I don’t know. Did anybody try to fucking look?


OpenAI Needs A Trillion Dollars In The Next Four Years

Shortly before publishing this newsletter, I spoke with Gil Luria, Managing Director and Analyst at D.A. Davidson, and asked him whether the capital was there to build the 17 gigawatts of capacity that OpenAI has promised. He said the following:

There is quite literally not enough money to build what OpenAI has promised. A few days ago, NVIDIA and OpenAI announced a partnership that would involve NVIDIA “investing $100 billion” into OpenAI, and the reason I put that in quotation marks is that the deal is really fucking weird. Based on the text of its own announcement, NVIDIA “intends to invest up to $100 billion in OpenAI progressively as each gigawatt is deployed,” except CNBC reported a day later that “[the] initial $10 billion tranche is locked in at a $500 billion valuation and expected to close within a month or so once the transaction has been finalized,” which also adds the important detail that this deal isn’t even god damn finalized.

In any case, OpenAI has now committed to building 10 gigawatts of data center capacity at a non-specific location with a non-specific partner, so that it can unlock $10 billion of funding per gigawatt installed. I also want to be clear that it has not explained where these data centers are, or who will build them, or, crucially, who will actually fund them.

The very next day, OpenAI announced five more data centers planned “as part of the Stargate initiative,” “bringing Stargate’s current planned capacity to nearly 7 gigawatts,” which is when things get a little confusing. Altman said back in July that Oracle and OpenAI were “committed to delivering 10GW of new compute capacity for Stargate,” and that they were adding 4.5GW of capacity in the US on top of the 1.2GW already planned in Abilene, Texas. In fact, the Shackelford data center that is allegedly “new” is the 1.4GW facility I talked about a few weeks ago, though I see no mention of the Wisconsin one tied to the $38 billion loan raised by Vantage Data Centers. This announcement involves a site in Doña Ana County, New Mexico and an “undisclosed location in the Midwest,” which I assume is Wisconsin, and eager members of the media could, I dunno, look it up — look up any of this stuff, use the internet to look up the news, all of which is so easily found.

But here’s my favourite part of the story: To be clear, the Lordstown, Ohio site is not a data center, at least according to SoftBank, which said it will be a “data center equipment manufacturing facility.” The Milam County data center is likely this one, and it doesn’t appear that SB Energy has even broken ground.

Anyway, I want to get really specific about this, because the rest of the media is reporting these stories as if these data centers will pop up overnight, the money will magically appear, and there will, indeed, be enough of it to go around. Based on current reports, it’s taking Oracle and Crusoe around 2.5 years per gigawatt of data center capacity. Crusoe’s 1.2GW of compute for OpenAI is a $15 billion joint venture, which means a gigawatt of compute runs about $12.5 billion. Abilene’s eight buildings are each meant to hold 50,000 NVIDIA GB200 GPUs and their associated networking infrastructure, so let’s say a gigawatt is around 333,333 Blackwell GPUs at $60,000 apiece, or about $20 billion a gigawatt. So, each gigawatt is about $32.5 billion.
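Since this per-gigawatt math underpins everything that follows, here it is laid out end to end. All inputs are the figures above, and the $60,000 GPU price is the same assumption:

```python
# Per-gigawatt cost of an OpenAI-style data center, using the figures above.

abilene_cost = 15.0              # billions, Crusoe's 1.2GW joint venture for OpenAI
abilene_capacity_gw = 1.2
gpus_per_gw = 8 * 50_000 / 1.2   # eight buildings of 50,000 GB200s across 1.2GW, ~333,333 per GW
gpu_price = 60_000               # dollars per Blackwell GPU (assumed)

infra_per_gw = abilene_cost / abilene_capacity_gw   # ~$12.5 billion
gpu_cost_per_gw = gpus_per_gw * gpu_price / 1e9     # ~$20 billion
total_per_gw = infra_per_gw + gpu_cost_per_gw       # ~$32.5 billion

print(f"Infrastructure per GW: ${infra_per_gw:.1f} billion")
print(f"GPUs per GW:           ${gpu_cost_per_gw:.1f} billion")
print(f"Total per GW:          ${total_per_gw:.1f} billion")
print(f"10GW for the NVIDIA deal: ${total_per_gw * 10:.0f} billion")  # ~$325 billion
```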
For OpenAI to actually receive its $100 billion in funding from NVIDIA will require it to spend roughly $325 billion — consisting of $125 billion in data center infrastructure costs and $200 billion in GPUs. If you’re reporting this story without at least attempting to report these numbers, you are failing to give the general public the full extent of what these companies are promising.

According to the New York Times, OpenAI has “agreements in place to build more than $400 billion in data center infrastructure,” but it has also now promised to spend $400 billion with Oracle over the next five years. What the fuck is going on? Are we just reporting any old shit that somebody says? Oracle hasn’t even got the money to pay for those data centers! Oracle is currently raising $15 billion in bonds to get a start on… something, even though $15 billion is a drop in the bucket for the sheer scale and cost of these data centers. Thankfully, Vantage Data Centers is raising $25 billion to handle the Shackelford (ready, at best, in mid-to-late 2027) and Port Washington, Wisconsin (we have no idea — it doesn’t even appear Vantage has broken ground) data center plans, allowing Oracle to share the burden of data centers that will likely not be built until fucking 2027 at the earliest.

Anyway, putting all of that aside, OpenAI has now made multiple egregious, ridiculous, fantastical and impossible promises to many different parties, in amounts ranging from $50 million to $400 billion, all of which are due within the next five years. Meeting them will require hundreds of billions of dollars — either through direct funding, loans, or having partners like Oracle or NVIDIA take the burden, though at this point I believe both companies are genuinely failing their investors by not protecting them from Clammy Sam Altman, a career liar who somehow believes he can mobilize nearly a trillion dollars and have the media print anything he says — mostly because they will print anything he says, even when he says he wants to build 1 gigawatt of AI infrastructure a week.

Today, I’m going to go into detail about every single promise made by Sam Altman and his cadre of charlatans, and give you as close to a hard dollar amount as I can of what it would cost to meet these promises. To be clear, I am aware that in some of these cases another party will take on the burden of capital — but these dollars must be raised, and OpenAI must make sure they are raised. I’ll also get into the raw costs of running OpenAI, and how dire things look when you add everything up. In fact, based on my calculations, OpenAI needs at least $500 billion just to fund its own operations, and at least another $432 billion raised through partners or associated entities taking on debt just to make it through the next few years. And that’s if OpenAI hits the insane revenue targets it’s set!


Is There Any Real Money In Renting Out AI GPUs?

NVIDIA has become a giant, unhealthy rock on which the US markets — and to some extent the US economy — sit, representing 7-to-8% of the value of the market and a large percentage of the $400 billion in AI data center capex expected to be spent this year, which in turn accounted for more GDP growth than all consumer spending combined.

I originally started writing this piece about something else entirely — the ridiculous Oracle deal, what the consequences of me being right might be, and a lot of ideas that I’ll get to later — but I couldn’t stop looking at what NVIDIA is doing. To be clear, NVIDIA is insane, making 88% of its massive revenues from selling the GPUs and associated server hardware that underpin the inference and training of Large Language Models, a market it effectively created by acquiring Mellanox — whose hardware allowed for the high-speed networking that connects massive banks of servers and GPUs together — for $6.9 billion in 2019, a deal now under investigation by China’s antitrust authorities.

Since 2023, NVIDIA has made an astonishing amount of money from its data center vertical, going from making $47 billion in the entirety of its Fiscal Year 2023 to making $41.1 billion in its last quarterly earnings alone. What’s even more remarkable is how little money anyone is making as a result, with the combined revenues of the entire generative AI industry unlikely to cross $40 billion this year, even when you include companies like AI compute company CoreWeave, which expects to make a little over $5 billion or so this year, though most of that revenue comes from Microsoft, OpenAI (funded by Microsoft and Google, who are paying CoreWeave to provide compute to OpenAI, despite OpenAI already being a client of CoreWeave, both through Microsoft and in its own name)… and now NVIDIA itself, which has agreed to buy $6.3 billion of any unsold cloud compute through, I believe, the next four years.

Hearing about this deal made me curious. Why is NVIDIA acting as a backstop to CoreWeave? And why is it paying to rent back thousands of its GPUs for $1.5 billion over four years from Lambda, another AI compute company it invested in? The answer is simple: NVIDIA is effectively incubating its own customers, creating the contracts necessary for them to raise debt to buy GPUs — from NVIDIA, of course — which can, in turn, be used as collateral for further loans to buy even more GPUs.

These compute contracts are used by AI compute companies as a form of collateral — proof of revenue to reassure creditors that they’re good for the money, so that they can continue to raise mountains of debt to build more data centers to fill with more GPUs from NVIDIA. This has also created demand for companies like Dell and Supermicro, which accounted for a combined 39% of NVIDIA’s most recent quarterly revenues. Dell and Supermicro buy GPUs sold by NVIDIA and build the server architecture around them necessary to provide AI compute, reselling them to companies like CoreWeave and Lambda, who also buy GPUs of their own and have preferential access from NVIDIA. You’ll be shocked to hear that NVIDIA also invested in both CoreWeave and Lambda, that Supermicro also invested in Lambda, and that Lambda also gets its server hardware from Supermicro.
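It is worth pausing on the sheer scale of the money flowing to NVIDIA here. This uses just the figures above, and the annualized run rate is my own rough extrapolation, not a forecast:

```python
# NVIDIA's data center vertical versus the industry it supposedly serves,
# using the figures cited above.

last_quarter_data_center = 41.1   # billions, NVIDIA's most recent quarter alone
industry_revenue_ceiling = 40.0   # billions, what the whole generative AI industry may make in 2025

annualized_run_rate = last_quarter_data_center * 4  # rough extrapolation
print(f"Latest quarter of data center sales: ${last_quarter_data_center} billion")
print(f"Implied annual run rate:             ${annualized_run_rate:.0f} billion")
print(f"Entire generative AI industry, 2025: under ${industry_revenue_ceiling:.0f} billion")
# NVIDIA's quarterly data center sales alone now exceed what the whole industry
# buying those GPUs is likely to make in revenue across all of 2025.
```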
While this is the kind of merciless, unstoppable capitalism that has made Jensen Huang such a success, there's an underlying problem: these companies become burdened with massive debt, used to send money to NVIDIA, Supermicro (an AI server/architecture reseller), and Dell (another reseller that works directly with CoreWeave), and there doesn't actually appear to be mass market demand for AI compute, other than the voracious hunger to build more of it.

In a thorough review of just about everything ever written about them, I found a worrying pattern within the three major neoclouds (CoreWeave, Lambda, and Nebius): a lack of any real revenue outside of Microsoft, OpenAI, Meta, Amazon, and of course NVIDIA itself, and a growing pile of debt raised in expectation of demand that I don't believe will ever arrive. To make matters worse, I've also found compelling evidence that all three of these companies lack the capacity to actually serve massive contracts like OpenAI's $11.9 billion deal with CoreWeave (and an additional $4 billion added a few months later), or Nebius' $17.4 billion deal with Microsoft, both of which were used to raise debt for each company.

On some level, NVIDIA's neocloud play was genius, creating massive demand for its own GPUs, both directly and through resellers, and creating competition with big tech firms like Microsoft's Azure Cloud and Amazon Web Services, suppressing prices in cloud compute and forcing them to buy more GPUs to compete with CoreWeave's imaginary scale. The problem is that there is no real demand outside of big tech's own alleged need for compute. Across the board, CoreWeave, Nebius and Lambda have similar clients, with the majority of CoreWeave's revenue coming from companies offering compute to OpenAI or from NVIDIA's own "research" compute.

Neoclouds exist as an outgrowth of NVIDIA, taking on debt using GPUs as collateral, which they use to buy more GPUs, which they then use as collateral along with the compute contracts they sign with either OpenAI, Microsoft, Amazon or Google. Beneath the surface of the AI "revolution" lies a dirty secret: most of the money is just one of four companies feeding cash to a company that NVIDIA incubated specifically to buy GPUs and their associated hardware. These neoclouds are entirely dependent on a continual flow of private credit from firms like Goldman Sachs (Nebius, CoreWeave, Lambda for its IPO), JPMorgan (Lambda, Crusoe, CoreWeave), and Blackstone (Lambda, CoreWeave), who have in a very real sense created an entire debt-based infrastructure to feed billions of dollars directly to NVIDIA, all in the name of an AI revolution that's yet to arrive.

Those billions — an estimated $50 billion a quarter for at least the last three quarters — will eventually come with the expectation of some sort of return, yet every neocloud is a gigantic money loser, with CoreWeave burning $300 million in the last quarter and expecting to spend more than $20 billion in capital expenditures in 2025 alone. At some point the lack of real money in these companies will make them unable to pay their ruinous debt, and with NVIDIA's growth already slowing, I think we're watching a private credit bubble grow with no way for any of the money to escape. I'm not sure where it'll end, but it's not going to be pretty. Let's begin.


Oracle and OpenAI Are Full Of Crap

This week, something strange happened. Oracle, a company that had just missed on its earnings and revenue estimates, saw a more-than-39% single-day bump in its stock, leading a massive market rally. Why? Because it said its remaining performance obligations (RPOs) — contracts signed that its customers have yet to pay — had increased by $317 billion from the previous quarter, with CNBC reporting at the time that this was likely part of Oracle and OpenAI's planned additional 4.5 gigawatts of data center capacity being built in the US.

Analysts fawned over Oracle — again, as it missed estimates — with TD Cowen's Derrick Wood saying it was a "momentous quarter" (again, it missed) and that these numbers were "really amazing to see," and Guggenheim Securities' John DiFucci saying he was "blown away." Deutsche Bank's Brad Zelnick added that "[analysts] were all kind of in shock, in a very good way."

RPOs, while standard (and required) accounting practice and based on actual signed contracts, are being used by Oracle as a form of marketing. Plans change, contracts can be canceled (usually with a kill fee, but nevertheless), and, especially in this case, clients can either not have the money to pay or die for the very same reason they can't pay. In Oracle's case, it isn't simply promising ridiculous growth; it is effectively saying it'll become the dominant player in all cloud compute.

A day after Oracle's earnings and a pornographic day of market swings, the Wall Street Journal reported that OpenAI and Oracle had signed a $300 billion deal, starting in "2027," though the Journal neglected to say whether that was the calendar year or Oracle's FY2027 (which starts June 1, 2026). Oracle claims that it will make $18 billion in cloud infrastructure revenue in FY2026, $32 billion in FY2027, $73 billion in FY2028, $114 billion in FY2029, and $144 billion in FY2030. While all of this isn't necessarily OpenAI (as it adds up to $381 billion), it's fair to assume that the majority of it is. This means — as OpenAI accounts for $300 billion of the $317 billion in new contracts added by Oracle, and assuming OpenAI makes up 78% of its cloud infrastructure revenue ($300 billion out of $381 billion) — that OpenAI intends to spend over $88 billion fucking dollars on compute in FY2029, and $110 billion, AKA nearly as much as Amazon Web Services makes in a year, in FY2030.

A sidenote on percentages, and how I'm going to talk about this going forward: if I'm honest, there's also a compelling argument that more of it is OpenAI. Who else is using this much compute? Who has agreed, and why?

In any case, if you trust Oracle and OpenAI, this is what you are choosing to believe. I want to write something smart here, but I can't get away from saying that this is all phenomenally, astronomically, ridiculously stupid. OpenAI, at present, has made about $6.26 billion in revenue this year, and it leaked a few days ago that it will burn $115 billion "through 2029," a statement that is obviously, patently false. Let's take a look at the chart from The Information.

A note on "free cash flow": these numbers may look a little different because OpenAI is now leaking free cash flow instead of losses, likely because it lost $5 billion in 2024, which included $1 billion in losses from "research compute amortization," likely referring to spreading the cost of R&D out across several years, which means it already paid it. OpenAI also lost $700 million from its revenue share with Microsoft.
In any case, this is how OpenAI is likely getting its "negative $2 billion" number. Personally, I don't like this as a means of judging this company's financial health, because it's very clear it's using it to make its losses seem smaller than they are. The Information also reports that OpenAI will, in totality, spend $350 billion on compute from here until 2030, but claims it'll only spend $100 billion on compute in that year. If I'm honest, I believe it'll be more, based on how much Oracle is projecting. OpenAI represents $300 billion of the $317 billion in new contracts Oracle added, which heavily suggests OpenAI makes up nearly all of Oracle's projected cloud infrastructure revenue from 2027 through 2030, and that OpenAI will be spending more like $140 billion in that year.

As I'll reveal in this piece, I believe OpenAI's actual burn is over $290 billion through 2029, and these leaks were intentional, meant to muddy the waters around how much its actual costs would be. There is no way a $115 billion burn rate from 2025 through 2029 includes these costs, and I am shocked that more people aren't doing the basic maths necessary to evaluate this company. The timing of the leak — which took place on September 5, 2025, five days before the Oracle deal was announced — always felt deeply suspicious, as it's unquestionably bad news... unless, of course, you are trying to undersell how bad your burn rate is. I believe that OpenAI's leaked free cash flow projections intentionally leave out the Oracle contract as a means of avoiding scrutiny. I refuse to let that happen.

So, even if OpenAI somehow had the money to pay for its compute — it won't, but it projects, according to The Information, that it'll make one hundred billion dollars in 2028 — I'm not confident that Oracle will actually be able to build the capacity to deliver it. Vantage Data Centers, the partner building the sites, will be taking on $38 billion of debt to build two sites in Texas and Wisconsin, only one of which has actually broken ground from what I can tell, and unless it has found a miracle formula that can grow data centers from nothing, I see no way that it can provide OpenAI with $70 billion or more of compute in FY2027.

Oracle and OpenAI are working together to artificially boost Oracle's stock based on a contract that is, from everything I can see, impossible for either party to fulfill. The fact that this has led to such an egregious pump of Oracle's stock is an utter disgrace, and a sign that the markets and analysts are no longer representative of any rational understanding of a company's value.

Let me be abundantly clear: Oracle and OpenAI's deal says nothing about demand for GPU compute. OpenAI is the largest user of compute in the entirety of the generative AI industry. Anthropic expects to burn $3 billion this year (so we can assume its compute costs are $3 billion to $5 billion; Amazon is estimated to make $5 billion in AI revenue this year, so I think that's a fair assumption), and xAI burns through a billion dollars a month. CoreWeave expects about $5.3 billion of revenue in 2025, and per The Information, Lambda, another AI compute company, made more than $250 million in the first half of 2025. If we assume that all of these companies were active revenue participants (we shouldn't, as xAI mostly handles its own infrastructure), I estimate the global compute market is about $40 billion in totality, at a time when AI adoption is trending downward in large companies, according to Apollo's Torsten Sløk.
And yes, Nebius signed a $17.4 billion, four-year-long deal with Microsoft, but Nebius now has to raise $3 billion to build the capacity to acquire "additional compute power and hardware, [secure] land plots with reliable providers, and [expand] its data center footprint," because Nebius, much like CoreWeave and Oracle, doesn't have the compute to service these contracts. All three have seen a 30% bump in their stock in the last week.

In any case, today I'm going to sit down and walk you through the many ways in which the Oracle and OpenAI deal is impossible for either party to fulfill. OpenAI is projecting fantastical growth in an industry that's already begun to contract, and Oracle has yet to even start building the data centers necessary to provide the compute that OpenAI allegedly needs.
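For reference, here is the percentage math from the Oracle section worked through in a few lines of Python. The inputs are the figures cited above; the piece rounds the resulting share down to 78%, which is where the "over $88 billion" and "$110 billion" numbers come from, while the exact ratio gives slightly higher figures.

```python
# Reproducing the percentage math from the Oracle section above.
# Inputs are the figures cited in the piece; rounding is mine.

new_rpo_from_openai = 300e9  # the reported OpenAI deal
new_rpo_total = 317e9        # Oracle's quarterly RPO increase
oracle_cloud_infra = {       # Oracle's projected cloud infrastructure revenue by fiscal year
    "FY2026": 18e9, "FY2027": 32e9, "FY2028": 73e9, "FY2029": 114e9, "FY2030": 144e9,
}

total_projected = sum(oracle_cloud_infra.values())    # ~$381B
openai_share = new_rpo_from_openai / total_projected  # ~78-79%

print(f"Total projected cloud infra revenue: ${total_projected / 1e9:.0f}B")
print(f"OpenAI's assumed share of it: {openai_share:.0%}")
for year in ("FY2029", "FY2030"):
    implied = oracle_cloud_infra[year] * openai_share
    print(f"Implied OpenAI compute spend in {year}: ${implied / 1e9:.0f}B")
```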


Why Everybody Is Losing Money On AI

Hello and welcome to another premium newsletter. Thanks as ever for subscribing, and please email me at [email protected] to say hello.

As I've written again and again, the costs of running generative AI do not make sense. Every single company offering any kind of generative AI service — outside of those offering training data and services like Turing and Surge — is, from every report I can find, losing money, and doing so in a way that heavily suggests there's no way to improve their margins.

In fact, let me explain an example of how ridiculous everything has got, using points I'll be repeating behind the premium break. Anysphere is a company that sells a subscription to its AI coding app Cursor, and said app predominantly uses compute from Anthropic via its Claude Sonnet 4 and Opus 4.1 models. Per Tom Dotan at Newcomer, Cursor sends 100% of its revenue to Anthropic, which then takes that money and puts it into building out Claude Code, a competitor to Cursor. Cursor is Anthropic's largest customer. Cursor is deeply unprofitable, and was that way even before Anthropic chose to add "Service Tiers," jacking up the prices for enterprise apps like Cursor.

My gut instinct is that this is an industry-wide problem. Perplexity spent 164% of its revenue in 2024 between AWS, Anthropic and OpenAI. And one abstraction higher (as I'll get into), OpenAI spent 50% of its revenue on inference compute costs alone, and 75% of its revenue on training compute too (and ended up spending $9 billion to lose $5 billion). Yes, those numbers add up to more than 100%, that's my god damn point. Large Language Models are too expensive, to the point that anybody funding an "AI startup" is effectively sending that money to Anthropic or OpenAI, who then immediately send that money to Amazon, Google or Microsoft, who are yet to show that they make any profit on selling it.

Please don't waste your breath saying "costs will come down." They haven't, and they're not going to. Despite categorically wrong boosters claiming otherwise, the cost of inference — everything that happens from when you put in a prompt to when the model generates an output — is increasing, in part thanks to the token-heavy generations necessary for "reasoning" models to produce their outputs, and with reasoning being the only way to get "better" outputs, they're here to stay (and will continue burning shit tons of tokens).

This has a very, very real consequence. Christopher Mims of the Wall Street Journal reported last week that software company Notion — which offers AI that boils down to "generate stuff, search, meeting notes and research" — had AI costs eat 10% of its profit margins to provide literally the same crap that everybody else does. As I discussed a month or two ago, the increasing cost of AI has begun a kind of subprime AI crisis, where Anthropic and OpenAI are having to charge more for their models and increase the prices on their enterprise customers to boot.

As discussed previously, OpenAI lost $5 billion and Anthropic $5.3 billion in 2024, with OpenAI expecting to lose upwards of $8 billion and Anthropic — somehow — only $3 billion in 2025. I have severe doubts that these numbers are realistic, with OpenAI burning at least $3 billion in cash on salaries this year alone, and Anthropic somehow burning two billion dollars less on revenue that has, if you believe its leaks, increased 500% since the beginning of the year.
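To make the pass-through problem concrete, here is a minimal sketch using only the percentages cited above. The chain (app -> model provider -> cloud provider) is simplified, and the helper function is mine, not anything from these companies' reporting.

```python
# A minimal sketch of the revenue pass-through problem described above.
# The percentages are the ones cited in the piece; the chain itself is simplified.

def keeps(revenue: float, share_paid_upstream: float) -> float:
    """Return what is left after a company sends a share of its revenue upstream."""
    return revenue * (1 - share_paid_upstream)

# Cursor: reported as sending 100% of its revenue to Anthropic.
print(f"Cursor keeps:     {keeps(1.00, 1.00):.0%} of each revenue dollar")

# Perplexity: reported as spending 164% of 2024 revenue on AWS, Anthropic and OpenAI.
print(f"Perplexity keeps: {keeps(1.00, 1.64):.0%} of each revenue dollar")

# OpenAI 2024: 50% of revenue on inference compute plus 75% on training compute.
print(f"OpenAI keeps:     {keeps(1.00, 0.50 + 0.75):.0%} of each revenue dollar "
      "(before salaries, R&D, and everything else)")
```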
Though I can't say for sure, I expect OpenAI to burn at least $15 billion in compute costs this year alone, and wouldn't be surprised if its burn was $20 billion or more. At this point, it's becoming obvious that it is not profitable to provide model inference, despite Sam Altman recently saying that OpenAI was. He is no doubt trying to play silly buggers with the concept of gross profit margins — suggesting that inference is "profitable" as long as you don't include training, staff, R&D, sales and marketing, and any other indirect costs. I will also add that OpenAI pays a discounted rate on its compute.

In any case, we don't have even one — literally one — profitable model developer, not one company providing these services that isn't posting a multi-million- or billion-dollar loss. In fact, even if you remove the cost of training models from OpenAI's 2024 costs (provided by The Information), OpenAI would still have lost $2.2 billion fucking dollars. One of you will say "oh, actually, this is standard accounting." If that's the case, OpenAI had a 10% gross profit margin in 2024, and while OpenAI has leaked that it has a 48% gross profit margin in 2025, Altman also claimed that GPT-5 scared him, comparing it to the Manhattan Project. I do not trust him.

Generative AI has a massive problem that the majority of the tech and business media has been desperately avoiding discussing: every single company is unprofitable, even those providing the models themselves. Reporters have spent years hand-waving around this issue, insisting that "these companies will just work it out," yet never really explaining how they'd do so other than "the cost of inference will come down" or "new silicon will bring down the cost of compute." Neither of these things has happened, and it's time to take a harsh look at the rotten economics of the Large Language Model era.

Generative AI companies — OpenAI and Anthropic included — lose millions or billions of dollars, and so do the companies building on top of them, in part because the costs associated with delivering models continue to increase. Integrating Large Language Models into your product already loses you money, and that's at prices where the model provider (e.g., OpenAI or Anthropic) is itself losing money. I believe that generative AI is, at its core, unprofitable, and that no company building its core services on top of models from Anthropic or OpenAI has a path to profitability outside of massive, unrealistic price increases.

The only realistic path forward for generative AI firms is to start charging their users the direct costs of running their services, and I do not believe users will be enthusiastic about paying them, because the compute the average user consumes costs vastly more than the money the company generates from that user each month. As I'll discuss, I don't believe it's possible for these companies to make a profit even with usage-based pricing, because the outcomes required to make things like coding LLMs useful demand far more compute than is feasible for an individual or business to pay for.

I will also go into how ludicrous the economics behind generative AI have become, with companies sending 100% or more of their revenue directly to cloud compute or model providers. And I'll explain why, at its core, generative AI is antithetical to the way that software is sold, and why I believe this doomed it from the very beginning.
