
Premium: The Hater's Guide To NVIDIA

This piece has a generous 3,000+ word introduction, because I want as many people to understand NVIDIA as possible. The (thousands of) words after the premium break get into arduous detail, but I’ve written this so that, ideally, most people can pick up the details early on and understand this clusterfuck. Please do subscribe to the premium! I really appreciate it.

I've reached a point with this whole era where there are many, many things that don't make sense, and I know I'm not alone. I've been sick since Friday last week, and thus I have had plenty of time to sit and think about stuff. And by "stuff" I mean the largest company on the stock market: NVIDIA.

Look, I'm not an accountant, nor am I a "finance expert." I learned all of this stuff myself. I learn a great deal by coming to things from the perspective of being a dumbass — a valuable intellectual framework of "I need to make sure I understand each bit and explain it as simply as possible." In this piece, I'm going to try to explain what this company is and how we got here, then ask the questions that I, from the perspective of a dumbass, have about the company, and at least try to answer them.

Let's start with a very simple point: for a company of such remarkable size, very few people — myself included, at times! — seem to actually understand NVIDIA. NVIDIA is a company that sells all sorts of stuff, but the only reason you're hearing about it as a normal person is that NVIDIA's stock has become a load-bearing entity in the US stock market. This has happened because NVIDIA sells "GPUs" — graphics processing units — that power the large language model services behind the whole AI boom, either through "inference" (the process of creating an output from an AI model) or "training" (feeding data into the model to make its outputs better). NVIDIA also sells other things, which I’ll get to later, but they don’t really matter to the bigger picture.
Back in 2006, NVIDIA launched CUDA, a software layer that lets you run (some) software on (specifically) NVIDIA graphics cards, and over time this has grown into a massive advantage for the company. The thing is, GPUs are great for parallel processing — essentially spreading a task across multiple (by which I mean thousands of) processor cores at the same time — which means that certain tasks run faster than they would on, say, a CPU. While not every task benefits from parallel processing, or from having several thousand cores available at the same time, the kind of math that underpins LLMs is one such example.

CUDA is proprietary to NVIDIA, and while there are alternatives (both closed- and open-source), none of them have the same maturity and breadth. Pair that with the fact that NVIDIA’s been focused on the data center market for longer than, say, AMD, and it’s easy to understand why it makes so much money. There really isn’t anyone who can do the same thing as NVIDIA, both in terms of software and hardware, and certainly not at the scale necessary to feed the hungry tech firms that demand these GPUs.

Anyway, back in 2019 NVIDIA acquired a company called Mellanox for $6.9 billion, beating out other would-be suitors, including Microsoft and Intel. Mellanox was a manufacturer of high-performance networking gear, and this acquisition gave NVIDIA a stronger value proposition for data center customers. It wanted to sell GPUs — lots of them — to data center customers, and now it could also sell the high-speed networking technology required to make them work in tandem. This is relevant because it created the terms under which NVIDIA could start selling billions (and eventually tens of billions) of dollars of specialized GPUs for AI workloads.
As pseudonymous finance account JustDario has noted (both Dario and Kakashii have been immensely generous with their time explaining some of the underlying structures of NVIDIA, and are worth reading, though at times we diverge on a few points), mere months after the Mellanox acquisition, Microsoft announced its $1 billion investment in OpenAI to build "Azure AI supercomputing technologies." Though it took until November 2022 for ChatGPT to really start the fires, in March 2020, NVIDIA began the AI bubble with the launch of its "Ampere" architecture and the A100, which provided "the greatest generational performance leap of NVIDIA's eight generations of GPUs," built for "data analytics, scientific computing and cloud graphics." The most important part, however, was the launch of NVIDIA's "SuperPod." Per the press release:

One might be fooled into thinking this was Huang suggesting we could now build smaller, more efficient data centers, when he was actually saying we should build way bigger ones that had way more compute power and took up way more space. The "SuperPod" concept — groups of GPU servers networked together to work on specific operations — is the "thing" that is driving NVIDIA's sales. To "make AI happen," a company must buy thousands of these things and put them in data centers, and you'd be a god damn idiot not to do this, and yes, it requires so much more money than you used to spend.

At the time, a DGX A100 — a server that housed eight A100 GPUs (starting at around $10,000 per GPU at launch, increasing with the amount of on-board RAM, as is the case across the board) — started at $199,000. The next-generation DGX, launched in 2022, was made up of eight H100 GPUs (starting at $25,000 per GPU; the next-generation "Hopper" chips were apparently 30x more powerful than the A100), and retailed from $300,000. You'll be shocked to hear that the next-generation Blackwell systems started at $500,000 when launched in 2024.
A single B200 GPU costs at least $30,000. Because nobody else has really caught up with CUDA, NVIDIA has a functional monopoly (edit: I wrote monopsony in a previous version, sorry) — and yes, you can have a situation where a market has a monopoly even if there is, at least in theory, competition. Once a particular brand — and a particular way of writing software for a particular kind of hardware — takes hold, there's an implicit cost of changing to another, on top of the fact that AMD and others have yet to come up with something particularly competitive.

Anyway, the reason that I'm writing all of this out is because I want you to understand why everybody is paying NVIDIA such extremely large amounts of money. Every year, NVIDIA comes up with a new GPU, and that GPU is much, much more expensive, and NVIDIA makes so much more money, because everybody has to build out AI infrastructure full of whatever the latest NVIDIA GPUs are, and those GPUs are so much more expensive every single year.

With Blackwell — the third generation of AI-specialized GPUs — came a problem, in that these things were so much more power-hungry, and required entirely new ways of building data centers, along with different cooling and servers to put them in, much of which was sold by NVIDIA. While you could kind of build around your current data centers to put A100s and H100s into production, Blackwell was... less cooperative, and ran much hotter. To quote NVIDIA Employee Number 4 David Rosenthal:

In simple terms, Blackwell runs hot — so much hotter than Ampere (A100) or Hopper (H100) GPUs that it requires entirely different ways to cool it, meaning your current data center needs to be ripped apart to fit them. Huang has confirmed that Vera Rubin, the next generation of GPUs, will have the same architecture as Blackwell. I would bet money that it's also much more expensive. Anyway, all of this has been so good for NVIDIA.
As the single vendor for the most important component in the entire AI boom, it has set the terms for how much you pay and how you build any and all AI infrastructure. While there are companies like Supermicro and Dell who buy NVIDIA GPUs and ship them in servers to customers, that's just fine for NVIDIA CEO Jensen Huang, as that's somebody else selling his GPUs for him. NVIDIA has been printing money, quarter after quarter, going from a meager $7.192 billion in total revenue in the first (calendar year) quarter of 2023 to an astonishing $50 billion in just data center revenue (that's where the GPUs are) in its most recent quarter, for a total of $57 billion in revenue, and the company projects to make $63 billion to $67 billion in the next quarter.

Now, I'm going to stop you here, because this bit is really important, really simple, yet nobody thinks about it much: NVIDIA makes so much money, and it makes it from a much smaller customer base than most companies, because there are only so many entities that can buy thousands of chips that cost $50,000 or more each. $35 billion, $39 billion, $44 billion, $46 billion and $57 billion are very large amounts of money, and the entities pumping those numbers into the stratosphere are collectively having to spend hundreds of billions of dollars to make it happen.

So, let me give you a theoretical example. I swear I'm going somewhere with this. You, a genius, have decided you are about to join the vaunted ranks of "AI data center ownership." You decide to build a "small" AI data center — 25MW (megawatts, which in this example refers to the combined power draw of the tech inside the data center). That can't be that much, right? OpenAI is building a 1.2GW one out in Abilene, Texas. How much could this tiny little thing cost? Okay, well, let's start with those racks. You're gonna need to give Jensen Huang $600 million right away, as you need 200 GB200 racks.
You're also gonna need a way to make them network together, because otherwise they aren't going to be able to handle all those big IT loads, so that's gonna be another $80 million or more, and you're going to need storage and servers to sync all of this up, which is, let's say, another $35 million. So we're at $715 million. Should be fine, right? Everybody's cool and everybody's normal. This is just a small data center, after all. Oops, forgot the cooling and power delivery stuff — that's another $5 million. $720 million. Okay.

Anyway, sadly, data centers require something called a "building." Construction costs for a data center are somewhere from $8 million to $12 million per megawatt, so, crap, okay. That's $250 million, but probably more like $300 million. We're now up to $1.02 billion, and we haven't even got the power yet.

Okay, sick. Do you have one billion dollars? You don't? No worries! Private credit — money loaned by non-banking entities — has been feeding more than $50 billion a quarter into the hungry mouths of anybody who desires to build a data center. You need $1.02 billion. You get $1.5 billion, because, you know, "stuff happens." Don't worry about those pesky high interest rates — you're about to be printing big money, AI style!

Once you're done raising all that cash, it'll only take anywhere from 6 to 18 months for site selection, permitting, design, development, construction, and energy procurement. You're also going to need about 20 acres of land for that 100,000 square foot data center. You may wonder why 100,000 square feet needs that much space, and that's because all of the power and cooling equipment takes up an astonishing amount of room.

So, yeah, after two years and over a billion dollars, you too can own a data center with NVIDIA GPUs that turn on, and at that point, you will offer a service that is functionally identical to everybody else buying GPUs from NVIDIA.
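The running tally above can be sketched as simple addition — every figure here is the article's rough estimate (in millions of dollars), and the variable names are mine, not anybody's vendor quote:

```python
# Back-of-the-envelope tally for the hypothetical 25MW data center above.
# All figures are the article's estimates, in millions of dollars.

hardware_musd = {
    "gpu_racks": 600,             # 200 GB200 racks at ~$3M each
    "networking": 80,             # high-speed interconnect
    "storage_and_servers": 35,
    "cooling_and_power_gear": 5,
}

it_load_musd = sum(hardware_musd.values())    # the $720M of kit

# Construction runs $8M-$12M per MW; the article rounds 25MW up to ~$300M.
construction_musd = 300

total_musd = it_load_musd + construction_musd
print(f"Hardware: ${it_load_musd}M, total build: ~${total_musd / 1000:.2f}B")
```

Which lands you at the roughly $1.02 billion figure before power, land, or interest payments.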
Your competitors are Amazon, Google and Microsoft, followed by neoclouds — AI cloud companies selling the same thing as you, except they're directly backed by NVIDIA and, frequently, by the big hyperscalers with brands that most people have heard of, like AWS and Azure.

Oh, also, this stuff costs an indeterminately-large amount of money to run. You may wonder why I can't tell you how much, and that's because nobody wants to actually discuss the cost of running GPUs, the thing that underpins our entire stock market. There are good reasons, too. One does not just run "a GPU" — it's a GPU in a server of other GPUs with associated hardware, all drawing power in varying amounts, all running in sync with networking gear that also draws power, with varying amounts of user demand and shifts in the cost of power from the power company. But what we can say is that the upfront cost of buying these GPUs and their associated crap is such that it's unclear if they will ever generate a profit, because these GPUs run hot, all the time, and that causes some amount of them to die.

Here are some thoughts I have had:

- The NVIDIA situation is one of the most insane things I've seen in my life. The single-largest, single-most-valuable, single-most-profitable company on the stock market got there by selling ultra-expensive hardware that takes hundreds of millions or billions of dollars (and years of construction in some cases) to start using, at which point it... doesn't make much revenue and doesn't seem to make a profit.
- Said hardware is funded by a mixture of cashflow from healthy businesses (see: Microsoft) or massive amounts of debt (see: everybody who is not a hyperscaler, and, at this point, some hyperscalers).
- The response to the continued proof that generative AI is not making money is to buy more GPUs, and it doesn't appear anybody has ever worked out why.

This problem has been obvious for a long time, too.
Today I'm going to explain to you — simply, but at length — why I am deeply concerned, and how deeply insane this situation has become.

A 25MW data center costs about $1 billion, with $600 million of that being GPUs — 200 GB200 racks, to be specific. It needs about 20 acres — 100,000 square feet for the data center, roughly. NVIDIA sells about $50 billion of GPUs and associated hardware in a quarter, so let's say that $40 billion of that is just the GPUs and $10 billion is everything else (primarily networking gear), so around 13,333 GB200 racks. I realize that NVIDIA sells far more than that (GB300 racks, singular GPUs, and so on).

Deep-pocketed hyperscalers like Microsoft, Google, Meta and Amazon represented 41.32% of NVIDIA's revenue in the middle of 2025, funneling free cash flow directly into Jensen Huang's pockets...

...for now.

Amazon ($15 billion), Google ($25 billion), Meta ($30 billion) and Oracle ($18 billion) have all had to raise massive amounts of debt to continue to fund AI-focused capital expenditures, with more than half of that (per Rubenstein) spent on GPUs. Otherwise, basically anybody buying GPUs at any scale has to fund doing so with either venture capital (money raised in exchange for part of the company) or debt.

NVIDIA, at this point, is around 8% of the value of the S&P 500 (the 500 leading companies on the US stock market, meaning they meet certain criteria of size, liquidity and profitability). Its continued health — and representative value as a stock, which is not necessarily based on its actual numbers or health, but in this case kind of is? — has led the stock market to remarkable gains.

It is not enough for NVIDIA to simply be a profitable company. It must continue beating the last quarter's revenue, again and again and again and again, forever. If that sounds dramatic, I assure you it is the truth.
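The "around 13,333 GB200 racks" figure above is just division, under the article's own assumptions (roughly $40 billion of a $50 billion quarter is GPUs, and a GB200 rack runs about $3 million, per the $600-million-for-200-racks example):

```python
# Turning a quarter of NVIDIA data center revenue into a rough rack count.
# Assumptions are the article's: ~$40B of GPUs per quarter, ~$3M per rack.

gpu_revenue_musd = 40_000      # $40B, expressed in millions
rack_price_musd = 3            # ~$3M per GB200 rack

racks_per_quarter = gpu_revenue_musd / rack_price_musd
print(f"~{racks_per_quarter:,.0f} GB200 racks per quarter")  # ~13,333
```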
NVIDIA's continued success — and its ability to continue delivering outsized beats of Wall Street's revenue estimates — depends on:

- The willingness of a few very large, cash-rich companies (Microsoft, Meta, Amazon and Google) to continue buying successive generations of NVIDIA GPUs forever.
- The ability of said companies to continue buying successive generations of GPUs forever.
- The ability of other, less-cash-rich companies like Oracle to continue being able to raise debt to buy massive amounts of GPUs — such as the $40 billion of GPUs that Oracle is buying for Stargate Abilene — forever. This is becoming a problem.
- The ability of unprofitable, debt-ridden companies like CoreWeave — AI "neoclouds" that use the GPUs they purchase from NVIDIA as collateral for loans to buy more GPUs — to continue raising that debt to buy more GPUs.
- The ability of anybody who buys these GPUs to actually install them and use them, which requires massive amounts of construction... and more power than is currently available, even to the most well-funded and conspicuous projects.

In simple terms, its success depends on the debt markets continuing to prop up its revenues, because there is not really enough free cash in the world to continue pumping it into NVIDIA at this rate. And after all of this, large language models, the only way to make any real money on any of these GPUs, must prove they can actually produce a profit. Per my article from September, I can find no compelling evidence (outside of boosters speciously claiming otherwise) that it's profitable to sell access to GPUs. Based on my calculations, there's likely little more than $61 billion of actual AI revenue in 2025 across every single AI company and hyperscaler. Note that I said "revenue." Absolutely nobody is making a profit.


Premium: The Hater's Guide To The AI Bubble Vol. 2

We’re approaching the most ridiculous part of the AI bubble, with each day bringing us a new, disgraceful and weird headline. As I reported earlier in the week, OpenAI spent $12.4 billion on inference between 2024 and September 2025, and its revenue share with Microsoft heavily suggests it made at least $2.469 billion in 2024 (when reports had OpenAI at $3.7 billion for 2024), with the only missing revenue to my knowledge being the 20% Microsoft shares with OpenAI when it sells OpenAI models on Azure, and whatever cut Microsoft gives OpenAI from Bing.

Nevertheless, the gap between the reported figures and what the documents I’ve seen say is dramatic. Despite reports that OpenAI made, in the first half of 2025, $4.3 billion in revenue on $2.5 billion of “cost of revenue,” what I’ve seen shows that OpenAI spent $5.022 billion on inference (the process of creating an output using a model) in that period, and made at least $2.2735 billion. I, of course, am hedging aggressively, but I can find no explanation for the gaps.

I also can’t find an explanation for why Sam Altman said that OpenAI was “profitable on inference” in August 2025, nor how OpenAI will hit “$20 billion in annualized revenue” by the end of 2025, nor how OpenAI will do “well more” than $13 billion this year. Perhaps there’s a chance that for some 30-day period of this year OpenAI hits $1.66 billion in revenue (AKA $20 billion annualized), but even that would leave it short of its stated target revenue.

The very same day I ran that piece, somebody posted a clip of Microsoft CEO Satya Nadella, who had this to say when asked about recent revenue projections from AI labs:

I don’t know, Satya — not fucking make shit up? Not embellishing? Is it too much to ask that these companies make projections that adhere to reality, rather than whatever an investor would want to hear? Or, indeed, projections that perpetuate a myth of inevitability, but fly in the face of reality?
I get that in any investment scenario you want to sell a story, but the idea that the CEO of a company with a $3.8 trillion market cap is sitting around saying “what do you expect them to do, tell the truth? They need money for compute!” is fucking disgraceful. No, I do not believe a company should make overblown revenue projections, nor do I think it’s good for the CEO of Microsoft to encourage the practice. I also seriously have to ask why Nadella believes that this is happening, and, indeed, who he might specifically be talking about, as Microsoft has particularly good insight into OpenAI’s current and future financial health.

However, because Nadella was talking in generalities, this could refer to Anthropic, and it kinda makes sense, because Anthropic was just the subject of near-identical articles about its costs from both The Information and The Wall Street Journal, with The Information saying that Anthropic “projected a positive free cash flow as soon as 2027,” and The Wall Street Journal saying that Anthropic “anticipates breaking even by 2028,” with both pieces featuring the cash burn projections of both OpenAI and Anthropic based on “documents” or “investor projections” shared this summer.

Both pieces focus on free cash flow, both pieces focus on revenue, and both pieces say that OpenAI is spending way more than Anthropic, and that Anthropic is on the path to profitability. The Information also includes a graph of Anthropic’s current and projected gross margins, with the company somehow hitting 75% gross margins by 2028. How does any of this happen? Nobody seems to know!

Per The Journal:

…hhhhooowwwww????? I’m serious! How?

The Information tries to answer:

Is…that the case? Are there any kind of numbers to back this up?
Because Business Insider just ran a piece covering documents involving startups claiming that Amazon’s chips had “performance challenges,” were “plagued by frequent service disruptions,” and “underperformed” NVIDIA H100 GPUs on latency, making them “less competitive” in terms of speed and cost. One startup “found Nvidia's older A100 GPUs to be as much as three times more cost-efficient than AWS's Inferentia 2 chips for certain workloads,” and a research group called AI Singapore “determined that AWS’s G6 servers, equipped with NVIDIA GPUs, offered better cost performance than Inferentia 2 across multiple use cases.”

I’m not trying to dunk on The Wall Street Journal or The Information, as both are reporting what is in front of them. I just kind of wish somebody there would say “huh, is this true?” or “will they actually do that?” a little more loudly, perhaps using previously-written reporting.

For example, The Information reported in January 2024 that Anthropic’s gross margin in December 2023 was between 50% and 55%, CNBC stated in September 2024 that Anthropic’s “aggregate” gross margin was 38%, and then it turned out that Anthropic’s 2024 gross margins were actually negative 109% (or negative 94% if you just focus on paying customers), according to The Information’s November 2025 reporting.

In fact, Anthropic’s gross margin appears to be a moving target. In July 2025, The Information was told by sources that “Anthropic recently told investors its gross profit margin from selling its AI models and Claude chatbot directly to customers was roughly 60% and is moving toward 70%,” only to publish a few months later (in their November piece) that Anthropic’s 2025 gross margin would be…47%, and would hit 63% in 2026. Huh?

I’m not bagging on these outlets.
Everybody reports from the documents they get or what their sources tell them, and any piece you write comes with the risk that things could change, as they regularly do in running any kind of business. That being said, the gulf between “38%” and “negative 109%” gross margins is pretty fucking large, and suggests that whatever Anthropic is sharing with investors (I assume) is either changing so rapidly that giving a number is foolish, or made up on the spot as a means of pretending you have a functional business.

I’ll put it a little more simply: it appears that much of the AI bubble is inflated on vibes, and I’m a little worried that the media is being too helpful. These companies have yet to prove themselves in any tangible way, and it’s time for somebody to give a frank evaluation of where we stand.

If I’m honest, a lot of this piece will be venting, because I am frustrated. When all of this collapses there will, I guarantee, be multiple startups that have outright lied to the media, and done so, in some cases, in ways that are equal parts obvious and brazen. My own work has received significantly more skepticism than OpenAI or Anthropic, two companies allegedly worth billions of dollars that appear to change their story with an aloof confidence borne of the knowledge that nobody reads or thinks too deeply about what their CEOs have to say, other than “wow, Anthropic said a new number!”

So I’m going to do my best to write about every single major AI company in one go. I am going to pull together everything I can find and give a frank evaluation of what they do, where they stand, their revenues, their funding situation, and, well, however else I feel about them. And honestly, I think we’re approaching the end. The Information recently published one of the grimmest quotes I’ve seen in the bubble so far:

Hey, what was that? What was that about “growing concerns regarding the costs and benefits of AI”? What “capital shift”?
The fucking companies are telling you, to your face, that they know there’s not a sustainable business model or a great use case, and you are printing it and giving it the god damn thumbs up. How can you not be a hater at this point? This industry is loathsome, its products ranging from useless to niche at best, its costs unsustainable, and its future full of fire and brimstone.

This is the Hater’s Guide To The AI Bubble Volume 2 — a premium sequel to the Hater’s Guide from earlier this year — where I will finally bring some clarity to a hype cycle that has yet to prove its worth, breaking down, industry-by-industry and company-by-company, the financial picture, relative success and potential future of the companies that matter. Let’s get to it.


Exclusive: Here's How Much OpenAI Spends On Inference and Its Revenue Share With Microsoft

As with my Anthropic exclusive from a few weeks ago, though this feels like a natural premium piece, I decided it was better to publish it on my free newsletter so that you could all enjoy it. If you liked or found this piece valuable, please subscribe to my premium newsletter — here’s $10 off the first year of an annual subscription. I have put out over a hundred thousand words of coverage in the last three months, most of which is on my premium, and I’d really appreciate your support. I also did an episode of my podcast Better Offline about this.

Before publishing, I discussed the data with a Financial Times reporter. Microsoft and OpenAI both declined to comment to the FT. If you ever want to share something with me in confidence, my Signal is ezitron.76, and I’d love to hear from you.

What I’ll describe today will be a little more direct than usual, because I believe the significance of the information requires me to be as specific as possible. Based on documents viewed by this publication, I am able to report OpenAI’s inference spend on Microsoft Azure, in addition to its payments to Microsoft as part of its 20% revenue share agreement, which was reported in October 2024 by The Information. In simpler terms, Microsoft receives 20% of OpenAI’s revenue.

I do not have OpenAI’s training spend, nor do I have information on the entire extent of OpenAI’s revenues, as it appears that Microsoft shares some percentage of its revenue from Bing, as well as 20% of the revenue it receives from selling OpenAI’s models. According to The Verge:

Nevertheless, I am going to report what I’ve been told. One small note — for the sake of clarity, every time I mention a year going forward, I’ll be referring to the calendar year, and not Microsoft’s financial year (which ends in June). The numbers in this post differ from those that have been reported publicly.
For example, previous reports had said that OpenAI spent $2.5 billion on “cost of revenue” — which I believe covers OpenAI’s inference costs — in the first half of CY2025. According to the documents viewed by this newsletter, OpenAI spent $5.022 billion on inference alone with Microsoft Azure in the first half of CY2025. This is a pattern that has continued through the end of September: by that point in CY2025 — three months later — OpenAI had spent $8.67 billion on inference.

OpenAI’s inference costs have risen consistently over the last 18 months, too. For example, OpenAI spent $3.76 billion on inference in CY2024, meaning that OpenAI has already more than doubled its inference costs in CY2025 through September. Based on its reported revenues of $3.7 billion in CY2024 and $4.3 billion in revenue for the first half of CY2025, it seems that OpenAI’s inference costs easily eclipsed its revenues.

Yet, as mentioned previously, I am also able to shed light on OpenAI’s revenues, as these documents also reveal the amounts that Microsoft takes as part of its 20% revenue share with OpenAI. Concerningly, extrapolating OpenAI’s revenues from this revenue share does not produce numbers that match those previously reported.

According to the documents, Microsoft received $493.8 million in revenue share payments in CY2024 from OpenAI — implying revenues for CY2024 of at least $2.469 billion, or around $1.23 billion less than the $3.7 billion that has been previously reported. Similarly, for the first half of CY2025, Microsoft received $454.7 million as part of its revenue share agreement, implying OpenAI’s revenues for that six-month period were at least $2.273 billion, or around $2 billion less than the $4.3 billion previously reported. Through September, Microsoft’s revenue share payments totalled $865.9 million, implying OpenAI’s revenues are at least $4.329 billion.

According to Sam Altman, OpenAI’s revenue is “well more” than $13 billion.
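The implied-revenue figures here all come from the same division: if Microsoft's cut is a flat 20% of OpenAI's revenue, then each payment implies a revenue floor of the payment divided by 0.20. A sketch using the payment figures reported in this piece (the period labels are mine):

```python
# Each Microsoft revenue share payment ($M) implies a minimum OpenAI revenue
# of payment / 0.20, assuming the share is a flat 20%.

REV_SHARE = 0.20

payments_musd = {
    "CY2024": 493.8,
    "H1 CY2025": 454.7,
    "CY2025 through September": 865.9,
}

for period, paid in payments_musd.items():
    implied_musd = paid / REV_SHARE
    print(f"{period}: at least ${implied_musd:,.1f}M in revenue")
```

A floor, not a total: any revenue Microsoft doesn't take a cut of wouldn't show up here.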
I am not sure how to reconcile that statement with the documents I have viewed. The following numbers are calendar years. I will add that, where I have them, I will include OpenAI’s leaked or reported revenues. In some cases, the numbers match up. In others, they do not. Though I do not know for certain, the only way to reconcile this would be some sort of creative means of measuring “annualized” or “recurring” revenue. I am confident in saying that I have read every single story about OpenAI’s revenue ever written, and at no point does OpenAI (or the documents reporting anything) explain how the company defines “annualized” or “annual recurring revenue.”

I must be clear that the following is me speaking in generalities, and not about OpenAI specifically, but you can get really creative with annualized revenue or annual recurring revenue. You can use 30 days, or 28 days, and you can even choose a period of time that isn’t a calendar month — so, say, the best 30 days of your company’s existence across two different months. I have no idea how OpenAI defines this metric, and default to saying that “annualized” or “ARR” means $X divided by 12.

The Financial Times reported on February 9, 2024 that OpenAI’s revenues had “surpassed $2 billion on an annualised basis” in December 2023, working out to $166.6 million in a month. The Information reported on June 12, 2024 that OpenAI had “more than doubled its annualized revenue to $3.4 billion in the last six months or so,” working out to around $283 million in a month, likely referring to this period. On September 27, 2024, the New York Times reported that “OpenAI’s monthly revenue hit $300 million in August…and the company expects about $3.7 billion in annual sales [in 2024],” according to a financial professional’s review of documents.
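On that default definition — annualized means a monthly figure times 12 — each of the annualized numbers quoted in this piece reduces to an implied month the same way. A sketch (figures are the ones cited here; the labels are mine):

```python
# Converting quoted "annualized" revenue figures ($B) to implied monthly
# run-rates ($M), using the piece's default definition: annualized / 12.

annualized_busd = {
    "Dec 2023 (FT)": 2.0,
    "mid-2024 (The Information)": 3.4,
    "end-of-2025 target": 20.0,
}

for label, annual_busd in annualized_busd.items():
    monthly_musd = annual_busd * 1000 / 12
    print(f"{label}: ~${monthly_musd:,.1f}M per month")
```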
On June 9, 2025, an OpenAI spokesperson told CNBC that it had hit “$10 billion annual recurring revenue,” excluding licensing revenue from OpenAI’s 20% revenue share and “large, one-time deals.” $10 billion annualized revenue works out to around $833 million in a month. These numbers are inclusive of OpenAI’s revenue share payments to Microsoft and OpenAI’s inference spend. There could potentially be royalty payments made to OpenAI as part of its deal to receive 20% of Microsoft’s sales of OpenAI’s models, or other revenue related to its revenue share from Bing.

Due to the sensitivity and significance of this information, I am taking a far more blunt approach with this piece. Based on the information in this piece, OpenAI’s costs and revenues are potentially dramatically different from what we believed. The Information reported in October 2024 that OpenAI’s revenue could be $4 billion, and inference costs $2 billion, based on documents “which include financial statements and forecasts,” and specifically added the following:

I do not know how to reconcile this with what I am reporting today. In the first half of CY2024, based on the information in the documents, OpenAI’s inference costs were $1.295 billion, and its revenues at least $934 million. Indeed, it is tough to reconcile what I am reporting with much of what has been reported about OpenAI’s costs and revenues.

OpenAI’s inference spend with Microsoft Azure between CY2024 and Q3 CY2025 was $12.43 billion. That is an astonishing figure, one that dramatically dwarfs any and all reporting, which, based on my analysis, suggested that OpenAI spent $2 billion on inference in 2024 and $2.5 billion through H1 CY2025. In other words, inference costs are nearly triple those reported elsewhere. Similarly, OpenAI’s extrapolated revenues are dramatically different from those reported.
While we do not have a final tally for 2024, the indicators presented in the documents contrast starkly with the reported predictions from that year. Both reports of OpenAI’s 2024 revenues ( CNBC , The Information ) are from the same year and are projections of potential final totals, though The Information’s story about OpenAI’s H1 CY2025 revenues said that “OpenAI generated $4.3 billion in revenue in the first half of 2025, about 16% more than it generated all of last year,” which would bring us to $3.612 billion in revenue for 2024, or $1.145 billion more than is implied by OpenAI’s revenue share payments to Microsoft.

I do not have an answer for inference, other than that I believe OpenAI is spending far more money on inference than we were led to believe, and that the currently reported numbers do not resemble those in the documents. Based on these numbers, it appears that OpenAI may be the single most cash-intensive startup of all time, and that the cost of running large language models may not be something that can be supported by revenues. Even if revenues were to match those that have been reported, OpenAI’s inference spend on Azure consumes them, and appears to scale linearly above revenue.

I also cannot reconcile these numbers with the reporting that OpenAI will have a cash burn of $9 billion in CY2025 . On inference alone, OpenAI has already spent $8.67 billion through Q3 CY2025. Similarly, I cannot see a path for OpenAI to hit its projected $13 billion in revenue by the end of 2025, nor can I see on what basis Mr. Altman could state that OpenAI will make “well more” than $13 billion this year .

I cannot and will not speak to the financial health of OpenAI in this piece, but I will say this: these numbers are materially different to what has been reported, and the significance of OpenAI’s inference spend alone makes me wonder about the larger cost picture for generative AI.
If it costs this much to run inference for OpenAI, I believe it costs this much for any generative AI firm running on OpenAI’s models. If it does not, OpenAI’s costs are dramatically higher than the prices it is charging its customers, which makes me wonder whether price increases could be necessary to begin making more money, or at the very least losing less. Similarly, if OpenAI’s costs are this high, it makes me wonder about the margins of any frontier model developer.

Q1 CY2024
Inference: $546.8 million
Microsoft Revenue Share: $77.3 million
Implied OpenAI Revenue: at least $386.5 million

Q2 CY2024
Inference: $748.3 million
Microsoft Revenue Share: $109.5 million
Implied OpenAI Revenue: at least $547.5 million

Q3 CY2024
Inference: $1.005 billion
Microsoft Revenue Share: $139.2 million
Implied OpenAI Revenue: at least $696 million

Q4 CY2024
Inference: $1.467 billion
Microsoft Revenue Share: $167.8 million
Implied OpenAI Revenue: at least $839 million

Total inference spend for CY2024: $3.767 billion
Total implied revenue for CY2024: at least $2.469 billion
Reported (projected) revenue for CY2024: $3.7 billion, per CNBC in September 2024. The Information also reported that expected revenue could be as high as $4 billion in a piece from October 2024.
Reported inference costs for CY2024: $2 billion, per The Information.

Q1 CY2025
Inference: $2.075 billion
Microsoft Revenue Share: $206.4 million
Implied OpenAI Revenue: $1.032 billion

Q2 CY2025
Inference: $2.947 billion
Microsoft Revenue Share: $248.3 million
Implied OpenAI Revenue: $1.2415 billion

H1 CY2025 Inference: $5.022 billion
H1 CY2025 Revenue: at least $2.273 billion
Reported H1 CY2025 Revenue: $4.3 billion (per The Information)
Reported H1 CY2025 “Cost of Revenue”: $2.5 billion (per The Information)

Q3 CY2025
Inference: $3.648 billion
Microsoft Revenue Share: $411.1 million
Implied OpenAI Revenue: at least $2.056 billion
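The “implied revenue” lines above can be reproduced from the revenue share alone: if Microsoft takes 20% of OpenAI’s revenue, each payment implies revenue of at least five times that amount. A quick sketch of the back-calculation (the function name is mine):

```python
def implied_revenue(ms_share_mn: float, share: float = 0.20) -> float:
    """If Microsoft receives `share` of OpenAI's revenue, a payment of
    `ms_share_mn` implies total revenue of at least ms_share_mn / share."""
    return ms_share_mn / share

# Q1 CY2024: a $77.3mn revenue share implies at least ~$386.5mn in revenue
print(round(implied_revenue(77.3), 1))
```

This is why the implied figures carry an “at least”: any revenue OpenAI earns outside the Microsoft revenue share wouldn’t show up in these payments.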

0 views

Premium: OpenAI Burned $4.1 Billion More Than We Knew - Where Is Its Money Going?

Soundtrack: Queens of the Stone Age - Song For The Dead

Editor's Note: The original piece had a mathematical error around burn rate; it's been fixed. Also, welcome to another premium issue! Please do subscribe: this is a massive, 7000-or-so-word piece, and that's the kind of depth you get every single week for your subscription.

A few days ago, Sam Altman said that OpenAI’s revenues were “well more” than $13bn in 2025 , a statement I question because, based on other outlets’ reporting , OpenAI only made $4.3bn through the first half of 2025, and likely around a billion a month thereafter, which I estimate means the company made around $8bn by the end of September. This is an estimate. If I receive information to the contrary, I’ll report it.

Nevertheless, OpenAI is also burning a lot of money. In recent public disclosures ( as reported by The Register ), Microsoft noted that it had funding commitments to OpenAI of $13bn, of which $11.6bn had been funded by September 30 2025. These disclosures also revealed that OpenAI lost $12bn in the last quarter — Microsoft’s Fiscal Year Q1 2026, representing July through September 2025. To be clear, this is actual, real accounting, rather than the figures leaked to reporters. It’s not that leaks are necessarily a problem — it’s just that anything appearing on any kind of SEC filing generally has to pass a very, very high bar.

There is absolutely nothing about these numbers that suggests that OpenAI is “profitable on inference,” as Sam Altman told a group of reporters at a dinner in the middle of August . Let me get specific.
The Information reported that through the first half of 2025, OpenAI spent $6.7bn on research and development, “which likely include[s] servers to develop new artificial intelligence.” The common refrain here is that OpenAI “is spending so much on training that it’s eating the rest of its margins,” but if that were the case here, it would mean that OpenAI spent the equivalent of six months’ training in the space of three. I think the more likely answer is that OpenAI is spending massive amounts of money on staff, sales and marketing ($2bn alone in the first half of the year), real estate, lobbying , data, and, of course, inference.

According to The Information , OpenAI had $9.6bn in cash at the end of June 2025. Assuming that OpenAI lost $12bn in calendar Q3 2025, and made — I’m being generous — around $3.3bn (or $1.1bn a month) within that quarter, this would suggest OpenAI’s operations cost it over $15bn in the space of three months. Where, exactly, is this money going? And how do the published numbers actually make sense when you reconcile them with Microsoft’s disclosures?

In the space of three months, OpenAI’s costs — if we are to believe what was leaked to The Information (and, to be clear, I respect their reporting) — went from a net loss of $13.5bn in six months to, I assume, a net loss of $12bn in three months. Though there are likely losses related to stock-based compensation, this only represented a cost of $2.5bn in the first half of 2025. The Information also reported that OpenAI “spent more than $2.5 billion on its cost of revenue,” suggesting inference costs of…around that? I don’t know. I really don’t know.

But something isn't right, and today I'm going to dig into it. In this newsletter I'm going to reveal how OpenAI's reported revenues and costs don't line up - and that there's $4.1 billion of cash burn that has yet to be reported elsewhere.
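The quarterly arithmetic above is worth making explicit; a sketch of my estimate, where the $1.1bn-a-month revenue figure is my generous assumption, not a reported number:

```python
q3_net_loss_bn = 12.0      # per Microsoft's disclosures, July-September 2025
q3_revenue_bn = 1.1 * 3    # generous estimate: ~$1.1bn a month for the quarter

# If the quarter brought in ~$3.3bn and still lost $12bn, the quarter's
# total costs must have been roughly revenue plus net loss:
q3_operating_cost_bn = q3_net_loss_bn + q3_revenue_bn
print(q3_operating_cost_bn)  # over $15bn in the space of three months
```

The more generous you are on revenue, the worse the cost picture gets: every extra dollar of assumed revenue is another dollar the quarter's operations had to consume.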

2 views

Big Tech Needs $2 Trillion In AI Revenue By 2030 or They Wasted Their Capex

As I've established again and again , we are in an AI bubble, and no, I cannot tell you when the bubble will pop, because we're in the stupidest financial era since the Great Financial Crisis — though, I hope, not one quite as severe in its eventual fallout. By the end of the year, Microsoft, Amazon, Google and Meta will have spent over $400bn in capital expenditures, much of it focused on building AI infrastructure, on top of $228.4bn in capital expenditures in 2024 and around $148bn in capital expenditures in 2023, for a total of around $776bn in the space of three years. At some point, all of these bills will have to come due.

You see, big tech has been given incredible grace by the markets, never having to actually show that their revenue growth is coming from selling AI or AI-related services. Only Microsoft ever bothered, piping up in October 2024 to say it was making $833 million a month ($10bn ARR) from AI, and then $1.08 billion a month in January 2025 ($13bn ARR), before choosing to never report the figure again. As reported by The Information , $10bn of Microsoft’s Azure revenue this year will come from OpenAI’s spend on compute, which, also reported by The Information , is paid at “...a heavily discounted rental rate that essentially only covers Microsoft’s costs for operating the servers.”

It’s absolutely astonishing that such egregious expenditures have never brought with them any scrutiny of the actual return on investment, or any kind of demand for disclosure of the resulting revenue. As a result, big tech has used their already-successful products and existing growth to pretend that something is actually happening other than Satya Nadella standing with his hands on his hips and talking about his favourite ways to use Copilot , a product so unpopular that only eight million active Microsoft 365 customers are paying for it out of over 440 million users .
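The capex total above is a straight sum, with the 2025 figure being a floor rather than a final number; a quick sketch of the arithmetic:

```python
# Combined capital expenditures of Microsoft, Amazon, Google and Meta, in $bn
capex = {2023: 148.0, 2024: 228.4, 2025: 400.0}  # 2025 is "over $400bn," so a floor
total_bn = sum(capex.values())
print(round(total_bn, 1))  # ~$776bn across the three years
```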
This stuff is so unpopular, the world’s biggest and most powerful software company — and one with a virtual monopoly on the office productivity market — had to use dark patterns to get people to pay for it.

Earlier in the week, OpenAI announced that it had “ successfully converted to a more traditional corporate structure ,” giving Microsoft a 27% position in the new entity worth $130bn, with the Wall Street Journal vaguely saying that Microsoft will also have “the ability to get more ownership as the for-profit becomes more valuable.” Said deal also brought with it a commitment to spend $250bn on Microsoft Azure, which Microsoft has booked as “remaining performance obligations” in the same way that Oracle stuffed its RPOs with $300bn from OpenAI — a company that cannot afford to pay either company even a tenth of those obligations and is on the hook for over a trillion dollars in the next four years .

But OpenAI isn’t the only one with a bill coming due. As we speak, the markets are still in the thrall of an egregious, hype-stuffed bubble, with the hogs of Wall Street braying and oinking their loudest as Jensen Huang claims — without any real breakdown as to who is buying them — that NVIDIA has over $500bn in bookings for its AI chips , with little worry about whether there’s enough money to actually pay for all of those GPUs or, more pertinently, whether anybody plugging them in is making any profits off of them.

To be clear, everybody is losing money on AI. Every single startup, every single hyperscaler, everybody who isn’t selling GPUs or servers with GPUs inside them is losing money on AI. No matter how many headlines or analyst emissions you consume, the reality is that big tech has sunk over half a trillion dollars into this bullshit over the last three years, and they are only losing money. So, at what point does all of this become worth it? Actually, let me reframe the question: how does any of this become worthwhile?
Today, I’m going to try and answer that question, and I have ultimately come to a brutal conclusion: due to the onerous costs of building data centers, buying GPUs and running AI services, big tech has to add $2 trillion in AI revenue in the next four years. Honestly, I think they might need more. No, really.

Big tech has already spent $605 billion in capital expenditures since 2023, with a chunk of that dedicated to 5-year-old (A100) and 4-year-old (H100) GPUs, and the rest dedicated to buying Blackwell chips that The Information reports have gross margins of negative 100% . Big tech’s lack of tangible revenue (let alone profits) from selling AI services only compounds the problem, meaning every dollar of capex burned on AI is currently putting these companies further in the hole. Yet there’s also another problem - GPUs are uniquely expensive to purchase, run and maintain, requiring billions of dollars of data center construction and labor before you can even make a dollar. Worse still, their value decays every single year, in part thanks to the physics of heat and electricity, and NVIDIA releasing a new chip every single year .

0 views

This Is How Much Anthropic and Cursor Spend On Amazon Web Services

So, I originally planned for this to be on my premium newsletter, but decided it was better to publish on my free one so that you could all enjoy it. If you liked it, please consider subscribing to support my work. Here’s $10 off the first year of annual . I’ve also recorded an episode about this on my podcast Better Offline ( RSS feed , Apple , Spotify , iHeartRadio ); it’s a little different, but both handle the same information — just subscribe and it'll pop up.

Over the last two years I have written again and again about the ruinous costs of running generative AI services, and today I’m coming to you with real proof. Based on discussions with sources with direct knowledge of their AWS billing, I am able to disclose the amounts that AI firms are spending, specifically Anthropic and AI coding company Cursor, its largest customer . I can exclusively reveal today Anthropic’s spending on Amazon Web Services for the entirety of 2024, and for every month in 2025 up until September, and that Anthropic’s spend on compute far exceeds what has previously been reported.

Furthermore, I can confirm that through September, Anthropic has spent more than 100% of its estimated revenue (based on reporting in the last year) on Amazon Web Services, spending $2.66 billion on compute against an estimated $2.55 billion in revenue. Additionally, Cursor’s Amazon Web Services bills more than doubled from $6.2 million in May 2025 to $12.6 million in June 2025, exacerbating a cash crunch that began when Anthropic introduced Priority Service Tiers, an aggressive rent-seeking measure that kicked off what I call the Subprime AI Crisis , where model providers begin jacking up the prices on their previously subsidized rates. Although Cursor obtains the majority of its compute from Anthropic — with AWS contributing a relatively small amount, and likely also taking care of other parts of its business — the data I’ve seen reveals an overall direction of travel, where the costs of compute only keep going up .
Let’s get to it. In February of this year, The Information reported that Anthropic burned $5.6 billion in 2024, and made somewhere between $400 million and $600 million in revenue. While I don’t know about prepayment for services, I can confirm from a source with direct knowledge of billing that Anthropic spent $1.35 billion on Amazon Web Services in 2024, and has already spent $2.66 billion on Amazon Web Services through the end of September. Assuming that Anthropic made $600 million in revenue, this means that Anthropic spent $6.2 billion in 2024, leaving $4.85 billion in costs unaccounted for.

The Information’s piece also brings up another point. Before I go any further, I want to be clear that The Information’s reporting is sound, and I trust that their source (I have no idea who they are or what information was provided) was operating in good faith with good data. However, Anthropic is telling people it spent $1.5 billion on training alone when it has an Amazon Web Services bill of $1.35 billion, which heavily suggests that its actual compute costs are significantly higher than we thought, because, to quote SemiAnalysis, “ a large share of Anthropic’s spending is going to Google Cloud .”

I am guessing, because I do not know, but with $4.85 billion of other expenses to account for, it’s reasonable to believe Anthropic spent an amount similar to its AWS spend on Google Cloud. I do not have any information to confirm this, but given the discrepancies mentioned above, it’s an explanation that makes sense. I’ll also add that there is some sort of undisclosed cut that Amazon gets of Anthropic’s revenue, though it’s unclear how much. According to The Information , “Anthropic previously told some investors it paid a substantially higher percentage to Amazon [than OpenAI’s 20% revenue share with Microsoft] when companies purchase Anthropic models through Amazon.” I cannot confirm whether a similar revenue share agreement exists between Anthropic and Google.
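The unaccounted-for figure comes from simple subtraction: burn plus revenue gives total spend, minus the confirmed AWS bill. A sketch, taking the top of The Information’s revenue range:

```python
reported_burn_bn = 5.6    # The Information: cash Anthropic burned in 2024
revenue_bn = 0.6          # generous top of the $400-600mn range
aws_bn = 1.35             # confirmed 2024 AWS spend

total_spend_bn = reported_burn_bn + revenue_bn   # burn = spend - revenue
unaccounted_bn = total_spend_bn - aws_bn
print(round(total_spend_bn, 2), round(unaccounted_bn, 2))  # 6.2 4.85
```

Use the low end of the revenue range ($400 million) and the unaccounted pile only shrinks by $200 million; the gap survives any reasonable assumption.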
This also makes me wonder exactly where Anthropic’s money is going. Anthropic has, based on what I can find, raised $32 billion in the last two years. It started 2023 with a $4 billion investment from Amazon in September (bringing the total to $37.5 billion), where Amazon was named its “primary cloud provider” — nearly eight months after Anthropic had announced Google as its “cloud provider.” Google responded a month later by investing another $2 billion on October 27 2023 , “involving a $500 million upfront investment and an additional $1.5 billion to be invested over time,” bringing Anthropic’s total funding for 2023 to $6 billion.

In 2024, it would raise several more rounds — one in January for $750 million, another in March for $884.1 million, another in May for $452.3 million, and another $4 billion from Amazon in November 2024 , which also saw it name AWS as its “primary cloud and training partner,” bringing its 2024 funding total to $6 billion. In 2025 so far, it’s raised a $1 billion round from Google , a $3.5 billion venture round in March, opened a $2.5 billion credit facility in May, and completed a $13 billion venture round in September, valuing the company at $183 billion . This brings its total 2025 funding to $20 billion.

While I do not have Anthropic’s 2023 numbers, its spend on AWS in 2024 — around $1.35 billion — leaves (as I’ve mentioned) $4.85 billion in costs that are unaccounted for. The Information reports that costs for Anthropic’s 521 research and development staff reached $160 million in 2024 , leaving 394 other employees unaccounted for (out of 915 employees total), and also adds that Anthropic expects its headcount to increase to 1,900 people by the end of 2025.
The Information also adds that Anthropic “expects to stop burning cash in 2027.” This leaves two unanswered questions. An optimist might argue that Anthropic is just growing its pile of cash so it’s got a warchest to burn through in the future, but I have my doubts. In a memo revealed by WIRED , Anthropic CEO Dario Amodei stated that “if [Anthropic wanted] to stay on the frontier, [it would] gain a very large benefit from having access to this capital,” with “this capital” referring to money from the Middle East. Anthropic and Amodei’s sudden willingness to take large swaths of capital from the Gulf States suggests that it’s at least a little desperate for capital, especially given that Anthropic has, according to Bloomberg , “recently held early funding talks with Abu Dhabi-based investment firm MGX” a month after raising $13 billion .

In my opinion — and this is just my gut instinct — I believe that it is either significantly more expensive to run Anthropic than we know, or Anthropic’s leaked (and stated) revenue numbers are worse than we believe. I do not know one way or another, and will only report what I know. So, I’m going to do this a little differently than you’d expect, in that I’m going to lay out how much these companies spent, and draw throughlines from that spend to their reported revenue numbers and the product announcements or events that may have caused their compute costs to increase. I’ve only got Cursor’s numbers from January through September 2025, but I have Anthropic’s AWS spend for both the entirety of 2024 and through September 2025.

“Annualized revenue” is one of the most abused terms in the world of software, but in this case , I am sticking to the idea that it means “month times 12.” So, if a company made $10m in January, you would say that its annualized revenue is $120m.
Obviously, there are a lot of (when you think about it, really obvious) problems with this kind of reporting — and thus, you only ever see it when it comes to pre-IPO firms — but that’s beside the point. I give you this explanation because, when contrasting Anthropic’s AWS spend with its revenues, I’ve had to work back from whatever annualized revenues were reported for that month.

Anthropic’s 2024 revenues are a little bit of a mystery, but, as mentioned above, The Information says they might be between $400 million and $600 million. Here’s its monthly AWS spend. I’m gonna be nice here and say that Anthropic made $600 million in 2024 — the higher end of The Information’s reporting — meaning that it spent around 226% of its revenue ($1.359 billion) on Amazon Web Services. [Editor's note: this copy originally had incorrect maths on the %. Fixed now.]

Thanks to my own analysis and reporting from outlets like The Information and Reuters, we have a pretty good idea of Anthropic’s revenues for much of the year. That said, July, August, and September get a little weirder, because we’re relying on “almosts” and “approachings,” as I’ll explain as we go. I’m also gonna do this analysis on a month-by-month basis, because it’s necessary to evaluate these numbers in context.

In January, Anthropic’s reported revenue was somewhere from $875 million to $1 billion annualized , meaning either $72.9 million or $83.3 million for the month. In February, as reported by The Information , Anthropic hit $1.4 billion annualized revenue, or around $116 million a month. In March, as reported by Reuters , Anthropic hit $2 billion in annualized revenue, or $166 million in revenue. Because February is a short month, and the launch took place on February 24 2025, I’m considering the launches of Claude 3.7 Sonnet and Claude Code’s research preview to be a cost burden in the month of March. And man, what a burden!
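That 226% figure is just the ratio of the confirmed AWS bill to the generous revenue estimate; a quick sketch:

```python
aws_2024_bn = 1.359       # confirmed AWS spend for 2024
revenue_2024_bn = 0.600   # the high end of The Information's range

pct_of_revenue = aws_2024_bn / revenue_2024_bn * 100
print(pct_of_revenue)  # ~226% of revenue spent on AWS alone
```

At the low end of the range ($400 million), the same bill would be around 340% of revenue, which is why taking the $600 million figure is the generous choice.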
Costs increased by $59.1 million, primarily across compute categories, but with a large ($2 million since January) increase in monthly costs for S3 storage. I estimate, based on a 22.4% compound growth rate, that Anthropic hit around $2.44 billion in annualized revenue in April, or $204 million in revenue. Interestingly, this was the month where Anthropic launched its $100- and $200-a-month “Max” plans , and it doesn’t seem to have dramatically increased its costs. Then again, Max is also the gateway to things like Claude Code, which I’ll get to shortly.

In May, as reported by CNBC , Anthropic hit $3 billion in annualized revenue, or $250 million in monthly average revenue. This was a big month for Anthropic, with two huge launches on May 22 2025 — its new, “more powerful” models Claude Sonnet 4 and Opus 4, as well as the general availability of its AI coding environment Claude Code. Eight days later, on May 30 2025, a page appeared on Anthropic's API documentation for the first time: “ Service Tiers .” Accessing the priority tier requires you to make an up-front commitment to Anthropic , and said commitment is based on a number of months (1, 3, 6 or 12) and the number of input and output tokens you estimate you will use each minute.

As I’ll get into in my June analysis, Anthropic’s Service Tiers exist specifically for it to “guarantee” your company won’t face rate limits or any other service interruptions, requiring a minimum spend, minimum token throughput, and for you to pay higher rates when writing to the cache — which is, as I’ll explain, a big part of running an AI coding product like Cursor. Now, the jump in costs — $65.1 million or so between April and May — likely comes as a result of the final training for Sonnet and Opus 4, as well as, I imagine, some sort of testing to make sure Claude Code was ready to go.

In June, as reported by The Information, Anthropic hit $4 billion in annualized revenue, or $333 million.
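The April estimate above (March’s $2 billion run rate grown by the 22.4% compound rate I assumed) works out like so; a sketch of my estimate, not a reported figure:

```python
march_annualized_bn = 2.0   # Reuters' March 2025 figure
monthly_growth = 0.224      # my assumed compound growth rate

april_annualized_bn = march_annualized_bn * (1 + monthly_growth)
april_monthly_mn = april_annualized_bn * 1000 / 12
print(round(april_annualized_bn, 3), round(april_monthly_mn))  # 2.448 204
```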
Anthropic’s revenue spiked by $83 million this month, and its costs rose by $34.7 million. I have, for a while, talked about the Subprime AI Crisis , where big tech and companies like Anthropic, after offering subsidized pricing to entice customers, raise the rates on those customers to start covering more of their costs, leading to a cascade where businesses are forced to raise their own prices to handle their new, exploding costs. And I was god damn right. Or, at least, it sure looks like I am. I’m hedging, forgive me. I cannot say for certain, but I see a pattern.

It’s likely the June 2025 spike in revenue came from the introduction of service tiers, which specifically target prompt caching, increasing the amount of tokens you’re charged for as an enterprise customer based on the term of the contract and your forecast usage. Per my reporting in July , Cursor, as Anthropic’s largest client (the second largest being GitHub Copilot), represents a material part of its revenue, and its surging popularity meant it was sending more and more revenue Anthropic’s way.

Anysphere, the company that develops Cursor, hit $500 million annualized revenue ($41.6 million a month) by the end of May , which Anthropic chose to celebrate by increasing its costs. On June 16 2025, Cursor launched a $200-a-month “Ultra” plan , as well as dramatic changes to its $20-a-month Pro pricing that, instead of offering 500 “fast” responses using models from Anthropic and OpenAI, now effectively provided you with “at least” whatever you paid a month (so $20-a-month got at least $20 of credit), massively increasing the costs for users , with one calling the changes a “rug pull” after spending $71 in a single day . As I’ll get to later in the piece, Cursor’s costs exploded from $6.19 million in May 2025 to $12.67 million in June 2025, and I believe this is a direct result of Anthropic’s sudden and aggressive cost increases.
Similarly, Replit, another AI coding startup, moved to “Effort-Based Pricing” on June 18 2025 . I don’t have any information about its AWS spend. I’ll get into this a bit later, but I find this whole situation disgusting.

In July, as reported by Bloomberg , Anthropic hit $5 billion in annualized revenue, or $416 million. While July wasn’t a huge month for announcements, it was allegedly the month that Claude Code was generating “nearly $400 million in annualized revenue,” or $33.3 million (according to The Information , which says Anthropic was “approaching” $5 billion in annualized revenue - which likely means LESS than that - but I’m going to go with the full $5 billion annualized for the sake of fairness). There’s roughly an $83 million bump in Anthropic’s revenue between June and July 2025, and I think Claude Code and its new rates are a big part of it. What’s fascinating is that cloud costs didn’t increase too much — by only $1.8 million, to be specific.

In August, according to Anthropic, its run-rate “ reached over $5 billion ,” or around $416 million a month. I am not giving it anything more than $5 billion, especially considering that in July Bloomberg’s reporting said “about $5 billion.” Costs grew by $60.5 million this month, potentially due to the launch of Claude Opus 4.1 , Anthropic’s more aggressively expensive model, though revenues do not appear to have grown much along the way. Yet what’s very interesting is that Anthropic — starting August 28 — launched weekly rate limits on its Claude Pro and Max plans. I wonder why? Oh fuck! Look at that massive cost explosion!

Anyway, according to Reuters, Anthropic’s run rate is “approaching $7 billion” in October , and for the sake of fairness , I am going to just say it has $7 billion annualized, though I believe this number to be lower. “Approaching” can mean a lot of different things — $6.1 billion, $6.5 billion — and because I already anticipate a lot of accusations of “FUD,” I’m going to err on the side of generosity.
If we assume a $6.5 billion annualized rate, that would make this month’s revenue $541.6 million, or 95.8% of its AWS spend. Nevertheless, Anthropic’s costs exploded in the space of a month by $135.2 million (35%) - likely due to the fact that users, as I reported in mid-July, were costing it thousands or tens of thousands of dollars in compute , a problem it still faces to this day, with VibeRank showing a user currently spending $51,291 in a calendar month on a $200-a-month subscription . If there were other costs, they likely had something to do with the training runs for the launches of Sonnet 4.5 on September 29 2025 and Haiku 4.5 in October 2025 .

While these costs only speak to one part of its cloud stack — Anthropic has an unknowable amount of cloud spend on Google Cloud, and the data I have only covers AWS — it is simply remarkable how much this company spends on AWS, and how rapidly its costs seem to escalate as it grows. Though things improved slightly over time — in that Anthropic is no longer burning over 200% of its revenue on AWS alone — these costs have still dramatically escalated, and done so in an aggressive and arbitrary manner.

So, I wanted to visualize this part of the story, because I think it’s important to see the various different scenarios. THE NUMBERS I AM USING ARE ESTIMATES CALCULATED BASED ON 25%, 50% AND 100% OF THE AMOUNTS THAT ANTHROPIC HAS SPENT ON AMAZON WEB SERVICES THROUGH SEPTEMBER. I apologize for all the noise, I just want it to be crystal clear what you see next.

As you can see, all it takes is for Anthropic to spend (I am estimating) around 25% of its Amazon Web Services bill on Google Cloud — for a total of around $3.33 billion in compute costs through the end of September — to savage any and all revenue ($2.55 billion) it’s making.
Assuming Anthropic spends half of its AWS bill on Google Cloud, this number climbs to $3.99 billion, and if you assume - and to be clear, this is an estimate - that it spends around the same on both Google Cloud and AWS, Anthropic has spent $5.3 billion on compute through the end of September. I can’t tell you which it is, just that we know for certain that Anthropic is spending money on Google Cloud, and because Google owns 14% of the company — rivalling estimates saying Amazon owns around 15-19% — it’s fair to assume that there’s a significant spend.

I have sat with these numbers for a great deal of time, and I can’t find any evidence that Anthropic has any path to profitability outside of aggressively increasing the prices on its customers, to the point that its services will become untenable for consumers and enterprise customers alike. As you can see from these estimated and reported revenues, Anthropic’s AWS costs appear to increase in a near-linear fashion with its revenues, meaning that the current pricing — including rent-seeking measures like Priority Service Tiers — isn’t working to meet the burden of its costs. We do not know its Google Cloud spend, but I’d be shocked if it was anything less than 50% of its AWS bill. If that’s the case, Anthropic is in real trouble - the cost of the services underlying its business increases the more money it makes. It’s becoming increasingly apparent that Large Language Models are not a profitable business.

While I cannot speak to Amazon Web Services’ actual costs, it’s making $2.66 billion from Anthropic, the second-largest foundation model company in the world. Is that really worth $105 billion in capital expenditures ? Is that really worth building a giant 1,200-acre data center in Indiana with 2.2GW of electricity? What’s the plan, exactly? Let Anthropic burn money for the foreseeable future until it dies, and then pick up the pieces? Wait until Wall Street gets mad at you and then pull the plug?
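The scenarios above are simple multiples of the confirmed AWS figure; a sketch, where the Google Cloud percentages are my estimates rather than reported numbers:

```python
aws_bn = 2.66          # confirmed AWS spend through September 2025
revenue_bn = 2.55      # estimated revenue over the same period

# Total compute if Google Cloud spend is 25%, 50% or 100% of the AWS bill:
scenarios = {pct: round(aws_bn * (1 + pct), 2) for pct in (0.25, 0.50, 1.00)}
print(scenarios)  # {0.25: 3.33, 0.5: 3.99, 1.0: 5.32}
```

Even the gentlest scenario (25%) puts compute costs above the $2.55 billion in estimated revenue, which is the whole point of the chart.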
Who knows. But let’s change gears and talk about Cursor — Anthropic’s largest client and, at this point, a victim of circumstance. Amazon sells Anthropic’s models through Amazon Bedrock , and I believe that AI startups are compelled to route some of their AI model compute spend through Amazon Web Services. Cursor also sends money directly to Anthropic and OpenAI, meaning that these costs are only one piece of its overall compute costs. In any case, it’s very clear that Cursor buys some degree of its Anthropic model spend through Amazon. I’ll also add that Tom Dotan of Newcomer reported a few months ago that an investor told him that “Cursor is spending 100% of its revenue on Anthropic.”

Unlike with Anthropic, we lack a thorough month-by-month breakdown of Cursor’s revenues. I will, however, mention them in the months I have them. For the sake of readability — and because we really don’t have much information on Cursor’s revenues beyond a few months — I’m going to stick to a bullet point list. As discussed above, Cursor announced (along with its price change and $200-a-month plan) several multi-year partnerships with xAI, Anthropic, OpenAI and Google, suggesting that it has direct agreements with Anthropic itself, rather than one with AWS to guarantee “this volume of compute at a predictable price.” Based on its spend with AWS, I do not see a strong “minimum” spend that would suggest it has a similar deal with Amazon — likely because Amazon handles more of its infrastructure than just compute — but Amazon incentivizes it to spend on Anthropic’s models through AWS by offering discounts, something I’ve confirmed with a source.

In any case, here’s what Cursor spent on AWS. When I wrote that Anthropic and OpenAI had begun the Subprime AI Crisis back in July, I assumed that the increase in costs was burdensome, but having the information from its AWS bills, it seems that Anthropic’s actions directly caused Cursor’s costs to explode by over 100%.
While I can’t definitively say “this is exactly what did it,” the timelines match up exactly, the costs have never come down, Amazon offers provisioned throughput, and, more than likely, Cursor needs to keep a standard of uptime similar to that of Anthropic’s own direct API access. If this is what happened, it’s deeply shameful. Cursor, Anthropic’s largest customer, in the very same month it hit $500 million in annualized revenue, immediately had its AWS and Anthropic-related costs explode to the point that it had to dramatically reduce the value of its product just as it hit the apex of its revenue growth. It’s very difficult to see Service Tiers as anything other than an aggressive rent-seeking maneuver.

Yet another undiscussed part of the story is that the launch of Claude 4 Opus and Sonnet — and the subsequent launch of Service Tiers — coincided with the launch of Claude Code, a product that directly competes with Cursor, without the burden of having to pay itself for the cost of models or, indeed, having to deal with its own “Service Tiers.” Anthropic may have increased the prices on its largest client at the time it was launching a competitor, and I believe that this is what awaits any product built on top of OpenAI or Anthropic’s models.

I realize this has been a long, number-stuffed article, but the long and short of it is simple: Anthropic is burning all of its revenue on compute, and Anthropic will willingly increase the prices on its customers if it’ll help it burn less money, even though that doesn’t seem to be working. What I believe happened to Cursor will likely happen to every AI-native company, because in a very real sense, Anthropic’s products are a wrapper for its own models, except it only has to pay the (unprofitable) costs of running them on Amazon Web Services and Google Cloud. As a result, both OpenAI and Anthropic can (and may very well!) devour the market of any company that builds on top of their models.
OpenAI may have given Cursor free access to its GPT-5 models in August, but a month later, on September 15, 2025, it debuted massive upgrades to its competing “Codex” platform. Any product built on top of an AI model that shows any kind of success can be cloned immediately by OpenAI and Anthropic, and I believe that we’re going to see multiple price increases on AI-native companies in the next few months. After all, OpenAI already has its own priority processing product, which it launched shortly after Anthropic’s in June.

The ultimate problem is that there really are no winners in this situation. If Anthropic kills Cursor through aggressive rent-seeking, that directly eats into its own revenues. If Anthropic lets Cursor succeed, that’s revenue, but it’s also clearly unprofitable revenue. Everybody loses, but nobody loses more than Cursor’s (and other AI companies’) customers.

I’ve come away from this piece with a feeling of dread. Anthropic’s costs are out of control, and as things get more desperate, it appears to be lashing out at its customers — both companies like Cursor and Claude Code subscribers, who face weekly rate limits on its more powerful models and are chided for using a product they pay for. Again, I cannot say for certain, but the spike in costs is clear, and it feels like more than a coincidence to me.

There is no period of time that I can see in the just under two years of data I’ve been party to that suggests that Anthropic has any means of — or any success doing — cost-cutting, and the only thing this company seems capable of doing is increasing the amount of money it burns on a monthly basis. Based on what I have been party to, the more successful Anthropic becomes, the more its services cost.
The cost of inference is clearly increasing for customers, but based on its escalating monthly costs, the cost of inference appears to be high for Anthropic too, though it’s impossible to tell how much of its compute goes to training versus running inference. In any case, these costs seem to increase with the amount of money Anthropic makes, meaning that the current pricing of both subscriptions and API access seems unprofitable, and must increase dramatically — from my calculations, a 100% price increase might work, but good luck retaining every single customer and their customers too! — for this company to ever become sustainable. I don’t think that people would pay those prices. If anything, I think what we’re seeing in these numbers is a company bleeding out from costs that escalate the more that its user base grows. This is just my opinion, of course.

I’m tired of watching these companies burn billions of dollars to destroy our environment and steal from everybody. I’m tired that so many people have tried to pretend there’s a justification for burning billions of dollars every year, clinging to empty tropes about how this is just like Uber or Amazon Web Services, when Anthropic has built something far more mediocre.

Mr. Amodei, I am sure you will read this piece, and I can make time to chat in person on my show Better Offline. Perhaps this Friday? I even have some studio time on the books.

I do not have all the answers! I am going to do my best to go through the information I’ve obtained and give you a thorough review and analysis. This information provides a revealing — though incomplete — insight into the costs of running Anthropic and Cursor, but does not include other costs, like salaries and compute obtained from other providers. I cannot tell you (and do not have insight into) Anthropic’s actual private moves.
Any conclusions or speculation I make in this article will be based on my interpretations of the information I’ve received, as well as other publicly-available information. I have used estimates of Anthropic’s revenue based on reporting across the last ten months. Any estimates I make are detailed, and they are brief.

These costs are inclusive of every product bought on Amazon Web Services, including EC2, storage and database services (as well as literally everything else these companies pay for). Anthropic works with both Amazon Web Services and Google Cloud for compute. I do not have any information about its Google Cloud spend. The reason I bring this up is that Anthropic’s revenue is already being eaten up by its AWS spend. It’s likely billions more in the hole from Google Cloud and other operational expenses.

I have confirmed with sources that every single number I give around Anthropic and Cursor’s AWS spend is the final cash paid to Amazon after any discounts or credits. While I cannot disclose the identity of my source, I am 100% confident in these numbers, and have verified their veracity with other sources.

Where is the rest of Anthropic’s money going? How will it “stop burning cash” when its operational costs explode as its revenue increases? Here’s Anthropic’s monthly AWS spend:

- January 2024 - $52.9 million
- February 2024 - $60.9 million
- March 2024 - $74.3 million
- April 2024 - $101.1 million
- May 2024 - $100.1 million
- June 2024 - $101.8 million
- July 2024 - $118.9 million
- August 2024 - $128.8 million
- September 2024 - $127.8 million
- October 2024 - $169.6 million
- November 2024 - $146.5 million
- December 2024 - $176.1 million

And here’s Cursor’s:

- January 2025 - $1.459 million

This, apparently, is the month that Cursor hit $100 million annualized revenue — or $8.3 million a month, meaning it spent 17.5% of its revenue on AWS.
- February 2025 - $2.47 million
- March 2025 - $4.39 million
- April 2025 - $4.74 million

Cursor hit $200 million annualized ($16.6 million a month) at the end of March 2025, according to The Information, working out to spending 28% of its revenue on AWS.

- May 2025 - $6.19 million
- June 2025 - $12.67 million

So, Bloomberg reported that Cursor hit $500 million on June 5 2025, along with raising a $900 million funding round. Great news! Turns out it’d need to start handing a lot of that to Anthropic. This was, as I’ve discussed above, the month when Anthropic forced it to adopt “Service Tiers.” I go into detail about the situation here, but the long and short of it is that Anthropic increased the amount of tokens you burned by writing stuff to the cache (think of it like RAM in a computer), and AI coding startups are very cache-heavy, meaning that Cursor immediately took on what I believed would be massive new costs. As I discuss in what I just linked, this led Cursor to aggressively change its product, thereby vastly increasing its customers’ costs if they wanted to use the same service.

That same month, Cursor’s AWS costs — which I believe are the minority of its cloud compute costs — exploded by 104% (or by $6.48 million), and never returned to their previous levels. It’s conceivable that this surge is due to the compute-heavy nature of the latest Claude 4 models released that month — or, perhaps, Cursor sending more of its users to other models that it runs on Bedrock.

- July 2025 - $15.5 million

As you can see, Cursor’s costs continued to balloon in July, and I am guessing it’s because of the Service Tiers situation — which, I believe, indirectly resulted in Cursor pushing more users to models that it runs on Amazon’s infrastructure.

- August 2025 - $9.67 million

So, I can only guess as to why there was a drop here. User churn? It could be the launch of GPT-5 on Cursor, which gave users a week of free access to OpenAI’s new models.
What’s also interesting is that this was the month when Cursor announced that its previously free “auto” model (where Cursor would select the best available premium model or its own model) would now bill at “competitive token rates,” by which I mean it went from charging nothing to $1.25 per million input tokens and $6 per million output tokens. This change would take effect on September 15, 2025. On August 10, 2025, Tom Dotan of Newcomer reported that Cursor was “well above” $500 million in annualized revenue based on commentary from two sources.

- September 2025 - $12.91 million

Per the above, this is the month when Cursor started charging for its “auto” model.
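To make the percentages in these bullets reproducible, here's a quick sketch converting annualized revenue to a monthly figure and dividing the AWS bill into it (the revenue figures are from the reporting cited above):

```python
def aws_share(annualized_revenue_m, aws_spend_m):
    """Monthly AWS spend as a share of monthly revenue (annualized / 12)."""
    monthly_revenue = annualized_revenue_m / 12
    return aws_spend_m / monthly_revenue

# January 2025: ~$100M annualized revenue, $1.459M AWS bill.
print(f"Jan 2025: {aws_share(100, 1.459):.1%}")  # 17.5%

# Around the $200M annualized mark: $4.74M AWS bill in April.
print(f"Apr 2025: {aws_share(200, 4.74):.1%}")   # 28.4%
```

The point of the exercise: even as revenue doubled, the AWS bill grew faster, so the ratio got worse, not better.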


OpenAI Needs $400 Billion In The Next 12 Months

Hello readers! This premium edition features a generous free intro because I like to try and get some of the info out there, but the real in-depth stuff is below the cut. Nevertheless, I deeply appreciate anyone subscribing. On Monday I will have my biggest scoop ever, and it'll go out on the free newsletter because of its scale. This is possible because of people supporting me on the premium. Thanks so much for reading.

One of the only consistent critiques of my work is that I’m angry, irate, that I am taking myself too seriously, that I’m swearing too much, and that my arguments would be “better received” if I “calmed down.” Look at where being timid or deferential has got us. Broadcom and OpenAI have announced another 10GW of custom chips and supposed capacity that will supposedly be fully deployed by the end of 2029, and still the media neutrally reports these things as not simply doable, but rational.

To be clear, building a gigawatt of data center capacity costs at least $32.5 billion (though Jensen Huang says the computing hardware alone costs $50 billion, which excludes the buildings themselves and the supporting power infrastructure, and Barclays Bank says $50 billion to $60 billion) and takes two and a half years.

OpenAI has now promised 33GW of capacity across AMD, NVIDIA, Broadcom and the seven data centers built under Stargate, though one of those — in Lordstown, Ohio — is not actually a data center, with my source being “SoftBank,” speaking to WKBN in Lordstown, Ohio, which said it will “not be a full-blown data center,” and instead be “at the center of cutting-edge technology that will encompass storage containers that will hold the infrastructure for AI and data storage.” This wasn’t hard to find, by the way! I googled “SoftBank Lordstown” and up it came, ready for me to read with my eyes.
Putting all of that aside, I think it’s time that everybody started taking this situation far more seriously, by which I mean acknowledging the sheer recklessness and naked market manipulation taking place. But let’s make it really simple, and write out what’s meant to happen in the next year: in my most conservative estimate, these data centers will cost over $100 billion, and to be clear, a lot of that money needs to already be in OpenAI’s hands to get the data centers built. Or, some other dupe has to a) have the money, and b) be willing to front it.

All of this is a fucking joke. I’m sorry, I know some of you will read this, cowering from your screen like a B-movie vampire that just saw a crucifix, but it is a joke, and it is a fucking stupid joke, the only thing stupider being that any number of respectable media outlets are saying these things like they’ll actually happen. There is not enough time to build these things. If there was enough time, there wouldn’t be enough money. If there was enough money, there wouldn’t be enough transformers, electrical-grade steel, or specialised talent to run the power to the data centers.

Fuck! Piss! Shit! Swearing doesn’t change the fact that I’m right — none of what OpenAI, NVIDIA, Broadcom, and AMD are saying is possible, and it’s fair to ask why they’re saying it. I mean, we know. Number must go up, deal must go through, and Jensen Huang wouldn’t go on CNBC and say “yeah man if I’m honest I’ve got no fucking clue how Sam Altman is going to pay me, other than with the $10 billion I’m handing him in a month.
Anyway, NVIDIA’s accounts receivable keep increasing every quarter for a normal reason, don’t worry about it.”

But in all seriousness, we now have three publicly-traded tech firms that have all agreed to join Sam Altman’s No IT Loads Refused Cash Dump, all promising to do things on insane timelines that they — as executives of giant hardware manufacturers, or human beings with warm bodies and pulses and sciatica — all must know are impossible to meet.

What is the media meant to do? What are we, as regular people, meant to do? These stocks keep pumping based on completely nonsensical ideas, and we’re all meant to sit around pretending things are normal and good. They’re not! At some point somebody’s going to start paying people actual, real dollars at a scale that OpenAI has never truly had to reckon with. In this piece, I’m going to spell out in no uncertain terms exactly what OpenAI has to do in the next year to fulfil its destiny — having a bunch of capacity that cost ungodly amounts of money to serve demand that never arrives.

Yes, yes, I know, you’re going to tell me that OpenAI has 800 million weekly active users, and putting aside the fact that OpenAI’s own research (see page 10, footnote 20) says it double-counts users who are logged out if they use different devices, OpenAI is saying it wants to build 250 gigawatts of capacity by 2033, which will cost it $10 trillion, or one-third of the entire US economy last year. Who the fuck for?

One thing that’s important to note: in February, Goldman Sachs estimated that global data center capacity was around 55GW. In essence, OpenAI says it wants to add roughly five times that capacity — something that has grown organically over the past thirty or so years — by itself, and in eight years. And yes, it’ll cost one-third of America’s output in 2024. This is not a sensible proposition.
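Using the per-gigawatt construction costs quoted earlier ($32.5 billion on the low end, $50 billion to $60 billion per Jensen Huang and Barclays), the promised capacity prices out roughly like this. This is a back-of-envelope sketch, not a figure from any filing:

```python
def build_cost_b(gigawatts, cost_per_gw_b):
    """Total build-out cost in billions of USD."""
    return gigawatts * cost_per_gw_b

# 33GW is the currently-promised capacity; 250GW is the 2033 ambition.
for gw in (33, 250):
    low = build_cost_b(gw, 32.5)
    high = build_cost_b(gw, 60)
    print(f"{gw}GW: ${low / 1000:.1f}T to ${high / 1000:.1f}T")
```

The 33GW commitment alone lands north of a trillion dollars even at the most conservative cost, and the 250GW ambition brackets the $10 trillion figure cited above.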
Even if you think that OpenAI’s growth is impressive — it went from 700 million to 800 million weekly active users in the last two months — that is not the kind of growth that says “build capacity assuming that literally every single human being on Earth uses this all the time.”

Anyway, what exactly is OpenAI doing? Why does it need all this capacity? Even if it hits its $13 billion revenue projection for this year (it’s only at $5.3 billion or so as of the end of August, and for OpenAI to hit its targets it’ll need to make $1.5bn+ a month very soon), does it really think it’s going to effectively 10x the entire company from here? What possible sign is there of that happening other than a conga line of different executives willing to stake their reputations on blatant lies peddled by a man best known for needing, at any given moment, another billion dollars?

According to The Information, OpenAI spent $6.7 billion on research and development in the first six months of 2025, and according to Epoch AI, most of the $5 billion it spent on research and development in 2024 went on research, experimental, or derisking runs (basically running tests before doing the final training run) and models it would never release, with only $480 million going to training actual models that people will use. I should also add that GPT-4.5 was a dud — even Altman called it giant and expensive, and said it “wouldn’t crush benchmarks.”

I’m sorry, but what exactly is it that OpenAI has released in the last year-and-a-half that was worth burning $11.7 billion for? GPT-5? That was a huge letdown! Sora 2? The giant plagiarism machine that it’s already had to neuter? What is it that any of you believe that OpenAI is going to do with these fictional data centers?

The problem with ChatGPT isn’t just that it hallucinates — it’s that you can’t really say exactly what it can do, because you can’t really trust that it can do anything.
Sure, it’ll get a few things right a lot of the time, but what task is it able to do every time that you actually need? Say the answer is “something that took me an hour now takes me five minutes.” Cool! How many of those do you get? Again, OpenAI wants to build 250 gigawatts of data centers, and will need around ten trillion dollars to do it. “It’s going to be really good” is no longer enough.

And no, I’m sorry, they are not building AGI. Altman just told Politico a few weeks ago that if we didn’t have “models that are extraordinarily capable and do things that we ourselves cannot do” by 2030 he would be “very surprised.” Wow! What a stunning and confident statement. Let’s give this guy the ten trillion dollars he needs! And he’s gonna need it soon if he wants to build 250 gigawatts of capacity by 2033.

But let’s get a little more specific. Based on my calculations, in the next six months, OpenAI needs at least $50 billion to build a gigawatt of data centers for Broadcom — and to hit its goal of 10 gigawatts of data centers by the end of 2029, at least another $200 billion in the next 12 months, not including at least $50 billion to build a gigawatt of data centers for NVIDIA, $40 billion to pay for its 2026 compute, at least $50 billion to buy chips and build a gigawatt of data centers for AMD, at least $500 million to build its consumer device (and they can’t seem to work out what to build), and at least a billion dollars to hand off to ARM for a CPU to go with the new chips from Broadcom.

That’s $391.5 billion! That’s $23.5 billion more than the $368 billion of global venture capital raised in 2024! That’s nearly 11 times Uber’s total ($35.8 billion) lifetime funding, or 5.7 times the $67.6 billion in capital expenditures that Amazon spent building Amazon Web Services!

On top of all of this are OpenAI’s other costs.
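For anyone who wants to check my math, the line items sum like so (all figures in billions, per the estimates above):

```python
# OpenAI's funding needs over the next 12 months, in billions of USD,
# per the estimates laid out in the text.
openai_needs_b = {
    "Broadcom 1GW build": 50,
    "Further capacity toward 10GW by 2029": 200,
    "NVIDIA 1GW build": 50,
    "2026 compute contracts": 40,
    "AMD chips and 1GW build": 50,
    "Consumer device": 0.5,
    "ARM CPU": 1,
}

total = sum(openai_needs_b.values())
print(f"Total: ${total}B")                        # Total: $391.5B
print(f"vs. 2024 global VC: {total - 368:+}B")    # vs. 2024 global VC: +23.5B
```

One company, in one year, asking for more than every venture-backed startup on Earth raised in 2024 combined.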
According to The Information, OpenAI spent $2 billion alone on sales and marketing in the first half of 2025, and likely spends billions of dollars on salaries, meaning that it’ll likely need at least another $10 billion on top. As this is a vague cost, I’m going with a rounded $400 billion number, though I believe it’s actually going to be more. And to be clear, to complete these deals by the end of 2026, OpenAI needs large swaths of this money by February 2026.

I know, I know, you’re going to say that OpenAI will simply “raise debt” and “work it out,” but OpenAI has less than a year to do that, because OpenAI has promised in its own announcements that all of these things would happen by the end of December 2026, and even if they’re going to happen in 2027, data centers require actual money to begin construction, and Broadcom, NVIDIA and AMD are going to actually require cash for those chips before they ship them. Even if OpenAI finds multiple consortiums of paypigs to take on the tens of billions of dollars of data center funding, there are limits, and based on OpenAI’s aggressive (and insane) timelines, it will need to raise multiple different versions of the largest known data center deals of all time, multiple times a year, every single year.

Say that happens. OpenAI will still need to pay those compute contracts with Oracle, CoreWeave, Microsoft (I believe its Azure credits have run out) and Google (via CoreWeave) with actual, real cash — $40 billion worth — when it already burned $9.2 billion on compute in the first half of 2025 against revenues of $4.3 billion. OpenAI will still need to pay its staff, its storage, and the sales and marketing department that cost it $2 billion in the first half of 2025, all while converting its non-profit into a for-profit by the end of the year, or it loses $20 billion in funding from SoftBank.
Also, if it doesn’t convert to a for-profit by October 2026, its $6.6 billion funding round from 2024 converts to debt.

The burden that OpenAI is putting on the financial system is remarkable, and actively dangerous. It would absorb, at this rate, the capital expenditures of multiple hyperscalers, requiring multiple $30 billion debt financing operations a year, and for it to hit its goal of 250 gigawatts by the end of 2033, it will likely have to outpace the capital expenditures of every other company in the world. OpenAI is an out-of-control monstrosity that is going to harm every party that depends upon it completing its plans. For it to succeed, it will have to absorb over a trillion dollars a year — and to hit its target, it will likely have to eclipse the $1.7 trillion in global private equity deal volume in 2024, and become a significant part of global trade ($33 trillion in 2025).

There isn’t enough money to do this without diverting most of the money that exists to doing it, and even if that were to happen, there isn’t enough time to do any of the stuff that has been promised in anything approaching the timelines promised, because OpenAI is making this up as it goes along and somehow everybody is believing it. At some point, OpenAI is going to have to actually do the things it has promised to do, and the global financial system is incapable of supporting them.

And to be clear, OpenAI cannot really do any of the things it’s promised. Just take a look at the Oracle deal! None of this bullshit is happening, and it’s time to be honest about what’s actually going on. OpenAI is not building “the AI industry,” as this is capacity for one company that burns billions of dollars and has absolutely no path to profitability. This is a giant, selfish waste of money and time, one that will collapse the second that somebody’s confidence wavers.
I realize that it’s tempting to write “Sam Altman is building a giant data center empire,” but what Sam Altman is actually doing is lying. He is lying to everybody. He is saying that he will build 250GW of data centers in the space of eight years, an impossible feat, requiring more money than anybody would ever give him, in volumes and at intervals that are impossible for anybody to raise. Sam Altman’s singular talent is finding people willing to believe his shit or join him in an economy-supporting confidence game, and the recklessness of continuing to do so will only harm retail investors — regular people beguiled by the bullshit machine and bullshit masters making billions promising they’ll make trillions.

To prove it, I’m going to write down everything that will need to take place in the next twelve months for this to happen, and illustrate the timelines of everything involved.

In the second half of 2026, OpenAI and Broadcom will tape out and successfully complete an AI inference chip, then manufacture enough of them to fill a 1GW data center. That data center will be built in an as-yet-unknown location, and will have at least 1GW of power, but more realistically it will need 1.2GW to 1.3GW of power, because for every 1GW of IT load, you need extra power capacity in reserve for the hottest day of the year, when the cooling system works hardest and power transmission losses are highest. OpenAI does not appear to have a site for this data center, and thus has not broken ground on it.

In the second half of 2026, AMD and OpenAI will begin “the first 1 gigawatt deployment of AMD Instinct MI450 GPUs.” This will take place in an as-yet-unnamed data center location, which, to be completed by that time, would have needed to start construction and early procurement of power at least a year ago, if not more.

In the second half of 2026, OpenAI and NVIDIA will deploy the first gigawatt of NVIDIA’s Vera Rubin GPU systems as part of their $100 billion deal.
These GPUs will be deployed in a data center of some sort, which remains unnamed, but for them to meet this timeline, that data center would need to have started construction at least a year ago.

Oracle needs 4.5 gigawatts of IT load capacity to provide OpenAI the compute for its $300 billion, five-year-long deal. Despite Oracle CEO Clay Magouyrk saying “of course OpenAI can pay $60 billion a year,” OpenAI cannot actually afford to pay $60 billion a year. It’s on course to lose billions of dollars this year. Even if it could, Oracle needs 4.5GW of capacity. Stargate Abilene is meant to be completed by the end of 2026 (six months behind schedule), but (as I reported last week) only appears to have 200MW of the 1.5+GW of actual, real power it needs right now, and won’t have enough by the end of the year. Even if Abilene were completed on time, Oracle only has one other data center location planned — a 1.4GW data center plot in Shackelford, Texas that has only just begun construction, and will only have a single building by the second half of 2026.


The AI Bubble's Impossible Promises

Readers: I’ve done a very generous “free” portion of this newsletter, but I do recommend paying for premium to get the in-depth analysis underpinning the intro. That being said, I want as many people as possible to get the general feel for this piece. Things are insane, and it’s time to be realistic about what the future actually looks like.

We’re in a bubble. Everybody says we’re in a bubble. You can’t say we’re not in a bubble anymore without sounding insane, because everybody is now talking about how OpenAI has promised everybody $1 trillion — something you could have read about two weeks ago on my premium newsletter. Yet we live in a chaotic, insane world, where we can watch the news and hear hand-wringing over the fact that we’re in a bubble, read article after CEO after article after CEO after analyst after investor saying we’re in a bubble, yet the market continues to rip ever-upward on increasingly more-insane ideas, in part thanks to analysts that continue to ignore the very signs that they’re relied upon to read.

AMD and OpenAI signed a very strange deal where AMD will give OpenAI the chance to buy 160 million shares at a cent apiece, in tranches of indeterminate size, for every gigawatt of data centers OpenAI builds using AMD’s chips, adding that OpenAI has agreed to buy “six gigawatts of GPUs.” This is a peculiar way to measure a GPU purchase, given that GPUs are traditionally sold by the unit at a price per GPU, but nevertheless, these chips are going to be a mixture of AMD’s Instinct MI450 GPUs — which we don’t know the specs of! — and its current-generation Instinct MI350 GPUs, making the actual scale of these purchases a little difficult to grasp, though the Wall Street Journal says it would “result in tens of billions of dollars in new revenue” for AMD. This AMD deal is weird, but one that’s rigged in favour of Lisa Su and AMD.
OpenAI doesn’t get a dollar at any point - it has to work out how to buy those GPUs and figure out how to build six further gigawatts of data centers on top of the 10GW of data centers it promised to build for NVIDIA and the seven-to-ten gigawatts that are allegedly being built for Stargate, bringing it to a total of somewhere between 23 and 26 gigawatts of data center capacity.

Hell, while we’re on the subject, has anyone thought about how difficult and expensive it is to build a data center? Everybody is very casual with how they talk about Sam Altman’s theoretical promises of trillions of dollars of data center infrastructure, and I'm not sure anybody realizes how difficult even the very basics of this plan will be. Nevertheless, everybody is happily publishing stories about how Stargate Abilene, Texas — OpenAI’s massive data center with Oracle — is “open,” by which they mean two buildings, and I’m not even confident both of them are providing compute to OpenAI yet. There are six more of them that need to get built for this thing to start rocking at 1.2GW — even though it’s only 1.1GW according to my sources in Abilene.

But, hey, sorry — one minute — while we’re on that subject, did anybody visiting Abilene in the last week or so ever ask whether they’ll have enough power there? Don’t worry, you don’t need to look. I’m sure you were just about to, but I did the hard work for you and read up on it, and it turns out that Stargate Abilene only has 200MW of power — a 200MW substation that, according to my sources, has only been built within the last couple of months, with 350MW of gas turbine generators that connect to a natural gas power plant that might get built by the end of the year.
Said turbines are extremely expensive, featuring volatile pricing (for context, natural gas price volatility fell in Q2 2025…to 69% annualized) and even more volatile environmental consequences, and are, while permitted for it (this will download the PDF of the permit), impractical and expensive to use long-term. Analyst James van Geelen, founder of Citrini Research, recently said on Bloomberg’s Odd Lots podcast that these are “not the really good natural gas turbines,” because the really good ones would take seven years to deliver due to a natural gas turbine shortage. But they’re going to have to do.

According to sources in Abilene, developer Lancium has only recently broken ground on the 1GW substation and five transformers OpenAI’s going to need to build out there, and based on my conversations with numerous analysts and researchers, it does not appear that Stargate Abilene will have sufficient power before the year 2027.

Then there’s the question of whether 1GW of power actually gets you 1GW of compute. This is something you never see addressed in the coverage of OpenAI’s various construction commitments, but it’s an important point to make. Analyst Daniel Bizo, Research Director at the Uptime Institute, explained that 1 gigawatt of power is only sufficient to run (roughly) 700 megawatts of data center capacity. We’ll get into the finer details of that later in this newsletter, but if we assume that ratio is accurate, we’re left with a troubling problem. That figure represents a PUE — Power Usage Effectiveness — of 1.43, and if we apply it to Stargate Abilene, we see that the site needs at least 1.7GW of power, and currently only has 200MW. Stargate Abilene does not have sufficient power to run at even half of its supposed IT load of 1.2GW, and at its present capacity — assuming that the gas turbines function at full power — can only hope to run 370MW to 460MW of IT load.
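The PUE arithmetic is worth writing out, since it's the crux of the Abilene problem. A small sketch, with the 1.43 ratio following directly from Bizo's 700MW-per-gigawatt figure:

```python
# PUE (Power Usage Effectiveness) = total facility power / IT load.
# Bizo's estimate: 1GW of power runs ~700MW of IT load.
PUE = 1000 / 700  # ~1.43

def power_needed_mw(it_load_mw):
    """Facility power required to run a given IT load."""
    return it_load_mw * PUE

def it_load_supported_mw(power_mw):
    """IT load a given amount of facility power can actually run."""
    return power_mw / PUE

# Abilene's supposed 1.2GW IT load:
print(f"1,200MW IT load needs {power_needed_mw(1200):.0f}MW")    # ~1714MW

# Abilene's power today: 200MW substation + 350MW of gas turbines.
print(f"550MW of power runs {it_load_supported_mw(550):.0f}MW")  # ~385MW
```

In other words: even with every turbine screaming at full tilt, the site can run roughly a third of its advertised IT load.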
I’ve seen article after article about the gas turbines and their use of fracked gas — a disgusting and wasteful act typical of OpenAI — but nobody appears to have asked “how much power does a 1.2GW data center require?” and then chased it with “how much power does Stargate Abilene have?” The answer is not enough, and the significance of that “not enough” is remarkable.

Today, I’m going to tell you, at length, how impossible the future of generative AI is. Gigawatt data centers are a ridiculous pipe dream, one that runs face-first into the walls of reality. The world’s governments and media have been far too cavalier with the term “gigawatt,” casually breezing by the fact that Altman’s plans require 17 or more nuclear reactors’ worth of power, as if building power is quick and easy and cheap and just happens.

I believe that many of you think that this is an issue of permitting — of simply throwing enough money at the problem — when we are in the midst of a shortage of the electrical-grade steel and transformers required to expand America’s (and the world’s) power grid. I realize it’s easy to get blinded by the constant drumbeat of “gargoyle-like tycoon cabal builds 1GW data center” and feel that they will simply overwhelm the problem with money, but no, I’m afraid that isn’t the case at all, and all of this is so silly, so ridiculous, so cartoonishly bad that it threatens even the seemingly-infinite wealth of Elon Musk, with xAI burning over a billion dollars a month and planning to spend tens of billions of dollars building the Colossus 2 data center, dragging two billion dollars from SpaceX in his desperate quest to burn as much money as possible for no reason.

This is the age of hubris — a time in which we are going to watch stupid, powerful and rich men fuck up their legacies by finding a technology so vulgar in its costs and mythical outcomes that it drives the avaricious insane and makes fools of them.
Or perhaps this is what happens when somebody believes they've found the ultimate con — the ability to become both the customer and the business, which is exactly what NVIDIA is doing to fund the chips behind Colossus 2. According to Bloomberg, NVIDIA is creating a company — a "special purpose vehicle" — that it will invest $2 billion in, along with several other backers. Once that's done, the special purpose vehicle will then use that equity to raise debt from banks, buy GPUs from NVIDIA, and then rent those GPUs to Elon Musk for five years. Hell, why make it so complex? NVIDIA invested money in a company specifically built to buy chips from it, which then promptly handed the money back to NVIDIA along with a bunch of other money, and then whatever happens next is somebody else's problem. Actually, wait — how long do GPUs last, exactly? Four years for training? Three years? The A100 GPU started shipping in May 2020, and the H100 (and the Hopper GPU generation) entered full production in September 2022, meaning that we're hurtling at speed toward the time when we're going to start seeing a remarkable number of chips wearing down, which should be a concern for companies like Microsoft, which bought 150,000 Hopper GPUs in 2023 and 485,000 of them in 2024. Alright, let me just be blunt: the entire economy of debt around GPUs is insane. Assuming these things don't die within five years (their warranties generally end in three), their value absolutely will, as NVIDIA has committed to releasing a new AI chip every single year, likely with significant increases to power and power efficiency. At the end of the five-year period, the Special Purpose Vehicle will be the proud owner of five-year-old chips that nobody is going to want to rent at the price Elon Musk has been paying for the last five years. Don't believe me?
Take a look at the rental prices for H100 GPUs, which went from $8-an-hour in 2023 to $2-an-hour in 2024, or the Silicon Data Indexes (aggregated realtime indexes of hourly prices) that show H100 rentals at around $2.14-an-hour and A100 rentals at a dollar-an-hour, with Vast.AI offering them at as little as $0.67 an hour. This is, by the way, a problem that faces literally every data center being built in the world, and I feel insane talking about it. It feels like nobody is talking about how impossible and ridiculous all of this is. It's one thing that OpenAI has promised one trillion dollars to people — it's another that large swaths of that will be spent on hardware that will, by the end of these agreements, be half-obsolete and generating less revenue than ever. Think about it. Let's assume we live in a fantasy land where OpenAI is somehow able to pay Oracle $300 billion over 5 years — which, although the costs will almost certainly grow over time, and some of the payments are front-loaded, averages out to $5bn each month, a truly insane number that's in excess of what Netflix makes in revenue. Said money is paying for access to Blackwell GPUs, which will, by then, be at least two generations behind, with NVIDIA's Vera Rubin GPUs due next year. What happens to that GPU infrastructure? Why would OpenAI continue to pay the same rental rate for five-year-old Blackwell GPUs? All of these ludicrous investments are going into building data centers full of what will, at that point, be old tech. Let me put it in simple terms: imagine you, for some reason, rented an M1 Mac when it was released in 2020, and your rental was done in 2025, when we're onto the M4 series. Would you expect somebody to rent it at the same price? Or would they say "hey, wait a minute, for that price I could rent one of the newer generation ones"? And you'd be bloody right!
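The two numbers doing the work in that paragraph check out (a sketch; the flat monthly average deliberately ignores the front-loading and growth noted above):

```python
# Oracle commitment: $300B over five years, averaged flat across the term.
oracle_total_bn = 300
monthly_bn = oracle_total_bn / (5 * 12)
print(monthly_bn)  # 5.0 — the ~$5bn-a-month figure, more than Netflix's monthly revenue

# H100 rental prices: $8/hour in 2023 down to roughly $2/hour in 2024.
price_drop = (8 - 2) / 8
print(f"{price_drop:.0%}")  # 75% — a 75% collapse in rental value in about a year
```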
Now, I realize that $70,000 data center GPUs are a little different to laptops, but that only makes their decline in value more profound, especially considering the billions of dollars of infrastructure built around them. And that's the problem. Private equity firms are sinking $50 billion or more a quarter into theoretical data center projects full of what will be years-old GPU technology, despite the fact that there's no real demand for generative AI compute, and that's before you get to the grimmest fact of all: that even if you can build these data centers, it will take years and billions of dollars to deliver the power, if it's even possible to do so. Harvard economist Jason Furman estimates that data centers and software accounted for 92% of GDP growth in the first half of this year, in line with my conversation with economist Paul Kedrosky from a few months ago. All of this money is being sunk into infrastructure for an "AI revolution" that doesn't exist, as every single AI company is unprofitable, with pathetic revenues ($61 billion or so if you include CoreWeave and Lambda, both of which are being handed money by NVIDIA), impossible-to-control costs that have only ever increased, and no ability to replace labor at scale (and especially not software engineers). OpenAI needs more than a trillion dollars to pay its massive cloud compute bills and build 27 gigawatts of data centers, and to get there, it needs to start making incredible amounts of money, a job that's been mostly handed to Fidji Simo, OpenAI's new CEO of Applications, who is solely responsible for turning a company that loses billions of dollars into one that makes $200 billion in 2030 with $38 billion in profit. She's been set up to fail, and I'm going to explain why. In fact, today I'm going to explain to you how impossible all of this is — not just expensive, not just silly, but actively impossible within any of the timelines set.
Stargate will not have the power it needs before the middle of 2026 — the beginning of Oracle's fiscal year 2027, when OpenAI has to pay it $30 billion for compute — or, according to The Information, choose to walk away if the capacity isn't complete. And based on my research, analysis and discussions with power and data center analysts, gigawatt data centers are, by and large, a pipe dream, with their associated power infrastructure taking two to four years to build, and that's if everything goes smoothly. OpenAI cannot build a gigawatt of data centers for AMD by the "second half of 2026." It hasn't even announced the financing, let alone where the data center might be, and until it does that, it's impossible to plan the power, which in and of itself takes months before you even start building. Every promise you're reading in the news is impossible. Nobody has ever built a gigawatt data center, and more than likely nobody ever will. Stargate Abilene isn't going to be ready in 2026, won't have sufficient power until 2027 at best, and based on the conversations I've had, it's very unlikely it will build that gigawatt substation before the year 2028. In fact, let me put it a little simpler: all of those data center deals you've seen announced are basically bullshit. Even if they get the permits and the money, there are massive physical challenges that cannot be resolved by simply throwing money at them. Today I'm going to tell you a story of chaos, hubris and fantastical thinking. I want you to come away from this with a full picture of how ridiculous the promises are, and that's before you get to the cold hard reality that AI fucking sucks.


OpenAI Is Just Another Boring, Desperate AI Startup

What is OpenAI? I realize you might say "a foundation model lab" or "the company that runs ChatGPT," but that doesn't really give the full picture of everything it's promised, or claimed, or leaked that it was or would be. No, really, if you believe its leaks to the press... To be clear, many of these are ideas that OpenAI has leaked specifically so the media can continue to pump up its valuation and so it can continue to raise the money it needs — at least $1 trillion over the next four or five years — and I don't believe the theoretical (or actual) costs of many of the things I've listed are included. OpenAI wants you to believe it is everything, because in reality it's a company bereft of strategy, focus or vision. The GPT-5 upgrade for ChatGPT was a dud — an industry-wide embarrassment for arguably the most-hyped product in AI history, one that (as I revealed a few months ago) costs more to operate than its predecessor, not because of any inherent capability upgrade, but because of how it processes the prompts its users provide — and now it's unclear what it is that this company does. Does it make hardware? Software? Ads? Is it going to lease you GPUs to use for your own AI projects? Is it going to certify you as an AI expert? Notice how I've listed a whole bunch of stuff that isn't ChatGPT, which will, if you look at The Information's reporting of its projections, remain the vast majority of its revenue until 2027, at which point "agents" and "new products including free user monetization" will magically kick in. In reality, OpenAI is an extremely boring (and bad!) software business. It makes the majority of its revenue selling subscriptions to ChatGPT, and apparently had 20 million paid subscribers (as of April) and 5 million business subscribers (as of August, though 500,000 of them are Cal State University seats paid at $2.50 a month). It also loses incredibly large amounts of money.
Yes, I realize that OpenAI also sells access to its API, but as you can see from the chart above, it is making a teeny tiny sliver of revenue from it in 2025, though I will also add that this chart has a little bit of green for "agent" revenue, which means it's very likely bullshit. Operator, OpenAI's so-called agent, is barely functional, and I have no idea how anyone would even begin to charge money for it outside of "please try my broken product." In any case, API sales appear to be a very, very small part of OpenAI's revenue stream, and that heavily suggests a lack of interest in integrating its models at scale. Worse still, this effectively turns OpenAI into just another AI startup. Think about it: if OpenAI can't make the majority of its money through "innovating" in the development of large language models (LLMs), then it's just another company plugging LLMs into its software. While ChatGPT may be a very popular product, it is, by definition (and in its name!), a GPT wrapper, the few differences being that OpenAI pays its own immediate costs, has the people necessary to continue improving its own models, and continually makes promises to convince people it's anything other than just another AI startup. In fact, the only real difference is the amount of money backing it. Otherwise, OpenAI could be literally any foundation model company, and with a lack of real innovation within those models, it's just another startup trying to find ways to monetize generative AI, an industry that only ever seems to lose money. As a result, we should start evaluating OpenAI as just another AI startup, as its promises do not appear to mesh with any coherent strategy, other than "we need $1 trillion."
There does not seem to be much of a plan on a day-to-day basis, nor does there seem to be one about what OpenAI should be, other than that OpenAI will be a consumer hardware, consumer software, enterprise SaaS and data center operator, as well as running a social network. As I've discussed many times, LLMs are inherently flawed due to their probabilistic nature. "Hallucinations" — when a model authoritatively states something is true when it isn't (or takes an action that seems the most likely course of action, even if it isn't the right one) — are, according to OpenAI's own research, a "mathematically inevitable" feature of the technology, meaning that there is no fixing their most glaring, obvious problem, even with "perfect data." I'd wager the reason OpenAI is so eager to build out so much capacity while leaking so many diverse business lines is an attempt to get away from a dark truth: that when you peel away the hype, ChatGPT is a wrapper, every product it makes is a wrapper, and OpenAI is pretty fucking terrible at making products. Today I'm going to walk you through a fairly unique position: that OpenAI is just another boring AI startup lacking any meaningful product roadmap or strategy, using the press as a tool to pump its bags while very rarely delivering on what it's promised. It is a company with massive amounts of cash, industrial backing, and brand recognition, and otherwise is, much like its customers, desperately trying to work out how to make money selling products built on top of Large Language Models. OpenAI lives and dies on its mythology as the center of innovation in the world of AI, yet the reality is so much more mediocre. Its revenue growth is slowing, its products are commoditized, its models are hardly state-of-the-art, the overall generative AI industry has lost its sheen, and its killer app is a mythology that has converted a handful of very rich people and very few others.
OpenAI spent, according to The Information, 150% ($6.7 billion in costs) of its H1 2025 revenue ($4.3 billion) on research and development, producing the deeply-underwhelming GPT-5 and Sora 2, an app that I estimate costs it upwards of $5 for each video generation, based on Azure's published rates for the first Sora model, though it's my belief that these rates are unprofitable, all so that it can gain a few more users. To be clear, R&D is good, and useful, and in my experience, the companies that spend deeply on it tend to be the ones that do well. The reason why Huawei has managed to outpace its American rivals in several key areas — like automotive technology and telecommunications — is because it spends around a quarter of its revenue on developing new technologies and entering new markets, rather than on stock buybacks and dividends. The difference is that said R&D spending is both sustainable and useful, and has led to Huawei becoming a much stronger business, even as it languishes on a Commerce Department entity list that effectively cuts it off from US-made or US-origin parts or IP. Considering that OpenAI's R&D spending was 38.28% of its cash-on-hand by the end of the period (totalling $17.5bn, which we'll get to later), and what we've seen as a result, it's hard to describe it as either sustainable or useful. OpenAI isn't innovative, it's exploitative: a giant multi-billion dollar grift attempting to hide how deeply unexciting it is, and how nonsensical it is to continue backing it. Sam Altman is an excellent operator, capable of spreading his mediocre, half-baked mantras about how 2025 was the year AI got smarter than us, or how we'll be building 1GW data centers each week (something that, by my estimations, takes 2.5 years), taking advantage of how many people in the media, markets and global governments don't know a fucking thing about anything. OpenAI is also getting desperate.
Beneath the surface of the media hype and trillion-dollar promises is a company struggling to maintain relevance, its entire existence built on top of hype and mythology. And at this rate, I believe it's going to miss its 2025 revenue projections, all while burning billions more than anyone has anticipated. OpenAI is a social media company, this week launching Sora 2, a social feed entirely made up of generative video. OpenAI is a workplace productivity company, allegedly working on its own productivity suite to compete with Microsoft. OpenAI is a jobs portal, announcing in September it was "developing an AI-powered hiring platform," which it will launch "by mid-2026." OpenAI is an ads company, apparently trying to hire an ads chief, with the (alleged) intent to start showing ads in ChatGPT "by 2026." OpenAI is a company that would sell AI compute like Microsoft Azure or Amazon Web Services, or at least is considering being one, with CFO Sarah Friar telling Bloomberg in August that it is not "actively looking" at such an effort today but will "think about it as a business down the line, for sure." OpenAI is a fabless semiconductor design company, launching its own AI chips in, again, 2026 with Broadcom, but only for internal use. OpenAI is a consumer hardware company, preparing to launch a device by the end of 2026 or early 2027 and hiring a bunch of Apple people to work on it, as well as considering — again, it's just leaking random stuff at this point to pump up its value — a smart speaker, a voice recorder and AR glasses. OpenAI is also working on its own browser, I guess.


The Case Against Generative AI

Soundtrack: Queens of the Stone Age - First It Giveth Before we go any further: This is, for the third time this year, the longest newsletter I've ever written, weighing in somewhere around 18,500 words. I've written it specifically to be read at your leisure — dip in and out where you'd like — but also in one go.  This is my comprehensive case that yes, we’re in a bubble, one that will inevitably (and violently) collapse in the near future. I'll also be cutting this into a four-part episode starting tomorrow on my podcast Better Offline . I deeply appreciate your time. If you like this newsletter, please think about subscribing to the premium, which I write weekly. Thanks for reading. Alright, let’s do this one last time . In 2022, a (kind-of) company called OpenAI surprised the world with a website called ChatGPT that could generate text that sort-of sounded like a person using a technology called Large Language Models (LLMs), which can also be used to generate images, video and computer code.  Large Language Models require entire clusters of servers connected with high-speed networking, all containing this thing called a GPU — graphics processing units. These are different to the GPUs in your Xbox, or laptop, or gaming PC. They cost much, much more, and they’re good at doing the processes of inference (the creation of the output of any LLM) and training (feeding masses of training data to models, or feeding them information about what a good output might look like, so they can later identify a thing or replicate it). These models showed some immediate promise in their ability to articulate concepts or generate video, visuals, audio, text and code. They also immediately had one glaring, obvious problem: because they’re probabilistic, these models can’t actually be relied upon to do the same thing every single time. 
So, if you generated a picture of a person that you wanted to, for example, use in a story book, every time you created a new page, using the same prompt to describe the protagonist, that person would look different — and that difference could be minor (something that a reader should shrug off), or it could make that character look like a completely different person. Moreover, the probabilistic nature of generative AI meant that whenever you asked it a question, it would guess at the answer, not because it knew the answer, but because it was guessing at the right word to add to a sentence based on previous training data. As a result, these models would frequently make mistakes — something which we later referred to as "hallucinations." And that's not even mentioning the cost of training these models, the cost of running them, the vast amounts of computational power they required, the fact that the legality of using material scraped from books and the web without the owners' permission was (and remains) legally dubious, or the fact that nobody seemed to know how to use these models to actually create profitable businesses. These problems were overshadowed by something flashy, and new, and something that investors — and the tech media — believed would eventually automate the single thing that's proven most resistant to automation: namely, knowledge work and the creative economy. This newness and hype and these expectations sent the market into a frenzy, with every hyperscaler immediately creating the most aggressive market for one supplier I've ever seen. NVIDIA has sold over $200 billion of GPUs since the beginning of 2023, becoming the largest company on the stock market and trading at over $170 as of writing this sentence, only a few years after being worth $19.52 a share. While I've talked about some of the propelling factors behind the AI wave — automation and novelty — that's not a complete picture.
A huge reason why everybody decided to "do AI" was because the software industry's growth was slowing, with SaaS (Software as a Service) company valuations stalling or dropping, resulting in the terrifying prospect of companies having to "under promise and over deliver" and "be efficient." Things that normal companies — those whose valuations aren't contingent on ever-increasing, ever-constant growth — don't have to worry about, because they're normal companies. Suddenly, there was the promise of a new technology — Large Language Models — that was getting exponentially more powerful, which was mostly a lie but hard to disprove, because "powerful" can mean basically anything, and the definition of "powerful" depended entirely on whoever you asked at any given time, and what that person's motivations were. The media also immediately started tripping on its own feet, mistakenly claiming OpenAI's GPT-4 model tricked a Taskrabbit into solving a CAPTCHA (it didn't — this never happened), or saying that "people who don't know how to code already [used] bots to produce full-fledged games," and if you're wondering what "full-fledged" means, it means "Pong" and a cobbled-together rolling demo of SkyRoads, a game from 1993. The media (and investors) helped peddle the narrative that AI was always getting better, could do basically anything, and that any problems you saw today would inevitably be solved in a few short months, or years, or, well, at some point, I guess. LLMs were touted as a digital panacea, and the companies building them offered traditional software companies the chance to plug these models into their software using an API, thus allowing them to ride the same generative AI wave that every other company was riding.
The model companies similarly started going after individual and business customers, offering software and subscriptions that promised the world, though this mostly boiled down to chatbots that could generate stuff, and then doubled down with the promise of “agents” — a marketing term that’s meant to make you think “autonomous digital worker” but really means “ broken digital product .” Throughout this era, investors and the media spoke with a sense of inevitability that they never really backed up with data. It was an era based on confidently-asserted “vibes.” Everything was always getting better and more powerful, even though there was never much proof that this was truly disruptive technology, other than in its ability to disrupt apps you were using with AI — making them worse by, for example, suggesting questions on every Facebook post that you could ask Meta AI, but which Meta AI couldn’t answer. “AI” was omnipresent, and it eventually grew to mean everything and nothing. OpenAI would see its every move lorded over like a gifted child, its CEO Sam Altman called the “ Oppenheimer of Our Age ,” even if it wasn’t really obvious why everyone was impressed. GPT-4 felt like something a bit different, but was it actually meaningful?  The thing is, Artificial Intelligence is built and sold on not just faith, but a series of myths that the AI boosters expect us to believe with the same certainty that we treat things like gravity, or the boiling point of water.  Can large language models actually replace coders? Not really, no, and I’ll get into why later in the piece. Can Sora — OpenAI’s video creation tool — replace actors or animators? No, not at all, but it still fills the air full of tension because you can immediately see who is pre-registered to replace everyone that works for them.  AI is apparently replacing workers, but nobody appears to be able to prove it! 
But every few weeks a story runs where everybody tries to pretend that AI is replacing workers with some poorly-sourced and incomprehensible study , never actually saying “someone’s job got replaced by AI” because it isn’t happening at scale, and because if you provide real-world examples, people can actually check. To be clear, some people have lost jobs to AI, just not the white collar workers, software engineers, or really any of the career paths that the mainstream media and AI investors would have you believe.  Brian Merchant has done excellent work covering how LLMs have devoured the work of translators , using cheap, “almost good” automation to lower already-stagnant wages in a field that was already hurting before the advent of generative AI, with some having to abandon the field, and others pushed into bankruptcy. I’ve heard the same for art directors, SEO experts, and copy editors, and Christopher Mims of the Wall Street Journal covered these last year .  These are all fields with something in common: shitty bosses with little regard for their customers who have been eagerly waiting for the opportunity to slash contract labor. To quote Merchant, “the drumbeat, marketing, and pop culture of ‘powerful AI’ encourages and permits management to replace or degrade jobs they might not otherwise have.”  Across the board, the people being “replaced” by AI are the victims of lazy, incompetent cost-cutters who don’t care if they ship poorly-translated text. To quote Merchant again, “[AI hype] has created the cover necessary to justify slashing rates and accepting “good enough” automation output for video games and media products.” Yet the jobs crisis facing translators speaks to the larger flaws of the Large Language Model era, and why other careers aren’t seeing this kind of disruption. Generative AI creates outputs , and by extension defines all labor as some kind of output created from a request. 
In the case of translation, it’s possible for a company to get by with a shitty version, because many customers see translation as “what do these words say,” even though ( as one worker told Merchant ) translation is about conveying meaning. Nevertheless, “translation” work had already started to condense to a world where humans would at times clean up machine-generated text, and the same worker warned that the same might come for other industries. Yet the problem is that translation is a heavily output-driven industry, one where (idiot) bosses can say “oh yeah that’s fine” because they ran an output back through Google Translate and it seemed fine in their native tongue. The problems of a poor translation are obvious, but the customers of translation are, it seems, often capable of getting by with a shitty product. The problem is that most jobs are not output-driven at all, and what we’re buying from a human being is a person’s ability to think.   Every CEO talking about AI replacing workers is an example of the real problem: that most companies are run by people who don’t understand or experience the problems they’re solving, don’t do any real work, don’t face any real problems, and thus can never be trusted to solve them. The Era of the Business Idiot is the result of letting management consultants and neoliberal “free market” sociopaths take over everything, leaving us with companies run by people who don’t know how the companies make money, just that they must always make more. When you’re a big, stupid asshole, every job that you see is condensed to its outputs, and not the stuff that leads up to the output, or the small nuances and conscious decisions that make an output good as opposed to simply acceptable, or even bad.  What does a software engineer do? They write code! What does a writer do? They write words! What does a hairdresser do? They cut hair!  Yet that’s not actually the case.  
As I’ll get into later, a software engineer does far more than just code, and when they write code they’re not just saying “what would solve this problem?” with a big smile on their face — they’re taking into account their years of experience, what code does, what code could do , all the things that might break as a result, and all of the things that you can’t really tell from just looking at code , like whether there’s a reason things are made in a particular way. A good coder doesn’t just hammer at the keyboard with the aim of doing a particular task. They factor in questions like: How does this functionality fit into the code that’s already here? Or, if someone has to update this code in the future, how do I make it easy for them to understand what I’ve written and to make changes without breaking a bunch of other stuff? A writer doesn’t just “write words.” They jostle ideas and ideals and emotions and thoughts and facts and feelings into a condensed piece of text, explaining both what’s happening and why it’s happening from their perspective, finding nuanced ways to convey large topics, none of which is the result of a single (or many) prompts but the ever-shifting sand of a writer’s brain.  Good writing is a fight between a bunch of different factors: structure, style, intent, audience, and prioritizing the things that you (or your client) care about in the text. It’s often emotive — or at the very least, driven or inspired by a given emotion — which is something that an AI simply can’t replicate in a way that’s authentic and believable.  And a hairdresser doesn’t just cut hair, but cuts your hair, which may be wiry, dry, oily, long, short, healthy, unhealthy, on a scalp with particular issues, at a time of year when perhaps you want to change length, at a time that fits you, in “the way you like” which may be impossible to actually write down but they get it just right. 
And they make conversation, making you feel at ease while they snip and clip away at your tresses, with you having to trust that they'll get it right. This is the true nature of labor that executives fail to comprehend at scale: that the things we do are not units of work, but extrapolations of experience, emotion, and context that cannot be condensed into written meaning. Business Idiots see our labor as the result of a smart manager saying "do this," rather than human ingenuity interpreting both a request and the shit the manager didn't say. What does a CEO do? Uhhh, um, well, a Harvard study says they spend 25% of their time on "people and relationships," 25% on "functional and business unit reviews," 16% on "organization and culture," and 21% on "strategy," with a few percent here and there for things like "professional development." That's who runs the vast majority of companies: people who describe their work predominantly as "looking at stuff," "talking to people" and "thinking about what we do next." The most highly-paid jobs in the world are impossible to describe, their labor described in a mish-mash of LinkedInspiration, yet everybody else's labor is an output that can be automated. As a result, Large Language Models seem like magic. When you see everything as an outcome — an outcome you may or may not understand, and definitely don't understand the process behind, let alone care about — you kind of already see your workers as LLMs.
You create a stratification of the workforce that goes beyond the normal organizational chart, with senior executives — those closer to the class level of CEO — acting as those who have risen above the doldrums of doing things to the level of "decisionmaking," a fuzzy term that can mean everything from "making nuanced decisions with input from multiple different subject-matter experts" to, as ServiceNow CEO Bill McDermott did in 2022, "[making] it clear to everybody [in a boardroom of other executives], everything you do: AI, AI, AI, AI, AI." The same extends to some members of the business and tech media that have, for the most part, gotten by without having to think too hard about the actual things the companies are saying. I realize this sounds a little mean, and I must be clear it doesn't mean that these people know nothing, just that it's been possible to scoot through the world without thinking too hard about whether or not something is true. When Salesforce said back in 2024 that its "Einstein Trust Layer" and AI would be "transformational for jobs," the media dutifully wrote it down and published it without a second thought. It fully trusted Marc Benioff when he said that Agentforce agents would replace human workers, and then again when he said that AI agents are doing "30% to 50% of all the work in Salesforce itself," even though that's an unproven and nakedly ridiculous statement. Salesforce's CFO said earlier this year that AI wouldn't boost sales growth in 2025. One would think this would change how they're covered, or how seriously one takes Marc Benioff. It hasn't, because nobody is paying attention. In fact, nobody seems to be doing their job. This is how the core myths of generative AI were built: by executives saying stuff and the media publishing it without thinking too hard. AI is replacing workers! AI is writing entire computer programs! AI is getting exponentially more-powerful! What does "powerful" mean?
That the models are getting better on benchmarks that are rigged in their favor, but because nobody fucking explains it, regular people are regularly told that AI is “powerful.” The only thing “powerful” about generative AI is its mythology. The world’s executives, entirely disconnected from labor and actual production, are doing the only thing they know how to do — spend a bunch of money and say vague stuff about “AI being the future.” There are people — journalists, investors, and analysts — that have built entire careers on filling in the gaps for the powerful as they splurge billions of dollars and repeat with increasing desperation that “the future is here” as absolutely nothing happens. You’ve likely seen a few ridiculous headlines recently. One of the most recent, and most absurd, is that OpenAI will pay Oracle $300 billion over four years, closely followed by the claim that NVIDIA will “invest” “$100 billion” in OpenAI to build 10GW of AI data centers, though the deal is structured in a way that means OpenAI is paid “progressively as each gigawatt is deployed,” and OpenAI will be leasing the chips (rather than buying them outright). I must be clear that these deals are intentionally made to continue the myth of generative AI, to pump NVIDIA, and to make sure OpenAI insiders can sell $10.3 billion of shares. OpenAI cannot afford the $300 billion, and NVIDIA hasn’t sent OpenAI a cent — and won’t do so if OpenAI can’t build the data centers, which it most assuredly can’t afford to do. NVIDIA needs this myth to continue, because in truth, all of these data centers are being built for demand that doesn’t exist, or that — if it exists — doesn’t necessarily translate into business customers paying huge amounts for access to OpenAI’s generative AI services.
NVIDIA, OpenAI, CoreWeave and other AI-related companies hope that by announcing theoretical billions of dollars (or hundreds of billions of dollars) of these strange, vague and impossible-seeming deals, they can keep pretending that demand is there, because why else would they build all of these data centers, right?   That, and the entire stock market rests on NVIDIA’s back . It accounts for 7% to 8% of the value of the S&P 500, and Jensen Huang needs to keep selling GPUs. I intend to explain later on how all of this works, and how brittle it really is. The intention of these deals is simple: to make you think “this much money can’t be wrong.” It can. These people need you to believe this is inevitable, but they are being proven wrong, again and again, and today I’m going to continue doing so.  Underpinning these stories about huge amounts of money and endless opportunity lies a dark secret — that none of this is working, and all of this money has been invested in a technology that doesn’t make much revenue and loves to burn millions or billions or hundreds of billions of dollars. Over half a trillion dollars has gone into an entire industry without a single profitable company developing models or products built on top of models. By my estimates, there is around $44 billion of revenue in generative AI this year (when you add in Anthropic and OpenAI’s revenues to the pot, along with the other stragglers) and most of that number has been gathered through reporting from outlets like The Information, because none of these companies share their revenues, all of them lose shit tons of money , and their actual revenues are really, really small. Only one member of the Magnificent Seven (outside of NVIDIA) has ever disclosed its AI revenue — Microsoft, which stopped reporting in January 2025, when it reported “$13 billion in annualized revenue,” so around $1.083 billion a month.   Microsoft is a sales MACHINE. 
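Those “annualized” figures deserve a moment, because they’re a favorite obfuscation: an annualized number is just one month’s revenue multiplied by twelve, not money actually banked over a year. The conversion, as a trivial sketch:

```python
def annualized_to_monthly(annualized: float) -> float:
    """An 'annualized revenue' figure is a monthly run rate scaled to a year,
    so dividing by 12 recovers the implied monthly figure."""
    return annualized / 12

# Microsoft's last disclosed number: $13 billion annualized
monthly = annualized_to_monthly(13_000_000_000)  # ~$1.083 billion a month
```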
It is built specifically to create or exploit software markets, suffocating competitors by using its scale to drive down prices, and to leverage the ecosystem that it’s created over the past few decades. $1 billion a month in revenue is chump change for an organization that makes over $27 billion a quarter in PROFITS. Don’t worry Satya, I’ll come back to you later. “But Ed, the early days!” Worry not — I’ve got that covered. This is nothing like any other era of tech. There has never been this kind of cash-rush, even in the fiber boom. Over a decade of building AWS — something that now powers a vast chunk of the web, and has long been Amazon’s most profitable business unit — Amazon spent about one-tenth of the capex that the Magnificent Seven spent in just two years on generative AI. Generative AI is nothing like Uber, with OpenAI and Anthropic’s true costs coming in at about $159 billion in the past two years, approaching five times Uber’s $30 billion all-time burn. And that’s before the bullshit with NVIDIA and Oracle. Microsoft last reported AI revenue in January. It’s October this week. Why did it stop reporting this number, you think? Is it because the numbers are so good it couldn’t possibly let people know? As a general rule, publicly traded companies — especially those where the leadership are compensated primarily in equity — tend to brag about their successes, in part because said bragging boosts the value of the thing that the leadership gets paid in. There’s no benefit to being shy. Oracle literally made a regulatory filing to boast it had a $30 billion customer, which turned out to be OpenAI, which eventually agreed (publicly) to spend $300 billion on compute over five years. Which is to say that Microsoft clearly doesn’t have any good news to share, and as I’ll reveal later, it can’t even get 3% of its 440 million Microsoft 365 subscribers to pay for Microsoft 365 Copilot. If Microsoft can’t sell this shit, nobody can.
Anyway, I’m nearly done, sorry. You see, I’m writing this whole thing as if you’re brand new and walking up to this relatively unprepared, so I need to introduce another company. In 2020, a splinter group jumped off of OpenAI, funded by Amazon and Google to do much the same thing as OpenAI but pretend to be nicer about it until they have to raise from the Middle East. Anthropic has always been better at coding for some reason, and people really like its Claude models. OpenAI and Anthropic have become the only two companies in generative AI to make any real progress, either in terms of recognition or in sheer commercial terms, accounting for the majority of the revenue in the AI industry. In a very real sense, the AI industry’s revenue is OpenAI and Anthropic. In the year when Microsoft recorded $13 billion in AI revenues, $10 billion came from OpenAI’s spending on Microsoft Azure. Anthropic burned $5.3 billion last year — with the vast majority of that going towards compute. Outside of these two companies, there’s barely enough revenue to justify a single data center. Where we sit today is a time of immense tension. Mark Zuckerberg says we’re in a bubble, Sam Altman says we’re in a bubble, Alibaba Chairman and billionaire Joe Tsai says we’re in a bubble, Apollo says we’re in a bubble, nobody is making money and nobody knows why they’re actually doing this anymore, just that they must do it immediately. And they have yet to make the case that generative AI warranted any of these expenditures. That was undoubtedly the longest introduction to a newsletter I’ve ever written, and I took my time because this post demands a level of foreshadowing and exposition, and because I want it to make sense to anyone who reads it — whether they’ve read my newsletter for years, or whether they’re only just now investigating their suspicions that generative AI may not be all it’s cracked up to be.
Today I will make the case that generative AI’s fundamental growth story is flawed, and explain why we’re in the midst of an egregious bubble. This industry is sold by keeping things vague, and by knowing that most people don’t dig much deeper than a headline, a problem I simply do not have. This industry is effectively in service of two companies — OpenAI and NVIDIA — who pump headlines out through endless contracts between them and subsidiaries or investments to give the illusion of activity. OpenAI is now, at this point, on the hook for over a trillion dollars, an egregious sum for a company that has already forecast billions in losses, with no clear explanation as to how it’ll afford any of this beyond “we need more money” and the vague hope that there’s another SoftBank or Microsoft waiting in the wings to swoop in and save the day. I’m going to walk you through where I see this industry today, and why I see no future for it beyond a fiery apocalypse. Everybody (reasonably!) harps on about hallucinations — which, to remind you, is when a model authoritatively states something that isn’t true — but the truth is far more complex, and far worse than it seems. You cannot rely on a large language model to do what you want. Even the most highly-tuned models on the most expensive and intricate platform can’t actually be relied upon to do exactly what you want. A “hallucination” isn’t just when these models say something that isn’t true. It’s when they decide to do something wrong because it seems the most likely thing to do, or when a coding model decides to go on a wild goose chase, failing the user and burning a ton of money in the process.
The advent of “reasoning” models — those engineered to ‘think’ through problems in a way reminiscent of a human — and the expansion of what people are trying to use LLMs for demands that the definition of an AI hallucination be widened, not merely referring to factual errors, but fundamental errors in understanding the user’s request or intent, or what constitutes a task, in part because these models cannot think and do not know anything. However successful a model might be in generating something good *once*, it will also often generate something bad, or it’ll generate the right thing but in an inefficient and over-verbose fashion. You do not know what you’re going to get each time, and hallucinations multiply with the complexity of the thing you’re asking for, or when a task contains multiple steps (which is a fatal blow to the idea of “agents”). You can add as many levels of intrigue and “reasoning” as you want, but Large Language Models cannot be trusted to do something correctly, or even consistently, every time. Model companies have successfully convinced everybody that the issue is that users are prompting the models wrong, and that people need to be “trained to use AI,” but what they’re really doing is training people to explain away the inconsistencies of Large Language Models, and to assume individual responsibility for what is an innate flaw in how large language models work. Large Language Models are also uniquely expensive. Many mistakenly try and claim this is like the dot-com boom or Uber, but the basic unit economics of generative AI are insane. Providers must purchase tens or hundreds of thousands of GPUs costing $50,000 apiece, plus hundreds of millions or billions of dollars of infrastructure for large clusters. And that’s without mentioning things like staffing, construction, power, or water. Then you turn them on and start losing money.
Despite hundreds of billions of dollars of GPUs sold, nobody seems to make any money other than NVIDIA, the company that makes them, and resellers like Dell and Supermicro, who buy the GPUs, put them in servers, and sell them to other people. This arrangement works out great for Jensen Huang, and terribly for everybody else. I am going to explain the insanity of the situation we find ourselves in, and why I continue to do this work undeterred. The bubble has entered its most pornographic, aggressive and destructive stage, where the more obvious it becomes that they’re cooked, the more ridiculous the generative AI industry will act — a dark juxtaposition against every new study that says “generative AI does not work” or new story about ChatGPT’s uncanny ability to activate mental illness in people. So, let’s start simple: NVIDIA is a hardware company that sells GPUs, including the consumer GPUs that you’d see in a modern gaming PC, but when you read someone say “GPU” within the context of AI, they mean enterprise-focused GPUs like the A100, H100 and H200, and more modern GPUs like the Blackwell-series B200 and GB200 (which combines two GPUs with an NVIDIA CPU). These GPUs cost anywhere from $30,000 to $50,000 (or as high as $70,000 for the newer Blackwell GPUs), and require tens of thousands of dollars more of infrastructure — networking to “cluster” server racks of GPUs together to provide compute, and massive cooling systems to deal with the massive amounts of heat they produce — as well as the servers themselves that they run on, which typically use top-of-the-line data center CPUs and contain vast quantities of high-speed memory and storage. While the GPU itself is likely the most expensive single item within an AI server, the other costs — and I’m not even factoring in the actual physical building that the server lives in, or the water or electricity that it uses — add up. I’ve mentioned NVIDIA because it has a virtual monopoly in this space.
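To get a sense of how quickly those per-unit prices compound, here’s an illustrative back-of-envelope calculation. The 10,000-GPU cluster size and the $50,000 midpoint are my assumptions for scale, not figures from any specific deployment, and this covers the GPUs alone — networking, cooling, servers, power, and the building all come on top.

```python
def gpu_capex(num_gpus: int, unit_price: int) -> int:
    """GPU hardware cost alone for a cluster -- excludes networking,
    cooling, servers, power, water, and the building itself."""
    return num_gpus * unit_price

# An illustrative 10,000-GPU cluster at a $50,000-per-GPU midpoint:
cluster_cost = gpu_capex(10_000, 50_000)  # $500,000,000 before any infrastructure
```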
Generative AI effectively requires NVIDIA GPUs, in part because it’s the only company really making the kinds of high-powered cards that generative AI demands, and because NVIDIA created something called CUDA — a collection of software tools that lets programmers write software that runs on GPUs, which were traditionally used primarily for rendering graphics in games. While there are open-source alternatives, as well as alternatives from Intel (with its Arc GPUs) and AMD (NVIDIA’s main rival in the consumer space), these aren’t nearly as mature or feature-rich. Due to the complexities of AI models, one cannot just stand up a few of these things either — you need clusters of thousands, tens of thousands, or hundreds of thousands of them for it to be worthwhile, making any investment in GPUs run into the hundreds of millions or billions of dollars, especially considering they require completely different data center architecture to make them run. A common request — like asking a generative AI model to parse through thousands of lines of code and make a change or an addition — may use multiple of these $50,000 GPUs at the same time, and so if you aspire to serve thousands, or millions, of concurrent users, you need to spend big. Really big. It’s these factors — the vendor lock-in, the ecosystem, and the fact that generative AI only works when you’re buying GPUs at scale — that underpin the rise of NVIDIA. But beyond the economic and technical factors, there are human ones, too. To understand the AI bubble is to understand why CEOs do the things they do. Because an executive’s job is so vague, they can telegraph the value of their “labor” by spending money on initiatives and making partnerships. AI gave hyperscalers the excuse to spend hundreds of billions of dollars on data centers and buy a bunch of GPUs to go in them, because that, to the markets, looks like they’re doing something.
By virtue of spending a lot of money in a frighteningly short amount of time, Satya Nadella received multiple glossy profiles, all without having to prove that AI can really do anything, be that a job or making Microsoft money. Nevertheless, AI allowed CEOs to look busy, and once the markets and journalists had agreed on the consensus opinion that “AI would be big,” all these executives had to do was buy GPUs and “do AI.” We are in the midst of one of the darkest forms of software in history, described by many as an unwanted guest invading their products, their social media feeds, and their bosses’ empty minds, and resting in the hands of monsters. Every story of its success feels bereft of any real triumph, with every literal description of its abilities involving multiple caveats about the mistakes it makes or the incredible costs of running it. Generative AI exists for two reasons: to cost money, and to make executives look busy. It was meant to be the new enterprise software and the new iPhone and the new Netflix all at once, a panacea where software guys pay one hardware guy for GPUs to unlock the incredible value creation of the future. Generative AI was always set up to fail, because it was meant to be everything, was talked about like it was everything, is still sold like it’s everything, yet for all the fucking hype, it all comes down to two companies: OpenAI, and, of course, NVIDIA. NVIDIA was, for a while, living high on the hog. All CEO Jensen Huang had to do every three months was say “check out these numbers” and the markets and business journalists would squeal with glee, even as he said stuff like “the more you buy the more you save,” in part a nod to the (very real and sensible) idea of accelerated computing that, framed within the context of the cash inferno that’s generative AI, seems ludicrous. Huang’s showmanship worked really well for NVIDIA for a while, because for a while the growth was easy. Everybody was buying GPUs.
Meta, Microsoft, Amazon, Google (and to a lesser extent Apple and Tesla) make up 42% of NVIDIA’s revenue, creating, at least for the first four, a degree of shared mania where everybody justified buying tens of billions of dollars of GPUs a year by saying “the other guy is doing it!” This is one of the major reasons the AI bubble is happening: people conflated NVIDIA’s incredible sales with “interest in AI,” rather than everybody buying GPUs. Don’t worry, I’ll explain the revenue side a little bit later. We’re here for the long haul. Anyway, NVIDIA is facing a problem — that the only thing that grows forever is cancer. On September 9, 2025, the Wall Street Journal said that NVIDIA’s “wow” factor was fading, going from beating analyst estimates by nearly 21% in its Fiscal Year Q2 2024 earnings to scraping by with a mere 1.52% beat in its most recent earnings — something that, for any other company, would be a good thing, but framed against the delusional expectations that generative AI has inspired, is a figure that looks nothing short of ominous. Per the Wall Street Journal: In any other scenario, 56% year-over-year growth would lead to an abundance of Dom Perignon and Huang signing hundreds of boobs, but this is NVIDIA, and that’s just not good enough. Back in February 2024, NVIDIA was booking 265% year-over-year growth, but in its February 2025 earnings, NVIDIA only grew by a measly 78% year-over-year. It isn’t so much that NVIDIA isn’t growing, but that to grow year-over-year at the rates that people expect is insane. Life was a lot easier when NVIDIA went from $6.05 billion in revenue in Q4 FY2023 to $22 billion in revenue in Q4 FY2024, but for it to grow even 55% year-over-year from Q2 FY2026 ($46.7 billion) to Q2 FY2027 would require it to make $72.385 billion in revenue in the space of three months, mostly from selling GPUs (which make up around 88% of its revenue).
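The arithmetic behind that target is worth spelling out, because it shows how punishing "merely" 55% growth becomes at this scale (figures from the earnings numbers above):

```python
def yoy_target(base_quarter_revenue: float, growth_rate: float) -> float:
    """Quarterly revenue needed a year later to hit a given
    year-over-year growth rate."""
    return base_quarter_revenue * (1 + growth_rate)

# Q2 FY2026 revenue of $46.7 billion, growing 55% year-over-year:
target = yoy_target(46.7e9, 0.55)  # ~$72.385 billion in a single quarter
```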
This would put NVIDIA in the ballpark of Microsoft ($76 billion in the last quarter) and within the neighborhood of Apple ($94 billion in the last quarter), predominantly making money in an industry that a year-and-a-half ago barely made the company $6 billion in a quarter. And the market needs NVIDIA to perform, as the company makes up 8% of the value of the S&P 500. It’s not enough for it to be wildly profitable, or to have a near-monopoly on selling GPUs, or to have effectively 10x’d its stock in a few years. It must continue to grow at the fastest rate of anything ever, making more and more money selling these GPUs to a small group of companies that immediately start losing money once they plug them in. While a few members of the Magnificent Seven could be depended on to funnel tens of billions of dollars into a furnace each quarter, there were limits, even for companies like Microsoft, which had bought over 485,000 GPUs in 2024 alone. To take a step back: companies like Microsoft, Google and Amazon make their money by either selling access to Large Language Models that people incorporate into their products, or by renting out servers full of GPUs to run inference (as said previously, the process of generating an output from a model or series of models) or train AI models for companies that develop and market models themselves, namely Anthropic and OpenAI. That latter revenue stream is where Jensen Huang found a solution to the eternal growth problem: the neocloud, namely CoreWeave, Lambda and Nebius. These businesses are fairly straightforward. They own (or lease) data centers that they fill full of servers that are full of NVIDIA GPUs, which they then rent on an hourly basis to customers, either on a per-GPU basis or in large batches for larger customers, who guarantee they'll use a certain amount of compute and sign up to long-term (i.e. more than an hour at a time) commitments.
A neocloud is a specialist cloud compute company that exists only to provide access to GPUs for AI, unlike Amazon Web Services, Microsoft Azure and Google Cloud, all of which have healthy businesses selling other kinds of compute, with AI (as I’ll get into later) failing to provide much of a return on investment.  It’s not just the fact that these companies are more specialized than, say, Amazon’s AWS or Microsoft Azure. As you’ve gathered from the name, these are new, young, and in almost all cases, incredibly precarious businesses — each with financial circumstances that would make a Greek finance minister blush.  That’s because setting up a neocloud is expensive . Even if the company in question already has data centers — as CoreWeave did with its cryptocurrency mining operation — AI requires completely new data center infrastructure to house and cool the GPUs , and those GPUs also need paying for, and then there’s the other stuff I mentioned earlier, like power, water, and the other bits of the computer (the CPU, the motherboard, the memory and storage, and the housing).  As a result, these neoclouds are forced to raise billions of dollars in debt, which they collateralize using the GPUs they already have , along with contracts from customers, which they use to buy more GPUs. CoreWeave, for example, has $25 billion in debt on estimated revenues of $5.35 billion , losing hundreds of millions of dollars a quarter. You know who also invests in these neoclouds? NVIDIA! NVIDIA is also one of CoreWeave’s largest customers (accounting for 15% of its revenue in 2024), and just signed a deal to buy $6.3 billion of any capacity that CoreWeave can’t otherwise sell to someone else through 2032 , an extension of a $1.3 billion 2023 deal reported by the Information . It was the anchor investor ($250 million) in CoreWeave’s IPO , too. NVIDIA is currently doing the same thing with Lambda, another neocloud that NVIDIA invested in, which also  plans to go public next year. 
NVIDIA is also one of Lambda’s largest customers, signing a deal with it this summer to rent 10,000 GPUs for $1.3 billion over four years. In the UK, NVIDIA has just invested $700 million in Nscale, a former crypto miner that has never built an AI data center, and that has, despite having no experience, committed $1 billion (and/or 100,000 GPUs) to an OpenAI data center in Norway. On Thursday, September 25, Nscale announced it had closed another funding round, with NVIDIA listed as a main backer — although it’s unclear how much money it put in. It would be safe to assume it’s another few hundred million. NVIDIA also invested in Nebius, an outgrowth of Russian conglomerate Yandex, and Nebius provides, through a partnership with NVIDIA, tens of thousands of dollars’ worth of compute credits to companies in NVIDIA’s Inception startup program. NVIDIA’s plan is simple: fund these neoclouds, and let these neoclouds load themselves up with debt, at which point they buy GPUs from NVIDIA, which can then be used as collateral for loans, along with contracts from customers, allowing the neoclouds to buy even more GPUs. It’s like that Robinhood infinite money glitch… …except, that is, for one small problem. There don’t appear to be that many customers. As I went into recently on my premium newsletter, NVIDIA funds and sustains neoclouds as a way of funnelling revenue to itself, as well as to partners like Supermicro and Dell, resellers that take NVIDIA GPUs and put them in servers to sell pre-built to customers. These two companies made up 39% of NVIDIA’s revenues last quarter. Yet when you remove the revenue that comes from Microsoft, Amazon, Google, OpenAI and NVIDIA itself, there’s barely $1 billion in revenue combined across CoreWeave, Nebius and Lambda.
CoreWeave’s $5.35 billion revenue is predominantly made up of its contracts with NVIDIA, Microsoft (offering compute for OpenAI), Google ( hiring CoreWeave to offer compute for OpenAI ), and OpenAI itself, which has promised CoreWeave $22.4 billion in business over the next few years. This is all a lot of stuff , so I’ll make it really simple: there is no real money in offering AI compute, but that isn’t Jensen Huang’s problem, so he will simply force NVIDIA to hand money to these companies so that they have contracts to point to when they raise debt to buy more NVIDIA GPUs.  Neoclouds are effectively giant private equity vehicles that exist to raise money to buy GPUs from NVIDIA, or for hyperscalers to move money around so that they don’t increase their capital expenditures and can, as Microsoft did earlier in the year , simply walk away from deals they don’t like. Nebius’ “$17.4 billion deal” with Microsoft even included a clause in its 6-K filing that Microsoft can terminate the deal in the event the capacity isn’t built by the delivery dates, and Nebius has already used the contract to raise $3 billion to… build the data center to provide compute for the contract. Here, let me break down the numbers: From my analysis, it appears that CoreWeave, despite expectations to make that $5.35 billion this year, has only around $500 million of non-Magnificent Seven or OpenAI AI revenue in 2025 , with Lambda estimated to have around $100 million in AI revenue , and Nebius around $250 million without Microsoft’s share , and that’s being generous. 
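Summing those estimates makes the point starkly. The figures below are my (generous) estimates from above, in dollars:

```python
# Estimated 2025 AI revenue per neocloud once Magnificent Seven and
# OpenAI money is stripped out (my estimates, being generous):
independent_revenue = {
    "CoreWeave": 500_000_000,
    "Lambda": 100_000_000,
    "Nebius": 250_000_000,
}

total = sum(independent_revenue.values())  # $850 million, combined
```

That’s $850 million across all three companies, set against CoreWeave’s $25 billion of debt alone.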
In simpler terms, the Magnificent Seven is the AI bubble, and the AI bubble exists to buy more GPUs, because (as I’ll show) there’s no real money or growth coming out of this, other than in the amount that private credit is investing — “$50 billion a quarter, for the low end, for the past three quarters.” I dunno man, let’s start simple: $50 billion a quarter of data center funding is going into an industry that has less revenue than Genshin Impact. That feels pretty bad. Who’s gonna use these data centers? How are they going to even make money on them? Private equity firms don’t typically hold onto assets; they sell them or take them public. Doesn’t seem great to me! Anyway, if AI was truly the next big growth vehicle, neoclouds would be swimming in diverse global revenue streams. Instead, they’re heavily centralized around the same few names, one of which (NVIDIA) directly benefits from their existence — not as companies doing business, but as entities that can accrue debt and spend money on GPUs. These neoclouds are entirely dependent on a continual flow of private credit from firms like Goldman Sachs (Nebius, CoreWeave, Lambda for its IPO), JPMorgan (Lambda, Crusoe, CoreWeave), and Blackstone (Lambda, CoreWeave), which have in a very real sense created an entire debt-based infrastructure to feed billions of dollars directly to NVIDIA, all in the name of an AI revolution that's yet to arrive. The fact that the rest of the neocloud revenue stream is effectively either a hyperscaler or OpenAI is also concerning. Hyperscalers are, at this point, the majority of data center capital expenditures, and have yet to prove any kind of success from building out this capacity, outside, of course, of Microsoft’s investment in OpenAI, which has succeeded in generating revenue while burning billions of dollars. Hyperscaler revenue is also capricious, but even if it isn’t, why are there no other major customers?
Why, across all of these companies, does there not seem to be one major customer who isn’t OpenAI? The answer is obvious: nobody that wants it can afford it, and those who can afford it don’t need it. It’s also unclear what exactly hyperscalers are doing with this compute, because it sure isn’t “making money.” While Microsoft makes $10 billion in revenue from renting compute to OpenAI via Microsoft Azure, it does so at-cost, and was charging OpenAI $1.30-per-hour for each A100 GPU it rents, a loss of $2.20 an hour per GPU, meaning that it is likely losing money on this compute, especially as SemiAnalysis has the total cost per hour per GPU at around $1.46 with the cost of capital and debt associated for a hyperscaler, though it’s unclear if that’s for an H100 or A100 GPU. In any case, how do these neoclouds pay for their debt if the hyperscalers give up, or NVIDIA doesn’t send them money, or, more likely, private credit begins to notice that there’s no real revenue growth outside of circular compute deals with the neoclouds’ largest supplier, investor and customer? They don’t! In fact, I have serious concerns that they can’t even build the capacity necessary to fulfil these deals, but nobody seems to worry about that. No, really! It appears to be taking Oracle and Crusoe around 2.5 years per gigawatt of compute capacity. How exactly are any of these neoclouds (or Oracle itself) able to expand to actually capture this revenue? Who knows! But I assume somebody is going to say “OpenAI!” Here’s an insane statistic for you: OpenAI will account for — in both its own revenue (projected $13 billion) and in its own compute costs ($16 billion, according to The Information, although that figure is likely out of date, and seemingly only includes the compute it’ll use, and not what it has committed to build, and thus has spent money on) — about 50% of all AI revenues in 2025.
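A quick sketch of those per-GPU-hour economics, using the figures cited above. Note that the two cited numbers imply different underlying costs: a $2.20-an-hour loss at a $1.30-an-hour price implies an all-in cost near $3.50, while SemiAnalysis's roughly $1.46 estimate implies a much smaller, but still negative, margin. Either way, the sign is the same.

```python
def hourly_margin(price_per_hour: float, cost_per_hour: float) -> float:
    """Margin per GPU-hour; negative means renting at a loss."""
    return price_per_hour - cost_per_hour

price = 1.30  # what Microsoft reportedly charged OpenAI per A100-hour

loss_semianalysis = hourly_margin(price, 1.46)  # ~ -$0.16 per GPU-hour
loss_reported = hourly_margin(price, 3.50)      # ~ -$2.20 per GPU-hour
```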
That figure takes into account the $400m ARR for ServiceNow, Adobe, and Salesforce; the $35bn in revenue for the Magnificent Seven from AI (not profit, and based on figures from the previous year); revenue from neoclouds like CoreWeave, Nebius, and Lambda; and the estimated revenue from the entire generative AI industry (including Anthropic and other smaller players, like Perplexity and Anysphere) for a total of $55bn. OpenAI is the generative AI industry — and it’s a dog of a company. As a reminder, OpenAI has leaked that it’ll burn $115 billion in the next four years, and based on my estimates, it needs to raise more than $290 billion in the next four years based on its $300 billion deal with Oracle alone. That alone is a very, very bad sign, especially as we’re three years and $500 billion or more into this hype cycle with few signs of life outside of, well, OpenAI promising people money. Credit to Anthony Restaino for this horrifying graphic: This is not what a healthy, stable industry looks like. Alright, well, things can’t be that bad on the software side. As I covered on my premium newsletter a few weeks ago, everybody is losing money on generative AI, in part because the cost of running AI models is increasing, and in part because the software itself doesn’t do enough to warrant the costs associated with running it, costs that are already subsidized and unprofitable for the model providers. Outside of OpenAI (and to a lesser extent Anthropic), nobody seems to be making much revenue, with the most “successful” company being Anysphere, makers of AI coding tool Cursor, which hit $500 million “annualized” (so $41.6 million in one month) a few months ago, just before Anthropic and OpenAI jacked up the prices for “priority processing” on enterprise queries, raising its operating costs as a result. In any case, that’s some piss-poor revenue for an industry that’s meant to be the future of software.
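That 50% figure from earlier checks out on the numbers given: OpenAI's projected revenue plus its compute spend, against the $55 billion industry total estimated above.

```python
openai_revenue = 13e9   # OpenAI's projected 2025 revenue
openai_compute = 16e9   # OpenAI's 2025 compute costs, per The Information
industry_total = 55e9   # estimated total generative AI revenue, 2025

share = (openai_revenue + openai_compute) / industry_total  # ~0.527, i.e. ~50%
```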
Smartwatches are projected to make $32 billion this year, and as mentioned, the Magnificent Seven expects to make $35 billion or so in revenue from AI this year. Even Anthropic and OpenAI seem a little lethargic, both burning billions of dollars while making, by my estimates, no more than $2 billion and $6.26 billion in 2025 so far, despite projections of $5 billion and $13 billion respectively.

Outside of these two, AI startups are floundering, struggling to stay alive and raising money in several-hundred-million-dollar bursts to keep their negative-gross-margin businesses going. As I dug into a few months ago, I could find only 12 AI-powered companies making more than $8.3 million a month, with two of them slightly improving their revenues: AI search company Perplexity (which has now hit $150 million ARR, or $12.5 million in a month) and AI coding startup Replit (which also hit $150 million ARR in September). Both of these companies burn ridiculous amounts of money.

Perplexity burned 164% of its revenue on Amazon Web Services, OpenAI and Anthropic last year, and while Replit hasn’t leaked its costs, The Information reports its gross margins in July were 23%, which doesn’t include the costs of its free users, costs you simply have to eat with LLMs, as free users are capable of costing you a hell of a lot of money. Problematically, your paid users can cost you more than they bring in as well. In fact, every user loses you money in generative AI, because it’s impossible to do cost control in a consistent manner.

A few months ago, I did a piece about Anthropic losing money on every single Claude Code subscriber, and I’ll walk you through it in a very simplified fashion: Anthropic is, to be clear, the second-largest model developer, and has some of the best AI talent in the industry. It has a better handle on its infrastructure than anyone outside of big tech and OpenAI.
It still cannot seem to fix this problem, even with weekly rate limits. While one could assume that Anthropic is simply letting people run wild, my theory is far simpler: even the model developers have no real way of limiting user activity, likely due to the architecture of generative AI. I know it sounds insane, but at the most advanced level, model providers are still prompting their models, and whatever rate limits may be in place appear to, at times, get completely ignored, and there doesn’t seem to be anything they can do to stop it.

No, really. Anthropic counts amongst its capitalist apex predators one lone Chinese man who spent $50,000 of its compute in the space of a month fucking around with Claude Code. Even if Anthropic were profitable — it isn’t, and will burn billions this year — a customer paying $200-a-month running up $50,000 in costs immediately devours the margin of every user running the service that day, if not that week or month. Even if Anthropic’s costs are half the published rates, one guy ate 125 users’ worth of monthly revenue. That’s not a real business! That’s a bad business with out-of-control costs, and it doesn’t appear anybody has these costs under control.

A few weeks ago, Replit — an unprofitable AI coding company — released a product called “Agent 3,” which promised to be “10x more autonomous” and offer “infinitely more possibilities,” “[testing] and [fixing] its code, constantly improving your application behind the scenes in a reflection loop.” In reality, this means you’d go and tell the model to build something and it would “go do it,” and you’ll be shocked to hear that these models can’t be relied upon to “go and do” anything. Please note that this was launched a few months after Replit raised its prices, shifting to obfuscated “effort-based” pricing that would charge for “the full scope of the agent’s work.”

Agent 3 has been a disaster.
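The whale math above is simple enough to check in a couple of lines of Python (a sketch using the article’s figures, with my own variable names):

```python
# One $200-a-month subscriber reportedly ran up $50,000 in compute in a month.
monthly_price = 200
whale_monthly_cost = 50_000

# How many subscribers' monthly revenue does that one user consume?
users_at_published_cost = whale_monthly_cost / monthly_price      # 250 users
users_at_half_cost = (whale_monthly_cost / 2) / monthly_price     # 125 users, if true costs are half published rates
```

One user wipes out the revenue of 250 subscribers at published compute rates, or 125 even if Anthropic’s real costs are half that.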
Users found tasks that previously cost a few dollars were spiralling into the hundreds, with The Register reporting that one customer found themselves with a $1,000 bill after a week. Another user complained that “costs skyrocketed, without any concrete results.”

As I previously reported, in late May/early June, both OpenAI and Anthropic cranked up the pricing on their enterprise customers, leading Replit and Cursor to both shift their prices. This abuse has now trickled down to their customers. Replit has since released an update that lets you choose how autonomous you want Agent 3 to be, which is a tacit admission that you can’t trust coding LLMs to build software. Replit’s users are still pissed off, complaining that Replit is charging them for activity when the agent doesn’t do anything, a consistent problem across its Reddit. While Reddit is not the full summation of all users across every company, it’s a fairly good barometer of user sentiment, and man, are users pissy.

Traditionally, Silicon Valley startups have relied upon the same model: grow really fast, burn a bunch of money, then “turn the profit lever.” AI does not have a “profit lever,” because the raw costs of providing access to AI models are so high (and they’re only increasing) that the basic economics of how the tech industry sells software don’t make sense. I’ll reiterate something I wrote a few weeks ago:

In simpler terms, it is very, very difficult to estimate what one user — free or otherwise — might cost, and thus it’s hard to charge them on a monthly basis, or tell them what a service might cost them on average. This is a huge problem with AI coding environments. According to The Information, Claude Code was driving “nearly $400 million in annualized revenue, roughly doubling from a few weeks ago” on July 31, 2025.
That annualized revenue works out to about $33 million a month for a company that predicts it will make at least $416 million a month by the end of the year, and for a product that has become the most popular coding environment in the world, from the second-largest and best-funded AI company in the world.

…is that it? Is that all that’s happening here? $33 million, all of it unprofitable, after it felt, at least based on social media chatter and conversations with multiple software engineers, that Claude Code had become synonymous with anything to do with LLMs. To be clear, Anthropic’s Sonnet and Opus models are consistently some of the most popular for programming on OpenRouter, an aggregator of LLM usage, and Anthropic has been consistently named “the best at coding.”

Some bright spark out there is going to say that Microsoft’s GitHub Copilot has 1.8 million paying subscribers, and guess what, that’s true, and in fact, I reported it! Here’s another fun fact: the Wall Street Journal reported that Microsoft loses “on average more than $20-a-month-per-user,” with “...some users [costing] the company as much as $80.” And that’s for the most popular product!

If you believe the New York Times or other outlets that simply copy and paste whatever Dario Amodei says, you’d think that the reason software engineers are having trouble finding work is that their jobs are being replaced by AI. This grotesque, abusive, manipulative and offensive lie has been propagated throughout the entire business and tech media without anybody sitting down and asking whether it’s true, or even getting a good understanding of what it is that LLMs can actually do with code. Members of the media, I am begging you, stop doing this.
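“Annualized” revenue is just one month’s revenue multiplied by twelve, which is how you get from the headline numbers to the monthly figures above (a quick sketch; the function name is mine):

```python
def monthly_from_annualized(annualized: float) -> float:
    """An 'annualized' figure is one month's revenue times 12, so divide to reverse it."""
    return annualized / 12

claude_code_monthly = monthly_from_annualized(400e6)   # "$400M annualized" is ~$33M in one month
anthropic_projection = monthly_from_annualized(5e9)    # a $5B/year target implies ~$417M/month
```

Which is why an “annualized” milestone always sounds about twelve times more impressive than the underlying month was.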
I get it: every asshole is willing to give a quote saying that “coding is dead,” and every executive is willing to burp out some nonsense about replacing all of their engineers, but I am fucking begging you to either use these things yourself, or speak to people who do. I am not a coder. I cannot write or read code. Nevertheless, I am capable of learning, and have spoken to numerous software engineers in the last few months, and basically reached a consensus of “this is kind of useful, sometimes.” However, one very silly man once said that I don’t speak to people who use these tools, so I went and spoke to three notable, experienced software engineers, and asked them to give me the straight truth about what coding LLMs can do.

In simple terms, LLMs are capable of writing code, but can’t do software engineering, because software engineering is the process of understanding, maintaining and executing code to produce functional software, and LLMs do not “learn,” cannot “adapt,” and (to paraphrase Carl Brown) break down the more of your code and variables you ask them to look at at once. It’s very easy to believe that software engineering is just writing code, but the reality is that software engineers maintain software, which includes writing and analyzing code among a vast array of different personalities and programs and problems.

Good software engineering harkens back to Brian Merchant’s interviews with translators — while some may believe that translators simply tell you what words mean, true translation is communicating the meaning of a sentence, which is cultural, contextual, regional, and personal, and often requires the exercise of creativity and novel thinking.
My editor, Matthew Hughes, gave an example of this in his newsletter:

Similarly, coding is not just “a series of text that programs a computer,” but a series of interconnected characters that refers to other software in other places, that must also function now, and that must explain, on some level, to someone who has never, ever seen the code before, why it was done this way. This is, by the way, why we have yet to get any tangible proof that AI is replacing software engineers… because it can’t.

Of all the fields supposedly at risk from “AI disruption,” coding feels (or felt) the most tangible, if only because the answer to “can you write code with LLMs?” wasn’t an immediate, unilateral no. The media has also been quick to say that AI “writes software,” which is true in the same way that ChatGPT “writes novels.” In reality, LLMs can generate code, and do some software-engineering-adjacent tasks, but, like all Large Language Models, break down and go totally insane, hallucinating more as the tasks get more complex. And, as I pointed out earlier, software engineering is not just coding. It involves thinking about problems, finding solutions to novel challenges, and designing things in a way that can be read and maintained by others, and that’s (ideally) scalable and secure.

The whole fucking point of an “AI” is that you hand shit off to it! That’s what they’ve been selling it as! That’s why Jensen Huang told kids to stop learning to code, as with AI, there’s no point. And it was all a lie. Generative AI can’t do the job of a software engineer, and it fails while also costing abominable amounts of money. Coding LLMs seem like magic at first, because they (to quote a conversation with Carl Brown) make the easy things easier, but they also make the harder things harder. They don’t even speed up engineers — they actually make them slower! Yet coding is basically the only obvious use case for LLMs.
I’m sure you’re gonna say “but I bet the enterprise is doing well!” and you are so very, very wrong. Before I go any further, let’s establish some facts:

All of this is to say that Microsoft has one of the largest commercial software empires in history, thousands (if not tens of thousands) of salespeople, and thousands of companies that literally sell Microsoft services for a living. And it can’t sell AI.

A source that has seen materials related to sales has confirmed that, as of August 2025, Microsoft has around eight million active licensed users of Microsoft 365 Copilot, amounting to a 1.81% conversion rate across the 440 million Microsoft 365 subscribers. If each of these users paid annually at the full rate of $30-a-month, that would amount to about $2.88 billion in annual revenue for a product category that makes $33 billion a fucking quarter. And I must be clear, I am 100% sure these users aren’t all paying $30 a month. The Information reported a few weeks ago that Microsoft has been “reducing the software’s price with more generous discounts on the AI features, according to customers and salespeople,” heavily suggesting discounts had already been happening. Enterprise software is traditionally sold at a discount anyway — or, put a different way, with bulk pricing for those who sign up a bunch of users at once.

In fact, I’ve found evidence that it’s been doing this for a while: a 15% discount on annual Microsoft 365 Copilot subscriptions for orders of 10-to-300 seats mentioned by an IT consultant back in late 2024, and another that’s currently running through September 30, 2025 through Microsoft’s Cloud Solution Provider program, with up to 2,400 licenses discounted if you pay upfront for the year. Microsoft seems to do this a lot, as I found another example of an offer that ran from January 1, 2025 through March 31, 2025.

An “active” user is someone who has taken one action on Copilot in any Microsoft 365 app in the space of 28 days. Now, I know.
That word: active. Maybe you’re thinking “Ed, this is like the gym model! There are unpaid licenses that Microsoft is getting paid for!” Fine! Let’s assume that Microsoft also has, based on research that suggests this is the case for all software companies, another 50% — four million — of paid Copilot licenses that aren’t being used. That still only makes 12 million users, which is still a putrid 2.72% conversion rate.

So, why aren’t people paying for Copilot? Let’s hear from someone who talked to The Information:

Microsoft 365 Copilot has been such a disaster that Microsoft will now integrate Anthropic’s models in an attempt to make it better. Oh, one other thing: sources also confirm GPU utilization for Microsoft 365’s enterprise Copilot is barely scratching 60%. I’m also hearing that SharePoint — another popular enterprise app from Microsoft, with 250 million users — had fewer than 300,000 weekly active users of its AI copilot features in August.

So, The Information reported a few months ago that Microsoft’s projected AI revenues would be $13 billion, with $10 billion of that from OpenAI, leaving about $3 billion of total revenue across Microsoft 365 Copilot and any other foreseeable feature that Microsoft sells with “AI” on it. This heavily suggests that Microsoft is making somewhere between $1.5 billion and $2 billion on Azure or Microsoft 365 Copilot, though I suppose there are other places it could be making AI revenue too. Right? I guess. In any case, Microsoft’s net income (read: profit) in its last quarterly earnings was $27.2 billion.

One of the comfortable lies that people tell themselves is that the AI bubble is similar to the fiber boom, or the dot-com bubble, or Uber, or that we’re in the “growth stage,” or that “this is what software companies do: they spend a bunch of money, then ‘pull the profit lever.’” This is nothing like anything you’ve seen before, because this is the dumbest shit that the tech industry has ever done.
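For what it’s worth, the Copilot seat arithmetic from the last few paragraphs checks out (a sketch using the figures above; the “gym model” multiplier is the stated assumption, not a reported number):

```python
m365_subscribers = 440_000_000
copilot_active_users = 8_000_000
full_price_per_month = 30     # the $30/seat list price; real seats are often discounted

active_conversion = copilot_active_users / m365_subscribers            # ~1.8%
revenue_ceiling = copilot_active_users * full_price_per_month * 12     # $2.88B/year at full list price

# "Gym model": assume another 50% of licenses are paid for but unused.
licensed_seats = int(copilot_active_users * 1.5)                       # 12 million seats
licensed_conversion = licensed_seats / m365_subscribers                # still only ~2.7%
```

Even granting the most generous assumptions, the ceiling is under $3 billion a year against a $33-billion-a-quarter product category.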
AI data centers are nothing like fiber, because there are very few actual use cases for these GPUs outside of AI, and none of them are remotely hyperscale revenue drivers. As I discussed a month or so ago, data center development accounted for more of America’s GDP growth than all consumer spending combined, and there really isn’t any demand for AI in general, let alone at the scale that these hundreds of billions of dollars are being sunk into. The conservative estimate of capital expenditures related to data centers is around $400 billion, but given the $50 billion a quarter in private credit, I’m going to guess it breaks $500 billion, all to build capacity for an industry yet to prove itself.

And this NVIDIA-OpenAI “$100 billion funding” news should only fill you with dread, but also it isn’t fucking finalized, stop reporting it as if it’s done, I swear to god- Anyway, according to CNBC, “the initial $10 billion tranche is locked in at a $500 billion valuation and expected to close within a month or so once the transaction has been finalized,” and “successive $10 billion rounds are planned, each to be priced at the company’s then-current valuation as new capacity comes online.”

At no point is anyone asking how, exactly, OpenAI builds the data centers to fill with these GPUs. In fact, I am genuinely shocked (and a little disgusted!) by how poorly this story has been told. Let’s go point by point:

To be clear, when I say OpenAI needs at least $300 billion over the next four years, that’s if you believe its projections, which you shouldn’t. Let’s walk through its (alleged) numbers, while plagiarizing myself. According to The Information, here’s the breakdown (these are projections):

OpenAI’s current reported burn is $116 billion through 2030, which means there is no way that these projections include $300 billion in compute costs, even when you factor in revenue.
There is simply no space in these projections to absorb that $300 billion, and from what I can tell, by 2029, OpenAI will have actually burned more than $290 billion, assuming that it survives that long, which I do not believe it will. Don’t worry, though. OpenAI is about to make some crazy money. Here are the projections that CFO Sarah Friar signed off on:

Just so we’re clear, OpenAI intends to 10x its revenue in the space of four years, selling software and access to models in an industry with about $60 billion of revenue in 2025. How will it do this? It doesn’t say. I don’t know OpenAI CFO Sarah Friar, but I do know that signing off on these numbers is, at the very least, ethically questionable. Putting aside the ridiculousness of OpenAI’s deals, or its funding requirements, Friar has willfully allowed Sam Altman and OpenAI to state goals that defy reality or good sense, all to take advantage of investors and public markets that have completely lost the plot.

I need to be blunter: OpenAI has signed multiple different deals and contracts for amounts it cannot afford to pay, that it cannot hope to raise the money to pay for, and that defy the amounts of venture capital and private credit available, all to sustain a company that will burn $300 billion and has no path to profitability of any kind.

So, as I said above, CNBC reported on September 23, 2025 that the NVIDIA deal will be delivered in $10 billion tranches, the first of which is “expected to close within a month,” and the rest delivered “as new capacity comes online.” This is, apparently, all part of a plan to build 10GW of data center capacity with NVIDIA. A few key points:

So, let’s start simple: data centers take forever to build.
As I said previously, based on current reports, it’s taking Oracle and Crusoe around 2.5 years per gigawatt of data center capacity, and nowhere in these reports does one reporter take a second to say “hey, what data centers are you talking about?” or “hey, didn’t Sam Altman say back in July that he was building 10GW of data center capacity with Oracle?” But wait, now Oracle and OpenAI have made another announcement that says they’re only doing 7GW, but they’re “ahead of schedule” on 10GW? Wait, is NVIDIA’s 10GW the same 10GW as Oracle and OpenAI are working on? Is it different? Nobody seems to know or care!

Anyway, I cannot be clear enough how unlikely it is that (as NVIDIA has said) “the first gigawatt of NVIDIA systems will be deployed in the second half of 2026,” and that’s if it has bought the land and got the permits and ordered the construction, none of which has happened yet.

But let’s get really specific on costs! Crusoe’s 1.2GW of compute for OpenAI is a $15 billion joint venture, which means a gigawatt of compute runs about $12.5 billion, or roughly $125 billion for 10GW of buildings alone. Abilene’s eight buildings are meant to hold 50,000 NVIDIA GB200 GPUs apiece, plus their associated networking infrastructure, so let’s say a gigawatt is around 333,333 Blackwell GPUs. Though this math is a little funky due to NVIDIA promising to install its new Rubin GPUs in these theoretical data centers, that means these data centers will require a little under $200 billion worth of GPUs. By my maths, that’s $325 billion all-in. I’m so tired of this.

A number of you have sent me the following image with some sort of comment about how “this is how it’ll work,” and you are wrong, because this is neither how it works, nor how it will work, nor accurate on any level. In the current relationship, NVIDIA Is Not Sending OpenAI $100 Billion, nor will it send it that much money, because 90% of OpenAI’s funding is gated behind building 9 or 10 gigawatts of data center capacity.
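Here’s the full buildout arithmetic in one place (a sketch from the figures above; the roughly $60,000 per-GPU price is my assumption, backed out of the “a little under $200 billion” figure, not a published NVIDIA price):

```python
# Data center shell cost, from the Crusoe/OpenAI joint venture.
crusoe_jv_cost = 15e9               # $15B for 1.2GW
cost_per_gw = crusoe_jv_cost / 1.2  # ~$12.5B per gigawatt

# GPU count: Abilene's eight buildings at 50,000 GB200s apiece over 1.2GW.
gpus_per_gw = (8 * 50_000) / 1.2    # ~333,333 GPUs per gigawatt

target_gw = 10
assumed_gpu_price = 60_000          # assumption: implied by the ~$200B GPU figure

buildings = cost_per_gw * target_gw                     # ~$125B of data centers
gpus = gpus_per_gw * target_gw * assumed_gpu_price      # ~$200B of GPUs
total = buildings + gpus                                # ~$325B all-in
```

That is how ten gigawatts of “NVIDIA systems” becomes a $325 billion bill before anyone has bought an acre of land.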
In the current relationship, OpenAI does not have the money to pay Oracle. Also, can Oracle even afford to give that much money to NVIDIA? It had negative free cash flow last quarter, already has $104 billion in debt, and its biggest new customer cannot afford a single fucking thing it’s promised. The only company in this diagram that actually can afford to do any of this shit is NVIDIA, and even then it only has $56 billion cash on hand.

In any case, as I went over on Friday, OpenAI has promised about a trillion dollars between compute contracts across Oracle, Microsoft, Google and CoreWeave, 17 gigawatts of promised data centers in America between NVIDIA and “Stargate,” several more gigawatts of international data centers, custom chips from Broadcom, and its own company operations. How exactly does this get paid for? Nobody seems to ask these questions! Why am I the asshole doing this? Don’t we have tech analysts that are meant to analyse shit? AHhhhh-

Every time I sit down to write about this subject the newsletters seem to get longer, because people are so painfully attached to the norms and tropes of the past. This post is, already, 17,500 words — a record for this newsletter — and I’ve still not finished editing and expanding it.

What we’re witnessing is one of the most egregious wastes of capital in history, sold by career charlatans with their reputations laundered by a tech and business media afraid to criticize the powerful, and by analysts that don’t seem to want to tell their investors the truth. There are no historic comparisons here — even Britain’s abominable 1800s railway bubble, which absorbed half of the country’s national income, created valuable infrastructure for trains, a vehicle that can move people to and from places. GPUs are not trains, nor are they cars, or even CPUs.
They are not adaptable to many other kinds of work, nor are they “the infrastructure of the future of tech,” because they’re already quite old, and with everybody focused on buying them, you’d absolutely have seen at least one other use case by now that actually mattered. GPUs are expensive, power-hungry, environmentally destructive, and require their own kinds of cooling and server infrastructure, making every GPU data center an environmental and fiscal bubble unto itself. And, whereas the Victorian train infrastructure still exists in the UK — though it has been upgraded over the years — a GPU has a limited useful lifespan. These are cards that can — and will — break after a period of extended usage, whether that’s five years or longer, and they’ll inevitably be superseded by something better and more powerful, meaning that the resale value of that GPU will only go down, with a price depreciation akin to a new car’s.

I am telling you, as I have been telling you for years, again and again and again, that the demand is not there for generative AI, and the demand is never, ever arriving. The only reason anyone humours any of this crap is the endless hoarding of GPUs to build capacity for a revolution that will never arrive. Well, that and OpenAI, a company built and sold on lies about ChatGPT’s capabilities. ChatGPT’s popularity — and OpenAI’s hunger for endless amounts of compute — has created the illusion of demand due to the sheer amount of capacity required to keep its services operational, all so it can burn $8 billion or more in 2025 and, if my estimates are right, nearly a trillion dollars by 2030.

This NVIDIA deal is a farce — an obvious attempt by the largest company on the American stock market to prop up the one significant revenue-generator in the entire industry, knowing that time is running out for it to create new avenues for eternal growth. I’d argue that NVIDIA’s deal also shows the complete contempt that these companies have for the media.
There are no details about how this deal works beyond the initial $10 billion, there’s no land purchased, no data center construction started, and yet the media slurps it down without a second thought. I am but one man, and I am fucking peculiar. I did not learn financial analysis in school, but I appear to be one of the few people doing even the most basic analysis of these deals, and while I’m having a great time doing so, I am also exceedingly frustrated at how little effort is being put into prying them apart.

I realize how ridiculous all of this sounds. I get it. There’s so much money being promised to so many people, market rallies built off the back of massive deals, and I get that the assumption is that this much money can’t be wrong, that this many people wouldn’t just say stuff without intending to follow through, or without considering whether their company could afford it. I know it’s hard to conceive that hundreds of billions of dollars could be invested in something for no apparent reason, but it’s happening, right god damn now, in front of your eyes, and I am going to be merciless on anyone who attempts to write a “how could we have seen this coming?”

Generative AI has never been reliable, has always been unprofitable, and has always been unsustainable, and I’ve been saying so since February 2024. The economics have never made sense, something I’ve said repeatedly since April 2024, and when I wrote “How Does OpenAI Survive?” in July 2024, I had multiple people suggest I was being alarmist. Here’s some alarmism for you: the longer it takes for OpenAI to die, the more damage it will cause to the tech industry.

On Friday, when I put out my piece on OpenAI needing a trillion dollars, I asked analyst Gil Luria if the capital was there to build the 17 gigawatts that OpenAI had allegedly planned to build. He said the following:

That doesn’t sound good!
Anyway, as I discussed earlier, venture capital could run out in six quarters, with investor and researcher Jon Sakoda estimating that there will only be around $164 billion of dry powder (available capital) in US VC firms by the end of 2025. In July, The French Tech Journal reported (using PitchBook data) that global venture capital deal activity reached its lowest first-half total since 2018, with $139.4 billion in deal value in the first half of 2025, down from $183.4 billion in the first half of 2024, meaning that any further expansion or demands for venture capital from OpenAI will likely sap the dwindling funds available to other startups.

Things get worse when you narrow things to US venture capital. In a piece from April, EY reported that VC-backed investment in US companies hit $80 billion in Q1 2025, but “one $40 billion deal” accounted for half of the investment — OpenAI’s $40 billion round, of which only $10 billion has actually closed, and that didn’t happen until fucking June. Without the imaginary money from OpenAI, US venture would have declined by 36%.

The longer that OpenAI survives, the longer it will sap the remaining billions from the tech ecosystem, and I expect it to extend its tendrils into private credit too. The $325 billion it needs just to fulfil its NVIDIA contract, albeit over four years, is an egregious sum that I believe exceeds the available private capital in the world. Let’s get specific, and check out the top 10 private equity firms’ available capital!

Assuming that all of this capital is currently available, the top 10 private equity firms in the world have around $477 billion of available capital.
We can, of course, include investment banks — Goldman Sachs had around $520 billion of cash on hand at the end of its last quarter, and JPMorgan over $1.7 trillion — but JPMorgan has only dedicated $50 billion in direct lending commitments as of February 2025, and while Goldman Sachs expanded its direct private credit lending by $15 billion back in June, that appears to be an extension of its “more than $20 billion” direct lending close from mid-2024. Include both of those, and that brings us up to — if we assume that all of these funds are available — $562 billion in capital and about $164 billion in US venture available to spend, and that’s meant to go to more places than just OpenAI. Sure, sure, there’s more than just the top 10 private equity firms, and there’s venture money outside of the US, but what could it be? Like, another $150 billion?

You see, OpenAI needs to buy those GPUs, and it needs to build those data centers, and it needs to pay its thousands of staff and its marketing and sales costs too. While OpenAI likely wouldn’t be the one raising the money for the data centers — and honestly, I’m not sure who would do it at this point — somebody is going to need to build TWENTY GIGAWATTS OF DATA CENTERS if we’re to believe both Oracle and NVIDIA.

You may argue that venture funds and private credit can raise more, and you’re right! But at this point, there have been few meaningful acquisitions of AI companies, and zero exits from the billions of dollars put into data centers. Even OpenAI admits in its own announcement about new Stargate sites that this will be a “$400 billion investment over 3 years.” Where the fuck is that money coming from? Is OpenAI really going to absorb massive chunks of all available private credit and venture capital for the next few years?

And no, god, stop saying the US government will bail this out.
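Tallying those cited figures (a sketch; these are the article’s numbers, and the assumption that every dollar is actually deployable is doing a lot of work):

```python
# "Available capital" figures cited above, in dollars.
top10_pe_dry_powder = 477_000_000_000   # top 10 private equity firms
jpm_direct_lending = 50_000_000_000     # JPMorgan's direct-lending commitment
gs_direct_lending = 35_000_000_000      # Goldman's ~$20B close plus the $15B June expansion

private_capital = top10_pe_dry_powder + jpm_direct_lending + gs_direct_lending  # $562B
us_vc_dry_powder = 164_000_000_000      # Sakoda's end-of-2025 estimate for US VC

# Against a single ~$325B NVIDIA-contract buildout, let alone everything else:
total_available = private_capital + us_vc_dry_powder    # $726B, for the entire ecosystem
```

Even counting every cited dollar as deployable, the whole pool barely covers two of OpenAI’s commitments, and it is supposed to fund everybody else too.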
It will have to bail out hundreds of billions of dollars, there is no scenario where it’s anything less than that, and I’ve already been over this. While the US government has spent equivalent sums in the past to support private business (the $440 billion disbursed during the Great Recession’s TARP program, where the Treasury bought toxic assets from investment banks to stop them from imploding à la Lehman, springs to mind), it’s hard to imagine any case where OpenAI is seen as being as vital to the global financial system — and the economic health of the US — as the banking system.

Sure, we spent around $1tn — if we’re being specific, $953bn — on the Paycheck Protection Program during the Covid era, but that was to keep people employed at a time when the economy outside of Zoom and Walmart had, for all intents and purposes, ceased to exist. There was an urgency that doesn’t apply here. If OpenAI goes tits up, SoftBank loses some money — nothing new there — and Satya Nadella has to explain why he spent tens of billions of dollars on a bunch of data centers filled with $50,000 GPUs that are, at this point, ornamental. And while there will be — and have been — disastrous economic consequences, they won’t be as systemically catastrophic as those of the pandemic, or the global financial crisis. To be clear, it’ll be bad, but not as bad.

And there’s also the problem of moral hazard — if the government steps in, what’s to stop big tech chasing its next fruitless rainbow? — and optics. If people resented bailing out the banks after they acted like profligate gamblers and lost, how will they feel about bailing out fucking Sam Altman and Jensen Huang?

I do apologize for the length of this piece, but the significance of this bubble requires depth. There is little demand, little real money, and little reason to continue, and the sheer lack of responsibility and willingness to kneel before the powerful fills me full of angry bile.
I understand many journalists are not in a position where they can just write “this shit sounds stupid,” but we have entered a deeply stupid era, and by continuing to perpetuate the myth of AI, the media guarantees that retail investors and regular people’s 401(k)s will suffer. It is now inevitable that this bubble bursts. Deutsche Bank has said the AI boom is unsustainable outside of tech spending “remaining parabolic,” which it says “is highly unlikely,” and Bain Capital has said that $2 trillion in new revenue is needed to fund AI’s scaling, and even that math is completely fucked, as it leans on “AI-related savings”:

Even when staring a ridiculous idea in the face — $2 trillion of new revenue in a global software market that’s expected to be around $817 billion in 2025 — Bain still oinks out some nonsense about the “savings from applying AI in sales, marketing, customer support and R&D,” yet another myth perpetuated, I assume, to placate the fucking morons sinking billions into this.

Every single “vibe coding is the future,” “the power of AI,” and “AI job loss” story written perpetuates a myth that will only lead to more regular people getting hurt when the bubble bursts. Every article written about OpenAI or NVIDIA or Oracle that doesn’t explicitly state that the money doesn’t exist, that the revenues are impossible, that one of the companies involved burns billions of dollars and has no path to profitability, is an act of irresponsible make-believe and mythos.

I am nobody. I am not a financier. I am not anybody special. I just write a lot, and read a lot, and can do the most basic maths in the world. I am not trying to be anything other than myself, nor do I have an agenda, other than the fact that I like doing this and I hate how this story is being told. I never planned for this newsletter to get this big, and now that it has, I’m going to keep doing the same thing every week.
I also believe that the way to stop this happening again is to have a thorough and well-sourced explanation of everything as it happens, ripping down the narratives as they’re spun and making it clear who benefits from them and how and why they’re choosing to do so. When things collapse, we need to be clear about how many times people chose to look the other way, or to find good-faith ways to interpret bad-faith announcements and leaks. So, how could we have seen this coming? I don’t know. Did anybody try to fucking look?


OpenAI Needs A Trillion Dollars In The Next Four Years

Shortly before publishing this newsletter, I spoke with Gil Luria, Managing Director and Analyst at D.A. Davidson, and asked him whether the capital was there to build the 17 gigawatts of capacity that OpenAI has promised. He said the following: There is quite literally not enough money to build what OpenAI has promised. A few days ago, NVIDIA and OpenAI announced a partnership that would involve NVIDIA “investing $100 billion” into OpenAI, and the reason I put that in quotation marks is the deal is really fucking weird. Based on the text of its own announcement, NVIDIA “intends to invest up to $100 billion in OpenAI progressively as each gigawatt is deployed,” except CNBC reported a day later that “[the] initial $10 billion tranche is locked in at a $500 billion valuation and expected to close within a month or so once the transaction has been finalized,” which also adds the important detail that this deal isn’t even god damn finalized. In any case, OpenAI has now committed to building 10 gigawatts of data center capacity at a non-specific location with a non-specific partner, so that it can unlock $10 billion of funding per gigawatt installed. I also want to be clear that it has not explained where these data centers are, or who will build them, or, crucially, who will actually fund them. The very next day, OpenAI announced five more data centers planned “as part of the Stargate initiative,” “bringing Stargate’s current planned capacity to nearly 7 gigawatts,” which is when things get a little confusing. Altman said back in July that Oracle and OpenAI were “committed to delivering 10GW of new compute capacity for Stargate,” and they were adding an additional 4.5GW of capacity in the US on top of the 1.2GW that was already planned in Abilene, Texas.
In fact, the Shackelford data center that is allegedly “new” is the 1.4GW facility I talked about a few weeks ago, though I see no mention of the Wisconsin one tied to the $38 billion loan raised by Vantage Data Centers. This announcement involves a site in Doña Ana County, New Mexico and an “undisclosed location in the Midwest,” which I assume is Wisconsin — and eager members of the media could, I dunno, look it up, look up any of this stuff, use the internet to look up the news. All of this news is so easily found. But here’s my favourite part of the story: To be clear, the Lordstown, Ohio site is not a data center, at least according to SoftBank, which said it will be a “data center equipment manufacturing facility.” The Milam County data center is likely this one, and it doesn’t appear that SB Energy has even broken ground. Anyway, I want to get really specific about this, because the rest of the media is reporting these stories as if these data centers will pop up overnight, and the money will magically appear, and that there will, indeed, be enough of it to go around. Based on current reports, it’s taking Oracle and Crusoe around 2.5 years per gigawatt of data center capacity. Crusoe’s 1.2GW of compute for OpenAI is a $15 billion joint venture, which means a gigawatt of compute runs about $12.5 billion. Abilene’s 8 buildings are meant to hold 50,000 NVIDIA GB200 GPUs and their associated networking infrastructure, so let’s say a gigawatt is around 333,333 Blackwell GPUs at $60,000 apiece, so about $20 billion a gigawatt. So, each gigawatt is about $32.5 billion. For OpenAI to actually receive its $100 billion in funding from NVIDIA will require it to spend roughly $325 billion — consisting of $125 billion in data center infrastructure costs and $200 billion in GPUs. If you’re reporting this story without at least attempting to report these numbers, you are failing to give the general public the full extent of what these companies are promising.
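The back-of-envelope math here can be sketched directly. All figures are this article’s own estimates — the $15 billion/1.2GW Crusoe joint venture, the 333,333-GPUs-per-gigawatt density, and the $60,000 GPU price are working assumptions, not official figures:

```python
# Back-of-envelope cost per gigawatt, using the article's own estimates.
# Assumption: Crusoe's $15B joint venture buys 1.2GW of data center capacity,
# and a gigawatt holds ~333,333 Blackwell GPUs at ~$60,000 apiece.

INFRA_COST_PER_GW = 15e9 / 1.2              # ≈ $12.5B of data center build-out
GPUS_PER_GW = 333_333
GPU_PRICE = 60_000
GPU_COST_PER_GW = GPUS_PER_GW * GPU_PRICE   # ≈ $20B of chips

total_per_gw = INFRA_COST_PER_GW + GPU_COST_PER_GW  # ≈ $32.5B per gigawatt

# NVIDIA's $100B "investment" unlocks $10B per gigawatt deployed,
# so collecting the full amount means building 10GW:
spend_to_unlock_100b = 10 * total_per_gw

print(f"Cost per gigawatt: ${total_per_gw / 1e9:.1f}B")
print(f"Spend required to unlock NVIDIA's $100B: ${spend_to_unlock_100b / 1e9:.0f}B")
```

In other words, under these assumptions, every $1 NVIDIA "invests" requires roughly $3.25 of spending by OpenAI and its partners.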
According to the New York Times, OpenAI has “agreements in place to build more than $400 billion in data center infrastructure” but also has now promised to spend $400 billion with Oracle over the next five years. What the fuck is going on? Are we just reporting any old shit that somebody says? Oracle hasn’t even got the money to pay for those data centers! Oracle is currently raising $15 billion in bonds to get a start on…something, even though $15 billion is a drop in the bucket for the sheer scale and cost of these data centers. Thankfully, Vantage Data Centers is raising $25 billion to handle the Shackelford (ready, at best, in mid-to-late 2027) and Port Washington, Wisconsin (we have no idea; it doesn’t even appear Vantage has broken ground) data center plans, allowing Oracle to share the burden of data centers that will likely not be built until fucking 2027 at the earliest. Anyway, putting all of that aside, OpenAI has now made multiple egregious, ridiculous, fantastical and impossible promises to many different parties, in amounts ranging from $50 million to $400 billion, all of which are due within the next five years. It will require hundreds of billions of dollars — either through direct funding, loans, or having partners like Oracle or NVIDIA take the burden, though at this point I believe both companies are genuinely failing their investors by not protecting them from Clammy Sam Altman, a career liar who somehow believes he can mobilize nearly a trillion dollars and have the media print anything he says, mostly because they will print anything he says, even when he says he wants to build 1 gigawatt of AI infrastructure a week. Today, I’m going to go into detail about every single promise made by Sam Altman and his cadre of charlatans, and give you as close to a hard dollar amount as I can for what it would cost to meet these promises.
To be clear, I am aware that in some of these cases another party will take on the burden of capital — but these dollars must be raised, and OpenAI must make sure they are raised. I’ll also get into the raw costs of running OpenAI, and how dire things look when you add everything up. In fact, based on my calculations, OpenAI needs at least $500 billion just to fund its own operations, and at least $432 billion through partners or associated entities raising debt just to make it through the next few years. And that's if OpenAI hits the insane revenue targets it's set!


Is There Any Real Money In Renting Out AI GPUs?

NVIDIA has become a giant, unhealthy rock on which the US markets — and to some extent the US economy — sit, representing 7-to-8% of the value of the market and a large percentage of the $400 billion in AI data center capex expected to be spent this year, which in turn made up more GDP growth than all consumer spending combined. I originally started writing this piece about something else entirely — the ridiculous Oracle deal, what the consequences of me being right might be, and a lot of ideas that I'll get to later — but I couldn't stop looking at what NVIDIA is doing. To be clear, NVIDIA is insane, making 88% of its massive revenues from selling the GPUs and associated server hardware that underpin the inference and training of Large Language Models, a market it effectively created in part by acquiring Mellanox for $6.9 billion in 2019, whose hardware allowed for the high-speed networking that connects massive banks of servers and GPUs together — a deal now under investigation by China's antitrust authorities. Since 2023, NVIDIA has made an astonishing amount of money from its data center vertical, going from making $47 billion in the entirety of its Fiscal Year 2023 to making $41.1 billion in its last quarterly earnings alone. What's even more remarkable is how little money anyone is making as a result, with the combined revenues of the entire generative AI industry unlikely to cross $40 billion this year, even when you include companies like AI compute company CoreWeave, which expects to make a little over $5 billion or so this year, though most of that revenue comes from Microsoft, OpenAI (funded by Microsoft and Google, who are paying CoreWeave to provide compute to OpenAI, despite OpenAI already being a client of CoreWeave, both under Microsoft and in its own name)...and now NVIDIA itself, which has agreed to buy $6.3 billion of any unsold cloud compute through, I believe, the next four years.
Hearing about this deal made me curious. Why is NVIDIA acting as a backstop to CoreWeave? And why is it paying to rent back thousands of its GPUs for $1.5 billion over four years from Lambda, another AI compute company it invested in? The answer is simple: NVIDIA is effectively incubating its own customers, creating the contracts necessary for them to raise debt to buy GPUs — from NVIDIA, of course — which can, in turn, be used as collateral for further loans to buy even more GPUs. These compute contracts are used by AI compute companies as a form of collateral — proof of revenue to reassure creditors that they're good for the money so that they can continue to raise mountains of debt to build more data centers to fill with more GPUs from NVIDIA. This has also created demand for companies like Dell and Supermicro, companies that accounted for a combined 39% of NVIDIA's most recent quarterly revenues. Dell and Supermicro buy GPUs sold by NVIDIA and build the server architecture around them necessary to provide AI compute, reselling them to companies like CoreWeave and Lambda, who also buy GPUs of their own and have preferential access from NVIDIA. You'll be shocked to hear that NVIDIA also invested in both CoreWeave and Lambda, that Supermicro also invested in Lambda, and that Lambda also gets its server hardware from Supermicro. While this is the kind of merciless, unstoppable capitalism that has made Jensen Huang such a success, there's an underlying problem — these companies become burdened with massive debt, used to send money to NVIDIA, Supermicro (an AI server/architecture reseller), and Dell (another reseller that works directly with CoreWeave), and there doesn't actually appear to be mass market demand for AI compute, other than the voracious hunger to build more of it.
In a thorough review of just about everything ever written about them, I found a worrying pattern within the three major neoclouds (CoreWeave, Lambda, and Nebius): a lack of any real revenue outside of Microsoft, OpenAI, Meta, Amazon, and of course NVIDIA itself, and a growing pile of debt raised in expectation of demand that I don't believe will ever arrive. To make matters worse, I've also found compelling evidence that all three of these companies lack the capacity to actually serve massive contracts like OpenAI's $11.9 billion deal with CoreWeave (and an additional $4 billion added a few months later), or Nebius' $17.4 billion deal with Microsoft, both of which were used to raise debt for each company. On some level, NVIDIA's Neocloud play was genius, creating massive demand for its own GPUs, both directly and through resellers, and creating competition with big tech firms like Microsoft's Azure Cloud and Amazon Web Services, suppressing prices in cloud compute and forcing them to buy more GPUs to compete with CoreWeave's imaginary scale. The problem is that there is no real demand outside of big tech's own alleged need for compute. Across the board, CoreWeave, Nebius and Lambda have similar clients, with the majority of CoreWeave's revenue coming from companies offering compute to OpenAI or NVIDIA's own "research" compute. Neoclouds exist as an outgrowth of NVIDIA, taking on debt using GPUs as collateral, which they use to buy more GPUs, which they then use as collateral along with the compute contracts they sign with either OpenAI, Microsoft, Amazon or Google. Beneath the surface of the AI "revolution" lies a dirty secret: most of the money is from one of four companies feeding money to a company incubated by NVIDIA specifically to buy GPUs and their associated hardware.
These Neoclouds are entirely dependent on a continual flow of private credit from firms like Goldman Sachs (Nebius, CoreWeave, Lambda for its IPO), JPMorgan (Lambda, Crusoe, CoreWeave), and Blackstone (Lambda, CoreWeave), who have in a very real sense created an entire debt-based infrastructure to feed billions of dollars directly to NVIDIA, all in the name of an AI revolution that's yet to arrive. Those billions — an estimated $50 billion a quarter for the last three quarters at least — will eventually come with the expectation of some sort of return, yet every Neocloud is a gigantic money loser, with CoreWeave burning $300 million in the last quarter while expecting to spend more than $20 billion in capital expenditures in 2025 alone. At some point the lack of real money in these companies will make them unable to pay their ruinous debt, and with NVIDIA's growth already slowing, I think we're watching a private credit bubble grow with no way for any of the money to escape. I'm not sure where it'll end, but it's not going to be pretty. Let's begin.


Oracle and OpenAI Are Full Of Crap

This week, something strange happened. Oracle, a company that had just missed on its earnings and revenue estimates, saw a more-than-39% single-day bump in its stock, leading a massive market rally. Why? Because it said its remaining performance obligations — contracts signed that its customers have yet to pay — had increased by $317 billion from the previous quarter, with CNBC reporting at the time that this was likely part of Oracle and OpenAI's planned additional 4.5 gigawatts of data center capacity being built in the US. Analysts fawned over Oracle — again, as it missed estimates — with TD Cowen's Derrick Wood saying it was a "momentous quarter" (again, it missed) and that these numbers were "really amazing to see," and Guggenheim Securities' John DiFucci said he was "blown away." Deutsche Bank's Brad Zelnick added that "[analysts] were all kind of in shock, in a very good way." RPOs, while standard (and required) accounting practice and based on actual signed contracts, are being used by Oracle as a form of marketing. Plans change, contracts can be canceled (usually with a kill fee, but nevertheless), and, especially in this case, clients can either not have the money to pay or die for the very same reason they can't pay. In Oracle's case, it isn’t simply promising ridiculous growth, it is effectively saying it’ll become the dominant player in all cloud compute. A day after Oracle's earnings and a pornographic day of market swings, the Wall Street Journal reported that OpenAI and Oracle had signed a $300 billion deal, starting in "2027," though the Journal neglected to say whether that was the year or Oracle’s FY2027 (which starts June 1, 2026). Oracle claims that it will make $18 billion in cloud infrastructure revenue in FY2026, $32 billion in FY2027, $73 billion in FY2028, $114 billion in FY2029, and $144 billion in FY2030. While all of this isn't necessarily OpenAI (as it adds up to $381 billion), it's fair to assume that the majority of it is.
This means — assuming OpenAI accounts for $300 billion of the $317 billion in new contracts added by Oracle, and thus 78% of its cloud infrastructure revenue ($300 billion out of $381 billion) — that OpenAI intends to spend over $88 billion fucking dollars on compute in FY2029, and $110 billion on compute, AKA nearly as much as Amazon Web Services makes in a year, in FY2030. A sidenote on percentages, and how I'm going to talk about this going forward. If I'm honest, there's also a compelling argument that more of it is OpenAI. Who else is using this much compute? Who has agreed, and why? In any case, if you trust Oracle and OpenAI, this is what you are believing: I want to write something smart here, but I can't get away from saying that this is all phenomenally, astronomically, ridiculously stupid. OpenAI, at present, has made about $6.26 billion in revenue this year, and it leaked a few days ago that it will burn $115 billion "through 2029," a statement that is obviously, patently false. Let's take a look at this chart from The Information: A note on "free cash flow." Now, these numbers may look a little different because OpenAI is now leaking free cash flow instead of losses, likely because it lost $5 billion in 2024, which included $1 billion in losses from "research compute amortization," likely referring to spreading the cost of R&D out across several years, which means it already paid it. OpenAI also lost $700 million from its revenue share with Microsoft. In any case, this is how OpenAI is likely getting its "negative $2 billion" number. Personally, I don't like this as a means of judging this company's financial health, because it's very clear it’s using it to make its losses seem smaller than they are. The Information also reports that OpenAI will, in totality, spend $350 billion on compute from here until 2030, but claims it’ll only spend $100 billion on compute in that year.
If I'm honest, I believe it'll be more, based on how much Oracle is projecting. OpenAI represents $300 billion of the $317 billion of new cloud infrastructure revenue it’ll book from 2027 through 2030, which heavily suggests that OpenAI will be spending more like $140 billion in that year. As I'll reveal in this piece, I believe OpenAI's actual burn is over $290 billion through 2029, and these leaks were intentional, to muddy the waters around how much its actual costs would be. There is no way a $115 billion burn rate from 2025 to 2029 includes these costs, and I am shocked that more people aren't doing the basic maths necessary to evaluate this company. The timing of the leak — which took place on September 5, 2025, five days before the Oracle deal was announced — always felt deeply suspicious, as it's unquestionably bad news... unless, of course, you are trying to undersell how bad your burn rate is. I believe that OpenAI's leaked free cash flow projections intentionally leave out the Oracle contract as a means of avoiding scrutiny. I refuse to let that happen. So, even if OpenAI somehow had the money to pay for its compute — it won't, but it projects, according to The Information, that it’ll make one hundred billion dollars in 2028 — I'm not confident that Oracle will actually be able to build the capacity to deliver it. Vantage Data Centers, the partner building the sites, will be taking on $38 billion of debt to build two sites in Texas and Wisconsin, only one of which has actually broken ground from what I can tell, and unless it has found a miracle formula that can grow data centers from nothing, I see no way that it can provide OpenAI with $70 billion or more of compute in FY2027. Oracle and OpenAI are working together to artificially boost Oracle's stock based on a contract that is, from everything I can see, impossible for either party to fulfill.
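The percentage reasoning above can be made concrete. The fiscal-year figures are Oracle's own guidance as cited here; the two share ratios (78% of the $381 billion total, or $300 billion of the $317 billion in newly added contracts) are this article's assumptions, bracketing a low and high case for OpenAI's implied spend:

```python
# Oracle's projected cloud infrastructure revenue by fiscal year,
# as cited in the article from Oracle's own guidance.
oracle_ci_revenue = {
    "FY2026": 18e9,
    "FY2027": 32e9,
    "FY2028": 73e9,
    "FY2029": 114e9,
    "FY2030": 144e9,
}

total = sum(oracle_ci_revenue.values())   # $381B across FY2026-FY2030
openai_deal = 300e9

# Low case: OpenAI is 78% of ALL projected cloud infra revenue.
share_low = openai_deal / total           # ≈ 78.7%
# High case: OpenAI is $300B of the $317B in newly added contracts.
share_high = openai_deal / 317e9          # ≈ 94.6%

for fy, rev in oracle_ci_revenue.items():
    low, high = rev * share_low / 1e9, rev * share_high / 1e9
    print(f"{fy}: ${low:.0f}B-${high:.0f}B implied OpenAI compute spend")
```

Under these assumptions, FY2029 lands at roughly $90-108 billion and FY2030 at roughly $113-136 billion — which is where the "$110 billion" and "$140 billion" figures in the text come from.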
The fact that this has led to such an egregious pump of Oracle's stock is an utter disgrace, and a sign that the markets and analysts are no longer representative of any rational understanding of a company's value. Let me be abundantly clear: Oracle and OpenAI's deal says nothing about demand for GPU compute. OpenAI is the largest user of compute in the entirety of the generative AI industry. Anthropic expects to burn $3 billion this year (so we can assume its compute costs are $3 billion to $5 billion; Amazon is estimated to make $5 billion in AI revenue this year, so I think this is a fair assumption), and xAI burns through a billion dollars a month. CoreWeave expects about $5.3 billion of revenue in 2025, and per The Information, Lambda, another AI compute company, made more than $250 million in the first half of 2025. If we assume that all of these companies were active revenue participants (we shouldn't, as xAI mostly handles its own infrastructure), I estimate the global compute market is about $40 billion in totality, at a time when AI adoption is trending downward in large companies, according to Apollo's Torsten Sløk. And yes, Nebius signed a $17.4 billion, four-year-long deal with Microsoft, but Nebius now has to raise $3 billion to build the capacity to acquire "additional compute power and hardware, [secure] land plots with reliable providers, and [expand] its data center footprint," because Nebius, much like CoreWeave and Oracle, doesn't have the compute to service these contracts. All three have seen a 30% bump in their stock in the last week. In any case, today I'm going to sit down and walk you through the many ways in which the Oracle and OpenAI deal is impossible to fulfill for either party. OpenAI is projecting fantastical growth in an industry that's already begun to contract, and Oracle has yet to even start building the data centers necessary to provide the compute that OpenAI allegedly needs.


Why Everybody Is Losing Money On AI

Hello and welcome to another premium newsletter. Thanks as ever for subscribing, and please email me at [email protected] to say hello. As I've written again and again, the costs of running generative AI do not make sense. Every single company offering any kind of generative AI service — outside of those offering training data and services, like Turing and Surge — is, from every report I can find, losing money, and doing so in a way that heavily suggests there's no way to improve their margins. In fact, let me explain an example of how ridiculous everything has got, using points I'll be repeating behind the premium break. Anysphere is a company that sells a subscription to its AI coding app Cursor, and said app predominantly uses compute from Anthropic via its models Claude Sonnet 4 and Opus 4.1. Per Tom Dotan at Newcomer, Cursor sends 100% of its revenue to Anthropic, which then takes that money and puts it into building out Claude Code, a competitor to Cursor. Cursor is Anthropic's largest customer. Cursor is deeply unprofitable, and was that way even before Anthropic chose to add "Service Tiers," jacking up the prices for enterprise apps like Cursor. My gut instinct is that this is an industry-wide problem. Perplexity spent 164% of its revenue in 2024 between AWS, Anthropic and OpenAI. And one abstraction higher (as I'll get into), OpenAI spent 50% of its revenue on inference compute costs alone, and 75% of its revenue on training compute too (and ended up spending $9 billion to lose $5 billion). Yes, those numbers add up to more than 100% — that's my god damn point. Large Language Models are too expensive, to the point that anybody funding an "AI startup" is effectively sending that money to Anthropic or OpenAI, who then immediately send that money to Amazon, Google or Microsoft, who are yet to show that they make any profit on selling it. Please don't waste your breath saying "costs will come down." They haven't been coming down, and they're not going to.
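As a sanity check on those percentages, here's a quick sketch using the article's 2024 figures. Note the ~$4 billion revenue figure is inferred from "spent $9 billion to lose $5 billion" rather than stated directly:

```python
# OpenAI's 2024 economics per the article: ~$9B in total spending
# against a ~$5B loss implies roughly $4B in revenue (inferred, not stated).
total_spend = 9e9
net_loss = 5e9
implied_revenue = total_spend - net_loss   # ~$4B

inference_share = 0.50   # article: inference compute = 50% of revenue
training_share = 0.75    # article: training compute = 75% of revenue

compute_share = inference_share + training_share
print(f"Compute costs alone: {compute_share:.0%} of revenue")
assert compute_share > 1.0  # the article's point: compute exceeds all revenue
```

That is: before salaries, R&D, sales, marketing, or anything else, the compute bill alone is 125% of what comes in the door.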
Despite categorically wrong boosters claiming otherwise, the cost of inference — everything that happens from when you put a prompt in to generate an output from a model — is increasing, in part thanks to the token-heavy generations necessary for "reasoning" models to generate their outputs, and with reasoning being the only way to get "better" outputs, they're here to stay (and continue burning shit tons of tokens). This has a very, very real consequence. Christopher Mims of the Wall Street Journal reported last week that software company Notion — which offers AI that boils down to "generate stuff, search, meeting notes and research" — had AI costs eat 10% of its profit margins to provide literally the same crap that everybody else does. As I discussed a month or two ago, the increasing cost of AI has begun a kind of subprime AI crisis, where Anthropic and OpenAI are having to charge more for their models and increasing the price on their enterprise customers to boot. As discussed previously, OpenAI lost $5 billion and Anthropic $5.3 billion in 2024, with OpenAI expecting to lose upwards of $8 billion and Anthropic — somehow — only losing $3 billion in 2025. I have severe doubts that these numbers are realistic, with OpenAI burning at least $3 billion in cash on salaries this year alone, and Anthropic somehow burning two billion dollars less on revenue that has, if you believe its leaks, increased 500% since the beginning of the year. Though I can't say for sure, I expect OpenAI to burn at least $15 billion in compute costs this year alone, and wouldn't be surprised if its burn was $20 billion or more. At this point, it's becoming obvious that it is not profitable to provide model inference, despite Sam Altman recently saying that OpenAI was.
He no doubt is trying to play silly buggers with the concept of gross profit margins — suggesting that inference is "profitable" as long as you don't include training, staff, R&D, sales and marketing, and any other indirect costs. I will also add that OpenAI pays a discounted rate on its compute. In any case, we don't even have one — literally one — profitable model developer, one company providing these services that isn't posting a multi-million or billion-dollar loss. In fact, even if you remove the cost of training models from OpenAI's 2024 revenues (provided by The Information), OpenAI would still have lost $2.2 billion fucking dollars. One of you will say "oh, actually, this is standard accounting." If that's the case, OpenAI had a 10% gross profit margin in 2024, and while OpenAI has leaked that it has a 48% gross profit margin in 2025, Altman also claimed that GPT-5 scared him, comparing it to the Manhattan Project. I do not trust him. Generative AI has a massive problem that the majority of the tech and business media has been desperately avoiding discussing: that every single company is unprofitable, even those providing the models themselves. Reporters have spent years hand-waving around this issue, insisting that "these companies will just work it out," yet never really explaining how they'd do so other than "the cost of inference will come down" or "new silicon will bring down the cost of compute." Neither of these things has happened, and it's time to take a harsh look at the rotten economics of the Large Language Model era. Generative AI companies — OpenAI and Anthropic included — lose millions or billions of dollars, and so do the companies building on top of them, in part because the costs associated with delivering models continue to increase. Integrating Large Language Models into your product already loses you money, and that's at a price at which the model provider (e.g. OpenAI and Anthropic) is itself losing money.
I believe that generative AI is, at its core, unprofitable, and that no company building its core services on top of models from Anthropic or OpenAI has a path to profitability outside of massive, unrealistic price increases. The only realistic path forward for generative AI firms is to start charging their users the direct costs of running their services, and I do not believe users will be enthusiastic about paying them, because the compute costs the average user incurs vastly exceed the money the company generates from that user each month. As I'll discuss, I don't believe it's possible for these companies to make a profit even with usage-based pricing, because the outcomes that are required to make things like coding LLMs useful require a lot more compute than is feasible for an individual or business to pay for. I will also go into how ludicrous the economics behind generative AI have become, with companies sending 100% or more of their revenue directly to cloud compute or model providers. And I'll explain why, at its core, generative AI is antithetical to the way that software is sold, and why I believe this doomed it from the very beginning.


AI Bubble 2027

Soundtrack: The Dillinger Escape Plan - One Of Us Is The Killer An MIT study found that 95% of organizations are getting "zero return" from generative AI, seemingly every major outlet is now writing an "are we in a bubble?" story, and now Meta has frozen AI hiring. Things are looking bleak for the AI bubble, and people are getting excited that this loathsome, wasteful era might be coming to an end. As a result, I'm being asked multiple times a day when the bubble might burst. I admit I'm hesitant to make any direct timelines — we are in a time (to quote former Federal Reserve Board chairman Alan Greenspan) of irrational exuberance, where the markets have oriented themselves around something very silly and very, very expensive. Regardless, Anthropic is apparently raising as much as $10 billion in its next funding round, and OpenAI allegedly hit $1 billion in revenue in July, which brings it in line with my estimate that it’s made about $5.26 billion in revenue in 2025 so far. The bubble "bursting" is not Sam Altman declaring that we're in a bubble — it's a series of events that lead to big tech pulling away from this movement, investment money drying up, and the current slate of AI companies withering and dying because they can't raise more money, can't go public, and can't sell to anybody. Anyway, earlier in the year, a bunch of credulous oafs wrote an extremely long piece of fan fiction called "AI 2027," beguiling people like Kevin Roose with its "gloominess." Written with a deep seriousness and a lot of charts, AI 2027 makes massive leaps of logic, with its fans rationalizing taking it seriously by saying that the five authors "have the right credentials."
In reality, AI 2027 is written to fool people that want to be fooled and scare people that are already scared, its tone consistently authoritative as it suggests that a self-learning agent is on the verge of waking up, a thing that is so remarkably stupid that anyone who took this seriously should be pantsed again and again. These men are also cowards. They choose to use fake company names like "OpenBrain" to mask their predictions instead of standing behind them with confidence. I get that extrapolating years into the future is scary — but these grifting losers can't even commit to a prediction! Nevertheless, I wanted to take a run at something similar myself, though not in the same narrative format. In this piece, I'm going to write out what conditions I believe will burst the bubble. Some of this will be extrapolations based on my own knowledge, sources, and writing hundreds of thousands of words on this subject. I am not going to write a strict timeline, but I am going to write how some things could go, and how (and why) they'll lead to the bubble bursting. For the bubble to burst, I see a few necessary conditions, though reality is often far more boring and annoying. Regardless, it's important to know that this is a bubble driven by vibes, not returns, and thus the bubble bursting will be an emotional reaction. This post is, in many respects, a follow-on to my previous “pale horse” article (called Burst Damage). Many of my original pale horses have already come true — Anthropic and OpenAI have pushed price increases and rate limits, there is already discord in AI investment, Meta is already considering downsizing its AI team, and OpenAI has done multiple different "big, stupid magic tricks," the chief of them being the embarrassing launch of GPT-5, a "smart, efficient router" that I reported last week was quite the opposite. This time, I'm going to write out the linchpin events that will shock the system, and how they might bring about the bubble bursting.
I should also be clear — and I will get to this after the premium break — that this will be a series of events rather than one big one, though there are big ones to look out for. I also think it might "take a minute," because "the bubble bursting" will be a succession of events that could take upwards of a year to fully happen. That’s been true for every bubble. Although people associate the implosion of the housing bubble with the “one big event” of Lehman Brothers collapsing in 2008, the reality is that it was preceded and followed by a bunch of other events, equally significant though not quite as dramatic. One VC (who you'll read about shortly) predicted it will take 6 quarters to run out of funding entirely based on the current rate of investment, putting us around February 2027 for things to have truly collapsed. Here's a list of some of the things that I believe will have to happen for this era to be truly done, and the ones that I believe are truly essential. The common thread through all of these points is that they are all but impossible to ignore. So far this bubble has inflated because the problems with AI — such as "it doesn't make any money" and "burns billions of dollars" — have been dismissed until very recently as the necessary costs of the beautiful AI revolution. Now that things have begun to unravel, the intensity of criticism will increase gradually, rather than in one big movement that makes everyone say "we hate AI." And it isn't just because of the money. CEOs like Tobias Lutke of Shopify have oriented their companies’ entire culture around AI, demanding in his case that "employees must demonstrate why AI cannot be used before requesting additional resources." Generative AI is, on some level, a kind of dunce detector — its flimsy and vague use cases having enough juice to impress the clueless Business Idiots who don't really engage with the production that makes their companies money.
The specious, empty hype of Large Language Models — driven by a tech and business media that has given up on trying to understand them — symbolizes a kind of magic to these empty-headed goobers, and unwinding their "AI-first" cultures will be difficult...right up until the first guy does it, at which point everybody will follow. AI has taken such a hold on our markets because it's symbolic of a few things: In any case, I am going to try and write the things that I think will happen, in detail. I'll go into more conditions in this piece, and as discussed, I'm going to make some informed guesses, extrapolations, and give my thoughts about how things collapse. I predict that the impact of Large Language Models over the next decade will be enormous, not in its actual innovation or returns, but in its ability to expose how little our leaders truly know about the world or labor, how willing many people are to accept the last thing a smart-adjacent person said, and how our markets and economy are driven by people with the most tenuous grasp on reality. This will be an attempt to write down what I believe could happen in the next 18 months, the conditions that might accelerate the collapse, and how the answers to some of my open questions — such as how these companies book revenue and burn compute — could influence outcomes. This...is AI Bubble 2027.


How To Argue With An AI Booster

Editor's Note: For those of you reading via email, I recommend opening this in a browser so you can use the Table of Contents. This is my longest newsletter - a 16,000-word-long opus - and if you like it, please subscribe to my premium newsletter. Thanks for reading! In the last two years I've written no less than 500,000 words, many of them dedicated to breaking down both current and long-standing myths about the state of technology and the tech industry itself. While I feel no resentment — I really enjoy writing, and feel privileged to be able to write about this and make money doing so — I do feel that there is a massive double standard between those perceived as "skeptics" and "optimists." To be skeptical of AI is to commit yourself to near-constant demands to prove yourself, and endless nags of "but what about?" with each one — no matter how small — presented as a fact that defeats any points you may have. Conversely, being an "optimist" allows you to take things like AI 2027 — which I will fucking get to — seriously to the point that you can write an entire feature about fan fiction in the New York Times and nobody will bat an eyelid. In any case, things are beginning to fall apart. Two of the actual reporters at the New York Times (rather than a "columnist") reported out last week that Meta is yet again "restructuring" its AI department for the fourth time, and that it’s considering "downsizing the A.I. division overall," which sure doesn't seem like something you'd do if you thought AI was the future. Meanwhile, the markets are also thoroughly spooked by an MIT study covered by Fortune that found that 95% of generative AI pilots at companies are failing, and though MIT NANDA has now replaced the link to the study with a Google Form to request access, you can find the full PDF here, in the kind of move that screams "PR firm wants to try and set up interviews." Not for me, thanks!
In any case, the report is actually grimmer than Fortune made it sound, saying that "95% of organizations are getting zero return [on generative AI]." The report says that "adoption is high, but transformation is low," adding that "...few industries show the deep structural shifts associated with past general-purpose technologies such as new market leaders, disrupted business models, or measurable changes in customer behavior." Yet the most damning part was the "Five Myths About GenAI in the Enterprise," which is probably the most wilting takedown of this movement I've ever seen: These are brutal, dispassionate points that directly deal with the most common boosterisms. Generative AI isn't transforming anything, AI isn't replacing anyone, enterprises are trying to adopt generative AI but it doesn't fucking work, and the thing holding back AI is the fact it doesn't fucking work. This isn't a case where "the enterprise" is suddenly going to save these companies, because the enterprise already tried, and it isn't working. An incorrect read of the study has been that it's a "learning gap" that makes these things less useful, when the study actually says that "...the fundamental gap that defines the GenAI divide [is that] users resist tools that don't adapt, model quality fails without context, and UX suffers when systems can't remember." This isn't something you learn your way out of. The products don't do what they're meant to do, and people are realizing it. Nevertheless, boosters will still find a way to twist this study to mean something else. They'll claim that AI is still early, that the opportunity is still there, that we "didn't confirm that the internet or smartphones were productivity boosting," or that we're in "the early days" of AI, somehow, three years and hundreds of billions and thousands of articles in. I'm tired of having the same arguments with these people, and I'm sure you are too.
No matter how much blindingly obvious evidence there is to the contrary, they will find ways to ignore it. They continually make smug comments about people "wishing things would be bad" or suggesting you are stupid — and yes, that is their belief! — for not believing generative AI is disruptive. Today, I’m going to give you the tools to fight back against the AI boosters in your life. I’m going to go into the generalities of the booster movement — the way they argue, the tropes they cling to, and the ways in which they use your own self-doubt against you. They’re your buddy, your boss, a man in a gingham shirt at Epic Steakhouse who won't leave you the fuck alone, a Redditor, a writer, a founder or a simple con artist — whoever the booster in your life is, I want you to have the words to fight them with. So, this is my longest newsletter ever, and I built it for quick reference - and, for the first time, gave you a Table of Contents. So, an AI booster is not, in many cases, an actual fan of artificial intelligence. People like Simon Willison or Max Woolf who actually work with LLMs on a daily basis don’t see the need to repeatedly harass everybody, or talk down to them about their unwillingness to pledge allegiance to the graveyard smash of generative AI. In fact, I’ve found that the closer somebody is to actually building things with LLMs, the less likely they are to emphatically argue that I’m missing out by not doing so myself. No, the AI booster is symbolically aligned with generative AI. They are fans in the same way that somebody is a fan of a sports team, their houses emblazoned with every possible piece of tat they can find, their Sundays living and dying by the success of the team, except even fans of the Dallas Cowboys have a tighter grasp on reality. Kevin Roose and Casey Newton are two of the most notable boosters, and — as I’ll get into later in this piece — neither of them has a consistent or comprehensive knowledge of AI.
Nevertheless, they will insist that “everybody is using AI for everything” — a statement that even a booster should realize is incorrect based on the actual abilities of the models. But that’s because it isn’t about what’s actually happening, it’s about allegiance. AI symbolizes something to the AI booster — a way that they’re better than other people, that makes them superior because they (unlike “cynics” and “skeptics”) are able to see the incredible potential in the future of AI, but also how great it is today, though they never seem to be able to explain why outside of “it replaced search for me!” and “I use it to draw connections between articles I write,” which is something I do without AI using my fucking brain. Boosterism is a kind of religion, interested in finding symbolic “proof” that things are getting “better” in some indeterminate way, and that anyone that chooses to believe otherwise is ignorant. I’ll give you an example. Thomas Ptacek’s “My AI Skeptic Friends Are All Nuts” was catnip for boosters — a software engineer using technical terms like “interact with Git” and “MCP,” vague charts, and, of course, an extremely vague statement that says hallucinations aren’t a problem: Is it? Anyway, my favourite part of the blog is this: Nobody projects more than an AI booster. They thrive on the sense they’re oppressed and villainized after years of seemingly every outlet claiming they’re right regardless of whether there’s any proof. They sneer and jeer and cry constantly that people are not showing adequate amounts of awe when an AI lab says “we did something in private, we can’t share it with you, but it’s so cool,” and constantly act as if they’re victims as they spread outright misinformation, either through getting things wrong or never really caring enough to check. Also, none of the booster arguments actually survive a thorough response, as Nik Suresh proved with his hilarious and brutal takedown of Ptacek’s piece.
There are, I believe, some people who truly do love using LLMs, yet they are not the ones defending them. Ptacek’s piece drips with condescension, to the point that it feels like he’s trying to convince himself how good LLMs are, and because boosters are eternal victims, he wrote them a piece that they could send around to skeptics and say “heh, see?” without being able to explain why it was such a brutal takedown, mostly because they can’t express why other than “well, this guy gets it!” One cannot be the big, smart genius that understands the glory and power of AI while also acting like a scared little puppy every time somebody tells them it sucks. In fact, that’s a great place to start. When you speak to an AI booster, you may get the instinct to shake them vigorously, or respond to their post by saying to do something with your something, or that they’re “stupid.” I understand the temptation, but you want to keep your head on a swivel — they thrive on victimisation. I’m sorry if you are an AI booster and this makes you feel bad. Please reflect on your work and how many times you’ve referred to somebody who didn’t understand AI in a manner that suggested they were ignorant, or tried to gaslight them by saying “AI was powerful” while providing no actionable ways in which it is. You cannot — and should not! — allow these people to act as if they are being victimized or “othered.” First and foremost: there are boosters at pretty much every major think tank, government agency and media outlet. It’s extremely lucrative being a booster. You’re showered with panel invites, access to executives, and are able to get headlines by saying how scared you are of the computer with ease. Being a booster is the easy path! Being a critic requires you to constantly have to explain yourself in a way that boosters never have to. If a booster says this to you, ask them to explain: There is no answer here, because this is not a coherent point of view.
Boosters are more successful, get more perks and are in general better-treated than any critic. Fundamentally, these people exist in the land of the vague. They will drag you toward what's just on the horizon, but never quite define what the thing that dazzles you will be, or when it will arrive. Really, their argument comes down to one thought: you must get on board now, because at some point it'll be so good that you'll feel stupid for ever doubting that something that kind of sucks would one day be really good. If this line sounds familiar, it’s because you’ve heard it a million times before, most notably with crypto. They will make you define what would impress you, which isn't your job, in the same way finding a use case for them isn't your job. In fact, you are the customer! Here’s a great place to start: say “that’s a really weird thing to say!” It is peculiar to suggest that somebody doesn’t get how to use a product, and that we, as the customer, must justify ourselves to our own purchases. Make them justify their attitude. Just like any product, we buy software to serve a need. This is meant to be artificial *intelligence* — why is it so fucking stupid that I have to work out why it's useful? The answer, of course, is that it has no intellect, is not intelligent, and Large Language Models are being pushed up a mountain by a cadre of people who are either easily impressed or invested — either emotionally or financially — in its success due to the company they keep or their intentions for the world. If a booster suggests you “just don’t get it,” ask them to explain: Their use cases will likely be that AI has replaced search for them, that they use it for brainstorming or journaling, proof-reading an article, or looking through a big pile of their notes (or some other corpus of information) and summarizing it or pulling out insights.
If a booster refers to AI “being powerful” and getting “more powerful,” ask them: The core of the AI booster’s argument is to make you feel bad. They will suggest you are intentionally not liking AI because you're a hater, or a cynic, or a Luddite. They will suggest that you are ignorant for not being amazed by ChatGPT. To be clear, anyone with a compelling argument doesn’t have to make you feel bad to convince you. The iPhone - and to be clear, I am referring to the concept of the smartphone and its utility, I am aware that there was marketing for the iPhone - didn’t need a fucking marketing campaign to explain why one device that can do a bunch of things you already find useful was good. You don't have to be impressed by ANYTHING by default, and any product — especially software — designed to make you feel stupid for "not getting it" is poorly designed. ChatGPT is the ultimate form of Silicon Valley Sociopathy — you must do the work to find the use cases, and thank them for being given the chance to do so. AI is not even good, reliable software! It resembles the death of the art of technology — inconsistent and unreliable by definition, inefficient by design, financially ruinous, and ADDS to the cognitive load of the user by requiring them to be ever-vigilant. So, here’s a really easy way to deal with this: if a booster ever suggests you are stupid or ignorant, ask them why it’s necessary to demean you to get their point across! Even if you are unable to argue on a technical level, make them explain why the software itself can’t convince you. Boosters will do everything they can to pull you off course. If you say that none of these companies make money, they’ll say it’s the early days. If you say AI companies burn billions, they’ll say the cost of inference is coming down. If you say the industry is massively overbuilding, they’ll say that this is actually just like the dot com boom and that the infrastructure will be picked up and used in the future.
If you say there are no real use cases, they’ll say that ChatGPT has 700 million weekly users. Every time, it’s the same goddamn arguments, so I’ve sat down and written as many of them as I can think of. Print this and feed it to your local booster today. Anytime a booster says “AI will,” tell them to stop and explain what AI can do, and if they insist, ask them both when to expect the things they’re talking about, and if they say “very soon,” ask them to be more specific. Get them to agree to a date, then call them on that date. There’s that “will” bullshit again. Agents don’t work! They don’t work at all. The term “agent” means, to quote Max Woolf, “a workflow where the LLM can make its own decisions, [such as in the case of] web search [where] the LLM is told ‘you can search the web if you need to’ then can output ‘I should search the web’ and do so.” Yet “agent” has now become a mythical creature that means “totally autonomous AI that can do an entire job.” If anyone tells you “agents are…,” you should ask them to point to one. If they say “coding,” please demand that they explain how autonomous these things are, and if they say that they can “refactor entire codebases,” ask them what that means, and also laugh at them. Here’s a comprehensive rundown, but here’s a particularly important part: Long story short, agents are not autonomous, they do not replace jobs, they cannot “replace coders,” they are not going to do so because probabilistic models are a horrible means of taking precise actions, and almost anyone who brings up agents as a booster is either misinformed or in the business of misinformation. Let's start with a really simple question: what does this actually mean? In many cases, I think they're referring to AI as being "like the early days of the internet." "The early days of the internet" can refer to just about anything. Are we talking about dial-up? DSL?
Are we talking about the pre-platform days when people accessed it via Compuserve or AOL? Yes, yes, I remember that article from Newsweek, I already explained it here: In any case, one guy saying that the internet won't be big doesn't mean a fucking thing about generative AI and you are a simpleton if you think it does. One guy being wrong in some way is not a response to my work. I will crush you like a bug. If your argument is that the early internet required expensive Sun Microsystems servers to run, Jim Covello of Goldman Sachs addressed that by saying that the costs "pale in comparison," adding that we also didn't need to expand our power grid to build the early Web. This is a straight-up lie. Sorry! Also, as Jim Covello noted, there were hundreds of presentations in the early 2000s that included roadmaps that accurately fit how smartphones rolled out, and that no such roadmap exists for generative AI. The iPhone was also an immediate success as a thing that people paid for, with Apple selling four million units in the space of six months. Hell, in 2006 (the year before the iPhone launch), there were an estimated 17.7 million smartphone shipments worldwide (mostly from BlackBerry and other companies building on Windows Mobile, with Palm vacuuming up the crumbs), though to be generous to the generative AI boosters, I’ll disregard those. The original Attention Is All You Need paper — the one that kicked off the transformer-based Large Language Model era — was published in June 2017. ChatGPT launched in November 2022. Nevertheless, if we're saying "early days" here, we should actually define what that means. As I mentioned above, people paid for the iPhone immediately, despite it being a device that was completely and utterly new.
While there was a small group of consumers that might have used similar devices (like the iPAQ), this was a completely new kind of computing, sold at a premium, requiring you to have a contract with a specific carrier (Cingular, now known as AT&T). Conversely, ChatGPT's "annualized" revenue in December 2023 was $1.6 billion (or $133 million a month), for a product that had, by that time, raised over $10 billion, and while we don't know what OpenAI lost in 2023, reports suggest it burned over $5 billion in 2024. Big tech has spent over $500 billion in capital expenditures in the last 18 months, and all told — between investments of cloud credits and infrastructure — will likely sink over $600 billion by year's end. The "early days" of the internet were defined not by its lack of investment or attention, but by its obscurity. Even in 2000 — around the time of the dot-com bubble — only 52% of US adults used the internet, and it would take another 19 years for 90% of US adults to do so. These early days were also defined by its early functionality. The internet would become so much more because of the things that hyper-connectivity allowed us to do, and both faster internet connections and the ability to host software in the cloud would change, well, everything. We could define what “better” would mean, and make reasonable predictions about what people could do on a “better” internet. Yet even in those early days, it was obvious why you were using the internet, and how it might grow from there. One did not have to struggle to explain why buying a book online might be useful, or why a website might be a quicker reference than having to go to a library, or why downloading a game or a song might be a good idea. While habits might have needed adjusting, it was blatantly obvious what the value of the early internet was. It's also unclear when the early days of the internet ended. Only 44% of US adults had access to broadband internet by 2006.
Were those the early days of the internet? The answer is "no," and this point is brought up by people with a poor grasp of history and a flimsy attachment to reality. The early days of the internet were very, very different to any associated tech boom since, and we need to stop making the comparison. The internet also grew in a vastly different information ecosystem. Generative AI has had the benefit of mass media — driven by the internet! — along with social media (and social pressure) to "adopt AI" for multiple years. According to Pew, as of mid-June 2025, 34% of US adults have used ChatGPT, with 79% saying they had "heard at least a little about it." Furthermore, ChatGPT has always had a free version. On top of that, a study from May 2023 found that over 10,900 news headlines mentioned ChatGPT between November 2022 and March 2023, and a BrandWatch report found that in the first five months of its release, ChatGPT received over 9.24 million mentions on social media. Nearly 80% of people have heard of ChatGPT, and over a quarter of Americans have used it. If we're defining "the early days" based on consumer exposure, that ship has sailed. If we're defining "the early days" by the passage of time, it's been 8 years since Attention Is All You Need, and three since ChatGPT came out. While three years might not seem like a lot of time, the whole foundation of an "early days" argument is that in the early days, things do not receive the venture funding, research, attention, infrastructural support or business interest necessary to make them "big." In 2024, nearly 33% of all global venture funding went to artificial intelligence, and according to The Information, AI startups have raised over $40 billion in 2025 alone, with Statista adding that AI absorbed 71% of VC funding in Q1 2025. These numbers also fail to account for the massive infrastructure that companies like OpenAI and Anthropic don't have to pay for.
The limitations of the early internet were two-fold: In generative AI's case, Microsoft, Google, and Amazon have built out the "fiber optic cables" for Large Language Models. OpenAI and Anthropic have everything they need. They have (even if they say otherwise) plenty of compute, access to the literal greatest minds in the field, the constant attention of the media and global governments, and effectively no regulations or restrictions stopping them from training their models on the works of millions of people, or destroying our environment. They have already had this support. OpenAI was allowed to burn half a billion dollars on a training run for GPT-4.5 and 5. If anything, the massive amounts of capital have allowed us to massively condense the time in which a bubble goes from "possible" to "bursting and washing out a bunch of people," because the tech industry has such a powerful follower culture that only one or two unique ideas can exist at one time. The "early days" argument hinges on obscurity and limited resources, something that generative AI does not get to whine about. Companies that make effectively no revenue can raise $500 million to do the same AI coding bullshit that everybody else does. In simpler terms, these companies are flush with cash, have all the attention and investment they could possibly need, and are still unable to create a product with a defined, meaningful, mass-market use case. In fact, I believe that thanks to effectively infinite resources, we've speed-run the entire Large Language Model era, and we're nearing the end. These companies got what they wanted. Bonus trick: ask them to tell you what “the fiber boom” was. So, a little history.
The "fiber boom" began after the Telecommunications Act of 1996 deregulated large parts of America's communications infrastructure, creating a massive boom — a $500 billion one to be precise, primarily funded with debt: In one sense, explaining what happened to the telecom sector is very simple: the growth in capacity has vastly outstripped the growth in demand. In the five years since the 1996 bill became law, telecommunications companies poured more than $500 billion into laying fiber optic cable, adding new switches, and building wireless networks. So much long-distance capacity was added in North America, for example, that no more than two percent is currently being used. With the fixed costs of these new networks so high and the marginal costs of sending signals over them so low, it is not a surprise that competition has forced prices down to the point where many firms have lost the ability to service their debts. No wonder we have seen so many bankruptcies and layoffs. This piece, written in 2002, is often cited as a defense against the horrifying capex associated with generative AI, as that fiber optic cable has been useful for delivering high-speed internet. Useful, right? This period was also defined by a glut of over-investment, ridiculous valuations and outright fraud. In any case, this is not remotely the same thing and anyone making this point needs to learn the very fucking basics of technology. GPUs are built to shove massive amounts of compute into one specific function again and again, like generating the output of a model (which, remember, mostly boils down to complex maths). Unlike CPUs, a GPU can't easily change tasks, or handle many little distinct operations, meaning that these things aren't going to be adopted for another mass-scale use case. In simpler terms, this was not an infrastructure buildout.
The GPU boom is a heavily-centralized, capital expenditure-funded asset bubble where a bunch of chips will sit in warehouses waiting for somebody to make up a use case for them, and if an enduring one existed, we'd already have it because we already have all the fucking GPUs. You are describing fan fiction. AI 2027 is fan fiction. Anyone who believes in it is a mark! It doesn’t matter if all of the people writing the fan fiction are scientists, or that they all have “the right credentials.” They themselves say that AI 2027 is a “guess,” an “extrapolation” (guess) with “expert feedback” (someone editing your fan fiction), and involves “experience at OpenAI” (there are people that worked on the shows they write fan fiction about). I am not going to go line-by-line to cut this apart any more than I am going to write a lengthy takedown of someone’s erotic Banjo Kazooie story, because both are fictional. The entire premise of this nonsense is that at one point someone invents a self-learning “agent” that teaches itself stuff, and it does a bunch of other stuff as a result, with different agents with different numbers after them. There is no proof this is possible, nobody has done it, nobody will do it. AI 2027 was written specifically to fool people that wanted to be fooled, with big charts and the right technical terms used to lull the credulous into a wet dream and New York Times column where one of the writers folds their hands and looks worried. It was also written to scare people that are already scared. It makes big, scary proclamations, with tons of links to stuff that looks really legitimate but, when you piece it all together, is still fan fiction.
My personal favourite part is “Mid 2026: China Wakes Up,” which involves China’s intelligence agencies trying to steal OpenBrain’s agent (no idea who this company could be referring to, I’m stumped!), before the headline of “AI Takes Some Jobs” after OpenBrain released a model oh god I am so bored even writing up this tripe! Sarah Lyons put it well, arguing that AI 2027 (and AI in general) is no different from the spurious “spectral evidence” used to accuse someone of being a witch during the Salem Witch Trials: Anyway, AI 2027 is fan fiction, nothing more, and just because it’s full of fancy words and has five different grifters on its byline doesn’t mean anything. Bonus trick: Ask them to explain whether things have actually got cheaper, and if they say they have, ask them why there are no profitable AI companies. If they say “they’re currently in growth stage,” ask them why there are no profitable AI companies. At this point they should try and kill you. In an interview on a podcast from earlier in the year, journalist Casey Newton said the following about my work: Newton then says — several octaves higher, showing how mad he isn't — that "[he] thought what [he] said was very civil" and that there are "things that are true and there are things that are false, like you can choose which ones you wanna believe." I am not going to be so civil. Other than the fact that Casey refers to "micro-innovations" (?) and "DeepSeek being on a curve that was expected," he makes — as many do — two very big mistakes, ones that I personally would not have made in a sentence that began with suggesting that I knew how the technology works. Inference — and I've gotten this one wrong in the past too! — is everything that happens from when you put a prompt in to generate an output. It's when an AI, based on your prompt, "infers" meaning.
To be more specific, and quoting Google, "...machine learning inference is the process of running data points into a machine learning model to calculate an output such as a single numerical score." Casey will try and weasel out of this one and say this is what he meant. It wasn't. Casey, like many people who talk about stuff without learning about it first, is likely referring to the fact that the price of tokens for some models has gone down in some cases. The problem, however, is that these are raw token costs, not actual expressions or evaluations of token burn in a practical setting. Worse still… Well, the cost of inference actually went up. In an excellent blog for Kilocode, Ewa Szyszka explained: Token consumption per application grew a lot because models allowed for longer context windows and bigger suggestions from the models. The combination of a steady price per token and more token consumption caused app inference costs to grow about 10x over the last two years. To explain in really simple terms, while the costs of old models may have decreased, new models cost about the same, and the "reasoning" that these models do actually burns way, way more tokens. When these new models "reason," they break a user's input into component parts, then run inference on each one of those parts. When you plug an LLM into an AI coding environment, it will naturally burn an absolute ton of tokens, in part because of the large amount of information you have to load into the prompt (the "context window," with token burn increasing with the size of that information), and in part because generating code is inference-intensive. In fact, the inference costs are so severe that Szyszka says that the "...combination of a steady price per token and more token consumption caused app inference costs to grow about 10x over the last two years."
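To make the arithmetic here concrete, here's a minimal sketch of why per-token prices can hold steady while application costs still explode. The numbers are entirely made up for illustration — they are not any vendor's real pricing or real token counts — but the shape of the math is what Szyszka describes: price per token stays flat, tokens burned per request grows roughly tenfold.

```python
# Illustrative only: hypothetical numbers, not real pricing or measured token counts.
PRICE_PER_MILLION_TOKENS = 10.00  # assume this stays roughly flat across model generations


def request_cost(tokens_burned: int) -> float:
    """Cost of a single request at a fixed per-token price."""
    return tokens_burned / 1_000_000 * PRICE_PER_MILLION_TOKENS


# A simple one-shot completion vs. a reasoning model with a large context
# window: the per-token price never moves, but the tokens burned do.
old_cost = request_cost(2_000)    # hypothetical request two years ago
new_cost = request_cost(20_000)   # same job today, ~10x the tokens burned

print(f"old: ${old_cost:.2f}, new: ${new_cost:.2f}, ratio: {new_cost / old_cost:.0f}x")
```

The point of the sketch is that "cost per million tokens" is the wrong axis to watch: multiply a flat price by ten times the token consumption and the application's bill grows 10x anyway.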
I refuse to let this point go, because people love to say "the cost of inference is going down" when the cost of inference has increased, and they do so to a national audience, all while suggesting I am wrong somehow. I am not wrong. In fact, software development influencer Theo Browne recently put out a video called " I was wrong about AI costs (they keep going up) ," which he breaks down as follows: The price drops have, for the most part, stopped. See the below chart from The Information : You cannot, at this point, fairly evaluate whether a model is "cheaper" just based on its cost-per-tokens, because reasoning models are inherently built to use more tokens to create an output. Reasoning models are also the only way that model developers have been able to improve the efficacy of new models, using something called "test-time compute" to burn extra tokens to complete a task. And in basically anything you're using today, there's gonna be some sort of reasoning model, especially if you're coding. The cost of inference has gone up. Statements otherwise are purely false, and are the opinion of somebody who does not know what he's talking about. ...maybe? It sure isn't trending that way, nor has it gone down yet. I also predict that there's going to be a sudden realization in the media that it's going up, which has kind of already started. The Information had a piece recently about it , where they note that Intuit paid $20 million to Azure last year (primarily for access to OpenAI's models), and is on track to spend $30 million this year, which "outpaces the company's revenue growth in the same period, raising questions about how sustainable the spending is and how much of the cost it can pass along to customers." The problem here is that the architecture underlying Large Language Models is inherently unreliable. 
I imagine OpenAI's introduction of the router to ChatGPT-5 is an attempt to moderate both the costs of the model chosen and reduce the amount of exposure to reasoning models for simple queries — though Altman was boasting on August 10th about the "significant increase" in both free and paid users' exposure to reasoning models . Worse still, a study written up by VentureBeat found that open-weight models burn between 1.5 to 4 times more tokens, in part due to a lack of token efficiency, and in particular thanks to — you guessed it! — reasoning models: The findings challenge a prevailing assumption in the AI industry that open-source models offer clear economic advantages over proprietary alternatives. While open-source models typically cost less per token to run, the study suggests this advantage can be “easily offset if they require more tokens to reason about a given problem.” And models keep getting bigger and more expensive, too.  Because model developers hit a wall of diminishing returns, and the only way to make their models do more was to make them burn more tokens to generate a more accurate response (this is a very simple way of describing reasoning, a thing that OpenAI launched in September 2024 and others followed). As a result, all the "gains" from "powerful new models" come from burning more and more tokens. The cost-per-million-token number is no longer an accurate measure of the actual costs of generative AI, because it's much, much harder to tell how many tokens a reasoning model may burn, and it varies (as Theo Browne noted) from model to model. In any case, there really is no changing this path. They are out of ideas. So, I've heard this argument maybe 50 times in the last year, to the point that I had to talk about it in my July 2024 piece " How Does OpenAI Survive ." Nevertheless, people make a few points about Uber and AI that I think are fundamentally incorrect, and I'll break them down for you. 
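The study's per-token versus per-task distinction is simple arithmetic, sketched below. The 1.5x and 4.0x multipliers are the range from the write-up; the per-million-token prices are invented for illustration:

```python
# Illustrative prices only; the 1.5x-4x token multipliers are the range
# reported in the study write-up.

def task_cost(tokens_used: int, price_per_m: float) -> float:
    """Dollar cost of completing one task."""
    return tokens_used * price_per_m / 1_000_000

# Proprietary model: pricier per token, but fewer tokens per task.
proprietary = task_cost(10_000, price_per_m=10.0)

# Open-weight model: cheaper per token, but burns more tokens reasoning.
for multiplier in (1.5, 4.0):
    open_weight = task_cost(int(10_000 * multiplier), price_per_m=4.0)
    verdict = "still cheaper" if open_weight < proprietary else "MORE expensive"
    print(f"{multiplier}x token burn: ${open_weight:.2f} vs ${proprietary:.2f} -> {verdict}")
```

At 1.5x the token burn, the cheaper per-token price still wins; at 4x, it loses, which is exactly what the study means by the advantage being "easily offset."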
I've seen this argument a lot, and it's one that's both ahistorical and alarmingly ignorant of the very basics of society. So, OpenAI got a $200 million defense contract with an "estimated completion date of July 2026," and is selling ChatGPT Enterprise to the US government for a dollar a year (along with Anthropic, which sells access to Claude for the same price; even Google is undercutting them both, selling Gemini access at 47 cents for a year).

You're probably reading that and saying "oh no, that means the government has paid them now, they're never going away," and I cannot be clear enough: making you believe this is the intention of these deals. They are built specifically to make you feel like these things are never going away. This is also an attempt to get "in" with the government at a rate that makes "trying" these models a no-brainer.

... and??????? "The government is going to have cheap access to AI software" does not mean that "the government relies on AI software." Every member of the government having access to ChatGPT — something that is not even necessarily the case! — does not make this software useful, let alone essential, and if OpenAI burns a bunch of money "making it work for them," it still won't be essential, because Large Language Models are not actually that useful for doing stuff!

Uber used lobbyist Bradley Tusk to steamroll local governments into allowing Uber to operate in their cities, but Tusk did not have to convince local governments that Uber was useful or train people how to use it. Uber's "too big to fail" moment was that local cabs kind of fucking sucked just about everywhere. Did you ever try and take a yellow cab from Downtown Manhattan to Hoboken, New Jersey? Or Brooklyn? Or Queens? Did you ever try to pay with a credit card? How about trying to get a cab outside of a major metropolitan area? Do you remember how bad that was?
I am not glorifying Uber the company, but the experience that Uber replaced was very, very bad. As a result, Uber did become too big to fail, because people now rely upon it, and because the old system sucked. Uber used its masses of venture capital to keep prices low to get people used to it, too, but the fundamental experience was better than calling a cab company and hoping they showed up. I also want to be clear that this is not me condoning Uber: take public transport if you can! Uber has created a new kind of horrifying, extractive labor practice which deprives people of benefits and dignity, paying off academics to help the media gloss over the horrors of its platform. It is also now having to increase prices.

What, exactly, is the "essential" experience of generative AI? What essential experience are we going to miss if ChatGPT disappears tomorrow? And on an enterprise or governmental level: what exactly are these tools doing for governments that would make removing them so painful? What use cases? What outcomes?

Uber's "essential" nature is that millions of people use it in place of regular taxis, and it effectively replaced decrepit, exploitative systems like the yellow cab medallions in New York with its own tech-enabled exploitation system that, nevertheless, worked far better for the user. Sidenote: I acknowledge that the disruption Uber brought to the medallion system had horrendous consequences for the owners of said medallions — some of whom had paid more than a million dollars for the privilege to drive a New York taxi cab, and were burdened under mountains of debt.

There is no such use case with ChatGPT, or any other generative AI system. You cannot point to one use case that is anywhere near as necessary as cabs in cities, and indeed the biggest use cases — things like brainstorming and search — are either easily replaced by any other commoditized LLM or literally already exist with Google Search.
Sorry, this is a really simple one. These data centers are not, in and of themselves, driving much economic growth other than in the costs of building them. As I've discussed again and again, there's maybe $40 billion in revenue and no profit coming out of these companies. There isn't any economic growth! They're not holding up anything! These data centers, once built, also create very little economic activity. They don't create jobs, they take up massive amounts of land and utilities, and they piss off and poison their neighbors. If anything, letting these things die would be a political win. There is no "great loss" associated with the death of the Large Language Model era. Taking away Uber would genuinely affect people's ability to get places.

So, the classic (and wrong!) argument about OpenAI and companies like OpenAI is that Uber "burned a bunch of money and is now cash-flow positive or profitable." Let's talk about raw losses, and where people are making this assumption.

Uber lost $24.9 billion in the space of four years (2019 to 2022), in part because of the billions it was spending on sales and marketing and R&D — $4.6 billion and $4.8 billion respectively in 2019 alone. It also massively subsidized the cost of rides — which is why prices had to increase — and spent heavily on driver recruitment, burning cash to get scale, the classic Silicon Valley way. This is absolutely nothing like how Large Language Models are growing, and I am tired of defending this point.

OpenAI and Anthropic burn money primarily through compute costs and specialized talent. These costs are increasing, especially with the rush to hire every single AI scientist at the most expensive price possible. There are also essential, immovable costs that neither OpenAI nor Anthropic has had to shoulder — the construction of the data centers necessary to train and run inference for their models, which I will get to in a little bit.
Yes, Uber raised $33.5 billion (through multiple rounds of post-IPO debt, though it raised about $25 billion in actual funding). Yes, Uber burned an absolute ass-ton of money. Yes, Uber has scale. But Uber was not burning money as a means of making its product functional or useful. Furthermore, the costs associated with Uber — its capital expenditures from 2019 through 2024 were around $2.2 billion! — are minuscule compared to the actual costs of OpenAI and Anthropic.

Both OpenAI and Anthropic lost around $5 billion in 2024, but their infrastructure was entirely paid for by either Microsoft, Google or Amazon. While we don't know how much of this infrastructure is specifically for OpenAI or Anthropic, as the largest model developers it's fair to assume that a large chunk — at least 30% — of Amazon's and Microsoft's capital expenditures has been to support these loads (I leave out Google as it's unclear whether it's expanded its infrastructure for Anthropic, but we know Amazon has done so). As a result, the true "cost" of OpenAI and Anthropic is at least ten times what Uber burned.

Amazon spent $83 billion in capital expenditures in 2024 and expects to spend $105 billion in 2025. Microsoft spent $55.6 billion in 2024 and expects to spend $80 billion this year. Based on my (conservative) calculations, the true "cost" of OpenAI is around $82 billion, and that only includes capex from 2024 onward, based on 30% of Microsoft's capex (as not everything has been invested yet in 2025, and OpenAI is not necessarily all of the capex) and the $41.4 billion of funding it's received so far. The true cost of Anthropic is around $77.1 billion, including all its funding and 30% of Amazon's capex from the beginning of 2024.
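For transparency, here's the back-of-envelope math behind those two figures, using only the numbers stated above. The 30% capex share is my own (admittedly rough) assumption, and the Anthropic funding figure is the amount implied by the $77.1 billion total:

```python
# Back-of-envelope: capex and funding figures are the ones cited above,
# in billions of dollars. The 30% share is an assumption, not a disclosure.
CAPEX_SHARE = 0.30

# OpenAI: 30% of Microsoft's 2024 capex + expected 2025 capex, plus funding.
microsoft_capex = 55.6 + 80.0
openai_funding = 41.4
openai_true_cost = CAPEX_SHARE * microsoft_capex + openai_funding

# Anthropic: 30% of Amazon's 2024 capex + expected 2025 capex, plus funding
# (roughly $20.7bn, the figure implied by the $77.1bn total).
amazon_capex = 83.0 + 105.0
anthropic_funding = 20.7
anthropic_true_cost = CAPEX_SHARE * amazon_capex + anthropic_funding

print(f"OpenAI 'true cost':    ~${openai_true_cost:.1f}bn")
print(f"Anthropic 'true cost': ~${anthropic_true_cost:.1f}bn")
```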
These are inexact comparisons, but the classic argument is that Uber "burned lots of money and worked out okay," when in fact the capital expenditures from 2024 onwards that are necessary to make Anthropic and OpenAI work are each — on their own — four times the amount Uber burned in over a decade. I also believe that these numbers are conservative. There's a good chance that Anthropic and OpenAI dominate the capex of Amazon and Microsoft, in part because what the fuck else are they buying all these GPUs for, as their own AI services don't seem to be making that much money at all.

Anyway, to put it real simple: AI has burned more in the last two years than Uber burned in ten. Uber didn't burn money in the same way, didn't burn much by way of capital expenditures, didn't require massive amounts of infrastructure, and isn't remotely the same in any way, shape or form, other than the fact that it burned a lot of money — and that burning wasn't because it was trying to build the core product, but rather because it was trying to scale.

I covered this in depth in the Hater's Guide To The AI Bubble, but the long and short of it is that AWS is a platform, a necessity with an obvious choice, and has burned about ten percent of what Amazon et al. have burned chasing generative AI, and had proven demand before it was built. Also, AWS was break-even in three years. OpenAI was founded in fucking 2015, and even if you start from November 2022, by AWS standards it should be break-even by now!

Amazon Web Services was created out of necessity — Amazon's infrastructure needs were so great that it effectively had to build both the software and hardware necessary to deliver a store that sold theoretically everything to theoretically anywhere, handling the traffic from customers, delivering the software that runs Amazon.com quickly and reliably, and, well, making sure things ran in a stable way.
It didn't need to come up with a reason for people to run web applications — they were already doing so themselves, but in ways that cost a lot, were inflexible, and required specialist skills. AWS took something that people already did, something for which there was proven demand, and made it better. Eventually, Google and Microsoft would join the fray.

As I've discussed in the past, this metric is basically "month × 12," and while it's a fine measure for high-gross-margin businesses like SaaS companies, it isn't for AI. It doesn't account for churn (when people leave). It's also a number intentionally used to make a company sound more successful — so you can say "$200 million annualized revenue" instead of "$16.6 million a month." Also, if they're quoting this number, it's likely the monthly number isn't consistent!

Simple answer: why have literally none of them done this yet? Why not one?

There's that "will" bullshit, once again — always about the "will." We do not know how thinking works in humans, and thus cannot extrapolate it to a machine, and at the very least human beings have the ability to re-evaluate things and learn, a thing that LLMs cannot do and will never do. We do not know how to get to AGI. Sam Altman said in June that OpenAI was "now confident [they knew] how to build AGI as we have traditionally understood it." In August, Altman said that AGI was "not a super useful term," and that "the point of all this is it doesn't really matter and it's just this continuing exponential of model capability that we'll rely on for more and more things." So, yeah, total bullshit. Even Meta's Chief AI Scientist says it isn't possible with transformer-based models. We don't know if AGI is possible; anyone claiming they do know is lying.

This, too, is hogwash — nothing different than your buddy's friend's uncle who works at Nintendo saying that Mario is coming to PlayStation.
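To restate the annualized-revenue trick from above as arithmetic (the 5% monthly churn rate below is hypothetical, purely to show why the multiplication misleads):

```python
# "Annualized revenue" is one month's revenue multiplied by 12, nothing more.

def annualized(monthly_revenue_m: float) -> float:
    """The headline trick: month x 12, in millions of dollars."""
    return monthly_revenue_m * 12

print(f"${annualized(16.7):.0f}m annualized")  # $16.7m/month becomes "$200m"

# The multiplication assumes every month repeats. With a hypothetical 5%
# monthly churn and no new sales, the year actually collected looks smaller:
actual = sum(16.7 * (0.95 ** month) for month in range(12))
print(f"${actual:.0f}m actually collected under 5% monthly churn")
```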
Ilya Sutskever and Mira Murati raised billions for companies with no product, let alone a product road map, and they did so because they saw a good opportunity for a grift and to throw a bunch of money at compute. Also: if someone from "deep within the AI industry" has told somebody "big things are coming," they are doing so to con them or to make them think they have privileged information. Ask for specifics.

This argument is posed as a comeback to my suggestion that AI isn't particularly useful — a proof point that this movement is not inherently wasteful, or that there are, in fact, use cases for ChatGPT that are lasting, meaningful or important. I disagree. In fact, I believe ChatGPT — and LLMs in general — have been marketed based on lies of inference. Ironic, I know.

I also have grander concerns and suspicions about what OpenAI considers a "user" and how it counts revenue, which I'll get into later in the week on my premium newsletter, which you should subscribe to. Here's a hint, though: 500,000 of OpenAI's "5 million business customers" are from its $15 million deal with Cal State University, which works out to around $2.50-a-user-a-month. It's also started doing $1-a-month trials of its $30-a-month "Teams" subscription, and one has to wonder how many of those are counted in that total, and for how long. I do not know the scale of these offers, nor how long OpenAI has been offering them. A Redditor posted about the deal a few months ago, saying that OpenAI was offering up to 5 seats at once. In fact, I've found a few people talking about these deals, with one even adding that they were offered an annual $10-a-month ChatGPT Plus subscription, and another saying a few weeks ago that they'd seen people offered this deal for canceling their subscription. Suspicious.

But there's a greater problem at play. So, ChatGPT has 700 million weekly active users. OpenAI has yet to provide a definition — and yes, I've asked!
— which means that an "active" user could be defined as somebody who has gone to ChatGPT once in the space of a week. This term is extremely flimsy, and doesn't really tell us much. Similarweb says that in July 2025 ChatGPT.com had 1.287 billion total visits, making it a very popular website. What do these facts actually mean, though?

As I said previously, ChatGPT has had probably the most sustained PR campaign for anything outside of a presidency or a pop star. Every single article about AI mentions OpenAI or ChatGPT, and every single feature launch — no matter how small — gets a slew of coverage. Every single time you hear "AI," you're made to think of "ChatGPT" by a tech media that has never stopped to think about its role in hype, or its responsibility to its readers. And as this hype has grown, the publicity compounds, because the natural thing for a journalist to do when everybody is talking about something is to talk about it more. ChatGPT's immediate popularity may have been viral, but the media took the ball and ran with it, and then proceeded to tell people it did stuff it did not. People were then pressured to try this service under false pretenses, something that continues to this day.

I'll give you an example. On March 15, 2023, Kevin Roose of the New York Times would say that OpenAI's GPT-4 was "exciting and scary," exacerbating (his words!) "...the dizzy and vertiginous feeling I've been getting whenever I think about A.I. lately," wondering if he was experiencing "future shock," then describing how it was an indeterminate level of "better," and something that immediately sounded ridiculous:

In one test, conducted by an A.I. safety research group that hooked GPT-4 up to a number of other systems, GPT-4 was able to hire a human TaskRabbit worker to do a simple online task for it — solving a Captcha test — without alerting the person to the fact that it was a robot. The A.I.
even lied to the worker about why it needed the Captcha done, concocting a story about a vision impairment. That doesn't sound remotely real! I went and looked up the paper, and here is the entire extent of what OpenAI shared:

This safety card led to the perpetration of one of the earliest falsehoods — and most eagerly-parroted lies — that ChatGPT and generative AI are capable of "agentic" actions. Outlet after outlet — led by Kevin Roose — eagerly interpreted an entire series of events that doesn't remotely make sense, starting with the fact that this is not something you can hire a Taskrabbit to do. Or, at the very least, not without a contrived situation where you create an empty task and ask them to complete it. Why not use Mechanical Turk? Or Fiverr? There are tons of people offering this service!

But I'm a curious little critter, so I went further and followed their citation to a link on METR's research page. It turns out that what actually happened was that METR had a researcher copy-paste the generated responses from the model and otherwise handle the entire interaction with Taskrabbit, and based on the plural "Taskrabbit contractors," it appears to have taken multiple tries. On top of that, it appears that OpenAI/METR were prompting the model on what to say, which kind of defeats the point. Emphases mine, and comments in [brackets]:

It took me five whole minutes to find this piece — which is cited on the GPT-4 system card — read it, and write this up. It did not require any technical knowledge other than the ability to read stuff. It is transparently, blatantly obvious that GPT-4 did not "hire" a Taskrabbit or, indeed, take any of these actions — it was prompted to, and they do not show the prompts they used, likely because they had to use so many of them. Anyone falling for this is a mark, and OpenAI should have gone out of its way to correct people. Instead, it sat back and let people publish outright misinformation.
Roose, along with his co-host Casey Newton, would go on to describe this example at length on a podcast that week, narrating an entire sequence where "the human actually gets suspicious" and "GPT-4 reasons out loud that it should not reveal that [it is] a robot," at which point "the TaskRabbit solves the CAPTCHA." During this conversation, Newton gasps and says "oh my god" twice, and when he asks Roose "how does the model understand that in order to succeed at this task, it has to deceive the human?" Roose responds "we don't know, that is the unsatisfying answer," and Newton laughs and states "we need to pull the plug. I mean, again, what?"

Credulousness aside, the GPT-4 marketing campaign was incredibly effective, creating an aura that allowed OpenAI to take advantage of the vagueness of its offering as people — including members of the media — willfully filled in the blanks for it. Altman has never had to work to sell this product. Think about it — have you ever heard OpenAI tell you what ChatGPT can do, or seen it go to great lengths to describe the product's actual abilities? Even on OpenAI's own page for ChatGPT, the text is extremely vague:

Scrolling down, you're told ChatGPT can "write, brainstorm, edit and explore ideas with you." It can "generate and debug code, automate repetitive tasks, and [help you] learn new APIs." With ChatGPT you can "learn something new...dive into a hobby...answer complex questions" and "analyze data and create charts." What repetitive tasks? Who knows. How am I learning? Unclear. It's got thinking built in! What that means is unclear and unexplained, and thus allows a user to incorrectly believe that ChatGPT has a brain. To be clear, I know what reasoning means, but this website does not attempt to explain what "thinking" means.
You can also "offload complex tasks from start to finish with an agent," which can, according to OpenAI , "think and act, proactively choosing from a toolbox of agentic skills to complete tasks for you using its own computer." This is an egregious lie, employing the kind of weasel-wording that would be used to torture "I.R. Baboon" for an eternity.  Precise in its vagueness, OpenAI's copy is honed to make reporters willing to simply write down whatever they see and interpret it in the most positive light. And thus the lie of inference began. What "ChatGPT" meant was muddied from the very beginning, and thus ChatGPT's actual outcomes have never been fully defined. What ChatGPT "could do" became a form of folklore — a non-specific form of "automation" that could "write code" and "generate copy and images," that can "analyze data," all things that are true but one can infer much greater meaning from. One can infer that "automation" means the automation of anything related to text, or that "write code" means "write the entirety of a computer program." OpenAI's ChatGPT agent is not, by any extension of the word, " already a powerful tool for handling complex tasks ," but it has not, in any meaningful sense, committed to any actual outcomes. As a result, potential users — subject to a 24/7 marketing campaign — have been pushed toward a website that can theoretically do anything or nothing, and have otherwise been left to their own devices. The endless gaslighting, societal pressure, media pressure, and pressure from their bosses has pushed hundreds of millions of people to try a product that even its creators can't really describe. As I've said in the past, OpenAI is deliberately using Weekly Active Users so that it doesn't have to publish its monthly active users, which I believe would be higher. Why wouldn't it do this? Well, OpenAI has 20 million paying ChatGPT subscribers and five million "business customers," with no explanation of what the difference might be. 
This is already a mediocre (3.5%) conversion rate, and its monthly active users (which are likely either 800 million or 900 million, but these are guesses!) would make that rate lower than 3%, which is pretty terrible considering everybody says this shit is the future. I'm also tired of having people claim that "search" or "brainstorm" or "companions" are lasting, meaningful business models.

So, OpenAI announced on CNBC that it hit its first $1 billion month on August 20, 2025, which brings it exactly in line with the estimated $5.26 billion in revenue that I believe it has made as of the end of July. However, remember what the MIT study said: enterprise adoption is high but transformation is low. There are tons of companies throwing money at AI, but they are not seeing actual returns. OpenAI's growth as the single-most-prominent company in AI (and, if we're honest, one of the most prominent in software writ large) makes sense, but at some point it will slow, because the actual returns for the businesses aren't there. If they were, we'd have one article where we could point at a ChatGPT integration that, at scale, helped a company make or save a bunch of money, written in plain English and not gobbledygook about "profit improvement." Also...

OpenAI is projected to make $12.7 billion in 2025. How exactly will it do that? Is it really making $1.5 billion a month by the end of the year? Even if it does, is the idea that it keeps burning $10 billion or more every year into eternity? What actual revenue potential does OpenAI have long-term? Its products are about as good as everyone else's, cost about the same, and do the same things. ChatGPT is basically the same product as Claude or Grok or any number of different LLMs. The only real advantages that OpenAI has are infrastructure and brand recognition.
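The conversion math above, spelled out (the 800 and 900 million monthly-active figures are my guesses, as noted, not disclosed numbers):

```python
# 20m paying ChatGPT subscribers plus 5m "business customers," measured
# against the user counts discussed above. MAU figures are guesses.
paying = 20_000_000 + 5_000_000

weekly_active = 700_000_000
print(f"vs 700m WAU: {paying / weekly_active:.2%}")

for guessed_mau in (800_000_000, 900_000_000):
    print(f"vs {guessed_mau // 1_000_000}m MAU (guess): {paying / guessed_mau:.2%}")
```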
These models have clearly hit a wall where training provides diminishing returns, meaning that OpenAI's infrastructural advantage is that it can continue providing its service at scale, nothing more. That infrastructure isn't making its business cheaper, either, mostly because it hasn't had to pay for it, other than the site in Abilene, Texas, where it's promised Oracle $30 billion a year by 2028. I'm sorry, I don't buy it! I don't buy that this company will continue growing forever, and its stinky conversion rate isn't going to change anytime soon.

How? Literally... how! How? How! HOW??? Nobody ever answers this question! "Efficiencies"? If you're going to say GPT-5 — here's a scoop I have about how it's less efficient!

It's very, very, very common for people to conflate "AI" with "generative AI." Make sure that whatever you're claiming or being told is actually about Large Language Models, as there are all sorts of other kinds of machine learning that people love to bring up. LLMs have nothing to do with Folding@Home, autonomous cars, or most disease research.

A lot of people think that they're going to tell me "I use this all the time!" and that it'll change my mind. I cannot express enough how irrelevant it is that you have a use case, as every use case I hear is one of the following:

This would all be fine and dandy if people weren't talking about this stuff as if it were changing society. None of these use cases come close to explaining why I should be impressed by generative AI. It also doesn't matter if you yourself have kind of a useful thing that AI did for you once. We are so past the point when any of that matters. AI is being sold as a transformational technology, and I am yet to see it transform anything. I am yet to hear one use case that truly impresses me, or even one thing that feels possible now that wasn't possible before. This isn't even me being a cynic — I'm ready to be impressed! I just haven't been in three fucking years, and it's getting boring.
Also, tell me with a straight face that any of this shit is worth the infrastructure.

One of the most braindead takes about AI and coding is that "vibe coding" is "allowing anyone to build software." While technically true, in that one can just type "build me a website" into one of many AI coding environments, this does not mean the result is functional or useful software. Let's make this really clear: AI cannot "just handle coding." Read this excellent piece by Colton Voege, then read this piece by Nik Suresh. If you contact me about AI and coding without reading these, I will send them to you and nothing else, or crush you like a car in a garbage dump, one or the other.

Also, show me a vibe-coded company. Not a company where someone who can code has quickly spun up some features: a fully-functional, secure, and useful app made entirely by someone who cannot code. You won't be able to find one, because it isn't possible. Vibe coding is a marketing term based on lies, peddled by people who lack either knowledge or morals.

Are AI coding environments making people faster? I don't think so! In fact, a recent study suggested they actually make software engineers slower. The reason that nobody is vibe coding an entire company is that software development is not just "put a bunch of code in a pile and hit go," and oftentimes when you add something, it breaks something else. This is all well and good if you actually understand code — it's another thing entirely when you are using Cursor or Claude Code like a kid at an arcade machine, turning the wheel repeatedly and pretending they're playing the demo. Vibe coders are also awful for the already-negative margins of most AI coding environments, as every single thing they ask the model to do is imprecise, burning tokens in pursuit of a goal they themselves don't understand. "Vibe coding" does not work, it will not work, and pretending otherwise is at best ignorance and at worst supporting a campaign built on lies.
If you are an AI booster, please come up with better arguments. And if you truly believe in this stuff, you should have a firmer grasp on why you do so. It's been three years, and the best some of you have is "it's real popular!" or "Uber burned a lot of money!" Your arguments are based on what you wish were true rather than what's actually true, and it's deeply embarrassing. Then again, there are many well-intentioned people who aren't necessarily AI boosters who repeat these arguments, regardless of how thinly-framed they are, in part because we live in a high-information, low-processing society where people tend to put great faith in people who are confident in what they say and sound smart. I also think the media is failing on a very basic level to realize that their fear of missing out or seeming stupid is being used against them. If you don't understand something, it's likely because the person you're reading or hearing it from doesn't either. If a company makes a promise and you don't understand how they'd deliver on it, it's their job to explain how, and your job to suggest it isn't plausible in clear and defined language . This has gone beyond simple "objectivity" into the realm of an outright failure of journalism. I have never seen more misinformation about the capabilities of a product in my entire career, and it's largely peddled by reporters who either don't know or have no interest in knowing what's actually possible, in part because all of their peers are saying the same nonsense. As things begin to collapse — and they sure look like they're collapsing, but I am not making any wild claims about "the bubble bursting" quite yet — it will look increasingly more deranged to bluntly publish everything that these companies say. Never have I seen an act of outright contempt more egregious than Sam Altman saying that GPT-5 was actually bad, and that GPT-6 will be even better . Members of the media: Sam Altman does not respect you. He is not your friend. 
He is not secretly confiding in you. He thinks you are stupid and easily manipulated, and that you will print anything he says, in large part because many members of the media do print exactly what he says, whenever he says it. To be clear, if you wrote about it and actively mocked it, that's fine.

But let's close by discussing the very nature of AI skepticism, and the so-called "void" between those who "hate" AI and those who "love" AI, from the perspective of one of the more prominent people on the "skeptic" side.

Critics and skeptics are not given the benefit of grace, patience, or, in many cases, hospitality when it comes to their position. While they may receive interviews and opportunities to "give their side," it is always framed as the work of a firebrand, an outlier, somebody with dangerous ideas that they must eternally justify. They are demonized, their points under constant scrutiny, their allegiances and intentions constantly interrogated for some sort of moral or intellectual weakness. "Skeptic" and "critic" are words said with a sneer or trepidation — a signal that the listener should be suspicious of this person who isn't agreeing that AI is the most powerful, special thing ever. To not immediately fall in love with something that everybody is talking about is to be framed as a "hater," to be introduced with the words "not everybody agrees..." on 40% of appearances.

By comparison, AI boosters are the first to get TV appearances and offers to be on panels, their coverage featured prominently on Techmeme, selling slop-like books called shit like The Future Of Intelligence: Masters Of The Brain featuring 18 interviews with different CEOs that all say the same thing.
They do not have to justify their love — they simply have to remember all the right terms, chirping out "test-time compute" and "the cost of inference is going down" enough times to summon Wario Amodei to give them an hour-long interview where he says "the models, they are, in years, going to be the most powerful school teacher ever built." And yeah, I did sell a book, because my shit fucking rocks. I have consistent, deeply-sourced arguments that I've built on over the course of years. I didn't "become a hater" because I'm a "contrarian," I became a hater because the shit that these fucking oafs have done to the computer pisses me off. I wrote The Man Who Killed Google Search because I wanted to know why Google Search sucked. I wrote Sam Altman, Freed because at the time I didn't understand why everybody was so fucking enamoured with this damp sociopath. Everything I do comes from genuine curiosity and an overwhelming frustration with the state of technology. I started writing this newsletter with 300 subscribers and 60 views, and have written it as an exploration of subjects that grows as I write. I do not have it in me to pretend to be anything other than what I am, and if that is strange to you, well, I'm a strange man, but at least I'm an honest one. I do have a chip on my shoulder, in that I really do not like it when people try to make other people feel stupid, especially when they do so as a means of making money for themselves or somebody else. I write this stuff out because I have an intellectual interest, I like writing, and by writing, I am able to learn about and process my complex feelings about technology. I happen to do so in a manner that hundreds of thousands of people enjoy every month, and if you think that I've grown this by "being a hater," you are doing yourself the disservice of underestimating me, which I will use to my advantage by writing deeper, more meaningful, more insightful things than you. 
I have watched these pigs ruin the computer again and again, and make billions doing so, all while the media celebrates the destruction of things like Google, Facebook, and the fucking environment in pursuit of eternal growth. I cannot manufacture my disgust, nor can I manufacture whatever it is inside me that makes it impossible to keep quiet about the things I see. I don't know if I take this too seriously or not seriously enough, but I am honoured that I am able to do it, and have 72,000 of you subscribed to find out when I do so.


How Does GPT-5 Work?

Welcome to another premium edition of Where's Your Ed At! Please subscribe to it so I can continue to drink 80 Diet Cokes a day. Email me at [email protected] with the subject "premium" if you ever want to chat. I realize this is before the paywall, so if you email me without paying, no promises I don't respond with the lyrics to Cheeseburger In Paradise. Also: this is an open call — if you've tried prompt caching with GPT-5 on OpenAI's API, please reach out!

You've probably heard a lot about GPT-5 this week, with takes ranging from "it's just good at stuff" to SemiAnalysis' wild statement that "GPT-5 [is setting] the stage for Ad Monetization and the SuperApp," a piece that makes several assertions about how the "router" that underpins GPT-5 is somehow the secret way that OpenAI will inject ads. Here's a quote:

This...did not make a ton of sense to me. Why would this be the case? The article also makes a lot of claims about the "value" of a question and how ChatGPT could — I am serious — "agentically reach out to lawyers" based on a query. In fact, I'm not sure this piece reflects how GPT-5 works at all. To be fair on SemiAnalysis, it's not as if OpenAI gave them much help. Here's what it says:

There is a really, really important distinction to make here: that GPT-5, as described above, is referring to GPT-5 as part of ChatGPT. OpenAI's API-based access to GPT-5 models does not route them, nor does OpenAI offer access to its router, or any other associated models.

How do I know this? Because I went and found out how ChatGPT-5 actually works. In discussions with a source at an infrastructure provider familiar with the architecture, it appears that ChatGPT-5 is, in fact, potentially more expensive to run than previous models, and due to the complex and chaotic nature of its architecture, can at times burn upwards of double the tokens per query.
ChatGPT-5 is also significantly more convoluted, plagued by latency issues, and more compute-intensive thanks to OpenAI's new "smarter, more efficient" model.

In simple terms, every user prompt on ChatGPT — whether it's on the auto, "Fast," "Thinking Fast" or "Thinking" tab — starts by putting the user's prompt before the "static prompt," the hidden prompt where instructions like "You are ChatGPT, you are a Large Language Model, You Are A Helpful Chatbot" and so on go. These static prompts are different for each model you use: a reasoning model will have a different instruction set than a more chat-focused one, such as "think hard about a particular problem before giving an answer." This becomes an issue when you use multiple different models in the same conversation, because the router — the thing that selects the right model for the request — has to look at the user prompt first. It can't consider the static instructions first; the order has to be flipped for the whole thing to work.

Put more simply: previous versions of ChatGPT would take the static prompt, and then (invisibly) append the user prompt onto it. ChatGPT-5 can't do that.

Every time you use ChatGPT-5, every single thing you say or do can cause it to do something different. Attach a file? Might need a different model. Ask it to "look into something and be detailed?" Might trigger a reasoning model. Ask a question in a weird way? Sorry, the router's gonna need to send you to a different model.

Every single thing that can happen when you ask ChatGPT to do something may trigger the "router" to change model, or request a new tool, and each time it does so requires a completely fresh static prompt, regardless of whether you select Auto, Thinking, Fast or any other option. This, in turn, requires it to expend more compute, with queries consuming more tokens compared to previous versions.

As a result, ChatGPT-5 may be "smart," but it sure doesn't seem "efficient."
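To make the ordering point concrete, here's a toy sketch — my own illustration, not OpenAI's actual architecture — of why putting the user's prompt ahead of the static prompt defeats prefix caching. Providers cache the computed state for a shared leading prefix of a prompt, so only the text after the first divergence has to be recomputed; a router that must read the user's message first forces every request to begin with unique text, so nothing is reusable.

```python
# A minimal sketch of why prefix caching depends on prompt ordering.
# This is NOT OpenAI's implementation -- it's a toy model where a cache
# stores prefix "blocks" (a stand-in for KV-cache blocks), and only a
# shared leading prefix can score cache hits.

def count_cached_chars(cache: set[str], prompt: str, block: int = 16) -> int:
    """Count how many leading characters of `prompt` hit the cache,
    caching each prefix block as we go."""
    hits = 0
    for i in range(0, len(prompt), block):
        chunk = prompt[:i + block]
        if chunk in cache:
            hits += min(block, len(prompt) - i)
        else:
            cache.add(chunk)
    return hits

STATIC = "You are ChatGPT, a helpful assistant. " * 4  # hypothetical hidden instructions
prompts = ["What's 2+2?", "Summarize this file.", "Write a haiku."]

# Old ordering: static prompt first, so every request shares a long prefix.
cache_a: set[str] = set()
static_first = [count_cached_chars(cache_a, STATIC + p) for p in prompts]

# Router ordering: user prompt first, so prefixes diverge immediately.
cache_b: set[str] = set()
user_first = [count_cached_chars(cache_b, p + STATIC) for p in prompts]

print(static_first)  # after the first request, the static prefix is reused
print(user_first)    # nothing reusable: every prompt starts differently
```

Under this (simplified) model, every request after the first reuses the entire static prefix when the static prompt leads, and reuses nothing when the user prompt leads — which is the infrastructural cost the rest of this section is describing.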
To play Devil's Advocate, OpenAI likely added the routing model as a means of creating more sophisticated outputs for users, and, I imagine, with the intention of cost-saving. Then again, this may just be the thing that it had ready to ship — after all, GPT-5 was meant to be "the next great leap in AI," and the pressure was on to get it out the door.

By creating a system that depends on an external routing model — likely another LLM — OpenAI has removed the ability to cache the hidden instructions that dictate how the models generate answers in ChatGPT, creating massive infrastructural overhead. Worse still, this happens with every single "turn" (i.e., message) on ChatGPT-5, regardless of the model you choose, creating endless infrastructural baggage with no real way out that only compounds as a user's queries get more complex.

Could OpenAI make a better router? Sure! Does it have a good router today? I don't think so! Every time you message ChatGPT it has the potential to change model or tooling based on its own whims, each time requiring a fresh static prompt. It doesn't even need to be a case where a user asks ChatGPT-5 to "think," and based on my tests with GPT-5, sometimes just asking it a four-word question can trigger it to "think longer" for no apparent reason.

OpenAI has created a product with latency issues and an overwhelmingly convoluted routing system that's already straining capacity, to the point that this announcement feels like OpenAI is walking away from its API entirely. Unlike the GPT-4o announcement, which mentions the API in the first paragraph, the GPT-5 announcement has no reference to it, and a single reference to developers when talking about coding. Sam Altman has already hinted that he intends to deprecate any "new API demand" — though I imagine he'll let anyone in who will pay for priority processing.
ChatGPT-5 feels like the ultimate comeuppance for a company that was never forced to build a product, choosing instead to bolt increasingly-complex "tools" onto the sides of its models in the hopes that a product would magically appear. Now each and every "feature" of ChatGPT burns even more money than it did before. It feels like something rushed to market by a desperate company that had to get something out the door.

In simpler terms, OpenAI gave ChatGPT a middle manager.


The Enshittification of Generative AI

Thanks for subscribing to Where's Your Ed At Premium, please shoot me an email at [email protected] if you ever have any questions.

Yesterday, OpenAI launched GPT-5, a new "flagship" model of some sort that's allegedly better at coding and writing, but upon closer inspection feels like the same old shit the company has been shoveling for the last year or two. Sure, I'm being dismissive, but three years and multiple half-billion-dollar training runs later, OpenAI has delivered us a model that is some indeterminate level of "better," one that "scared" Sam Altman, and that immediately began doing what some Twitter users called "chart crimes" with its supposed coding benchmark charts.

This also begs the question: what is GPT-5? WIRED calls it a "flagship language model," but OpenAI itself calls it a "unified system with a smart, efficient model that answers most questions, a deeper reasoning model, and a real-time router that quickly decides which [model] to use based on conversation type, complexity, tool needs, and your explicit intent." That sure sounds like two models to me, and not necessarily new ones! Altman, back in February, said that GPT-5 was "a system that integrates a lot of our technology, including o3."

It is a little unclear what GPT-5 — or at least the one accessed through ChatGPT — is. According to Simon Willison, there are three sub-models — a regular, a mini and a nano model, "which can each be run at one of four reasoning levels" if you configure them using the API. When it comes to what you access on ChatGPT, however, you've got two options — GPT-5 and GPT-5-Thinking, with the entire previous generation of GPT models no longer available for most users to access.
I believe GPT-5 is part of a larger process happening in generative AI — enshittification, Cory Doctorow's term for when platforms start out burning money offering an unlimited, unguarded experience to attract users, then degrade the experience and move features to higher tiers as a means of draining the blood from those users.

With the launch of GPT-5, OpenAI has fully committed to enshittifying its consumer and business subscription products, arbitrarily moving free users to a cheaper model and limiting their ability to generate images, and removing the ability to choose which model you use in its $20, $35 and "enterprise" subscriptions, moving any and all choice to its "team" and $200-a-month "pro" subscriptions. OpenAI's justification is an exercise in faux-altruism, framing "taking away all choice" as a "real-time router that quickly decides which [model] to use." ChatGPT Plus and Team members now mostly have access to two models — GPT-5 and GPT-5-Thinking — down from the six they had before.

This distinction is quite significant. Where users once could get hundreds of messages a day on OpenAI's o4-mini-high and o4-mini reasoning models, GPT-5 for ChatGPT Plus subscribers offers 200 reasoning (GPT-5-Thinking) messages a week, with 80 GPT-5 messages every 3 hours, which allow you to ask it to "think" about its answer, shoving you over to an undisclosed reasoning model. This may seem like a good deal, but OpenAI is likely putting you on the cheapest model whenever it can in the name of "the best choice." While Team accounts have "unlimited" access to GPT-5, they still face the same 200-reasoning-messages-a-week limit, and while yes, you could ask it to "think" more, do you think that OpenAI is going to give you its best reasoning models? Or will it, as it said, "bring together the best of their previous models" and "choose the right one for the job"?
Furthermore, OpenAI is permanently sunsetting ChatGPT access to every model that doesn't start with GPT-5 on August 14th, except for customers of its most expensive subscription tier. OpenAI will (and it appears this applies to the $200-a-month "Pro" plan too, I'm told by reporter Joanna Stern) reduce your model options to two or three choices (Chat, Thinking and Pro), and will choose whatever sub-model it sees fit in the most opaque way possible. GPT-5 is, by definition, a "trust me bro" product. OpenAI is trying to reduce the burden of any particular user on the system under the guise of providing the "smartest, fastest model," with "smartest" defined internally in a way that benefits the company, marketed as "choosing the best model for the job."

Let's see how users feel! An intrepid Better Offline listener pulled together some snippets from r/ChatGPT, where users are mourning the loss of GPT-4o, furious at the loss of other models and calling GPT-5, in one case, "the biggest peice (sic) of garbage even as a paid user," adding that "projects are absolutely brain-dead now." One user said that GPT-5 is "the biggest bait-and-switch in AI history," another said that OpenAI "deleted a workfow (sic) of 8 models overnight, with no prior warning," and another said that "ChatGPT 5 is the worst model ever." In fact, there are so many of these posts that I could find one to link to for every word of this paragraph in under five minutes.

Yet OpenAI isn't just screwing over consumers. Developers that want to integrate OpenAI's models now have access to "priority processing" — previously an enterprise-only feature (see this archive from July 21st 2025) to guarantee low latency and uptime. While this sounds like something altruistic, or a new beneficial feature, I'm not convinced. I believe there's only one reason to do this: OpenAI intends to, or will be forced to due to capacity constraints, start degrading access to its API.
As with every model developer, we have no real understanding of what may or may not lead to needing "reliable, high-speed performance" from API access, but the suggestion here is that failing to pay OpenAI's troll toll will put your API access in the hole. That toll is harsh, too, nearly doubling the API price on each model, and while the priority processing page has pricing for all manner of models, its pricing page reduces the options down to two — GPT-5 and GPT-5-mini — suggesting it may not intend to provide priority access in perpetuity.

OpenAI is far from alone in turning the screws on its customers. As I'll explain, effectively every consumer generative AI company has started some sort of $200-a-month "pro" plan — Perplexity Max, Gemini ($249.99 a month before discounts), Cursor Ultra, Grok Heavy (which is $300 a month!), and, of course, Anthropic, whose $100-a-month and $200-a-month plans allowed Claude Code users to spend anywhere from 100% to 10,000% of their monthly subscription in API calls. This led to rate limits starting August 28 2025 — a conveniently-placed date that allows Anthropic to close as much as $5 billion in funding before its users churn. Worse still, Anthropic burned all of that cash to get Claude Code to $400 million in annualized revenue according to The Information — around $33 million in monthly revenue that will almost certainly evaporate as its customers hit week-long rate limits on a product that's billed monthly.

These are not plans created for "power users." They are the actual price points at which these things need to be sold to be remotely sustainable, though Sam Altman said earlier in the year that ChatGPT Pro's $200-a-month subscription was losing OpenAI money. And with GPT-5, meaningful functionality — the ability to choose the specific model you want for a task — is being completely removed for ChatGPT Plus and Team subscribers.
This is part of an industry-wide enshittification of generative AI, where the abominable burn rates behind these products are forcing these companies to take measures ranging from minor to drastic. The problem, however, is that these businesses have yet to establish truly essential products, and even when they create something popular — like Claude Code — they can't do so without burning horrendous amounts of cash. The same goes for Cursor, and, I believe, just about every other major product built on top of Large Language Models. And I believe that when they try to adjust pricing to reflect their actual costs, that popularity will begin to wane. We're already seeing that with Claude Code, based on the sentiment I've seen on the tool's Reddit page, although I'm also wary of making any sweeping statements right now, as it's just too early to say.

The great enshittification of AI has begun.


AI Is A Money Trap

In the last week, we've had no less than three different pieces asking whether the massive proliferation of data centers is a massive bubble, and though they, at times, seem to take the default position of AI's inevitable value, they've begun to sour on the idea that it's going to happen soon. Meanwhile, quirked-up threehundricorn OpenAI has either raised or is about to raise another $8.3 billion in cash, less than two months after it raised $10 billion from SoftBank and a selection of venture capital firms.

I hate to be too crude, but where the fuck is this money going? Is OpenAI just incinerating capital? Is it compute? Is it salaries? Is it to build data centers, because SoftBank isn't actually building anything for Stargate? The Information suggested OpenAI is using the money to build data centers — possibly the only investment worse than generative AI itself, and one that it can't avoid, because OpenAI is also somehow running out of compute.

And now the company is in "early-stage discussions" about an employee share sale that would value it at $500 billion, a ludicrous number that shows we're leaving the realm of reality. To give you some context, Shopify's market cap is $197 billion, Salesforce's is $248 billion, and Netflix's is $499 billion. Do you really think that OpenAI is worth more than these companies? Do you think it's worth more than AMD at a $264 billion market cap? Do you?

Amongst this already-ridiculous situation sits the issue of OpenAI and Anthropic's actual revenues, which I wrote about last week, and which I've roughly estimated to be $5.26 billion and $1.5 billion respectively (as of July). In any case, these estimates were made based on both companies' predilection for leaking their "annualized revenues," or month × 12.
This extremely annoying term is one that I keep bringing up because it's become the de facto way for generative AI companies to express their revenue, and both OpenAI and Anthropic are leaking these figures intentionally, and doing so in a way that suggests they're not even using the traditional ways of calculating them. OpenAI leaked on July 30 2025 that it was at $12 billion annualized revenue — so around $1 billion in a 30-day period — yet two days later, on August 1 2025, the New York Times reported it was at $13 billion annualized revenue, or $1.08 billion of monthly revenue. It's very clear OpenAI is not talking in actual calendar months, at which point we can assume something like a trailing 30-day window (where the "month" is just the last 30 days rather than a calendar month). We can, however, declaratively say that it's not doing "the month of June" or "the month of July," because if it was, OpenAI wouldn't have given two vastly different god damn numbers in the same two-day period. That doesn't make any sense. There are standard ways to handle annualized revenue, and it's clear they're not following them. And to be even clearer, while I can't say for certain, I believe these leaks are deliberate. OpenAI's timing matches exactly with fundraising.

On Anthropic's side, these revenues are beginning to get really weird. Anthropic went from making $72 million ($875 million annualized) in January to $433 million in July — or at least, it leaked on July 1, 2025 that it was at $4 billion annualized to The Information ($333 million a month) and claimed it had reached $5 billion annualized revenue ($416 million a month) to Bloomberg on July 29 2025.

How'd it get there? I'm guessing it was from cranking up prices on Cursor, and we've had confirmation that's the case thanks to The Information reporting that $1.4 billion of its annualized revenue comes from its top two customers (so around $116 million a month), the biggest of which is Cursor.
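The annualization arithmetic above can be sketched in a few lines. The daily figures here are invented for illustration; the only point is the mechanism: a trailing-30-day window produces a different "annualized" number every single day for a growing company, which is how two leaks two days apart can disagree without either one being a calendar-month figure.

```python
# Hedged sketch of "annualized revenue" arithmetic. All daily figures
# are invented -- only the mechanism matters.

def annualized_calendar(month_revenue: float) -> float:
    """Calendar-month revenue times 12."""
    return month_revenue * 12

def annualized_trailing(daily_revenue: list[float]) -> float:
    """Sum of the last 30 days, times 12."""
    return sum(daily_revenue[-30:]) * 12

# A growing daily revenue series, in millions of dollars per day.
daily = [20 + 0.5 * day for day in range(60)]

leak_one = annualized_trailing(daily[:58])  # a "leak" on day 58
leak_two = annualized_trailing(daily[:60])  # a "leak" on day 60

print(round(leak_one), round(leak_two))  # two days apart, two different numbers
```

With growth, the trailing window drops two weak days and adds two strong ones each time it moves, so the headline number climbs daily — convenient if you're leaking it mid-fundraise.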
Confusingly, The Information also says that Anthropic's Claude Code is "generating nearly $400 million in annualized revenue, roughly doubling from just a few weeks ago," meaning about $33 million of monthly revenue.

In any case, I think Cursor is a huge indicator of the current fragility of the bubble — and of the fact that for most AI startups, there's simply no way out, because being acquired or going public does not appear to be a viable route. I know it sounds a little insane, but I believe that Cursor is the weak point of the entire AI bubble, and I'll explain why, and how this could go. This is, by no means, inevitable, but I cannot work out what Cursor does other than this.

Cursor, at this point, faces two options: die, or get acquired. This is not an attack on anyone who works at the company, nor anything personal. The unit economics of this business do not make sense, and yet, on some level, its existence is deeply important to the valley's future.

OpenAI? OpenAI couldn't acquire Windsurf because it was too worried Microsoft would get the somehow-essential IP of one of what feels like a hundred different AI-powered coding environments. It also already tried and failed to buy Cursor, and if I'm honest, I bet Cursor would sell now. Honestly, Cursor fucked up bad not selling then. It could have got $10 billion, and Sam Altman would've had to accelerate the funding clause. It would've been so god-damn sick, but now the only "sick" thing here is Cursor's fragile, plagued business model.

How about Anthropic? Eh! It already has its own extremely-expensive coding environment, Claude Code, which I estimated loses the company 100% to 10,000% of a subscription per customer a few weeks ago, and now Anthropic is adding weekly limits on accounts, which will, I believe, create some of the most gnarly churn in SaaS history. Also, does Anthropic really want to acquire its largest customer? And with what money? It's not raising $5 billion to bail out Cursor.
Anthropic needs that money to feed directly into Andy Jassy's pocket to keep offering increasingly-more-complex models that never quite seem to be good enough.

Google? It just sort-of-bought Windsurf! It can't do that again. It's already handed out the participation trophy of multiple billions of dollars to investors and founders so nobody has to get embarrassed about this, and then allowed Cognition to pick up the scraps of a business that made $6.83 million a month after burning $143 million of investor capital (TechCrunch reports Windsurf was left with $100 million in cash post-acquisition). TechCrunch also reports that Cognition paid $250 million for what remained, and that this deal didn't actually pay out the majority of Windsurf's employees.

Meta? If I'm Cursor's CEO, I am calling Mark Zuckerberg and pretending that I think the only person in the world who can usher in the era of Superintelligence is the guy who burned more than $45 billion on the metaverse and believes that not wearing AI glasses in the future will be a disadvantage. I would be saying all manner of shit about the future, and that the only way to do this was to buy my AI-powered coding startup that literally can't afford to exist.

And that really is the problem. These companies are all going through the same motions that every company before them did — raise as much money as possible, get as big as possible, and eventually scale to the point you're fat with enterprise cash. Except the real problem is that, just like the glut of physical real estate big tech has taken on, generative AI companies are burdened with a constant and aggressive form of cloud debt — the endless punishment of the costs of accessing the API for generative AI models that always seem to get a little better, but never in such a way that anything really changes, other than how much Anthropic and OpenAI are going to need at the end of the month or they break your startup's legs. I'm not even trying to be funny!
Anthropic raised its prices on Cursor so severely it broke Cursor's already-unprofitable business model. These products — while also, for the most part, not producing that much revenue — need to be sold without users being aware of (and sensitive to) the cost of providing them, which is why Cursor's original product was $20-a-month for 500 "fast requests" of different models, and why accessing Claude Code on any subscription is either $20, $100, or $200 a month rather than paying per API call: these companies all sell products that shield the customer from the actual costs of running the services.

The irony is that, despite being willing to kill these companies by fundamentally changing the terms upon which they access its models, Anthropic is also, in some way, dependent on Cursor, Replit, and other similar firms continuing to buy tokens at the same rate as before, as that consumption is baked into its ARR figures, as well as its forward-looking revenue projections. It is, in some sense, a Kobayashi Maru. Anthropic has an existential need to screw over its customers by hiking rates and imposing long-term commitments, but its existence is also, in some way, predicated on those companies continuing to exist. If Cursor and Replit both die, that's a significant chunk of Anthropic's API business gone in a flash — and, may I remind you, its API business significantly overshadows its subscription business (making it almost an inverse of OpenAI, where subscriptions drive the bulk of revenue).

Anthropic's future is wedded to Cursor, and I just don't see how Cursor survives, let alone exits, or gets subsumed by another company in a way that mirrors how acquisitions have worked since…ever.
If Cursor does not sell for a healthy amount — I'm talking $10 billion plus, and I mean actually sell, not "the founders are hired in a strange contractual agreement that pays out investors and its assets are sold to Rick from Pawn Stars" — it will prove that no generative AI company, to this date, has actually been successful. In reality, I expect a Chumlee-esque deal that helps CEO Michael Truell buy a Porsche while his staff makes nothing. Is Cursor worth $10 billion? Nope! No matter how good its product may or may not be, it is not good enough to be sold at a price that doesn't require Cursor to incinerate hundreds of millions of dollars with no end in sight.

And this ultimately gives us the real conundrum — why aren't generative AI startups selling?

Before we go any further: there have been some acquisitions, but they are sparse, and seem almost entirely centered around bizarre acqui-hires and confusing fire sales. AMD bought Silo AI, "the largest private AI lab in Europe," in August 2024 for $665 million, which appears to be the only real acquisition in generative AI history, and one partially based on Silo's use of AMD's GPUs. Elsewhere, NVIDIA bought OctoAI for an estimated $250 million in September 2024, after buying Brev.dev in July 2024 for an undisclosed sum, and then Gretel in March 2025. Yet in all three cases these are products to deploy generative AI, not products built on top of generative AI or AI models. Canva bought "generative AI content and research company" Leonardo.AI in July 2024 for an undisclosed sum.

Really, the only significant one I've seen was on July 29 2025 — publicly-traded customer service platform NICE buying AI-powered customer service company Cognigy in a $955 million deal. According to CX Today, Cognigy expects about $85 million in revenue this year, though nobody appears to be talking about costs.
However, Cognigy, according to some sources, charges tens or hundreds of thousands of dollars per contract for its "AI voice agents" that can "understand and respond to user input in a natural way." Great! We've got one real-deal "company built on models" acquisition, and it's a company that most people haven't heard of, making around $7 million a month. Let's take a look at the others.

Outside of one very industry-specific acquisition, there just doesn't seem to be the investor hunger to buy a company valued at $9.9 billion. And you have to ask why. If AI is, as promised, the thing that'll radically change our economy, and these companies are building the tools that'll bring about that change, why does nobody want to buy them? And, in the broader term, what does it mean when these companies — those with $10bn, or in the case of OpenAI, $300bn valuations — can't be bought, and can't go public? Where does this go? What happens next? What's the gameplan here? How will the venture firms that ploughed billions of capital into these businesses bring a return for their LPs if there are no IPOs or buyouts?

The economic implications of these questions are, quite frankly, terrifying, especially when you consider the importance that VC has historically held in building the US tech ecosystem, and they raise further questions about the impact of an AI bubble on companies that are promising, and do have a viable business model and a product with actual fit, but won't be able to actually raise any cash.

Great! I would believe it was possible if it had ever, ever happened, which it has not. I'm not even being sarcastic or rude. It has just not happened. No company that stakes its entire product on generative AI appears to be able to make money. Glean, a company that makes at best $8.3 million a month ($100 million annualized revenue), said it had $550 million in cash in December of last year, and then had to raise $150 million in June of this year. Where did that money go?
Why does a generative search engine product, with revenues less than a third of the Cincinnati Reds baseball team’s, need half a billion dollars to make $8.3 million a month?

I’m not saying these companies are unnecessary, so much as they may very well be impossible to run as real businesses. This isn’t even a qualitative judgment of any one generative AI company. I’m just saying: if any of these were good businesses, they would be either profitable or getting acquired in actual deals, and there would be good businesses in this space by now. The amount of cash they are burning does not suggest they’re rapidly approaching any kind of sane burn rate, or we would have heard.

Putting aside any kind of skepticism I have, and anything you may hold against me for what I say or the way I say it: where are the profitable companies? Why isn’t there one, outside of the companies creating data to train the AI models, or NVIDIA? We’re three years in, and we haven’t had one. We also have had no exits and no IPOs. There has been no cause for celebration, no validation of a business model through another company deciding that it was necessary to continue its dominance by raising funds on the public market, or by allowing actual investors — flawed though they may be — to act as the determiners of its value.

It is unclear what the addition of Windsurf’s intellectual property adds to Cognition, much like it’s a little unclear what differentiates Cognition’s so-called AI-powered software engineer “Devin” from anything else on the market. I hear Goldman is paying for it, and its CIO Marco Argenti said the stupidest shit I’ve ever heard to CNBC, in a quote that nevertheless shows how little it’s actually paying:

“We’re going to start augmenting our workforce with Devin, which is going to be like our new employee who’s going to start doing stuff on the behalf of our developers,” Argenti told CNBC.
“Initially, we will have hundreds of Devins [and] that might go into the thousands, depending on the use cases.”

Hundreds of Devins = hundreds of seats. At a very optimistic 500 users at the highest-end pricing of $500-a-month (if it’s $20-a-month, Cognition is making a whole $10,000 a month, at most) — and let’s assume that it does a discount at enterprise scale, because that always happens — that’s $250,000 a month! Wow! $3 million in annualized revenue? On a trial basis? Amazing!

In fact, I can’t find a shred of evidence that Cognition otherwise makes much money. Despite currently raising $300 million at a $10 billion valuation, I can find no information about Cognition’s revenues beyond one comment from The Information from July 2024, when Cognition raised at a $2 billion valuation: “Cognition’s fundraise is the latest example of AI startups raising capital at sky-high valuations despite having little or no revenue.”

In a further move, per The Information, that is both a pale horse and a deeply scummy thing to do, Cognition has laid off 30 people from the Windsurf team, and is now offering the remaining 200 buyouts equal to 9 months of salary and, I assume, the end of any chance to accrue further stock in Cognition. CEO Scott Wu said the following in the email telling Windsurf employees about the layoffs and buyouts:

“We don’t believe in work-life balance—building the future of software engineering is a mission we all care so deeply about that we couldn’t possibly separate the two,” he said. “We know that not everyone who joined Windsurf had signed up to join Cognition where we spend 6 days at the office and clock 80+ hour weeks.”

All that piss, vinegar, and burning of the midnight oil does not appear to have created a product that actually matters.
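Back to the seat math for a second. Here’s a minimal back-of-envelope sketch of it; the seat count and price points are the hypothetical figures from earlier in this piece, not disclosed Cognition numbers.

```python
# Back-of-envelope revenue for a seat-based product like Devin.
# The seat count and prices are hypotheticals from the discussion
# above, not disclosed Cognition figures.

def annual_revenue(seats: int, price_per_seat_per_month: float) -> float:
    """Annualized revenue for a given number of seats at a monthly price."""
    return seats * price_per_seat_per_month * 12

# Optimistic case: 500 seats at the top-end $500-a-month price point.
optimistic = annual_revenue(500, 500)   # $250,000/month, $3M/year

# Low-end case: the same 500 seats at $20 a month.
low_end = annual_revenue(500, 20)       # $10,000/month, $120K/year

print(f"Optimistic: ${optimistic:,.0f}/year")
print(f"Low end:    ${low_end:,.0f}/year")
```

Even the optimistic case is rounding-error money for a company raising at a $10 billion valuation.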
I realize this is a little cold, but if you’re braying and smacking your chest about your hard-charging, 6-days-a-week office culture, you should be able to do better than “we have one publicly-known customer and nobody knows our revenue.”

Maybe it’s a little simpler: Cognition paid $250 million to acquire Windsurf so that it could, after the transaction, say it has $82 million in annualized revenue. If that’s the case, this is one of the dodgiest, weirdest acquisitions I’ve seen in my life — two founders getting a few hundred million dollars between them and their investors, and a few of their colleagues moving with them to Google, leaving the rest of the staff effectively jobless or in Hell with little payoff for their time working at Windsurf. I can only imagine how it must have felt to go from being supposedly acquired by OpenAI to this farcical “rich get richer” bullshit. It also suggests that the actual underlying value of Windsurf’s IP was $250 million.

So, I ask: why, exactly, is Cognition worth $10 billion? And why did it have to raise $300 million after raising “hundreds of millions,” according to Bloomberg, in March? Where is the money going? It doesn’t seem to have great revenue, Carl Brown of Internet of Bugs revealed that it faked the demo of “Devin the AI-powered software developer” last year, and Devin doesn’t even rank on SWE-bench, the industry standard for measuring model efficacy at coding tasks. At best, it’s now acquired its own unprofitable coding environment and the smidgen of revenue associated with it.

How would Cognition go public? What is the actual exit path for Cognition, or any other generative AI startup? And that, right there, is Silicon Valley’s own housing crisis, except instead of condos and houses they can’t afford with sub-prime adjustable-rate mortgages, venture capitalists have invested in unprofitable, low-revenue startups with valuations that they can never sell at.
And, like homeowners in the dismal years of 2008 and 2009, they’re almost certainly underwater — they just haven’t realized it yet. Where consumers were unable to refinance their mortgages to bring their monthly payments down, generative AI startups face pressure to continually raise at higher and higher valuations to keep up with their costs, with each raise making it less likely their company will survive.

The other difference is that, in the case of the housing crisis, those who were able to hold onto their properties eventually saw their equity recover to pre-crash levels. That’s partly because housing is essential, and because its price is influenced just as much by supply and demand as by people’s ability to finance the purchase of properties, and when the population increases, so too does the demand for housing. None of that is true with AI. There’s a finite number of investors, a finite number of companies, and a finite amount of capital — and those companies are only as valuable as the expectations that investors have for them, and as the broader sentiment towards AI.

Who is going to buy Cognition? Because the only other opportunity for the investors who put money into this company to make money here — let alone to recoup their initial investment — is for Cognition to go public. Do you think Cognition will go public? How about Cursor? It’s worth $9.9 billion, and there was a rumour back in June that it was raising at a valuation of $18 billion to $20 billion. Do you see Perplexity, at a valuation of $18 billion, selling to another company? The alternative, as discussed, is that Perplexity, a company with 15 million users and $150 million in annualized revenue (still less than half the revenue of the Cincinnati Reds baseball team, which makes $325 million in annual revenue, and that’s real money, not “annualized revenue”), must go public.
Perplexity has, at this point, raised over a billion dollars, to lose $68 million in 2024 on $34 million of revenue. By comparison, the Cincinnati Reds are a great business, with an income of $29 million, all to provide a service that upsets and humiliates millions of people from Ohio every year for the pleasure of America.

Putting aside the Reds, what exactly is it that Perplexity could offer to the public markets as a stock, or to an acquirer? Apple considered acquiring it in June, but Apple tends to acquire the companies it wants to integrate into the core business (as was the case with Siri), which makes me think that Perplexity leaked information about a deal that was never really serious. Hell, Meta talked about acquiring it too. Isn’t it weird that two different companies talked about buying Perplexity but neither of them did it? CEO Aravind Srinivas said in July that he wanted to “remain independent,” which is a weird thing to say after talking to two multi-trillion-dollar market cap tech firms about selling to them.

It’s almost as if nobody actually wants to buy Perplexity, or any of these sham companies, which I know sounds mean, but if you are worth billions or tens of billions of dollars and you can’t make more than a bottom-tier baseball team in fucking Ohio, you are neither innovative nor deserving of said valuation.

But really, my pissiness and baseball comparisons aside, what exactly is the plan for these companies? They don’t make enough money to survive without a continuous flow of venture capital, and they don’t seem to make impressive sums of money even when allowed to burn as much as they’d like. These companies are not being forced to live frugally, or at least have yet to be, perhaps because they’re all actively engaged in spending as much money as possible in pursuit of finding an idea that makes more money than it loses. This is not a rational or reasonable way to proceed.
Yes, there are startups that can justify burning capital. Yes, there are companies that have burned hundreds of millions of dollars to find their business models, or billions in the case of Uber, but none of the companies in the generative AI space are like those companies. GenAI businesses don’t have the same economics, nor do they have the same total addressable markets. If you’re going to say “Amazon Web Services,” I already explained why you’re wrong a few weeks ago.

These startups are their VC firms’ subprime mortgages: overstuffed valuations with no exit route, and no clear example of how to sell them or who to sell them to. The closest thing they’ve got is using generative AI startups as beauty pageants for guys wearing Patagonia, finding ways to pretend that the guy who runs an AI startup — sorry, AI lab — is some sort of mysterious genius, versus just another founder in just another bubble with just another overstuffed valuation.

The literal only liquidity mechanism (outside of Cognigy) that generative AI has had so far is “selling AI talent to big tech at a premium.” Nobody has gone or is going public, and if they are not going public, the only route for these companies is to either become profitable — which they haven’t — or sell to somebody, which they don’t. But I’ve been dancing around the real reason they won’t sell: because, fundamentally, generative AI does not let companies build something new. Anyone that builds a generative AI product is ultimately just prompting the model, albeit in increasingly complex ways at the scale of something like Claude Code — though Anthropic has the advantage of being one of the main veins of infrastructure. This means that a generative AI company owns very few unique things beyond its talent, and will forever be at the mercy of any and all decisions that its model provider makes, such as increasing prices or creating competing products.

I know it sounds ludicrous, but this is the reality of these companies.
While there are some companies that have some unique training and models, none of them seem to be building interesting or unique products as a result. If your argument is that these things take some time: how long do they have? No, really! So many of you have said “this is what happens, they burn a bunch of money, they grow, and then…” and then you stop short, because the next thing you’d have to say is “turn profitable by getting enterprise customers.” Nobody can do the first part, and few can do the second part in anything approaching a consistent fashion.

But really, how long should we give them? Three years? Perplexity’s had three years and a billion dollars, and it doesn’t seem to be close to profitable. How long does Perplexity deserve, exactly? An eternity?

Every single example of a company that burned a lot of money and then eventually stopped doing so has been a company with a physical thing or connections to the real world, with the exception of Facebook, which was never the kind of cash-burning monstrosity that generative AI is. There has never been a software company that has just chewed through hundreds of millions — or billions — of dollars and then suddenly become profitable, mostly because the magical valuations of software have come from its ability to transcend infrastructure. The unit economics of selling software like Microsoft Office, or of providing access to Instagram, do not require the most powerful graphics processing units running at full tilt at all times, and those are products that people like and want to use every day.

I get people saying “they’re in the growth stage!” about a few companies, but when all of them are unprofitable, and even the ones outside of OpenAI and Anthropic aren’t really making impressive amounts of money anyway? C’mon! This isn’t anything like any boom that leads to something, and that’s because the economics do not make sense. And that’s before we get to OpenAI and Anthropic!
So, as a reminder: OpenAI appears to have burned at least ten billion dollars in the last two months. It has just raised another $8.3 billion (after raising $10 billion in June, according to the New York Times), and intends to receive around $22.5 billion from SoftBank by the end of the year, assuming it becomes a for-profit entity by then. If that doesn’t happen, the round gets cut to $20 billion total, meaning that SoftBank would only be on the hook for a further $1.7 billion.

I am repeating myself, but I need you to really get this: OpenAI just got $10 billion in June 2025, and had to raise another $8.3 billion in August 2025. That is an unbelievable cash burn, one dwarfing any startup in history, rivalled only by xAI, makers of “Grok, the racist LLM,” which is losing over $1 billion a month.

I should be clear that if OpenAI does not convert to a for-profit, there is no path forward. To continue raising capital, OpenAI must have the promise of an IPO. It must go public, because at a valuation of $300 billion, OpenAI can no longer be acquired, because nobody has that much money and, let’s be real, nobody actually believes OpenAI is worth that much. The only way to prove that anybody does is to take OpenAI public, and that will be impossible if it cannot convert. And, ironically, SoftBank’s large and late-stage participation makes any exit harder, as early investors will see their holdings diluted as a percentage of total equity — or whatever the hell we’re calling it. While a normal company could just issue equity and deal with the dilution that way, OpenAI’s structure necessitates a negotiation where companies can obstruct the entire process if they see fit.

Speaking of companies that might obstruct that transition, let’s talk about Microsoft. As I asked in my premium newsletter a few weeks ago, what if Microsoft doesn’t want OpenAI to convert?
It owns all the IP, it owns access to all of OpenAI’s research, and it already runs most of OpenAI’s infrastructure. While, in a best-case scenario, Microsoft would end up owning a massive chunk of the biggest tech startup of all time (I’m talking about equity, not OpenAI’s current profit-sharing units), it might also believe that it stands to gain more by letting OpenAI die and assuming its role in the AI ecosystem. Embrace. Extend. Extinguish.

But let’s assume it converts, and OpenAI now…has to continue raising money at a rate that will, allegedly, require it to raise only $17 billion in 2027. That number doesn’t make sense, considering OpenAI already had to bring forward its $8.3 billion fundraise by at least three months, but let’s stick with that idea. OpenAI believes it will be profitable, somehow, by 2030, and even if we accept that, it means it intends to burn over a hundred billion dollars to get there. Is the plan to take OpenAI public, dumping a toxic asset onto the public markets, only to let it flounder and convulse and die for all to see? Can you imagine OpenAI’s S-1? How well do you think this company would handle a true financial audit from a major accounting firm?

If you want to know what that looks like, google “WeWork,” which went from tech industry darling to joke in a matter of days, in part because it was forced to disclose on its S-1 how bad things actually were. No, really, read this article.

With that in mind, I feel similarly about Anthropic. Nobody is buying this company at $170 billion, and thus the only way to access liquidity would be to take it public, and show the world how a company that made $72 million in January 2025, and then more than $400 million in July 2025, also loses $3 billion or more after revenue, and then let the market decide on its fair price.
The arguments against my work always come down to “costs will go down” and “these products will become essential.” Outside of ChatGPT, there’s really no proof that these products are remotely essential, and I’d argue there’s very little about ChatGPT that Microsoft couldn’t provide, with rate limits, via Copilot. I’d also argue that “essential” is a very subjective term. Essential in the sense that some people use it as search doesn’t mean that it’s useful for enterprises, or for the majority of people.

And, I guess, ChatGPT somehow makes $1 billion a month in revenue selling access to premium versions of ChatGPT — though I’m not 100% sure how. Assume it has 20 million customers paying $20 a month: that’s $400 million a month. Then assume 5 million business customers pay an average of $100 each: that’s another $500 million, for $900 million a month…and is that average really that good? Are that many people paying $35 a month, or $50, or $200? OpenAI doesn’t break out the actual revenues behind these numbers for a reason, and I believe that reason is “they don’t look as good.” What’s OpenAI’s churn like? And does it really, as I wrote last week, end the year making more than Spotify, at $1.5 billion a month? We don’t know, and OpenAI (much like Anthropic) has never shared actual revenues, choosing instead to leak to the media and hope to obfuscate the actual amounts of money being spent on its services.

Anyway, long story short: these companies are unprofitable with no end in sight, don’t even make that much money in most cases, are valued at more than anybody would ever buy them for, do not have much in the way of valuable intellectual property, and the two biggest players burn billions of dollars more than they make. Even if some sort of government bailout were going to happen — it will not! — who would they give the money to, and for how long? Would they give it to all the startups? Is every startup going to get a Paycheck Protection Program, but for generative AI?
How would that play out in rural red districts (where big tech has never been popular), which are being hit with both massive cuts to welfare and the shockwaves of a trade war that has made American agricultural exports (like feedstocks, which previously went to China by the shipload) less appealing worldwide?

So they bail out OpenAI, then stuff it full of government contracts to the tune of $15 billion a year, right? Sorry, just to be clear: that’s the low end of what this would take, and they’d have to keep doing it forever, until Sam Altman can build enough data centers to…keep burning billions, because there’s no actual plan to make this profitable. Say this happens. Now what? America has a bullshit generative AI company attached to the state that doesn’t really innovate and doesn’t really matter in any meaningful way, except that it owns a bunch of data centers?

I don’t think this happens! I think this is a silly idea, and the most likely outcome would be that Microsoft unhinges its jaw and swallows OpenAI and its customers whole. Hey, did you know that Microsoft’s data center construction is down year-over-year, and it’s basically signed no new data center leases? I wonder why it isn’t building these new data centers for OpenAI? Who knows.

Stargate isn’t saving it, either. As I wrote previously, Stargate doesn’t actually exist beyond the media hype it generated. And yes, OpenAI is offering ChatGPT at $1 for a year to US government workers, and I cannot express how little this means, other than that OpenAI is horribly desperate. This product doesn’t do enough to make it essential, and this fire sale doesn’t change anything. Anyway, does the government do this for everybody? Because everyone else is gonna need it, given that none of these companies can go public while they all suffer from the burden of generative AI. And, if the government does it, will it also subsidize the compute of for-profit companies like Cursor? To what end?
Where is the limit? I think this is a question that we have to seriously consider at this point, because its ramifications are significant. If I’m honest, I think the future of LLMs is client-side, on egregiously-expensive personal setups for enthusiasts, and in a handful of niche enterprise roles. Large Language Models do not scale profitably, and their functionality is not significant enough to justify the costs of running them. By applying old economics — the idea that you would pay a monthly fee for relatively-unlimited access — companies like OpenAI and Anthropic immediately trained users to use their products in a way that was antithetical to their costs. Then again, had these models been served in a way that was mindful of their costs, there would likely have been no way to even get this far.

If OpenAI is making a billion dollars a month, it is possibly losing that much (or more) after revenue, and that’s the money it can get selling the product in a form that can never turn profitable. If OpenAI charged in line with its actual costs, would it even be able to justify a freely-available version of ChatGPT, outside of a few free requests? The revenue you see today is what people are willing to pay for a product that loses money, and I cannot imagine they would pay as much if the companies in question charged what it costs them. If I’m wrong, Cursor will be just fine, and that’s assuming that Cursor’s current hobbled form is even profitable, which it has not said it is.

So, you’ve got an entire industry of companies that struggle to do anything other than lose a lot of money. Great. And now we have a massive, expansive data center buildout, the likes of which we’ve never seen, all to capture demand for a product that nobody makes much money selling. This, naturally, leads to an important question: how do the people building data centers actually make money?
Last week, the Wall Street Journal published one of the more worrying facts I’ve seen in the last two years: the massive buildout of data centers — and the associated physical gear, like chips, servers, and the raw materials for building them — has become a massive, dominant economic force…building capacity for an industry that is yet to prove it can make real revenue. And no, Microsoft talking about its Azure revenue in its last quarterly earnings for the first time is not the same thing, as it stopped explicitly stating its AI revenue in January (when it was $13 billion annualized).

Anyway, AI capex allegedly — though I have some questions about this figure! — accounted for 1.2% of US GDP in the first half of the year, and for more than half of the (to quote the Wall Street Journal) “already-sluggish” 1.2% growth rate of the US economy. Another Wall Street Journal piece published a few days later discussed how data center development is souring free cash flow for big tech, turning these companies from the kind of “asset-light” businesses that the markets love into entities burdened by physical real estate and its associated costs.

These numbers are all very scary, and I mean that sincerely, but they also fail to express why. How much was actually spent on AI capex in the US? One would think two different articles on this subject would include that number, rather than a single quarter’s worth, but from my estimates, I expect capital expenditures from the Magnificent Seven alone to crest $200 billion in the first half of 2025, with Axios estimating they’d spend around $400 billion this year.

Most articles are drafting off of a blog from Paul Kedrosky, who estimates total AI capex will be somewhere in the region of $520 billion for the year, which felt conservative to me, so I did the smart thing and asked him.
Kedrosky noted that these numbers focus entirely on the four big spenders — Microsoft, Google, Meta and Amazon — and on his own estimate of $312 billion in capex, and that the 1.2% number came from the assumption that US GDP in 2025 will be around $28 trillion (which, I’d add, is significantly lower than other forecasts, which put it closer to $30 trillion). Kedrosky, in his own words, was trying to be conservative, using public data and then building his analysis from there. I, personally, believe his estimate is too conservative, because it doesn’t factor in the capital expenditures from Oracle, which (along with Crusoe) is building the vast Abilene, Texas data center for OpenAI, or from any of the private data center developers sinking cash into AI capex.

When I asked him to elaborate, he estimated that “...AI spend, all-in, was around half of 3.0% Q2 real GDP growth, so 2-3x the lower bound, given multipliers, debt, etc. it could be half of US GDP full-year GDP growth.” That’s so cool! Half of the US economy’s growth came from building data centers for generative AI, an industry with combined revenue a little higher than that of the fucking smart watch industry in 2024.

Another troubling point is that big tech doesn’t just buy data centers and then use them. In many cases, a big tech firm pays a construction company to build a data center, fills it with GPUs, and then leases it from a company that runs it, meaning the tech firm doesn’t have to personally staff and maintain it. This creates an economic boom for construction companies in the short term, as well as lucrative contracts for ongoing support…as long as the company in question still wants them. While Microsoft or Amazon might use a data center and, indeed, act as if it owns it, ultimately somebody else is holding the bag and the ultimate responsibility for the data centers.
One such company is QTS, a data center developer that leases to both Amazon and Meta according to the New York Times, and which was acquired by Blackstone in 2021 for $10 billion. Since then, Blackstone has used commercial mortgage-backed securities — I know! — to raise over $8.7 billion to sink into QTS’ expansion, and as of mid-July said it’d be investing $25 billion in AI data centers and energy. Blackstone, according to the New York Times, sees “strong demand from tech companies,” who are apparently “willing to sign what they describe as airtight leases for 15 to 20 years to rent out data center space.”

Yet the Times also names another problem: the “unanswered question” of how these private equity firms actually exit these situations. Blackstone, KKR, and other asset management firms do not buy companies with the intention of syphoning off revenue forever, but to pump them up and sell them to another company. Much like with AI startups, it isn’t obvious who would buy QTS at what I imagine would be a $25 billion or $30 billion valuation, meaning that Blackstone would have to take it public. Similarly, KKR’s supposed $50 billion partnership with investment firm Energy Capital Partners to build data centers and their associated utilities does not appear to have much of an exit plan either.

And let’s not forget Oracle, OpenAI, and Crusoe’s abominable mess in Abilene, Texas, where Oracle is paying for the $40 billion of GPUs and Crusoe is spending $15 billion raised from Blue Owl Capital and Primary Digital Infrastructure to build data centers for OpenAI, a company that loses billions of dollars a year. Why? So that OpenAI can, allegedly starting in 2028, pay Oracle $30 billion a year for compute. And yes, I am being fully serious. To be clear, OpenAI, by my estimates, has only made around $5.26 billion this year (and will have trouble hitting its $12.7 billion revenue projection for 2025), and will likely lose more than $10 billion to do so.
Oracle will, according to The Information, owe Crusoe $1 billion in payments across the 15-year span of its lease. How does Crusoe afford to pay back its $15 billion in loans? Beats me! The Information says it’s raising $1 billion to “take on cloud giants” by “earning construction management fees and rent, and it can sell its stake in the project upon reaching certain completion milestones,” while also building its own AI compute, on the assumption that the demand is there outside of hyperscalers.

Then there’s CoreWeave, my least-favourite company in the world. As I discussed a few months ago, CoreWeave is burdened by obscene debt and a horrifying cash burn, and has seen its stock fall from a high of $183 on June 20, 2025 to around $111 as of writing this sentence, which has led its all-stock attempt to acquire developer Core Scientific for $9 billion to start falling apart as shareholders balk at the worrisome drop in CoreWeave’s stock price. CoreWeave has, since going public, had to borrow billions of dollars to fund the obscene capital expenditures needed for the upcoming October 2025 start date of OpenAI’s $11.9 billion, 5-year-long deal for compute, which is also when CoreWeave must start paying off its largest loan. CoreWeave lost $314 million in its last earnings, and I see no path to profitability or, honestly, to its ability to keep doing business if the market sours.

CoreWeave, I’d add, is pretty much reliant on Microsoft as its primary customer. While this relationship has been fairly smooth (so far, and as far as we know), this dependence also presents an existential threat to CoreWeave, and is part of the reason why I’m so pessimistic about its survival. Microsoft has its own infrastructure, and has every incentive to cut out middlemen when it’s able to meet demand with supply it itself owns (or leases, rather than subcontracts out), simply because middlemen add costs and shrink margins. If Microsoft walks, what’s left?
How does it service its ongoing obligations, and its mountain of debt? In all of these cases, data center developers seem to have very few options for making actual money. We have companies spending billions of dollars to vastly expand their data center footprints, but very little evidence that doing so results in revenue, let alone some sort of payoff, and similarly, the actual capital expenditures they’re making are…much smaller than those of big tech.

Digital Realty Trust — one of the largest developers, with over 300 data centers worldwide and $5.55 billion in revenue in 2024 — only spent $3.5 billion in capex last quarter, and Equinix ($8.7 billion revenue in 2024), which has 270 of them, put capex at $3.5 billion too. NTT Global Data Centers, which has over 160 data centers, has dedicated $10 billion in capital expenditures “through 2027” to build out data centers.

Yet in many of these cases, that’s because these companies are — to quote a source of mine — “functionally obsolete for this cycle,” because legacy data centers are not plug-and-play ready for GPUs to slot into. Any investment in capex by these companies would have to cover both GPUs and either building new data centers or retrofitting old ones (basically ripping their insides out). This means that the money flowing into AI data centers is predominantly going to neoclouds like CoreWeave and Crusoe, and all seems to flow back to private equity firms that never thought about where the cashout might be. Blackstone led CoreWeave’s $7.5 billion loan with Magnetar Capital, and Crusoe signed a deal a week ago with Blackstone-owned infrastructure firm Tallgrass to build a data center in Wyoming, all of which seems very good for Blackstone unless you ask “how does it actually make money here,” as private equity firms do not generally like to hold assets longer than five years. Even then, these developers’ capital expenditures are a drop in the bucket in the grand scheme of things.
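To put rough numbers on that “drop in the bucket,” here’s a quick tally of the 2025 developer capex estimates used in this piece; every figure is an estimate or an annualized extrapolation, not an audited number.

```python
# Rough 2025 AI capex estimates for the big independent data center
# developers, in billions of dollars. All figures are the estimates
# and annualized extrapolations discussed in this piece, not audited.
developer_capex_bn = {
    "Crusoe": 4.0,                    # The Information's burn estimate
    "CoreWeave": 20.0,                # high-end spend estimate
    "Digital Realty Trust": 14.0,     # last quarter's $3.5B, annualized
    "NTT Global Data Centers": 3.33,  # $10B "through 2027", spread over ~3 years
    "Equinix": 14.0,                  # last quarter's $3.5B, annualized
}

developer_total = sum(developer_capex_bn.values())  # roughly $55B for all of 2025
big_tech_quarter = 102.0  # Meta, Alphabet, Microsoft, Amazon: one quarter

print(f"Developers, full year 2025: ${developer_total:.2f}B")
print(f"Big tech, single quarter:   ${big_tech_quarter:.0f}B")
```

A full year of spending by the biggest independent developers is about half of what big tech spends in a single quarter.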
Assuming Crusoe burns, as The Information suggests it will, as much as $4 billion in 2025, CoreWeave spends as much as $20 billion, Digital Realty Trust spends $14 billion, NTT Global Data Centers spends $3.33 billion (that’s $10bn over three years), and Equinix spends $14 billion, that’s $55.33 billion in AI capex spent in 2025 by the largest developers of data centers in the world. For some context, as discussed above, $102 billion was spent by Meta, Alphabet, Microsoft and Amazon in the last quarter alone. Private equity may ultimately face the same problem as many AI startups: there is no clear exit strategy for these investments. In the absence of real liquidity, firms will likely resort to all manner of financial engineering (read: bullshit) — marking up portfolio companies using internally generated valuations, charging fees on those inflated marks, and using those marks to entice new commitments from limited partners. Compounding this is their ability to lend increasing amounts of capital to their own portfolio companies via affiliated private credit vehicles — effectively recycling capital and pushing valuation risk further down the line. This kind of self-reinforcing leverage loop is particularly opaque in private credit, which now underpins much of the AI infrastructure buildout. The complexity of these arrangements makes it hard to anticipate the full economic fallout if the cycle breaks down, but the systemic risk is building. In any case, the supposed “AI capex boom” that is driving the US economy is not, as reported, driven by massive interest in building out AI infrastructure for a variety of customers. The reality is simple: the majority of all AI capex comes from big tech, which is a massive systemic weakness in our economy. While some might say that “AI capex” has swallowed the US economy, I think it’s more appropriate to say that Big Tech Capex Has Swallowed The US Economy. 
I also want to be clear that the economy — which is the overall state of the country’s production and consumption of stuff, and the flow of money between participants in said economy — and the markets (as in the stock market) are very different things, but the calculations from Kedrosky and others have now allowed us to see where one might hit the other. You see, the markets do not actually represent reality. While Microsoft, Amazon, Google, and Meta might want you to think there’s a ton of money in AI, their growth is mostly from selling further iterations and contracts for their existing stuff, or, in the case of Meta, further increasing its ad revenue. The economy is where things are actually bought and sold, representing the economic effects of both the things happening to build out AI and selling access to services and the AI models themselves. I recognize this is simplistic, but I am laying it out for a reason. As I discussed at length in the Hater’s Guide to the AI Bubble, NVIDIA is the weak point in the stock market, representing roughly 19% of the value of the Magnificent 7, which in turn makes up about 35% of the value of the US stock market. The Magnificent Seven stocks have seen a huge boom through their own growth, which has been mistakenly attributed to revenue from AI, which, as I laid out previously, is about $35 to $40 billion over the last two years. Nevertheless, the markets can continue to be irrational because all they care about is “number going up,” as the “value” of a stock is oftentimes disconnected from the value of the company itself, instead associated with its propensity for growth. GDP and other measurements of the economy aren’t really something you can fudge quite as easily (at least, in transparent, democratic societies), nor can you say a bunch of fancy words to make people feel better in the event that growth stalls or declines.  
This leads me to my principal worry: that “AI capex” is actually a term for the expenditures of four companies, namely Microsoft, Amazon, Google and Meta, with NVIDIA’s GPU sales being part of that capex too. While we can include others like Oracle, Musk’s xAI, and various neoclouds like CoreWeave and Crusoe — which, according to D.A. Davidson’s Gil Luria, will account for about 10% of NVIDIA’s GPU sales in 2025 — the reality is that whatever economic force is being driven by “AI investment” is really just four companies building and leasing data centers to burn on generative AI, a product that makes a relatively small amount of money before losing a great deal more. 42% of NVIDIA’s revenue comes from the Magnificent Seven (per Laura Bratton at Yahoo Finance), which naturally means that big tech is the linchpin of investment in data centers. I’ll put it far more simply: if AI capex represents such a large part of our GDP and economic growth, our economy does, on some level, rest on the back of Microsoft, Google, Meta and Amazon and their continued investment in AI. What should worry everybody is that Microsoft — which makes up 18.9% of NVIDIA’s revenue — has signed basically no leases in the last 12 months, and its committed data center construction and land purchases are down year-over-year. While its capex may not have dipped yet (in part because the chip-heavy nature of generative AI means that capex isn’t exclusively dominated by property), it’s now obvious that if it does, there will be direct effects on both the US economy and stock market, as Microsoft is part of what amounts to a stimulus package propping up America’s economic growth. And not to repeat the point too much, but big tech has yet to actually turn anything resembling a profit on these data centers, and isn’t making much revenue at all out of generative AI. How, exactly, does this end? What is the plan here? 
Is big tech going to spend hundreds of billions a year in capital expenditures on generative AI in perpetuity? Will they continue to buy more and more NVIDIA chips as they do so? At some point, surely these companies have built enough data centers? Surely, at some point, they’ll run out of space to put these GPUs in? Is the plan to, by then, make so much money from AI that it won’t matter? What does NVIDIA do at that point? And how does the US economy rebound from the loss of activity that follows? As I’ve said again and again, the generative AI bubble is, and always has been, fundamentally irrational, and inherently gothic, playing in the ruins, patterns and pathways of previous tech booms despite this one having little or no resemblance to them. Though the tech industry loves to talk about building a glorious future, its present is one steeped in rituals of decay and death, where the virtues of value creation and productivity take a backseat to burning billions and lying to the public again and again and again. The way in which the media has participated in these lies is disgusting. Venture capital, still drunk off the fumes of 2021, keeps running the old playbook: shove as much money into a company as possible in the hopes you can dump it onto an acquirer or the public markets, only to get high on its own supply, pushing valuations to the point that there is no possible liquidity event for the majority of big private AI companies, a result of their overstuffed valuations, burdensome business models and lack of any real intellectual property. And, like the rest of the AI bubble, Silicon Valley’s only liquidity path out of the bubble is big tech itself. 
Without Google, Character.ai and Windsurf’s founders would likely have been left for dead, and the same goes for Inflection, and I’d even argue Scale AI, whose $14.3 billion “investment” from Meta effectively decapitated the company, removing its CEO Alexandr Wang and leaving the rest of the company to die — it laid off 14% of its staff and 500 contractors mere weeks after its CEO and investors cashed in. In fact, generative AI is turning out to be a fever dream entirely made up by big tech. OpenAI would be dead if it wasn’t for the massive infrastructure provided by Microsoft at cost in return for rights to its IP, research, and the ability to sell its models, on top of the tens of billions of dollars of venture capital thrown into its billion-dollar cash incinerator. Anthropic would be dead if both Google and Amazon — the latter of which provides much of its infrastructure — hadn’t invested billions in keeping it alive so that it can burn $3 billion or more in 2025 while fucking over its enterprise customers and rate limiting the rest. The generative AI industry is, at its core, unnatural. It does not make significant revenue compared to its unbelievable costs, nor does it have much revenue potential. It requires, unlike just about every software revolution, an unbelievable amount of physical infrastructure to run, and because nobody but big tech can afford to build the infrastructure necessary, it creates very little opportunity for competition or efficiency. As the markets are in the throes of the growth-at-all-costs Rot Economy, they have failed to keep big tech in line, conflating big tech’s ability to grow with growth driven by their capital expenditures. Sensible, reasonable markets would notice the decay of free cash flow or the ridiculousness of big tech’s capex bonanza, but instead they clap and squeal every time Satya Nadella jingles his keys. What is missing is any real value generation. 
Again, I tell you, put aside any feelings you may have about generative AI itself, and focus on the actual economic results of this bubble. How much revenue is there? Why is there no profit? Why are there no exits? Why does big tech, which has sunk hundreds of billions of dollars into generative AI, not talk about the revenues they’re making? Why, for three years straight, have we been asked to “just wait and see,” and for how long are we going to have to wait to see it? What’s incredible is that the inherently compute-intensive nature of generative AI basically requires the construction of these facilities, with no guarantee that they contribute to the revenues of the companies that operate the models (like Anthropic or OpenAI, or any other business that builds upon them). As the models get more complex and hungry, more data centers get built — which hyperscalers book as long-term revenue, even though it’s either subsidised by said hyperscalers, or funded by VC money. This, in turn, stimulates even more capex spending, all without anyone having to answer basic questions about longevity or market fit. Yet the worst part of this financial farce is that we’ve now got a built-in economic breaking point in the capex from AI. At some point capex has to slow — if not because of the lack of revenues or the massive costs associated, then because we live in a world with finite space — and when that slowdown happens, so will purchases of NVIDIA GPUs, which will in turn, as Kedrosky and others have shown, slow America’s economic growth. And that growth is pretty much based on the whims of four companies, which is an incredibly risky and scary proposition. I haven’t even dug into the wealth of private credit deals that underpin buildouts for private AI “neoclouds” like CoreWeave, Crusoe, Nebius, and Lambda, in part because their economic significance is so much smaller than big tech’s ugly, meaningless sprawl.  
To quote Kedrosky: You can’t bail this out, because there is nothing to bail out. Microsoft, Meta, Amazon and Google have plenty of money and have proven they can spend it. NVIDIA is already doing everything it can to justify people spending more on its GPUs. There’s little more it can do here other than soak up the growth before the party ends. That capex reduction will bring with it a reduction in expenditures on NVIDIA GPUs, which will take a chunk out of the US stock market. Although the stock market isn’t the economy, the two things are inherently linked, and the popping of the AI bubble will have downstream ramifications for the wider economy, just as the dot-com bubble did. Expect to see an acceleration in layoffs and offshoring, in part driven by a need for tech companies to show — for the first time in living memory — fiscal restraint. For cities where tech is a major sector of the economy — think Seattle and San Francisco — there’ll be knock-on effects for the companies and individuals that support the tech sector (like restaurants, construction companies building apartments, Uber drivers, and so on). We’ll see a drying-up of VC funding. Pension funds will take a hit — which will affect how much people have to spend in retirement. It’ll be grim. Worse than that is the fact that these data centers will be, by definition, non-performing assets, ones that inflict an opportunity cost that’ll be almost impossible to calculate. While a house, once built and sold, technically falls into that category (it doesn’t add to any economic productivity), people at least need somewhere to live. Shelter is an essential component of life. You can live without a data center the size of Manhattan. What would have happened if companies like Microsoft and Meta instead spent the money on things that actually drove productivity, or created a valuable competitive business that drove economic activity? 
Hell, even if they just gave everyone a 10% raise, it would have likely been better for the economy than this, if we’re factoring in things like consumer spending. It’s just waste. Profligate, pointless waste. In summary, we’re already facing the prospect of a recession, and though I am not an economist, I can imagine it being a particularly nasty one given that the Magnificent Seven accounted for 47.87% of the Russell 1000 Index’s returns in 2024. Even if big tech somehow makes this crap profitable, it’s hard to imagine that they’ll counterbalance any capex reduction with revenue, because there doesn’t seem to be that much revenue in generative AI to begin with. This is what happens when you allow the Rot Economy to run wild, building the stock market and tech industry on growth over everything else. This is what happens when the tech media repeatedly fails to hold the powerful to account, catering to their narratives and making excuses for their abominable, billion-dollar losses and mediocre, questionably useful products. Waffle on all you want about the so-called “agentic era” or “annualized revenues” that make you hot under the collar — I see no reason for celebration about an industry with no exit plans and needless capital expenditures that appear to be one of the few things keeping the American economy growing. I have been writing about the tech industry’s obsession with generative AI for two years, and never have I felt more grim. Before, this was an economic uncertainty — a way that our markets might contract, that big tech might take a big haircut, that a bunch of money might be wasted but otherwise the world would keep turning. It feels as if everything is aligning for disaster, and I fear there’s nothing that can be done to avert it.


How Much Money Do OpenAI And Anthropic Actually Make?

Hello and welcome to the latest premium edition of Where's Your Ed At, I appreciate any and all of your subscriptions. I work very hard on these, and they help pay for the costs of running Ghost and, well, my time investigating different things. If you're on the fence, subscribe! I promise it's worth it. I also want to give a HUGE thank you to Westin Lee , a writer who has written about business and the use of AI, who was the originator of the whole "what if we used ARR to work out what these people make?" idea. He's been a tremendous help, and I recommend you check out his work. If you're an avid reader of the business and tech media, you'd be forgiven for thinking that OpenAI has made (or will make) in excess of $10 billion this year, and Anthropic in excess of $4 billion. Why? Because both companies have intentionally reported or leaked their "annualized recurring revenue" – a month's revenue multiplied by 12. OpenAI leaked yesterday to The Information that it hit $12 billion in "annual recurring revenue" – suggesting that its July 2025 revenues were around $1 billion. The Information reported on July 1, 2025 that Anthropic's annual run rate was $4 billion – meaning that its revenue for the month of June 2025 was around $333 million. Then, yesterday, it reported that the run rate was up to $5 billion. These do not, however, mean that their previous months were this high, nor do they mean that they've "made" anything close to these numbers. Annualized recurring revenue is one of the most regularly-abused statistics in the startup world, and can mean everything from "[actual month]x12" to "[30 day period of revenue]x12," and in most cases it's a number that doesn't factor in churn. Some companies even move around the start dates for contracts as a means of gaming this number. ARR also doesn't factor the seasonality of revenue into the calculation. 
For example, you’d expect ChatGPT to have peaks and troughs that correspond with the academic year, with students cancelling their subscriptions during the summer break. If you use ARR, you’re essentially taking one month and treating it as representative of the entire calendar year, when it isn’t. These companies are sharing (or leaking) their annualized revenues for a few reasons: In any case, I want to be clear this is a standard metric in non-public Software-as-a-Service (SaaS) businesses. Nothing is inherently wrong with the metric, save for its use and what's being interpreted from it. Nevertheless, there has been a lot of reporting on both OpenAI and Anthropic's revenues that has created incredible confusion in the market that benefits both companies, making them seem far more successful than they really are, and giving them credit for revenue they are yet to book. Before I dive into this — and yes, before the premium break — I want to establish some facts. The intention of either reporting or leaking their annualized revenue numbers was to make you think that OpenAI would hit its projected $12.7 billion revenue number, and Anthropic would hit its "optimistic" $4 billion number, because those "annualized revenue" figures sure seem to have the word "annual" in them. Yet through a historical analysis of reported annual recurring revenue numbers over the past three years, I've found things to be a little less certain. You see, when a company reports its "annual recurring revenue," what it's actually telling you is how much it made in a month, and I've sat down and found every single god damn bit of reporting about these numbers, calculating (based on the compound growth necessary between the months of reported monthly revenue) how much these companies are actually making in cash. My analysis, while imperfect (as we lack the data for certain months), aligns closely enough with their projections that I'm confident in the estimates that follow. 
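The seasonality problem is easy to see with toy numbers. A minimal sketch in Python, using an entirely invented monthly revenue series (none of these figures are OpenAI's or Anthropic's — the point is the mechanics, not the amounts):

```python
# Annualized recurring revenue (ARR) is just one month's revenue times 12.
def arr(month_revenue: float) -> float:
    return month_revenue * 12

# A hypothetical monthly revenue series (in $M) for one calendar year,
# with a summer dip (e.g. students cancelling over the break).
monthly = [80, 85, 90, 95, 100, 90, 75, 70, 95, 105, 110, 120]

actual_annual = sum(monthly)   # what the company actually booked all year
peak_arr = arr(max(monthly))   # the "annualized" number a company would leak

print(actual_annual)  # 1115 ($M): real annual revenue
print(peak_arr)       # 1440 ($M): ARR taken at the best month
```

Same company, same year, and the leaked "annualized" figure comes out nearly 30% higher than what was actually made — without any dishonesty beyond choosing which month to multiply.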
OpenAI and Anthropic's previous projections were fairly accurate, though as I'll explain in this piece, I believe their new ones are egregious and ridiculous. More importantly, in all of these stories, there was only one time that these companies openly shared their revenues — when OpenAI shared its $10 billion run rate in May, though the July $12 billion ARR leak is likely intentional too. In fact, I believe both were an intentional attempt to mislead the general public into believing the company was more successful than it is. Based on my analysis, OpenAI made around $3.616 billion in revenue in 2024, and so far in 2025 has made, by my calculations, around $5.266 billion in revenue as of the end of July. This is also a slower growth rate than it's experienced so far in the year. Going from $5.5 billion in annualized revenue in December 2024 to $10 billion annualized in May 2025 was a compound monthly growth rate of around 12.7%. The "jump" from $10 billion ARR in May to $12 billion ARR in July works out to 9.54% a month. While I realize this may not seem like a big drop, every single penny counts, and percentage point shifts are worth hundreds of millions (if not billions) of dollars. OpenAI has been projected to make $12.7 billion in revenue in 2025. Making this number will be challenging, and will require OpenAI to grow by 14%, every single month, without fail. For OpenAI to hit this number will require it to make nearly $2 billion a month in revenue by the end of the year to account for the disparity with the earlier months in the year when it made far, far less. I also have serious suspicions about how much OpenAI actually made in May, June and July 2025.  
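Those compound growth rates can be checked directly. The monthly revenue implied by an ARR figure is ARR divided by 12, and the compound monthly growth rate between two ARR milestones n months apart is (end/start)^(1/n) − 1. A quick sketch using the milestones reported above:

```python
def cmgr(start_arr: float, end_arr: float, months: int) -> float:
    """Compound monthly growth rate between two ARR milestones."""
    return (end_arr / start_arr) ** (1 / months) - 1

# $5.5B ARR (December 2024) -> $10B ARR (May 2025): five months
print(round(cmgr(5.5, 10.0, 5) * 100, 1))   # 12.7 (% per month)

# $10B ARR (May 2025) -> $12B ARR (July 2025): two months
print(round(cmgr(10.0, 12.0, 2) * 100, 2))  # 9.54 (% per month)
```

This is why the drop matters: a few percentage points of monthly compounding, sustained over half a year, is the difference between hitting a $12.7 billion projection and missing it by a wide margin.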
While The Information reported OpenAI hit $12 billion in annualized revenue, they did so in an obtuse way: Yet the New York Times, mere days later, reported $13 billion annualized revenue: First and foremost, it’s incredibly fucking suspicious that two very different numbers were reported so close together, and even more so that the June 9 2025 announcement of OpenAI hitting $10 billion in annualized revenue was not, as I had originally believed, discussing the month of May 2025. This likely means that OpenAI is not using standard annualized revenue metrics — which would traditionally mean “the last month’s revenue multiplied by 12” — and is instead choosing “if all the monthly subscribers and contracts that are currently paying us on this day, June 9 2025, were to be multiplied by 12, we’d have $X annualized revenue.” This is astronomically fucking dodgy. For the sake of this analysis, I am assuming any announcement of annualized revenue refers to the previous month. So, for example, when OpenAI announced it hit $10 billion in annualized revenue, I am going to assume this is for the month of May 2025. This analysis is going to favour the companies in question. If OpenAI “hit $10 billion annualized” in or around June 9 2025, it likely means that its May revenues were lower than that. Similarly, OpenAI “hitting” $12 billion in annualized revenue (announced end of July 2025) — which I have factored into my analysis — is treated as the revenue it hit in July 2025. In reality, this likely credits it with more revenue than it deserves. If June’s annualized revenue was $10 billion, that means it made $833 million, rather than the $939 million I credit it with for the month. One cannot hit $12 billion AND $13 billion annualized in one month unless you are playing extremely silly games with the numbers in question, such as moving around when you start a 30-day period to artificially inflate things. 
In any case, my analysis puts OpenAI’s annualized revenue for August at around $13.145 billion — so in line with a “$13 billion annualized” figure. In any case, I am sticking with my analysis as it stands. However, the timing of these annualized revenue leaks now makes me doubt the veracity of their previous leaks, in the sense that there’s every chance that they too are either inflated or used in a deceptive manner. Based on these numbers, OpenAI's current monthly growth rate is around 9.54% — and at that pace, it will finish the year at around $11.89 billion in revenue. This is an impressive number, meaning it’d be making over $1.5 billion a month in revenue by December 2025 — but such an impressive number will be difficult to reach, and would mean it has something in the region of $18 billion in annualized revenue by the end of the year. I also question whether it can make it, and even if it does, how it could possibly afford to serve that revenue long-term. In Anthropic's case, I am extremely confident, based on its well-reported annualized revenues, that Anthropic has, through July 2025, made around $1.5 billion in revenue. This is, of course, assuming that its annualized revenue leaks are for calendar months, and if they're not, this number could actually be lower. This is not a question of opinion. Other than April, we have ARR for every single month of the year. Bloomberg is now reporting that Anthropic sees its revenue rate " maybe [going] to $9 billion annualized by year-end ," which, to use a technical term, is total bullshit, especially as this number was leaked while Anthropic is fundraising. In any case, I believe Anthropic can beat its base case estimates. It will almost certainly cross $2 billion in revenue, but I also believe that revenue growth is slowing for these companies, and the amount of cash we can credit them with actually making is decidedly more modest than "annualized revenue" would have you believe.
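The OpenAI year-end figures above follow mechanically from the compounding. A sketch under this piece's own estimates (roughly $5.266 billion booked January through July, July monthly revenue of about $1 billion implied by the $12 billion ARR leak, and 9.54% compound monthly growth):

```python
booked_through_july = 5.266     # $B: estimated revenue, Jan-July 2025
july_monthly = 12.0 / 12        # $B: monthly revenue implied by $12B ARR
growth = 0.0954                 # compound monthly growth rate (9.54%)

total = booked_through_july
month_rev = july_monthly
for _ in range(5):              # compound through August-December
    month_rev *= 1 + growth
    total += month_rev

print(round(total, 2))           # 11.89 ($B: projected full-year revenue)
print(round(month_rev * 12, 1))  # 18.9 ($B: annualized run rate by December)
```

Note what this means: even hitting "$18 billion annualized" by December leaves actual 2025 revenue at under $12 billion, because the annualized number only reflects the final, best month — exactly the gap this piece is about.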
