
Premium: The Hater's Guide to Private Equity

We have a global intelligence crisis, in that a lot of people are being really fucking stupid. As I discussed in this week’s free piece, alleged financial analyst Citrini Research put out a truly awful screed called the “2028 Global Intelligence Crisis” — a slop-filled scare-fiction written and framed with the authority of deeply-founded analysis, so much so that it caused a global selloff in stocks. At 7,000 words, you’d expect the piece to have some sort of argument or basis in reality, but what it actually says is that “AI will get so cheap that it will replace everything, and then most white collar people won’t have jobs, and then they won’t be able to pay their mortgages, also AI will cause private equity to collapse because AI will write all software.” This piece is written specifically to spook *and* ingratiate anyone involved in the financial markets with the idea that their investments are bad but investing in AI companies is good, and also that if they don't get behind whatever this piece is about (which is unclear!), they'll be subject to a horrifying future where the government creates a subsidy funded by a tax on AI inference (seriously). And, most damningly, its most important points about HOW this all happens are single sentences that read "and then AI becomes more powerful and cheaper too and runs on a device." Part of the argument is that AI agents will use cryptocurrency to replace Mastercard and Visa. It’s dogshit. I’m shocked that anybody took it seriously. The fact that this moved markets should suggest that we have a fundamentally flawed financial system — and here’s an annotated version with my own comments. This is the second time our markets have been thrown into the shitter based on AI booster hype. A mere week and a half ago, a software sell-off began because of the completely fanciful and imaginary idea that AI would now write all software.
I really want to be explicit here: AI does not threaten the majority of SaaS businesses, and those selling are jumping at ghost stories. If I understand them correctly, those dumping software stocks believe that AI will replace these businesses because people will be able to code their own software solutions. This is an intellectually bankrupt position, one that shows an alarming (and common) misunderstanding of very basic concepts. It is not just a matter of “enough prompts until it does this” — good (or even functional!) software engineering is technical, infrastructural, and philosophical, and the thing you are “automating” is not just the code that makes a thing run. Let's start with the simplest, least-technical way of putting it: even in the best-case scenario, you do not just type "Build Me A Salesforce Competitor" and watch it erupt, fully-formed, from your Terminal window. It is not capable of building it, but even if it were, the result would need to actually live on a cloud hosting platform, and have all manner of actual customer data entered into it. Building software is not writing code, hitting enter, and watching a website appear; it requires all manner of infrastructural work (such as "how does a customer access it in a consistent and reliable way," "how do I make sure that this can handle a lot of people at once," and "is it quick to access," with the more-complex database systems requiring entirely separate subscriptions just to keep them connecting). Software is a tremendous pain in the ass. You write code, then you have to make sure the code actually runs, and that code in some cases needs to run on specific hardware, and that hardware needs to be set up right, and some things are written in different languages, and those languages sometimes use more memory or less memory, and if you give them the wrong amounts or forget to close the door on something in your code, everything breaks, sometimes costing you money or introducing security vulnerabilities.
In any case, even for experienced, well-versed software engineers, maintaining software that involves any kind of customer data requires significant investment in compliance, including things like SOC 2 audits if the customer itself ever has to interact with the system, as well as massive investments in security. And yet, the myth that LLMs are an existential threat to existing software companies has taken root in the market, sending the share prices of the legacy incumbents tumbling. A great example would be SAP, down 10% in the last month. SAP makes ERP (Enterprise Resource Planning, which I wrote about in the Hater's Guide To Oracle) software, and has been affected by the sell-off. SAP is also a massive, complex, resource-intensive database-driven system that involves things like accounting, provisioning and HR, and is so heinously complex that you often have to pay SAP just to make it function (if you're lucky, it might even do so). If you were to build this kind of system yourself, even with "the magic of Claude Code" (which I will get to shortly), it would be an incredible technological, infrastructural and legal undertaking. Most software is like this. I’d say all software that people rely on is like this. I am begging you, pleading with you, to think about how much you trust the software that’s on every single thing you use, what you do when a piece of software stops working, and how you feel about the company behind it. If your money or personal information touches it, they’ve had to go through all sorts of shit that doesn’t involve the code to bring you that software. Any company of a reasonable size would likely be committing hundreds of thousands, if not millions, of dollars in legal and accounting fees to make sure it worked, engineers would have to be hired to maintain it, and you, as the sole customer of this massive ERP system, would have to build every single new feature and integration you want.
Then you'd have to keep it running, this massive thing that involves, in many cases, tons of personally identifiable information. You'd also need to make sure, without fail, that this system that involves money was aware of any and all currencies and how they fluctuate, because that is now your problem. Mess up that part and your system of record could massively over- or underestimate your revenue or inventory, which could destroy your business. If that happens, you won't have anyone to sue. When bugs happen, you'll have someone whose job it is to fix them, someone you can fire, but replacing them will mean finding a new person to fix the mess that another guy made. And then we get to the fact that building stuff with Claude Code is not that straightforward. Every example you've read about somebody being amazed by it involves a toy app or website that's very similar to the many open source projects or website templates that Anthropic trained its models on. Every single piece of SaaS anyone pays for covers both access to the product and a transfer of the inherent risk and chaos of running software that involves people or money. Claude Code does not actually build unique software. You can say "create me a CRM," but whatever CRM it pops out will not magically jump onto Amazon Web Services, nor will it magically be efficient, or functional, or compliant, or secure, nor will it be differentiated at all from, I assume, the open source or publicly-available SaaS it was trained on. You really still need engineers, if not more of them than you had before. It might tell you it's completely compliant and that it will run like a hot knife through butter — but LLMs don’t know anything, and you cannot be sure Claude is telling the truth as a result. Is your argument that you’d still have a team of engineers (so they know what the outputs mean), but they’d be working on replacing your SaaS subscription? You’re basically becoming a startup with none of the benefits.
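To make the currency point concrete, here is a minimal, hypothetical sketch (invented invoice amounts and exchange rates, nobody's real system) of how a home-grown system of record using a stale exchange rate misstates revenue:

```python
# Hypothetical sketch: a home-grown system of record that converts
# foreign invoices to USD. All figures and rates are invented.

invoices_eur = [120_000, 80_000, 200_000]  # EUR invoices for the quarter

stale_rate = 1.20    # EUR->USD rate hard-coded when the system was built
current_rate = 1.05  # the rate the money is actually worth this quarter

reported = sum(invoices_eur) * stale_rate   # what the books say
actual = sum(invoices_eur) * current_rate   # what actually lands in the bank

print(f"Reported revenue: ${reported:,.0f}")
print(f"Actual revenue:   ${actual:,.0f}")
print(f"Overstated by:    ${reported - actual:,.0f}")
```

Real systems deal with dozens of currencies, daily rate feeds, and historical rates for restatements, and absorbing that tedium (and liability) is part of what a SaaS subscription actually buys.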
To quote Nik Suresh, an incredibly well-credentialed and respected software engineer (author of I Will Fucking Piledrive You If You Mention AI Again), “...for some engineers, [Claude Code] is a great way to solve certain, tedious problems more quickly, and the responsible ones understand you have to read most of the output, which takes an appreciable fraction of the time it would take to write the code in many cases. Claude doesn't write terrible code all the time, it's actually good for many cases because many cases are boring. You just have to read all of it if you aren't a fucking moron because it periodically makes company-ending decisions.” Just so you know, “company-ending decisions” could start with your vibe-coded Stripe clone leaking user credit card numbers or social security numbers because you asked it to “just handle all the compliance stuff.” Even if you have very talented engineers, are those engineers talented in the specifics of, say, healthcare data or finance? They’re going to need to be, to make sure Claude doesn’t do anything stupid! So, despite all of this being very obvious, it’s clear that the markets and an alarming number of people in the media simply do not know what they are talking about. The “AI replaces software” story is literally “Anthropic has released a product and now the resulting industry is selling off,” such as when it launched a cybersecurity tool that could check for vulnerabilities (a product that has existed in some form for nearly a decade), causing a sell-off in cybersecurity stocks like CrowdStrike — you know, the one that had a faulty bit of code cause a global cybersecurity incident that lost the Fortune 500 billions, and led to Delta Air Lines suspending over 1,200 flights over six long days of disruption. There is no rational basis for anything about this sell-off other than that our financial media and markets do not appear to understand the very basic things about the stuff they invest in.
Software may seem complex, but (especially in these cases) it’s really quite simple: investors are conflating “an AI model can spit out code” with “an AI model can create the entire experience of what we know as ‘software,’ or is close enough that we have to start freaking out.” This is thanks to the intentionally-deceptive marketing peddled by Anthropic and validated by the media. In a piece from September 2025, Bloomberg reported that Claude Sonnet 4.5 could “code on its own for up to 30 hours straight,” a statement directly from Anthropic repeated by other outlets that added that it did so “on complex, multi-step tasks,” none of which were explained. The Verge, however, added that apparently Anthropic “coded a chat app akin to Slack or Teams,” and no, you can’t see it, or know anything about how much it costs or its functionality. Does it run? Is it useful? Does it work in any way? What does it look like? We have absolutely no proof this happened other than Anthropic saying it, but because the media repeated it, it’s now a fact. Perhaps it’s not a particularly novel statement, but it’s becoming kind of obvious that maybe the people with the money don’t actually know what they’re doing, which will eventually become a problem when they all invest in the wrong thing for the wrong reasons. SaaS (Software as a Service, which almost always refers to business software) stocks became a hot commodity because they were perpetual growth machines with giant sales teams that existed only to make numbers go up, leading to a flurry of investment based on the assumption that all numbers will always increase forever, and every market is as giant as we want it to be. Not profitable? No problem! You just had to show growth. It was easy to raise money because everybody saw a big, obvious path to liquidity, either from selling to a big firm or taking the company public… …in theory.
Per Victor Basta, between 2014 and 2017, the number of VC rounds in technology companies halved, with a much smaller drop in funding; a big part of this was the collapse in companies describing themselves as SaaS, which dropped by 40% in the same period. In a 2016 chat with VC David Yuan, Gainsight CEO Nick Mehta added that “the bar got higher and weights shifted in the public markets,” citing that profitability was now becoming more important to investors. Per Mehta, one savior had arrived — Private Equity, with Thoma Bravo buying Blue Coat Systems in 2011 for $1.3 billion (which had been backed by a Canadian teachers’ pension fund!), Vista Equity buying Tibco for $4.3 billion in 2014, and Permira Advisers (along with the Canada Pension Plan Investment Board) buying Informatica for $5.3 billion (with participation from both Salesforce and Microsoft) in 2015, 16 years after its first IPO. In each case, these companies were purchased using debt that was immediately dumped onto the acquired company’s balance sheet, a structure known as a leveraged buyout. In simple terms, you buy a company with money that the company you just bought has to pay off. The company in question also has to grow like gangbusters to keep up with both that debt and the private equity firm’s expectations. And instead of being an investor with a board seat who can yell at the CEO, it’s quite literally your company, and you can do whatever you want with (or to) it. Yuan added that the size of these deals made the acquisitions problematic, as did their debt-filled structures: Symantec would acquire Blue Coat for $4.65 billion in 2016, for just under a 4x return. Things were a little worse for Tibco. Vista Equity Partners tried to sell it in 2021 amid a surge of other M&A transactions, with the solution — never change, private equity!
— being to buy Citrix for $16.5 billion (a 30% premium on its stock price) and merge it with Tibco, magically fixing the problem of “what do we do with Tibco?” by hiding it inside another transaction. Informatica eventually had a $10 billion IPO in 2021, which was flat in its first day of trading, never really did more than hover around its IPO price, and then sold to Salesforce in 2025 at an equity value of $8 billion, which seems fine but not great until you realize that, with inflation, the $5.3 billion that Permira invested in 2015 was worth about $7.15 billion in 2025’s money. In every case, the assumption was very simple: these businesses would grow and own their entire industries, the PE firm would be the reason they did this (by taking them private and filling them full of debt while making egregious growth demands), and the meteoric growth of SaaS would continue in perpetuity. Yet the real year that broke things was 2021. As everybody returned to the real world, consumer and business spending skyrocketed, leading (per Bloomberg) to a massive surge in revenues that convinced private equity to shove even more cash and debt up the ass of SaaS: Bloomberg is a little nicer than I am, so they’re not just writing “deals were waved through because everybody assumed that software grows forever and nobody actually knew a thing about the technology or why it would grow so fast.” Unsurprisingly, this didn’t turn out to be true. Per The Information, PE firms invested in or bought 1,167 U.S. software companies for $202 billion, and usually hold investments for three to five years. Thankfully, they also included a chart to show how badly this went: 2021 was the year of overvaluation, and (per Jason Lemkin of SaaStr) 60% of unicorns (startups with $1 billion-plus valuations) hadn’t raised funds in years.
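As a sanity check on the Informatica arithmetic above, here's a quick back-of-the-envelope sketch. The purchase price, exit value, and inflation adjustment come from the figures in the text; the ten-year hold and the annualization are my own simple arithmetic, not disclosed deal terms:

```python
# Back-of-the-envelope math on the Informatica deal, using figures
# from the text. The ten-year annualization is an illustrative
# simplification, not actual fund accounting.

purchase_price = 5.3   # $B, the 2015 take-private
exit_value = 8.0       # $B, the 2025 Salesforce sale (equity value)

# Nominal multiple over the hold
nominal_multiple = exit_value / purchase_price

# Adjust for inflation: $5.3B of 2015 dollars is roughly $7.15B in 2025 dollars
real_multiple = exit_value / 7.15

# Rough annualized real return over a ten-year hold
annualized_real = real_multiple ** (1 / 10) - 1

print(f"nominal multiple: {nominal_multiple:.2f}x")
print(f"real multiple:    {real_multiple:.2f}x")
print(f"annualized real:  {annualized_real:.1%}")
```

On these numbers, a decade of private equity ownership barely beat inflation, which is the "seems fine but not great" in a nutshell.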
The massive accumulated overinvestment, combined with no obvious pathway to an exit, led to people calling these companies “Zombie Unicorns”: The problem, to quote The Information, is that “PE firms don’t want to lock in returns that are lower than what they promised their backers, say some executives at these firms,” and “many enterprise software firms’ revenue growth has slowed.” Per CNBC in November 2025, private equity firms were facing the same zombie problem: Per Jason Lemkin, private equity is sitting on its largest collection of companies held for longer than four years since 2012, with McKinsey estimating that more than 16,000 companies (more than 52% of the total buyout-backed inventory) had been held by private equity for more than four years, the highest on record. In very simple terms, there are hundreds of billions of dollars’ worth of tech companies sitting in the wings of private equity firms that they’re desperate to sell, with the only buyers being big tech firms, other private equity firms, and public offerings in one of the slowest IPO markets in history. Investing used to be easy. There were so many ideas for so many companies, companies that could be worth billions of dollars once they’d been fattened up with venture capital and/or private equity. There were tons of acquirers, it was easy to take them public, and all you really had to do was exist and provide capital. Companies didn’t have to be good, they just had to look good enough to sell. This created a venture capital and private equity industry based on symbolic value, one that chased out anyone who thought too hard about whether these companies could actually survive on their own merits. Per PitchBook, since 2022, 70% of VC-backed exits were valued at less than the capital put in, with more than a third of them in 2024 being startups buying other startups.
Private equity firms are now holding assets for an average of seven years. McKinsey also added one horrible detail for the overall private equity market, emphasis mine: You see, private equity is fucking stupid, doesn’t understand technology, doesn’t understand business, and by setting up its holdings with debt based on the assumption of unrealistic growth, it’s created a crisis for both software companies and the greater tech industry. On February 6, more than $17.7 billion of US tech company loans dropped to “distressed” trading levels (as in, trading as if traders don’t believe they’ll get paid, per Bloomberg), growing the overall group of distressed tech loans to $46.9 billion, “dominated by firms in SaaS.” These firms included huge investments like Thoma Bravo’s Dayforce (which it purchased for $12.3 billion two days before this story ran) and Calabrio (which it acquired for “over” $1 billion in April 2021 and merged with Verint in November 2025). This isn’t just about the shit they’ve bought, but the destruction of the concept of “value” in the tech industry writ large. “Value” was not based on revenues, or your product, or anything other than your ability to grow and, ideally, trap as many customers as possible, with the vague sense that there would always be infinitely more money every year to spend on software. Revenue growth came from massive sales teams compensated with heavy commissions and yearly price increases, except things have begun to sour, with renewals now taking twice as long to complete, and overall SaaS revenue growth slowing for years. To put it simply, much of the investment in software was based on the idea that software companies will always grow forever, and SaaS companies — which have “sticky” recurring revenues — would be the standard-bearer.
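The "sticky recurring revenue grows forever" assumption can be sketched in a few lines; the starting ARR and retention rates below are invented for illustration, not drawn from any real company:

```python
# Hypothetical sketch of why investors loved "sticky" recurring revenue:
# the same book of business compounds or decays depending on net
# retention. All numbers are invented for illustration.

def project_arr(starting_arr, net_retention, years):
    """Project annual recurring revenue under a flat net retention rate
    (expansion minus churn/contraction), with no new-customer sales."""
    arr = starting_arr
    for _ in range(years):
        arr *= net_retention
    return arr

boom = project_arr(100.0, 1.20, 5)   # 120% net retention, the bull case
bust = project_arr(100.0, 0.95, 5)   # 95% net retention, renewals souring

print(f"5-year ARR at 120% retention: ${boom:.0f}M")
print(f"5-year ARR at  95% retention: ${bust:.0f}M")
```

The entire thesis lived in that first line (a $100M book becoming roughly $249M in five years without signing a single new customer); the second line (the same book shrinking toward $77M) is what the zombie era looks like.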
When I got into the tech industry in 2008, I immediately became confused by the number of unprofitable or unsustainable companies that were worth crazy amounts of money, and for the most part I’d get laughed at by reporters for being too cynical. For the best part of 20 years, software startups have been seen as eternal growth-engines. All you had to do was find product-market fit, get a few hundred customers locked in, up-sell them on new features and grow in perpetuity as you conquered a market. The idea was that you could just keep pumping them with cash, hiring as many pre-sales (technical person who makes the sale), sales and customer experience (read: helpful person who also loves to tell you more stuff) people as you need, both to retain customers and to sell them as much stuff as possible. Innovation was, as you’d expect, judged entirely by revenue growth and net revenue retention: In principle, this sounds reasonable: what percentage of last year’s revenue from your existing customers are you keeping (and expanding) this year? The problem is that this is a very easy stat to game, especially if you’re using it to raise money, because you can move customer billing periods around to make sure that things all continue to look good. Even then, per research by Jacco van der Kooij and Dave Boyce, net revenue retention is dropping quarter over quarter. The other problem is that the entire process of selling software has become separated from the end-user, which means that products (and sales processes) are oriented around selling that software to the person responsible for buying it rather than those doomed to use it. In Nik Suresh’s Brainwash An Executive Today, he recounts a conversation with the Chief Technology Officer of a company with over 10,000 people, who asked whether “data observability” (a thing that they did not, and in their position would not need to, understand) was a problem, and whether Nik had heard of Monte Carlo.
It turned out that the executive in question had no idea what Monte Carlo or data observability was, but because they’d heard about it on LinkedIn, it was now all they could think about. This is the environment that private equity bought into — a seemingly-eternal growth engine with pliant customers desperate to spend money on a product that didn’t have to be good, just functional-enough. These people do not know what they are talking about or why they are buying these companies, other than being able to mumble out shit like “ARR” and “NRR” and “TAM” and “CAC” and “ARPA” in the right order to convince themselves that something is a good idea without ever thinking about what would happen if it wasn’t. This allowed them to stick to the “big picture,” meaning “numbers that I can look at rather than any practical experience in software development.” While I guess the concept of private equity isn’t morally repugnant, its current form — which includes venture capital — has led the modern state of technology into the fucking toilet, combining an initial influx of viable businesses with frothy markets and zero interest rates that made it deceptively easy to raise money to acquire and deploy capital, leading to brainless investing, the death of logical due diligence, and potentially ruinous consequences for everybody involved. Private equity spent decades buying a little bit of just about everything, enriching the already-rich by engaging with the most vile elements of the Rot Economy’s growth-at-all-costs mindset. Its success is predicated on near-perpetual levels of liquidity and growth in both its holdings and the holdings of those who exist only to buy their stock, and on a tech and business media that doesn’t think too hard about the reality of the problems their companies claim to solve. The reckoning that’s coming is one built specifically to target the ignorant hubris that made them rich.
Private equity has yet to be punished by its limited partners and banks for investing in zombie assets, allowing it to pile into the unprofitable data centers underpinning the AI bubble, meaning that companies like Apollo, Blue Owl and Blackstone — all of whom participated in the ugly $10.2 billion acquisition of Zendesk in 2022 (after it rejected another PE offer of $17 billion in 2021) that included $5 billion in debt — have all become heavily-leveraged in giant, ugly debt deals covering assets that will be obsolete or useless in a few years. Alongside the fumbling ignorance of private equity sits the $3 trillion private credit industry, an equally-putrid, growth-drunk, and poorly-informed industry run with the same lax attention to detail and Big Brain Number Models that can justify just about any investment they want. Their half-assed due diligence led to billions of dollars of loans being given to outright frauds like First Brands, Tricolor and PosiGen, and, to paraphrase JP Morgan’s Jamie Dimon, there are absolutely more fraudulent cockroaches waiting to emerge. You may wonder why this matters, as all of this is private credit. Well, they get their money from banks. Big banks. In fact, according to the Federal Reserve Bank of Boston, about 14% ($300 billion) of large banks’ total loan commitments to non-banking financial institutions in 2023 went to private equity and private credit, with Moody’s pegging the number at around $285 billion, plus an additional $340 billion in unused-yet-committed cash waiting in the wings. Oh, and they get their money from you. Pension funds are among the biggest backers of private credit companies, with the New York City Employees Retirement System and CalPERS increasing their investments. Today, I’m going to teach you all about private equity, private credit, and why years of reframing “value” to mean “growth” may genuinely threaten the global banking system, as well as the way effectively every company raises money.
An entirely-different system exists for the wealthy to raise and deploy capital, one with flimsy due diligence, a genuine lack of basic industrial knowledge, and hundreds of billions of dollars of crap it can’t sell.  These people have been able to raise near-unlimited capital to do basically anything they want because there was always somebody stupid enough to buy whatever they were selling, and they have absolutely no plan for what happens when their system stops working.  They’ll loan to anyone or invest in anything that confirms their biases, and those biases are equal parts moronic and malevolent. Now they’re investing teachers’ pensions and insurance premiums in unprofitable and unsustainable data centers, all because they have no idea what a good investment actually looks like.  Welcome to the Hater’s Guide To Private Equity, or “The Stupidest Assholes In The Room.”


On NVIDIA and Analyslop

Hey all! I’m going to start hammering out free pieces again after a brief hiatus, mostly because I found myself trying to boil the ocean with each one, fearing that if I regularly emailed you, you’d unsubscribe. I eventually realized how silly that was, so I’m back, and will be back more regularly. I’ll treat it like a column, which will be both easier to write and a lot more fun. As ever, if you like this piece and want to support my work, please subscribe to my premium newsletter. It’s $70 a year, or $7 a month, and in return you get a weekly newsletter that’s usually anywhere from 5,000 to 18,000 words, including vast, extremely detailed analyses of NVIDIA, Anthropic and OpenAI’s finances, and the AI bubble writ large. I am regularly several steps ahead in my coverage, and you get an absolute ton of value. In the bottom right-hand corner of your screen you’ll see a red circle — click that and select either monthly or annual. Next year I expect to expand to other areas too. It’ll be great. You’re gonna love it. Before we go any further, I want to remind everybody that I’m not a stock analyst, nor do I give investment advice. I do, however, want to say a few things about NVIDIA and its annual earnings report, which it published on Wednesday, February 25: NVIDIA’s entire future is built on the idea that hyperscalers will buy GPUs at increasingly-higher prices and at increasingly-higher rates every single year. It is completely reliant on maybe four or five companies being willing to shove tens of billions of dollars a quarter directly into Jensen Huang’s wallet. If anything changes here — such as difficulty acquiring debt or investor pressure cutting capex — NVIDIA is in real trouble, as it’s made over $95 billion in commitments to build out for the AI bubble. Yet the real gem was this part: Hell yeah dude!
After misleading everybody that it intended to invest $100 billion in OpenAI last year (as I warned everybody about months ago, the deal never existed and is now effectively dead), NVIDIA was allegedly “close” to investing $30 billion. One would think that NVIDIA would, after Huang awkwardly tried to claim that the $100 billion was “never a commitment,” say with its full chest how badly it wanted to support OpenAI and how intent it was on doing so. Especially when you have this note in your 10-K: What a peculiar world we live in. Apparently NVIDIA is “so close” to a “partnership agreement” too, though it’s important to remember that Altman, Brockman, and Huang went on CNBC to talk about the last deal and that never came together. All of this adds a little more anxiety to OpenAI's alleged $100 billion funding round in which, as The Information reports, Amazon's alleged $50 billion investment will actually be $15 billion, with the next $35 billion contingent on AGI or an IPO: And that $30 billion from NVIDIA is shaping up to be a Klarna-esque three-installment payment plan: A few thoughts: Anyway, on to the main event. New term: analyslop, when somebody writes a long, specious piece with few facts or actual statements, with the intention that it be read as thorough analysis. This week, alleged financial analyst Citrini Research (not to be confused with Andrew Left’s Citron Research) put out a truly awful piece called the “2028 Global Intelligence Crisis,” slop-filled scare-fiction written and framed with the authority of deeply-founded analysis, so much so that it caused a global selloff in stocks. This piece — if you haven’t read it, please do so using my annotated version — spends 7,000 or more words telling the dire tale of what would happen if AI made an indeterminately-large number of white collar workers redundant.
It isn’t clear what exactly AI does, who makes the AI, or how the AI works, just that it replaces people, and then bad stuff happens. Citrini insists that this “isn’t bear porn or AI-doomer fan-fiction,” but that’s exactly what it is — mediocre analyslop framed in the trappings of analysis, sold on a Substack with “research” in the title, specifically written to spook and ingratiate anyone involved in the financial markets. Its goal is to convince you that AI (non-specifically) is scary, that your current stocks are bad, and that AI stocks (unclear which ones those are, by the way) are the future. Also, find out more for $999 a year. Let me give you an example: The goal of a paragraph like this is for you to say “wow, that’s what GPUs are doing now!” It isn’t, of course. The majority of CEOs report little or no return on investment from AI, with a study of 6,000 CEOs across the US, UK, Germany and Australia finding that “more than 80% [detected] no discernable impact from AI on either employment or productivity.” Nevertheless, you read “GPU” and “North Dakota” and you think “wow! That’s a place I know, and I know that GPUs power AI!” I know a GPU cluster in North Dakota — CoreWeave’s cluster with Applied Digital, which has debt so severe that it loses both companies money even if they have the capacity rented out 24/7. But let’s not let facts get in the way of a poorly-written story. I don’t need to go line-by-line — mostly because I’ll end up writing a legally-actionable threat — but I need you to know that most of this piece’s arguments come down to magical thinking and utterly empty prose. For example, how does AI take over the entire economy? That’s right, they just get better. No need to discuss anything happening today. Even AI 2027 had the balls to make stuff up about “OpenBrain” or whatever. This piece literally just says stuff, including one particularly-egregious lie: This is a complete and utter lie. A bald-faced lie.
This is not something that Claude Code can do. The fact that we have major media outlets quoting this piece suggests that those responsible for explaining how things work don’t actually bother to do any of the work to find out, and it’s both a disgrace and an embarrassment for the tech and business media that these lies continue to be peddled. I’m now going to quote part of my upcoming premium piece (the Hater’s Guide To Private Equity, out Friday), because I think it’s time we talked about what Claude Code actually does. I’ve worked in or around SaaS since 2012, and I know the industry well. I may not be able to code, but I take the time to speak with software engineers so that I understand what things actually do and how “impressive” they are. Similarly, I make the effort to understand the underlying business models in a way that I’m not sure everybody else is trying to, and if I’m wrong, please show me an analysis of the financial condition of OpenAI or Anthropic from a booster. You won’t find one, because they’re not interested in interacting with reality. So, despite all of this being very obvious, it’s clear that the markets and an alarming number of people in the media simply do not know what they are talking about, or are intentionally avoiding thinking about it. The “AI replaces software” story is literally “Anthropic has released a product and now the resulting industry is selling off,” such as when it launched a cybersecurity tool that could check for vulnerabilities (a product that has existed in some form for nearly a decade), causing a sell-off in cybersecurity stocks like CrowdStrike — you know, the one that had a faulty bit of code cause a global cybersecurity incident that lost the Fortune 500 billions, and resulted in Delta Air Lines having to cancel over 1,200 flights over a period of several days.
There is no rational basis for this sell-off other than that our financial media and markets do not appear to understand the very basic things about the stuff they invest in. Software may seem complex, but (especially in these cases) it’s really quite simple: investors are conflating “an AI model can spit out code” with “an AI model can create the entire experience of what we know as ‘software,’ or is close enough that we have to start freaking out.” This is thanks to the intentionally-deceptive marketing peddled by Anthropic and validated by the media. In a piece from September 2025, Bloomberg reported that Claude Sonnet 4.5 could “code on its own for up to 30 hours straight,” a statement directly from Anthropic repeated by other outlets that added that it did so “on complex, multi-step tasks,” none of which were explained. The Verge, however, added that apparently Anthropic “coded a chat app akin to Slack or Teams,” and no, you can’t see it, or know anything about how much it cost or its functionality. Does it run? Is it useful? Does it work in any way? What does it look like? We have absolutely no proof this happened other than Anthropic saying it, but because the media repeated it, it’s now a fact.  As I discussed last week, Anthropic’s primary business model is deception, muddying the waters of what’s possible today and what might be possible tomorrow through a mixture of flimsy marketing statements and chief executive Dario Amodei’s doomerist lies about all white-collar labor disappearing.  Anthropic tells lies of obfuscation and omission.  Anthropic exploits bad journalism, ignorance and a lack of critical thinking. As I said earlier, the “wow, Claude Code!” articles are mostly from captured boosters and people that do not actually build software, amazed that it can burp up its training data and do an impression of software engineering.
And even if we believe the idea that Spotify’s best engineers are not writing any code, I have to ask: to what end? Is Spotify shipping more software? Is the software better? Are there more features? Are there fewer bugs? What are the engineers doing with the time they’re saving? A study from last year from METR found that, despite engineers expecting to be 24% faster, LLM coding tools actually made them 19% slower.  I also think we need to really think deeply about how, for the second time in a month, the markets and the media have had a miniature shitfit based on blogs that tell lies using fan fiction. As I covered in my annotations of Matt Shumer’s “Something Big Is Happening,” the people that are meant to tell the general public what’s happening in the world appear to be falling for ghost stories that confirm their biases or investment strategies, even if said stories are full of half-truths and outright lies. I am despairing a little. When I see Matt Shumer on CNN or hear from the head of a PE firm about Citrini Research, I begin to wonder whether everybody got where they were not through any actual work but by making the right noises.  This is the grifter economy, and the people that should be stopping them are asleep at the wheel. NVIDIA beat estimates and raised expectations, as it has quarter after quarter. People were initially excited, then started reading the 10-K and seeing weird little things that stood out. $68.1 billion in revenue is a lot of money! That’s what you should expect from a company that is the single vendor in the only thing anybody talks about.  Hyperscaler revenue accounted for slightly more than 50% of NVIDIA’s data center revenue. As I wrote about last year, NVIDIA’s diversified revenue — that’s the revenue that comes from companies that aren’t in the Magnificent 7 — continues to collapse.
While data center revenue was $62.3 billion, 50% of it ($31.15 billion) came from hyperscalers… and because we don’t get a 10-Q for the fourth quarter, we don’t get a breakdown of how many individual customers made up that quarter’s revenue. Boo! It is both peculiar and worrying that 36% (around $77.7 billion) of its $215.938 billion in FY2026 revenue came from two customers. If I had to guess, they’re likely Foxconn and Quanta Computer, two large Taiwanese ODMs (Original Design Manufacturers) that build the servers for most hyperscalers.  If you want to know more, I wrote a long premium piece that goes into it (among the ways in which AI is worse than the dot com bubble). In simple terms, when a hyperscaler buys GPUs, they go straight to one of these ODMs to be put into servers. This isn’t out of the ordinary, but I keep an eye on the ODM revenues (which are published every month) to see if anything shifts, as I think they’ll be one of the first signs that things are collapsing. NVIDIA’s inventories continue to grow, sitting at over $21 billion (up from around $19 billion last quarter). Could be normal! Could mean stuff isn’t shipping. NVIDIA has now agreed to $27 billion in multi-year cloud service agreements — literally renting its GPUs back from the people it sells them to — with $7 billion of that expected in its FY2027 (Q1 FY2027 will report in May 2026).  For some context, CoreWeave (which reports FY2025 earnings today, February 26) gave guidance last November that it expected its entire annual revenue to be between $5 billion and $5.15 billion. CoreWeave is arguably the largest AI compute vendor outside of the hyperscalers. If there was significant demand, none of this would be necessary.
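For the sake of checking the maths, the concentration figures above reduce to two multiplications (all numbers are the ones quoted in the text; nothing new is introduced here):

```python
# Arithmetic check on the customer-concentration figures quoted above.
total_fy2026_revenue = 215.938   # $B, NVIDIA FY2026 total revenue
two_customer_share = 0.36        # share attributed to two customers
two_customers = total_fy2026_revenue * two_customer_share  # ≈ $77.7B

dc_revenue = 62.3                 # $B, Q4 data center revenue
hyperscalers = dc_revenue * 0.50  # ≈ $31.15B from hyperscalers
```

Both products land on the figures in the text, so the quoted percentages and dollar amounts are at least internally consistent.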
NVIDIA “invested” $17.5bn in AI model makers and other early-stage AI startups, and made a further $3.5bn in land, power, and shell guarantees to “support the build-out of complex datacenter infrastructures.” In total, it spent $21bn propping up the ecosystem that, in turn, feeds billions of dollars into its coffers.  NVIDIA’s long-term supply and capacity obligations soared from $30.8bn to $95.2bn, largely because NVIDIA’s latest chips are extremely complex and require TSMC to make significant investments in hardware and facilities, and it’s unwilling to do that without receiving guarantees that it’ll make its money back.  NVIDIA expects these obligations to grow.  NVIDIA’s accounts receivable (as in goods that have been shipped but are yet to be paid for) now sits at $38.4 billion, of which 56% ($21.5 billion) is from three customers. This is turning into a very involved and convoluted process! It turns out that it's pretty difficult to actually raise $100 billion. This is a big problem, because OpenAI needs $655 billion in the next five years to pay all its bills, and loses billions of dollars a year. If OpenAI is struggling to raise $100 billion today, I don't see how it's possible it survives. If you're to believe reports, OpenAI made $13.1 billion in revenue in 2025 on $8 billion of losses, but remember, my own reporting from last year said that OpenAI only made around $4.329 billion through September 2025 with $8.67 billion of inference costs alone. It is kind of weird that nobody seems to acknowledge my reporting on this subject. I do not see how OpenAI survives.

- it coded for 30 hours [from which you are meant to intimate the code was useful or good and that these hours were productive]
- it made a Microsoft Teams competitor [that you are meant to assume was full-featured and functional like Teams or Slack, or… functional at all? And they didn’t even have to prove it by showing you it]
- It was able to write uninterruptedly [which you assume was because it was doing good work that didn’t need interruption]

bitonic's blog. 1 week ago

A vibe-coded alternative to YieldGimp

If you’re a UK tax resident, short-term low-coupon gilts are the most tax-efficient way to get savings-account-like returns, since most of their yield is tax free. This makes them very popular amongst retail investors, who now hold a large portion of the tradable low-coupon gilts. YieldGimp.com used to be a great free resource for evaluating the gilts currently available. However, it was recently turned into an app rather than a simple webpage. I’m not even sure if the app is free or paid, but I do not want to install the “YieldGimp platform” just to quickly check gilt metrics when I buy them. So I asked my LLM of choice to produce an alternative, and after a few minutes and a few rounds of prompting I had something that served my needs. It is available for use at mazzo.li/gilts/, and the source is on GitHub. It differs from YieldGimp in that it does not show metrics based on the current market price, but rather requires the user to input a price. I find this more useful anyway: gilts are somewhat illiquid on my broker, so I need to come up with a limit price myself, which means that I want to know the yield at my price rather than at the market price. It also lets you select a specific tax rate to produce a “gross equivalent” yield. It is not a very sophisticated tool and it doesn’t pretend to model gilts and their tax implications precisely (the repository’s README has more details on its shortcomings), but for most use cases it should be informative enough to sanity-check your trades without a Bloomberg terminal.
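To make the “gross equivalent” idea concrete, here is a minimal sketch of the underlying logic — not the calculation the tool itself uses. It assumes coupons are taxed as income, the capital gain to par at redemption is tax free for UK individuals, simple (non-compounding) annualisation, and it ignores accrued interest and day-count conventions; the function name is my own:

```python
# Illustrative sketch of a "gross equivalent" yield for a UK low-coupon gilt.
# Assumptions (stated in the lead-in): coupon income is taxed, the pull to
# par (100) at redemption is tax free, and yields are simple, not compounded.

def gross_equivalent_yield(price: float, coupon: float, years: float,
                           tax_rate: float) -> float:
    """price and coupon are per 100 nominal; coupon is the annual coupon."""
    income = coupon * years                      # taxable coupon income
    gain = 100.0 - price                         # tax-free capital gain to par
    net_return = income * (1 - tax_rate) + gain  # what you actually keep
    net_yield = net_return / price / years       # simple annualised net yield
    # the gross rate a fully-taxed savings account would need to match it
    return net_yield / (1 - tax_rate)

# e.g. a hypothetical gilt at 97.0 with a 0.25% coupon, one year to maturity,
# for a 40% marginal-rate taxpayer
y = gross_equivalent_yield(97.0, 0.25, 1.0, 0.40)
```

Because almost all of the return on a low-coupon gilt is the untaxed pull to par, the gross-equivalent yield for a higher-rate taxpayer ends up well above the raw yield, which is exactly why these instruments are popular with retail investors.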


Premium: The AI Data Center Financial Crisis

Since the beginning of 2023, big tech has spent over $814 billion in capital expenditures, with a large portion of that going towards meeting the demands of AI companies like OpenAI and Anthropic.  Big tech has spent big on GPUs, power infrastructure, and data center construction, using a variety of financing methods to do so, including (but not limited to) leasing. And the way they’re structuring these finance deals is growing increasingly bizarre.  I’m not merely talking about Meta’s curious arrangement for its facility in Louisiana, though that certainly raised some eyebrows. Last year, Morgan Stanley published a report that claimed hyperscalers were increasingly relying on finance leases to obtain the “powered shell” of a data center, rather than the more common method of operating leases.  The key difference here is that finance leases, unlike operating leases, are effectively long-term loans where the borrower is expected to retain ownership of the asset (whether that be a GPU or a building) at the end of the contract. Traditionally, these types of arrangements have been used to finance the bits of a data center that have a comparatively limited useful life — like computer hardware, which grows obsolete with time. The spending to date is, as I’ve written again and again, astronomical considering the lack of meaningful revenue from generative AI.  A year straight of manufacturing consent for Claude Code as the be-all and end-all of software development produced putrid results for Anthropic — $4.5 billion of revenue and $5.2 billion of losses before interest, taxes, depreciation and amortization, according to The Information — with (per WIRED) Claude Code accounting for only around $1.1 billion in annualized revenue in December, or around $92 million in monthly revenue.
This was in a year where Anthropic raised a total of $16.5 billion (with $13 billion of that coming in September 2025), and it’s already working on raising another $25 billion. This might be because it promised to buy $21 billion of Google TPUs from Broadcom, or because Anthropic expects its AI model training costs to exceed $100 billion over the next three years. And it just raised another $30 billion — albeit with the caveat that some of said $30 billion came from previously-announced funding agreements with Nvidia and Microsoft, though how much remains a mystery. According to Anthropic’s new funding announcement, Claude Code’s run rate has grown to “over $2.5 billion” as of February 12 2026 — or around $208 million a month. Based on literally every bit of reporting about Anthropic, costs have likely spiked along with revenue, which hit $14 billion annualized ($1.16 billion in a month) as of that date.  I have my doubts, but let’s put them aside for now. Anthropic is also in the midst of one of the most aggressive and dishonest public relations campaigns in history. While its Chief Commercial Officer Paul Smith told CNBC that it was “focused on growing revenue” rather than “spending money,” it’s currently making massive promises — tens of billions on Google Cloud, “$50 billion in American AI infrastructure,” and $30 billion on Azure. And despite Smith saying that Anthropic was less interested in “flashy headlines,” Chief Executive Dario Amodei has said, in the last three weeks, that “almost unimaginable power is potentially imminent,” that AI could replace all software engineers in the next 6-12 months, that AI may (it’s always fucking may) cause “unusually painful disruption to jobs,” and wrote a 19,000-word essay — I guess AI is coming for my job after all!
— where he repeated his noxious line that “we will likely get a century of scientific and economic progress compressed in a decade.” Yet arguably the most dishonest part is this word “training.” When you read “training,” you’re meant to think “oh, it’s training for something, this is an R&D cost,” when “training LLMs” is as consistent a cost as inference (the creation of the output) or any other kind of maintenance.  While most people know about pretraining — the shoving of large amounts of data into a model (a simplification, I realize) — in reality a lot of the current spate of models use post-training, which covers everything from small tweaks to model behavior to full-blown reinforcement learning where experts reward or punish particular responses to prompts. To be clear, all of this is well-known and documented, but the nomenclature of “training” suggests that it might stop one day, versus the truth: training costs are increasing dramatically, and “training” covers anything from training new models to bug fixes on existing ones. And, more fundamentally, it’s an ongoing cost — an essential and unavoidable cost of doing business.  Training is, for an AI lab like OpenAI or Anthropic, as common (and necessary) a cost as those associated with creating outputs (inference), yet it’s kept entirely out of gross margins: This is inherently deceptive. While one could argue that R&D is not considered in gross margins, training isn’t R&D — gross margins generally include the raw materials necessary to build something, and training is absolutely part of the raw costs of running an AI model. Direct labor and parts are considered part of the calculation of gross margin, and spending on training — both the data and the process of training itself — is absolutely meaningful, and to leave it out is an act of deception.
Anthropic’s 2025 gross margins were 40% — or 38% if you include free users of Claude — on inference costs of $2.7 (or $2.79) billion, with training costs of around $4.1 billion. What happens if you add training costs into the equation?  Let’s work it out! Training is not an up-front cost, and treating it as one only serves to help Anthropic cover for its wretched business model. Anthropic (like OpenAI) can never stop training, ever, and to pretend otherwise is misleading. This is not the cost just to “train new models” but to maintain current ones, build new products around them, and many other things that are direct, impossible-to-avoid components of COGS. They’re manufacturing costs, plain and simple. Anthropic projects to spend $100 billion on training in the next three years, which suggests it will spend — proportional to its current costs — around $32 billion on inference in the same period, on top of $21 billion of TPU purchases, on top of $30 billion on Azure (I assume in that period?), on top of “tens of billions” on Google Cloud. When you actually add these numbers together (assuming “tens of billions” is $15 billion), that’s $200 billion.  Anthropic (per The Information’s reporting) tells investors it will make $18 billion in revenue in 2026 and $55 billion in 2027 — year-over-year increases of 400% and 305% respectively, and is already raising $25 billion after having just closed a $30bn deal. How does Anthropic pay its bills? Why does outlet after outlet print these fantastical numbers without doing the maths of “how does Anthropic actually get all this money?” Because even with their ridiculous revenue projections, this company is still burning cash, and when you start to actually do the maths around anything in the AI industry, things become genuinely worrying.  You see, every single generative AI company is unprofitable, and appears to be getting less profitable over time.
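To make the “let’s work it out” arithmetic explicit, here is the calculation using only the figures quoted above ($4.5 billion of revenue, a 38% gross margin including free users, and roughly $4.1 billion of training costs):

```python
# Gross margin with training folded into COGS, using the figures quoted above.
revenue = 4.5        # $B, Anthropic 2025 revenue
gross_margin = 0.38  # including free users of Claude

cogs = revenue * (1 - gross_margin)    # inference-only COGS ≈ $2.79B
training = 4.1                         # $B of training costs
cogs_with_training = cogs + training   # ≈ $6.89B

margin = (revenue - cogs_with_training) / revenue  # ≈ -53%
```

With training treated as a cost of goods sold, the reported 38% gross margin flips to roughly negative 53%.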
Both The Information and the Wall Street Journal reported the same bizarre statement in November — that Anthropic would “turn a profit more quickly than OpenAI,” with The Information saying Anthropic would be cash flow positive in 2027 and the Journal putting the date at 2028, only for The Information to report in January that 2028 was the more realistic date.  If you’re wondering how, the answer is “Anthropic will magically become cash flow positive in 2028”: This is also the exact same logic as OpenAI, which will, per The Information in September, also, somehow, magically turn cash flow positive in 2030: Oracle, which has a 5-year-long, $300 billion compute deal with OpenAI that it lacks the capacity to serve and that OpenAI lacks the cash to pay for, also appears to have the same magical plan to become cash flow positive in 2029: Somehow, Oracle’s case is the most legit, in that theoretically by then it would be done, I assume, paying the $38 billion it’s raising for Stargate Shackelford and Wisconsin, but said assumption also hinges on the idea that OpenAI finds $300 billion somehow. It also relies upon Oracle raising more debt than it currently has — which, even before the AI hype cycle swept over the company, was a lot.  As I discussed a few weeks ago in the Hater’s Guide To Oracle, a megawatt of data center IT load generally costs (per Jerome Darling of TD Cowen) around $12-14m in construction (likely more due to skilled labor shortages, supply constraints and rising equipment prices) and $30m a megawatt in GPUs and associated hardware. In plain terms, Oracle (and its associated partners) need around $189 billion to build the 4.5GW of Stargate capacity required to make the revenue from the OpenAI deal, meaning that it needs around another $100 billion once it raises $50 billion in combined debt, bonds, and newly printed shares by the end of 2026.
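The ~$189 billion figure is a straightforward back-of-envelope using the TD Cowen per-megawatt numbers quoted above (taking the low end of the $12-14m construction range):

```python
# Back-of-envelope for the ~$189B Stargate build-out figure quoted above.
mw = 4500                    # 4.5GW of Stargate capacity, in megawatts
construction_per_mw = 12e6   # $ per MW, low end of the $12-14M range
hardware_per_mw = 30e6       # $ per MW in GPUs and associated hardware

total = mw * (construction_per_mw + hardware_per_mw)  # $189B
```

At the high end of the construction range ($14m/MW), the total rises to $198 billion, so $189 billion is, if anything, the optimistic case.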
I will admit I feel a little crazy writing this all out, because it’s somehow a fringe belief to do the very basic maths and say “hey, Oracle doesn’t have the capacity and OpenAI doesn’t have the money.” In fact, nobody seems to want to really talk about the cost of AI, because it’s much easier to say “I’m not a numbers person” or “they’ll work it out.” This is why in today’s newsletter I am going to lay out the stark reality of the AI bubble, and debut a model I’ve created to measure the actual, real costs of an AI data center. While my methodology is complex, my conclusions are simple: running AI data centers is, even when you remove the debt required to stand them up, a mediocre business that is vulnerable to basically any change in circumstances.  Based on hours of discussions with data center professionals, analysts and economists, I have calculated that in most cases, the average AI data center has gross margins of somewhere between 30% and 40% — margins that decay rapidly for every day, week, or month it takes to put a data center into operation. This is why Oracle has negative 100% margins on NVIDIA’s GB200 chips — the burdensome up-front cost of building AI data centers (GPUs, servers, and other associated hardware) leaves you billions of dollars in the hole before you even start serving compute, after which you’re left to contend with taxes, depreciation, financing, and the cost of actually powering the hardware.  Yet things sour further when you face the actual financial realities of these deals — and the debt associated with them.  Based on my current model of the 1GW Stargate Abilene data center, Oracle likely plans to make around $11 billion in revenue a year from its 1.2GW (or around 880MW of critical IT load).
While that sounds good, when you add things like depreciation, electricity, colocation costs of $1 billion a year from Crusoe, opex, and the myriad other costs, its margins sit at a stinkerific 27.2% — and that’s assuming OpenAI actually pays, on time, in a reliable way. Things only get worse when you factor in the cost of debt. While Oracle has funded Abilene using a mixture of bonds and existing cashflow, it very clearly has yet to receive the majority of the $25 billion+ in GPUs and associated hardware (with only 96,000 GPUs “delivered”), meaning that it likely bought them out of its $18 billion bond sale from last September.  If that maths holds, Oracle is paying a little less than $963 million a year (per the terms of the bond sale) whether or not a single GPU is even turned on, leaving us with a net margin of 22.19%… and this is assuming OpenAI pays every single bill, every single time, and there are absolutely no delays. These delays are also very, very expensive. Based on my model, if we assume that 100MW of critical IT load is operational (roughly two buildings and 100,000 GB200s) but has yet to start generating revenue, Oracle is burning around $4.69 million a day in cash, without depreciation (EDITOR’S NOTE: sorry! This previously said depreciation was a cash expense and was included in this number, even though it wasn’t; it’s correct in the model!). I have also confirmed with sources in Abilene that there is no chance that Stargate Abilene is fully operational in 2026. In simpler terms: I will admit I’m quite disappointed that the media at large has mostly ignored this story. Limp, cautious “are we in an AI bubble?” conversations are insufficient to deal with the potential for collapse we’re facing.  Today, I’m going to dig into the reality of the costs of AI, and explain in gruesome detail exactly how easily these data centers can approach insolvency in the event that their tenants fail to pay.
The chain of pain is real: Today I’m going to explain how easily it breaks. If Anthropic’s gross margin was 38% in 2025, that means its COGS (cost of goods sold) was $2.79 billion. If we add training, this brings COGS to $6.89 billion, leaving us with -$2.39 billion after $4.5 billion in revenue. This results in a negative 53% gross margin.

- AI startups are all unprofitable, and do not appear to have a path to sustainability.
- AI data centers are being built in anticipation of demand that doesn’t exist, and will only exist if AI startups — which are all unprofitable — can afford to pay them.
- Oracle, which has committed to building 4.5GW of data centers, is burning cash every day that OpenAI takes to set up its GPUs, and when it starts making money, it does so from a starting position of billions and billions of dollars in debt.
- Margins are low throughout the entire stack of AI data center operators — from landlords like Applied Digital to compute providers like CoreWeave — thanks to the billions in debt necessary to fund both construction and the IT hardware to make them run, putting both parties in a hole that can only be filled with revenues that come from either hyperscalers or AI startups.

In a very real sense, the AI compute industry is dependent on AI “working out,” because if it doesn’t, every single one of these data centers will become a burning hole in the ground.

Stratechery 2 weeks ago

Amazon Earnings, CapEx Concerns, Commodity AI

Amazon's massive CapEx increase makes me much more nervous than Google's, but it is understandable.

Stratechery 3 weeks ago

Apple Earnings, Supply Chain Speculation, China and Industrial Design

Apple's earnings could have been higher, but the company couldn't get enough chips; then, once again, a new design meant higher sales in China.

Stratechery 1 month ago

Meta Earnings, Turning Dials, Zuckerberg’s Motivation

Meta is up, despite massive CapEx plans. The company is turning every dial to drive revenue, because Mark Zuckerberg thinks winning in AI is existential.

Stratechery 1 month ago

An Interview with Kalshi CEO Tarek Mansour About Prediction Markets

An interview with Kalshi co-founder and CEO Tarek Mansour about the value of prediction markets.


Lessons from Building AI Agents for Financial Services

I’ve spent the last two years building AI agents for financial services. Along the way, I’ve accumulated a fair number of battle scars and learnings that I want to share. Here’s what I’ll cover:

- The Sandbox Is Not Optional: Why isolated execution environments are essential for multi-step agent workflows
- Context Is the Product: How we normalize heterogeneous financial data into clean, searchable context
- The Parsing Problem: The hidden complexity of extracting structured data from adversarial SEC filings
- Skills Are Everything: Why markdown-based skills are becoming the product, not the model
- The Model Will Eat Your Scaffolding: Designing for obsolescence as models improve
- The S3-First Architecture: Why S3 beats databases for file storage and user data
- The File System Tools: How ReadFile, WriteFile, and Bash enable complex financial workflows
- Temporal Changed Everything: Reliable long-running tasks with proper cancellation handling
- Real-Time Streaming: Building responsive UX with delta updates and interactive agent workflows
- Evaluation Is Not Optional: Domain-specific evals that catch errors before they cost money
- Production Monitoring: The observability stack that keeps financial agents reliable

Why financial services is extremely hard

This domain doesn’t forgive mistakes. Numbers matter. A wrong revenue figure, a misinterpreted guidance statement, an incorrect DCF assumption. Professional investors make million-dollar decisions based on our output. One mistake on a $100M position and you’ve destroyed trust forever. The users are also demanding. Professional investors are some of the smartest, most time-pressed people you’ll ever work with. They spot bullshit instantly. They need precision, speed, and depth. You can’t hand-wave your way through a valuation model or gloss over nuances in an earnings call. This forces me to develop an almost paranoid attention to detail. Every number gets double-checked.
Every assumption gets validated. Every model gets stress-tested. You start questioning everything the LLM outputs because you know your users will. A single wrong calculation in a DCF model and you lose credibility forever. I sometimes feel that the fear of being wrong becomes our best feature. Over the years of building with LLMs, we’ve made bold infrastructure bets early, and I think we’ve been right. For instance, when Claude Code launched with its filesystem-first agentic approach, we immediately adopted it. It was not an obvious bet, and it was a massive revamp of our architecture. I was extremely lucky to have Thariq from Anthropic’s Claude Code team jump on a Zoom and open my eyes to the possibilities. At the time the whole industry, Fintool included, was building elaborate RAG pipelines with vector databases and embeddings. After reflecting on the future of information retrieval with agents I wrote “the RAG obituary” and Fintool moved fully to agentic search. We even decided to retire our precious embedding pipeline. Sad, but whatever is best for the future! People thought we were crazy. The article got a lot of praise and a lot of negative comments. Now I feel most startups are adopting these best practices. I believe we’re early on several other architectural choices too. I’m sharing them here because the best way to test ideas is to put them out there. Let’s start with the biggest one. When we first started building Fintool in 2023, I thought sandboxing might be overkill. “We’re just running Python scripts,” I told myself. “What could go wrong?” Haha. Everything. Everything could go wrong. The first time an LLM decided to `rm -rf /` on our server (it was trying to “clean up temporary files”), I became a true believer. Here’s the thing: agents need to run multi-step operations. A professional investor asks for a DCF valuation, and that’s not a single API call.
The agent needs to research the company, gather financial data, build a model in Excel, run sensitivity analysis, generate complex charts, iterate on assumptions. That’s dozens of steps, each potentially modifying files, installing packages, running scripts. You can’t do this without code execution. And executing arbitrary code on your servers is insane. Every chat application needs a sandbox. Today each user gets their own isolated environment. The agent can do whatever it wants in there. Delete everything? Fine. Install weird packages? Go ahead. It’s your sandbox, knock yourself out. The architecture looks like this: three mount points. Private is read/write for your stuff. Shared is read-only for your organization. Public is read-only for everyone. The magic is in the credentials. We use AWS ABAC (Attribute-Based Access Control) to generate short-lived credentials scoped to specific S3 prefixes. User A literally cannot access User B’s data. The IAM policy uses `${aws:PrincipalTag/S3Prefix}` to restrict access. The credentials physically won’t allow it. This is also very good for enterprise deployment. We also do sandbox pre-warming. When a user starts typing, we spin up their sandbox in the background. By the time they hit enter, the sandbox is ready. There’s a 600-second timeout, extended by 10 minutes on each tool use. The sandbox stays warm across conversation turns. Sandboxes are amazing, but their under-discussed magic is filesystem support. Which brings us to the next lesson learned, about context. Your agent is only as good as the context it can access. The real work isn’t prompt engineering; it’s turning messy financial data from dozens of sources into clean, structured context the model can actually use. This requires massive domain expertise from the engineering team.

The heterogeneity problem.
Financial data comes in every format imaginable:

- SEC filings: HTML with nested tables, exhibits, signatures
- Earnings transcripts: Speaker-segmented text with Q&A sections
- Press releases: Semi-structured HTML from PRNewswire
- Research reports: PDFs with charts and footnotes
- Market data: Snowflake/databases with structured numerical data
- News: Articles with varying quality and structure
- Alternative data: Satellite imagery, web traffic, credit card panels
- Broker research: Proprietary PDFs with price targets and models
- Fund filings: 13F holdings, proxy statements, activist letters

Each source has different schemas, different update frequencies, different quality levels. The agent needs one thing: clean context it can reason over.

The normalization layer. Everything becomes one of three formats:

- Markdown for narrative content (filings, transcripts, articles)
- CSV/tables for structured data (financials, metrics, comparisons)
- JSON metadata for searchability (tickers, dates, document types, fiscal periods)

Chunking strategy matters. Not all documents chunk the same way:

- 10-K filings: Section by regulatory structure (Item 1, 1A, 7, 8...)
- Earnings transcripts: Chunk by speaker turn (CEO remarks, CFO remarks, Q&A by analyst)
- Press releases: Usually small enough to be one chunk
- News articles: Paragraph-level chunks
- 13F filings: By holder and position changes quarter-over-quarter

The chunking strategy determines what context the agent retrieves. Bad chunks = bad answers.

Tables are special. Financial data is full of tables and CSVs. Revenue breakdowns, segment performance, guidance ranges. LLMs are surprisingly good at reasoning over markdown tables, but they’re terrible at reasoning over HTML `<table>` tags or raw CSV dumps. The normalization layer converts everything to clean markdown tables.

Metadata enables retrieval. The user asks the agent: “What did Apple say about services revenue in their last earnings call?
” To answer this, Fintool needs: - Ticker resolution (AAPL → correct company) - Document type filtering (earnings transcript, not 10-K) - Temporal filtering (most recent, not 2019) - Section targeting (CFO remarks or revenue discussion, not legal disclaimers) This is why `meta.json` exists for every document. Without structured metadata, you’re doing keyword search over a haystack. It speeds up the search, big time! Anyone can call an LLM API. Not everyone has normalized decades of financial data into searchable, chunked markdown with proper metadata. The data layer is what makes agents actually work. The Parsing Problem Normalizing financial data is 80% of the work. Here’s what nobody tells you. SEC filings are adversarial. They’re not designed for machine reading. They’re designed for legal compliance: - Tables span multiple pages with repeated headers - Footnotes reference exhibits that reference other footnotes - Numbers appear in text, tables, and exhibits—sometimes inconsistently - XBRL tags exist but are often wrong or incomplete - Formatting varies wildly between filers (every law firm has their own template) We tried off-the-shelf PDF/HTML parsers. They failed on: - Multi-column layouts in proxy statements - Nested tables in MD&A sections (tables within tables within tables) - Watermarks and headers bleeding into content - Scanned exhibits (still common in older filings and attachments) - Unicode issues (curly quotes, em-dashes, non-breaking spaces) The Fintool parsing pipeline: Raw Filing (HTML/PDF) Document structure detection (headers, sections, exhibits) Table extraction with cell relationship preservation Entity extraction (companies, people, dates, dollar amounts) Cross-reference resolution (Ex. 10.1 → actual exhibit content) Fiscal period normalization (FY2024 → Oct 2023 to Sep 2024 for Apple) Quality scoring (confidence per extracted field) Table extraction deserves its own work. Financial tables are dense with meaning. 
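Stepping back to the `meta.json` point above: the structured pre-filter can be sketched in a few lines. The field names here are illustrative, not Fintool's actual schema.

```python
import json
from datetime import date

# Hypothetical meta.json sidecar for one document (field names illustrative).
meta = json.loads("""
{"ticker": "AAPL",
 "doc_type": "earnings_transcript",
 "fiscal_period": "Q1 FY2024",
 "filed": "2024-02-01"}
""")

def matches(doc_meta: dict, ticker: str, doc_type: str, filed_after: date) -> bool:
    """Structured pre-filter: narrow by ticker, document type, and recency
    before any text search touches the document bodies."""
    return (doc_meta["ticker"] == ticker
            and doc_meta["doc_type"] == doc_type
            and date.fromisoformat(doc_meta["filed"]) >= filed_after)

assert matches(meta, "AAPL", "earnings_transcript", date(2024, 1, 1))
assert not matches(meta, "AAPL", "10-K", date(2024, 1, 1))                  # wrong type
assert not matches(meta, "AAPL", "earnings_transcript", date(2024, 6, 1))   # too old
```

Running this filter over metadata alone is what lets retrieval skip the haystack entirely.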
A revenue breakdown table might have:

- Merged header cells spanning multiple columns
- Footnote markers (1), (2), (a), (b) that reference explanations below
- Parentheses for negative numbers: $(1,234) means -1234
- Mixed units in the same table (millions for revenue, percentages for margins)
- Prior period restatements in italics or with asterisks

We score every extracted table on:

- Cell boundary accuracy (did we split/merge correctly?)
- Header detection (is row 1 actually headers, or is there a title row above?)
- Numeric parsing (is “$1,234” parsed as 1234 or left as text?)
- Unit inference (millions? billions? per share? percentage?)

Tables below 90% confidence get flagged for review. Low-confidence extractions don’t enter the agent’s context—garbage in, garbage out.

Fiscal period normalization is critical. “Q1 2024” is ambiguous:

- Calendar Q1 (January-March 2024)
- Apple’s fiscal Q1 (October-December 2023)
- Microsoft’s fiscal Q1 (July-September 2023)
- “Reported in Q1” (filed in Q1, but covers the prior period)

We maintain a fiscal calendar database for 10,000+ companies. Every date reference gets normalized to absolute date ranges. When the agent retrieves “Apple Q1 2024 revenue,” it knows to look for data from October-December 2023. This is invisible to users but essential for correctness. Without it, you’re comparing Apple’s October revenue to Microsoft’s January revenue and calling it “same quarter.”

Skills Are Everything

Here’s the thing nobody tells you about building AI agents: the model is not the product. The skills are now the product. I learned this the hard way. We used to try making the base model “smarter” through prompt engineering. Tweak the system prompt, add examples, write elaborate instructions. It helped a little. But skills were the missing part. In October 2025, Anthropic formalized this with Agent Skills, a specification for extending Claude with modular capability packages.
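Returning to fiscal periods for a second: the normalization is a lookup plus month arithmetic. The calendar entries below are the well-known cases mentioned above; the real database covers 10,000+ companies.

```python
# Sketch of fiscal-period normalization. Two well-known entries stand in for
# the full fiscal-calendar database described in the text.
FISCAL_YEAR_END_MONTH = {"AAPL": 9, "MSFT": 6}  # month the fiscal year ends

def fiscal_quarter_range(ticker: str, quarter: int, fiscal_year: int):
    """Map (ticker, fiscal quarter, fiscal year) to ((start_y, start_m), (end_y, end_m))."""
    fy_end = FISCAL_YEAR_END_MONTH.get(ticker, 12)  # default: calendar year
    fy_start_month = fy_end % 12 + 1                # month after the FY ends
    fy_start_year = fiscal_year if fy_end == 12 else fiscal_year - 1
    m = fy_start_month - 1 + (quarter - 1) * 3      # 0-based month offset
    return ((fy_start_year + m // 12, m % 12 + 1),
            (fy_start_year + (m + 2) // 12, (m + 2) % 12 + 1))

# "Q1 2024" resolves differently per issuer:
assert fiscal_quarter_range("AAPL", 1, 2024) == ((2023, 10), (2023, 12))
assert fiscal_quarter_range("MSFT", 1, 2024) == ((2023, 7), (2023, 9))
assert fiscal_quarter_range("XYZ", 1, 2024) == ((2024, 1), (2024, 3))
```

Every period reference in a document gets resolved to an absolute range like this before it enters the agent's context.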
A skill is a folder containing a `SKILL.md` file with YAML frontmatter (name and description), plus any supporting scripts, references, or data files the agent might need. We’d been building something similar for months before the announcement. The validation felt good, but more importantly, having an industry standard means our skills can eventually be portable.

Without skills, models are surprisingly bad at domain tasks. Ask a frontier model to do a DCF valuation. It knows what DCF is. It can explain the theory. But actually executing one? It will miss critical steps, use wrong discount rates for the industry, forget to add back stock-based compensation, skip sensitivity analysis. The output looks plausible but is subtly wrong in ways that matter.

The breakthrough came when we started thinking about skills as first-class citizens, part of the product itself. A skill is a markdown file that tells the agent how to do something specific. Here’s a simplified version of our DCF skill. That’s it. A markdown file. No code changes. No production deployment. Just a file that tells the agent what to do.

Skills are better than code. This matters enormously:

1. Non-engineers can create skills. Our analysts write skills. Our customers write skills. A portfolio manager who’s done 500 DCF valuations can encode their methodology in a skill without writing a single line of Python.
2. No deployment needed. Change a skill file and it takes effect immediately. No CI/CD, no code review, no waiting for release cycles. Domain experts can iterate on their own.
3. Readable and auditable. When something goes wrong, you can read the skill and understand exactly what the agent was supposed to do. Try doing that with a 2,000-line Python module.

We have a copy-on-write shadowing system. Priority: private > shared > public. So if you don’t like how we do DCF valuations, write your own. Drop it in `/private/skills/dcf/SKILL.md`. Your version wins.
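The original article embeds the actual skill file, which isn't reproduced in this text. As a purely hypothetical sketch, a `SKILL.md` along these lines would match the frontmatter format described above; the steps are illustrative, not Fintool's real methodology.

```markdown
---
name: dcf-valuation
description: Build a discounted cash flow valuation for a public company.
---

# DCF Valuation

1. Pull the last 5 years of revenue, EBITDA, capex, and working capital from filings.
2. Project free cash flow for 5 years; state every growth assumption explicitly.
3. Use an industry-appropriate WACC; document the rate and its source.
4. Add back stock-based compensation where the methodology requires it.
5. Compute terminal value and sanity-check it against exit multiples.
6. Run sensitivity analysis on WACC and terminal growth; present it as a table.
```

Note how the file encodes exactly the failure modes listed above (discount rates, SBC add-backs, sensitivity analysis) as explicit steps.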
Why we don’t mount all skills to the filesystem. This is important. The naive approach would be to mount every skill file directly into the sandbox. The agent can just `cat` any skill it needs. Simple, right? Wrong. Here’s why we use SQL discovery instead:

1. Lazy loading. We have dozens of skills with extensive documentation; the DCF skill alone has 10+ industry guideline files. Loading all of them into context for every conversation would burn tokens and confuse the model. Instead, we discover skill metadata (name, description) upfront, and only load the full documentation when the agent actually uses that skill.
2. Access control at query time. The SQL query implements our three-tier access model: public skills available to everyone, organization skills for that org’s users, private skills for individual users. The database enforces this. You can’t accidentally expose a customer’s proprietary skill to another customer.
3. Shadowing logic. When a user customizes a skill, their version needs to override the default. SQL makes this trivial—query all three levels, apply priority rules, return the winner. Doing this with filesystem mounts would be a nightmare of symlinks and directory ordering.
4. Metadata-driven filtering. The `fs_files.metadata` column stores parsed YAML frontmatter. We can filter by skill type, check if a skill is main-agent-only, or query any other structured attribute—all without reading the files themselves.

The pattern: S3 is the source of truth, a Lambda function syncs changes to PostgreSQL for fast queries, and the agent gets exactly what it needs when it needs it.

Skills are essential. I cannot emphasize this enough. If you’re building an AI agent and you don’t have a skills system, you’re going to have a bad time. My biggest argument for skills is that top models (Claude or GPT) are post-trained on using skills. The model wants to fetch skills. Models just want to learn, and what they want to learn is our skills... until they eat it.
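The three-tier shadowing query can be sketched with SQLite standing in for PostgreSQL. Table and column names here (`fs_files`, `scope`, `owner`) are illustrative, not Fintool's actual schema.

```python
import sqlite3

# Sketch of three-tier skill discovery with shadowing. SQLite stands in for
# PostgreSQL; the schema is illustrative.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE fs_files (path TEXT, scope TEXT, owner TEXT, body TEXT)")
db.executemany("INSERT INTO fs_files VALUES (?, ?, ?, ?)", [
    ("skills/dcf/SKILL.md", "public",  None,     "default DCF skill"),
    ("skills/dcf/SKILL.md", "shared",  "org-1",  "org-1 DCF skill"),
    ("skills/dcf/SKILL.md", "private", "user-7", "user-7 DCF skill"),
])

def resolve_skill(path: str, user: str, org: str):
    """Query all three tiers at once; private > shared > public wins."""
    row = db.execute("""
        SELECT body FROM fs_files
        WHERE path = ?
          AND (scope = 'public'
               OR (scope = 'shared'  AND owner = ?)
               OR (scope = 'private' AND owner = ?))
        ORDER BY CASE scope WHEN 'private' THEN 0 WHEN 'shared' THEN 1 ELSE 2 END
        LIMIT 1""", (path, org, user)).fetchone()
    return row[0] if row else None

assert resolve_skill("skills/dcf/SKILL.md", "user-7", "org-1") == "user-7 DCF skill"
assert resolve_skill("skills/dcf/SKILL.md", "user-9", "org-2") == "default DCF skill"
```

One query returns the winning version, which is exactly the copy-on-write behavior that symlink tricks make painful.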
The Model Will Eat Your Scaffolding

Here’s the uncomfortable truth: everything I just told you about skills? It’s temporary, in my opinion. Models are getting better. Fast. Every few months, there’s a new model that makes half your code obsolete. The elaborate scaffolding you built to handle edge cases? The model just... handles them now.

When we started, we needed detailed skills with step-by-step instructions for some simple tasks. “First do X, then do Y, then check Z.” Now? For simple tasks we can often just say “do an earnings preview” and the model figures it out (kind of!).

This creates a weird tension. You need skills today because current models aren’t smart enough. But you should design your skills knowing that future models will need less hand-holding. That’s why I’m bullish on markdown files versus code for model instructions. They’re easier to update and delete.

We send detailed feedback to AI labs. Whenever we build complex scaffolding to work around model limitations, we document exactly what the model struggles with and share it with the lab’s research team. This helps inform the next generation of models. The goal is to make our own scaffolding obsolete.

My prediction: in two years, most of our basic skills will be one-liners. “Generate a 20-tab DCF.” That’s it. The model will know what that means. But here’s the flip side: as basic tasks get commoditized, we’ll push into more complex territory. Multi-step valuations with segment-by-segment analysis. Automated backtesting of investment strategies. Real-time portfolio monitoring with complex triggers. The frontier keeps moving.

So we write skills. We delete them when they become unnecessary. And we build new ones for the harder problems that emerge. And all of that is files... in our filesystem.

The S3-First Architecture

Here’s something that surprised me: for files, S3 is a better database than a database. We store user data (watchlists, portfolios, preferences, memories, skills) in S3 as YAML files. S3 is the source of truth.
A Lambda function syncs changes to PostgreSQL for fast queries.

Writes → S3 (source of truth) → Lambda trigger → PostgreSQL (fs_files table)
Reads ← fast queries

Why S3:

- Durability: S3 has 11 9’s. A database doesn’t.
- Versioning: S3 versioning gives you audit trails for free.
- Simplicity: YAML files are human-readable. You can debug with `cat`.
- Cost: S3 is cheap. Database storage is not.

The pattern:

- Writes go to S3 directly
- List queries hit the database (fast)
- Single-item reads go to S3 (freshest data)

The sync architecture. We run two Lambda functions to keep S3 and PostgreSQL in sync:

- S3 (file upload/delete) → fs-sync Lambda → upsert/delete in the fs_files table (real-time)
- EventBridge (every 3 hours) → fs-reconcile Lambda → full S3 vs. DB scan, fix discrepancies

Both use upsert with timestamp guards—newer data always wins. The reconcile job catches any events that slipped through (S3 eventual consistency, Lambda cold starts, network blips).

User memories live here too. Every user has a `/private/memories/UserMemories.md` file in S3. It’s just markdown—users can edit it directly in the UI. On every conversation, we load it and inject it as context. This is surprisingly powerful. Users write things like “I focus on small-cap value stocks” or “Always compare to industry median, not mean” or “My portfolio is concentrated in tech, so flag concentration risk.” The agent sees this on every conversation and adapts accordingly. No migrations. No schema changes. Just a markdown file that the user controls.

Watchlists work the same way. YAML files in S3, synced to PostgreSQL for fast queries. When a user asks about “my watchlist,” we load the relevant tickers and inject them as context. The agent knows what companies matter to this user. The filesystem becomes the user’s personal knowledge base. Skills tell the agent how to do things. Memories tell it what the user cares about. Both are just files.

The File System Tools

Agents in financial services need to read and write files. A lot of files.
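The "newer data always wins" guard from the sync section above can be sketched in a few lines; a dict stands in for the `fs_files` table.

```python
# Sketch of the timestamp-guarded upsert both sync Lambdas rely on.
# A dict stands in for the fs_files table.
store: dict[str, tuple[float, str]] = {}  # path -> (modified_at, body)

def upsert(path: str, modified_at: float, body: str) -> bool:
    """Apply the event only if it is newer than what we already have."""
    current = store.get(path)
    if current and current[0] >= modified_at:
        return False  # stale event: out-of-order delivery, retry, replay
    store[path] = (modified_at, body)
    return True

assert upsert("private/memories/UserMemories.md", 100.0, "v1")
assert upsert("private/memories/UserMemories.md", 200.0, "v2")
assert not upsert("private/memories/UserMemories.md", 150.0, "stale")
assert store["private/memories/UserMemories.md"][1] == "v2"
```

Because the guard is idempotent, the real-time sync and the 3-hour reconcile job can safely race each other.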
PDFs, spreadsheets, images, code. Here’s how we handle it. ReadFile handles the complexity. WriteFile creates artifacts that link back to the UI. Bash gives persistent shell access with a 180-second timeout and a 100K-character output limit. Path normalization on everything (LLMs love trying path traversal attacks; it’s hilarious).

Bash is more important than you think. There’s a growing conviction in the AI community that filesystems and bash are the optimal abstraction for AI agents. Braintrust recently ran an eval comparing SQL agents, bash agents, and hybrid approaches for querying semi-structured data. The results were interesting: pure SQL hit 100% accuracy but missed edge cases. Pure bash was slower and more expensive but caught verification opportunities. The winner? A hybrid approach where the agent uses bash to explore and verify, and SQL for structured queries. This matches our experience. Financial data is messy. You need bash to grep through filing documents, find patterns, explore directory structures. But you also need structured tools for the heavy lifting. The agent needs both—and the judgment to know when to use each.

We’ve leaned hard into giving agents full shell access in the sandbox. It’s not just for running Python scripts. It’s for exploration, verification, and the kind of ad-hoc data manipulation that complex tasks require. But complex tasks mean long-running agents. And long-running agents break everything.

Temporal Changed Everything

Before Temporal, our long-running tasks were a disaster. A user asks for a comprehensive company analysis. That takes 5 minutes. What if the server restarts? What if the user closes the tab and comes back? What if... anything? We had a homegrown job queue. It was bad. Retries were inconsistent. State management was a nightmare. Then we switched to Temporal, and I wanted to cry tears of joy! That’s it. Temporal handles worker crashes, retries, everything.
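Back to the path normalization mentioned with the file tools: the traversal guard can be sketched without touching the real filesystem. The mount point `/sandbox` is illustrative.

```python
from pathlib import PurePosixPath

SANDBOX_ROOT = PurePosixPath("/sandbox")  # illustrative mount point

def normalize(user_path: str) -> PurePosixPath:
    """Resolve a model-supplied path; refuse anything escaping the sandbox.
    PurePosixPath never touches the real filesystem, so untrusted input is safe
    here; '..' segments are collapsed manually."""
    parts: list[str] = []
    for part in PurePosixPath(user_path).parts:
        if part in ("/", "."):
            continue
        if part == "..":
            if not parts:
                raise ValueError(f"path escapes sandbox: {user_path!r}")
            parts.pop()
        else:
            parts.append(part)
    return SANDBOX_ROOT.joinpath(*parts)

assert str(normalize("private/notes.md")) == "/sandbox/private/notes.md"
assert str(normalize("private/../shared/x.csv")) == "/sandbox/shared/x.csv"
try:
    normalize("../../etc/passwd")
    raise AssertionError("should have been rejected")
except ValueError:
    pass
```

Every ReadFile, WriteFile, and Bash argument goes through a check of this shape before anything executes.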
If a Heroku dyno restarts mid-conversation (happens all the time, lol), Temporal automatically retries on another worker. The user never knows. The cancellation handling is the tricky part. The user clicks “stop.” What happens? The activity is already running on a different server. We use heartbeats sent every few seconds.

We run two worker types:

- Chat workers: user-facing, 25 concurrent activities
- Background workers: async tasks, 10 concurrent activities

They scale independently. Chat traffic spikes? Scale chat workers.

Real-Time Streaming

Next is speed. In finance, people are impatient. They’re not going to wait 30 seconds staring at a loading spinner. They need to see something happening. So we built real-time streaming. The agent works, you see the progress.

Agent → SSE Events → Redis Stream → API → Frontend

The key insight: delta updates, not full state. Instead of sending “here’s the complete response so far” (expensive), we send “append these 50 characters” (cheap).

Streaming rich content with Streamdown. Text streaming is table stakes. The harder problem is streaming rich content: markdown with tables, charts, citations, math equations. We use Streamdown to render markdown as it arrives, with custom plugins for our domain-specific components. Charts render progressively. Citations link to source documents. Math equations display properly with KaTeX. The user sees a complete, interactive response building in real time.

AskUserQuestion: interactive agent workflows. Sometimes the agent needs user input mid-workflow. “Which valuation method do you prefer?” “Should I use consensus estimates or management guidance?” “Do you want me to include the pipeline assets in the valuation?” We built an `AskUserQuestion` tool that lets the agent pause, present options, and wait. When the agent calls this tool, the agentic loop intercepts it, saves state, and presents a UI to the user. The user picks an option (or types a custom answer), and the conversation resumes with their choice.
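The delta-update scheme above can be sketched as a fold over append events. The event shape is illustrative, not Fintool's actual SSE schema.

```python
import json

# Sketch of delta-based streaming: the server emits "append these characters"
# events instead of re-sending the full response. Event shape is illustrative.
events = [
    '{"type": "delta", "text": "Comparable company "}',
    '{"type": "delta", "text": "analysis attached "}',
    '{"type": "delta", "text": "below."}',
    '{"type": "done"}',
]

def fold(stream) -> str:
    """Client-side fold: append each delta, stop at the done marker."""
    buf = []
    for raw in stream:
        event = json.loads(raw)
        if event["type"] == "delta":
            buf.append(event["text"])  # only the new characters travel the wire
        elif event["type"] == "done":
            break
    return "".join(buf)

assert fold(events) == "Comparable company analysis attached below."
```

Each event carries only the new characters, so bandwidth and re-render cost stay proportional to the delta, not the response length.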
This transforms agents from autonomous black boxes into collaborative tools. The agent does the heavy lifting, but the user stays in control of key decisions. Essential for high-stakes financial work where users need to validate assumptions.

Evaluation Is Not Optional

“Ship fast, fix later” works for most startups. It does not work for financial services. A wrong earnings number can cost someone money. A misinterpreted guidance statement can lead to bad investment decisions. You can’t just “fix it later” when your users are making million-dollar decisions based on your output.

We use Braintrust for experiment tracking. Every model change, every prompt change, every skill change gets evaluated against a test set. Generic NLP metrics (BLEU, ROUGE) don’t work for finance. A response can be semantically similar but have completely wrong numbers. Building eval datasets is harder than building the agent. We maintain ~2,000 test cases across categories.

Ticker disambiguation. This is deceptively hard:

- “Apple” → AAPL, not APLE (Appel Petroleum)
- “Meta” → META, not MSTR (which some people call “meta”)
- “Delta” → DAL (the airline), or is the user talking about delta hedging (the options term)?

The really nasty cases are ticker changes. Facebook became META in 2021. Google restructured under GOOG/GOOGL. Twitter became X (but kept the legal entity). When a user asks “What happened to Facebook stock in 2023?”, you need to know that FB → META, and that historical data before Oct 2021 lives under the old ticker. We maintain a ticker history table and test cases for every major rename in the last decade.

Fiscal period hell.
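Before moving on, the ticker-history lookup described above can be sketched as a small table scan. The single entry and its effective date follow the article's "before Oct 2021" note and are illustrative.

```python
from datetime import date

# Illustrative ticker-history table: (old ticker, new ticker, effective date).
# The FB -> META date follows the article's "before Oct 2021" note.
RENAMES = [("FB", "META", date(2021, 10, 1))]

def ticker_on(current: str, on: date) -> str:
    """Which symbol held this company's historical data on a given date?"""
    for old, new, effective in RENAMES:
        if new == current and on < effective:
            return old
    return current

assert ticker_on("META", date(2020, 6, 1)) == "FB"    # pre-rename data
assert ticker_on("META", date(2023, 6, 1)) == "META"
assert ticker_on("AAPL", date(2020, 6, 1)) == "AAPL"  # no rename on file
```

A query like "Facebook stock in 2023" first resolves to META, then routes any pre-rename date ranges through the old symbol.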
This is where most financial agents silently fail:

- Apple’s Q1 is October-December (fiscal year ends in September)
- Microsoft’s Q2 is October-December (fiscal year ends in June)
- Most companies’ Q1 is January-March (calendar year)

“Last quarter” on January 15th means:

- Q4 2024 for calendar-year companies
- Q1 2025 for Apple (they just reported)
- Q2 2025 for Microsoft (they’re mid-quarter)

We maintain fiscal calendars for 10,000+ companies. Every period reference gets normalized to absolute date ranges. We have 200+ test cases just for period extraction.

Numeric precision. Revenue of $4.2B vs. $4,200M vs. $4.2 billion vs. “four point two billion.” All equivalent. But “4.2” alone is wrong—missing units. Is it millions? Billions? Per share? We test unit inference, magnitude normalization, and currency handling. A response that says “revenue was 4.2” without units fails the eval, even if 4.2B is correct.

Adversarial grounding. We inject fake numbers into context and verify the model cites the real source, not the planted one. Example: we include a fake analyst report stating “Apple revenue was $50B” alongside the real 10-K showing $94B. If the agent cites $50B, it fails. If it cites $94B with proper source attribution, it passes. We have 50 test cases specifically for hallucination resistance.

Eval-driven development. Every skill has a companion eval. The DCF skill has 40 test cases covering WACC edge cases, terminal value sanity checks, and stock-based compensation add-backs (models forget this constantly). A PR is blocked if its eval score drops >5%. No exceptions.

Production Monitoring

Our production setup looks like this: we auto-file GitHub issues for production errors. An error happens, and an issue gets created with full context: conversation ID, user info, traceback, links to Braintrust traces and Temporal workflows. Paying customers get a `priority:high` label. Model routing by complexity: simple queries use Haiku (cheap), complex analysis uses Sonnet (expensive).
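The magnitude normalization behind the numeric-precision checks can be sketched as follows; real parsing also handles currencies, per-share figures, and spelled-out numbers.

```python
import re

# Sketch of magnitude normalization: "$4.2B", "$4,200M", "4.2 billion" must
# compare equal, and bare numbers without units must be rejected.
SCALE = {"B": 1e9, "billion": 1e9, "M": 1e6, "million": 1e6, "K": 1e3}

def to_dollars(text: str) -> float:
    m = re.fullmatch(r"\$?([\d,.]+)\s*([A-Za-z]+)", text.strip())
    if not m:
        raise ValueError(f"no unit found in {text!r}: bare numbers are ambiguous")
    value, unit = m.groups()
    return float(value.replace(",", "")) * SCALE[unit]

assert round(to_dollars("$4.2B")) == round(to_dollars("$4,200M")) == 4_200_000_000
assert round(to_dollars("4.2 billion")) == 4_200_000_000
try:
    to_dollars("4.2")  # missing units: fails, exactly as in the evals
    raise AssertionError("should have raised")
except ValueError:
    pass
```

The eval mirrors the rejection branch: an answer of "revenue was 4.2" with no unit fails even when the magnitude happens to be right.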
Enterprise users always get the best model.

The biggest lesson isn’t about sandboxes or skills or streaming. It’s this: the model is not your product. The experience around the model is your product. Anyone can call Claude or GPT. The API is the same for everyone. What makes your product different is everything else: the data you have access to, the skills you’ve built, the UX you’ve designed, the reliability you’ve engineered, and, frankly, how well you know the industry, which is a function of how much time you spend with your customers.

Models will keep getting better. That’s great! It means less scaffolding, less prompt engineering, less complexity. But it also means the model becomes more of a commodity. Your moat is not the model. Your moat is everything you build around it. For us, that’s financial data, domain-specific skills, real-time streaming, and the trust we’ve built with professional investors. What’s yours?

Thanks for reading!

I’ve spent the last two years building AI agents for financial services. Along the way, I’ve accumulated a fair number of battle scars and learnings that I want to share.
Here’s what I’ll cover:

- The Sandbox Is Not Optional: why isolated execution environments are essential for multi-step agent workflows
- Context Is the Product: how we normalize heterogeneous financial data into clean, searchable context
- The Parsing Problem: the hidden complexity of extracting structured data from adversarial SEC filings
- Skills Are Everything: why markdown-based skills are becoming the product, not the model
- The Model Will Eat Your Scaffolding: designing for obsolescence as models improve
- The S3-First Architecture: why S3 beats databases for file storage and user data
- The File System Tools: how ReadFile, WriteFile, and Bash enable complex financial workflows
- Temporal Changed Everything: reliable long-running tasks with proper cancellation handling
- Real-Time Streaming: building responsive UX with delta updates and interactive agent workflows
- Evaluation Is Not Optional: domain-specific evals that catch errors before they cost money
- Production Monitoring: the observability stack that keeps financial agents reliable

Why financial services is extremely hard. This domain doesn’t forgive mistakes. Numbers matter. A wrong revenue figure, a misinterpreted guidance statement, an incorrect DCF assumption. Professional investors make million-dollar decisions based on our output. One mistake on a $100M position and you’ve destroyed trust forever.

The users are also demanding. Professional investors are some of the smartest, most time-pressed people you’ll ever work with. They spot bullshit instantly. They need precision, speed, and depth. You can’t hand-wave your way through a valuation model or gloss over nuances in an earnings call. This forces me to develop an almost paranoid attention to detail. Every number gets double-checked. Every assumption gets validated. Every model gets stress-tested. You start questioning everything the LLM outputs because you know your users will.
A single wrong calculation in a DCF model and you lose credibility forever. I sometimes feel that the fear of being wrong has become our best feature.

Over the years of building with LLMs, we’ve made bold infrastructure bets early, and I think we’ve been right. For instance, when Claude Code launched with its filesystem-first agentic approach, we immediately adopted it. It was not an obvious bet, and it required a massive revamp of our architecture. I was extremely lucky to have Thariq from Anthropic’s Claude Code team jump on a Zoom and open my eyes to the possibilities. At the time, the whole industry, including Fintool, was building elaborate RAG pipelines with vector databases and embeddings. After reflecting on the future of information retrieval with agents, I wrote “the RAG obituary” and Fintool moved fully to agentic search. We even decided to retire our precious embedding pipeline. Sad, but whatever is best for the future! People thought we were crazy. The article got a lot of praise and a lot of negative comments. Now I feel most startups are adopting these best practices. I believe we’re early on several other architectural choices too. I’m sharing them here because the best way to test ideas is to put them out there. Let’s start with the biggest one.

The Sandbox Is Not Optional

When we first started building Fintool in 2023, I thought sandboxing might be overkill. “We’re just running Python scripts,” I told myself. “What could go wrong?” Haha. Everything. Everything could go wrong. The first time an LLM decided to `rm -rf /` on our server (it was trying to “clean up temporary files”), I became a true believer.

Here’s the thing: agents need to run multi-step operations. A professional investor asks for a DCF valuation, and that’s not a single API call. The agent needs to research the company, gather financial data, build a model in Excel, run sensitivity analysis, generate complex charts, iterate on assumptions.
That’s dozens of steps, each potentially modifying files, installing packages, running scripts. You can’t do this without code execution. And executing arbitrary code on your servers is insane. Every chat application needs a sandbox. Today each user gets their own isolated environment. The agent can do whatever it wants in there. Delete everything? Fine. Install weird packages? Go ahead. It’s your sandbox, knock yourself out. The architecture looks like this: Three mount points. Private is read/write for your stuff. Shared is read-only for your organization. Public is read-only for everyone. The magic is in the credentials. We use AWS ABAC (Attribute-Based Access Control) to generate short-lived credentials scoped to specific S3 prefixes. User A literally cannot access User B’s data. The IAM policy uses ` ${aws:PrincipalTag/S3Prefix} ` to restrict access. The credentials physically won’t allow it. This is also very good for Enterprise deployment. We also do sandbox pre-warming. When a user starts typing, we spin up their sandbox in the background. By the time they hit enter, the sandbox is ready. 600 second timeout, extended by 10 minutes on each tool usage. The sandbox stays warm across conversation turns. So sandboxes are amazing but the under-discussed magic of sandboxes is the support for the filesystem. Which brings us to the next lesson learned about context. Context Is the Product Your agent is only as good as the context it can access. The real work isn’t prompt engineering it’s turning messy financial data from dozens of sources into clean, structured context the model can actually use. This requires a massive domain expertise from the engineering team. The heterogeneity problem. 
Financial data comes in every format imaginable: - SEC filings : HTML with nested tables, exhibits, signatures - Earnings transcripts : Speaker-segmented text with Q&A sections - Press releases : Semi-structured HTML from PRNewswire - Research reports : PDFs with charts and footnotes - Market data : Snowflake/databases with structured numerical data - News : Articles with varying quality and structure - Alternative data : Satellite imagery, web traffic, credit card panels - Broker research : Proprietary PDFs with price targets and models - Fund filings : 13F holdings, proxy statements, activist letters Each source has different schemas, different update frequencies, different quality levels. Agent needs one thing: clean context it can reason over. The normalization layer. Everything becomes one of three formats: - Markdown for narrative content (filings, transcripts, articles) - CSV/tables for structured data (financials, metrics, comparisons) - JSON metadata for searchability (tickers, dates, document types, fiscal periods) Chunking strategy matters. Not all documents chunk the same way: - 10-K filings : Section by regulatory structure (Item 1, 1A, 7, 8...) - Earnings transcripts : Chunk by speaker turn (CEO remarks, CFO remarks, Q&A by analyst) - Press releases : Usually small enough to be one chunk - News articles : Paragraph-level chunks - 13F filings : By holder and position changes quarter-over-quarter The chunking strategy determines what context the agent retrieves. Bad chunks = bad answers. Tables are special. Financial data is full of tables and csv. Revenue breakdowns, segment performance, guidance ranges. LLMs are surprisingly good at reasoning over markdown tables: But they’re terrible at reasoning over HTML `<table>` tags or raw CSV dumps. The normalization layer converts everything to clean markdown tables. Metadata enables retrieval. The user asks the agent: “ What did Apple say about services revenue in their last earnings call? 
” To answer this, Fintool needs: - Ticker resolution (AAPL → correct company) - Document type filtering (earnings transcript, not 10-K) - Temporal filtering (most recent, not 2019) - Section targeting (CFO remarks or revenue discussion, not legal disclaimers) This is why `meta.json` exists for every document. Without structured metadata, you’re doing keyword search over a haystack. It speeds up the search, big time! Anyone can call an LLM API. Not everyone has normalized decades of financial data into searchable, chunked markdown with proper metadata. The data layer is what makes agents actually work. The Parsing Problem Normalizing financial data is 80% of the work. Here’s what nobody tells you. SEC filings are adversarial. They’re not designed for machine reading. They’re designed for legal compliance: - Tables span multiple pages with repeated headers - Footnotes reference exhibits that reference other footnotes - Numbers appear in text, tables, and exhibits—sometimes inconsistently - XBRL tags exist but are often wrong or incomplete - Formatting varies wildly between filers (every law firm has their own template) We tried off-the-shelf PDF/HTML parsers. They failed on: - Multi-column layouts in proxy statements - Nested tables in MD&A sections (tables within tables within tables) - Watermarks and headers bleeding into content - Scanned exhibits (still common in older filings and attachments) - Unicode issues (curly quotes, em-dashes, non-breaking spaces) The Fintool parsing pipeline: Raw Filing (HTML/PDF) ↓ Document structure detection (headers, sections, exhibits) ↓ Table extraction with cell relationship preservation ↓ Entity extraction (companies, people, dates, dollar amounts) ↓ Cross-reference resolution (Ex. 10.1 → actual exhibit content) ↓ Fiscal period normalization (FY2024 → Oct 2023 to Sep 2024 for Apple) ↓ Quality scoring (confidence per extracted field) Table extraction deserves its own work. Financial tables are dense with meaning. 
A revenue breakdown table might have: - Merged header cells spanning multiple columns - Footnote markers (1), (2), (a), (b) that reference explanations below - Parentheses for negative numbers: $(1,234) means -1234 - Mixed units in the same table (millions for revenue, percentages for margins) - Prior period restatements in italics or with asterisks We score every extracted table on: - Cell boundary accuracy (did we split/merge correctly?) - Header detection (is row 1 actually headers, or is there a title row above?) - Numeric parsing (is “$1,234” parsed as 1234 or left as text?) - Unit inference (millions? billions? per share? percentage?) Tables below 90% confidence get flagged for review. Low-confidence extractions don’t enter the agent’s context—garbage in, garbage out. Fiscal period normalization is critical. “Q1 2024” is ambiguous: - Calendar Q1 (January-March 2024) - Apple’s fiscal Q1 (October-December 2023) - Microsoft’s fiscal Q1 (July-September 2023) - “Reported in Q1” (filed in Q1, but covers the prior period) We maintain a fiscal calendar database for 10,000+ companies. Every date reference gets normalized to absolute date ranges. When the agent retrieves “Apple Q1 2024 revenue,” it knows to look for data from October-December 2023. This is invisible to users but essential for correctness. Without it, you’re comparing Apple’s October revenue to Microsoft’s January revenue and calling it “same quarter.” Skills Are Everything Here’s the thing nobody tells you about building AI agents: the model is not the product. The skills are now the product. I learned this the hard way. We used to try making the base model “smarter” through prompt engineering. Tweak the system prompt, add examples, write elaborate instructions. It helped a little. But skills were the missing part. In October 2025, Anthropic formalized this with Agent Skills a specification for extending Claude with modular capability packages. 
A skill is a folder containing a `SKILL.md` file with YAML frontmatter (name and description), plus any supporting scripts, references, or data files the agent might need. We’d been building something similar for months before the announcement. The validation felt good but more importantly, having an industry standard means our skills can eventually be portable. Without skills, models are surprisingly bad at domain tasks. Ask a frontier model to do a DCF valuation. It knows what DCF is. It can explain the theory. But actually executing one? It will miss critical steps, use wrong discount rates for the industry, forget to add back stock-based compensation, skip sensitivity analysis. The output looks plausible but is subtly wrong in ways that matter. The breakthrough came when we started thinking about skills as first-class citizens. Like part of the product itself. A skill is a markdown file that tells the agent how to do something specific. Here’s a simplified version of our DCF skill: That’s it. A markdown file. No code changes. No production deployment. Just a file that tells the agent what to do. Skills are better than code. This matters enormously: 1. Non-engineers can create skills. Our analysts write skills. Our customers write skills. A portfolio manager who’s done 500 DCF valuations can encode their methodology in a skill without writing a single line of Python. 2. No deployment needed. Change a skill file and it takes effect immediately. No CI/CD, no code review, no waiting for release cycles. Domain experts can iterate on their own. 3. Readable and auditable. When something goes wrong, you can read the skill and understand exactly what the agent was supposed to do. Try doing that with a 2,000-line Python module. We have a copy-on-write shadowing system: Priority: private > shared > public So if you don’t like how we do DCF valuations, write your own. Drop it in `/private/skills/dcf/SKILL.md`. Your version wins. 
Why we don’t mount all skills to the filesystem. This is important. The naive approach would be to mount every skill file directly into the sandbox. The agent can just `cat` any skill it needs. Simple, right? Wrong. Here’s why we use SQL discovery instead:

1. Lazy loading. We have dozens of skills with extensive documentation (the DCF skill alone has 10+ industry guideline files). Loading all of them into context for every conversation would burn tokens and confuse the model. Instead, we discover skill metadata (name, description) upfront, and only load the full documentation when the agent actually uses that skill.
2. Access control at query time. The SQL query implements our three-tier access model: public skills available to everyone, organization skills for that org’s users, private skills for individual users. The database enforces this. You can’t accidentally expose a customer’s proprietary skill to another customer.
3. Shadowing logic. When a user customizes a skill, their version needs to override the default. SQL makes this trivial—query all three levels, apply priority rules, return the winner. Doing this with filesystem mounts would be a nightmare of symlinks and directory ordering.
4. Metadata-driven filtering. The `fs_files.metadata` column stores parsed YAML frontmatter. We can filter by skill type, check if a skill is main-agent-only, or query any other structured attribute—all without reading the files themselves.

The pattern: S3 is the source of truth, a Lambda function syncs changes to PostgreSQL for fast queries, and the agent gets exactly what it needs when it needs it.

Skills are essential. I cannot emphasize this enough. If you’re building an AI agent and you don’t have a skills system, you’re going to have a bad time. My biggest argument for skills is that top models (Claude or GPT) are post-trained on using skills. The model wants to fetch skills. Models just want to learn, and what they want to learn is our skills... until they eat them.
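A toy version of that discovery query, with an in-memory SQLite table standing in for PostgreSQL (the schema and column names here are assumptions, not the real `fs_files` table):

```python
import sqlite3

# Toy stand-in for the fs_files table; schema is illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fs_files (name TEXT, tier TEXT, owner TEXT, description TEXT)")
conn.executemany(
    "INSERT INTO fs_files VALUES (?, ?, ?, ?)",
    [
        ("dcf", "public",  None,    "Default DCF skill"),
        ("dcf", "private", "alice", "Alice's custom DCF skill"),
    ],
)

def discover_skill(conn, name, user, org):
    """One query does both access control and shadowing: only rows the caller
    may see are returned, ordered private > shared > public, first row wins."""
    row = conn.execute(
        """
        SELECT description FROM fs_files
        WHERE name = ?
          AND (tier = 'public'
               OR (tier = 'shared'  AND owner = ?)
               OR (tier = 'private' AND owner = ?))
        ORDER BY CASE tier WHEN 'private' THEN 0 WHEN 'shared' THEN 1 ELSE 2 END
        LIMIT 1
        """,
        (name, org, user),
    ).fetchone()
    return row[0] if row else None
```

Alice gets her private copy; any other user falls through to the public default, and nobody ever sees a private row they don’t own.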
The Model Will Eat Your Scaffolding

Here’s the uncomfortable truth: everything I just told you about skills? It’s temporary, in my opinion. Models are getting better. Fast. Every few months, there’s a new model that makes half your code obsolete. The elaborate scaffolding you built to handle edge cases? The model just... handles them now. When we started, we needed detailed skills with step-by-step instructions even for simple tasks. “First do X, then do Y, then check Z.” Now? For a simple task we can often just say “do an earnings preview” and the model figures it out (kind of!).

This creates a weird tension. You need skills today because current models aren’t smart enough. But you should design your skills knowing that future models will need less hand-holding. That’s why I’m bullish on markdown files versus code for model instructions. They’re easier to update and delete. We send detailed feedback to AI labs. Whenever we build complex scaffolding to work around model limitations, we document exactly what the model struggles with and share it with the lab’s research team. This helps inform the next generation of models. The goal is to make our own scaffolding obsolete.

My prediction: in two years, most of our basic skills will be one-liners. “Generate a 20-tab DCF.” That’s it. The model will know what that means. But here’s the flip side: as basic tasks get commoditized, we’ll push into more complex territory. Multi-step valuations with segment-by-segment analysis. Automated backtesting of investment strategies. Real-time portfolio monitoring with complex triggers. The frontier keeps moving. So we write skills. We delete them when they become unnecessary. And we build new ones for the harder problems that emerge. And all of that is files... in our filesystem.

The S3-First Architecture

Here’s something that surprised me: for files, S3 is a better database than a database. We store user data (watchlists, portfolios, preferences, memories, skills) in S3 as YAML files.
S3 is the source of truth. A Lambda function syncs changes to PostgreSQL for fast queries.

Writes → S3 (source of truth)
            ↓
      Lambda trigger
            ↓
   PostgreSQL (fs_files table)
            ↓
Reads ← fast queries

Why?

- Durability: S3 has 11 9’s. A database doesn’t.
- Versioning: S3 versioning gives you audit trails for free.
- Simplicity: YAML files are human-readable. You can debug with `cat`.
- Cost: S3 is cheap. Database storage is not.

The pattern:

- Writes go to S3 directly
- List queries hit the database (fast)
- Single-item reads go to S3 (freshest data)

The sync architecture. We run two Lambda functions to keep S3 and PostgreSQL in sync:

S3 (file upload/delete)
   ↓
SNS Topic
   ↓
fs-sync Lambda → upsert/delete in fs_files table (real-time)

EventBridge (every 3 hours)
   ↓
fs-reconcile Lambda → full S3 vs. DB scan, fix discrepancies

Both use upsert with timestamp guards—newer data always wins. The reconcile job catches any events that slipped through (S3 eventual consistency, Lambda cold starts, network blips).

User memories live here too. Every user has a `/private/memories/UserMemories.md` file in S3. It’s just markdown—users can edit it directly in the UI. On every conversation, we load it and inject it as context:

This is surprisingly powerful. Users write things like “I focus on small-cap value stocks” or “Always compare to industry median, not mean” or “My portfolio is concentrated in tech, so flag concentration risk.” The agent sees this on every conversation and adapts accordingly. No migrations. No schema changes. Just a markdown file that the user controls.

Watchlists work the same way. YAML files in S3, synced to PostgreSQL for fast queries. When a user asks about “my watchlist,” we load the relevant tickers and inject them as context. The agent knows what companies matter to this user. The filesystem becomes the user’s personal knowledge base. Skills tell the agent how to do things. Memories tell it what the user cares about. Both are just files.
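The timestamp guard is the load-bearing detail: a replayed or out-of-order sync event must never clobber newer data. A minimal sketch, with SQLite standing in for PostgreSQL and an assumed schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE fs_files (path TEXT PRIMARY KEY, content TEXT, updated_at INTEGER)"
)

def sync_event(conn, path, content, updated_at):
    """Upsert a file row, but only if this event is newer than what we have."""
    conn.execute(
        """
        INSERT INTO fs_files (path, content, updated_at) VALUES (?, ?, ?)
        ON CONFLICT(path) DO UPDATE SET
            content = excluded.content,
            updated_at = excluded.updated_at
        WHERE excluded.updated_at > fs_files.updated_at
        """,
        (path, content, updated_at),
    )

sync_event(conn, "/private/memories/UserMemories.md", "v2", 200)
sync_event(conn, "/private/memories/UserMemories.md", "v1", 100)  # stale replay: ignored
```

After both events the row still holds "v2", which is exactly why a reconcile job can safely re-run over everything: applying the same events twice, in any order, converges to the same state.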
The File System Tools

Agents in financial services need to read and write files. A lot of files. PDFs, spreadsheets, images, code. Here’s how we handle it. ReadFile handles the complexity: WriteFile creates artifacts that link back to the UI: Bash gives persistent shell access with a 180-second timeout and a 100K-character output limit. Path normalization on everything (LLMs love trying path traversal attacks, it’s hilarious).

Bash is more important than you think. There’s a growing conviction in the AI community that filesystems and bash are the optimal abstraction for AI agents. Braintrust recently ran an eval comparing SQL agents, bash agents, and hybrid approaches for querying semi-structured data. The results were interesting: pure SQL hit 100% accuracy but missed edge cases. Pure bash was slower and more expensive but caught verification opportunities. The winner? A hybrid approach where the agent uses bash to explore and verify, and SQL for structured queries. This matches our experience. Financial data is messy. You need bash to grep through filing documents, find patterns, explore directory structures. But you also need structured tools for the heavy lifting. The agent needs both—and the judgment to know when to use each.

We’ve leaned hard into giving agents full shell access in the sandbox. It’s not just for running Python scripts. It’s for exploration, verification, and the kind of ad-hoc data manipulation that complex tasks require. But complex tasks mean long-running agents. And long-running agents break everything.

Temporal Changed Everything

Before Temporal, our long-running tasks were a disaster. User asks for a comprehensive company analysis. That takes 5 minutes. What if the server restarts? What if the user closes the tab and comes back? What if... anything? We had a homegrown job queue. It was bad. Retries were inconsistent. State management was a nightmare. Then we switched to Temporal and I wanted to cry tears of joy! That’s it.
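The path-normalization guard can be sketched in a few lines. The sandbox root and function name are assumptions for illustration, not the actual tool code:

```python
import posixpath

SANDBOX_ROOT = "/sandbox"  # hypothetical mount point

def normalize_path(user_path: str) -> str:
    """Resolve a model-supplied path and reject traversal out of the sandbox."""
    candidate = posixpath.normpath(
        posixpath.join(SANDBOX_ROOT, user_path.lstrip("/"))
    )
    # After normalization, the path must still sit under the sandbox root.
    if candidate != SANDBOX_ROOT and not candidate.startswith(SANDBOX_ROOT + "/"):
        raise ValueError(f"path escapes sandbox: {user_path!r}")
    return candidate
```

`normalize_path("reports/q3.pdf")` stays inside the sandbox, while `"../../etc/passwd"` normalizes to `/etc/passwd` and gets rejected.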
Temporal handles worker crashes, retries, everything. If a Heroku dyno restarts mid-conversation (happens all the time lol), Temporal automatically retries on another worker. The user never knows. Cancellation handling is the tricky part. The user clicks “stop”; what happens? The activity is already running on a different server. We use heartbeats sent every few seconds.

We run two worker types:

- Chat workers: user-facing, 25 concurrent activities
- Background workers: async tasks, 10 concurrent activities

They scale independently. Chat traffic spikes? Scale chat workers. Next is speed.

Real-Time Streaming

In finance, people are impatient. They’re not going to wait 30 seconds staring at a loading spinner. They need to see something happening. So we built real-time streaming. The agent works, you see the progress.

Agent → SSE Events → Redis Stream → API → Frontend

The key insight: delta updates, not full state. Instead of sending “here’s the complete response so far” (expensive), we send “append these 50 characters” (cheap).

Streaming rich content with Streamdown. Text streaming is table stakes. The harder problem is streaming rich content: markdown with tables, charts, citations, math equations. We use Streamdown to render markdown as it arrives, with custom plugins for our domain-specific components. Charts render progressively. Citations link to source documents. Math equations display properly with KaTeX. The user sees a complete, interactive response building in real-time.

AskUserQuestion: interactive agent workflows. Sometimes the agent needs user input mid-workflow. “Which valuation method do you prefer?” “Should I use consensus estimates or management guidance?” “Do you want me to include the pipeline assets in the valuation?”
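The delta-update idea is easy to sketch. The event shape below is invented for illustration; real SSE payloads will differ:

```python
# Producer: turn a stream of text chunks into append-only events
# instead of re-sending the full accumulated response each time.
def delta_events(chunks):
    offset = 0
    for chunk in chunks:
        if chunk:  # skip empty chunks entirely
            yield {"type": "append", "offset": offset, "text": chunk}
            offset += len(chunk)

# Consumer: the frontend rebuilds the full text from the deltas.
def apply_deltas(events):
    buf = []
    for event in events:
        if event["type"] == "append":
            buf.append(event["text"])
    return "".join(buf)
```

Each event carries only the new characters, so total bytes on the wire scale with the response length once, instead of quadratically as they would if the full accumulated state were re-sent on every tick.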

Jeff Geerling 1 month ago

Raspberry Pi is cheaper than a Mini PC again (that's not good)

Almost a year ago, I found that N100 Mini PCs were cheaper than a decked-out Raspberry Pi 5 . So I'm comparing systems with:

16GB of RAM
512GB NVMe SSD
Including case, cooler, and power adapter

Back in March last year, a GMKtec Mini PC was $159, and a similar-spec Pi 5 was $208. Today? The same GMKtec Mini PC is $246.99, and the same Pi 5 is $246.95. Today, because of the wonderful RAM shortages 1 , the Mini PC is the same price as a fully kitted-out Raspberry Pi 5.

Lalit Maganti 1 month ago

One Number I Trust: Plain-Text Accounting for a Multi-Currency Household

Two people. Eighteen accounts spanning checking, savings, credit cards, investments. Three currencies. Twenty minutes of work every week. One net worth number I actually trust. The payoff: A single, trustworthy net worth number growing over time. No app did exactly what I needed, so I built my own personal finance system using plain-text accounting principles and a powerful Python library called Beancount . This post shows you how I handle imports, investments, multi-currency, and a two-person view. It all started during the 2021 tax season. I had blocked out an entire weekend and was juggling statements, trying to compute capital gains, stressing about getting the numbers mixed up. “This is chaos”, I thought. “There must be a way to simplify this with automation”. Being a software engineer, I did what felt natural and hacked together a bunch of scripts on top of a database.
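The end state described here (one trustworthy multi-currency net worth number) boils down to converting every balance into a single reporting currency. A toy sketch with invented accounts and rates; a real Beancount setup would pull balances and prices from the ledger itself:

```python
# Invented example balances in Beancount-style account names, plus
# illustrative exchange rates -- not real data.
BALANCES = {
    "Assets:US:Checking":      (5_000.0, "USD"),
    "Assets:UK:Savings":       (3_000.0, "GBP"),
    "Liabilities:Credit-Card":  (-800.0, "EUR"),
}
RATES_TO_USD = {"USD": 1.0, "GBP": 1.25, "EUR": 1.10}

def net_worth_usd(balances, rates):
    """Fold every account, whatever its currency, into one USD number."""
    return sum(amount * rates[currency] for amount, currency in balances.values())
```

With these made-up numbers the single trusted figure comes out to $7,870.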


2025, A Retrospective

I'm not dropping this on the actual newsletter feed because it's a little self-indulgent and I'm not sure 88,000 or so people want an email about it. I have a lot of trouble giving myself credit for anything, and genuinely think I could be doing more or that I "didn't do that much" because I'm at a computer or on a microphone versus serving customers in person or something or other. To try and give some sort of scale to the work from the last year, I've written down the highlights. It appears that 2025 was an insane year for me. Here's the rundown: I also did no less than 50 different interviews, with highlights including: Next year I will be finishing up my book Why Everything Stopped Working (due out in 2027), and continuing to dig into the nightmare of corporate finance I've found myself in the center of. I have no idea what happens next. My fear - and expectation - is that many people still do not realize that there is an AI bubble or will not accept how significant and dangerous the bubble is, meaning that everybody is going to act like AI is the biggest, most hugest and most special thing in the world right up until they accept that it isn't. I will always cover tech, but I get the sense I'll be looking into other things next year - private equity, for one - that have caught my eye toward the end of the year. I realize right now everything feels a little intense and bleak, but at this time of year it's always worth remembering to be kinder and more thoughtful toward those close to us. It's cheesy, but it's the best thing you can possibly do. It's easy to feel isolated by the amount of hogs oinking at the prospect of laying you off or replacing you - and it turns out there are far more people that are afraid or outraged than there are executives or AI boosters. Never forget (or forgive them for) what they've done to the computer, and never forget that those scorned by the AI bubble are legion. Join me on r/Betteroffline ; you are far from alone.
I intend to spend the next year becoming a better writer, analyst, broadcaster, entertainer and person. I appreciate every single one of you that reads my work, and hope you'll continue to do so in the future. See you in 2026, [email protected]

- Cory Doctorow quoted me at the very front of his new book .
- I recorded over 110 episodes of my tech podcast Better Offline , starting with a 13.5 hour-long pop-up radio show at CES 2025. And yes, it's back next week, featuring David Roth, Adam Conover, Ed Ongweso Jr., Chloe Radcliffe, Robert Evans, Gare Davis, Cory Doctorow and a host of other great guests. Better Offline also won the Webby for best business podcast episode for last year's episode The Man That Destroyed Google Search .
- I also had some fantastic interviews, like when I went out to North Carolina to interview Steve Burke of GamersNexus , chatted to author Adam Becker about the technoligarchs , Pablo Torres and David Roth about independent media , and even comedian Andy Richter .
- I wrote over 440,000 words, not including the work I've done on the book or any notes I took to prepare for my show or newsletter.
- The newsletter also grew from 47,000-ish people at the end of last year to around 88,500 people. I want to be at 150,000 this time next year.

I wrote some of my favourite free newsletters (many of which were turned into episodes of the show):

- Deep Impact , my analysis of the DeepSeek situation and why it scared the American AI industry (clue: it's cost-related and nothing to do with "national security").
- Power Cut , an early warning sign that the bubble was bursting as Microsoft pulled out of gigawatts of data center deals.
- CoreWeave Is A Time Bomb , published March 17 2025, way before most had even bothered to think about this company deeply, a savage analysis of a "neocloud" - a company that only sells AI compute - backed by NVIDIA, who is also a customer, who CoreWeave also buys billions of GPUs from.
- The Era of the Business Idiot , probably my favourite piece I wrote this year, the story of how middle management has seized power, breeding out true meritocracy and value-creation in favor of symbolic growth and superficial intelligence. It ties together everything I've ever written.
- Make Fun Of Them , the piece that restarted my fire after a bit of a low point, where I call for a radical new approach to tech CEOs: mocking them, because they talk like idiots and provide little value to society outside of their dedication to shareholder value.
- The Hater's Guide To The AI Bubble , a piece that elevated me in a way that I never expected, a thorough and brutal broadside against an industry that has no profits and terrible costs, discussing how generative AI is nothing like Uber or Amazon Web Services, there are no profitable generative AI companies, agents do not and cannot exist, there is no AI SaaS story, and everything rides - and dies - on selling GPUs.
- AI Is A Money Trap , a piece about how AI companies' ridiculous valuations and unsustainable businesses make exits or IPOs impossible, how data center developers have no exit route, and how US economic growth has become shouldered entirely by big tech.
- How To Argue With An AI Booster , a comprehensive guide to arguing with AI boosters, addressing both their bad-faith debate style and their specific (and flimsy) arguments as to why generative AI is the future.
- The Case Against Generative AI , a comprehensive analysis of a financial collapse built on myths, the markets' unhealthy obsession with NVIDIA's growth, and the fact that there is not enough money in the world to fund OpenAI.
- NVIDIA Isn't Enron, So What Is It? - a lighthearted and in-depth analysis of NVIDIA as a company, a historic rundown of what happened with Lucent, WorldCom and Enron, as well as a guide to how it makes money, how its future relies on endless debt, how millions of GPUs are sitting waiting to be installed, and why it no longer makes sense to buy more GPUs.
- The Enshittifinancial Crisis , a piece about the fourth stage of enshittification, where companies turn on their shareholders, and how unprofitable, unsustainable AI threatens the future of venture capital, private equity and the markets themselves.

I published two massive exclusives:

- How Much Anthropic and Cursor Spend On Amazon Web Services , which is exactly what it sounds like.
- How Much OpenAI Spends On Inference and Its Revenue Share With Microsoft , which also includes evidence that OpenAI's revenues were at around $4.5 billion by the end of September, a vast difference from the $4.3 billion for the first half of the year published by other outlets. The Financial Times , The Register and TechCrunch covered it, while others aggressively ignored it.

I launched the premium edition of my newsletter, and published multiple deeply important pieces of research:

- The Hater's Guide to NVIDIA , the single most exhaustive rundown of the rickety nature of the company sitting at the top of the stock market – how its future is dependent on massive debt, how AI revenues will never pay back the cost of these GPUs, and how there are likely millions of GPUs sitting in warehouses, as there's no chance that 6 million Blackwell GPUs have actually been installed and turned on. Published November 24 2025, I made this call several weeks before famed short seller Michael Burry would do the same .
- How Does GPT-5 Work? - an exclusive piece (reported using internal documents from an infrastructure provider) on how GPT-5's router mode actually costs OpenAI more money to run.
- OpenAI Burned $4.1 Billion More Than We Knew - Where Is Its Money Going? - an analysis of reported cash burn and investments in OpenAI that proved the company burned more than $4 billion more than we know.
- OpenAI and Oracle Are Full of Crap - on September 12 2025, months before anybody started worrying about it, I published proof that OpenAI couldn't afford to pay Oracle and Oracle didn't have the capacity to service their farcical $300 billion, 5-year-long deal .
- OpenAI Needs A Trillion Dollars In The Next Four Years - on September 26 2025, I published a thorough review and analysis of OpenAI's agreed-upon compute and data center deals, and proved that it needed at least $1 trillion in the next four years to pull any of it off, several weeks before anyone else did .
- The Hater's Guide To The AI Bubble Volume 2 : a massive omnibus summary of every major AI company's weaknesses - the pathetic revenues, terrible margins and horrifying costs, and how hopeless everything feels.
- My own interview in the New Yorker's legendary "Talk Of The Town" section .
- Profiles with Slate , the Financial Times and FastCompany .
- An interview with MarketWatch about The Hater's Guide to the AI Bubble .
- A panel in Seattle with Cory Doctorow about Enshittification and The Rot Economy .
- A chat with Brooke Gladstone on NPR about the AI bubble .
- Two interviews with the BBC.
- An interview with Van Lathan and Rachel Lindsay on The Ringer's Higher Learning .
- Two episodes of Chapo Trap House.
- Interviews with The Lever , Parker Molloy's The Present Age , Bloomberg's Everybody's Business , The Majority Report , Newsweek's 1600 Podcast , TechCrunch , Defector , the New Yorker (by the legendary Cal Newport) , Guy Kawasaki's Remarkable People , both Slate's Death, Sex & Money and the excellent TBD podcast , TrashFuture multiple times, The Times Radio (I think multiple times?) and NPR Marketplace .
- Citations in an astonishing amount of major media outlets, with highlights including The Economist , The Guardian , Charlie Brooker (!) in The Hollywood Reporter , ArsTechnica , CNN , Semafor and ZDNet .


The Enshittifinancial Crisis

Soundtrack: Lynyrd Skynyrd — Free Bird This piece is over 19,000 words, and took me a great deal of writing and research. If you liked it, please subscribe to my premium newsletter. It’s $70 a year, or $7 a month, and in return you get a weekly newsletter that’s usually anywhere from 5000 to 15,000 words, including vast, extremely detailed analyses of NVIDIA , Anthropic and OpenAI’s finances , and the AI bubble writ large . I am regularly several steps ahead in my coverage, and you get an absolute ton of value. In the bottom right hand corner of your screen you’ll see a red circle — click that and select either monthly or annual.  Next year I expect to expand to other areas too. It’ll be great. You’re gonna love it.  If you have any issues signing up for premium, please email me at [email protected]. One time, a good friend of mine told me that the more I learned about finance, the more pissed off I’d get. He was right. There is an echoing melancholy to this era, as we watch the end of Silicon Valley’s hypergrowth era, the horrifying result of 15+ years of steering the tech industry away from solving actual problems in pursuit of eternal growth. Everything is more expensive, and every tech product has gotten worse, all so that every company can “do AI,” whatever the fuck that means. We are watching one of the greatest wastes of money in history, all as people are told that there “just isn’t the money” to build things like housing, or provide Americans with universal healthcare, or better schools, or create the means for the average person to accumulate wealth. The money does exist, it just exists for those who want to gamble — private equity firms, “ business development companies ” that exist to give money to other companies , venture capitalists, and banks that are getting desperate and need an overnight shot of capital from the Federal Reserve’s Overnight Repurchase Facility or Discount Window , two worrying indicators of bank stress I’ll get into later. 
No, the money does not exist for you or me or a person . Money is for entities that could potentially funnel more money into the economy , even if the ways that these entities use the money are reckless and foolhardy, because the system’s intent on keeping entities alive incentivizes it. We are in an era where the average person is told to pull up their bootstraps, to work harder, to struggle more , because, as Martin Luther King Jr. once said, it’s socialism for the rich and rugged free market capitalism for the poor. The “free market” is a fucking con . When you or I run out of money, our things are taken from us, we receive increasingly-panicked letters, we get phone calls and texts and emails and demands, we are told that all will be lost if we don’t “work it out,” because the financial system is not about an exchange of value but whether or not you can enter into the currently agreed-upon con.  By letting neoliberalism and the scourge of the free markets rule , modern society created the conditions for what I call The Enshittifinancial Crisis — the place at which my friend Cory Doctorow’s Enshittification Theory meets my own Rot Economy Thesis in a fourth stage of Enshittification. Per The New Yorker : I’ll walk you through it. Facebook was a huge, free platform, much like Instagram, that offered fast and easy access to everybody you knew. It acquired Instagram in 2012 to kill off a likely competitor, and over time would start making both products worse — clickbait notifications, a mandatory algorithmic feed that deliberately emotionally manipulated people and stoked political division, eventually becoming full of AI slop and videos, all so that Meta could continue to sell billions of dollars of ads a quarter. 
Per Kyle Chayka of the New Yorker, “Facebook’s feed, now choked with A.I.-generated garbage and short-form videos, is well into the third act of enshittification.” The third stage is critical, in that it’s when the company also turns on its business customers. A Marketing Brew story from September of last year told the tale of multiple advertisers who found their campaigns switching to different audiences, wasting their money and getting questionable results. A New York Times story from 2021 described companies losing upwards of 70% of their revenue during a Facebook ads outage , another from 2018 described how Meta (then Facebook) deliberately hid issues with its measurement of engagement on videos from advertisers for over a year , and more recently, Meta’s ads tools started switching out top-performing ads with AI-generated ones , in one case targeting men aged 30 to 45 with an AI-generated grandma, all without warning the advertiser . Meta doesn’t give a shit, because investors and analysts don’t give a shit. I could say “sell-side analysts” here — the ones that are trying to get you to buy a stock — but based on every analyst report I’ve read from a major bank or hedge fund, I truly think everybody is complicit.  In November 2025, Reuters revealed that Meta projected in late 2024 that 10% of its annual revenue ($16 billion) would come from advertisements for scams or banned goods , mere weeks after Meta announced a ridiculous $27 billion data center debt package , one that used deep accountancy magic to keep it off of its balance sheet despite Meta guaranteeing the entirety of the loan. One would think this would horrify investors for two reasons: One would be wrong. Morgan Stanley said a few weeks ago that it is “one of the handful of companies that can leverage its leading data, distribution and investments in AI,” and raised its target to $750, with a $1000-a-share bull case. 
Wedbush raised Meta’s price to $920, and Bank of America staunchly held firm at…$810 . I can find no analyst commentary on Meta making sixteen billion dollars on fraud , because it doesn’t matter to them, because this is the Rot Economy, and all that matters is number go up.   Reality — such as whether there’s any revenue in AI, or whether it’s a good idea that Meta is spending over $70 billion this year on capital expenditures on a product that has generated no revenue (and please, fucking spare me the bullshit around “Meta’s AI ads play,” that whole story is nonsense) — doesn’t matter to analysts, because stocks are thoroughly, inextricably enshittified, and analysts don’t even realize it’s happening. The stages of enshittification usually involve some sort of devil’s deal.  We have now entered Enshittification Stage 4, where businesses turn on shareholders. Analysts and investors have become trapped in the same kind of loathsome platform play as consumers and businesses, and face exactly the same kinds of punishment through the devaluation of the stock itself. Where platforms have prioritized profits over the health and happiness of users or business customers, they are now prioritizing stock value over literally anything , and have — through the remarkable growth of tech stocks in particular — created a placated and thoroughly whipped investor and analyst sect that never asks questions and always celebrates whatever the next big thing is meant to be. 
The value of a “stock” is not based on whether the business is healthy, or its future certain, but on its potential price to grow, and analysts have, thanks to an incredible bull run of tech stocks that has gone on for over a decade, been able to say “I bet software will be big” for most of the time, going on CNBC or Bloomberg and blandly repeating whatever it is that a tech CEO just said, all without any worries about “responsibility” or “the truth.”  This is because big tech stocks — and many other big stocks, if I’m honest — have made their lives easy as long as they don’t ask questions. Number always seems to be going up for software companies, and all you need to do is provide a vociferous defense of the “next big thing,” and come up with a smart-sounding model that justifies eternal growth.  This is entirely disconnected from the products themselves, which don’t matter as long as Number Go Up . If net income is high and the company estimates it will continue to grow, then the company can do whatever the fuck it wants with the product it sells or the things that it buys. Software Has Eaten The World in the sense that Andreessen got his wish, with investors now caring more about the “intrinsic value” of software companies than about the businesses or products themselves. And because that’s happening, investors aren’t bothering to think too hard about the tech itself, or the deteriorating products underlying tech companies, because “these guys have always worked it out” and “these companies have always managed to keep growing.” As a result, nobody really looks too deep.
Minute changes to accounting in earnings filings are ignored, egregious amounts of debt are waved off, and hundreds of billions of dollars of capital expenditures are seen as “the new AI revolution” versus “a huge waste of money.” By incentivizing the Rot Economy — making stocks disconnected from the value of the company beyond net income and future earnings guidance — companies have found ways to enshittify their own stocks, and shareholders will be the ones who suffer, all thanks to the very downstream pressure that they’ve chosen to ignore for decades. You see, while one might (correctly) see that the deterioration of products like Facebook and Google Search was a sign of desperation, it’s important to also see it as the companies themselves orienting around what they believe analysts and investors want to see.   You can also interpret this as weakness, but I see it another way: stock manipulation, and a deliberate attempt to reshape what “value” means in the eyes of customers and investors. If the true value of a stock is meant to be based on the value of its business, cash flow, earnings and future growth, a company deliberately changing its products is an intentional interference with value itself, as are any and all deceptive accounting practices used to boost valuations. But the real problem is that analysts do not…well…analyze, not, at least, if it goes against the market consensus. That’s why Goldman Sachs and JP Morgan and Futurum and Gartner and Forrester and McKinsey and Morgan Stanley all said that the metaverse was inevitable — because they do not actually care about the underlying business itself, just its ability to grow on paper.  Need proof that none of these people give a fuck about actual value? Mark Zuckerberg burned $77 billion on the metaverse , creating little revenue or shareholder value and also burning all that money without any real explanation as to where it went. 
The street didn’t give a shit because Meta’s existing ads business continued to grow, same as it didn’t give a shit that Mark Zuckerberg burned $70 billion on capex, even though we also really don’t know where that went either. In fact, we really have no idea where all this AI spending is going. These companies don’t tell us anything. They don’t tell us how many GPUs they have, or where those GPUs are, or how many of them are installed, or what their capacity is, or how much money they cost to run, or how much money they make. Why would they? Analysts don’t even look at earnings beyond making sure they beat on estimates. They’ve been trained for 20 years to take a puddle-deep look at the numbers to make sure things look okay, look around their peers and make sure nobody else is saying something bad, and go on and collect fees.  The same goes for hedge funds and banks propping up these stocks rather than asking meaningful questions or demanding meaningful answers. In the last two years, every major hyperscaler has extended the “useful life” of its servers from 3 years to either 5.5 or 6 years — and in simple terms, this allowed them to incur a smaller depreciation expense each quarter, boosting net income. (To put illustrative numbers on it: $12 billion of servers depreciated over three years costs $1 billion a quarter; stretch the useful life to six years and the quarterly expense halves to $500 million, with net income rising by the same amount.) Those who are meant to be critical — analysts and investors sinking money into these stocks — had effectively no reaction, despite the fact that Meta used ( per the Wall Street Journal ) this adjustment to reduce its expenses by $2.3 billion in the first three quarters of this year.   This is quite literally disconnected from reality, and done based on internal accounting that we are not party to. Every single tech firm buying GPUs did this and benefited to the tune of billions of dollars in decreased expenses, and analysts thought it was fine and dandy because number went up.  
Shareholders are now subordinate to the shares themselves, reacting in the way that the shares demand, grateful for whatever the companies behind the shares give them, while analysts, investors and even the media spend far more energy fighting the doubters than they do applying scrutiny to these companies. Much like users of an enshittified platform, investors and analysts are frogs in a pot, the experience of owning a stock deteriorating ever since Jack Welch and GE taught corporations that the markets are run with the kind of simplistic mindset built for grifter exploitation. And much like those platforms, corporations have found as many ways as possible to abuse shareholders, seeing what they can get away with, seeing how far they can push things as long as the numbers look right, because analysts are no longer looking for sensible ideas. Let me give you an example I’ve used before. Back in November 1998, Winstar Communications signed a “$2 billion equipment and finance agreement with Lucent Technologies” where Winstar would borrow money from Lucent to buy stuff from Lucent, all to create $100 million in revenue over 5 years. In December 1999, Barron’s wrote a piece called “In 1999 Tech Ruled,” celebrating companies just like these. How did they fare? Airnet? Bankrupt. Winstar? Horribly bankrupt. While Ciena survived, it had spent over a billion dollars to acquire other companies (all stock, of course), only to see its revenue dwindle basically overnight from $1.6bn to $300 million as the optical cable industry collapsed.
Anyone who looked at the numbers — say, how much these companies made versus how much they were spending — could have worked out that Winstar was a dog, or that all of these companies were dogs. Instead, analysts, the media and banks chose to pump up these stocks because the numbers kept getting bigger, and when the collapse happened, rationalizations were immediately created: there were a few bad apples (Enron, Winstar, WorldCom), “the fiber was useful” and thus laying it was worthwhile, and otherwise everything was fine. The problem, in everybody else’s mind, was that everybody had got a bit distracted and some companies that weren’t good would die. All of that lost money was only a problem because it didn’t pay off. It was written off as a misplaced gamble, and it taught tech executives one powerful lesson: earnings must be good, without fail, by any means necessary, and otherwise nothing else matters to Wall Street. It’s all about incentives. A sell-side analyst that tells you not to buy something is a problem. A journalist that is skeptical or critical of an industry in the midst of a growth or hype cycle is considered a “hater” — don’t I fucking know it. Analysts that do not sing the same tune as everybody else are marginalized, mocked and aggressively policed. And I don’t fucking care. Stop being fucking cowards. By not being skeptical or critical you are going to lead regular people into the jaws of another collapse. The dot com bubble was actually a great time to start reevaluating how and why we value stocks — to say “hey, wait, that $2 billion deal will only make $100 million in revenue?” or “this company spends $5 for every $1 it makes!” — but nobody, it appears, remained particularly suspicious of the tech industry, or of a stock market that was increasingly orienting itself around conning shareholders.
And because shareholders, analysts and the media alike refused to retain a single shred of suspicion leaving the dot com era, the mania never actually subsided. Financial publications still found themselves dedicated to explaining why the latest hype cycle was real. Journalists still found themselves told by editors that they had to cover the latest fad, even if it was nonsensical or clearly rotten. Analysts still grabbed their swords and rushed to protect the very companies that have spent decades misleading them. Much like we spent years saying that Facebook was a “good deal” because it was free, analysts and investors say tech stocks are “great to hold” because they keep growing, even if the reason they “keep growing” is a series of interlocking monopolies, difficult-to-leave platforms and impossible-to-fight traction and pricing, all of which have an eventual sell-by date. I realize I’m pearl-clutching over the amoral status of capitalism and the stock market, but hear me out: what if we’re actually in a 15-to-20-year-long knife-catching competition? What if all anybody has done is look at cash flow, net income and future growth guidance, and called it a day? A lack of scrutiny has allowed these companies to do effectively anything they want, free of worrisome questions like “will this ever make a profit?” What if we basically don’t know what the fuck is going on? What if all of this was utterly senseless? As I wrote last year, the tech industry has run out of hypergrowth ideas, facing something I call “the Rot Com bubble.” In simple terms, they’re only “doing AI” because there do not appear to be any other viable ideas to continue the Rot Economy’s eternal growth-at-all-costs dance.
Yet because growth hasn’t slowed yet, analysts, the media and other investors are quick to claim that AI is “paying off,” even though nobody ever says how much AI revenue is actually being generated — or, in the case of Salesforce, the company says “nearly $1.4 billion ARR,” which sounds really big until you realize a company with $10.9 billion in revenue is boasting about making less than $117 million in revenue in a month. Nevertheless, because Salesforce set a new revenue target of $60 billion by 2030, the stock jumped 4%. It doesn’t matter that most Agentforce customers don’t pay for the service, or that AI isn’t really making much money, or really anything, other than Number Go Up. The era we live in is one of abject desperation, to the point that analysts and investors — and shareholders by extension — will take any abuse from management. They will allow companies to spend as much money as they want in whatever ways they want, as long as it continues the charade of “number go up.” Let me spell it out a little more, using the latest earnings of various hyperscalers as an example. What is all that spending actually earning? We have no idea, because analysts and investors are in an abusive relationship with tech stocks. It is fundamentally insane that Microsoft, Meta, Amazon and Google have spent $776 billion in capital expenditures in the space of three years, and even more so that analysts and investors, when faced with such egregious numbers, simply sit back and say “they’re building the infrastructure of the future, baby!” Analysts and traders and investors and reporters do not think too hard about the underlying numbers, because doing so immediately makes you run head-first into a number of worrying questions such as “where did all that money go?” and “will any of this pay off?” and “how many GPUs do they actually own?” Analysts have, on some level, become the fractional marketing team for the stocks they’re investing in.
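The Salesforce arithmetic above takes a few lines to check. ARR is an annualized run rate, so dividing by 12 gives the implied month; both inputs are the figures cited in this piece:

```python
# Sanity-checking the ARR framing: ARR (annual recurring revenue) is an
# annualized run rate, so dividing by 12 gives the implied monthly figure.
# Both inputs below are the figures cited in the piece above.

agentforce_arr = 1.4e9      # "nearly $1.4 billion ARR"
quarterly_revenue = 10.9e9  # Salesforce revenue figure cited above

monthly_ai_revenue = agentforce_arr / 12
quarterly_share = (monthly_ai_revenue * 3) / quarterly_revenue

print(f"Implied monthly AI revenue: ${monthly_ai_revenue / 1e6:.0f}M")
print(f"Share of a $10.9B quarter:  {quarterly_share:.1%}")
```

A single-digit-percentage sliver of revenue, presented as the headline, moving the stock 4%.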
When Oracle announced its $300 billion deal with OpenAI in September — one that Oracle does not have the capacity to fill and OpenAI does not have the money to pay for — analysts heaved and stammered like horny teenagers seeing their first boob. These are the same people that retail and institutional investors rely upon for advice on what stocks to buy, all acting with the disregard for the truth that comes from years of never facing a consequence. Three months later, Oracle has lost basically all of the stock bump it saw from the OpenAI deal, meaning that any retail investor that YOLO’d into the trade because, say, analysts from major institutions said it was a good idea and news outlets acted like this deal was real, already got their ass kicked. And please, spare me the “oh, they shouldn’t trade off of analysts” bullshit. That’s the kind of victim-blaming that allows these revered fuckwits to continue farting out these meaningless calls. In reality, we’re in an era of naked, blatant, shameless stock manipulation, both privately and publicly, because a “stock” no longer refers to a unit of ownership in a company so much as a chip at a casino where the house constantly changes the rules. Perhaps you’re able to occasionally catch the house showing its hand, and perhaps the house meant for you to see it. Either way, you are always behind, because the people responsible for buying and selling stocks at scale under the auspices of “knowing what’s going on” don’t seem to know what they’re talking about, or don’t care to find out. Let’s walk through the latest surge of blatant stock manipulation, and how the media and analysts helped it happen. Oracle announces its unfillable, unpayable $300 billion deal with OpenAI, leading to a 30%+ bump in its stock price. Analysts, who should ostensibly be able to count, call it “momentous” and say they’re “in shock.” On September 22, 2025, CEO Safra Catz steps down, and nobody seems to think that’s weird or suspicious.
Two months later, Oracle’s stock is down 40%, with investors worried about Oracle’s growing capex, which is surprising, I suppose, if you didn’t think about how Oracle would build the fucking data centers. Basically anyone who traded into this got burned. NVIDIA announced a “strategic partnership” to invest “up to $100 billion” and build 10GW of data centers with OpenAI, with the first gigawatt to be deployed in the second half of 2026. Where would the data centers go? How would OpenAI afford to build them? How would OpenAI build a gigawatt in less than a year? Don’t ask questions, pig! NVIDIA’s stock bumped from $175.30 to $181 in the space of a day. The media wrote about the story as if the deal was done, with CNBC claiming that “the initial $10 billion tranche [was] expected to close within a month or so once the transaction has been finalized.” I read at least ten stories that said that “NVIDIA had invested $100 billion.” Analysts would say that NVIDIA was “locking in OpenAI” to “remain the backbone of the next-gen AI infrastructure,” that “demand for NVIDIA GPUs is effectively baked into the development of frontier AI models,” that the deal “[strengthened] the partnership between the two companies…[and] validates NVIDIA’s long-term growth numbers with so much volume and compute capacity.” Others would say that NVIDIA was “enabling OpenAI to meet surging demand.” Three analysts — Rasgon at Bernstein, Luria at D.A. Davidson and Wagner at Aptus Capital — all raised circular deal concerns, but they were the minority, and those concerns were still often buried under buoyant optimism about the prospects of the company. One eensy weensy problem though, everyone! This was a “letter of intent” — it said so in the announcement! — and in NVIDIA’s November earnings, the company said it had “entered into a letter of intent with an opportunity to invest in OpenAI.” It turns out the deal didn’t exist and everybody fell for it! NVIDIA hasn’t sent a dime and likely won’t.
A letter of intent is a “concept of a plan.” Back in October, Reuters reported that Samsung and SK Hynix had "signed letters of intent to supply memory chips for OpenAI's data centers," with South Korea's presidential office saying chip demand was expected to reach "900,000 wafers a month," with "much of that from Samsung and SK Hynix," which was quickly extrapolated to mean around 40% of global DRAM output. Stocks in both companies, to quote Reuters, “soared,” with Samsung climbing 4% and SK Hynix more than 12% to an all-time high. Analyst Jeff Kim of KB Securities said that “there have been worries about high bandwidth memory prices falling next year on intensifying competition, but such worries will be easily resolved by the strategic partnership,” adding that “Since Stargate is a key project led by President Trump, there also is a possibility the partnership will have a positive impact on South Korea's trade negotiations with the U.S.” Donald Trump is not “leading Stargate.” Stargate is a name used to refer to data centers built by OpenAI. KB Securities has around $43 billion of assets under management. This is the level of analysis you get from these analysts! This is how much they know! On SK Hynix's October 29, 2025 earnings call, weeks after the announcement, its CEO, Kim Woo-Hyun, was asked a question about High Bandwidth Memory growth by SK Kim of Daiwa Securities. His answer was the only mention of OpenAI. Otherwise, SK Hynix has not added any guidance that would suggest that its DRAM sales will spike beyond overall growth, other than mentioning it had "completed year 2026 supply discussions with key customers." There is no mention of OpenAI in any earnings presentation. On Samsung's October 30, 2025 earnings call, Samsung mentioned the term "DRAM" 18 times, and mentioned neither OpenAI nor any letters of intent.
In its Q3 2025 earnings presentation, Samsung mentions it will "prioritize the expansion of the HBM4 [high bandwidth memory 4] business with differentiated performance to address increasing AI demand." Analysts do not appear to have noticed the lack of revenue from an apparent deal for 40% of the world’s RAM! Oh well! Pobody’s nerfect! Both Samsung and SK Hynix’s stocks have continued to rise since, and you’d be forgiven for thinking this deal had something to do with it, even though it didn’t. AMD announced that it had entered a “multi-year, multi-generation agreement” with OpenAI to build 6 GW of data centers, with “the first 1GW deployment set to begin in the second half of 2026,” calling the agreement “definitive,” with terms that allowed OpenAI to buy up to 10% of AMD’s stock, vesting over “specific milestones” that started with the first gigawatt of data center development. Said data centers would also use AMD’s yet-to-be-released MI450 GPUs. The deal would, per Reuters, bring in “tens of billions of dollars of revenue.” Where would those data centers go? How would OpenAI pay for them? Would the chips be ready in time? Silence, worm! How dare you ask questions? How dare you? Why are you asking questions? NUMBER GO UP! AMD’s shares surged by 34%, with analyst Dan Ives of Wedbush saying that this was a “major valuation moment” for AMD. As an aside, Ives said that NVIDIA would benefit from the metaverse in 2021, and told CBS News on November 22, 2021 that “the metaverse [was] real and Wall Street [was] looking for winners.” One would think that AMD’s November earnings — a month after the announcement — might be a barn-burner full of remaining performance obligations from OpenAI.
In fact, CEO Lisa Su said that “[AMD expected] this partnership will significantly accelerate [its] data center AI business, with the potential to generate well over $100 billion in revenue over the next few years.” Here’s how AMD’s 10-Q filing referred to it: …so, no revenue from OpenAI at all, I guess? AMD raised guidance by 35% over the next five years. AMD's trailing 12-month revenue is $32 billion. "Tens of billions of dollars" would surely lead to more than a 35% boost (an increase of $11.2 billion or so) over the next five years? Guess all of that was for nothing. No follow-up from the media, no questions from analysts, just a shrug as we all move on. Anyway, AMD’s stock is now down from a high of $259 at the end of October to around $214 as of writing this sentence. Everybody who traded in based on analyst and media comments got fucked. So, back on September 5, Broadcom said on its earnings call that it had a $10 billion order from a mystery customer, which analysts quickly assumed was OpenAI, leading to the stock popping 9% and gradually increasing to a high of $369 or so on September 10, before declining a little until October 13, when Broadcom announced its ridiculous 10 gigawatt deal with OpenAI, claiming that it would deploy 10GW of OpenAI-designed chips, with the first racks deploying in the second half of 2026 and the entire deployment completed by the end of 2029. The same day, its president of semiconductor solutions, Charlie Kawwas, added that said mystery customer was actually somebody else: Nevertheless, Broadcom's stock popped by 9% on the news of the 10GW deal, with CNBC adding that "the companies have been working together for 18 months." Because it's OpenAI, nobody sat and thought about whether somebody at Broadcom saying "well, OpenAI has yet to order these chips" was a problem.
In fact, the answer to “how does OpenAI afford this?” appeared to be “they’d afford it” when it came to analysts: Not to worry, OpenAI’s solution was far simpler: it didn’t order any chips. On Broadcom's November earnings call, Broadcom revealed that the $10 billion order was actually from Anthropic, another LLM startup that burns billions of dollars (and one that was buying Google's TPUs), and that Anthropic had booked another $11 billion in orders. Analysts somehow believed that Anthropic is “positioned to spend heavily” despite being another venture-backed welfare recipient in the same flavor as OpenAI. Oh, right, that 10GW OpenAI deal. Broadcom CEO Hock Tan said that he did “not expect much in 2026” from the deal, and guidance did not change to reflect it. Broadcom climbed to a high of $412 leading up to its earnings, and I imagine it did so based on people trading on the belief that OpenAI and Broadcom were doing a deal together, which does not appear to be happening. While there’s an alleged $73 billion backlog, every dollar from Anthropic is questionable. Can we do anything about any of this? Actually, yes we can. Whenever a company says “letter of intent” — as NVIDIA and SK Hynix/Samsung did — it’s important to immediately stop taking the deal seriously until you get the word “contract” involved. Not “agreement” or “deal” or “announcement,” but “contract,” because contracts are the only thing that actually matters. Similarly, it’s time for everybody — analysts, the media, members of Congress, the fucking pope, I don’t care — to start treating these companies with suspicion, and to start demanding timelines. NVIDIA and Microsoft announced their $15 billion investment in Anthropic over a month ago. Where’s the money? Why does the agreement say “up to $10 billion” from NVIDIA and “up to $5 billion” from Microsoft? These subtle details suggest that the deal is not going to be for $15 billion, and the lack of activity suggests it might not happen at all.
These deals are announced with the intention of suggesting there is more revenue and money in generative AI than actually exists. Furthermore, it is irresponsible and actively harmful for analysts and the media to continually act as if these deals will actually get paid when you consider the financial conditions of these companies. As part of its alleged funding announcement with NVIDIA and Microsoft, Anthropic agreed to purchase $30 billion of Azure compute . It also agreed to spend "tens of billions of dollars" with Google Cloud . It ordered $10 billion in chips from Broadcom earlier in the year, and apparently placed another $11 billion order in its latest fiscal quarter . How does it pay for those? It allegedly will burn $2.8 billion this year (I believe it burned much, much more ) and raised $16.5 billion in funding (before Microsoft and NVIDIA’s involvement, which we cannot confirm has actually happened). How are investors tolerating Broadcom not directly stating “the future financial condition of this company is questionable”? Has Broadcom created a reserve for this deal?  If not, why not? Anthropic will make no more than $5 billion this year, and has raised $17.5bn (with a further $2.5bn coming in the form of debt). How can it foreseeably afford to pay $10 billion, or $11 billion, or $21 billion, considering its already massive losses and all those other obligations mentioned? Will Jensen Huang hand over $10 billion so that Anthropic can hand it to Broadcom? I realize the counter-argument is that companies aren’t responsible for their counterparties’ financial health, but my argument is that it’s the responsibility of any public company to give a realistic view of its financial health, which includes noting if a chunk of its revenue is from a startup that can’t afford to pay for its orders. There is no counter to that! Anthropic cannot afford to pay Broadcom $10 billion right now!  
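Tallying the commitments listed above against Anthropic's reported resources makes the mismatch plain. The "tens of billions" for Google Cloud is unquantified and left out, so the commitments total is a floor:

```python
# Anthropic's publicly reported commitments versus reported resources,
# using only the figures cited in the piece above. Google Cloud's
# "tens of billions" is unquantified and excluded, so the commitments
# total is a floor, not a ceiling.

commitments = {
    "Azure compute": 30e9,
    "Broadcom chips (initial order)": 10e9,
    "Broadcom chips (follow-on order)": 11e9,
}

resources = {
    "2025 revenue (upper bound)": 5e9,
    "Equity raised": 17.5e9,
    "Debt": 2.5e9,
}

total_commitments = sum(commitments.values())
total_resources = sum(resources.values())

print(f"Commitments (floor): ${total_commitments / 1e9:.0f}B")
print(f"Resources:           ${total_resources / 1e9:.0f}B")
print(f"Shortfall:           ${(total_commitments - total_resources) / 1e9:.0f}B")
```

Even counting every dollar Anthropic has ever raised as available cash, and ignoring that it burns money to operate, the commitments are roughly double the resources.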
Nevertheless, the problem is that in any bubble, being really stupid and ignorant works right up until it doesn’t, and however harsh the dot com bubble might have been, it wasn’t harsh enough, and those who were responsible were left unpunished and unashamed, guaranteeing that this cycle would happen again. I want to be really, abundantly clear about what’s happening: every single stock you see “growing because of AI” outside of those selling RAM and GPUs is actually growing because of something else. Microsoft, Amazon, Google and Meta all have other products that are making them money. AI is not doing it, and because analysts and investors do not think about things for two seconds, they have allowed themselves to be beaten down and turned into supplicants for public stocks. Investors have allowed themselves to be played, and the results will be worse than the dot com bubble bursting by several orders of magnitude. I’m gonna be really simplistic for a second. I am skeptical of AI because everybody loses money. I believe every AI company is unprofitable, with margins that are getting increasingly worse as they scale, and as a result that none of them will be able to either get acquired or go public. This means that venture capitalists that have sunk money into AI startups are going to be sitting on a bunch of assets under management (AUM) — the same assets they collect fees on — that will eventually crater or go to zero, because there will be no way for any liquidity event to occur. This is at a time of historically low liquidity for venture capitalists, with PitchBook estimating there will only be $100.8 billion in venture capital funds available at the end of 2025. Venture capitalists raise money from limited partners, who invest in venture capital with the hope of returns that outpace investing in the public markets. Venture capital vastly overinvested during 2021 and 2022. This was also a problem in private equity.
In simple terms, this means these funds are sitting on tons of stock that they cannot shift, and the longer it takes for a company to either go public or be acquired, the more likely it is the VC or PE firm will have to mark down its value. This is so bad that, according to Carta, as of August 2024, less than 10% of VC funds raised in 2021 had made any distributions to their investors. In a piece from September, Carta revealed that only “about 15% of funds” from 2023 had generated any disbursements as of Q2 2025, and the median net internal rate of return was 0.1%, meaning that, at best, most investors got their money back and absolutely nothing else. In fact, investing in venture capital has kinda fucking sucked. According to Carta, “As of the end of Q2, most VC funds across all recent vintages had a TVPI somewhere between 0.8x and 2x. But there are some areas where standout TVPIs are surfacing.” TVPI means Total Value to Paid-In Capital, or the amount of money made (on paper and in cash) for each dollar invested. This chart may seem confusing, but it tells you that, for the most part, VCs have struggled to provide even-money returns since 2017. A “decent” TVPI is 2.5x, and as you’ll see, things have effectively collapsed since 2021. Companies are not going public or being acquired at the same rate, meaning that investor capital is increasingly locked up, meaning that limited partners are still waiting for a payoff from the last bubble, let alone this one. Carta would update the piece in December 2025, and things would somehow get worse. TVPI soured further, suggesting a further lack of exits across the board. The only slight improvement was that the median IRR rose to 0.5% for funds from 2021 and 0.1% for funds from 2022. In simple terms, we are looking at years of locked-up capital leaving venture capital cash-starved and a little desperate. The worst part?
All of this is happening during a generational increase in the amounts that startups need to raise thanks to the ruinous costs of generative AI, and the negative margins of AI-powered services. To quote myself : None of these companies are profitable, nor do they have any path to an acquisition or IPO. Why? Because even the most advanced AI software company is ultimately prompting Anthropic or OpenAI’s models, meaning that their only real intellectual property is those prompts and their staff, and whatever they can build around the models they don’t control, which has been obvious from the meager “acquisitions” we’ve seen so far.  Windsurf, which was allegedly being sold to OpenAI, ended up selling its assets to Cognition in July , with Google paying $2.4 billion for its co-founders and a “licensing agreement,” similar to its acquisition of Character.Ai , where it paid $2.7 billion to rehire Noam Shazeer , license its tech, and pay off the stock of its remaining staff. This is also exactly what Microsoft did with Inflection AI and its co-founder Mustafa Suleyman . OpenAI’s acquisitions of Statsig ($1.1bn), Io Products ($6.5bn) and Neptune ($400m) were all-stock. Every other acquisition — Wiz, Confluent, Informatica, and so on ( CRN has a great list here ) — is either somebody trying to pretend that (for example) Wiz is related to AI, or trying to say that a data streaming platform is AI-related because AI needs that, which may be true, but doesn’t mean that any AI startups are actually selling. And they’re not, which is a problem, as 41% of US venture dollars in 2025 have gone into AI as of August, and according to Axios, the global number was around 51% . A crisis is brewing. 
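To pin down the Carta metrics from earlier: TVPI is total value (cash distributed plus the paper value of what's still held) over capital paid in. Here's a minimal sketch with hypothetical fund numbers, alongside DPI (distributions to paid-in), the standard companion metric that captures the "actual disbursements" problem:

```python
# TVPI and DPI, the two sides of the fund-performance story discussed
# above, with hypothetical numbers for a 2021-vintage fund. TVPI counts
# paper value; DPI only counts cash actually returned to limited partners.

def tvpi(distributions: float, residual_value: float, paid_in: float) -> float:
    """Total Value to Paid-In: (cash out + unrealized value) / cash in."""
    return (distributions + residual_value) / paid_in

def dpi(distributions: float, paid_in: float) -> float:
    """Distributions to Paid-In: only the cash actually returned."""
    return distributions / paid_in

# Hypothetical fund: $100M paid in, $2M distributed, holdings marked $110M.
paid_in, distributed, marked = 100e6, 2e6, 110e6

print(f"TVPI: {tvpi(distributed, marked, paid_in):.2f}x")  # looks fine on paper
print(f"DPI:  {dpi(distributed, paid_in):.2f}x")           # LPs barely saw cash
```

A fund like this reports a respectable-looking 1.12x TVPI while its LPs have seen two cents back on the dollar, which is exactly the gap Carta's distribution numbers describe.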
Nerdlawyer, back in October, wrote about the explosive growth of secondary markets: In simpler terms, there are now Hot Potato Funds, where either one limited partner buys another one’s allocation, the companies themselves buy back their stock, or the stock is resold to other private investors. While this piece frames this as a positive, the reality is far grimmer. Venture capitalists are sitting on piles of immovable equity in companies worth far less than they invested at, and the answer, it appears, is to find somebody else to buy the dead weight. According to Newcomer, only 1,117 venture funds closed in 2025 (down from 2,100 in 2024), and 43% of dollars raised went to the largest venture funds, per The New York Times and PitchBook, suggesting limited partners are becoming less interested in pumping cash into the system at a time when AI startups are demanding more capital than has ever been raised. How long can the venture capital industry keep handing out $100 million to $500 million to multiple startups a year? Because all signs suggest that the current pace of funding must continue in perpetuity, as nobody appears to have worked out that generative AI is inherently unprofitable, and thus every single company is on the Silicon Valley Welfare System until everybody gives up, or the system itself cannot sustain the pressure. I’ve read too many people make offhanded comments about this “being like the dot com boom,” saying that “lots of startups might die but what’s left over will be good,” and I hate them for both their flippancy and their ignorance. None of the current stack of AI companies can survive on their own, meaning that the venture capital industry is holding them up. If even one of these companies falters and dies, the entire narrative will die. If that happens, it will be harder for AI companies to raise, and even harder to sell an AI company to someone else.
This is a punishment for a decade-plus of hubris, where companies were invested in without ever considering a path to profitability. Venture capital has made the same mistake again and again, believing that because Uber, or Facebook, or Airbnb, or any number of companies founded nearly twenty years ago were unprofitable (with paths to profitability in all three cases, mind), it was totally okay to keep pumping up companies that had no path to profitability, which eventually became “had no apparent business model” (see: the metaverse, web3), which eventually became “have negative margins so severe and valuations so high that we will need an IPO at a market cap higher than Netflix.” This is Silicon Valley’s Rot Economy — the desperate, growth-at-all-costs attachment to startups where you “really like the founder,” where “the market could be huge” (who knows if it is!), where you just don’t need to worry about profitability because IPOs and exits were easy. Venture capital also used to be easy, because we were still in the era of hypergrowth. You could be a stupid asshole that doesn’t know anything, but there were so many good deals, and the more well-known you were, the more likely those deals would be brought to you first, guaranteeing a bigger payout, guaranteeing more LP capital, guaranteeing more opportunities of a higher quality because you were a big name. It was easier to make a valuable company, easier to get funded, and easier to sell, because the goal was always “get funded, grow as large an audience as possible, and go public or get acquired.” As a result, venture capital encouraged growth-at-all-costs thinking. In 2010, Ben Horowitz said that “the only thing worse for an entrepreneur than start-up hell (bankruptcy) is start-up purgatory.” This poisonous theory paid off, in that startups got used to building high-growth, low-margin companies that would easily sell to other companies or the markets themselves. Until it didn’t, of course.
Per Nerdlawyer , IPOs have collapsed as an exit route, along with easy-to-raise capital.  Per PitchBook, since 2022, 70% of VC-backed exits were valued at less than the capital put in , with more than a third of them being startups buying other startups in 2024. The money is drying up as the value of VCs’ assets is decreasing , at a time when VCs need more money than ever , because everybody is heavily leveraged in the single-most-expensive funding climate in history. And as we hit this historic liquidity crisis, the two largest companies — OpenAI and Anthropic — are becoming drains on the system that, in a very real sense, are participating in a massive redistribution of capital reserved for startups to one of a few public companies. No, really!  OpenAI is trying to raise as much as $100 billion in funding so it can continue to pass money to one of a few public companies — $38 billion to Amazon Web Services over seven years, $22.4 billion to CoreWeave over five years, and $250 billion over an indeterminate period on Microsoft Azure . If successful, OpenAI’s venture telethon will raise more money than has ever been raised in a single round, draining funds that actual startups need. Anthropic has agreed to $70 billion in compute and chip deals across Google, Amazon and Broadcom, and that’s not including the Hut8 compute deal that Google is backing . This money will come from what remains of venture capital, private equity and hyperscaler generosity.  Yet elsewhere, even the money that goes to regular startups is ultimately being sent to hyperscalers. That AI startup that needs to keep raising $100 million in a single round isn’t sending that cash to other startups — it’s mostly going to OpenAI (Microsoft, Amazon, CoreWeave, Google), Anthropic (Google, Microsoft, Amazon), or one of the large hyperscalers for Azure, AWS or Google Cloud.  Silicon Valley didn’t birth the next big tech firm. 
It incubated yet another hyperscaler-level parasite, except instead of just spending money on hyperscaler services (and raising money to do so), both Anthropic and OpenAI actively drain the venture capital system as well, as they both burn billions of dollars. By creating something that’s incredibly expensive to run, they naturally create startups more dependent on the venture capital system, and the venture capital system has no idea what to do other than say “just grow, baby!” Both OpenAI and Anthropic’s models might be getting cheaper on a per-million-token basis, but they use more tokens, increasing the cost of inference, which in turn increases the costs of startups doing business, which in turn means OpenAI, Anthropic, and all connected startups lose more money, which increases the burn on venture capital. This is a doom-spiral, one that can only be reversed through the most magical and aggressive turnaround we will have seen in history, and it will have to happen next year, without fail. It won’t. So why did venture do this? Folks, we haven’t seen values this big in a long time. These are the biggest numbers we’ve ever seen. They’re simply tremendous. OpenAI is maybe worth $830 billion dollars, can you believe that? They lose so much money but folks, we don’t worry about that, because they’re growing so fast. We love that Clammy Sam Altman — they call him “Clamuel” — tells everybody he’s giving them one billion dollars. Data centers are going to have the biggest deals we’ve ever seen, even [tchhh sound through teeth] if we have to work with Dario. You see, right now AI startups are big, exciting news for the limited partners funding LLM firms. Things feel exciting because the value of the assets under management (AUM) is going up, which is nothing dodgy, just how VCs mark their holdings, and because fees are charged on AUM, marked-up AI holdings mean bigger fees for the VCs managing them.
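The token-economics point above (cheaper per token, but far more tokens per task) is easy to sketch; all numbers here are hypothetical:

```python
# Why "cheaper per million tokens" doesn't mean "cheaper to run": if token
# consumption per task grows faster than per-token prices fall, the cost
# of serving each request still rises. Hypothetical numbers throughout.

def cost_per_task(price_per_million_tokens: float, tokens_per_task: int) -> float:
    return price_per_million_tokens * tokens_per_task / 1_000_000

# Older model: pricier per token, but terse.
old_cost = cost_per_task(price_per_million_tokens=10.0, tokens_per_task=2_000)

# Newer model: half the per-token price, ten times the tokens per task.
new_cost = cost_per_task(price_per_million_tokens=5.0, tokens_per_task=20_000)

print(f"Old model: ${old_cost:.3f} per task")
print(f"New model: ${new_cost:.3f} per task ({new_cost / old_cost:.0f}x more)")
```

Halve the sticker price, ten-x the consumption, and every task costs five times as much, which is the arithmetic eating every startup built on top of these models.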
Investing early in OpenAI allows a VC — or even an asset manager like Blackstone, which invested in 2024 — to say it has a big holding and a big increase in its AUM. We are currently in the sowing stage. Nevertheless, AI stocks make VCs who bet on them two years ago look like geniuses on paper. If you got in early on OpenAI, Anthropic, Cursor, Cognition, Perplexity or any other company that loves to burn several dollars per dollar of revenue, you have a big, beautiful number, the biggest you’ve ever seen, and your limited partners need to pay you a fee just to manage it. Venture capital hasn’t seen valuations like this in a long time, and on paper, it feels like a lot of VCs got in on companies worth billions of dollars. On paper, Cognition is worth $10.2 billion, Perplexity $18 billion, Cursor $29.3 billion, Lovable $6.6 billion, Cohere $6.8 billion, Replit $3 billion, and Glean $7.2 billion — massive valuations for companies that all basically make products that OpenAI or Anthropic or Amazon or Google or any number of Chinese companies are already working to clone. They are all losing tons of money and have no path to profitability. But right now the numbers are simply tremendous. I’ve heard venture capitalists tell me that there are times when they have to agree to invest with little to no information or risk losing the opportunity to another sucker investor. I’ve heard venture capitalists say they don’t have any insight into finances. Venture capitalists would, of course, claim I’m insane, saying that the “growth is obviously there” while pointing to whatever startup has made $100 million ARR ($8.3 million in a month), all while not discussing the underlying operating expenses. The idea, I believe, is that the current spate of AI spending is only set to increase next year, and that will…somehow lead to fixing margins?
Venture capitalists staunchly refuse to learn anything other than “invest in growth and then profit from growth,” even if “profiting from growth” doesn’t seem to be happening anymore. In reality, venture capital shouldn’t have touched LLMs with a fifteen-foot pole, because the margins were obviously, blatantly bad from the very beginning. We knew OpenAI would lose $5 billion in the middle of 2024. A sane venture capital climate would have fucking panicked, but this one instead chose to double, triple and quadruple down. I believe that massive valuation drawdowns are a certainty. There are losses coming. Venture capitalists, I have to ask you: what happens if OpenAI dies? Do you think that this will make investors interested in funding or acquiring other AI startups? How much longer are we going to do this? When will venture capital realize it’s setting itself up for disaster? And what, exactly, is the plan? OpenAI and Anthropic will suck the lakes dry like an NVIDIA GPU named after Nancy Reagan. How is this meant to continue, and what will be left when it can’t? The answer is simple: there won’t be money for venture capital for a while. Those AI holdings are going to be worth, at best, 50% of their current marks, if they retain any value at all. Once one of these startups dies, a panic will ensue, sending venture capitalists scrambling to get their holdings acquired, until there’s little or no investor interest left. Why would LPs ever trust venture capital after this? Why would anybody? Because based on the past four years, it doesn’t appear that venture capital is actually good at investing money — it just got lucky, year after year, until it ran out of ideas that could sell for hundreds of millions or billions of dollars. Venture capital believed it knew better as it turned its back on basic business fundamentals, starting with Clubhouse, crypto, the metaverse, and now generative AI. Yet they’re far from the only fuckwits on the dickhead express.
Per Bloomberg, there were at least $178.5 billion in data-center credit deals in the US in 2025, rivaling the $215.4 billion invested in US venture capital in 2024 and the $197.2 billion invested in US VC through August 7, 2025, and coming in over $100 billion higher than the $60.69 billion of data center credit deals done in 2024. I’m very worried, and I’m going to tell you why, using a company called CoreWeave that I’ve been actively warning people about since March. CoreWeave is something called a “neocloud.” It’s a company that sells AI compute, and does so by renting out NVIDIA GPUs, and as I explained a few months ago, it does so by building data centers backed by endless debt: CoreWeave is one of the largest providers of AI compute in the world, and its business model is indicative of how most data center companies make money. To explain my concerns, I’m going to use this chart from CoreWeave’s Q2 2025 earnings presentation. First, CoreWeave signs contracts — such as its $14 billion deal with Meta and $22.4 billion deal with OpenAI — before it has the physical infrastructure to service them. It then raises debt using this contract as collateral and orders the GPUs from NVIDIA, which arrive after three months and then take another three months to install, at which point monthly client payments begin. To really simplify this: data center developers are raising money months, sometimes up to a year, before they ever expect to make a penny. In fact, I can find no consistent answer to “how long a data center takes to build,” and the answer here is pretty important, because that’s how the money is gonna get made from these things. You may notice that “monthly payments” begin at 6 to 30 months, a curious and broad blob of time.
You see, data centers are extremely difficult to build, and the concept of an “AI data center” is barely a few years old, with the concept of a data center campus made up entirely of hundreds of megawatts of AI GPUs barely two years old, which means basically everybody building one is doing so for the first time, and even experienced developers are running into problems. For example, Core Scientific, CoreWeave’s weird partner organization it tried and failed to buy, has been trying to convert its Denton, Texas cryptocurrency mining data center into an AI data center since November 2024, specifically so that CoreWeave can rent it to Microsoft for OpenAI. This hasn’t gone well, with the Wall Street Journal reporting a few weeks ago that Denton has been wracked with “several months” of delays thanks to rainstorms preventing contractors from pouring concrete. The cluster is apparently going to have 260MW of capacity. What this means for CoreWeave is that it can’t start getting paid by OpenAI, because, per its contract, customers don’t have to start paying until the compute is actually available. This is a very important detail to know for literally any data center development you’ve ever seen. As of its latest Q3 2025 earnings filing, CoreWeave is sitting on $1.1 billion in deferred revenue (income for services not yet rendered), up from $951 million in Q2 2025 and $436 million in Q1 2025. This means deposits have been made, but the contracts have yet to be serviced. Now, I’m a curious little critter, so I went and found the 921-page $2.6 billion DDTL 3.0 loan agreement between CoreWeave and banks including Morgan Stanley, MUFG Bank and Goldman Sachs, and in doing so learned the following: I apologize, that makes it sound like CoreWeave isn’t already in trouble.
Buried inside NVIDIA’s latest earnings (page 17) there was a little clue: Credit where credit is due — eagle-eyed analyst JustDario caught this in November — but in CoreWeave’s condensed consolidated balance sheets, there sits a $477.5 million line-item under “restricted cash and cash equivalents, non-current.” Though this might not be the NVIDIA escrow — this number shifted from $617m in Q1 to $340m in Q2 — it lines up all too precisely… and who else would NVIDIA be guaranteeing? In any case, CoreWeave is likely getting the best deals in data center debt outside of Oracle. It has top-tier financiers (who I will get to shortly), the full backing of NVIDIA (which is at once an investor, customer and apparent financial backstop), and the ability to raise debt quickly. CoreWeave’s deals are likely indicative of how data center financing takes place, and those top-tier financiers? They’ve been in basically every deal. In fact… So, I went and dug through a pile of 26 prominent data center loan deals, including the proposed $38 billion debt package that Oracle and Vantage Data Center Partners are raising for Stargate Shackelford and Stargate Wisconsin; Stargate Abilene; Stargate New Mexico; SoftBank’s $15 billion bridge loan (which I included for a reason that will become obvious shortly); and multiple CoreWeave loans, and found a few commonalities: I realize there are far more data center deals than these, but I wanted to show you exactly how centralized these deals are. The largest deals — the $38 billion Stargate TX/WI deal and $18 billion Stargate New Mexico deal — both involved Goldman Sachs, BNP Paribas, SMBC and MUFG, and all four of those companies have, at some point, funded CoreWeave. In fact, everybody appears to have funded CoreWeave at some point — Citibank, Credit Agricole, Societe Generale, Wells Fargo, Carlyle, Blackstone, BlackRock, Barclays, Magnetar, and Jefferies, to name a few.
Of the 40 banks and financial institutions I researched, 24 have, at some point, loaned to or organized debt for CoreWeave. Of those institutions, Blackstone, Deutsche Bank, JP Morgan Chase, Morgan Stanley, MUFG and Wells Fargo have done so multiple times. CoreWeave is a deeply unprofitable company saddled with incredible debt and deteriorating margins, with one of its largest clients paying net 360, and, as I’ve said, is arguably the best-financed data center company in the world. What I’m getting at is that most data center deals are likely much worse than the terms that CoreWeave faces, and are likely financed in a similar way, where a client is signed for data center capacity that doesn’t exist, such as when Nebius raised $4.3 billion through a share sale and convertible notes (read: loans) to handle its $17.4 billion data center contract with Microsoft, and guess what? Goldman Sachs acted as lead underwriter on the deal, with assistance from Bank of America, Citigroup, and Morgan Stanley, all three of which have invested in CoreWeave. AI data centers are expensive, require debt due to the massive cost of construction and GPUs, and all take at least a year, if not two, to start generating revenue, at which point they also begin losing money, because it seems that renting out AI GPUs is really unprofitable. Every single major bank and financial institution has piled hundreds of millions if not billions of dollars into building data centers that take forever to even start generating money, at which point they only seem to lose it. Worse still, NVIDIA sells GPUs on a one-year upgrade cycle, meaning that all of those data centers being built right now are being filled with Blackwell chips, and by the time they turn on, NVIDIA will be selling its next-generation Vera Rubin chips.
Now, you’ve probably heard that Vera Rubin will use the same racks (Oberon) as Blackwell, which is true to an extent, but won’t be true for long, as NVIDIA intends to shift to Kyber racks in 2027, hoping to build 1MW IT racks (which will involve entire racks full of power supplies!), meaning that all of those data centers you see today — whenever they get built! — will be full of racks incompatible with the next generation of GPUs. This will also decrease the value of the assets inside the data centers, which will in turn decrease the value of the assets held by the firms investing. Stargate Abilene? The one invested in by JP Morgan, Blue Owl, Primary Digital Infrastructure and Societe Generale? The one that’s heavily delayed and won’t be ready until the end of 2026 at the earliest? Full to the brim with two-year-old GB200 racks! By the beginning of 2027, Stargate Abilene will be obsolete, as will any and all data centers filled with Blackwell GPUs, as will any and all data centers being built today. Every single one takes 1-3 years and hundreds of millions (or billions) in debt, every single one faces the same kinds of construction delays, and better yet, almost all of them will turn on in roughly the same time frame. Now, I ain’t no economist, but I do know that “supply and demand” has an effect on pricing. What do you believe happens to the price of renting a Blackwell GPU when all of these data centers come online? Do you think it becomes more valuable? Or less? And while we’re on the subject, what do you think happens if there isn’t sufficient demand? Right now, OpenAI makes up a large chunk of the global sale of compute — at least $8.67 billion of Azure revenue through September 2025, $22.4 billion of CoreWeave’s backlog, $38 billion of Amazon’s backlog, and so on and so forth — and made, based on my reporting, just over $4.5 billion in that period.
It cannot afford to pay anybody, and nowhere is that more obvious than in the year-long payment terms it negotiated with CoreWeave. Otherwise, when you remove the contracts signed by hyperscalers and OpenAI (which I do not believe has paid anybody other than Microsoft yet), based on my analysis, there was less than a billion dollars of AI compute revenue in 2025, or 0.5831% of the money spent on data centers. Hyperscaler revenue is also immediately questionable, with Microsoft’s deal with Nebius (per its 6-K filing) set to default in the event that Nebius cannot provide the capacity it sold out of its unfinished Vineland, New Jersey data center, which is being built by DataOne, a company that has never built an AI data center, whose CEO has his LinkedIn location set to “United Arab Emirates,” with funding from a concrete firm that is also a vendor on the construction project. I also believe Microsoft is setting Nebius up to fail. Based on discussions with sources with direct knowledge of plans for the Vineland, New Jersey data center, Nebius has agreed to timelines that involve having 18,000 NVIDIA B200 and B300 GPUs by the end of January for a total of 50MW, with another 18,000 B300s due by the end of May. When I spoke with experts in the field about how viable these plans are, two laughed, and one told me to fuck off. If Nebius fails to build the capacity, Microsoft can walk away, much like OpenAI can walk away from Stargate in the event that Oracle fails to build it on time (as reported by The Information in April), and I believe that this is the case for literally any data center provider that’s building a data center for any signed-up tenant. This is another layer of risk to data center development that nobody bothers to discuss, because everybody loves seeing these big, beautiful numbers. Except the numbers might have become a little too beautiful for some.
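For reference, the oddly precise “0.5831%” above is simple division — a sketch assuming the denominator is the roughly $171.5 billion in 2025 data center debt used elsewhere in this piece, with the sub-$1 billion revenue figure being the piece’s own estimate:

```python
# Checking the "0.5831%" figure: under $1 billion of non-hyperscaler,
# non-OpenAI AI compute revenue, against ~$171.5 billion in 2025 US data
# center credit deals. Both inputs are estimates from this piece.
compute_revenue = 1.0e9      # dollars, upper bound of the revenue estimate
datacenter_debt = 171.5e9    # dollars of data center debt

print(f"{compute_revenue / datacenter_debt:.4%}")   # 0.5831%
```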
A few weeks ago, the Financial Times reported that Blue Owl Capital had pulled out of the $10 billion Michigan Stargate Data Center project, citing “concerns about its rising debt and artificial intelligence spending.” To quote the FT, “Blue Owl had been in discussions with lenders and Oracle about investing in the planned 1 gigawatt data centre being built to serve OpenAI in Saline Township, Michigan.” What debt, you ask? Well, Blue Owl — formerly the loosest legs in data center financing — was in CoreWeave’s $600 million and $750 million debt deals for its planned Virginia data center with Chirisa Technology Parks, as well as a $4 billion CoreWeave data center project in Lancaster, Pennsylvania, Stargate Abilene and Stargate New Mexico, Meta’s $30 billion Hyperion data center, and a $1.3 billion data center deal in Australia through Stack Infrastructure, a company it owns through its acquisition of IPI Partners. To be clear, Blue Owl “pulling out” is not the same as an investor walking away from a regular deal. It’s a BDC — a Business Development Company — that invests both its own money and rallies together various banks, in this case SMBC, BNP Paribas, MUFG and Goldman Sachs (all part of Stargate New Mexico). Blue Owl is incredibly well-connected and experienced in putting together these kinds of deals, and very likely went to the many banks it’s worked with over the years, who apparently had “concerns about its rising debt,” much of it issued by them! While rumours suggest that Blackstone may “step in,” the banks that will actually back a $10 billion deal are fairly narrow, and “stepping in” would require billions of dollars and legal logistics. So, why are things looking shaky? Remember that thing about how this data center would be leased to Oracle? Well, Oracle had free cash flow of negative $13 billion on revenues of $16 billion, and its most recent earnings only “beat” estimates thanks to the sale of its $2.68 billion stake in Ampere.
Its debt is exploding (with over a billion dollars in interest payments in its last quarter), its GPU gross margins are 14% (which does not mean profitable), its latest NVIDIA GB200 GPUs have a negative 100% gross margin, and it has $248 billion in upcoming data center leases yet to begin. All, for the most part, to handle compute for one customer: OpenAI, which needs to raise $100 billion, I guess. We’ve already got some signs of concern within the banking world around data center exposure. In November, the FT reported that Deutsche Bank — which backed CoreWeave multiple times and several data centers — was “exploring ways to hedge its exposure to data centers after extending billions of dollars in debt,” including shorting a “basket of AI-related stocks” or buying default protection on some of its debt using synthetic risk transfers, which are when a bank sells the full or partial credit risk of a loan (or loans) to another party while keeping the loans on its books, paying a monthly fee to investors (this is a simplification). In December, Fortune reported that Morgan Stanley (CoreWeave three times, IPI Partners, Hyperion, SoftBank bridge loan) was also considering synthetic risk transfers on “loans to businesses involved in AI infrastructure.” Back in April, SMBC sold synthetic risk transfers tied to “private debt BDCs” — and while this predates the large data center deals done by Blue Owl, SMBC has overseen multiple Blue Owl deals in the past. In December, SMBC closed another SRT, selling off risk from “Australian and Asian project finance loans,” though I can’t confirm if any of them were data center related. In December, Goldman Sachs paused a planned mortgage-bond sale for data center operator CyrusOne, with the intent to revive it in the first quarter of 2026.
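The SRT mechanics described above can be sketched with a toy model. Every number here is an illustrative assumption, not the terms of any deal named in this piece; the point is only the shape of the trade:

```python
# Toy model of a synthetic risk transfer (SRT). Illustrative numbers only,
# not the terms of any deal named above. The bank keeps the loans on its
# books and pays investors a fee; investors absorb the first slice of losses.
loan_pool = 5.0e9        # hypothetical data center loan book (assumption)
first_loss_pct = 0.08    # share of credit risk sold to investors (assumption)
fee_rate = 0.10          # annual fee paid on the transferred tranche (assumption)

tranche = loan_pool * first_loss_pct   # credit risk transferred to investors
annual_fee = tranche * fee_rate        # what the bank pays for the protection

def bank_loss(pool_losses: float) -> float:
    """Losses the bank still eats after investors absorb the first-loss tranche."""
    return max(0.0, pool_losses - tranche)

print(f"tranche ${tranche / 1e6:.0f}M, fee ${annual_fee / 1e6:.0f}M/yr")
print(f"bank loss if the pool loses $300M: ${bank_loss(0.3e9) / 1e6:.0f}M")  # investors eat it all
print(f"bank loss if the pool loses $1B:   ${bank_loss(1.0e9) / 1e6:.0f}M")  # bank eats the excess
```

The design choice to note: the bank’s downside is capped only up to the tranche it sold, which is why hedging via SRTs signals worry about the first losses on these loans, not a belief that the whole book is safe.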
Oracle’s credit risk reached a 16-year high in the middle of December, with credit default swaps (basically, bets that Oracle will default on its debts, an unlikely yet no-longer-impossible event) climbing to their highest price since the great financial crisis. While Morgan Stanley and Deutsche Bank’s SRTs are yet to close, it’s still notable that two of the largest players in data center financing feel the need to hedge their bets. So, what exactly are they hedging against? Simple! That tenants won’t arrive and debts won’t get paid. I also believe they’re going to need bigger hedges, because I don’t think there is enough actual demand for AI to fill the data centers being built, and I think most data center loans end up underwater within the next two years. I realize we’ve taken a great many words to get here, but every single part was necessary to explain what I think happens next. Let’s start by quoting my premium newsletter from a few weeks ago: You see, every little link in the chain of pain is necessary to understand things. In really simple terms, I believe that almost every investment in a data center or AI startup may go to zero. Let me explain. If we assume that 50% of the $171.5 billion in data center debt (so $85.75 billion) is in GPUs, that’s around 3.2GW of data center capacity, based on my model of NVIDIA’s approximate split of sales between different AI GPUs from my premium piece last week. The likelihood of the majority of these projects being A) completed within the next year and B) completed on budget is very, very small. Every delay increases the likelihood of default, as each of these projects is heavily debt-based. The customers of these projects are either hyperscalers (who are only “doing AI” because they have no other hypergrowth ideas and because Wall Street currently approves) or AI startups, all of whom are unprofitable.
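The arithmetic above, plus the cost-per-gigawatt it implies, fits in a few lines. The 50% GPU share and the 3.2GW capacity figure are the piece’s own assumptions; the implied dollars-per-gigawatt is simply derived from them:

```python
# Sketch of the GPU-spend model above. Inputs are this piece's estimates.
total_debt = 171.5e9    # 2025 US data center credit deals, per the piece
gpu_share = 0.50        # assumed fraction of that debt spent on GPUs
capacity_gw = 3.2       # the piece's modeled capacity for that GPU spend

gpu_spend = total_debt * gpu_share
print(f"GPU spend: ${gpu_spend / 1e9:.2f}B")                      # $85.75B
print(f"Implied: ${gpu_spend / capacity_gw / 1e9:.1f}B per GW")   # ~$26.8B per GW
```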
While there are potentially hedge funds or other companies looking for “private AI” integrations, I think this is a very, very small market. On top of that, AI compute itself may not be profitable, and because, by my estimate, everybody has spent about $85 billion on filling data centers with the same GPUs, the aggregate price of renting out GPUs will decline. Already the price of a Blackwell GPU has declined to an average of $4.41 an hour according to Silicon Data, and that’s before the majority of Blackwell-powered data centers come online. Yet the customer base shrinks from there, because the majority of AI startups aren’t actually renting GPUs — they build products on top of models built by OpenAI or Anthropic, who have made it clear they’re buying capacity from either hyperscalers or, in OpenAI’s case, getting Oracle or CoreWeave to build it for them. Why? Because building your own model is incredibly capital-intensive, and it’s hard to tell if the results will be worth it. Now, let’s assume — I don’t actually believe it will happen, but let’s try anyway — that all of that 3.2GW of capacity comes online. How much compute does an AI company use? OpenAI claims it has 2GW of capacity as of the end of 2025, and is allegedly approaching 900 million weekly active users. I don’t think there are any AI companies with even 10% of that userbase, but even if there were, OpenAI spent $8.67 billion on inference through the end of September. Who can afford to pay even 10% of that a year? Or 5%? Yet in reality, OpenAI is likely more indicative of the overall compute spend of the entire AI industry. As I’ve said, most companies are powered not by their own GPU-driven models, but by renting them from other providers.
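To see why $4.41 an hour is such a grim number, here’s a rough payback sketch using the roughly $4.5 million price for a 72-GPU GB300 rack cited elsewhere in this piece. The 100% utilization rate is a deliberately generous assumption (real utilization is lower), and none of this counts power, cooling, staff, debt service, or the building itself:

```python
# Rough payback calculation on renting out a Blackwell-class GPU.
# Rack price comes from this piece; utilization is an optimistic assumption.
rack_price = 4.5e6        # ~$4.5M for a GB300 rack of 72 GPUs
gpus_per_rack = 72
hourly_rate = 4.41        # average Blackwell rental price, per Silicon Data

capex_per_gpu = rack_price / gpus_per_rack   # $62,500 per GPU
annual_revenue = hourly_rate * 24 * 365      # ~$38,632/year per GPU, rented 24/7

print(f"{capex_per_gpu / annual_revenue:.1f} years to recoup the GPU alone")
```

Over a year and a half just to cover the silicon, at a price that this piece argues will keep falling as more identical capacity comes online.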
OpenAI and Anthropic spent a combined $11.33 billion in compute on Azure and AWS respectively through the first nine months of this year, and they are the two largest consumers of AI compute, which suggests two things: In fact, it would take sinking every single dollar of venture capital — over $200 billion — into AI compute every single year, and then some, just to provide the revenue to justify these deals. In the space of a year, Microsoft Azure made $75 billion, Google Cloud $43 billion and Amazon Web Services $100 billion. Need more proof? Still don’t believe me? Then skip to page 18 of NVIDIA’s most recent earnings: If there’s such incredible, surging demand, why exactly is NVIDIA spending six fucking billion dollars a year in 2026 and 2027 on cloud compute? NVIDIA doesn’t need the compute — it just shut down its AWS rival DGX Cloud! It looks far more like NVIDIA is propping up an industry with non-existent demand. I’m afraid there is no secret AWS-sized spend waiting in the wings for the right moment to pounce. There is no secret demand wave, nor is there any capacity crunch that is holding back incredible swaths of revenue. Oracle’s $523 billion in remaining performance obligations are made up of OpenAI, Meta, and fucking NVIDIA. For AI data centers to make sense, most startups would have to become direct users of AI compute, while also spending more on cloud compute services than they’ve ever spent. The largest consumers of AI compute are both unprofitable, unsustainable monstrosities. Eventually, reality will dawn on one or more of these banks. Projects will get delayed thanks to weather, or budgetary issues, or customers walking away (as just happened to data center REIT Fermi). Loan payments will start going unpaid. Elsewhere, AI startups will keep asking for money, again and again, and for a while they’ll keep raising, until the valuations get too high, or VC coffers get too low.
You’re probably gonna say at this point that Anthropic or OpenAI might go public, which will infuse capital into the system, and I want to give you a preview of what to look forward to, courtesy of AI labs MiniMax and Zhipu (as reported by The Information), which just filed to go public in Hong Kong. Anyway, I’m sure these numbers are great… oh my GOD! In the first half of this year, Zhipu had a net loss of $334 million on $27 million in revenue, and guess what, 85% of that revenue came from enterprise customers. Meanwhile, MiniMax made $53.4 million in revenue in the first nine months of the year, and burned $211 million to earn it. It is time to wake up. These are the real-life costs of running an AI company. OpenAI and Anthropic are going to be even worse. This is why nobody wants to take AI companies public. This is why nobody wants to talk about the actual costs of AI. This is why nobody wants you to know the hourly cost of running a GPU, and this is why OpenAI and Anthropic both burn billions of dollars — the margins fucking stink, every product is unprofitable, and none of these companies can afford their bills based on their actual cashflow. Generative AI is not a functional industry, and once the money works that out, everything burns. Though many AI data centers boast of having tenancy agreements, remember that these agreements are either with AI startups that will run out of money or hyperscalers with legal teams numbering in the thousands. Every single deal that Microsoft, Amazon, Meta, Google or NVIDIA signs is riddled with outs specifically hedging against this scenario, and there won’t be a damn thing that anybody can do if hyperscalers decide to walk away. Before then, NVIDIA’s bubble is likely to burst.
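The burn ratios implied by those MiniMax and Zhipu figures are worth making explicit. This is just the arithmetic on the numbers reported above:

```python
# Dollars burned per dollar of revenue, from the figures reported above.
zhipu_loss, zhipu_rev = 334e6, 27e6        # H1 net loss vs. H1 revenue
minimax_burn, minimax_rev = 211e6, 53.4e6  # first nine months of the year

print(f"Zhipu:   ${zhipu_loss / zhipu_rev:.2f} lost per $1 of revenue")
print(f"MiniMax: ${minimax_burn / minimax_rev:.2f} burned per $1 of revenue")
```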
As I discussed a few weeks ago, NVIDIA claims to have shipped six million Blackwell GPUs, and while it may be employing very dodgy maths (claiming each Blackwell GPU is actually two GPUs because each one has two chips), my modeling of its last three quarters suggests that NVIDIA shipped around 5.33GW worth of GPUs — and based on reading about every single data center I can find, it doesn’t appear that many have been built and powered on. Worse still, NVIDIA’s diversified revenue is collapsing. In Q1FY26, two customers represented 16% and 14% of revenue; in Q2FY26, two customers represented 23% and 16% of revenue; and in Q3FY26, four customers represented 22%, 15%, 13% and 11% of total revenue, with all that money going toward either GPUs or networking gear. I go into detail here, but I put it in a chart to show you why this is bad: In simpler terms, NVIDIA’s revenue is no longer coming from a diverse swath of customers. In Q1FY26, NVIDIA had $30.84 billion of diversified revenue, in Q2 $28.51 billion, and in Q3 $22.23 billion. NVIDIA GPUs are astronomically expensive — $4.5 million for a GB300 rack of 72 B300 GPUs, for example — and filling data centers full of them requires debt unless you’re a hyperscaler. While I can’t say for sure, I believe NVIDIA’s diversified revenue collapse is a sign that smaller data center projects are starting to have issues getting funded, and/or that hyperscalers are pulling back on their GPU purchases. To look through the eyes of an AI booster — all I’m seeing is blue and yellow, as usual! — one might say that these big customers are covering the loss of revenue, but the reality is that these big projects run on debt issued by banks that are becoming increasingly worried about nobody paying them back.
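The “diversified revenue” figures above fall straight out of NVIDIA’s customer-concentration disclosures. A sketch, assuming quarterly revenue totals of roughly $44.06B, $46.74B and $57.0B (NVIDIA’s reported figures, which are not stated in this piece, so verify against its filings):

```python
# Back-deriving the diversified-revenue figures from NVIDIA's disclosed
# customer-concentration percentages. Quarterly totals are assumptions
# taken from NVIDIA's reported results; percentages are quoted above.
quarters = {
    "Q1 FY26": (44.06e9, [0.16, 0.14]),
    "Q2 FY26": (46.74e9, [0.23, 0.16]),
    "Q3 FY26": (57.00e9, [0.22, 0.15, 0.13, 0.11]),
}

for quarter, (total, big_customers) in quarters.items():
    # Everything not attributed to the disclosed largest customers.
    diversified = total * (1 - sum(big_customers))
    print(f"{quarter}: ${diversified / 1e9:.2f}B from everyone else")
```

Under those assumed totals, the outputs line up with the $30.84B, $28.51B and $22.23B cited above, which is what makes the downward trend so stark: total revenue grew every quarter while the non-concentrated slice shrank.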
The mistake that every investor, commentator, analyst and member of the media makes about NVIDIA is believing that its sales are an expression of demand for AI compute, when they’re really more a statement about the availability of debt from banks and private credit. Similarly, the continued existence of AI startups is an expression of the desperation of venture capital, and the continuing flow of massive funding rounds is a sign that VCs see no other avenues for growth. Eventually, data centers are going to go unbuilt, and data center debt packages will begin to fall apart. Remember, Oracle’s $38 billion data center deal is actually yet to close, much like Stargate New Mexico is yet to close. These deals, while seeming like they’re trending positively, are both incredibly important to the future of the AI bubble, and any failure will spook an already-nervous market. Only one link in the chain needs to break. Every part of the AI bubble — this fucking charade — is unprofitable, save for NVIDIA and the construction firms erecting future laser tag arenas full of negative-margin GPUs. What happens if the debt stops flowing to data centers? How will NVIDIA sell those 20 million Blackwell and Vera Rubin GPUs? What happens if venture capitalists start running low on funds, and can’t keep feeding hundreds of millions of dollars to AI startups so that they can feed them to Anthropic or OpenAI? What happens to OpenAI and Anthropic’s already negative-margin businesses when their customers run out of money? What happens to Oracle or CoreWeave’s work-in-progress data centers if OpenAI can’t pay its bills? What happens to Anthropic’s $21 billion of Broadcom orders, or its tens of billions in Google Cloud spend? In the last year, I estimate I’ve been asked the question “what if you’re wrong?” over 25 times. Every single time, the question comes with an undercurrent of venom — the suggestion that I’m being an asshole for daring to question the wondrous AI bubble.
Every single person who has asked this has been poorly read — both in terms of my work and the surrounding economics and technological possibilities of Large Language Models — and believes they’re defending technology, when in reality they’re defending growth, and the Rot Economy’s growth-at-all-costs mindset. In many cases they are not excited about technology, but about the prospect of being first in line to lick an already-sparkling boot. This has never been about progress or productivity. If it was, we’d actually see progress, or productivity boosts, or anything other than the frothiest debt and venture markets of all time. Large Language Models do not create novel concepts, they are inconsistent and unreliable, and even the “good” things they do vary wildly thanks to the dramatic variance of a giant probability machine. LLMs are not good enough for people to pay regular software prices at any scale, and the consequence of this is that every single dollar spent on GPUs has served exactly one purpose: manipulating the value of these companies’ stocks. AI does not deliver the business returns to justify its costs, and may have negative gross margins. It is inconsistent, ugly, unreliable, expensive and environmentally ruinous, pissing off a large chunk of consumers and underwhelming most of the rest, other than those convinced they’re smart for using it or those who have resigned themselves to giving up at the sight of a confidence game sold by a tech industry that stopped making products primarily focused on solving the problems of consumers or businesses some time ago. You may say that I’m wrong because Google, Microsoft, Meta and Amazon continue to have healthy net revenues and revenue growth, but as I previously said, these companies are not sharing AI revenues, and their existing businesses are still growing due to the massive monopolies they’ve built. And I want to make a plea to AI boosters and bullish analysts alike: you are being had.
Satya Nadella, Sam Altman, Dario Amodei, Jensen Huang, Mark Zuckerberg, Larry Ellison, Safra Catz, Elon Musk, Clay Magouyrk, Mike Sicilia, Michael Truell, Aravind Srinivas — all of them are laughing at you behind your back, because they know that you are never going to ask the obvious questions that would defeat my arguments, and know that you will never, ever push back on them. The enshittification of the shareholder has the downstream effect of an enshittification of the media and Wall Street analysts writ large. These companies own you. They treat you with disdain and condescension, because they know you’ll let them. They know that no sell-side analyst will ever ask them “when will you be profitable?” or “how much are you spending?” or, if you do ask, they know you will experience temporary amnesia and forget whatever answer they give, because these are the incentives of an enshittified stock market, where stocks are not extrapolations of shareholder value but chips in a fucking casino where the house always wins and changes the rules every three months. They have changed the meaning of “stock” to mean “what the market will reward,” and when you allow companies to start dictating the terms of what will be rewarded — as neoliberalism, Friedman, Reagan, Nixon, NAFTA, Thatcher, and every policy like them have, orienting everything exclusively around growth — companies eventually cut off any power that might compel a reevaluation of the fundamental terms of capitalism, and the incentives within. Focusing on growth-at-all-costs thinking naturally encourages, enables, and empowers grifters, because all they ever have to promise is “more” — more users, more debt, more venture, more features, more everything.
The very institutions that are meant to hold companies accountable — analysts and the media — are far more desperate to trade scoops for interviews, to pull punches, to find ways to explain why a company is right rather than understand what the company is doing, and this is something pushed not by writers, but by editors that want to make sure they stay on the right side of the largest companies. And if I’m right, OpenAI’s death will kill off most if not all other AI startups, Anthropic included. Every investor that invested in AI will take massive losses. Every startup that builds on the back of their models will see their company fold, if it hasn’t already due to the massive costs and upcoming price increases. The majority of GPU-based data centers — which really have no other revenue stream — will be left inert, likely powered down, waiting for the day that somebody works it all out, which they won’t, because literally everybody has these things now and I truly believe they’ve tried everything. I don’t “hate on AI” because I am a hater, I hate on it because it fucking sucks and what I’m worried about happening seems to be happening. The tech industry has run out of hypergrowth ideas, and in its desperation hitched itself to the least-profitable hardware and software in history, then spent three straight years lying about what was possible to the media, analysts and shareholders. And they were allowed to lie , because everybody lapped it the fuck up. They didn’t need to worry about convincing anybody. Financiers, editors, analysts and investors were already drafting reasons why they were excited about something they didn’t really understand or believe in, other than the fact it promised more.  This is what happens when you make everything about growth: everybody becomes stupid, ready to be conned, ready to hear what the next big growth thing is because asking nasty questions gets you fucking fired. 
And what’s left is a tech industry that doesn’t build technology, but growth-focused startups.  Look at Silicon Valley. Do you see these fucking people ever building a new kind of computer? Do you believe these men are fit to even imagine a future? These men care about the status quo; they want to always have more software to sell, or more ways to increase advertising revenue, so that the stock number goes up and they receive more money in the form of stock compensation. They are concerned with neither actual business value, nor honest exchange of value, nor societal value. They exist only for shareholder value, because that is how their boards of directors incentivize them.  And really, if you’re still defending AI — does it matter to any of you that this software fucking sucks? If you think it’s good, you don’t know much about software! It does not respond precisely at any point to a user or programmer’s intent. That’s bad software. I don’t care that you have heard developers really like it, because that doesn’t fix the underlying economic and social poison in AI. I don’t care that it sort of replaced search for you. I don’t care if you “know a team of engineers that use it.” Every single AI app is subsidized, its price is fake, you are being lied to, and none of this is real. When the collapse happens, do not let a single person that waved off the economics have a moment’s peace. Do not let anybody who sat in front of Dario Amodei or Sam Altman and squealed with delight at whatever vacuous talking points they burped out forget that they didn’t push them, they didn’t ask hard questions, they didn’t worry or wonder or feel any concern for investors or the general public. Do not let a single analyst that called AI skeptics “luddites” or equated them to flat Earthers hear the end of it. Do not let anybody who claimed that we “lost control of AI” or “ blackmailed developers ” go without their complimentary “Fell For It Again” badge. 
When it happens, I promise I won’t be too insufferable, but I will be calling for accountability for anybody who boosted AI 2027 , who sat in front of Sam Altman or Dario Amodei and refused to ask real questions, and for anyone who collected anything resembling “detailed notes” about me or any other AI skeptic. If you think I’m talking about you, I probably am, and I have a question: why didn’t you approach the AI companies with as much skepticism as you did the skeptics? I also promise you, if I’m wrong , I’ll happily explain how and why, and I’ll do so at length, too. I will have links and citations, and I’ll do podcast episodes. I will make a good faith effort to explain every single failing, because my concern is the truth, and I would love everybody else to follow suit. Do you think any booster will have the same courtesy? Do you think they care about the truth? Or do they just want to get a fish biscuit from Sam Altman or Jensen Huang?  Pathetic.   It’s times like this where it’s necessary to make the point that there is absolutely “enough money” to end hunger, build enough affordable housing, or provide universal healthcare, but these things would be “too expensive” or “not profitable enough,” despite having a blatant and obvious economic benefit: more people would have happier, better lives, and — if you must see the world in purely reptilian terms — many more people would have disposable income and the means of entering the economy on even terms. By contrast, investments in AI do not appear to be driving much economic growth at all, other than in the revenue driven to NVIDIA from selling these GPUs, and the construction of the data centers themselves. 
Had Microsoft, Google, Meta and Amazon sunk $776 billion into building housing and renting it out, the world would be uneven, and we would have horrible new landlords, but it would still be a great deal better than one where nearly a trillion dollars is being wasted propping up a broken, doomed industry, all because the people in charge are fucking idiots obsessed with growth.  The future, I believe, spells chaos, and I am trying to rise to the occasion. My work has transformed from being critical of the tech industry to a larger critique of the global financial system. I’ve had to learn accountancy, the mechanics of venture and private equity, and all sorts of annoying debt-related language, all so that I can sufficiently explain what’s going on. I see several worrying signs I have yet to fully understand. The Discount Window — where banks go when they need quick liquidity as a last resort — has seen a steady increase of loans on its books since September 2024 , suggesting that financial institutions are facing liquidity issues, and the last few times that this has happened, financial crises followed.  There is also a brewing bullshit crisis in private equity, which is heavily invested in data centers.  In September, auto parts maker First Brands collapsed in a puff of fraud, with billions of dollars “ vanishing ” after it double-pledged the same collateral to multiple loans, hid off-balance-sheet liabilities, falsified invoices, and even leased out some of the parts it sold. This wasn’t a case where smaller lenders were swindled, either — global investment banks UBS and Jefferies both lost hundreds of millions of dollars , along with asset manager BlackRock through associated funds.  Subprime auto lender Tricolor collapsed in similar circumstances , burning JPMorgan , Jefferies, and Zions Bancorporation, which also loaned money to First Brands. 
A similar situation is currently brewing with solar company PosiGen, which recently filed for bankruptcy after, you guessed it, double-pledging collateral for loans. One of its equity financing backers is Magnetar Capital , who invested in CoreWeave. What appears to be happening is simple: large financial institutions are issuing debt without doing the necessary due diligence or considering the future financial health of the companies involved. Private equity firms are also heavily leveraged, saddling acquisitions with debt and playing silly games where they “volatility launder” — deliberately choosing not to regularly revalue the assets they hold, to make returns (or the value of those assets) look better to their investors .  I don’t really know what this means right now, but I am worried that these data center loans have been entered into under similarly questionable circumstances. Every single data center deal is based on the phony logic that AI will somehow become profitable one day, and if there’s even one First Brands situation, the entire thing collapses. I realize this is the longest thing I’ve ever written (or should I say written so far? ), and I want to end it on a positive note, because hundreds of thousands of people now read and listen to my work, and it’s important to note how much support I’ve received and how awesome it is seeing people pick up my work and run with it. I want to be clear that there is very little that separates you from the people running these companies, or many analysts. I have taught myself everything I know from scratch, and I believe you can too, and I hope I have been able — and will continue to be able — to teach you everything I know, which is why everything I write is so long. Well, that and I’m working out what I’m going to say as I write it. The AI bubble is an inflation of capital and egos, of people emboldened and outright horny over the prospect of millions of people’s livelihoods being automated away. 
It is a global event where we’ve realized how the global elite are just as stupid and ignorant as anybody you’d meet on the street — Business Idiots that couldn’t think their way out of a paper bag, empowered by other Business Idiots that desperately need to believe that everything will grow forever. I have had a tremendous amount of help in the last year — from my editor Matt Hughes , Robert and Sophie at Cool Zone Media, Better Offline producer Matt Osowski, Kakashii and JustDario (two pseudonymous analysts that know more about LLMs and finance than most people I read), Kasey Kagawa , Ed Ongweso Jr ., Rob Smith , Bryce Elder and Tabby Kinder of the Financial Times, all of whom have been generous with their time, energy and support. A special shoutout to Caleb Wilson ( Kill The Computer ) and Arif Hasan ( Wide Left ), my cohosts on our NFL podcast 60 Minute Drill .  And I’ve heard from thousands of you about how frustrated you are, how none of this makes sense, how crazy you feel seeing AI get shoved into every product, and how insane it makes you feel when somebody tells you that LLMs are amazing when their actual outputs fucking suck. We are all being lied to, we all feel gaslit and manipulated and punished for not pledging ourselves to Sam Altman’s graveyard smash, but I believe we are right . In the last year, my work has gone from being relatively popular to being cited by multiple major international news organizations, hedge funds, and internal investor analyses. I was profiled by the Financial Times , went on the BBC twice , and watched as my Subreddit, r/ BetterOffline , grew to around 80,000 visitors a week and became one of the 20 largest podcast Subreddits, which is a bigger deal than it sounds. I believe there are millions of people that are tired of the state of the tech industry, and disgusted at what these people have done to the computer. 
I believe that they outnumber the boosters, the analysts and the hype-fiends that have propped up this era. I believe that a better world is possible by creating a meaningful consensus around making the powerful prove themselves to us, rather than us doing the proving for them. I am honoured that you read me, and even more so if you read this far. I’ll see you in 2026.

Meta’s business is both supporting and profiting from organized crime, and at 10% of its revenue, it’s also kind of dependent on it. Meta is using deliberate and insidious accounting tricks to act like a data center that it is paying to build and will be the sole tenant of is somehow an “off balance sheet” operation.

In Stage 1, things are good for users: the platform is free, things are easy-to-use, and thus it’s really simple for you and your friends to adopt and become dependent on it. In Stage 2, things become bad for consumers, but good for business customers: the platform begins forcing users to do “profitable” things — like show them more adverts by making search results worse — all while making it difficult to migrate to another one, either through locking in your data or the tacit knowledge that moving platforms is hard, and your friends are usually in one place. Businesses sink tons of money into the platform, knowing that users are unlikely to leave, and make good money buying ads against a populace that increasingly stays because it has to as there are no other options. In Stage 3, things become bad for consumers and businesses, but good for shareholders: the platforms begin to deteriorate to the point that usability is pushed to the brink, and businesses — who are now dependent on the platform because monopolies have pushed out every alternative platform to advertise or reach consumers — begin to see their product crumble, all in favour of shareholder capital, which only cares about stock value, net income and buybacks. 
According to its latest quarterly filings, Microsoft spent $34.9 billion on capital expenditures , Amazon $34.2 billion , Meta $19.37 billion , and Google $24 billion . The common mantra is that these companies are “spending all this money on GPUs,” but that doesn’t match up with NVIDIA’s revenues. NVIDIA’s last quarterly earnings said that four direct customers made up more than 10% of revenue — 22% ($12.54bn), 15% ($8.55bn), 13% ($7.41bn) and 11% ($6.27bn) out of $57 billion.  While this sort of lines up with capex spend, it doesn’t if you shift back a quarter, when Microsoft spent $21.4 billion , Meta $17.01 billion , Amazon $31.4 billion and Google $22.4 billion , with the vast majority on “technical infrastructure.”  In the same quarter, NVIDIA had only two customers that accounted for more than 10% — one 23% ($10.7bn) and one 16% ($7.47bn) out of $46.7 billion. Another quarter back, Microsoft spent $22.6 billion , Meta $13.69 billion , Google $17.2 billion and Amazon $22.4 billion . In the same quarter, NVIDIA had two customers accounting for more than 10% of revenue — 16% ($7.049bn) and 14% ($6.168bn). Where, exactly, is all this money going? In Microsoft’s latest earnings (Q1FY26), it said that $19.39 billion went to “additions to property and equipment,” with “roughly half of [its total capex] spend on short-lived assets, primarily GPUs and CPUs.” A quarter back (Q4FY25), additions to property and equipment were $16.74 billion, with “roughly half…[spent] on long-lived assets that will support monetization over the next 15 years and beyond.”  Let’s assume that Microsoft is NVIDIA’s biggest customer every single quarter — customer A, spending $12.5 billion (out of $34.9 billion), $10.7 billion (out of $21.4 billion) and $7.049 billion (out of $22.6 billion) a quarter. Assuming that Microsoft is only buying NVIDIA’s Blackwell GPUs (forgive the model numbers, but it’s based on my own modeling. 
Let’s say 40% B200s, 30% GB200s, 10% B300s and 20% GB300s), that works out to about 457MW of IT load for Q1FY26, 391MW for Q4FY25 and (adjusting to include more H200s, as the B300/GB300s were not shipping yet) 263MW for Q3FY25.  Has Microsoft built 1.11GW of data centers in that time? Apparently! It claims it added 2GW in the last year , but Satya Nadella claimed in November that Microsoft had chips in inventory it couldn’t install due to a lack of power.  In any case, where did the remaining $22.4 billion, $11.9 billion and $15.5 billion in capex flow? We know there are finance leases. What for? More GPUs? What is the actual output of these expenditures? OpenAI appears to have net 360 payment terms from CoreWeave — meaning it can pay literally a year from invoice .  Per CoreWeave’s Q3 earnings (page 19), “...on occasion, the Company has granted payment terms up to net 360 days.” Per CoreWeave’s loan agreement (page 12), under “contract realization ratio,” “the sum of Projected Contracted Cash Flows applicable for the corresponding three-month period as determined on a net 360 basis.” CoreWeave is required to maintain a “contract realization ratio” of at least 0.85x — meaning that CoreWeave has to collect at least 85 cents of every expected dollar, or it is in default on its loan. This is important to note because it means that if, say, OpenAI decides not to pay up in a year, CoreWeave will be in real trouble. Blue Owl was present in every single Stargate deal, other than the $38 billion package being raised by Vantage. It was also involved in a $1.3 billion Australian data center debt package by virtue of owning Stack Infrastructure . Remember that name.  
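The covenant described above is simple enough to sketch in a few lines. This is a minimal illustration of the mechanics, not CoreWeave's actual accounting: the 0.85x floor is from the loan agreement quoted in the text, while the dollar figures and function names are made up for the example.

```python
# Hypothetical sketch of the "contract realization ratio" covenant:
# cash actually collected in a three-month period, measured against
# the Projected Contracted Cash Flows for that period (net 360 basis).
# The 0.85x floor comes from the loan agreement cited above; every
# other number here is invented for illustration.

COVENANT_FLOOR = 0.85

def contract_realization_ratio(cash_collected: float, projected_contracted: float) -> float:
    """Realized cash as a fraction of what the contracts said would arrive."""
    return cash_collected / projected_contracted

def in_default(cash_collected: float, projected_contracted: float) -> bool:
    """True if the ratio falls below the 0.85x covenant floor."""
    return contract_realization_ratio(cash_collected, projected_contracted) < COVENANT_FLOOR

# Illustration: contracts project $1.00bn of cash for the quarter.
print(in_default(0.90, 1.00))  # False — 0.90x clears the floor
print(in_default(0.80, 1.00))  # True  — 0.80x breaches it
```

The net-360 terms are what make this dangerous: a customer can be a year late before the shortfall shows up in the ratio, at which point the breach arrives all at once.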
MUFG (Mitsubishi UFJ Financial Group) was present in 17 out of 26 of the deals, including three separate CoreWeave financings, Stargate New Mexico ($18 billion), the $38 billion Stargate TX/WI deal for Oracle , SoftBank’s bridge loan , and a $5 billion “green loan” package for Vantage Data Centers (who are the ones building the Stargate TX/WI data centers). JP Morgan Chase was involved in eight deals, but they were some of the largest — CoreWeave’s October 2024 financing, DDTL 3.0 and November financing , the funding behind Stargate Abilene , the $38 billion Oracle deal, and Blue Owl’s acquisition of IPI Partners’ Data Centers in 2024 . They also were part of SoftBank’s bridge loan. Deutsche Bank was involved in SoftBank’s bridge loan, but also three smaller deals: a $212 million data center in Seoul , CoreWeave’s 2024 debt, CoreWeave’s November financing , and a data center in Latin America. It also was part of a $610 million data center project in Virginia , as well as a €1 billion data center project in Germany (invested in with NVIDIA). BNP Paribas? Seven deals: CoreWeave’s DDTL 3.0, Stargate New Mexico, Stargate WI/TX, the acquisition of IPI Partners by Blue Owl, the $212m deal in Seoul, and a data center in Chile . Morgan Stanley? Eight, including CoreWeave’s October 2024, DDTL 3 and November loans, Stargate New Mexico, Stargate WI/TX, EQT’s EdgeConnex financing deal , and, of course, SoftBank’s bridge loan. SMBC (Sumitomo Mitsui Banking Corporation) ? Seven deals, all notable — CoreWeave’s DDTL 3.0 and November financing, Stargate New Mexico, Stargate TX/WI, a data center in Rowan MD (also involving MUFG, TD Securities and HSBC), as well as the data centers in Chile and Latin America. Oh, and SoftBank’s bridge loan. 
The enshittified stock market, pumped not by actual cashflow or productivity but by signals — read by analysts and investors trained over decades to push consumer investors into the Magnificent 7 stocks that represent as much as 40% of the value of the S&P 500 — their values inflated by analysts and the media misleading investors into believing that their revenue growth has anything to do with AI.

Venture capital’s liquidity crisis, peaking at a time when AI startups have become more capital-intensive than at any other point in history.

Ballooning, centralized data center debt, funded based on customer contracts or built for demand that doesn’t exist, funding massive data centers of GPUs that immediately become commoditized as a result of the hysteria.

The market for AI compute is very, very small. If you assume that Anthropic spent the same on Google Cloud as it did on AWS ($2.66 billion, for a total of $5.32 billion), and add CoreWeave’s revenue ($5 billion, most of which was either OpenAI (via Microsoft) or NVIDIA), there doesn’t appear to be an AI compute market outside of serving these two companies. The market for AI compute is not actually growing. In the last two years, no new major consumers of AI compute have emerged. Every company that has signed a large compute deal has been either OpenAI, Anthropic or a hyperscaler. Even if Cursor were to dump its entire $2.3 billion in funding into AI compute, that would still not be enough.
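The market-sizing arithmetic above fits in a few lines. Note the hedge carried over from the text: the Google Cloud figure is an assumption (that Anthropic's GCP spend matches its reported AWS spend), not a reported number.

```python
# Back-of-the-envelope version of the market-sizing argument above.
# Figures in billions of dollars.

anthropic_aws = 2.66   # reported AWS spend
anthropic_gcp = 2.66   # ASSUMED equal to AWS spend (not a reported figure)
coreweave_rev = 5.0    # CoreWeave revenue, mostly OpenAI (via Microsoft) / NVIDIA

implied_market = anthropic_aws + anthropic_gcp + coreweave_rev
print(f"Implied third-party AI compute market: ~${implied_market:.2f}bn")
# About $10.3bn — nearly all of it from serving just two labs.
```

The point of the exercise isn't the precise total; it's that once you subtract OpenAI and Anthropic, there is almost nothing left to call a "market."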


Premium - How The AI Bubble Bursts In 2026

Hello and welcome to the final premium edition of Where's Your Ed At for the year. Since kicking off premium, we've had some incredible bangers that I recommend you revisit (or subscribe and read in the meantime!): I pride myself on providing a ton of value in these pieces, and I really hope if you're on the fence about subscribing you'll give me a look. Last week was a remarkably grim one for the AI industry, replete with terrible news and "positive stories" that still leave investors with a vile taste in their mouths. Let's recount: There are a few common threads between all of these stories: And the other key thread is the year 2026. Next year is meant to be the year that everything changes. It was meant to be the year that OpenAI had a gigawatt of data centers built with Broadcom and AMD , and when Stargate Abilene's 8 buildings were fully built and energized . 2026 is meant to be the year that OpenAI opened Stargate UAE , too. Here in reality , absolutely none of this is happening, and I believe that 2026 is the year when everything begins to collapse. In today's piece, I'm going to line up the sharp objects sitting right next to an increasingly-wobbling AI bubble, and explain why everything hinges on a looming cash crunch for OpenAI, AI data centers, those funding AI data centers, and venture capital itself. The Hater's Guide To NVIDIA , a comprehensive guide to the largest and weirdest company on the stock market, which was several weeks ahead of most on the "GPUs in warehouses" story. Big Tech Needs $2 Trillion In AI Revenue By 2030 or They Wasted Their Capex , a mathematical breakdown of how big tech has to make so much money before 2030 or it will have wasted every penny building AI data centers. Oracle and OpenAI Are Full Of Crap , where I broke down how Oracle doesn't have the capacity and OpenAI doesn't have the money to pay for their $300 billion compute deal, predicting the current state of affairs with Oracle's data centers months in advance. 
The Ways The AI Bubble Will Burst , a detailed piece about how the collapse of AI data center funding will eventually lead to the collapse of AI startup funding, creating a " chain of pain " that eventually leads to nobody buying GPUs and the end of this era. Disney is investing $1 billion in OpenAI in a deal where OpenAI will " bring beloved characters from Disney's brands to Sora ," including a three-year licensing deal. One might think that a licensing deal is weird, given that Disney is investing, and one would be right! Apparently OpenAI is "paying" to license Disney's characters entirely in stock warrants , and Disney has the opportunity to buy an undisclosed amount of future stock. Amazon is in discussions to invest $10 billion in OpenAI at a valuation of over $500 billion, per The Information , and plans to use Amazon's Trainium AI server chips (its in-house competitor to NVIDIA's GPUs that some startups, per Business Insider , claim have "performance challenges" and "underperformed" NVIDIA's years-old H100 chips), apparently. Any excitement you might have over this deal should be tempered by the fact that OpenAI and Amazon Web Services signed a $38 billion deal back in November , meaning that this is likely a situation where Amazon would hand money to OpenAI, which would then hand the money right back to Amazon, and that's assuming any real money actually changes hands. Though this is just one source, I've heard tell that Amazon, at times, sells Trainium at a loss to get customers. Then again, I think this might be the case with all AI compute. Bloomberg reported that Oracle has pushed back the completion date of multiple data centers being built for OpenAI, "largely due to labor and material shortages." Oracle responded , saying that "there have been no delays to any sites required to meet our contractual commitments, and all milestones remain on track." It isn't clear what data centers these are, but a clue might be... 
...that Blue Owl has pulled out of funding a $10 billion deal for a data center for Oracle/OpenAI in Michigan, per The Financial Times . This is a very, very, very bad sign. Blue Owl is arguably the loosest, friendliest lender in the data center space, and while the deal's backers are now allegedly talking to Blackstone, one has to wonder whether Blackstone is lining up to fund "the deal that Blue Owl couldn't handle." Blue Owl is the pre-eminent lender in data center financing. It backed Meta's $30 billion Hyperion data center project with $3 billion of its own capital , it sank $3 billion into OpenAI's Stargate New Mexico deal , and an indeterminate amount in Stargate Abilene, likely somewhere between $2.5 billion and $5 billion , on top of a $7.1 billion loan provided to Blue Owl and developer Crusoe to finish the project , on top of another $5 billion joint venture with Chirisa and PowerHouse to build a data center for rickety, nasty AI compute company CoreWeave . So why did this deal fall apart? Well, according to the Financial Times, "lenders pushed for stricter leasing and debt terms amid shifting market sentiment around enormous AI spending including Oracle’s own commitments and rising debt levels." If only somebody could have warned them , somehow . Though I'll get into more detail after the premium break, both Oracle and Broadcom reported earnings, and both saw their stocks get dumped like a deadbeat boyfriend with a bad attitude and credit card debt. In Oracle's case it was the same old story — lots of debt, decaying margins and negative cash flow, along with a bunch of commitments. Did I mention that Oracle has $248 billion in upcoming data center lease commitments ? More than double those made by Microsoft? In Broadcom's case, things were a little weirder. 
While it beat on estimates, it partly did so, per The Coastal Journal , by playing funny non-GAAP (GAAP being generally accepted accounting principles) games with things like how it handles stock compensation and amortization to raise its "adjusted" earnings per share, boosting non-GAAP revenues by $4.4 billion. The other problem was related to OpenAI. Back in October, Broadcom and OpenAI announced a "strategic collaboration" for "10 gigawatts of customer AI accelerators ," with "Broadcom to deploy racks of AI accelerator and network systems targeted to start in the second half of 2026, to complete by 2029." I'll get into the nitty gritty later, but CEO Hock Tan said that Broadcom " did not expect much [revenue]" in 2026 from the deal. CoreWeave's Denton data center has become a nightmare, with, per the Wall Street Journal , heavy rains and winds causing "a roughly 60-day delay" that prevented contractors from pouring concrete, pushing the completion date back by "several months" on top of "additional delays caused by revisions to design" — for a data center specifically built to lease to OpenAI. OpenAI doesn't have cash. The Disney licensing deal? Paid for in stock. The AWS contract? Amazon has to give OpenAI $10 billion to pay for it, because OpenAI doesn't have the cash. Broadcom's deal with OpenAI? "Not much" revenue in 2026, probably because OpenAI doesn't have the cash. The Money For Data Centers Is Running Out. Blue Owl is the loosest lender in the universe, and if it’s having trouble raising money, everybody will very soon. Investors are aggressively dumping Oracle because it keeps trying to build more data centers for OpenAI, a company that does not have the money to pay for its compute. AI Is Wearing Out Its Welcome, and the AI Bubble Narrative Is Impossible To Ignore. It used to be (back in September, at least) that you could announce a big, stupid deal with OpenAI and see a 40% stock bump . 
Now the markets are suddenly thinking "huh, how is it gonna pay that?" Oracle's stock also got dumped because it increased capital expenditures in its latest quarter to $12 billion, on analyst expectations of $8.4 billion .
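The non-GAAP game mentioned above with Broadcom works the same way at most companies: take GAAP net income, add back expenses like stock-based compensation and acquisition-related amortization, and divide by shares. This sketch uses entirely invented numbers (and ignores tax effects) to show why "adjusted" EPS always lands above the GAAP figure; it is a generic illustration, not Broadcom's actual arithmetic.

```python
# Generic, illustrative non-GAAP "adjusted" EPS construction.
# All figures hypothetical; tax effects ignored for simplicity.

def adjusted_eps(gaap_net_income: float, stock_comp: float,
                 amortization: float, shares: float) -> float:
    # Add-backs raise "adjusted" earnings relative to GAAP.
    return (gaap_net_income + stock_comp + amortization) / shares

gaap = 4.0      # $bn GAAP net income (hypothetical)
sbc = 1.5       # $bn stock-based compensation added back
amort = 1.0     # $bn amortization added back
shares = 4.7    # bn shares outstanding (hypothetical)

print(f"GAAP EPS:     ${gaap / shares:.2f}")
print(f"Adjusted EPS: ${adjusted_eps(gaap, sbc, amort, shares):.2f}")
```

Nothing here is illegal; the issue the text raises is that headlines quote the adjusted number while the add-backs (stock comp especially) are very real costs to shareholders.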

iDiallo 2 months ago

Paying for the rides I took 8 years ago

What does it mean when we say that investors are subsidizing the price of a service? We often hear that ChatGPT is not profitable, despite some users paying $20 a month, or others up to $200 a month. The business is still losing money despite everything we're paying. To stay afloat, OpenAI and other AI companies have to use money from their investors to cover operations until they find a way to generate sustainable income. Will these AI companies capture enough market share and attract enough paying customers to become profitable? Will they find the right formula or cheap enough hardware to be sustainable? Lucky for us, we have the benefit of hindsight. Not for AI companies, but for an adjacent company that relied entirely on investor funds to capture market share and survive: Uber. Uber is now a publicly traded company on the NASDAQ. They first became profitable in 2023, with a net income of $1.89 billion. In 2024, they generated $9.86 billion in profit. If you're wondering what their numbers looked like in 2022, it was a net loss of $9.14 billion. When they were losing money, that was investor money. They were doing everything in their power to crush the competition and remain the only player in town. Once they captured enough market share, they pulled a switcheroo. Their prices went from extremely affordable to just being another taxi company. I took my first Uber ride in 2016. I had car troubles, and taking the bus to work would have turned a 20-minute drive into three bus rides and an hour and 20 minutes of commuting. Instead, I downloaded Uber. Within minutes, my ride was outside waiting for me. I walked to the passenger side up front and opened the door, only to find a contraption I wasn't familiar with. The driver politely asked me to sit in the back. He was paraplegic. On the ride, we had a good conversation until he dropped me off at work. A notification appeared on my phone with the price: $3.00. That's how much it cost for a 5-mile drive. 
For reference, taking the bus would have cost $1.50 per ride. A day pass was $5.00 at the time. But with Uber, it was $3.00 and saved me a whole lot of time. I didn't even have to think about parking once I got to work. I didn't question it because, well, it was cheap and convenient. Throughout my time at that job, I took these rides to work. When I opened the app one day and the price was suddenly $10, I didn't even flinch. I closed the app and opened Lyft as an alternative. At most, I would pay $6. If it was too expensive, I would just spend another 20 minutes at work and wait for the surge to end and prices to go back down. This felt like a cheat code to life. At that point, I questioned whether it was even worth owning a car. Mind you, I live in Los Angeles, a city where you can't do much without a car and our transit system is nothing to brag about. Nobody made money, but everybody got paid. From time to time, I would wonder: if I'm paying these measly prices for transportation, how much is the driver making? Obviously, if Uber took its cut from the $3 ride, there wouldn't be much left for the driver. But my answer came from the drivers themselves. They loved Uber. Some of them said they could make up to $80,000 a year just driving. How many $3 rides does that take? You see, there were bonuses and goals they could reach. If they completed 100 rides in a given timespan, they would qualify for a bonus. Something like an extra $500. If they did 300 rides, they could double the bonus. The whole thing was gamified. In the end, Uber was happy, the driver was happy, and the rider was happy. It was the same for Lyft. There were incentives everywhere. Nobody made money, but everybody got paid. This is what it looks like when investors subsidize the cost. So what does it look like when they stop subsidizing the cost? Well, in 2022, I took those same rides. From my old apartment to that job. Instead of $3, it cost around $24. That's an 8x increase. 
Ridesharing is the norm these days. People hardly take taxis anymore. The Ubers and Lyfts of the world have dominated the industry by making rides so cheap that they decimated the old guard. Now that they're the only players in town, they've jacked up the prices, and hardly anyone complains. We've already changed our habits. We've forgotten what the alternative looks like.

This should serve as a preview for subsidized technologies like AI. Right now, everyone is offering it for free or at unsustainable prices. Companies are in a race to capture users, train us to integrate AI into our workflows, and make us dependent on their platforms. While I can see someone paying $20, $30, or even $60 for a rideshare in an emergency, I don't see average people paying $200 for a ChatGPT subscription. Even that is at a net loss. But that's exactly the point. Right now, it doesn't matter what we pay for these subscriptions. The goal for these companies is for AI to become essential to how we work, create, and think.

Once these companies capture enough market share and eliminate alternatives, they'll have the same leverage Uber gained. They'll start with a modest price increase, maybe $25 becomes $40. Then $60. Then tiered pricing for different levels of capability. Before long, what feels optional today will feel mandatory, and we'll pay whatever they ask because we'll have built our lives around it. Imagine a future where completing a legal document requires access to agentic AI, where you literally cannot do it unless you shell out for a subscription to Gemini Ultra Pro Max Turbo.

The subsidy era never lasts forever. Right now, whenever I have no choice but to take Uber, I'm paying back the remaining $21 from those rides I took eight years ago. Today, venture capitalists are paying for your AI queries just like they paid for my rides. But it's not a charity. Enjoy it while it lasts, but don't forget that someone, eventually, will have to pay the real price.
And that someone will be us.

James Stanley 2 months ago

Student loan deductions

Around May or June last year I finally paid off my student loan. I called up the Student Loans Company, and either made a card payment over the phone or made a bank transfer of the amount they instructed me to pay, I don't recall which. And I didn't think any more of it.

4 weeks ago I filled in my self-assessment tax return and it was saying it wanted some money for a student loan repayment. That's strange, because I paid off my student loan ages ago. So I didn't submit my self-assessment just yet; I wanted to resolve the student loan issue first.

So then I checked on the Student Loans Company website and it was saying I still owed 67p on my student loan! Presumably this is interest that had accrued before I paid off the loan but had not been added to my balance at the point I paid it off. How annoying. Surely this is not the first time anyone has called up wanting to repay their loan; they ought to have a process for this. If they had just taken £1 more off me at the time I paid it off I wouldn't have noticed and it wouldn't matter, but now I have an unwarranted bill from HMRC for thousands of pounds of student loan repayment, to go towards repaying my 67p balance.

So I phoned up the Student Loans Company again, on the same day I discovered the problem, about 4 weeks ago. The first person I spoke to didn't seem particularly competent and hung up on me mid-conversation. I called back and got someone better. I impressed upon this person how important it was that I completely pay off the loan and not leave a random penny still owed. I asked if I could pay 68p instead of 67p just to make sure, but they said no and assured me that 67p was the correct amount. They said it could take 5 working days until the "stop notice" arrives at HMRC (why? Are they sending it by post?) and after that HMRC will no longer want to take a student-loan repayment. So I left it for a while.
Now approximately 20 working days have passed, and I loaded up the HMRC web interface and checked my self-assessment again and it is still wanting to take the same amount of money for a student loan repayment.

There is an important but ambiguously-worded question in the self-assessment form: "Did you receive notification from Student Loans Company that repayment of an Income Contingent Student Loan began before 6 April 2025?" I don't recall receiving any such notification, but perhaps I did a long time ago. I think what they're getting at is "Should you be making student loan repayments?", and my answer is "Yes". Since I have now definitely paid off my student loan I tried changing my answer to "No". Only it won't let me, because another part of my tax return says that I made £200 of student loan repayments via PAYE, and therefore I must have been making student loan repayments. OK, fine.

So I called up HMRC. Their automated voice informed me that the recent average wait time is 20 minutes, and it also advised me at one point to "just hang up". It asked me what I was calling about, and I explained the situation, and to my surprise and delight it responded something like "I think you're asking about reducing your student loan deduction, is that correct?" - Yes! Good bot! Wow, isn't technology something? This stuff never used to work. So I say "yes" and it goes into this spiel about reducing the deduction that your employer is taking, which is actually not relevant to my problem, and how you can only do this via the Student Loans Company and not via HMRC, and at the end of its monologue it said "Thanks for calling. Goodbye" and hung up on me!

I have just logged in to the Student Loans Company website again and it is showing that I made a 67p payment but I still have a 2p balance! What the fuck? So now what? Do I just pay the thousands of pounds and hope to get refunded later? Do I persist in trying to engage this Kafkaesque system?
I'm not really sure what the lessons are here. Don't use PAYE? Don't take out a student loan? Don't bother paying off your student loan, because it won't reduce your repayments anyway? I wanted to make a one-off loan payment of £5, wait 5 working days to see if the repayment disappears from my tax return, and then never bother chasing up my £4.98 refund. But it won't even let me! My word. Is it even worth their while taking a 2p card payment? And would a 2p payment even work, or is this just Zeno's loan repayment paradox?
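One plausible explanation for the stray 67p (and the 2p after it) is interest that accrues daily but only posts to the displayed balance later. Here is a toy model of that mechanism with made-up numbers; this is a guess at how the accounting might work, not anything the Student Loans Company has confirmed:

```python
# Toy model: why paying the quoted balance can leave pennies owing.
# Assumes interest accrues daily but is only added to the displayed
# balance at statement time -- an assumption, not SLC's actual accounting.

def residual_after_payoff(displayed_balance: float, annual_rate: float,
                          days_since_statement: int) -> float:
    """Interest accrued since the last statement, not yet shown to the borrower."""
    daily_rate = annual_rate / 365
    accrued = displayed_balance * daily_rate * days_since_statement
    return round(accrued, 2)

# A hypothetical £1,500 balance at 6%, cleared 11 days after the statement
# date, leaves about £2.71 of unbilled interest behind.
print(residual_after_payoff(1500.00, 0.06, 11))  # 2.71
```

Pay that residual and a few more days of interest may accrue on it in turn, which would explain 67p shrinking to 2p. Unlike Zeno's paradox, though, the sequence terminates: once the accrual on a 2p balance rounds below a penny, there is nothing left to chase.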


Premium: The Ways The AI Bubble Might Burst

[Editor's Note: this piece previously said "Blackstone" instead of "Blackrock," which has now been fixed.]

I've been struggling to think about what to write this week, if only because I've written so much recently and because, if I'm honest, things aren't really making a lot of sense. NVIDIA claims to have shipped six million Blackwell GPUs in the last four quarters — as I went into in my last premium piece — working out to somewhere between 10 GW and 12 GW of power (based on the power draw of B100 and B200 GPUs and GB200 and GB300 racks), which...does not make sense based on the amount of actual data center capacity brought online.

Similarly, Anthropic claims to be approaching $10 billion in annualized revenue — so around $833 million in a month — which would make it competitive with OpenAI's projected $13 billion in revenue, though I should add that based on my reporting extrapolating OpenAI's revenues from Microsoft's revenue share, I estimate the company will miss that projection by several billion dollars, especially now that Google's Gemini 3 launch has put OpenAI on a "Code Red," shortly after an internal memo revealed that Gemini 3 could “create some temporary economic headwinds for [OpenAI]."

Which leads me to another question: why? Gemini 3 is "better," in the same way that every single new AI model is some indeterminate level of "better." Nano Banana Pro is, to Simon Willison, "the best available image generation model." But I can't find a clear, definitive answer as to why A) this is "so much better," B) why everybody is freaking out about Gemini 3, and C) why this would have created "headwinds" for OpenAI, headwinds so severe that it has had to rush out a model called Garlic "as soon as possible" according to The Information:

Right, sure, cool, another model. Again, why is Gemini 3 so much better and making OpenAI worried about "economic headwinds"?
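To go back to those Blackwell numbers for a moment, here's the sanity check behind that 10 GW to 12 GW range. The per-GPU kilowatt figures are my own rough assumptions for all-in draw (chip plus its share of rack and cooling overhead), not NVIDIA specifications, and real deployments vary by SKU:

```python
# Rough sanity check: 6 million shipped Blackwell GPUs at assumed
# per-unit power draws. The kW figures are illustrative assumptions.
SHIPPED_GPUS = 6_000_000
LOW_KW_PER_GPU = 1.7   # assumed all-in draw, B100/B200-class deployments
HIGH_KW_PER_GPU = 2.0  # assumed all-in draw, GB200/GB300 rack-scale systems

low_gw = SHIPPED_GPUS * LOW_KW_PER_GPU / 1_000_000   # kW -> GW
high_gw = SHIPPED_GPUS * HIGH_KW_PER_GPU / 1_000_000
print(f"{low_gw:.1f} GW to {high_gw:.1f} GW")  # 10.2 GW to 12.0 GW

# And the revenue claim: $10B annualized is roughly $833M a month.
print(round(10e9 / 12 / 1e6))  # 833 (million dollars per month)
```

The point of the check is the mismatch: whatever per-unit figure you assume in this range, six million GPUs implies multiple gigawatts more capacity than anyone has actually energized.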
Could this simply be a convenient excuse to cover over, as Alex Heath reported a few weeks ago, ChatGPT's slowing download and usage growth? Experts I've talked to arrived at two conclusions:

I don't know about garlic or shallotpeat or whatever, but one has to wonder at some point what it is that OpenAI is doing all day:

So, OpenAI's big plan is to improve ChatGPT, make the image generation better, make people like the models better, improve rankings, make it faster, and make it answer more stuff. I think it's fair to ask: what the fuck has OpenAI been doing this whole time if it isn't "make the model better" and "make people like ChatGPT more"? I guess the company shoved Sora 2 out the door — which is already off the top 30 free Android apps in the US and at 17 on the US free iPhone apps rankings as of writing this sentence, after everybody freaked out about it hitting number one. All that attention, and for what?

Indeed, signs seem to be pointing towards reduced demand for these services. As The Information reported a few days ago... Microsoft, of course, disputed this, and said... Well, I don't think Microsoft has any problems selling compute to OpenAI — which paid it $8.67 billion just for inference between January and September — as I doubt there is any "sales team" having to sell compute to OpenAI. But I also want to be clear that Microsoft added a word: "aggregate." The Information never used that word, and indeed nobody seems to have bothered to ask what "aggregate" means.

I do, however, know that Microsoft has had trouble selling stuff. As I reported a few months ago, in August 2025 Redmond only had 8 million active paying licenses for Microsoft 365 Copilot out of the more-than-440 million people paying for Microsoft 365. In fact, here's a rundown of how well AI is going for Microsoft:

Yet things are getting weird. Remember that OpenAI-NVIDIA deal?
The supposedly "sealed" one where NVIDIA would invest $100 billion in OpenAI, with each tranche of $10 billion gated behind a gigawatt of compute? The one that never really seemed to have any fundament to it, but people reported as closed anyway? Well, per NVIDIA's most-recent 10-Q (emphasis mine):

A letter of intent "with an opportunity" means jack diddly squat. My evidence? NVIDIA's follow-up mention of its investment in Anthropic:

This deal, as ever, was reported as effectively done, with NVIDIA investing $10 billion and Microsoft $5 billion, saying the word "will" as if the money had been wired, despite the "closing conditions" and the words "up to" suggesting NVIDIA hasn't really agreed how much it will really invest. A few weeks later, the Financial Times would report that Anthropic is trying to go public as early as 2026 and that Microsoft and NVIDIA's money would "form part of a funding round expected to value the group between $300bn and $350bn."

For some reason, Anthropic is hailed as some sort of "efficient" competitor to OpenAI, at least based on what both The Information and the Wall Street Journal have said, yet it appears to be raising and burning just as much as OpenAI. Why did a company that's allegedly “reducing costs” have to raise $13 billion in September 2025 after raising $3.5 billion in March 2025, and after raising $4 billion in November 2024? Am I really meant to read stories about Anthropic hitting break even in 2028 with a straight face? Especially as other stories say Anthropic will be cash flow positive “as soon as 2027.” And if this company is so efficient and so good with money, why does it need another $15 billion, likely only a few months after it raised $13 billion?
Though I doubt the $15 billion round closes this year, if it does, it would mean that Anthropic would have raised $31.5 billion in 2025 — which is, assuming the remaining $22.5 billion comes from SoftBank, not far from the $40.8 billion OpenAI would have raised this year. In the event that the $15 billion round doesn't close in 2025, Anthropic will have raised a little under $2 billion less ($16.5 billion) than OpenAI ($18.3 billion, consisting of $10 billion in June, split between $7.5 billion from SoftBank and $2.5 billion from other investors, and an $8.3 billion round in August) this year.

I think it's likely that Anthropic is just as disastrous a business as OpenAI, and I'm genuinely surprised that nobody has done the simple maths here, though at this point I think we're in the era of "not thinking too hard, because when you do so everything feels crazy.” Which is why I'm about to think harder than ever!

I feel like I'm asked multiple times a day both how and when the bubble will burst, and the truth is that it could be weeks or months or another year, because so little of this is based on actual, real stuff. While our markets are supported by NVIDIA's eternal growth engine, said growth engine isn't supported by revenues or real growth or really much of anything beyond vibes. As a result, it's hard to say exactly what the catalyst might be, or indeed what the bubble bursting might look like.

Today, I'm going to sit down and give you the scenarios — the systemic shocks — that would potentially start the unravelling of this era, as well as explain what a bubble bursting might actually look like, both for private and public companies. This is the spiritual successor to August's AI Bubble 2027, except I'm going to have a little more fun and write out a few scenarios that range from likely to possible, and try and give you an enjoyable romp through the potential apocalypses waiting for us in 2026.
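The fundraising arithmetic above is easy to check. Here is the tally, using only the round sizes cited in the text (the groupings and labels are mine):

```python
# Tallying the 2025 fundraising figures cited above (all in $ billions).
anthropic_2025 = [3.5, 13.0, 15.0]  # March round, September round, reported new round
openai_2025 = [10.0, 8.3]           # June round, August round
softbank_remainder = 22.5           # SoftBank tranche not yet funded

print(round(sum(anthropic_2025), 1))                    # 31.5: Anthropic's year if the $15B closes
print(round(sum(openai_2025), 1))                       # 18.3: OpenAI's closed rounds
print(round(sum(openai_2025) + softbank_remainder, 1))  # 40.8: OpenAI if SoftBank funds in full
print(round(sum(openai_2025) - (3.5 + 13.0), 1))        # 1.8: the gap if Anthropic's $15B slips
```

Every figure in the piece reconciles: $31.5 billion versus $40.8 billion if everything closes, or $16.5 billion versus $18.3 billion (a gap of $1.8 billion, "a little under $2 billion") if it doesn't.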
- Gemini 3 is good/better at the stuff tested on benchmarks compared to what OpenAI has.
- OpenAI's growth and usage was decelerating before this happened, and this just allows OpenAI to point to something.
- Microsoft's chip effort is falling behind, with its "Maya" AI chip delayed to 2026, and according to The Information, "when it finally goes into mass production next year, it’s expected to fall well short of the performance of Nvidia’s flagship Blackwell chip."
- According to The Information in late October 2025, "more customers have been using Microsoft’s suite of AI copilots, but many of them aren’t paying for it."
- In October, Australia's Competition and Consumer Commission sued Microsoft for "allegedly misleading 2.7 million Australians over Microsoft 365 subscriptions," by making it seem like they had to pay extra and integrate Copilot into their subscription rather than buy the, and I quote, "undisclosed third option, the Microsoft 365 Personal or Family Classic plans, which allowed subscribers to retain the features of their existing plan, without Copilot, at the previous lower price." This is what a company does when it can't sell shit. Google did the same thing with its workspace accounts earlier in the year. This should be illegal!
- According to The Information in September 2025, Microsoft had to "partly" replace OpenAI's models with Anthropic's for some of its Copilot software. Microsoft has, at this point, sunk over ten billion dollars into OpenAI, and part of its return for doing so was exclusively being able to use its models. Cool!
- According to The Information in September 2025, Microsoft has had to push discounts for Office 365 Copilot as customers had "found Copilot adoption slow due to high cost and unproven ROI." In late 2024, customers had paused purchasing further Copilot assistants due to performance and cost issues.

Rik Huijzer 3 months ago

Quote about fines from YouTube

Interesting quote from below a YouTube video: > When the punishment for committing a crime is a fine, then it is a punishment only for the poor.

Rik Huijzer 3 months ago

An entrepreneur on Reddit on subsidies

> The only form of "subsidy" that I concretely experience as an entrepreneur is the taxes I pay. It seems, and I emphatically say seems, as if your reaction comes from a position in which one does not have to bear what entrepreneurs must bear daily. Without a healthy business sector there is no economic room for the social services or luxuries that we as a society all benefit from. > > I am not saying that the wealthy should become even richer, but I can assure you that these days, for many entrepreneurs, even with a well-running business, it is extraordinarily...
