Latest Posts (20 found)

Trump Allows H200 Sales to China, The Sliding Scale, A Good Decision

The Trump administration has effectively unwound the Biden-era chip controls by allowing H200 sales to China; I agree with the decision, which is a return to longstanding U.S. policy.

2 views

Domains as "Internet Handles"

A little while ago I came across a post by Dan Abramov, a name that until then didn’t ring a bell, but who appears to be a former Meta employee and member of the React core team. The post links to a website made by Abramov that addresses the issues of how, quote, every time you sign up for a new social app, you have to rush to claim your username, how, quote, if someone else got there first, too bad, and how, quote, that username only works on that one app anyway. The website goes on: This is silly. The internet has already solved this problem. There already exists a kind of handle that works anywhere on the internet—it’s called a domain. A domain is a name you can own on the internet, like or . Most creators on the internet today don’t own a domain. Why not? Until recently, you could only use a domain for a website or custom email. But personal websites have mostly fallen out of fashion, and each social app sports its own kind of handles. However, open social apps are starting to change that. These apps let you use any internet domain you own as a handle. Abramov highlights a familiar pain point: On every new platform, users must scramble to secure their preferred username, often discovering it was taken years ago. Domains, he suggests, solve this by offering a globally unique namespace. However, this solution introduces an even greater scarcity problem, amongst other more important issues. Short, meaningful domain names have been scarce for decades. Most desirable combinations of common words, short names, or initials were claimed long before modern social platforms even existed. For example, just like our author, I, too, would have loved to use or as my handle on e.g. Bluesky. Sadly, however, I’m more than two decades late for that, as the former seemingly belongs to a Russian company, and the latter to a namesake somewhere in Bavaria, Germany.
Domain marketplaces and registries still list alternatives, but these often come with premium or recurring fees far exceeding what the average user is willing to pay. When platforms require domains as identity tokens, a user whose preferred domain is unavailable loses access to that identity everywhere, not just on a single platform. Unlike usernames, which can often be adapted with simple variations (e.g. adding punctuation), domains offer no such flexibility. TLD constraints mean that once a desirable domain is taken, there may be no practical semantic alternative. Domain scarcity does not solve the “handle availability” problem; it exacerbates it by moving contention from individual platforms to the internet’s global naming infrastructure. Usernames exist within individual platforms, and their loss, while inconvenient, usually has contained consequences. Losing a username typically means losing access to a single, isolated data silo (platform). Domains, by contrast, are subject to a multilayered hierarchy of control involving domain registrars, TLD operators, ICANN-affiliated registries and the DNS root zone. By using a domain as a cross-platform handle, users tie their entire online identity to this centralized, multi-stakeholder governance structure. Misconduct, even just alleged, on one platform could result in escalations to a registrar or registry, potentially leading to domain suspension. A suspended domain invalidates not just a handle on one platform, but an entire online identity across all services using that identifier. The risks extend beyond platform moderation. A compromised mailbox, a malware incident on a web server, or an automated threat-intelligence flag from entities such as the internet’s favorite bully Spamhaus can lead to domain suspension. In such scenarios, users may face lengthy appeals processes involving opaque third-party entities that wield far more power than a typical platform operator.
Domains were designed for hosting services, not for acting as the cornerstone of individual identity. Using them as universal handles places disproportionate power in the hands of infrastructure operators who were never intended to serve as arbiters of personal identity. If you’re a long-time reader of this website, you probably already knew that privacy must come up at some point. Well, here it is: Traditional username-based systems allow users to separate their personal identity from their public persona. After all, not everyone might want others to know about their activity in the Taylor Swift forum of FanForum.com, and that’s fine. Domains, however, increasingly erode this layer of privacy. While privacy-respecting domain registrars still exist, the mainstream domain ecosystem overwhelmingly encourages or requires KYC, traceable payment methods and paid WHOIS privacy services to maintain the illusion of privacy. Most users will register domains using a credit card or similar traceable payment method through large commercial registrars. Even if WHOIS privacy is enabled, metadata leakage and billing records remain. In the context of social identities, this creates an environment where domain-based handles can be correlated with real-world identities far more easily than pseudonymous usernames. A user posting under a domain such as time-to-get-swifty.com could find their identity exposed not through any platform breach, but simply through the structural nature of domain registration. Usernames are free. Domains are not. Even the cheapest domains incur recurring costs. More desirable short, memorable, or branded names often command high premiums or elevated renewal fees. While this financial burden may appear negligible to, let’s say, well-paid former Meta employees who consider their online presence a professional asset, the majority of internet users do not attach the same value to domain ownership.
For many, especially outside tech-centric circles, the ROI of maintaining a personal domain is negligible or non-existent. A farmer participating in an agricultural forum is unlikely to find value in purchasing and renewing a domain like solely to participate in an online community. Any identity system that introduces ongoing financial requirements creates unfair barriers to participation and risks entrenching socioeconomic inequality in digital spaces. Abramov’s argument positions domains as a universal, user-controlled solution to fragmented identity systems. While his vision aligns with broader goals of data portability and user autonomy, domains introduce significant drawbacks that usernames do not suffer from: greater scarcity and reduced availability, centralized infrastructure vulnerabilities and governance risks, reduced privacy and increased traceability, and recurring financial burdens for users. With statements like “You don’t have to squat handles anymore. Own a domain, and you can log into any open social app”, the author makes it sound like domain names are less exclusive than simple usernames, when it’s clearly the other way around, and he fails to recognize that squatting is a far worse issue for domains than it is for simple usernames. Moreover, the reliance on conventional DNS infrastructure undermines the self-sovereignty that decentralized identifier systems aspire to. Without a complementary decentralized naming layer (e.g. Handshake), domain-based identities merely exchange one set of constraints and issues for another, vastly more dangerous and impactful, one. For these reasons, users and platform developers should think carefully before adopting domains as universal “internet handles”. Usernames, for all their imperfections, remain simpler, safer, more private, and more equitable for everyday identity on the web, at least until the truly decentralized future is here.
While one might say that the handle is merely a representation of the underlying decentralized ID, a loss of the domain will nevertheless come with functional implications across every service that uses it. Luckily, platforms that implement domain handles continue to offer accounts under their own domains for the time being, so that at least for uninformed users nothing really changes (on the surface). Note: I have an account on a platform that supports domain handles, and I am using the feature in order to be able to make informed statements. The account is, however, nothing that is crucial to my existence on the internet. If my domain should spontaneously combust, that account would be the least of my worries. Instead, I’d be more troubled about this site and its related services, which is why I have a fallback domain. While I’m sure the author of internethandle.org didn’t intend to, some statements on the website “sound” somewhat out of touch, or at the very least tone-deaf, e.g.: Most creators on the internet today don’t own a domain. Why not? Until recently, you could only use a domain for a website or custom email. But personal websites have mostly fallen out of fashion […] Dan, personal websites haven’t fallen out of fashion, but have suffered under a World Wide Web altered (dare I say destroyed?) by the very companies you supported building as part of your previous roles and, to some extent, as part of the technologies you’re working with. Just because you, and the people you surround yourself with, seemingly don’t care about the small web doesn’t mean it has fallen out of fashion; if anything, personal websites are gaining popularity and are the weapon of choice against the enshittification of the web by companies like Meta and others.

1 views

China's CO₂ Emissions Per Capita Have Already Surpassed the EU's

According to Our World in Data, China's CO₂ emissions per capita have already passed those of people in the European Union and the UK, and will surpass those of the US and Canada roughly around 2028: ![co-emissions-per-capita.svg](/files/b60da0f2ccbfb9fb)

0 views

Under the hood of Canada Spends with Brendan Samek

I talked to Brendan Samek about Canada Spends, a project from Build Canada that makes Canadian government financial data accessible and explorable using a combination of Datasette, a neat custom frontend, Ruby ingestion scripts, sqlite-utils and pieces of LLM-powered PDF extraction. Here's the video on YouTube.

Build Canada is a volunteer-driven non-profit that launched in February 2025 - here's some background information on the organization, which has a strong pro-entrepreneurship and pro-technology angle. Canada Spends is their project to make Canadian government financial data more accessible and explorable. It includes a tax sources and sinks visualizer and a searchable database of government contracts, plus a collection of tools covering financial data from different levels of government.

The project maintains a Datasette instance at api.canadasbilding.com containing the data they have gathered and processed from multiple data sources - currently more than 2 million rows plus a combined search index across a denormalized copy of that data.

The highest quality government financial data comes from the audited financial statements that every Canadian government department is required to publish. As is so often the case with government data, these are usually published as PDFs. Brendan has been using Gemini to help extract data from those PDFs. Since this is accounting data, the numbers can be summed and cross-checked to help validate that the LLM didn't make any obvious mistakes.

Sections within the video:

02:57 Data sources and the PDF problem
05:51 Crowdsourcing financial data across Canada
07:27 Datasette demo: Search and facets
12:33 Behind the scenes: Ingestion code
17:24 Data quality horror stories
20:46 Using Gemini to extract PDF data
25:24 Why SQLite is perfect for data distribution

Links: datasette.io, the official website for Datasette; sqlite-utils.datasette.io; and, for more on Canada Spends, BuildCanada/CanadaSpends on GitHub.
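That sum-and-cross-check trick is worth spelling out. Here's a minimal sketch in plain Python/SQLite; the table name, column names, and figures are made up for illustration and are not taken from the actual Canada Spends schema:

```python
import sqlite3

# Hypothetical LLM-extracted line items from one audited statement,
# plus the statement's own printed total for the section.
rows = [
    ("Salaries and benefits", 1_250_000),
    ("Transfer payments", 3_400_000),
    ("Operating expenses", 890_000),
]
stated_total = 5_540_000  # the total as printed in the PDF itself

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE line_items (label TEXT, amount INTEGER)")
db.executemany("INSERT INTO line_items VALUES (?, ?)", rows)

# Cross-check: the extracted line items must sum to the stated total.
(computed,) = db.execute("SELECT SUM(amount) FROM line_items").fetchone()
if computed != stated_total:
    raise ValueError(f"extraction mismatch: {computed} != {stated_total}")
print("totals agree:", computed)
```

Because audited statements publish both line items and totals, any hallucinated or mistranscribed number tends to break the sum, which makes this a cheap but effective sanity filter.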

0 views

The Dutch Nitrogen Regulation Makes No Sense

In 2019, the Dutch Raad van State argued that a nitrogen deposition of 5.09 mol per acre per year damages De Heide (heathland) too much. This is the same as putting down about one grain of fertilizer the size of a sugar grain per two square meters per week. However ridiculous this may sound, the verdict has blocked thousands of farmers and builders from expanding their businesses or homes, and has even caused many farms to close down. Furthermore, many farmers in the Netherlands, whose families have often farmed for many generations, are not sure whether they will be allowed to continue farming.
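For what it's worth, the sugar-grain comparison roughly checks out, under the simplifying assumptions that the deposition is counted as pure nitrogen at 14 g/mol, one acre is about 4047 m², and a grain of sugar weighs roughly 0.6 mg:

```latex
\begin{align*}
5.09\ \text{mol} \times 14\ \tfrac{\text{g}}{\text{mol}} &\approx 71\ \text{g of N per acre per year}\\[2pt]
\frac{71\ \text{g}}{4047\ \text{m}^2} &\approx 17.6\ \tfrac{\text{mg}}{\text{m}^2 \cdot \text{yr}}\\[2pt]
17.6\ \tfrac{\text{mg}}{\text{m}^2 \cdot \text{yr}} \times 2\ \text{m}^2 \div 52\ \tfrac{\text{weeks}}{\text{yr}} &\approx 0.68\ \text{mg per}\ 2\ \text{m}^2\ \text{per week}
\end{align*}
```

which is indeed on the order of a single sugar-grain-sized speck per two square meters per week.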

0 views

Yuval Noah Harari on Science and Truth

Harari in his own words: > Truth was never the highest priority of human society. > It was the highest priority of some individuals, but never of society as a whole, because society as a whole does not function on the basis of truth. > And if you take two of the most powerful institutions of humankind, let's think about science and the scientific community, and let's think about religion and churches and so forth. > I think none of them has truth as their chief value. > As individuals, yes, but as institutions, no. > I think the chief value of science is power and the chief value of religion i...

0 views
HeyDingus Yesterday

I’m not a ring guy, but…

I’m not a ring guy. My parents had to cajole me into getting a class ring back in high school, telling me that it would be something that I would later regret if I didn’t get one. So I got one, tried wearing it, and ended up hating the feeling of it always spinning ’round my finger. And then I lost it in my bowling ball bag for like a year. I’ve got no idea where it is today. My next ring was my wedding band. Again, following customary traditions, I spent so much of my savings on an engagement and wedding ring combo for my wife. But for my own ring, I wasn’t particular. I looked around online for design ideas, liked the look of a tungsten one, found one for like $15 on Amazon, and clicked ‘Buy Now’. It still looks good as new over seven years later. And while I liked the feel of it better than my old class ring, since it was symmetrical and didn’t tend to fall to one side of my finger or the other, I still prefer my fingers unornamented. In fact, since becoming a mountain guide, I’ve worn my wedding band on a piece of cord around my neck, lest it get wedged in a rock somewhere while I’m climbing, which could be disastrous. I’d like to get a tattooed ring on my finger someday. 1 Likewise, I’ve tended to be skeptical of fitness rings, such as the Oura, partly because I figure I’d dislike wearing one at least as much as any other ring, but also because my Apple Watch already handles all my fitness tracking, and I wouldn’t want another thing to remember to charge. All that being said, I’m as surprised as anyone that the Index 01, Pebble’s latest gadget, caught my interest. It’s a ring, but instead of packing in more features than its competition, the Index is designed to do less. Its primary role is to be an ever-present way to record short notes-to-self. It’s got a tiny LED and a little microphone that’s activated by pressing a physical button. That’s it. Eric Migicovsky, Pebble’s founder, is selling the Index as “external memory for your brain”.
It doesn’t have any fitness tracking sensors. It doesn’t record everything around you, 24/7, like other AI gadgets, to make a perfect transcript of your life. It’s basically a dedicated personal note taker, and that’s what makes it so interesting to me. In fact, I’ve been trying to solve this ‘take a quick note’ problem on my own for years. My brain comes up with its best ideas when I’m out for a hike, but that’s also when I least want to pull out my phone to type them out. So, I rigged up a solution with Apple Shortcuts to trigger voice-to-text with my iPhone’s Action button so that I can easily save my ideas and to-dos to Drafts without breaking stride. But it’s an imperfect solution, as I look a little goofy in front of my clients when I mutter into my phone in the backcountry. Plus, I have to have my phone with me, and the audio isn’t saved, just the transcript. The Index remedies a lot of that rigmarole by virtue of being a dedicated device that’s always with you, that saves the audio recording, and that’s less intrusive and distracting than pulling out a smartphone.

The physical button. You have to hold it down to make a recording. No wondering if it’s working. Migicovsky insists it has a great click-feel, and I’m inclined to believe him. It’s designed to be worn on your index finger, putting the button always in reach of your thumb to start a recording. That’s so smart, as it means it can be used discreetly with one hand. My Apple Watch often needs to be operated with the other hand, and its raise-to-speak Siri feature is somewhat unreliable. Adding the button was a great idea.

You can’t charge it. This one’s a bit controversial, I know. Just read the comments on the announcement video — it’s basically the only thing people are talking about. The non-replaceable battery is a bummer, but I get it. I’d want a ring to be as unobtrusive as possible, and leaving out the charging bits and accessible battery cuts down on a lot of bulk.
It’s definitely more svelte than an Oura. Furthermore, I have enough gadgets that I need to remember to charge every day. If it can just stay on my finger, it has a way higher chance of becoming an ingrained workflow. While I don’t want to contribute to e-waste, Pebble says they’ll recycle it when the battery dies, supposedly in two or so years with typical use.

The price. If this thing cost $300+, like most smart rings, I certainly wouldn’t be psyched to replace it every two years. But at $99 ($75 for pre-orders), I think they priced it well to be a reasonable curiosity purchase. And it’s a one-time payment — there’s no ongoing subscription cost!

Additional actions. While its primary purpose — and my main interest in it — rests with its always-ready note-taking, it sounds like the Index can do a little processing and take action on some commands. From the announcement post: Actions: While the primary task is remembering things for you, you can also ask it to do things like ‘Send a Beeper message to my wife - running late’ or answer simple questions that could be answered by searching the web. You can configure button clicks to control your music - I love using this to play/pause or skip tracks. You can also configure where to save your notes and reminders (I have it set to add to Notion). Customizable and hackable: Configure single/double button clicks to control whatever you want (take a photo, turn on lights, Tasker, etc). Add your own voice actions via MCP. Or route the audio recordings directly to your own app or server! Supposedly, you’ll be able to hook it up to MCP to do more AI stuff with the recordings. I don’t know enough about MCP, so that’s not of huge interest to me. But if it can send quick messages, make reminders and calendar events, and control audio playback — and do so reliably — that’d be pretty great.

Works offline. It doesn’t have or need an internet connection to work.
The audio file transfers directly to your phone, and the transcription is done there, on-device. If you set up those additional actions that need the internet, that’s another story, but the Index will serve its primary purpose offline, without sending your (potentially very personal) recordings to anyone’s servers.

Less-than-stellar water-resistance. Pebble’s billed the Index as something that you never have to take off, but then notes it’s water-resistant only to 1 meter. They note, “You can wash your hands, do dishes, and shower with it on, but we don’t recommend swimming with it.” That’s not a deal-breaker, but I’ve grown so used to not worrying about swimming with my watch that I’d be a little grumpy about having to remember to take off my ring before jumping in a pool or lake.

Short answer, yes. I’m intrigued enough that I placed a pre-order this morning. But I’m still a little iffy on whether I’ll keep it. As I mentioned, I wear my wedding band as a necklace so that it doesn’t put my finger at risk when I’m climbing. That would still be a factor with the Index. But I’m willing to give it a shot.

My wife insists that I put my wedding ring back on my finger for date night, or culturally significant events like weddings and such. I don’t mind. ↩︎

HeyDingus is a blog by Jarrod Blundy about technology, the great outdoors, and other musings. If you like what you see — the blog posts, shortcuts, wallpapers, scripts, or anything — please consider leaving a tip, checking out my store, or just sharing my work. Your support is much appreciated! I’m always happy to hear from you on social, or by good ol' email.

0 views

Rearchitecting the Thread Model of In-Memory Key-Value Stores with μTPS

Rearchitecting the Thread Model of In-Memory Key-Value Stores with μTPS Youmin Chen, Jiwu Shu, Yanyan Shen, Linpeng Huang, and Hong Mei SOSP'25 I love this paper, because it grinds one of my axes: efficient pipeline parallelism on general purpose CPUs. In many hardware designs, pipeline parallelism is the dominant form of parallelism, whereas data parallelism takes the cake on CPUs and GPUs. It has always seemed to me that there are applications where pipeline parallelism should be great on multi-core CPUs, and here is an example. Fig. 1 illustrates the design space for key-value stores: Source: https://dl.acm.org/doi/10.1145/3731569.3764794 One axis is preemptive vs non-preemptive (cooperative) multi-threading. Preemptive multithreading involves context switches, which are cheap relative to disk reads but expensive relative to DRAM reads. The other axis is how to assign work to threads. Thread per request (TPR) creates a new thread for each request. This approach has been subsumed by thread per queue (TPQ), which uses a static number of threads, each of which dequeues requests from a dedicated queue and executes all of the work for a single request to completion. Finally, there is thread per stage (TPS), which divides the steps necessary to complete a request into multiple pipeline stages, and then divides the pipeline stages among a set of threads. The work discussed here uses a non-preemptive, thread per stage architecture. A pipelined implementation seems more complicated than an imperative run-to-completion design, so why do it? The key reason is to take advantage of the CPU cache. Here are two examples: As we’ve seen in other networking papers , a well-designed system can leverage DDIO to allow the NIC to write network packets into the LLC where they are then consumed by software. Key-value stores frequently have hot tuples, and there are advantages to caching these (example here ). 
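To make the thread-per-stage idea concrete, here is a toy two-stage pipeline in Python, with ordinary queues and threads standing in for the paper's lock-free machinery; this is an illustrative sketch, not the paper's actual code:

```python
import queue
import threading

# Toy thread-per-stage (TPS) pipeline: each stage owns one thread and
# forwards requests to the next stage's queue, instead of one thread
# running each request to completion (thread-per-queue, TPQ).
def stage(inbox, outbox, work):
    while True:
        item = inbox.get()
        if item is None:          # shutdown sentinel
            outbox.put(None)
            return
        outbox.put(work(item))

parse_q, lookup_q, done_q = queue.Queue(), queue.Queue(), queue.Queue()
store = {"k1": "v1", "k2": "v2"}  # the key-value store itself

threads = [
    threading.Thread(target=stage, args=(parse_q, lookup_q, str.strip)),
    threading.Thread(target=stage, args=(lookup_q, done_q, store.get)),
]
for t in threads:
    t.start()

for req in [" k1 ", " k2 "]:      # raw requests with framing noise
    parse_q.put(req)
parse_q.put(None)

results = [done_q.get() for _ in range(2)]
print(results)  # -> ['v1', 'v2']
for t in threads:
    t.join()
```

The point of the split is that each stage's working set (here, trivially, the parser's buffers vs. the store's hot entries) stays resident in the cache of the core running that stage.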
It is hard to effectively cache data in a TPR/TPQ model, because each request runs the entire key-value store request code path. For example, a CPU core may have enough cache capacity to hold network buffers or hot tuples, but not both. The key disadvantage of a TPS architecture is load balancing. One stage could become the bottleneck, leaving CPU cores idle. The authors propose dynamic reconfiguration of the pipeline based on workload changes. Another challenge with pipelining is implementing efficient communication between cores, because data associated with each request flows down the pipeline with the request itself.

Fig. 3 shows the pipeline proposed in this paper: Source: https://dl.acm.org/doi/10.1145/3731569.3764794 The NIC writes request packets into the network buffer (stored in the LLC). The cache-resident layer reads data from this buffer and handles requests involving commonly used keys by accessing the hot index and hot data caches (also in the LLC). The memory-resident layer handles cold keys and values, which are stored in DRAM. One set of threads (pinned to CPU cores) implements the cache-resident layer, and a different set of threads (pinned to other CPU cores) implements the memory-resident layer. An auto-tuner continually monitors the system and adjusts the number of threads assigned to each layer. Section 3.5 describes the synchronization required to implement this adjustment.

The NIC writes request packets into a single queue. The cache-resident threads cooperatively read requests from this queue. If there are N threads in the pool, then thread i reads all requests whose sequence number s satisfies s mod N = i. Next, threads check to see if the key associated with a request is hot (and thus cached in the LLC). Time is divided into epochs. During a given epoch, the set of cached items does not change. This enables fast lookups without costly synchronization between threads.
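The epoch scheme can be sketched as a single reference swap; this is a toy illustration under the assumption that readers only ever see a fully built, immutable hot set (names here are mine, not the paper's):

```python
# Toy epoch-based hot cache: reader threads look up the current
# (immutable) hot set without locks; a background thread builds the
# next epoch's set and publishes it with one reference assignment,
# which is atomic in CPython.
class EpochCache:
    def __init__(self):
        self._hot = {}            # current epoch's immutable hot set

    def get(self, key):
        # Readers grab one reference; the dict it points to never mutates.
        return self._hot.get(key)

    def advance_epoch(self, new_hot):
        # Background thread: build elsewhere, then swap the reference.
        self._hot = dict(new_hot)

cache = EpochCache()
cache.advance_epoch({"k1": "v1"})
assert cache.get("k1") == "v1"
assert cache.get("k2") is None    # cold key: falls through to the DRAM layer

cache.advance_epoch({"k2": "v2"})  # next epoch: hot set has changed
assert cache.get("k2") == "v2"
print("epoch swap ok")
```

Because the published dict is never mutated in place, readers need no locks; the only coordination point is the swap itself.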
A background thread gathers statistics to determine the set of items to be cached in the next epoch and has the ability to atomically switch to the next epoch when the time comes. The number of hot keys is kept small enough that it is highly likely that hot keys will be stored in the LLC. Requests that miss in the cache-resident layer are passed on to the memory-resident layer for further processing (via the CR-MR queue). Typically, the LLC is treated like a global resource (shared by all cores). But this particular use case requires that most of the LLC be dedicated to the cache-resident layer. This is accomplished with the help of the PQOS utility from Intel, which uses “Intel(R) Resource Director Technology” to control which ways of the LLC are assigned to each layer.

The memory-resident layer operates on batches of requests. Because the requests are not hot, it is highly likely that each request will require DRAM accesses for index lookups (keys) and data lookups (values). Software prefetching is used to hide DRAM latency during index lookups. When servicing operations, data values are copied directly into the outgoing network buffer. The CR-MR queue is used to communicate between the two layers. Each (CR thread, MR thread) pair has a dedicated lock-free queue. Enqueue operations use a round-robin policy (each CR thread cycles through the MR threads, sending its next message to the next MR thread in turn). Dequeue operations must potentially scan queues corresponding to all possible senders. Multiple requests can be stored per message, to amortize control overhead.

Fig. 7 has throughput results for synthetic workloads (A, B, and C have different ratios of put/get operations); uTPS-T is this work: Source: https://dl.acm.org/doi/10.1145/3731569.3764794

Dangling Pointers

The pipelining here is coarse-grained, and the design is only optimized for the LLC. I wonder if a more fine-grained pipeline would allow hot data to be stored in L2 caches.
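The CR-MR queue layout (one queue per sender/receiver pair, round-robin enqueue, scan dequeue) can be sketched like this; Python deques stand in for the lock-free SPSC queues the paper would actually use:

```python
from collections import deque

# Toy CR→MR communication: one dedicated queue per (CR thread, MR thread)
# pair, so each queue has exactly one producer and one consumer.
N_CR, N_MR = 2, 3
queues = [[deque() for _ in range(N_MR)] for _ in range(N_CR)]
next_mr = [0] * N_CR              # per-sender round-robin cursor

def cr_send(cr_id, request):
    # Round-robin enqueue: each CR thread cycles through the MR threads.
    mr_id = next_mr[cr_id]
    queues[cr_id][mr_id].append(request)
    next_mr[cr_id] = (mr_id + 1) % N_MR

def mr_recv(mr_id):
    # Dequeue must scan the queue from every possible sender.
    batch = []
    for cr_id in range(N_CR):
        q = queues[cr_id][mr_id]
        while q:
            batch.append(q.popleft())
    return batch

for i in range(6):
    cr_send(cr_id=0, request=f"req{i}")

print(mr_recv(0))  # -> ['req0', 'req3']
```

The per-pair queues avoid multi-producer contention entirely, at the cost of the receiver-side scan; batching several requests per message, as the paper notes, amortizes that control overhead.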
For example, the set of hot keys could be sharded among N cores, with each core holding a different shard in its L2 cache. It seems redundant that this design requires software to determine the set of hot keys, when the hardware cache circuitry already has support to do something like this.

0 views
DHH Yesterday

Europe is weak and delusional (but not doomed)

The gap between Europe's self-image and reality has grown into a chasm of delulu. One that's threatening to swallow the continent's future whole, as dangerous dependencies on others for energy, security, software, and manufacturing stack up to strangle Europe's sovereignty. But its current political class continues to double down on everything that hasn't worked for the past forty years. Let's start with free speech, and the €120 million fine just levied against X. The fig leaf for this was painted as "deceptive design" and "transparency for researchers", but the EU already bared its real intentions when it announced this authoritarian quest back in 2023 with charges of "dissemination of illegal content" and "information manipulation" (aka censorship). Besides, even the fig leaf itself is rotten. Meta offers the very same paid verification scheme as X but, according to Musk, has chosen to play ball with the EU censorship apparatus, so no investigation for them. And the citizens of Europe clearly don't seem bothered much by any "deceptive design", as X continues to be a top-ranked download across every country on the continent. But you can see why many politicians in Europe are eager to punish X for giving Europeans a social media platform that doesn't cooperate with their crackdown on wrongthink. The German chancellor, Friedrich Merz, is personally responsible for 5,000(!!) cases pursuing his subjects for insults online, which has led to house raids for utterances as banal as calling him a "filthy drunk". Germany is not an outlier either. The UK has been arresting over 10,000 people per year since 2020 for illicit tweets, Facebook posts, and silent prayers. France has thousands of yearly cases for speech-related offenses too. No wonder people on X aren't eager to volunteer their name and address when their elected officials crash out over their tweets.
It's against this backdrop — thousands of yearly arrests for banal insults or crass opposition to government policies — that some Europeans still try to convince themselves they're the true champions of free speech and freedom of the press. Delulu indeed.  But this isn't just about the lack of free speech in Europe. The X fine also highlights just how weak and puny the European tech sector has become. Get this: The EU's tech-fine operation produced more income for European coffers than all the income taxes paid by its public internet tech companies in 2024!! That's primarily because Europe basically stopped creating new, large companies more than half a century ago. So as the likes of Nokia died off, there was nobody new to replace them. In the last fifty years, the number and size of new European companies worth $10 billion or more is alarmingly small: But even the old industrial titans of Europe are now struggling. Germany hasn't grown its real GDP in five years. The net-zero nonsense has seriously hurt its competitiveness, and its energy costs are now 2-3x that of America and China. This is after Germany spent a staggering ~€700 billion on green energy projects — despite Europe as a whole being just 6% of world emissions. All the while, the EU as a whole sent over twenty billion euros to Russia to pay for energy in 2024.  So cue the talk about security. European leaders are incensed by getting excluded from the discussion about ending the war in Ukraine, which is currently just happening between America and Russia directly. But they only have themselves to thank for a seat on the sidelines. Here's a breakdown of the NATO spending by country: This used to be a joke to Europeans. That America would spend so much on its military might. Since the invasion of Ukraine, there's been a lot less laughing, and now the new official NATO target for member states is to spend 5% of GDP on defense. 
But even this target fails to acknowledge the fact that even if European countries should meet their new obligations (and currently only Poland among the larger EU countries is even close), they'd still lag far behind America, simply because the EU is comparatively a much smaller and shrinking economic zone.  In 2025, the combined GDP for the European Union was $20 trillion. America was fifty percent larger with a GDP of $30 trillion. And the gap continues to widen, as EU growth is pegged at around 1% in 2024 compared to almost 3% for the US. Now this is usually when the euro cope begins to screech the loudest. Trying every which way to explain that actually Europe is a better place to live than America, despite having a GDP per capita that's almost half.  And on a subjective level, that might well be true! There are plenty of reasons to prefer living in Europe, but that doesn't offset the fact that America is simply a vastly richer country, and that matters when it comes to everything from commercial dominance to military power. But it's the trajectory that's most damning. In 2008, Europe was on near-parity in GDP with America! But if the 1% vs 3% growth-rate disparity continues for another decade, America will grow its economy by another third to $40 trillion, while Europe will grow just 10% to $22 trillion. Making the American economy nearly twice as large as the European one. Yikes. These should all be sobering numbers to any European. Whether it's the 10,000 yearly arrests in the UK for social media posts or the risk of an economy that's half the size of the American one in a decade.  But Europe isn't doomed to fulfill this tragic destiny. It's full of some of the most creative, capable, and ambitious people in the world (like the fifth of US startup unicorns with European founders!). But they need much better reasons to stay than what the EU (and now a separate UK) is currently giving them. 
Like drastically lower energy costs for a competitive industrial base and to power the AI revolution, so best we quickly revive European nuclear ambitions. Like an immigration policy designed to rival America's cherry-picking of the world's best, rather than mass immigration from low-average-IQ regions of net-negative contributors to the economy (and society). Like dropping the censorship ambitions and bureaucratic boondoggles like the DSA. Like actually offering a European internal market for remote labor and a unified stock exchange for listings. There are plenty of paths to take that do not end in a low-growth, censorious regime that continues to export many of its best brains to America and elsewhere. So: make haste, the shadows lengthen.

2 views
Stratechery Yesterday

An Emergency Interview with Michael Nathanson About Netflix’s Acquisition of Warner Bros.

An interview with MoffettNathanson's Michael Nathanson about Netflix's acquisition of Warner Bros. and the Hollywood end game.

0 views

Go proposal: Secret mode

Part of the Accepted! series, explaining the upcoming Go changes in simple terms. Automatically erase used memory to prevent secret leaks. Ver. 1.26 • Stdlib • Low impact The new runtime/secret package lets you run a function in secret mode. After the function finishes, it immediately erases (zeroes out) the registers and stack it used. Heap allocations made by the function are erased as soon as the garbage collector decides they are no longer reachable. This helps make sure sensitive information doesn't stay in memory longer than needed, lowering the risk of attackers getting to it. The package is experimental and is mainly for developers of cryptographic libraries, not for application developers. Cryptographic protocols like WireGuard or TLS have a property called "forward secrecy". This means that even if an attacker gains access to long-term secrets (like a private key in TLS), they shouldn't be able to decrypt past communication sessions. To make this work, session keys (used to encrypt and decrypt data during a specific communication session) need to be erased from memory after they're used. If there's no reliable way to clear this memory, the keys could stay there indefinitely, which would break forward secrecy. In Go, the runtime manages memory, and it doesn't guarantee when or how memory is cleared. Sensitive data might remain in heap allocations or stack frames, potentially exposed in core dumps or through memory attacks. Developers often have to use unreliable "hacks" with reflection to try to zero out internal buffers in cryptographic libraries. Even so, some data might still stay in memory where the developer can't reach or control it. The solution is to provide a runtime mechanism that automatically erases all temporary storage used during sensitive operations. This will make it easier for library developers to write secure code without using workarounds. 
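The manual zeroing mentioned above can be sketched in a few lines (a simplified illustration, zeroing a byte slice directly rather than via reflection; zeroize is a made-up name):

```go
package main

import "fmt"

// zeroize overwrites a key buffer after use. This clears the slice's
// backing array, but the Go runtime makes no guarantees about copies
// spilled to the stack or registers, which is exactly the gap that
// secret mode is meant to close.
func zeroize(b []byte) {
	for i := range b {
		b[i] = 0
	}
}

func main() {
	key := []byte{0xde, 0xad, 0xbe, 0xef}
	// ... use key for a cryptographic operation ...
	zeroize(key)
	fmt.Println(key) // [0 0 0 0]
}
```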
Add the runtime/secret package, whose Do function runs a given function in secret mode. The current implementation has several limitations:

- Only supported on linux/amd64 and linux/arm64. On unsupported platforms, secret.Do invokes the function directly.
- Protection does not cover any global variables that the function writes to.
- Trying to start a goroutine within secret.Do causes a panic.
- If the function calls runtime.Goexit, erasure is delayed until all deferred functions are executed.
- Heap allocations are only erased if ➊ the program drops all references to them, and ➋ then the garbage collector notices that those references are gone. The program controls the first part, but the second part depends on when the runtime decides to act.
- If the function panics, the panicked value might reference memory allocated inside secret.Do. That memory won't be erased until (at least) the panicked value is no longer reachable.
- Pointer addresses might leak into data buffers that the runtime uses for garbage collection. Do not put confidential information into pointers.

The last point might not be immediately obvious, so here's an example. If an offset in an array is itself secret (say you have a large array and the secret key always starts at some secret offset), don't create a pointer to that location. Otherwise, the garbage collector might store this pointer, since it needs to know about all active pointers to do its job. If someone launches an attack to access the GC's memory, your secret offset could be exposed.

The package is mainly for developers who work on cryptographic libraries. Most apps should use higher-level libraries that use secret mode behind the scenes. As of Go 1.26, the package is experimental and can be enabled by setting a GOEXPERIMENT flag at build time.

The proposal's example uses secret.Do to generate a session key and encrypt a message using AES-GCM. Note that secret.Do protects not just the raw key, but also the cipher structure (which contains the expanded key schedule) created inside the function. This is a simplified example, of course — it only shows how memory erasure works, not a full cryptographic exchange. In real situations, the key needs to be shared securely with the receiver (for example, through key exchange) so decryption can work.

𝗣 21865 • 𝗖𝗟 704615 • 👥 Daniel Morsing, Dave Anderson, Filippo Valsorda, Jason A. Donenfeld, Keith Randall, Russ Cox
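The flow described above, generating a session key and encrypting a message with AES-GCM, can be sketched like this (a hedged sketch, not the proposal's own code: encryptSession is a made-up name, and the secret.Do wrapper is shown only in a comment because runtime/secret is not available on current toolchains):

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

// encryptSession generates a throwaway session key and encrypts msg with
// AES-GCM. Under the proposal this whole body would run inside secret.Do,
// so the key and the expanded key schedule inside the cipher would be
// erased once the function returns.
func encryptSession(msg []byte) (nonce, ciphertext []byte, err error) {
	key := make([]byte, 32) // session key; secret mode would erase this
	if _, err = rand.Read(key); err != nil {
		return nil, nil, err
	}
	block, err := aes.NewCipher(key) // holds the expanded key schedule
	if err != nil {
		return nil, nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, nil, err
	}
	nonce = make([]byte, gcm.NonceSize())
	if _, err = rand.Read(nonce); err != nil {
		return nil, nil, err
	}
	return nonce, gcm.Seal(nil, nonce, msg, nil), nil
}

func main() {
	nonce, ct, err := encryptSession([]byte("hello"))
	fmt.Println(err == nil, len(nonce), len(ct)) // true 12 21
}
```

Since the key never leaves the function, a real caller would also need a key-exchange step, as the post notes.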

0 views
pabloecortez Yesterday

Next on lettrss: A Christmas Carol

Last week we finished reading The Wizard of Oz on lettrss . Thank you to everyone who read along! For December, I thought it would be fun to read A Christmas Carol by Charles Dickens. This one is also a short book with ~29,000 words and an estimated reading time of 1 hour and 45 minutes. I'd love to know what books you'd like to see next! Some of you have e-mailed me with ideas for doing poetry or short fiction books so it feels less overwhelming in case you miss a post here and there. That sounds great! Suggestions are open until December 21st . Note that your suggestion must be in the public domain. For this I have been browsing Standard Ebooks . Once we have chosen the next book, I'll announce it here and via lettrss. If you would like to help out with preparing the next book, you can also find the lettrss project on GitHub .

0 views

Let’s Destroy The European Union!

Elon Musk is not happy with the EU fining his X platform and is currently on a tweet rampage complaining about it. Among other things, he wants the whole EU to be abolished. He is sadly not the first wealthy American to share his opinions on European politics lately. I’m not a fan of this outside attention, but I believe it’s something worth paying attention to, in particular because the idea of destroying and ripping apart the EU is not just popular in the US; it’s popular over here too. That greatly concerns me. There is definitely a bunch of stuff we might want to fix over here. I have complained about our culture before. Unfortunately, I happen to think that our challenges are not coming from politicians or civil servants, but from us, the people. Europeans don’t like to take risks and are quite pessimistic about the future compared to their US counterparts. Additionally, we Europeans have been trained to feel a lot of guilt over the years, which makes us hesitant to stand up for ourselves. This has led to all kinds of interesting counter-cultural movements in Europe, like years of significant support for unregulated immigration and an unhealthy obsession with the idea of degrowth. Today, though, neither seems quite as popular as it once was. Morally, these things may be defensible, but in practice they have led to Europe losing its competitive edge and eroding social cohesion. The combination of a strong social state and high taxes in particular does not mix well with the kind of immigration we have seen in the last decade: mostly people escaping wars, ending up in low-skilled jobs. That means it’s not unlikely that certain classes of immigrants are going to be net-negative for a very long time, if not forever, and society is increasingly starting to think about what the implications of that might be. Yet even all of that is not where our problems lie, and it’s certainly not our presumed lack of free speech. 
Any conversation on that topic is foolish because it’s too nuanced. Society clearly wants to place some limits to free speech here, but the same is true in the US. In the US we can currently see a significant push-back against “woke ideologies,” and a lot of that push-back involves restricting freedom of expression through different avenues. The US might try to lecture Europe right now on free speech, but what it should be lecturing us on is our economic model. Europe has too much fragmentation, incredibly strict regulation that harms innovation, ineffective capital markets, and a massive dependency on both the United States and China. If the US were to cut us off from their cloud providers, we would not be able to operate anything over here. If China were to stop shipping us chips, we would be in deep trouble too ( we have seen this ). This is painful because the US is historically a great example when it comes to freedom of information, direct democracy at the state level, and rather low corruption. These are all areas where we’re not faring well, at least not consistently, and we should be lectured. Fundamentally, the US approach to capitalism is about as good as it’s going to get. If there was any doubt that alternative approaches might have worked out better, at this point there’s very little evidence in favor of that. Yet because of increased loss of civil liberties in the US, many Europeans now see everything that the US is doing as bad. A grave mistake. Both China and the US are quite happy with the dependency we have on them and with us falling short of our potential. Europe’s attempt at dealing with the dependency so far has been to regulate and tax US corporations more heavily. That’s not a good strategy. The solution must be to become competitive again so that we can redirect that tax revenue to local companies instead. 
The Digital Markets Act is a good example: we’re punishing Apple and forcing them to open up their platform, but we have no company that can take advantage of that opening. If you read my blog here, you might remember my musings about the lack of clarity of what a foreigner is in Europe. The reality is that Europe has been deeply integrated for a long time now as a result of how the EU works — but still not at the same level as the US. I think this is still the biggest problem. People point to languages as the challenge, but underneath the hood, the countries are still fighting each other. Austria wants to protect its local stores from larger competition in Germany and its carpenters from the cheaper ones coming from Slovenia. You can replace Austria with any other EU country and you will find the same thing. The EU might not be perfect, but it’s hard to imagine that abolishing it would solve any problem, given how nation states have shown themselves to behave. The moment the EU fell away, we would be reigniting old border struggles. We have already seen similar issues pop up in Northern Ireland after the UK left. And we just have so much bureaucracy, so many non-functioning social systems, and such a tremendous amount of incoming governmental debt to support our flailing pension schemes. We need growth more than any other bloc, and we have such a low probability of actually accomplishing that. Given how the EU is structured, it’s also acting as the punching bag for the failure of the nation states to come to agreements. It’s not that EU bureaucrats are telling Europeans to take in immigrants, to enact chat control or to enact cookie banners or attached plastic caps. Those are all initiatives that come from one or more member states. But the EU in the end will always take the blame, because even local politicians who voted in support of some of these things can easily point towards “Brussels” as having created a problem. 
A Europe in pieces does not sound appealing to me at all, and that’s because I can look at what China and the US have. What China and the US have that Europe lacks is a strong national identity. Both countries have recognized that strength comes from unity. China in particular is fighting any kind of regionalism tooth and nail. The US has accomplished this through the pledge of allegiance, a civil war, the Department of Education pushing a common narrative in schools, and historically putting post offices and infrastructure everywhere. Europe has none of that. More importantly, Europeans don’t even want it. There is a mistaken belief that we can just become these tiny states again and be fine. If Europe wants to be competitive, it seems unlikely that this can be accomplished without becoming a unified superpower. Yet there is no belief in Europe that this can or should happen, and the other superpowers have little interest in seeing it happen either. If I had to propose something constructive, it would be this: Europe needs to stop pretending it can be 27 different countries with 27 different economic policies while also being a single market. The half-measures are killing us. We have a common currency in the Eurozone but no common fiscal policy. We have freedom of movement but wildly different social systems. We have common regulations but fragmented enforcement. 27 labor laws, 27 different legal systems, tax codes, complex VAT rules and so on. The Draghi report from last year laid out many of these issues quite clearly: Europe needs massive investment in technology and infrastructure. It needs a genuine single market for services, not just goods. It needs capital markets that can actually fund startups at scale. None of this is news to anyone paying attention. But here’s the uncomfortable truth: none of this will happen without Europeans accepting that more integration is the answer, not less. And right now, the political momentum is in the opposite direction. 
Every country wants the benefits of the EU without the obligations. Every country wants to protect its own industries while accessing everyone else’s markets. One of the arguments against deeper integration hinges on some quite unrelated issues. For instance, the EU is seen as non-democratic, but some of the criticism just does not sit right with me. Sure, I too would welcome more democracy in the EU, but at the same time, the system really is not undemocratic today. Take things like chat control: the reason this thing does not die is that some member states and their elected representatives are pushing for it. What stands in the way is that the member countries and their people don’t actually want to strengthen the EU further. The “lack of democracy” is very much intentional and the exact outcome you get if you want to keep the power with the national states. So back to where we started: should the EU be abolished as Musk suggests? I think this is a profoundly unserious proposal from someone who has little understanding of European history and even less interest in learning. The EU exists because two world wars taught Europeans that nationalism without checks leads to catastrophe. It exists because small countries recognized they have more leverage negotiating as a bloc than individually. I also take a lot of issue with the idea that European politics should be driven by foreign interests. Neither Russians nor Americans have any good reason to take so much interest in European politics. They are not living here; we are. Would Europe be more “free” without the EU? Perhaps in some narrow regulatory sense. But it would also be weaker, more divided, and more susceptible to manipulation by larger powers — including the United States. I also find it somewhat rich that American tech billionaires are calling for the dissolution of the EU while they are greatly benefiting from the open market it provides. 
Their companies extract enormous value from the European market, more than even local companies are able to. The real question isn’t whether Europe should have less regulation or more freedom. It’s whether we Europeans can find the political will to actually complete the project we started. A genuine federation with real fiscal transfers, a common defense policy, and a unified foreign policy would be a superpower. What we have now is a compromise that satisfies nobody and leaves us vulnerable to exactly the kind of pressure Musk and other oligarchs represent. Europe doesn’t need fixing in the way the loud present-day critics suggest. It doesn’t need to become more like America or abandon its social model entirely. What it needs is to decide what it actually wants to be. The current state of perpetual ambiguity is unsustainable. It also should not lose its values. Europeans might no longer be quite as hot on the human rights that the EU provides, and they might no longer want to have the same level of immigration. Yet simultaneously, Europeans are presented with a reality that needs all of these things. We’re all highly dependent on movement of labour, and that includes people from abroad. Unfortunately, the wars of the last decade have dominated any migration discourse, and that has created ground for populists to thrive. Any skilled tech migrant is running into the same walls as everyone else, which has made it less and less appealing to come. Or perhaps we’ll continue muddling through, which historically has been Europe’s preferred approach. It’s not inspiring, but it’s also not going to be the catastrophe the internet would have you believe either. Is there reason to be optimistic? On a long enough timeline the graph goes up and to the right. We might be going through some rough patches, but structurally the whole thing here is still pretty solid. 
And it’s not as if the rest of the world is cruising along smoothly: the US, China, and Russia are each dealing with their own crises. That shouldn’t serve as an excuse, but it does offer context. As bleak as things can feel, we’re not alone in having challenges, but ours are uniquely ours and we will face them. One way or another.

11 views

How I run and deploy docker services in my homelab with Komodo and a custom CLI

Alternative title: I try really hard not to move to Kubernetes.

0 views

Pausing a CSS animation with getAnimations()

It’s Blogvent, day 9, where I blog daily in December! CSS animations are cool, but sometimes you want them to just cool it. You can pause them by using the getAnimations() method! When you call getAnimations() on an element, you get an array of all of the Animation objects on said element, which includes CSS animations. There are various things you can do with the returned Animation object, like getting the currentTime of the animation’s timeline, or the playback state of the animation (playState), or in our case, actually pausing the animation with pause(). We could loop through every Animation object in that array and pause it, like so: Or, if you just want one animation to pause, you can filter from the returned results. Here’s a real demo where there’s only one animation happening, so we pause it based on the current playState. See the Pen getAnimations() demo by Cassidy (@cassidoo) on CodePen. Hope this was helpful!
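A minimal sketch of that loop (not the demo's actual code; pauseRunningAnimations is a made-up name, and el stands in for a DOM element in the browser):

```javascript
// Pause every animation currently running on an element.
// `el` is anything exposing the Web Animations API's getAnimations()
// method, e.g. a DOM element in the browser.
function pauseRunningAnimations(el) {
  for (const animation of el.getAnimations()) {
    // playState is one of "running", "paused", "finished", or "idle"
    if (animation.playState === "running") {
      animation.pause();
    }
  }
}
```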

0 views

NVIDIA Isn't Enron - So What Is It?

At the end of November, NVIDIA put out an internal memo ( that was leaked to Barron's reporter Tae Kim, who is a huge NVIDIA fan and knows the company very well , so take from that what you will) that sought to get ahead of a few things that had been bubbling up in the news, a lot of which I covered in my Hater’s Guide To NVIDIA (which includes a generous free intro).  Long story short, people have a few concerns about NVIDIA, and guess what, you shouldn’t have any concerns, because NVIDIA’s very secret, not-to-be-leaked-immediately document spent thousands of words very specifically explaining how NVIDIA was fine and, most importantly, nothing like Enron . Anyway, all of this is fine and normal . Companies do this all the time, especially successful ones, and there is nothing to be worried about here , because after reading all seven pages of the document, we can all agree that NVIDIA is nothing like Enron.  No, really! NVIDIA is nothing like Enron, and it’s kind of weird that you’re saying that it is! Why would you say anything about Enron? NVIDIA didn’t say anything about Enron. Okay, well now NVIDIA said something about Enron, but that’s because fools and vagabonds kept suggesting that NVIDIA was like Enron, and very normally, NVIDIA has decided it was time to set the record straight.  And I agree! I truly agree. NVIDIA is nothing like Enron. Putting aside how I might feel about the ethics or underlying economics of generative AI, NVIDIA is an incredibly successful business that has incredible profits, holds an effective monopoly on CUDA ( explained here ), which powers the underlying software layer to running software on GPUs, specifically generative AI, and not really much else that has any kind of revenue potential.  And yes, while I believe that one day this will all be seen as one of the most egregious wastes of capital of all time, for the time being, Jensen Huang may be one of the most successful salespeople in business history.  
Nevertheless, people have somewhat run away with the idea that NVIDIA is Enron , in part because of the weird, circular deals it’s built with Neoclouds — dedicated AI-focused cloud companies — like CoreWeave, Lambda and Nebius , who run data centers full of GPUs sold by NVIDIA, which they then use as collateral for loans to buy more GPUs from NVIDIA .  Yet as dodgy and weird and unsustainable as this is, it isn’t illegal , and it certainly isn’t Enron, because, as NVIDIA has been trying to tell you, it is nothing like Enron! Now, you may be a little confused — I get it! — that NVIDIA is bringing up Enron at all. Nobody seriously thought that NVIDIA was like Enron before (though JustDario, who has been questioning its accounting practices for years , is a little suspicious), because Enron was one of the largest criminal enterprises in history, and NVIDIA is at worst, I believe, a big, dodgy entity that is doing whatever it can to survive. Wait, what’s that? You still think NVIDIA is Enron ? What’s it going to take to convince you? I just told you NVIDIA isn’t Enron! NVIDIA itself has shown it’s not Enron, and I’m not sure why you keep bringing up Enron all the time! Stop being an asshole. NVIDIA is not Enron! Look, NVIDIA’s own memo said that “NVIDIA does not resemble historical accounting frauds because NVIDIA's underlying business is economically sound, [its] reporting is complete and transparent, and [it] cares about [its] reputation for integrity.” Now, I know what you’re thinking. Why is the largest company on the stock market having to reassure us about its underlying business economics and reporting? One might immediately begin to think — Streisand Effect style — that there might be something up with NVIDIA’s underlying business. But nevertheless, NVIDIA really is nothing like Enron.  But you know what? I’m good. I’m fine. NVIDIA, grab your coat, we’re going out, let’s forget any of this ever happened. Wait, what was that? 
First, unlike Enron, NVIDIA does not use Special Purpose Entities to hide debt and inflate revenue. NVIDIA has one guarantee for which the maximum exposure is disclosed in Note 9 ($860M) and mitigated by $470M escrow. The fair value of the guarantee is accrued and disclosed as having an insignificant value. NVIDIA neither controls nor provides most of the financing for the companies in which NVIDIA invests. Oh, okay! I wasn’t even thinking about that at all, I was literally just saying how you were nothing like Enron , we’re good. Let’s go home- Second, the article claims that NVIDIA resembles WorldCom but provides no support for the analogy. WorldCom overstated earnings by capitalizing operating expenses as capital expenditures. We are not aware of any claims that NVIDIA has improperly capitalized operating expenses. Several commentators allege that customers have overstated earnings by extending GPU depreciation schedules beyond economic useful life. Rebutting this claim, some companies have increased useful life estimates to reflect the fact that GPUs remain useful and profitable for longer than originally anticipated; in many cases, for six years or more. We provide additional context on the depreciation topic below. I…okay, NVIDIA is also not like WorldCom either. I wasn’t even thinking about WorldCom. I haven’t thought of them in a while.  Per Adam Berger of Ebsco :   …NVIDIA, are you doing something WorldCommy? Why are you bringing up WorldCom?  To be clear, WorldCom was doing capital F fraud , and its CEO Bernie Ebbers went to prison after an internal team of auditors led by WorldCom VP of internal auditing Cynthia Cooper reported $3.8 billion in “misallocated expenses and phony accounting entries.”  So, yeah, NVIDIA, you were really specific about saying you didn’t capitalize operating expenses as capital expenditures. You’re…not doing that, I guess? That’s great. Great stuff. I had literally never thought you had done that before. 
I genuinely agree that NVIDIA is nothing like WorldCom.  Anyway, also glad to hear about the depreciation stuff, looking forward to reading- Third, unlike Lucent, NVIDIA does not rely on vendor financing arrangements to grow revenue. In typical vendor financing arrangements, customers pay for products over years. NVIDIA's DSO was 53 in Q3. NVIDIA discloses our standard payment terms, with payment generally due shortly after delivery of products. We do not disclose any vendor financing arrangements. Our customers are subject to strict credit evaluation to ensure collectability. NVIDIA would disclose any receivable longer than one year in long-term other assets. The $632M "Other" balance as of Q3 does not include extended receivables; even if it did, the amount would be immaterial to revenue. Erm… Alright man, if anyone asks about whether you’re like famed dot-com crashout Lucent Technologies, I’ll be sure to correct them. After all, Lucent’s situation was really different — well…sort of. Lucent was a giant telecommunications equipment company, one that was, for a time, extremely successful, really really successful, in fact, turned around by the now-infamous Carly Fiorina. From a 2010 profile in CNN : NVIDIA, this sounds great — why wouldn’t you want to be compared to Lucen- Oh. So, to put it simply, Lucent was classifying debt as an asset (we're getting into technicalities here, but it sort of was but was really counting money from loans as revenue, which is dodgy and bad and accountants hate it ), and did something called “vendor financing,” which means you lend somebody money to buy something from you. It turns out Lucent did a lot of this. Okay, NVIDIA, I hate to say this, but I kind of get why somebody might say you’re doing Lucent stuff. After all, rumour has it that your deal with OpenAI — a company that burns billions of dollars a year — will involve it leasing your GPUs , which sure sounds like you’re doing vendor financing... 
-we do not disclose any vendor financing arrangements- Fine! Fine. Anyway, Lucent really fucked up big time, indulging in the dark art of circular vendor financing. In 1998 it signed its largest deal — a $2 billion “equipment and finance agreement” — with telecommunications company Winstar, which promised to bring in “$100 million in new business over the next five years” and build a giant wireless broadband network, along with expanding Winstar’s optical networking. To quote The Wall Street Journal: In December 1999, WIRED would say that Winstar’s “small white dish antennas…[heralded] a new era and new mind-set in telecommunications,” and included this awesome quote about Lucent from CEO and founder Will Rouhana: Fuck yeah! But that’s not the only great part of this piece: Annualized revenues, very nice. We love annualized revenues, don't we folks? A company making about $25 million a month, a year after taking on $2 billion in financing from Lucent. Weirdly, Winstar’s Wikipedia page says that revenues were $445.6 million for the year ending 1999 — or around $37.1 million a month. Winstar loved raising money — two years later in November 2000, it would raise $1.02 billion, for example — and it raised a remarkable $5.6 billion between February 1999 and July 2001, according to the Wall Street Journal. $900 million of that came in December 1999 from an investment from Microsoft and “several investment firms,” with analyst Greg Miller of Jefferies & Co saying: Another fun thing happened in November 2000 too.  
Lucent would admit it had overstated its fourth-quarter profits by improperly recording $125 million in sales, reducing that quarter’s results from “profitable” to “break-even.” Things would eventually collapse when Winstar couldn’t pay its debts, filing for Chapter 11 bankruptcy protection on April 18, 2001 after failing to pay $75 million in interest payments to Lucent, which had cut access to the remaining $400 million of its $1 billion loan to Winstar as a result. Winstar would file a $10 billion lawsuit in bankruptcy court in Delaware the very same day, claiming that Lucent breached its contract and forced Winstar into bankruptcy by, well, not offering to give it more money that it couldn’t pay off. Elsewhere, things had begun to unravel for Lucent. A January 2001 story from the New York Times told a strange story of Lucent, a company that had made over $33 billion in revenue in its previous fiscal year, asking to defer the final tranche of payment — $20 million — for an acquisition due to “accounting and financial reporting considerations.” Why? Because Lucent needed to keep that money on the books to boost its earnings, as its stock was in the toilet, and was about to announce it was laying off 10,000 people and a quarterly loss of $1.02 billion. Over the course of the next few years, Lucent would sell off various entities, and by the end of September 2005 it would have 30,500 staff and a stock price of $2.99 — down from a high of $75 a share at the end of 1999, when it had 157,000 employees. According to VC Tomasz Tunguz, Lucent had $8.1 billion of vendor financing deals at its height. Lucent was still a real company selling real things, but had massively overextended itself in an attempt to meet demand that didn’t really exist, and when Lucent realized that, it decided to create demand itself to please the markets. 
To quote MIT Tech Review (and author Lisa Endlich), it believed that “setting and meeting [the expectations of Wall Street] subsumed all other goals,” and that “Lucent had little choice but to ride the wave.” To be clear, NVIDIA is quite different from Lucent. It has plenty of money, and the circular deals it does with CoreWeave and Lambda don’t involve the same levels of risk. NVIDIA is not (to my knowledge) backstopping CoreWeave’s business or providing it with loans, though NVIDIA has agreed to buy $6.3 billion of compute as the “buyer of last resort” for any unsold capacity. NVIDIA can actually afford this, and it isn’t illegal, though it is obviously propping up a company with flagging demand. NVIDIA also doesn’t appear to be taking on masses of debt to fund its empire, with over $56 billion in cash on hand and a mere $8.4 billion in long-term debt. Okay, phew. We got through this, man. NVIDIA is nothing like Lucent either. Okay, maybe it’s got some similarities — but it’s different! No worries at all. I know I’m relaxed. You still seem nervous, NVIDIA. I promise you, if anyone asks me if you’re like Lucent I’ll tell them you’re not. I’ll be sure to tell them you’re nothing like that. Are you okay, dude? When did you last sleep?

Inventory growth indicates waning demand

Claim: Growing inventory in Q3 (+32% QoQ) suggests that demand is weak and chips are accumulating unsold, or customers are accepting delivery without payment capability, causing inventory to convert to receivables rather than cash.

Woah, woah, woah, slow down. Who has been saying this? Oh, everybody? Did Michael Burry scare you? Did you watch The Big Short and say “ah, fuck, Christian Bale is going to get me! I can’t believe he played drums to Pantera! Ahh!” Anyway, now you’ve woken up everybody else in the house and they’re all wondering why you’re talking about receivables. Shouldn’t that be fine?
NVIDIA is a big business, and it’s totally reasonable to believe that a company planning to sell $63 billion of GPUs in the next quarter would have ballooning receivables ($33 billion, up from $27 billion last quarter) and growing inventory ($19.78 billion, up from $14.96 billion the previous quarter). It’s a big, asset-heavy business, which means NVIDIA’s clients likely get decent payment terms, giving them time to raise debt or move cash around so NVIDIA gets paid. Everybody calm down! Like my buddy NVIDIA, who is nothing like Enron by the way, just said:

Response: First, growing inventory does not necessarily indicate weak demand. In addition to finished goods, inventory includes significant raw materials and work-in-progress. Companies with sophisticated supply chains typically build inventory in advance of new product launches to avoid stockouts. NVIDIA's current supply levels are consistent with historical trends and anticipate strong future growth. Second, growing inventory does not indicate customers are accepting delivery without payment capability. NVIDIA recognizes revenue upon shipping a product and deeming collectability probable. The shipment reduces inventory, which is not related to customer payments. Our customers are subject to strict credit evaluation to ensure collectability. Payment is due shortly after product delivery; some customers prepay. NVIDIA's DSO actually decreased sequentially from 54 days to 53 days.

Haha, nice dude, you’re totally right: it’s pretty common for companies, especially large ones, to deliver something before they receive the cash. It happens; I’m being sincere. Sounds like companies are paying! Great! But, you know, just, can you be a little more specific? Like about the whole “shipping things before they’re paid” thing. NVIDIA recognizes revenue upon shipping a product and deeming collectability probable- Alright, yeah, thought I heard you right the first time. What does “deeming collectability probable” mean?
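(Quick aside: DSO, or days sales outstanding, is the one number the memo does volunteer, and the arithmetic behind it is simple. A sketch follows; the receivables figure is from the quarter discussed above, while the quarterly revenue is an assumed round number for illustration, not a figure from the memo.)

```python
# DSO (days sales outstanding) answers: "on average, how many days of
# sales are sitting in receivables?"
# Formula: DSO = (accounts receivable / revenue in period) * days in period.

def dso(receivables, revenue, days_in_period=91):
    return receivables / revenue * days_in_period

receivables = 33e9        # ~$33 billion in receivables, per the latest quarter
quarterly_revenue = 57e9  # assumed quarterly revenue, for illustration only

print(round(dso(receivables, quarterly_revenue)))  # roughly 53 days
```

A DSO in the low 50s for a roughly 91-day quarter just means customers take, on average, a bit under two months to pay, which is consistent with ordinary net-60-style terms.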
You could’ve just said “we get paid 95% of the time within 2 months” or whatever. Unless it’s not 95%? Or 90%? How often is it? Most companies don’t break this down, by the way, but then again, most companies are not NVIDIA, the largest company on the stock market, and if I’m honest, nobody else has recently had to put out anything that said “I’m not like Enron,” and I want to be clear that NVIDIA is not like Enron. For real, Enron was a criminal enterprise. It broke the law, it committed real-deal, actual fraud, and NVIDIA is nothing like Enron. In fact, before NVIDIA put out a letter saying how it was nothing like Enron, I would have staunchly defended the company against the Enron allegations, because I truly do not think NVIDIA is committing fraud. That being said, it is very strange that NVIDIA wants somebody to think about how it’s nothing like Enron. This was, technically, an internal memo, and thus there is a chance it was written only for internal NVIDIANs worried about the value of their stock, though we know it was definitely written to try and deflect Michael Burry’s criticism, as well as that of a random Substacker who clearly had AI help him write a right-adjacent piece that made all sorts of insane and made-up statements (including several about Arrow Electronics that did not happen) — and no, I won’t link it, it’s straight up misinformation. Nevertheless, I think it’s fair to ask: why does NVIDIA need you to know that it’s nothing like Enron? Did it do something like Enron? Is there a chance that I, or you, may mistakenly say “hey, is NVIDIA doing Enron?” Heeeeeeyyyy NVIDIA. How’re you feeling? Yeah, haha, you had a rough night. You were saying all this crazy stuff about Enron last night, are you doing okay? No, no, I get it, you’re nothing like Enron, you said that a lot last night.
So, while you were asleep — yeah it’s been sixteen hours dude, you were pretty messed up, you brought up Lucent then puked in my sink — I did some digging and like, I get it, you are definitely not like Enron, Enron was breaking the law . NVIDIA is definitely not doing that. But…you did kind of use Special Purpose Vehicles recently? I’m sorry, I know, you’re not like Enron! You’re investing $2 billion in Elon Musk’s special purpose vehicle that will then use that money to raise debt to buy GPUs from NVIDIA that will then be rented to Elon Musk . This is very different to what Enron did! I am with you dude , don’t let the haters keep you down! No, I don’t think a t-shirt that says “NVIDIA is not like Enron for these specific reasons” helps.  Wait, wait, okay, look. One thing. You had this theoretical deal lined up with Sam Altman and OpenAI to invest $100 billion — and yes, you said in your latest earnings that "it was actually a Letter of Intent with the opportunity to invest," which doesn’t mean anything, got it — and the plan was that you would “ lease the GPUs to OpenAI .” Now how would you go about doing that NVIDIA? You’d probably need to do exactly the same deal as you just did with xAI. Right? Because you can’t very well rent these GPUs directly to Elon Musk , you need to sell them to somebody so that you can book the revenue, you were telling me that’s how you make money. I dunno, it’s either that or vendor financing.  Oh, you mentioned that already- -unlike Lucent, NVIDIA does not rely on vendor financing arrangements to grow revenue. In typical vendor financing arrangements, customers pay for products over years. NVIDIA's DSO was 53 in Q3. NVIDIA discloses our standard payment terms, with payment generally due shortly after delivery of products. 
We do not disclose any vendor financing arrangements- Let me stop you right there a second, you were on about this last night before you scared my cats when you were crying about something to do with “two nanometer.”  First of all, why are you bringing up typical vendor financing agreements? Do you have atypical ones?  Also I’m jazzed to hear you “disclose your standard payment terms,” but uh, standard payment terms for what exactly? Where can I find those? For every contract?  Also, you are straight up saying you don’t disclose any vendor financing arrangements , that’s not the same as “not having any vendor financing arrangements.” I “do not disclose” when I go to the bathroom but I absolutely do use the toilet. Let’s not pretend like you don’t have a history in helping get your buddies funding. You have deals with both Lambda and CoreWeave to guarantee that they will have compute revenue, which they in turn use to raise debt, which is used to buy more of your GPUs. You have learned how to feed debt into yourself quite well, I’m genuinely impressed .  This is great stuff, I’m having the time of my life with how not like Enron you are, and I’m serious that I 100% do not believe you are like Enron. But…what exactly are you doing man? What’re you going to do about what Wall Street wants?  Enron was a criminal enterprise! NVIDIA is not. More than likely NVIDIA is doing relatively boring vendor financing stuff and getting people to pay them on 50-60 day time scales — probably net 60, and, like it said, it gets paid upfront sometimes.  NVIDIA truly isn’t like Enron — after all, Meta is the one getting into ENERGY TRADING — to the point that I think it’s time to explain to you what exactly happened with Enron. Or, at least as much as is possible within the confines of a newsletter that isn’t exclusively about Enron… The collapse of Enron wasn’t just — in retrospect — a large business that ultimately failed. 
If that was all it was, Enron wouldn’t command the same space in our heads as other failures from that era, like WorldCom (which I mentioned earlier) and Nortel (which I’ll get to later), both of which were similarly considered giants in their fields. It’s also not just about the fact that Enron failed because of proven business and accounting malfeasance. WorldCom entered bankruptcy due to similar circumstances (though, rather than being liquidated, it was acquired as part of Verizon’s acquisition of MCI, the name of a company that had previously merged with WorldCom, and which WorldCom adopted as its own after bankruptcy), and unlike Enron, isn’t the subject of flashy Academy Award-nominated films, or even a Broadway production. It’s not the size of Enron that makes its downfall so intriguing. Nor, for that matter, is it the fact that Enron did a lot of legally and ethically dubious stuff to bring about its downfall. No, what makes Enron special is the sheer gravity of its malfeasance, the rotten culture at the heart of the company that encouraged said malfeasance, and the creative ways Enron’s leaders crafted an image of success around what was, at its heart, a dog of a company. Enron was born in 1985 on the foundations of two older, much less interesting businesses. The first, Houston Natural Gas (HNG), started life as a utility provider, pumping natural gas from the oilfields of Texas to customers throughout the region, before later exiting the industry to focus on other opportunities. The other, InterNorth, was based in Omaha, Nebraska, and was in the same business — pipelines. In the mid-1980s, HNG was the target of a hostile takeover attempt by Coastal Corporation (which, until 2001, operated a chain of refineries and gas stations throughout much of the US mainland). Unable to fend it off by itself, HNG merged with InterNorth, with the combined corporation renamed Enron.
The CEO of this new entity was Ken Lay, an economist by trade who spent most of his career in the energy sector and who enjoyed deep political connections with the Bush family. He co-chaired George H. W. Bush’s failed 1992 re-election campaign, and allowed Enron’s corporate jet to ferry Bush Sr. and Barbara Bush back and forth to Washington. Center for Public Integrity Director Charles Lewis said that “there was no company in America closer to George W. Bush than Enron.” George W. Bush (the second one) even had a nickname for Lay. Kenny Boy. Anyway, in 1987, Enron hired McKinsey — the world’s most evil management consultancy firm — to help the company create a futures market for natural gas. What that means isn’t particularly important to the story, but essentially, a futures contract is where a company agrees to buy or sell an asset in the future at a fixed price. It’s a way of hedging against risk, whether that be from something like price or currency fluctuations, or from default. If you’re buying oil in dollars, for example, buying a futures contract for oil to be delivered in six months’ time at a predetermined price means that if your currency weakens against the dollar, your costs won’t spiral. That bit isn’t terribly important. What does matter is that while working with McKinsey, Lay met someone called Jeff Skilling — a young engineer-turned-consultant who impressed him so deeply that Lay decided to poach him from McKinsey in 1990 and give him the role of chairman and CEO of Enron Finance Group. Anyway, Skilling continued to impress Lay, who gave him greater and greater responsibility, eventually crowning him Chief Operating Officer (COO) of Enron. With Skilling in a key leadership position, he was able to shape the organization’s culture. He appreciated those who took risks — even if those risks, when viewed with impartial eyes, were deemed reckless, or even criminal.
He introduced the practice of stack-ranking (also known as “rank and yank”) to Enron, which had previously been pioneered by Jack Welch at GE (see The Shareholder Supremacy from last year). Here, employees were graded on a scale, and those at the bottom of the scale were terminated. Managers had to place at least 10% (other reports say closer to 15%) of employees in the lowest bracket, which created an almost Darwinian drive to survive. Staffers worked brutal hours. They cut corners. They did some really, really dodgy shit. None of this bothered Skilling in the slightest. How dodgy, you ask? Well, in 2000 and 2001, California suffered a series of electricity blackouts. This shouldn’t have happened, because California’s total energy demand (at the time) was 28GW and its production capacity was 45GW. California also shares a transmission grid with other states (and, for what it’s worth, the Canadian provinces of Alberta and British Columbia, as well as part of Baja California in Mexico), meaning that in the event of a shortage, it could simply draw capacity from elsewhere. So, how did it happen? Well, remember, Enron traded electricity like a commodity, and as a result, it was incentivized to get the highest possible price for that commodity. So, it took power plants offline during peak hours, and exported power to other states when there was real domestic demand. How does a company like Enron shut down a power station? Well, it just asked. In one taped phone conversation released after the company’s collapse, an Enron employee named Bill called an official at a Las Vegas power plant (California shares the same grid with Nevada) and asked him to “get a little creative, and come up with a reason to go down. Anything you want to do over there? Any cleaning, anything like that?
" This power crisis had dramatic consequences — for the people of California, who faced outages and price hikes; for Governor Gray Davis, who was recalled by voters and later replaced by Arnold Schwarzenegger; for PG&E, which entered Chapter 11 bankruptcy that year ; and for Southern California Edison, which was pushed to the brink of bankruptcy as a result. This kind of stuff could only happen in an organization whose culture actively rewarded bad behavior .  In fact, Skilling was seemingly determined to elevate the dodgiest of characters to the highest positions within the company, and few were more-ethically-dubious than Andy Fastow, who Skilling mentored like a protegé, and who would later become Enron’s Chief Financial Officer.  Even before vaulting to the top of Enron’s nasty little empire, Fastow was able to shape its accounting practices, with the company adopting mark-to-market accounting practices in 1991 .  Mark-to-market sounds complicated, but it’s really simple. When listing assets on a balance sheet, you don’t use the acquisition cost, but rather the fair-market value of that asset. So, if I buy a baseball card for a dollar, and I see that it’s currently selling for $10 on eBay, I’d say that said asset is worth $10, not the dollar I paid for it, even though I haven’t actually sold it yet.  This sounds simple — reasonable, even — but the problem is that the way you determine the value of that asset matters, and mark-to-market accounting allows companies and individuals to exercise some…creativity.  Sure, for publicly-traded companies (where the price of a share is verifiable, open knowledge), it’s not too bad, but for assets with limited liquidity, limited buyers, or where the price has to be engineered somehow, you have a lot of latitude for fraud.  Let’s go back to the baseball card example. How do you know it’s actually worth $10, and not $1? 
What if the “fair value” isn’t something you can check on eBay, but what somebody told me in-person it’s worth? What’s to stop me from lying and saying that the card is actually worth $100, or $1000? Well, other than the fact I’d be committing fraud. What if I have ten $1 baseball cards, and I give my friend $10 and tell him to buy one of the cards using the $10 bill I just handed him, allowing me to say that I’ve realized a $9 profit on one of my $1 cards, and my other cards are worth $90 and not $9?  And then, what if I use the phony valuation of my remaining cards to get a $50 loan, using the cards as collateral, even though the collateral isn’t even one-fifth of the value of the loan?  You get the idea. While a lot of the things people can do to alter the mark-to-market value of an asset are illegal (and would be covered under generic fraud laws), it doesn’t change the fact that mark-to-market accounting allows for some shenanigans to take place. Another trait of mark-to-market accounting, as employed by Enron, is that it would count all the long-term potential revenue from a deal as quarterly revenue — even if that revenue would be delivered over the course of a decades-long contract, or if the contract would be terminated before its intended expiration date.  It would also realize potential revenue as actual revenue, even before money changed hands, and when the conclusion of the deal wasn’t a certainty. For example, in 1999, Enron sold a stake in four electricity-generating barges in Nigeria (essentially floating power stations) to Merrill Lynch , which allowed the company to register $12m in profit.  That sale ultimately didn’t happen, though that didn’t stop Enron from selling pieces to Merrill Lynch, which — I’m not kidding — Merrill Lynch quickly sold back to a Special Purpose Vehicle called “LJM2” controlled by Andrew Fastow. You’re gonna hear that name again. 
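The whole baseball-card scheme fits in a few lines. This is a toy illustration using only the hypothetical numbers from the example above, nothing more:

```python
# A toy version of the baseball-card scheme: engineer a price with a wash
# trade, mark everything else to that price, then borrow against it.

def portfolio_value(num_cards, price_per_card):
    return num_cards * price_per_card

num_cards = 10
cost_per_card = 1.00   # what I actually paid per card

# Step 1: hand a friend $10 of my own money and have him "buy" one card.
# I book a realized "profit" even though no outside money ever arrived.
engineered_price = 10.00
realized_profit = engineered_price - cost_per_card   # a $9 "profit"

# Step 2: mark the remaining nine cards to the engineered price.
remaining = num_cards - 1
honest_value = portfolio_value(remaining, cost_per_card)      # $9 at cost
marked_value = portfolio_value(remaining, engineered_price)   # $90 "fair value"

# Step 3: borrow $50 against the inflated "collateral", which honestly
# covers less than one-fifth of the loan.
loan = 50.00
honest_coverage = honest_value / loan   # 0.18

print(realized_profit, honest_value, marked_value, honest_coverage)
```

The fraud isn’t in any single line; it’s in who set `engineered_price`, which is exactly the latitude mark-to-market gives you when there’s no liquid market to check against.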
Although the Merrill Lynch bankers who participated in the deal were eventually convicted of conspiracy and fraud charges (long after the collapse of Enron), their convictions were later quashed on appeal. But still, for a moment, it gave a jolt to Enron’s quarterly earnings. Anyway, Enron was incredibly creative when it came to how it valued its assets. Take, for example, fiber optic cables. As the dot-com bubble swelled, Enron saw an opportunity, and wanted to be able to trade and control the supply of bandwidth, just as it did with other, more conventional commodities (like oil and gas). It built, bought, and leased fiber-optic cables throughout the country, and then, using exaggerated estimates of their value and potential long-term revenue, released glowing financial reports that made the company look a lot healthier and more successful than it actually was. Enron also loved to create special-purpose entities that existed either to generate revenue that didn’t actually exist, or to hold toxic assets that would otherwise need to be disclosed (with Enron then using its holdings in said entities to boost its balance sheet), or to disguise its debt. One, Whitewing, was created and capitalized by Enron (and an outside investor), and pretty much exclusively bought assets from Enron — which allowed the company to recognize sales and profits on its balance sheets, even if they were fundamentally contrived. Another set of entities — known as LJM, named after the first initials of Andy Fastow’s wife and two children, and which I mentioned earlier — did the same thing, allowing the company to hide risky or failing investments, to limit its perceived debt, and to generate artificial profits and revenues. LJM2 was, creatively, the second version of the idea.
Even though the assets that LJM held were, ultimately, dogshit, the distance that LJM provided, combined with Enron’s use of mark-to-market accounting, allowed the company to turn a multi-billion-dollar collective failure into a resounding and (on paper) profitable triumph. So, how did this happen, and how did it go on for so long? Well, first, Enron was, at its peak, worth $70bn. Its failure would be a failure for its investors and shareholders, and nobody — besides the press, that is — wanted to ask tough questions. It had auditors, but they were paid handsomely, turning a blind eye to the criminal malfeasance at the heart of the company. Auditor Arthur Andersen surrendered its license in 2002, bringing an end to the company — and resulting in 85,000 employees losing their jobs. Well, it wasn’t so much that it turned a blind eye as that it turned on a big paper shredder, shredding tons — and I’m using that as a measure of weight, not figuratively — of documents as Enron started to implode, for which it was later convicted of obstruction of justice. I’ve talked about Enron’s culture, but I’d be remiss if I didn’t mention that Enron’s highest-performers and its leadership received hefty bonuses in company equity, motivating them to keep the charade going. Enron’s pension scheme, I should add, was basically entirely Enron stock, and employees were regularly encouraged to buy more, with Kenneth Lay telling employees weeks before the company’s collapse that “the company is fundamentally sound” and to “hang on to their stock.” Additionally, per the terms of the Enron pension plan, employees were prevented from shifting their holdings into other pension funds, or other investments, until they turned 50. When the company collapsed, those people lost everything, even those who didn’t know anything about Enron’s criminality.
George Maddox, a retired former Enron employee, had his entire retirement tied up in 14,000 Enron shares (worth at the time more than $1.3 million), and was “forced to spend his golden years making ends meet by mowing pastures and living in a run-down East Texas farmhouse.” The US Government brought criminal charges against Enron’s top leadership. Ken Lay was convicted of four counts of fraud and making false statements, but died while on vacation in Aspen before sentencing. May he burn in Hell. Skilling was convicted on 24 counts of fraud and conspiracy and sentenced to 24 years in jail. This was reduced in 2013 on appeal to 14 years, and he was released to a halfway house in 2018, and then freed in 2019. He’s since tried to re-enter the energy sector — with one venture combining energy trading and, I kid you not, blockchain technology — although nothing really came of it. Andy Fastow pled guilty to two counts — one of manipulation of financial statements, and one of self-dealing — and received ten years in prison. This was later reduced to six years, including two years of probation, in part because he cooperated with the investigations against other Enron executives. He is now a public speaker and a tech investor in an AI company, KeenCorp. His wife, Lea, who also worked at Enron, received twelve months for conspiracy to commit wire fraud and money laundering and for submitting false tax returns. She was released from custody in July 2005. Enron’s implosion was entirely self-inflicted and horrifyingly, painfully criminal, yet it had plenty of collateral damage — to the US economy, to those companies that had lent it money, to its employees who lost their jobs and their life savings and their retirements, and to those employees at companies most entangled with Enron, like those at auditing firm Arthur Andersen. This isn’t unique among corporate failures. WorldCom had some dodgy accounting practices. Nortel too.
Both companies failed, both companies wrecked the lives of their employees, and the failure of these companies had systemic economic consequences (especially in Canada, where Nortel, at its peak, accounted for one-third of the market cap of all companies on the Toronto Stock Exchange). The reason why Enron remains lodged in our imagination — and why NVIDIA is so vociferously opposed to being compared with Enron — is the extent to which Enron manipulated reality to appear stronger and more successful than it was, and how long it was able to get away with it. While the memory of Enron may have faded — it happened over two decades ago, after all — we haven’t forgotten the instincts that it gave us. It’s why our noses twitch when we see special-purpose vehicles being used to buy GPUs, and why we gag when we see mark-to-market accounting. It’s entirely possible that everything NVIDIA is doing is above board. Great! But that doesn’t do anything for the deep pit of dread in my stomach. A few weeks ago, I published the Hater’s Guide to NVIDIA, and included within it a guide to what this company does. If you’re looking at this through the cold, unthinking lens of late-stage capitalism, this all sounds really good! I’ve basically described a company that has an effective monopoly on the one thing required for a high-growth (if we’re talking exclusively about capex spending) industry to exist. Moreover, that monopoly is all but assured, thanks to NVIDIA’s CUDA moat, its first-mover advantage, and the actual capabilities of the products themselves — thereby allowing the company to charge a pretty penny to customers. And those customers? If we temporarily forget about the likes of Nebius and CoreWeave (oh, how I wish I could forget about CoreWeave permanently), we’re talking about the biggest companies on the planet. Ones that, surely, will have no problems paying their bills.
Back in February 2023, I wrote about The Rot Economy, and how everything in tech had become oriented around growth — even if it meant making products harder to use as a means of increasing user engagement or funnelling them toward more profitable parts of an app. Back in June 2024, I wrote about the Rot-Com Bubble, and my greater theory that the tech industry has run out of hypergrowth ideas: In simple terms, big tech — Amazon, Google, Microsoft and Meta, but also a number of other companies — no longer has the “next big thing,” and jumped on AI out of an abundance of desperation. Hell, look at Oracle. This company started off by selling databases and ERP systems to big companies, then trapping said companies by making it really, really difficult to migrate to cheaper (and better) solutions, and then bleeding said companies with onerous licensing terms (including some where you pay by the number of CPU cores that use the application). It doesn’t do anything new, or exciting, or impressive, and even when presented with the opportunity to do things that are useful or innovative (like when it bought Sun Microsystems), it turns away. I imagine that, deep down, it recognizes that its current model just isn’t viable in the long term, and so, it needs something else. When you haven’t thought about innovation… well… ever, it’s hard to start. Generative AI, on the face of it, probably seemed like a godsend to Larry Ellison. We also live in an era where nobody knows what big tech CEOs do other than make nearly $100 million a year, meaning that somebody like Satya Nadella can get called a “thoughtful leader with striking humility” for pushing Copilot AI into every single part of your Microsoft experience, even Notepad, a place that no human being would want it, and accelerating capital expenditures from $28 billion across the entirety of FY 2023 to $34.9 billion in its latest quarter. In simpler terms, spending money makes a CEO look busy.
And at a time when there were no other potential growth avenues, AI was a convenient way to make everybody look busy. Every department can “have an AI strategy,” and every useless manager and executive can yell, as ServiceNow’s CEO did back in 2022, “let me make it clear to everybody here, everything you do: AI, AI, AI, AI, AI.” I should also add that ChatGPT was the first real, meaningful hit that the American tech industry had produced in a long, long time — the last being, if I’m honest, Uber, and that’s if we allow “successful yet not particularly good businesses” into the pile. If we insist on things like “profitability” and “sustainability,” US tech hasn’t done so great. Snowflake runs at a loss, Snap runs at a loss, and while Uber has turned things around somewhat, it’s hardly created the next cloud computing or smartphone. Putting aside finances, the last major “hit” was probably Venmo or Zelle, and maybe, if I’m feeling generous, smart speakers like the Amazon Echo and Apple HomePod. Much like Uber, none of these were “the next big thing,” which would be fine, except big tech needs more growth forever right now, pig! This is why Google, Amazon and Meta all do 20 different things — although rarely for any length of time, with these “things” often having a shelf life shorter than a can of peaches — because The Rot Economy’s growth-at-all-costs mindset exists only to please the markets, and the markets demanded growth. ChatGPT was different. Not only did it do something new, it did so in a way that made it relatively easy to get people to try it and “see the potential.” It was also really easy to convince people it would become something bigger and better, because that’s what tech does.
To quote Bender and Hanna, AI is a “marketing term” — a squishy way of evoking futuristic visions of autonomous computers that can do anything and everything for us, and because both consumers and analysts have been primed to believe and trust the tech industry, everybody believed that whatever ChatGPT was would be the Next Big Thing. And said “Next Big Thing” is powered by Large Language Models, which require GPUs sold by one company — NVIDIA. AI became a very useful thing to do. If a company wanted to seem futuristic and attract investors, it could now “integrate AI.” If a hyperscaler wanted to seem enterprising and like it was “building for the future,” it could buy a bunch of GPUs, or invest in its own silicon, or, as Google, Microsoft, Amazon and Meta have done, shove AI into every imaginable crevice of their apps. Investors could invest in AI companies, retail investors (IE: regular people) could invest in AI stocks, tech reporters could write about something new in AI, LinkedIn perverts could write long screeds about AI, the markets could become obsessed with AI… …and yeah, you can kind of see how things got out of control. Everybody now had something to do. An excuse to do AI, regardless of whether it made sense, because everybody else was doing it. ChatGPT quickly became one of the most popular websites on the internet — all while OpenAI burned billions of dollars — and because the media effectively published every single thought that Sam Altman had (such as that GPT-4 would “automate away some jobs and create others” and that he was a “little bit scared of it”), AI, as an idea, technology, symbolic stock trope, marketing tool and myth, became so powerful that it could do anything, replace anyone, and be worth anything, even the future of your company.
Amongst the hype, there was an assumption related to scaling laws (summarized well by Charlie Meyer): in simple terms, the idea that shoving in more training data and using more compute power would exponentially increase the ability of a model to do stuff. And to make a model that did more stuff, you needed more GPUs and more data centers. Did it matter that there was compelling evidence in 2022 (Gary Marcus was right!) that there were limits to scaling laws, and that we would hit the point of diminishing returns? Apparently not. Amidst all this, NVIDIA has sold over $200 billion of GPUs since the beginning of 2023, becoming the largest company on the stock market and trading at over $170 as of writing this sentence, only a few years after being worth $19.52 a share. You see, Meta, Google, Microsoft and Amazon all wanted to be “part of the future,” so they sank a lot of their money into NVIDIA, making up 42% of its revenue in its fiscal year 2025. Though there are some arguments about exactly how much of big tech’s billowing capital expenditures are spent on GPUs, some estimate that somewhere between 41% and more than 50% of a data center’s capex is spent on them. If you’re wondering what the payoff is, well, you’re in good company. I estimate that there’s only around $61 billion in total generative AI revenue, and that includes every hyperscaler and neocloud. Large Language Models are limited, AI agents are a pipe dream and simply do not work, AI-powered products are unreliable and coding LLMs make developers slower, and the cost of inference — the way in which a model produces its output — keeps going up. So, due to the fact that so much money has now been piled into building AI infrastructure, and big tech has promised to spend hundreds of billions of dollars more in the next year, big tech has found itself in a bit of a hole. How big a hole?
Well, by the end of the year, Microsoft, Amazon, Google and Meta will have spent over $400bn in capital expenditures, much of it focused on building AI infrastructure, on top of $228.4 billion in capital expenditures in 2024 and around $148bn in capital expenditures in 2023, for a total of around $776bn in the space of three years, and they intend to spend $400 billion or more in 2026. As a result, based on my analysis, big tech needs to make $2 trillion in brand new revenue, specifically from AI, by 2030, or all of this was for nothing. I go into detail in my premium piece, but I’m going to give you a short explanation here. Sadly, you’re going to have to learn stuff. I know! I’m sorry. Introducing a term: depreciation. From my October 31 newsletter: Nobody seems to be able to come to a consensus about how long this period should be. In Microsoft’s case, depreciation for its servers is spread over six years — a convenient change it made in August 2022, a few months before the launch of ChatGPT. This means that Microsoft can spread the cost of the tens of thousands of A100 GPUs bought in 2020, or the 450,000 H100 GPUs it bought in 2024, across six years, regardless of whether those are the years in which they will be either A) generating revenue or B) still functional. CoreWeave, for what it’s worth, says the same thing — but largely because it’s betting that it’ll still be able to find users for older silicon after its initial contracts with companies like OpenAI expire. The problem, as the aforementioned CNBC article points out, is that this is pretty much untested ground. Whereas we know how long, say, a truck or a piece of heavy machinery can last, and how long it can deliver value to an organization, we don’t know the same thing about the kind of data center GPUs that hyperscalers are spending tens of billions of dollars on each year. Any kind of depreciation schedule is based on, at best, assumptions, and at worst, hope.
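Before getting to those assumptions, the accounting effect itself is worth making concrete. Here is a minimal straight-line depreciation sketch, using hypothetical round numbers rather than any company’s actual books:

```python
# Straight-line depreciation: how stretching the schedule from three to
# six years shrinks the annual expense that hits net income. All numbers
# here are hypothetical round figures, not any company's actual books.

def annual_depreciation(cost: float, useful_life_years: int) -> float:
    """Spread an asset's cost evenly over its assumed useful life."""
    return cost / useful_life_years

gpu_fleet_cost = 30_000_000_000  # a hypothetical $30B GPU purchase

three_year = annual_depreciation(gpu_fleet_cost, 3)  # $10B/year expense
six_year = annual_depreciation(gpu_fleet_cost, 6)    # $5B/year expense

# Under the longer schedule, reported operating income looks $5B better
# each year, even though the same cash left the building up front.
print(f"3-year schedule: ${three_year / 1e9:.0f}B/year")
print(f"6-year schedule: ${six_year / 1e9:.0f}B/year")
print(f"Annual earnings flattered by: ${(three_year - six_year) / 1e9:.0f}B")
```

Same cash out the door either way; the only thing the longer schedule changes is how much of it shows up as an expense in any given year.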
The assumption that the cards won’t degrade with heavy usage. The assumption that future generations of GPUs won’t be so powerful and impressive that they’ll render the previous ones more obsolete than expected, kind of like how the first jet-powered planes of the 1950s did to those manufactured just one decade prior. The assumption that there will, in fact, be a market for older cards, and that there’ll be a way to lease them profitably. What if those assumptions are wrong? What if that hope is, ultimately, irrational? Mihir Kshirsagar of the Center for Information Technology Policy framed the problem well: This is why Michael Burry brought it up recently — because spreading out these costs allows big tech to make its net income (i.e., profits) look better. In simple terms, by spreading out costs over six years rather than three, hyperscalers are able to reduce a line item that eats into their earnings, which makes their companies look better to the markets. So, why does this create an artificial time limit? In really, really simple terms: So, now that you know this, there’s a fairly obvious question to ask: why are they still buying GPUs? Also… where the fuck are they going? As I covered in the Hater’s Guide To NVIDIA: While I’m not going to copy-paste my whole (premium) piece, I was only able to find, at most, a few hundred thousand Blackwell GPUs — many of which aren’t even online!
— including OpenAI’s Stargate Abilene (allegedly 400,000, though only two buildings have been handed over); a theoretical 131,000-GPU cluster owned by Oracle announced in March 2025; 5,000 Blackwell GPUs at the University of Texas at Austin; “more than 1,500” in a Lambda data center in Columbus, Ohio; the Department of Energy’s still-in-development 100,000-GPU supercluster, as well as “10,000 NVIDIA Blackwell GPUs” that are “expected to be available in 2026” in its “Equinox” cluster; 50,000 going into the still-unbuilt Musk-run Colossus 2 supercluster; CoreWeave’s “largest GB200 Blackwell cluster” of 2,496 Blackwell GPUs; “tens of thousands” of them deployed globally by Microsoft (including 4,600 Blackwell Ultra GPUs); and 260,000 GPUs for five AI data centers for the South Korean government… and I am still having trouble finding one million of these things that are actually allocated anywhere, let alone in a data center, let alone one with sufficient power. I do not know where these six million Blackwell GPUs have gone, but they certainly haven’t gone into data centers that are powered and turned on. In fact, power has become one of the biggest issues with building these things, in that it’s really difficult (and maybe impossible!) to get the amount of power they need. In really simple terms: there isn’t enough power or enough built data centers for those six million Blackwell GPUs, in part because the data centers aren’t built, and in part because there isn’t enough power for the ones that are. Microsoft CEO Satya Nadella recently said on a podcast that his company “[didn’t] have the warm shells to plug into,” meaning buildings with sufficient power, and heavily suggested Microsoft “may actually have a bunch of chips sitting in inventory that [he] couldn’t plug in.” The news that HPE’s (Hewlett Packard Enterprise) AI server business underperformed, and by a significant margin, only raises more questions about where these chips are going.
So why, pray tell, is Jensen Huang of NVIDIA saying that he has 20 million Blackwell and Vera Rubin GPUs ordered through the end of 2026? Where are they going to go? I truly don’t know! AI bulls will tell you about the “insatiable demand for AI” and that these massive amounts of orders are proof of something or other, and you know what, I’ll give them that — people sure are buying a lot of NVIDIA GPUs! I just don’t know why. Nobody has made a profit from AI, and those making revenue aren’t really making much. For example, my reporting on OpenAI from a few weeks ago suggests that the company only made $4.329 billion in revenue through the end of September, extrapolated from the 20% revenue share that Microsoft receives from the company. As some people have quibbled with the figures, claiming they are either A) delayed or B) not inclusive of the revenue that OpenAI is paid by Microsoft as part of Bing’s AI integration and sales of OpenAI’s models via Microsoft Azure, I wanted to be clear about two things: In the same period, OpenAI spent $8.67 billion on inference (the process by which an LLM creates an output). This is the biggest company in the generative AI space, with 800 million weekly active users and the mandate of heaven in the eyes of the media. Anthropic, its largest competitor, alleges it will make $833 million in revenue in December 2025, and based on my estimates will end up with $5 billion in revenue by the end of the year. Based on my reporting from October, Anthropic spent $2.66 billion on Amazon Web Services through the end of September, meaning that it (based on my own analysis of reported revenues) spent 104% of its $2.55 billion in revenue up until that point just on AWS, and likely spent just as much on Google Cloud.
While everybody wants to tell the story of Anthropic’s “efficiency” and “only burning $2.8 billion this year,” one has to ask why a company that is allegedly “reducing costs” had to raise $13 billion in September 2025, after raising $3.5 billion in March 2025, and after raising $4 billion in November 2024. Am I really meant to read stories about Anthropic hitting break-even in 2028 with a straight face? Especially as other stories say Anthropic will be cash flow positive “as soon as 2027.” These are the two largest companies in the generative AI space, and by extension the two largest consumers of GPU compute. Both companies burn billions of dollars, and require an infinite amount of venture capital to keep them alive at a time when the Saudi Public Investment Fund is struggling and the US venture capital system is set to run out of cash in the next year and a half. The two largest sources of actual revenue for selling AI compute are subsidized by venture capital and debt. What happens if these sources dry up? And, in all seriousness, who else is buying AI compute? What are they doing with it? Hyperscalers (other than Microsoft, which chose to stop reporting its AI revenue back in January, when it claimed $13 billion in annualized revenue, or about $1 billion a month) don’t disclose anything about their AI revenue, which in turn means we have no real idea how much real, actual money is coming in to justify these GPUs. CoreWeave made $1.36 billion in revenue (and lost $110 million doing so) in its last quarter — and if that’s indicative of the kind of actual, real demand for AI compute, I think it’s time to start panicking about whether all of this was for nothing.
CoreWeave has a backlog of over $50 billion in compute, but $22 billion of that is OpenAI (a company that burns billions of dollars a year and lives on venture subsidies), $14 billion of that is Meta (which has yet to work out how to make any kind of real money from generative AI, and no, its “generative AI ads” are not the future, sorry), and the rest is likely a mixture of Microsoft and NVIDIA, which agreed to buy $6.3 billion of any unused compute from CoreWeave through 2032. Sorry, I also forgot Google, which is renting capacity from CoreWeave to rent to OpenAI. I also forgot to mention that CoreWeave’s backlog problem stems from data center construction delays. And CoreWeave has $14 billion in debt, mostly from buying GPUs, which it was able to raise by using GPUs as collateral and by having contracts from customers willing to pay it, such as NVIDIA, which is also the company selling it the GPUs. So, just to be abundantly clear: CoreWeave has bought all those GPUs to rent to OpenAI, Microsoft (for OpenAI), Meta, Google (for OpenAI), and NVIDIA, which is the company that benefits from CoreWeave’s continued ability to buy GPUs. Otherwise, where’s the fucking business, exactly? Who are the customers? Who are the people renting these GPUs, and for what purpose are they being rented? How much money does renting those GPUs actually bring in? You can sit and waffle on about the supposedly glorious “AI revolution” all you want, but where’s the money, exactly? And why, exactly, are we buying more GPUs? What are they doing? To whom are they being rented? For what purpose? And why isn’t it creating the kind of revenue that is actually worth sharing? Is it because the revenue sucks? Is it because it’s unprofitable to provide it? And why, at this point in history, do we not know? Hundreds of billions of dollars have made NVIDIA the biggest company on the stock market, and we still do not know why people are buying these fucking things.
NVIDIA is currently making hundreds of billions in revenue selling GPUs to companies that either plug them in and start losing money or, I assume, put them in a warehouse for safekeeping. This brings me to my core anxiety: why, exactly, are companies pre-ordering GPUs? What benefit is there in doing so? Blackwell does not appear to be “more efficient” in a way that actually makes anybody a profit, and we’re potentially years from seeing these GPUs in operation in data centers at the scale they’re being shipped — so why would anybody be buying more? I doubt these are new customers — they’re likely hyperscalers, neoclouds like CoreWeave, and resellers like Dell and SuperMicro — because the only companies that can actually afford to buy them are those with massive amounts of cash or debt, to the point that even Google, Amazon, Meta and Oracle are taking on massive amounts of new debt, all without a plan to make a profit. NVIDIA’s largest customers are increasingly unable to afford its GPUs, which appear to increase in price with every subsequent generation. NVIDIA’s GPUs are so expensive that the only way you can buy them is by already having billions of dollars or being able to raise billions of dollars, which means, in a very real sense, that NVIDIA is dependent not on its customers, but on its customers’ credit ratings and financial backers. To make matters worse, the key reason one would buy a GPU is to either run services on it or rent it to somebody else, and the two largest parties spending money on these services are OpenAI and Anthropic, both of whom lose billions of dollars and are thus dependent on venture capital and debt (remember, OpenAI has a $4 billion line of credit, and Anthropic a $2.5 billion one too). In simple terms, NVIDIA’s customers rely on debt to buy its GPUs, and NVIDIA’s customers’ customers rely on debt to pay to rent them. Yet it gets worse from there. Who, after all, are the biggest customers renting AI compute?
That’s right: AI startups, all of which are deeply unprofitable. Cursor — Anthropic’s largest customer and now its biggest competitor in the AI coding sphere — raised $2.3 billion in November after raising $900 million in June. Perplexity, one of the most “popular” AI companies, raised $200 million in September, after raising $100 million in July, after seeming to fail to raise $500 million in May (I’ve not seen any proof this round closed), after raising $500 million in December 2024. Cognition raised $400 million in September after raising $300 million in March. Cohere raised $100 million in September, a month after it raised $500 million. Venture capital is feeding money to either OpenAI or Anthropic to use their models, or in some cases to hyperscalers or neoclouds like CoreWeave or Lambda to rent NVIDIA GPUs. OpenAI and Anthropic then raise venture capital or debt to pay hyperscalers or neoclouds to rent NVIDIA GPUs. Hyperscalers and neoclouds then use either debt or existing cash flow (in the case of hyperscalers, though not for long!) to buy more NVIDIA GPUs. Only one company actually makes a profit here: NVIDIA. At some point, a link in this debt-backed chain breaks, because very little cash flow exists to prop it up. At some point, venture capitalists will be forced to stop funnelling money into unprofitable, unsustainable AI companies, which will make those companies unable to funnel money into the pockets of those buying GPUs, which will make it harder for those companies to justify (or raise debt for) buying more GPUs. And if I’m honest, none of NVIDIA’s success really makes any sense. Who is buying so many GPUs? Where are they going? Why are inventories increasing? Is it really just pre-buying parts for future orders? Why are accounts receivable climbing, and how much product is NVIDIA shipping before it gets paid?
While these are both explainable as “this is a big company and that’s how big companies do business” (which is true!), why do receivables not seem to be coming down? And how long, realistically, can the largest company on the stock market continue to grow revenues selling assets that only seem to lose its customers money? I worry about NVIDIA, not because I believe there’s a massive scandal, but because so much rides on its success, and its success rides on dwindling amounts of venture capital and debt, because nobody is actually making money to pay for these GPUs. In fact, I’m not even saying it goes tits up. Hell, it might even have another good quarter or two. It really comes down to how long people are willing to be stupid, and how long Jensen Huang is able to call hyperscalers at three in the morning and say “buy one billion dollars of GPUs, pig.” No, really! I think much of the US stock market’s growth is held up by how long everybody is willing to be gaslit by Jensen Huang into believing that they need more GPUs. At this point it’s barely about AI anymore, as AI revenue — real, actual cash made from selling services run on GPUs — doesn’t even cover its own costs, let alone create the cash flow necessary to buy $70,000 GPUs thousands at a time. It’s not like any actual innovation or progress is driving this bullshit! In any case, the markets crave a healthy NVIDIA, as so many hundreds of billions of dollars of NVIDIA stock sit in the hands of retail investors and people’s 401(k)s, and its endless growth has helped paper over the pallid growth of the US stock market and, by extension, the decay of the tech industry’s ability to innovate. Once this pops — and it will pop, because there is simply not enough money to do this forever — there must be a referendum on those who chose to ignore the naked instability of this era, and the endless lies that inflated the AI bubble. Until then, everybody is betting billions on the idea that Wile E.
Coyote won’t look down.

Let’s start with a horrible fact: it takes about 2.5 years of construction time and $50 billion per gigawatt of data center capacity. One way or another, these GPUs are depreciating in value, either through death (or reduced efficacy through wear and tear) or by becoming obsolete, which is very likely, as NVIDIA has committed to releasing a new GPU every year. At some point, Wall Street is going to need to see some sort of return on this investment, and right now that return is “negative dollars.” I break it down in my premium piece, but I estimate that big tech needs to make $2 for every $1 of capex. This revenue must also be brand spanking new, as this capex is only for AI. Meta, Amazon, Google and Microsoft are already years and hundreds of billions of dollars in, and are yet to see a dollar of profit, creating a $1.21 trillion hole just to justify the expenses (so around $605 billion in capex all told, at the time I calculated it). You might argue that there’s a scenario where, say, an A100 GPU is “useful” past the three- or six-year shelf life. Even if that were the case, the average rental price of an A100 GPU is 99 cents an hour. This is a four- or five-year-old GPU, and customers are paying for it like they would a five-year-old piece of hardware. The same fate awaits H100 GPUs too. Every year, NVIDIA releases a new GPU, lowering the value of all the other GPUs in the process, making it harder to fill the holes created by all the other GPUs. This whole time, nobody appears to have found a way to make a profit, meaning that the hole created by these GPUs remains unfilled, all while big tech firms buy more GPUs, creating more holes to fill. Big tech keeps buying more GPUs despite the old GPUs failing to pay for themselves. To fix this problem, big tech is buying more GPUs.
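The arithmetic behind that $1.21 trillion figure is simple enough to write down. This sketch uses the capex figures quoted in the piece; the 2x revenue-per-dollar-of-capex multiple is the author’s estimate, not a standard accounting rule:

```python
# Back-of-the-envelope version of the article's capex math, using its
# own quoted figures. The 2x revenue-per-dollar-of-capex multiple is
# the author's assumption, not an accounting identity.

capex_by_year = {2023: 148e9, 2024: 228.4e9, 2025: 400e9}
total_capex = sum(capex_by_year.values())  # ~$776.4B over three years

# At the time the author ran the numbers, AI-attributed capex was about
# $605B; at $2 of new revenue per $1 of capex, that's the "hole":
ai_capex = 605e9
revenue_needed = ai_capex * 2  # $1.21 trillion in brand new revenue

print(f"Three-year capex: ${total_capex / 1e9:.1f}B")
print(f"New AI revenue needed: ${revenue_needed / 1e12:.2f}T")
```

Add the $400 billion or more promised for 2026 and the required new revenue keeps climbing toward the $2 trillion the article cites.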
Newer-generation GPUs — like NVIDIA’s Blackwell and Vera Rubin — require entirely new data center architecture, meaning that one has to either build a brand new data center or retrofit an old one. Big tech is spending billions of dollars to make sure it’s able to turn on these new GPUs, at which point you may think that they’ll make a profit. Yet even when they’re turned on, these things don’t make money. The Information reports that Oracle’s Blackwell GPUs have a negative 100% gross margin. How exactly are these bloody things meant to make more money than they cost in the next six years, let alone three? They don’t make a profit now and have no path to doing so in the future! I feel like I’m going INSANE!

1. This is accrual accounting, meaning that these numbers are revenue booked in the quarter I reported them. Any comments about quarter-long delays in payments are incorrect.
2. Microsoft’s revenue share payments to OpenAI are pathetic — totalling, based on documents reviewed by this publication, $69.1 million in CY (calendar year) Q3 2025.
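For what it’s worth, the “extrapolated from the 20% revenue share” arithmetic used for the OpenAI revenue figure above is just division. The 20% rate is the one the piece reports; the payment figure below is back-derived from the article’s $4.329 billion conclusion, not a separately sourced number:

```python
# The "extrapolated from the 20% revenue share" arithmetic, made
# explicit. The 20% rate is the one reported in the piece; the payment
# figure is back-derived from the article's $4.329B conclusion.

REVENUE_SHARE_RATE = 0.20  # Microsoft's reported cut of OpenAI's revenue

def implied_revenue(share_payments: float,
                    rate: float = REVENUE_SHARE_RATE) -> float:
    """If the payments are a fixed cut of revenue, revenue = payments / rate."""
    return share_payments / rate

# ~$865.8M in observed share payments implies the article's figure:
print(f"${implied_revenue(865.8e6) / 1e9:.3f}B")  # prints $4.329B
```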

Jeff Geerling 2 days ago

The DC-ROMA II is the fastest RISC-V laptop and is odd

Inside this Framework 13 laptop is a special mainboard developed by DeepComputing in collaboration with Framework. It has an 8-core RISC-V processor, the ESWIN 7702X—not your typical AMD, Intel, or even Arm SoC. The full laptop version I tested costs $1,119 and gets you about the performance of a Raspberry Pi. A Pi 4—the one that came out in 2019.

Ruslan Osipov 2 days ago

Unveiling my gaming blog: Unmapped Worlds

For the past eight months, I’ve been running two parallel writing projects. You know about this one: my weekly posts in this blog (this is post 42, by the way). But there has been a shadow project running in the background. I love video games, and I’ve collected too many opinions on them to keep them to myself. Meet Rooslawn’s Unmapped Worlds, a blog where I write essays about games. I decided to go for a phonetic spelling of Ruslan in the title, in the hopes I’ll get misnamed less. I don’t review games. Instead, I write about game mechanics and tropes, and I love breaking down how digital worlds are constructed. It’s a place where I can complain about my dislike for map markers and quest GPS, or explore the reality that I rarely actually finish the games I play. It is a home for deep dives into immersion, design philosophy, and the specific friction that makes a game memorable. A few of the pieces I’m most proud of include “when I didn’t speak the language of games” and “difficulty sliders are dumb”. Running the project anonymously was a great idea - it allowed me to be more vulnerable, to experiment with different topics and formats, and to find my voice. The voice of Unmapped Worlds can be described as rambly. I’ve been thinking of it as written gumbo. It isn’t clean and corporate; there’s texture, love and care put into it, and you know it’s authentic. Gumbo is something spicy, authentic, textured, visceral, and willing to take risks that alienate some of the audience. This is unlike slop, which usually comes from the desire for inoffensive predictability and consensus, even if we have to falsify our preferences to achieve it. - The FLUX Review, episode 211 Ultimately I felt like attaching my name to Unmapped Worlds does it justice - who I am is highly relevant to the writing. Gumbo’s flavor is unique to the chef. If you like video games, see if any of the 42 (so far) essays connect with you, and consider subscribing to my newsletter.
