Latest Posts (20 found)

Unscrewing lightbulbs

Giving lightbulbs a MAC address was a mistake that I’m living with. I’m literally unscrewing lightbulbs to renew their DHCP lease
@dbushell.com - Bluesky

Instead of enjoying the bank holiday Monday I updated my homelab software. I was ‘inspired’ by the Copy Fail Linux bug to run full distro upgrades. This is my self-hosted update for Spring 2026 (rough documentation to give future me a chance).

Monday’s fun risked a week of pain. I do have backups but restoring them on a broken LAN is tricky. I have an ISP provided wifi router to dust off in an emergency. Along with an absurdly long 15 metre HDMI cable I do not care to unravel. My winter update added a hardware fallback but that too requires careful rejigging.

I have Proxmox hosts, virtual machines, and Raspberry DietPis. They were all on Debian 12 (Bookworm) with a kernel potentially susceptible to the bug. Minimal Debian installs are perfect because I run everything in Docker anyway. Data volumes are easy to back up or network mount. I can change host at will for any service. Debian is just sensible, well-documented, no-fuss Linux.

I used to run “minimal” Ubuntu server. Following 24.04 I found myself debloating most of the Ubuntu part (i.e. snaps). It sounds like the new coreutils are a CVE party. Glad I escaped before that drama! As it happens, this week’s Linux Unplugged episode had Canonical’s VP of Engineering spewing embarrassing AI platitudes. “Ubuntu is not for you” was the only thing said worth remembering.

I updated most of my VMs first because they’re easy to restore if anything fails. I followed Lubos Rendek’s guide: start with a full package update, then change the package sources before running another step-by-step upgrade. The only non-Debian sources I have are Docker and Tailscale. Yes that means I run Docker inside Proxmox VMs — and you can’t stop me! That’s not even my worst crime…

After the Trixie upgrade I found VMs were failing to obtain a LAN IP address. The virtual network device had been renamed. I edited the network config and just changed the interface reference. There is surely a better/more predictable fix but this was the quickest. The same name was used across all VMs so I guess 18 is the magic number. Everything has been stable so far. If issues arise I’ll just nuke and pave from a Debian 13 ISO. Docker config and volumes are backed up independently of the VM images.

DietPi has a long Trixie upgrade post I didn’t read. I just curled the upgrade script to bash. I gave the script a cursory glance before hitting enter. I have a Pi 4 running failover DNS and a Pi 5 running my public Forgejo instance. DietPi is ideal because of the tiny footprint; I run Docker here too. Raspberry Pi still hasn’t merged upstream Copy Fail fixes. I’m already in trouble if this bug can be exploited but I did the temporary fix out of caution.

I wasn’t going to bother with Proxmox 9 but after a GUI update I was informed version 8 “end of life” was August 2026. That is soon! I followed the official upgrade guide on my Mini-ITX server. Proxmox has a tool to check compatibility. I saw no red lights so I stopped all VMs, updated package sources to Trixie, and ran the upgrade. It is critical to run the checker again before rebooting. I ran into the systemd-boot issue. Apparently if this package is not removed the system fails to boot. If my particular box fails to boot I’m in big trouble because I broke video output and have yet to fix it.
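For future me, the Debian side of this boils down to roughly the following (a rough sketch, not a copy-paste recipe; the file paths and interface names here are illustrative, not the exact ones from my machines):

```sh
# Sketch of the Bookworm -> Trixie upgrade steps described above.
# Interface names (ens18/enp6s18) and file paths are illustrative guesses.

# 1. Bring Debian 12 fully up to date first
sudo apt update && sudo apt full-upgrade -y
sudo apt autoremove --purge -y

# 2. Point the package sources at Trixie (covers the Docker/Tailscale lists too)
sudo sed -i 's/bookworm/trixie/g' /etc/apt/sources.list /etc/apt/sources.list.d/*.list

# 3. Run the upgrade in two passes, per the guide
sudo apt update
sudo apt upgrade --without-new-pkgs -y
sudo apt full-upgrade -y

# 4. If a VM loses its LAN address after reboot, check whether the virtual
#    NIC was renamed and point the interface config at the new name
ip -br link                                              # shows the current interface names
sudo sed -i 's/ens18/enp6s18/' /etc/network/interfaces   # illustrative names only
sudo systemctl restart networking
```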
I have another Proxmox machine running virtualised OPNsense for my home router. I can’t stop the OPNsense VM and upgrade the host to Proxmox 9 because the host would have no network access. I had two options: use my failover VM, or YOLO it live. I specifically set up option 1 for such a purpose. I went with option 2. I figured any software running in memory is still alive until I reboot, right?

I didn’t question whether Proxmox would kill any processes itself (it didn’t). The update was suspiciously fast. I ran the checker again and saw a lot of yellow warnings. Yikes. Eventually I noticed I’d failed to update some sources to Trixie and I’d installed a franken-distro. After fixing mistakes all I could do was reboot and pray for an agonising two minutes.

OPNsense is the only non-Debian operating system in my homelab. I manage it entirely via the web GUI. The 26.1 update had quite a few significant changes. My DHCP setup was considered “legacy” and my firewall rules required a manual migration.

Despite dumbening my smart home my lightbulbs still demand a WiFi connection. I program them myself to avoid Home Assistant and proprietary apps. Turns out I hard-coded IP addresses (discovery protocols are a joke). Despite having dynamic IPs they remained stable until the OPNsense 26.1 DHCP update. I had no easy way to identify each light. Why would they name themselves anything useful? That’s how I ended up unscrewing the bulbs one by one to see which MAC address fell off the network. I gave them static IPs on a VLAN for future me to appreciate.

And with that, my home network is up to date! Thanks for reading! Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.

0 views
Unsung Today

“This was a user-friendly computer.”

The Pixar animated short Lifted was released in front of Ratatouille in 2006:

[video still from Lifted]

I’ve always been amused by this imaginary interface, which is so clearly not how any sort of computer would work. Or so I thought. These are photos I took in Melbourne in 2024 of CSIRAC, Australia’s first digital computer from about 1949:

[three photos of CSIRAC]

This is a “console” of the computer, used to tactically probe or input specific memory addresses (in binary), and to control functions like stopping and starting the program. Any proper programming and eventually inputting data would happen using gentler I/O devices like typewriter keyboards, paper tape, and magnetic storage.

[two more photos of the console]

Physical consoles like this one were last seen in the 1970s on hobbyist home computers such as the Altair 8800, and the Console app on your Mac diligently spitting out logs is its spiritual and virtual successor. But even if a CSIRAC console feels hostile today, 75 years ago it was quite the opposite:

And [CSIRAC] helped there too. It could display all its working registers and the last 16 instructions executed. It could be given an address at which to stop (a “breakpoint”), and be stepped by one instruction at a time. It even had lights to show the computer’s internal states. This was a user-friendly computer.

CSIRAC stood for Commonwealth Scientific and Industrial Research Automatic Computer, a typical naming scheme of the era. We also got ENIAC (Electronic Numerical Integrator and Computer) in 1945, BINAC (Binary Automatic Computer) in 1949, EDVAC (Electronic Discrete Variable Automatic Computer) in 1946, ILLIAC (Illinois Automatic Computer) in 1952, and then SEAC, SWAC, ORDVAC, TREAC, AVIDAC, FLAC, WEIZAC, BIZMAC, RAMAC, and UNIVAC. The story goes that the name of 1952’s MANIAC (Mathematical Analyzer Numerical Integrator and Automatic Computer) was chosen to highlight and put a stop to the goofy naming practice. Did it work?
I am not sure. Not only were two more MANIACs produced, but we also got 1953’s JOHNNIAC (nicknamed “pneumoniac” since it needed a lot of air conditioning), and SILLIAC (Sydney ILLIAC) in 1956. The last computer I can find using that naming scheme was TIFRAC, operating in India between 1960 and 1965.

CSIRAC had real work to do, but today it is known chiefly for being the first computer to play music in real time. The quality is… I’ll let you judge, with links below pointing to short MP3s preserved by Paul Doornbusch and subsequently Internet Archive:

Auld Lang Syne
Chopin’s March
In Cellar Cool

(I particularly enjoyed an alt recording of In Cellar Cool where CSIRAC itself appears in the background as a constant humming presence.)

Do you miss your PC speaker yet? Engineers working on other room-sized computers of that era did similar things; whether this was solely one of the first attempts to humanize the big scary machines, or a distraction from the computers’ typically military uses is left as an exercise for the listener.

Today, one of the 1960s machines still plays music, headlining a fascinating annual tradition – every December, the PDP-1 restoration crew at the Computer History Museum in California invites visitors to sing carols with the computer older than most of them.

[video still and three more photos]

The last photo takes us back to where we started. Neither CSIRAC nor PDP-1 might be user-friendly by today’s standards but damn, wouldn’t you want some of your computer’s interface to feel this way?

#history #sound design #youtube

0 views

Notes on the xAI/Anthropic data center deal

There weren't a lot of big new announcements from Anthropic at yesterday's Code w/ Claude event, but the biggest by far was the deal they've struck with SpaceX/xAI to use "all of the capacity of their Colossus data center".

As I mentioned in my live blog of the keynote, that's the one with the particularly bad environmental record. The gas turbines installed to power the facility initially ran without Clean Air Act permits or pollution control devices, which they got away with by classifying them as "temporary". Credible reports link it to increases in hospital admissions relating to poor air quality.

Andy Masley, one of the most prolific voices pushing back against misleading rhetoric about data centers (see The AI water issue is fake and Data center land issues are fake), had this to say about Colossus:

I would simply not run my computing out of this specific data center

I get that Anthropic are severely compute-constrained, but in a world where the very existence of "AI data centers" is a red-hot political issue (see recent news out of Utah for a fresh example), signing up with this particular data center is a really bad look.

There was a lot of initial chatter about how this meant xAI were clearly giving up on their own Grok models, since all of their capacity would be sold to Anthropic instead. That was a misconception - Anthropic are getting Colossus 1, but xAI are keeping their larger Colossus 2 data center for their own work.

As an interesting side note, the night before the Anthropic announcement, xAI sent out a deprecation notice for Grok 4.1 Fast and several other models, providing just two weeks' notice before shutdown, reported here by @xlr8harder from SpeechMap:

This is terrible @xai. I just spent time and money to migrate to grok 4.1 fast, and you're disabling it with less than two weeks notice, after releasing it in November, with no migration path to a fast/cheap alternative. I will never depend on one of your products again.

Here's SpeechMap's detailed explanation of how they selected Grok 4.1 Fast for their project in March. Were xAI serving those models out of Colossus 1?

xAI owner Elon Musk (who previously delighted in calling Anthropic "Misanthropic") tweeted the following:

By way of background for those who care, I spent a lot of time last week with senior members of the Anthropic team to understand what they do to ensure Claude is good for humanity and was impressed. [...] After that, I was ok leasing Colossus 1 to Anthropic, as SpaceXAI had already moved training to Colossus 2.

And then shortly afterwards:

Just as SpaceX launches hundreds of satellites for competitors with fair terms and pricing, we will provide compute to AI companies that are taking the right steps to ensure it is good for humanity. We reserve the right to reclaim the compute if their AI engages in actions that harm humanity.

Presumably the criteria for "harm humanity" are decided by Elon himself. Sounds like a new form of supply chain risk for Anthropic to me!

You are only seeing the long-form articles from my blog. Subscribe to /atom/everything/ to get all of my posts, or take a look at my other subscription options.

0 views

cutting off my mother was so worth it!

An update to this. All this time, I either tried to find a good time to do it and delayed it, or told myself I could make it work for the rest of her life. It could get better, we have good times sometimes, and I could deal with seeing her once or twice a year, right? I thought everyone wins in this scenario: I fulfill my expectation as a daughter, I don’t have to take a drastic step and have difficult conversations, and I don’t lose out on a possible future change in our relationship. I get to be normal and have somewhat of a family left. But I just lost without fully realizing the scope of it all.

Now that it’s been a while, I feel silly for not having done this sooner. Having her in my life even peripherally seems to have dragged me down so much in ways I didn't even know. It held me back to an intense degree. I feel so much better now!

Looking back at it, it was even worse than I had been aware of. I accepted behavior I would have rejected otherwise, because it was still better than the worst abuse, or sandwiched between the good. I’d just shrug off comments that now, I would look at you in shock and immediately ask you to leave. Things she’d never say in front of others, or to others. I made excuses that she was just clumsy in her words or it’s just her way to show care, but now finally removed from all of this, I see so clearly that I was the target of unrelenting resentment all the time, unashamedly so. All that she ever saw in me were things she disliked about herself and my father, a walking reminder of a depressing phase in her life, and she could never hide it or move on from that.

I completely underestimated how much even just experiencing and accepting this every few months altered how I see and treat myself. I thought as long as I limited contact and didn't live with her, it would be fine. But subconsciously, I showed myself that it’s okay to treat me that way. I was complicit; I let myself down, I didn’t stand up for myself, I ignored my needs and wants, and I made myself small, rolling over without any resistance. I had chances to evade it, put an end to it, but I was cowardly and stayed anyway. I prioritized a damaging relationship over my mental health. This eroded trust in myself and just made it okay for me to treat myself badly in some aspects too.

An abusive relationship, especially to a parent, affects everything in life: How you carry yourself, how you speak, how much energy you have, what you believe you can do, the types of people you surround yourself with, the treatment and opportunities you accept. It directly negatively affects your success in education, work, hobbies, and other relationships! So many people who grew up in an abusive household end up in abusive workplaces and toxic relationships for this reason!

Keeping the contact up didn’t maintain a relationship at all, it just kept a wound open; one that could get triggered by workplace discussions or conflicts. Without contact and this festering wound, I feel so much more confident now to speak up, to ask for what I want, to push back and not prioritize the comfort of others to an unhealthy or excessive degree. I feel more comfortable with the idea of people not liking me or what I do; I owe this directly to no longer having to tiptoe around my mother’s feelings and not having to be the one to adjust my behavior to avoid outbursts.
I am a lot less nervous around social interactions now because I am no longer resetting my progress every few months by meeting someone who always sees the worst in me and others. I am no longer normalizing this sort of stuff to myself. Best thing I ever did! You can’t change or save them. They’re grown adults who choose to be this way - either completely, or knowingly hiding it until you’re in private. Let them be miserable! Let them reap the consequences of their actions. It’s not like your attempts at conversations to change your relationship or their behavior helped long-term. They know what the problem is and had enough chances; at some point, if you don’t put up walls, they know there is no consequence for their disrespect. Why would they treat you better when treating you badly gives them satisfaction and access to you still remains intact? You’re a good punching bag. Don't wait until they're dead. No one gives you all these years back that you have wasted sticking around for their abuse. No one gives you a medal for enduring this bullshit. You could die before them, and then what? All this time you stay, you could feel free instead, love yourself, be cherished and supported by others, and let the decisions in your life reflect that - more peace, better work, better finances, better relationships to others and yourself. 10/10 move, would recommend. Now I have a nicer family - my in-laws and friends. I am thriving. Reply via email Published 07 May, 2026

0 views

Into the gap

It is right that the murder of many people be mourned and lamented. It is right that a victor in war be received with funeral ceremonies.
Lao Tzu & Le Guin, Tao Te Ching, page 38

How are we to prevent war? asks Virginia Woolf in the winter of 1937, as photos of the Spanish Civil War pile up on her desk, with their broken bodies and broken buildings, and Hitler and Mussolini gather forces to the east, and her own government’s war budget reaches new extremes. War, she asserts—and you will agree—is a horror, a terror that must be stopped. As well we know, confronted as we are with real-time video of genocide in Palestine, the massacre of school children in Iran, a fascist leader not abroad but in our own demolished house, asserting his right to make war wherever he likes, whenever he wants, including in our own cities, as armies under other names murder and disappear our neighbors with impunity. But, Woolf asks, what is she to do, what are the daughters of educated men to do in the face of that horror? And what are we, generations later, working women and their allies, how are we to stop it? It’s a good question, and we must spend some time trying to answer it.

Woolf begins by considering how women might influence the decision to go to war and we may well begin with the same. To influence, we must have some knowledge to impart, some skill in speaking of it, and a listener who would hear us. We have some knowledge—the knowledge that war is a horror, the knowledge that when a missile falls from the sky and rends bodies into pieces that a terrible evil has been done. We can speak of this too, can point to the photos and videos that flit across our screens, children with missing limbs begging for food amid the ruins. These are images and reports of atrocity, undeniably and unequivocally. Yet who would listen, and how? Where can these words be spoken?

Here we find we are in some trouble, for the supreme form of speech in our time is not words but money, both in legal doctrine and in fact of order, with our media controlled and manipulated by an obscenely wealthy few who have gobbled up platforms and papers and perverted them to their own aims, aims that seem very much in favor of war, for war has ever been the commander of wealth. When we speak against war we find our words drowned out, lost in the deepfakes and the advertising, the psyops and the slop, the stock market reports, the casual declarations of war crimes, the oil futures, the gilded festivities, the chattering and nattering among a purportedly progressive political class concerned with the appearance of civility but indifferent to its obligations. No knowledge moves through such mediums, only information, a ravening, unending stream of data in which knowing anything is nigh impossible. And such is that information that it is frequently as odious as the war it both directly and indirectly leads to: racism, misogyny, eugenics, transphobia. (That last a word that implies fear or aversion when the reality is much more violent, both speech and act that seek to eliminate a people whose courage in seeking their own liberty is among our brightest beacons.) But are these notions not the collaborators and soldiers of capital, and so of war? Are not racism and misogyny the masked recruits who go door to door, kitchen to bedroom to workplace, demanding labor and loyalty and love from an underclass who are threatened with suffering and death if they do not deliver it?
Toni Morrison, whose words we may yet remember, said: “And they never, ever thought we were inhuman. You don’t give your children over to the care of people whom you believe to be inhuman….They were only, and simply, and now interested in the acquisition of wealth, and the status quo of the poor.” 1 Racism and eugenics were invented to justify the colonization of Black bodies just as sexism justified the enclosure of women’s. 2 The racists and misogynists of today work the same power: they create a world in which a few wealthy men dictate the material conditions of the lives of millions of others who must serve them, who toil for scraps, whose every step, however small, towards more freedom is violently and immediately resisted, and with overwhelming force—an impulse that you will agree is very much like the impulse to war.

Look no further than the disproportionate attack on DEI, an effort that sought not to upend capitalism but merely to lightly expand the number of people who might not be entirely crushed by it, but which has been met with an extraordinary campaign to cancel huge swaths of scientific research, retract life-giving knowledge of medical care, hollow out our universities, purge career civil servants and leaders of the armed forces, and to eviscerate the federal workforce 3 —upending millions of lives and leaving our federal government, already poor from decades of neoliberal retreat, unable to deliver on the basic requirements for the life and liberty of its now abandoned public. That the federal workforce has long been one of the best chances for a comfortable life for Black and brown women excluded from comparable employment in the private sector is of course no coincidence. Meanwhile, the barons of the private sector have likewise backed down from even superficial concern for equality, and now demand such extreme fealty to their enterprises that only someone with no caretaking responsibilities whatsoever—with no care at all, not even for themselves—could possibly meet them.

“Influence must be combined with wealth in order to be effective as a political weapon,” 4 Woolf concludes, and we grieve that the only change we can see in the century since is that the gap of wealth has widened, the effectiveness or lack thereof become only more extreme. Woolf was a member of the propertied class, but it was in her lifetime that women earned the right to their own property and were granted access to professional work, such that they might not be entirely in debt to their fathers and husbands. And yet in her time women secretaries were said to be routinely “fagged out” in the afternoons because they couldn’t afford a proper lunch. 5 Today, our food pantries work overtime to feed the working poor, people who work full time and more but don’t make enough to buy bread. Those who do make enough to live on do so in awareness of their intense precarity, the knowledge that they are one illness or storm away from ruin. And even the wealthiest worker has little compared to the investor class pushing for war, those who see war not as an abomination but as yet another opportunity to increase their bloated purse. What is our wealth compared to the billions spent on fighter jets, the $2.5 million spent on a single Tomahawk as it tears through a school full of little girls? What is our wealth compared to the mind-boggling quantities spent on the drones and satellites that make death as easy as clicking a button from the safety of a desk on the other side of the world?
The same flick of a thumb can reduce a hospital to rubble or post a racist meme, often one right after the other. What is our wealth compared to the record-breaking $1.5 trillion requested for the military, a military that is already the richest on the planet ? Trump : “We have a virtually unlimited supply of these weapons. Wars can be fought ‘forever.’” So if money is influence, our relative influence has waned with the rise of the billionaire class. Woolf, recognizing the same, turns her attention instead to education. For if perhaps enough money cannot be mustered to prevent war, then learning—with its values of intellect and reason and enlightenment—may work in our favor, inasmuch as learning grows those faculties of reason, and reason is quite the antidote to the unreason of war. But again we find a problem. In Woolf’s time, while women have ostensibly been permitted into the colleges, they remain excluded from universities, and the women’s colleges are beggarly compared to those gleaming towers. Nor have women been permitted to adorn their names with the same letters and credentials that the men claim, a factor that keeps them from competing for the jobs that require them. It seems that the colleges are less places of learning than they are places of acquiring prestige, a prestige that is fiercely defended and protected, for prestige is a strangely fragile creature who can live only in scarcity and when exposed to too many of its own kind withers and dies like a tree choked by vines. And today? Well, women have torn down the gates to the universities, that much is clear. Women make up a majority of all college students in the US, and would be an even greater portion were it not for policies that directly work to balance the gender of student bodies . But that tearing down has been met by what can nearly be termed a war itself: a livid and indignant assault on places of learning from the men who want war, aiming at what has become the heart of the university, its beating and bloodied endowment. And the universities have, nearly to the letter, capitulated and retreated in the face of that assault, trading away centuries of purported intellectual freedom in order to protect the money needed to continue to operate, as if operating without that freedom was worth any money at all. Woolf writes: Is that not enough? Need we collect more facts from history and biography to prove our statement that all attempt to influence the young against war through education they receive at universities must be abandoned? For do they not prove that education, the finest education in the world, does not teach people to hate force, but to use it? Do they not prove that education, far from teaching the educated generosity and magnanimity, makes them on the contrary so anxious to keep their possessions, that “grandeur and power” of which the poet speaks, in their own hands, that they will use not force but much subtler methods than force when they are asked to share them? And are not force and possessiveness very closely connected with war? Woolf, Three Guineas , page 193 We see that same force and possessiveness in our own time: billions extorted from the universities, while the universities call in cops in riot gear —gear so named because when worn it inspires one to riot—to descend on students protesting genocide in Palestine. A great irony this would be, if irony were not the first casualty of war. 
For these brave students were met with war while exercising their right to protest the same, a right which past wars have been fought to defend but in which we seem to have retroactively declared defeat. Places of learning are always the first target of the fascist, because they are places that might counter the propaganda and pseudo-culture that leave us either pacified and accepting of their scraps or else fighting each other instead of fighting those who would start a war. Learning and thinking —a skill the billionaires are trying to supplant with machines that purport to think for us —are a challenge to the illogic and madness of war. To see an image of the broken bodies and broken buildings, to hear the testimony of those who lived, to have the skill and fortitude to ask how this could have happened, who benefits from such a horror, and how they might be stopped—for they must be stopped—is to exercise a lively mind and spirit, one capable of making the imaginative leap between the way things are and the way things ought to be. That interrogative and thinking mind is a threat to the fascist, who needs you to see things only as he does, who needs you unthinking and unquestioning, because only an unthinking and unquestioning mind could possibly accept the horrors of war. Only a mind so subdued by slop and propaganda and advertising, a mind unpracticed in observation and inquiry and imagination—only such a mind could be complacent as its pockets are picked to fund that most terrible of horrors. And so at last we turn to the workplace, as Woolf does, not in the hope that we might make enough money to counter the warmongers—for we have done the math, and no matter how hard we try, there is no chance of that—but because work is where we may, if we’re lucky, earn enough to keep a roof over our head and food in our belly, both of which are necessary to be able to think and act in the world. And we must be able to think, to remember that war is a horror, to resist being anesthetized by the memes and the vapid statements to violence. But here we find a curious contradiction: on the one hand, we are threatened with a lack of work , with our jobs taken over by machines who will never know that war is a horror, because they cannot know anything at all. On the other, high-pitched edicts that we must work so hard that there can be no time to think of anything else, no time to consider how these pictures of broken bodies and broken buildings came to be. ( Musk : workers “need to be ‘extremely hardcore,’ logging ‘long hours at high intensity.’”) How can both of these claims be true? How can the investor class simultaneously threaten us with no work, and, at the same time, threaten us with too much? It seems they fear equality more than hypocrisy. Perhaps we should also fear the disposition that the professions—which women fought so hard to enter, and now must fight so hard in which to stay—train us for. Here again is Woolf: And those opinions cause us to doubt and criticize and question the value of professional life—not its cash value; that is great; but its spiritual, its moral, its intellectual value. They make us of the opinion that if people are highly successful in their professions they lose their senses. Sight goes. They have no time to look at the pictures. Sound goes. They have no time to listen to music. Speech goes. They have no time for conversation. They lose their sense of proportion—the relations between one thing and another. Humanity goes. 
Money becomes so important they must work by night as well as by day. Health goes. And so competitive do they become that they will not share their work with others though they have more than they can do themselves. What then remains of a human being who has lost sight, and sound, and sense of proportion? Only a cripple in a cave. 6 Woolf, Three Guineas , page 258 It’s interesting to think with Woolf about our current march towards war, as the differences between her time and ours are revealing as much for what hasn’t changed. She wrote at a time when women were still largely excluded from professional work, from universities, from the armed forces. We read her today as women with one or more degrees, with careers, many of us carrying medals won in war zones and the scars to prove them, many of us with pips on our collar, credentials as long as those held by the men who guarded the libraries from the presence of women in Woolf’s time. But in both eras our presence in these places seems to have inspired an extraordinary, and extraordinarily violent, response. The assault against diversity programs is so out of proportion to those programs’ actual impact that we must admit something more elemental is going on: women’s presence in previously precluded spaces (and it is important to note that it is white women who have been the greatest benefactors of diversity initiatives, and Black and brown women who now suffer the greatest costs of their retreat) has inspired a level of violence among a small group of rich, insecure men that they will lay waste to the whole world before they will consider sharing their table with women as equals. Their own self-worth is so mean and spare that it withers when it comes into contact with those who do not bow and bend in their presence. The armed thugs marching through our streets, the speeches about force, force in our own cities, force elsewhere in the world, soldiers rechristened as “warfighters,” all of this is an assertion of manhood, a manhood reduced to nothing more than domination in all things, a masculinity that can see itself only in the violent oppression of others, whether that is other countries, other cultures, other races, other genders, or the more-than-human world. As Jamelle Bouie notes , “the vision of the world here is the vision of a rapist.” We are forced to conclude that to be in possession of a great deal of money, to be in a position of great authority, whether over an institution of learning or of government or of business, is to be in favor of war. The prestige and power that accompany both rank and great wealth—wealth which in our own day has grown so large as to be incomprehensible—also engender an instinct to possession and to the violent and disproportionate defense of that wealth. While we, who have neither great rank nor great wealth, know war to be an abomination, a horror through and through. Yet we can never hope to compete with the warmongers in either arms or cash, in prestige or status. So what are we to do? We must refuse to compete at all. We, with our empty hands , know it is right to mourn and lament the murder of many people. And so we mourn, and we lament, and we demand that our would-be leaders stop this incessant and evil warmaking. Are those demands enough? It would seem not. 
It would seem that despite great opposition to war , despite great risk to our economy, to our own safety as we shred our oldest and strongest alliances, that our demands for an end to war land on ears not deaf but blocked, stoppered with ego and greed and lust for domination in all its forms. And perhaps this should be no surprise. For why would a class of people so threatened by the mere presence of women in their schools and governments and workplaces ever open their ears to those women’s demands? Our speech must be a very great threat if they are so unwilling to hear it. So to speak against war is necessary—necessary for us to speak so with one another, so that we do not forget that war is a horror—yet insufficient. It is not enough to speak against war, for the warmongers, with their infinite money and infinite weapons, cannot hear us against the drums they so loudly bang for war. We must look elsewhere for the path that leads away from here. When Woolf was writing, women were precluded from the armed forces, and so could not refuse war by refusing to fight. We today are not subject to the same prohibition. We find ourselves among the ranks of soldiers both on our own soil and on many others. We have not earned the same respect, for many of our brothers seem to believe we have been put there solely for their use and abuse , and others—the same people who drive us to war, who claim no reason for war save war itself— work to exclude us once again . Yet women make up roughly a sixth of the armed forces , and perhaps as much of the forces in our streets. 7 Here is perhaps our greatest opportunity to halt the march to war. For we have it within our power to refuse to fight. We who know that war is a horror must refuse to raise a gun or fly a jet or steer a drone heavy with death into homes and hospitals and schools. We must refuse to go door to door in our own cities dragging people without warrant or reason into filthy, inhumane, and hastily built camps—for as sure as killing is a part of war, so too is gathering people up and locking them away. We must drop guns and kevlar and gas masks and walk away from the field of war, whether that field is distant from our homes or just down the street. We may look here to the courage of those like Ella Keidar Greenberg , an Israeli who, at 16 years of age, signed a pledge refusing to enlist in the military and was then, at 19, jailed for that refusal. “Refusal is the imperative,” she speaks, and we who have not plugged up our ears to reason and wisdom can yet hear her, and agree. For to make the horror of war with your own hands is to become a horror yourself. 8 This is easy to say for the great many of us who do not fight in war, who have not raised guns or donned armor or placed hands on keyboards and rained death on schools and hospitals from afar. But the imperative to refusal remains: we must refuse to lend our hands or minds to war, in whatever way we can. And so we must also refuse to work for war, to use our labor to make the technology of war, whether of weapons or of surveillance or of detention, whether that technology is used in our own streets or somewhere afar—for any technology used afar will come home soon enough, as we see with the militaries in our streets, outfit with cast offs from so many wars abroad. 
9 We must not lend our hand to the making of guns or missiles or drones, of targeting systems or intelligence databases, of satellites that scour the planet for schools and hospitals, of algorithms that prescribe processes for murder, processes that promise to scrub their operators clean of the blood that follows but which will haunt them, nonetheless. Is this enough? It is not. For war is such an enormous undertaking—witness the trillions of dollars, an amount of money too big to think with—that it seeps into nearly every part of the economy. The same servers that summon servants to your door are used to surveil the people of Gaza; the same newspaper that brings details of the war to our eyes and ears also perpetuates a story that the greatest hardship of war is the price of gas at the pump. The same so-called AI that makes it easier to prototype a website is simultaneously being used to generate enormous quantities of racist and misogynist slop that treats war like a spectator sport. The same university that teaches the history of war also pays millions in bribes to the warmongers, while making a concerted effort to erase trans people from the very same history books. If we are truly committed to not working for war, we must not work for any of it. Not for the weapons manufacturers or the drone makers or the algorithm authors; not for the papers or the products or the schools. Perhaps you will think I am being too harsh. Perhaps you will say, but this is my only way of making a living, of keeping a roof over my head and my children’s heads, of feeding and clothing my loved ones. After all, we have also noted how our publics have been decimated by the very same men who push for war, men who have likewise colluded to raise prices on milk and eggs, who have transformed homes into commodities, such that we who had so little money compared to them seem every day to have less and less. Already our food pantries work overtime feeding the working poor, and we rightly fear every cough and tooth ache, every flutter of our overworked hearts or tiny lump beneath our skin, for medicine is increasingly a privilege reserved only for the rich. How could we refuse work under such conditions, when work is increasingly scarce? Here we must pause and again wonder at that scarcity. For it is a curious thing that work is becoming harder and harder to come by, that what work there is is often so poorly remunerated we must visit the pantries for bread at the end of the workday. Or, if it pays well, it does so under the constant threat that it could end at any moment, that it will end soon enough. Is it not the case that the men who loudly bang the drums for war, who build the technologies of surveillance that are used both to round people up and to aim missiles on their backs, who pollute our skies with satellites and insert themselves into the field of war as if they were heads of state themselves, states of ego and greed and impunity—are these not the selfsame men who declare we no longer need workers at all, that one machine can do the work of dozens? And do they not declare, out of the very same mouths, with the very same breaths, that those few workers who remain must work themselves to the bone, must work every waking hour they can, must eschew rest and play and leisure for the work is too great to put down for even a moment? And do they not also say—for as we have seen, those with more money have more speech, and seem ever to want us to hear them—that it is immigrants who are taking away all the jobs ? 
(A dog-ate-my-homework excuse, if there ever was one.) And meanwhile there is so much work that needs doing but isn’t being done: our schools overcrowded, our farms short-handed, our streets and bridges crumbling, our parks neglected, our clinics overrun, our laboratories empty. This is not to say that the scarcity isn’t real. It is real enough, as the lines at the food pantries attest. But it is manufactured ; it is built bolt by chip by screw by a billionaire class who want workers who complain neither of their warmongering nor of their whip. On the one hand, they threaten us with no work at all, with the misery and penury that comes from a lack of work, and therefore a lack of the means of living. On the other, they demand endless work, a work that wipes out all other avenues for thinking and being, that leaves us programmable and programmed, no space left in our minds for thoughts they haven’t placed there. Are we to merely acquiesce, to accept their scraps and the miserable conditions attached to them? Surely not. For if we accept these conditions, will they not impose even worse upon us? Will they not keep increasing their demands and decreasing our pay until we are working ceaselessly, and for nothing? What would compel them to stop? Already we have seen that their greed for money and for power is so voracious it will tear through buildings and through bodies, it will murder many people, it will poison the air and the soil, it will bring great storms upon us. So there must be an end, and it is only we who can bring that end about. So I say again we must refuse to work for war. But I do not wish you any hardship. If the only work available to you is the work of war, or work that has been perverted to the aim of war—and I am trusting that you have done your best to find other work, to make your living in a manner that does not end the lives of others—then there remain yet other avenues to take. Here you must gather with your colleagues and comrades, for the work against war is not solitary. You must first speak and be heard by each other, know that you are not alone in recognizing that war is an abomination, a great and terrible horror. For while speaking into the networks and the platforms is like speaking to the wind, your words tossed away from you before they can reach your own ears, we still have the ability to speak to our colleagues and to our neighbors, to speak unmediated and uncensored with each other. To speak with our mouths and with our hearts and with our lively, imaginative minds. To say, war is a horror, and I will not work for it, and are you with me? Can we speak together? Can we move and act against war hand in hand, and right here, where we stand? Here we see a great many of our kith and kin already stepping up. We can look to workers at Amazon , Google , Salesforce , and others who demand that their work not be used for surveillance, mass deportation, drone warfare, or genocide. We can look to the hundreds of workers at Thomson Reuters who raised alarms after learning that their company was selling data to ICE, prompting shareholders to demand an investigation . We can look to the community in Monterey Park, California , who successfully organized in favor of a ban on the construction of data centers—after noting that in addition to being polluting, noisy, energy guzzlers, such data centers also fuel ICE’s violence against their own neighbors. 
We can look to the Harvard graduate students currently on strike, whose demands include protections for international students at risk of deportation. We can look to the twenty-four attorneys general who have filed more than seventy lawsuits aimed at stopping the administration from waging war at home. And we can look to Luanne James, a librarian in Tennessee, who when asked to remove books from her library—books flagged for such transgressions as “female empowerment” and “following one’s dreams”—said, “ I will not comply. ” For is not censorship likewise a tool of war? Haven’t the book burners and the warmongers always been the same people, with the same aim? Are not slop and chatbots who care nothing for veracity the new tools for censorship—censorship by means of pollution rather than prohibition, but the ends are the same. James was subsequently fired for her dissent. 10 Refusal always invites consequences. But then so too does compliance, and often very grave consequences at that. Here we may heed the advice of the veteran scientists who resigned from the National Institutes of Health after it was gutted by the Trump administration. They implore , “Please decide where your red line is so you can choose to act before the line is already behind you.” There is risk here, of course. Organizing is, in theory at least, a protected activity and legally you may not be retaliated for it, but we have seen who the law protects and who it bends and breaks for and have no confidence in it protecting the likes of us. But there is risk no matter what we do or do not do. To be alive, to have a body vulnerable to gun and missile and chemical weapon, to famine and to thirst, to penury and hardship, is to be at risk; only the dead are relieved of the risk of harm. Your employer may punish you for organizing, but what is that risk compared to the risk of being complicit in war? The risk of knowing yourself to be someone who helped rain death on schoolchildren, who helped imprison your fellow workers in filthy detention camps, who helped program people’s minds to be numb to atrocity and horror? For you will know what you have done. Even if your daytime self can wrap you up in comforting excuses and justifications, can be lulled by the distractions and the advertisements and the television that anesthetizes your conscience, you will know it in the dark of the night. Our dreams know where we have gone wrong and they will never let us forget it. 11 But perhaps even this risk seems too great. You know your circumstances, and you know the ways the investor class has of keeping your head down. You cannot be fairly asked to put your own life, or your kin’s lives, on the line. And yet you are not without the ability to work against war, even in these difficult times. For you can work against war while seeming to work for it. Perform your work diplomatically while leaking information to the press, so that those on the outside who are safe from retaliation may organize in your stead. 12 Look for ways to gum up the works; raise concerns and questions and show where plans are short, where steps have not been thought out, where coordination is insufficient. Do not meet expectations but dash them, show them to be shortsighted or foolhardy, lacking sufficient detail; make those who set them doubt their own understanding of the world (as they try to sow doubt in you). 
They have made this easy on you, the warmongers and profiteers, by foisting unpredictable and inconstant machines upon you and mandating their use, by setting irrational milestones that could never have been met even by those who tried. Right there is a ready-made excuse for why the work could not be delivered as asked—your hands were tied. Do the work if you must, but do it dragging your feet, do it always on the lookout for ways to slow down the march to war and so give others the time to stop it. Does this gall you? It galls me. We ought not to have to spend our energy, what little and precious time we have on this earth, denigrating and diminishing our own skills. It is a violence to the self to do our work poorly. But against the alternative—against setting those same skills in the making of war—it seems a small sacrifice, and a necessary one. For it is not only your skill in, say, design or management or engineering that you may exercise. It is also the skill of refusal, the skill of refraining from making war in all its many and terrible forms. And that too is a kind of work, a good work, work that all of us can do. For there is one weapon that only we possess and which the billionaires and the warmongers can never take from us. One weapon which so frightens them they will twist their words into knots, they will spend the entirety of their vast fortunes trying and failing to convince us that we don’t possess it at all, they will claim over and over and without evidence that it is vanishing before our eyes even as it remains right there in our hands, clear and plain to hearts yet open to the world: the refusal to work. To refuse is a creative act. What is created in a refusal is a gap, a space, a moment in which something else makes ready to emerge, something that waits upon our invitation and a bit of water or sunlight to pop itself out and set down roots. To refuse is to create that which can only exist in the shade of that refusal, the refusal giving shelter to the choice that appears behind it. To refuse is to choose. In that choice, we find ourselves in the gap, in the place where no one has programmed our thinking, no one has told us what to do, no one has left any instructions or orders that we must follow. No one stands ready to answer our questions or to assign us tasks or to relieve the anxiety of being alive to uncertainty, for this has always and ever been the only way to be alive. In this gap is not one choice but many, a myriad of choices, for from here on out there can be no prescription, no map or plan or diagram. Only one step, and then the next. Yet we are not without skill or art. In fact, it is our art which is most at need here, our art that helps us imagine how things could be different, how we could work not for war but for peace, and for liberty, and for care for all our kin in all the kingdoms. How we could live with one another if prestige and missiles and extreme wealth were relegated to the history books, where they belong. It is our art, the art of painting or drawing or sculpting or dancing or making music or writing—and while all the arts are needed here, I will make a special plea for writing as that which so often gives us new worlds to think with—that we can think with the question of what we are to make with one another when we refuse to make war. For to refuse the work of war is to choose to see things as they really are, and as they yet could be. 
This is a choice we make most strongly when we make our art, when we bring our keen attention to the world and do not flinch from it, do not numb ourselves to it, but rather look at it squarely and know that however things are, they can—they will—be otherwise. What could our work become when it isn’t the work of death, of domination, of separation and detention and surveillance? What is our work when we give up seeking wealth and prestige—which no matter how hard we work, we can never have enough of? What is our work when we do not accede to orders from above but make choices with each other? What is our work when we see it not as a way to make a wage but a way to make more life, not only for ourselves, but for everyone? What becomes of our work if we work for the living?

To refuse is an ending; an ending to our work being used to rend buildings and bodies, to massacre schoolchildren, to surveil and capture and detain. To refuse is a beginning. To turn away from the work of war is to turn toward the work of making a living world, work that does not answer to the billionaires, with their slavering, unending greed, but which only answers to each other. The gap that we create with our refusal is not void but potential, not emptiness in the sense of want but empty as a bowl or bag is empty, as an ear cocked to a speaker, a pair of hands cupped and raised to the roiling and darkening sky.

1. From A Humanist View, a speech given at Portland State University in 1975. Quoted in Táíwò, Reconsidering Reparations, page 6. Táíwò adds, astutely, “Racism was only ever a smoke screen.” ↩︎
2. “[I]n pre-capitalist Europe, women’s subordination to men had been tempered by the fact that they had access to the commons and other communal assets, while in the new capitalist regime, women themselves became the commons, as their work was defined as a natural resource, laying outside the sphere of market relations.” Federici, Caliban and the Witch, page 97 ↩︎
3. For just some examples of these efforts, see Unbreaking’s explanations of the assaults on the federal workforce, medical research funding, and trans healthcare. ↩︎
4. Three Guineas, page 170. ↩︎
5. Ibid, page 404. ↩︎
6. This is clearly a reference to Plato’s cave, and the comparison hits a little harder in our own time: the shadows on the cave wall have been compressed to the mirrored screens we hold in our hands. ↩︎
7. A since-deleted page on the ICE website says that women made up 15% of law enforcement officers employed by ICE as of 2023 (archive link). That the page has been deleted perhaps says something about how little ICE cares for the women in its employ. ↩︎
8. The Center on Conscience and War reports that it has seen a 1,000% increase in US service members interested in becoming conscientious objectors since the start of the Iran war. Mike Prysner, the Center’s director says, “I haven’t heard from a single caller who said, ‘I’m scared of dying in a war I don’t believe in.’ All of them are scared of killing people in a war they don’t believe in.” ↩︎
9. Aimé Césaire termed this the “boomerang” effect. ↩︎
10. A legal defense fund has been set up to help James contest her termination. ↩︎
11. In The Third Reich of Dreams, Beradt reports that those who worked against the Nazis had dreams of fierce hope, while those who collaborated and capitulated were wrought by nightmares of terror and humiliation. ↩︎
↩︎ The Freedom of the Press Foundation maintains some good advice on how to protect yourself while sharing information with the press—including the counsel to avoid visiting this link from a device your employer controls.  ↩︎ View this post on the web , subscribe to the newsletter , or reply via email .

0 views
Stratechery Yesterday

An Interview with Joanna Stern About Living With AI

An interview with Joanna Stern about her new book about living with AI, and starting her own media company.

1 views
Ahmad Alfy Yesterday

The HTML Sanitizer API

There are three ways an engineer learns about Cross-Site Scripting (XSS). The lucky ones learn about it through a helpful code review or a proactive security lint rule. The diligent ones learn about it during a security audit that catches a vulnerability before it hits production. Then, there are the scarred ones. They learn about it when a live exploit hits their site, when an attacker injects a script that steals session tokens, hijacks cookies, or redirects users to a phishing site. I personally joined the “scarred” club back in 2005, when an embedded Flash signature in a forum I owned turned into a security nightmare… but that’s a story for another time. In this article, we’re going to explore how the browser is finally taking the burden of sanitization off our shoulders with the new HTML Sanitizer API . To understand the solution, we have to look at the danger. In the early days of the web, innerHTML was the magic wand that turned strings into DOM elements. Picture the classic payload: an img tag pointing at a non-existent source, with a script in its onerror handler. The moment that markup hits the DOM, the browser tries to load the image, fails, and executes the script. Congratulations, you’ve just been XSS’d. It’s a classic example of how unsanitized user input can lead to XSS vulnerabilities, and attackers usually ship payloads like this through a couple of common vectors. User-generated content: comments, reviews, or any form of user input that gets rendered on the page. Usually, these inputs are stored in a database and rendered later; if the application doesn’t sanitize them, it can lead to stored XSS vulnerabilities. URL parameters: attackers can craft URLs with malicious payloads in query parameters, and if the application reflects these parameters back into the page without proper sanitization, it can lead to reflected XSS — for example, a search page that takes a query parameter and displays it on the page without sanitization can be exploited. Historically, we solved this by pulling in DOMPurify . It’s the de facto library for sanitizing HTML in JavaScript. It works by parsing the input string, removing any dangerous elements or attributes, and returning a safe version of the HTML. Or if you were using React, you might have done something similar, using dangerouslySetInnerHTML to render the sanitized content. DOMPurify is a fantastic tool that excels at sanitization, but not without caveats. It ships ~23.3 kB minified (~8.71 kB gzipped), requires maintenance, and essentially repeats parsing HTML, which is what the browser is already designed to do. That last point is critical. DOMPurify-style libraries have always been a fragile approach. The parsing APIs exposed to the web don’t always map cleanly to how the browser actually renders a string as HTML in the “real” DOM. Worse, these libraries have to chase the browser’s evolving behavior over time, because things that were once safe can turn into time-bombs the moment a new platform feature ships. That puts the maintainers in a permanent race against every browser release, and once a library reaches the size and reach of DOMPurify, that race turns into a full-time job. I imagine the maintainers will be quietly thrilled the day they get to wind it down. The browser, on the other hand, knows exactly when and how it’s going to execute code. Putting sanitization inside the browser means it stays in sync with the parser by definition. The web platform now includes new APIs that make parsing and sanitizing HTML much safer. The spec introduces safer ways to insert HTML into the DOM, beyond the old innerHTML approach. The API gives us six methods, split into two families. Safe methods: Element.setHTML(), ShadowRoot.setHTML(), and Document.parseHTML(). These always strip XSS-unsafe content, no matter what configuration you pass. Unsafe methods: Element.setHTMLUnsafe(), ShadowRoot.setHTMLUnsafe(), and Document.parseHTMLUnsafe(). These do exactly what you tell them to, including allowing dangerous content if your config says so. Let’s walk through them. The setHTML() method is a new addition to the DOM API that allows developers to set HTML content in a way that is safe from XSS vulnerabilities. When you use setHTML(), the browser automatically sanitizes the input, removing any potentially dangerous elements or attributes. It is safe by default. You can still configure it, but any configuration you pass will still not allow dangerous content to be rendered: it will always remove unsafe elements like script and event-handler attributes like onerror. It effectively overrides your settings if you try to be “too permissive”.
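For reference, the legacy pattern described above looks roughly like this — a sketch with an illustrative payload and the usual DOMPurify call, not the article’s original snippet:

```ts
import DOMPurify from "dompurify";

// The classic footgun: untrusted input goes straight into innerHTML.
const userInput = `<img src="x" onerror="alert(document.cookie)">`;
const container = document.querySelector("#comment")!;

// container.innerHTML = userInput;                   // the broken image fires onerror: XSS
container.innerHTML = DOMPurify.sanitize(userInput);  // the pre-Sanitizer fix

// React flavour of the same idea: only ever hand sanitized markup to dangerouslySetInnerHTML.
// <div dangerouslySetInnerHTML={{ __html: DOMPurify.sanitize(userInput) }} />
```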
The simplest possible usage doesn’t even need to be configured. Just call setHTML() with a string. That’s it. The script in the onerror attribute is gone, because the browser handled the sanitization logic during the parsing phase, using its built-in default safe configuration. If you’re curious about exactly which elements and attributes the default config allows, MDN has the full default sanitizer configuration documented. When you need more control, the spec lets us define a configuration object to specify which elements and attributes are allowed or blocked. It can be a bit tricky to get the configuration right, as you can accidentally specify an element in both allow and block lists, or list an attribute multiple times. The API is strict about this: if you pass an invalid configuration object, it will throw a TypeError. This is to ensure that developers are aware of any contradictions or redundancies in their configuration. In an allow-list configuration, only a specific set of elements and attributes is allowed, and anything not in the allow list is stripped out. The replaceWithChildrenElements option lets you specify elements that should be replaced with their children instead of being removed entirely, so if one of those elements shows up in the input, the element itself is dropped but its content stays. A block-list configuration works the other way around: it specifies elements and attributes that should be removed from the input. The comments option controls whether HTML comments are preserved (in the sketch further down, they’re removed). You cannot have both elements and removeElements in the same configuration object, as they serve opposite purposes. The same applies to attributes and removeAttributes. If you try to include both, the API throws a TypeError. You can combine an allow list on one axis with a block list on the other, just not opposing pairs at the same level. Notice that in both flavours, we didn’t have to worry about dangerous attributes like inline event handlers (onclick, onerror). This is what “safe by default” means: even if you configure setHTML() to allow certain elements or attributes, it will still block anything that could lead to an XSS vulnerability. setHTMLUnsafe() is the unsafe sibling. The cleanest way to think about the difference is this: with setHTML(), your config is a further restriction on top of safe defaults, and unsafe stuff is always stripped, even if you explicitly allow it. With setHTMLUnsafe(), your config is the complete rule: if you say allow script, script stays, and if you pass no config at all, nothing is sanitized. There are two main reasons to reach for it. Declarative shadow roots: setHTML() strips them as part of its safe defaults, so if you need them, setHTMLUnsafe() is currently the only way. Allowing specific “unsafe” attributes intentionally: sometimes you genuinely need an inline handler or similar, and you want to opt in to exactly that one thing while still cleaning up the rest. The contrast is simple: with setHTMLUnsafe() and an allow-list config, a dangerous element is removed only because the allow-list doesn’t include it, not because setHTMLUnsafe() enforces any safety on its own — allow it explicitly and it makes it through. With setHTML(), that wouldn’t matter; it would be stripped regardless. Sometimes you don’t want to insert HTML immediately. You want to parse it, inspect it, maybe transform it, and only then decide what to do with it. That’s what Document.parseHTML() and Document.parseHTMLUnsafe() are for. parseHTML() follows the same rules as setHTML(): XSS-unsafe content is always stripped. parseHTMLUnsafe() is its counterpart and behaves like setHTMLUnsafe(): no sanitization unless you pass a sanitizer. This is particularly useful for things like building a sanitized document once and reusing it, or running checks on the sanitized output before deciding whether to render it at all. First, let’s be clear that even with the new API, backend sanitization is non-negotiable . Client-side sanitization is for the user’s experience and immediate safety. Anyone with sufficient knowledge can easily bypass your client-side code by calling your API directly. This is exactly like how we validate user input on the client for better UX, but still validate on the server for business logic and security.
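Pulling the configuration rules above together, here’s a rough sketch. The element and attribute lists are illustrative, the option names follow the spec at the time of writing, and everything is cast through `any` because TypeScript’s built-in DOM typings may not include these methods yet:

```ts
// Illustrative input; in practice this would be user-supplied markup.
const untrustedMarkup = `<p onclick="alert(1)">Hello <script>steal()</script><em>world</em></p>`;

// Allow-list flavour: anything not listed is stripped.
const allowListConfig = {
  elements: ["p", "em", "strong", "a", "ul", "ol", "li"],
  attributes: ["href", "title"],
  replaceWithChildrenElements: ["span"], // keep the children, drop the wrapper
};

// Block-list flavour: everything is kept except what's listed.
const blockListConfig = {
  removeElements: ["iframe", "object", "embed"],
  removeAttributes: ["style"],
  comments: false, // drop HTML comments
};

const target = document.querySelector("#preview") as any;

// Safe by default: even with a permissive config, XSS-unsafe content is stripped.
target.setHTML(untrustedMarkup, { sanitizer: allowListConfig });

// The unsafe sibling: the config is the complete rule, so only reach for it when
// you genuinely need something the safe defaults forbid (e.g. declarative shadow roots).
target.setHTMLUnsafe(untrustedMarkup, { sanitizer: blockListConfig });

// Parse without inserting: inspect or transform first, decide whether to render later.
const parsed = (Document as any).parseHTML(untrustedMarkup, { sanitizer: allowListConfig });
```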
In ShopTalkShow episode 704, Dave Rupert and Chris Coyier invited Frederik Braun from Mozilla to talk about the HTML Sanitizer API, and they discussed using the Sanitizer API for Optimistic UI, the hot pattern in frontend development. In a comment section, when a user hits “Post”, we usually rely on the backend to sanitize the comment, return it back to the browser, then render the response. This takes time and creates a less-than-optimal experience. Trusting raw user input and rendering it immediately can be risky, but with the new API, we can safely render the comment immediately while the backend is still processing. The result is a much smoother UX without compromising security. A few things are worth calling out in that kind of component (a rough sketch appears at the end of this article). The Sanitizer is constructed once at module scope, not inside the component; constructing it on every render is wasteful. The actual setHTML() call is wrapped in a feature-detection check, so the component degrades gracefully on browsers that haven’t shipped the API yet. And the sanitized markup is handed over to the imperative DOM via a ref instead of being mixed into React’s diffing; that combo tends to cause hydration headaches. This is the kind of place setHTML() really shines. You can’t assign an unsanitized string to innerHTML without inviting an XSS, and the API gives you a path that’s both ergonomic and safe. There are plenty of other places where the API earns its keep. WYSIWYG editors: users routinely write content in word processors and paste it in, dragging along massive, dirty HTML and inline styles; listening to the paste event and running the pasted markup through the sanitizer cleans things up before insertion. Live Markdown previews: when the input never even leaves the browser, you still want to sanitize the rendered HTML before showing it. External feeds: RSS, syndicated content, embedded snippets and anything coming from outside your origin should be sanitized before it touches the DOM. The HTML Sanitizer API is a significant step forward in making web development safer and more efficient. It moves security from a “library concern” to a “platform primitive”. With that, we get better performance, smaller bundle sizes, and a more secure default behavior. At the time of writing (May 2026), browser support is still early. Firefox 148 shipped the standardized API in February 2026, becoming the first browser to do so. Chrome has it in Canary behind a flag, and Safari hasn’t started implementation work yet, though the team has signaled a positive position. The feature is not yet Baseline , which means production usage today still needs feature detection and a fallback (DOMPurify is still the right backup). Thankfully, the web has always allowed us to use features before they become fully standardized or available. Use it as a progressive enhancement now, and keep an eye on the support tables. The day this becomes Baseline is the day a lot of bundles get a little smaller and a lot of apps get a little safer.
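Here is that sketch — a hedged reconstruction of the optimistic-comment component described above, not the article’s actual code. The component name, allow-list contents, and fallback are illustrative:

```tsx
import { useEffect, useRef } from "react";
import DOMPurify from "dompurify";

// Constructed once at module scope, not on every render.
// (Cast through `any` because TypeScript's DOM typings may not ship these yet.)
const SanitizerCtor = typeof window !== "undefined" ? (window as any).Sanitizer : undefined;
const commentSanitizer = SanitizerCtor
  ? new SanitizerCtor({ elements: ["p", "em", "strong", "a"], attributes: ["href"] })
  : null;

type Props = { rawComment: string };

// Optimistically renders a just-submitted comment while the backend is still processing it.
export function OptimisticComment({ rawComment }: Props) {
  const ref = useRef<HTMLDivElement>(null);

  useEffect(() => {
    const node = ref.current;
    if (!node) return;
    if (commentSanitizer && typeof (node as any).setHTML === "function") {
      // Native path: safe by default, the config only restricts further.
      (node as any).setHTML(rawComment, { sanitizer: commentSanitizer });
    } else {
      // Fallback for browsers that haven't shipped the Sanitizer API yet.
      node.innerHTML = DOMPurify.sanitize(rawComment);
    }
  }, [rawComment]);

  // Sanitized markup goes into the DOM imperatively via the ref,
  // staying out of React's diffing and avoiding hydration surprises.
  return <div ref={ref} className="comment" />;
}
```

The DOMPurify fallback keeps behaviour consistent until the API reaches Baseline.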

0 views
Sean Goedecke Yesterday

Why hasn't longer-horizon training slowed AI progress?

Dwarkesh Patel 1 recently posted an award for the best answers to four key questions about AI. It’s partly a challenge and partly a job interview, since some of the winners will get offered a role as a “research collaborator”. I don’t want the job, but I do want to write down my answer to his first question: why hasn’t AI progress slowed down more? There are a few reasons we might think AI progress would slow down. The particular reason Dwarkesh is interested in goes like this. Training a model (specifically reinforcement learning) requires the model to perform a task and then get “graded” on the output. As models get more powerful and tasks become harder, they take longer and require more FLOPs 2 to complete, and thus more FLOPs to train: thus training harder models will take longer. But intuitively, AI progress hasn’t slowed down that much. The famous METR horizon-length graph shows that AI systems are capable of more and more complex tasks over time, and that this process is accelerating, not slowing down. Why would that be? Firstly, it might just be the case that newer models are benefiting from orders of magnitude more FLOPs . Of course, AI labs aren’t standing up orders of magnitude more GPUs (they’re trying, but there are hard physical limits on how fast you can scale up a physical datacenter). But it’s certainly possible that they’re learning to use their existing FLOPs orders of magnitude more efficiently. The efficiency of complex software systems - and the training code for a frontier AI model certainly qualifies - is not typically determined by the number of genius ideas in it. It is determined by the number of boneheaded mistakes. Take this story 3 of how the initial GPT-4 training run used FP16 when summing many small values, which will completely mess up your results if the sum of those values is large. How much training-efficiency-per-FLOP does solving bugs like that buy? Plausibly enough to outweigh any inherent lack of efficiency from training more powerful models. Secondly, intuitions about the speed of AI progress are weird and unreliable . Humans measure AI progress - and intelligence in general - on a really uneven scale. It’s easy to tell when an AI (or a person) is less smart than you, because you can just see them making mistakes. It’s very hard to tell if they’re smarter, because in that case you’re the one making mistakes. You have to rely on more subtle context clues: do they get better long-term results than you, or do they often confuse you in situations where you later end up agreeing with them, and so on. The jump from GPT-3 to GPT-4 seemed huge because GPT-4 was dumber than almost all humans, and GPT-4 was sometimes as smart as a human. However, frontier models are now smart enough to be in the realm of ambiguity on many topics. It’s thus much harder to tell the “real” rate at which they’re getting smarter. Maybe the rate of growth of “raw intelligence” really has slowed down! I don’t know how we’d be in a position to know for sure. Thirdly, many traits other than intelligence determine the capabilities of AI models . Take the jump in October last year where OpenAI and Anthropic models were suddenly “agentic” (i.e. they could reliably perform complex tasks end-to-end). That might be intelligence, but it might also just be a greater working memory, or more rote familiarity with the basic tools of a LLM harness, or more ability to attend to the context window, or even simply a personality more suited to tools like Claude Code or Codex. 
Of course, all of these traits are plausibly “intelligence”. But they’re traits you might instil by various clever tricks (or even just tweaking the system prompt), not by brute-forcing more FLOPs. It’s illustrative here to consider the mistake made by Apple’s infamous The Illusion of Thinking paper, where the researchers asked various models to brute-force solve Tower of Hanoi puzzles with different numbers of disks, using the results to score how good at reasoning the models were. But of course when you read the output, all of the failures were cases of the model realizing that many hundreds of steps were required, and refusing to even try. These same models could trivially write code to perform the steps, or correctly go through any smaller subset of the steps. The problem wasn’t intelligence, it was persistence : these models lacked the willingness to dig in and keep powering through steps until they got to an answer 5 . Even inside an AI lab, I don’t think anyone has a good understanding of how many “real” FLOPs are being thrown at a training run (not counting FLOPs that are wasted on bugs). We also don’t have a clear sense of whether AI progress really is slowing down or not. Mythos seems impressive, and coding agents are really good now, but once the models get close to human intelligence it becomes really tricky to monitor. Finally, almost everyone judges intelligence by capabilities, but capabilities are produced by a constellation of many traits (intelligence is just one of them). I think this stuff is really complicated. A general theory like “RL takes more flops-per-reward as tasks get longer, therefore training will gradually slow down” sounds good, but in practice AI development is dominated by lightning strikes: silly bugs that make training a hundred times worse, clever ideas that make models a hundred times more useful, and spiky capabilities that can produce dazzling results in some areas but zero improvement in others. We are still very early . If you’re reading this you probably know who Dwarkesh is, but if you don’t: he’s a well-known tech-adjacent podcaster whose gimmick is that he actually does extensive research before each guest and asks specific technical questions. ↩ A FLOP is a floating-point operation, i.e. a matrix multiplication, i.e. “time on a GPU”. ↩ I saw this in a tweet and only realized that the source was Dwarkesh when I was researching for this post. ↩ What if AI progress stalls for technical reasons, and everyone gives up on training new models? In that world, open source models will eventually catch up, and AI labs won’t be in a privileged position. ↩ Incidentally, this is my pet theory about why models got much better at agentic tasks last year: training on longer and longer agentic traces meant that models started to “believe they could do it”, and made them much less likely to just give up and take shortcuts or refuse to continue. ↩
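One aside on the low-precision summation bug mentioned above: the failure mode is easy to reproduce in miniature. This is a generic illustration using float32 (via Math.fround, since JavaScript has no native half precision), not anything from the actual training run — with FP16 the effect kicks in far sooner:

```ts
// Low-precision accumulation: once the running sum is large, tiny increments
// fall below the representable step size and are silently rounded away.

let lowPrecisionSum = Math.fround(1_000_000); // simulate a float32 accumulator
let doubleSum = 1_000_000;                    // full double precision for comparison

const tinyValue = 0.01;
for (let i = 0; i < 1_000_000; i++) {
  lowPrecisionSum = Math.fround(lowPrecisionSum + Math.fround(tinyValue));
  doubleSum += tinyValue;
}

console.log(doubleSum);       // ~1_010_000 — the small values are all accounted for
console.log(lowPrecisionSum); // 1_000_000 — every increment was rounded to nothing
```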

0 views

SQLAlchemy 2 In Practice - Chapter 7: Asynchronous SQLAlchemy

This is the seventh chapter of my SQLAlchemy 2 in Practice book. If you'd like to support my work, I encourage you to buy this book, either directly from my store or on Amazon . Thank you! Starting with release 1.4, SQLAlchemy includes support for asynchronous programming with the asyncio package, for both the Core and ORM modules. This is an exciting improvement that brings the power of SQLAlchemy to modern applications such as those written with the FastAPI web framework.

0 views
ava's blog Yesterday

the devil wears prada 2 - loved it

I really like The Devil Wears Prada . I saw it in the cinema when it came out, and I've rewatched it two or three years ago with my wife, who had missed out on it and the cultural impact it had. It surprised me so much when the second movie was suddenly just... there! So I went today and I am absolutely in love. I can't wait until I can see it a couple more times, maybe right after rewatching the first again, and get to draw more connections and conclusions. The following will contain spoilers. At the start, I felt so proud of Andy. She is thriving, she is accomplished, she is getting honored for her work and she has great friends and coworkers! It feels so good to see that even 20 years later, she hasn't lost her ambition and drive, and did not cave for someone else's feelings anymore. She's standing up for herself and is much more confident, too. As you are introduced into the new situation (going back to Runway two decades later), you get to be nostalgic alongside her, which feels like such a good narrative choice; so satisfying to watch. Yes, I totally fell for the nostalgia bait, the " Look, do you remember that piece of info from the first movie? " stuff. It was fun! I was greatly entertained and half the cinema was gasping and squealing at times, recognizing things and pointing at the screen. I liked seeing that some things stayed the same while some things changed, while nothing felt forced or unrealistic. People and companies progress, and while you may see yourself as the main character that people surely will remember, your presence was likely much smaller than you realize. It felt in-character for people not to necessarily remember Andy or to be aghast that she has made it further, and it felt so human for Andy to go: Wait, that process changed? Wait, we don't do it this way anymore? It was also so, so good to see Miranda again, and what they did with her. I think they handled Miranda absolutely well, especially her first appearance. A big fanfare, thrilling, slaying in a dress. She still has her quirks, the air of superiority, the earned respect, the vibe that makes you stumble as you make it into her office - but she is also a Boomer, rather old by now, and even she has slowed down and now seems slightly out of place, overwhelmed. Things aren't like they were before, and she has issues with growing in the direction the work needs to go. Work culture and expectations have shifted, and they have not been kind to the person Miranda is. She can no longer throw her coat at people to assert her dominance, as there have been too many HR complaints; now she has to do it herself. She makes the occasional outdated, offensive Boomer joke in meetings, and while a much younger employee is allowed to reprimand her repeatedly for that, nothing happens. The young workforce has gotten used to their out-of-touch leadership making these sorts of comments (" That's just who she is ") and in turn, leadership has gotten used to feeling this sort of short-lived mild rejection of their words. No more uncritical appeasement and laughing just to laugh, the air is silent now before just moving on. Miranda used to always get her way and was able to boss people around with a sharp tongue - now her power has diminished, as she is ambushed by about eight (?) people in an environment she is not used to and cannot control. 
As such, she is unable to defend herself and the company against a ruthless take-over spurred by neoliberal ideals, too overwhelmed to make sense of it, and feeling left behind in a world that moves so fast. She's smart and cunning, but she can't make sense of the economic babble thrown at her, and her edges are smoothed out by the fear of jeopardizing her role and the possible renegotiation of her planned, but ultimately failed, promotion that Irv never got to announce. She has to grapple with what kind of legacy she wants to leave behind, when it is the right time to stop, what else she even has going in her life, and that her attitude has cost her dearly. As a viewer, it means a lot to see how gracefully they handled the fact that even the biggest, most fearsome Girl Boss ™ is aging out of her aura and control, and it is inevitable, but not necessarily sad. We have seen Miranda's issues with vulnerability and accepting help in the first movie, and here again, she is asked to get over herself for the greater good of everyone involved. It can be quite cringe-worthy how other pieces of media handle the modern world - way too many message pop-up sounds, texts always on screen, frequent video calls, extreme smartphone reliance for plot, and more. My wife described it as "when it is like Netflix shows", and that fits so perfectly. They really utilize this to death in their shows, together with extremely temporary memes and slang that already feel slightly too old once the release happens. I'm so glad this movie didn't fall into that trap! Yes, a main point of the movie is that times have changed - Andy no longer uses a flip phone, print numbers are rapidly falling, everything moves online, content is created for digital feeds, and your audience is not leisurely consuming a fashion magazine in a glamorous way, but seeing your short form content while on the toilet. The goal is to go viral, and there's a need for a much more direct and pressing damage control now that the public can directly fill your comments and mail boxes with their criticism. All while the industry is fighting with downsizing and consolidations. Still, modern tech doesn't get a center role in the movie in this obnoxious way, and they focus more on the core issues and workplace expectations that changed, over implementing a temporary reference or trend that will age badly. They do show some memes, but they are deliberately timeless and very focused on the movie, not trying to tie a current TikTok trend into it. What also "modernized" it in my mind is that aside from making the tyrannical girlboss less relevant in the age of work-life balance and HR complaints, they clearly brought in and parodied the Silicon Valley rich tech bro, just in the characters of Irv's son Jay, and of Benji Barnes. They clearly do not follow the rules of old money, as they dress like they're going out for a hike or the gym, act too casual, childish at times even, and seem to decide unpredictably, on a whim, in this really emotionally cold way. Money without class, without pretension, but also seemingly awkward and clumsy. Benji plans to go to the sun and has stopped drinking water because he thinks it's poisonous; there are mentions of weight loss and Ozempic. Really reminded me of Zuckerberg, Altman, Musk et al. in that way. The movie is full of celeb cameos that also aided the above modern feel; thankfully, most are really subtle, quick, and in the background. I think the ones most noticeable are Lady Gaga (loved her song) and Donatella Versace. 
It felt fair to me; the movie had a huge impact on the fashion world and was a tribute to it, so it makes sense that the second one would also honor their inspirations and also uplift new modeling talent. It felt fun spotting all the easter eggs, so to speak. In the first movie, Andy's boyfriend Nate was a complete dumpster fire. The older I get, the worse it ages. The narrative felt sexist, and I think the writers wanted to acknowledge that in this second movie. The New Guy ™ is a genuinely kind guy, but also kind of carries the vibe of all fictional men who are sanitized to death and would love to break out in a therapyspeak monologue about what is wrong with the other character. Still, I appreciate that over Nate, so we are good. The movie could have gone without the romance altogether. It added nothing to the core plot, and the screentime was minimal. I understand what they were trying to do, though: For once, show Andy in a normal relationship, resolving conflicts maturely, and that she doesn't need to choose between love and career like the first movie made it seem. And I can tolerate that. At least we were spared absolute hetslop . Emily is such a weird character to me. I did not think she would ever become so central, and I still think it is a weird choice, and probably the only thing in the movie I am scratching my head about. I guess retrospectively, I could see how the writers would wanna let Emily get her lick back on Andy for essentially coming in and torpedo-ing all her plans and dreams in the first movie, but it still felt... odd to me. Maybe because the way Emily and Andy compete in the first is such a subplot to me in the first, as I enjoy the rest more? I guess in light of that, making Emily mean and giving her the power to absolutely ruin Andy and Miranda makes sense, but something about it feels incomplete. At the end of the first movie, things seemed pretty resolved. But a late explanation of an unanswered phone call is what we are supposed to believe is what made Emily so cold this time? Not enough for me. I am also missing more reasons to empathize with how quickly Andy is just forgiving Emily for everything, when she hasn't only seemingly been fine with using her boyfriend for money, but also wanted to make tons of people jobless, and center herself in the magazine. Wild. Which leads me to the second point: Interesting imagery. For the entire movie until the end, Emily has red hair. The color red usually symbolizes power, evil, villains, blood, pain, and sin, and red hair is often associated with having a bit of a temper. Meanwhile, after everything comes out and she is ready to make amends and start over as her boyfriend broke up with her, her hair is platinum blonde, almost white, a color associated with innocence and new beginnings. In another part of the movie, Andy and Miranda look at the wall mural The Last Supper . Miranda muses that Jesus is depicted without a halo because it is meant to emphasize his humanity and fallibility, our shared inclination to betray one another. This is obviously foreshadowing to what is going to happen later, but it's interesting that minutes later, she is depicted at a large banquet table in front of the mural, seemingly imitating it in the place of Christ. There is also a gorgeous shot of her in the Galleria Vittorio Emanuele II in Milan, alone, sad, literally at a crossroads, surrounded by luxury and old, influential history. Ahhh, I wish I could write more, but the longer it's been, the more I am forgetting. 
I wish I could let it run on my second screen as I type. Maybe one day I will update this 8) Reply via email Published 06 May, 2026

0 views
fLaMEd fury Yesterday

MGK At The Spark Arena

What’s going on, Internet? Last night, me, my sister-n-law and our friend went into town to see the MGK gig as he brought his Lost Americana tour to Auckland for his only New Zealand show. MGK, aka Machine Gun Kelly, aka Colson Baker is one of those artists where it’s probably good to separate the art from the artist as he seems to be a ball bag in real life. I never paid any attention to him while he did hip hop records, but as soon as I saw the Bloody Valentine I was hooked. The album, Tickets To My Downfall was the exact type of nostalgia I needed for early 2000s pop punk in 2020. I skimmed through Mainstream Sellout when it released and never came back to it. We got Lost Americana last year which was a step up from the second record and I listened to it a bunch. But we also got Tickets To My Downfall All Access last year, the 5th anniversary reissue. Original tracklist, the bonus tracks from the SOLD OUT Deluxe , plus 5 new unreleased tracks. Whew. It was good to hear some more tracks from that era. We managed to grab reseller tickets, paid less for the three of us combined than a single ticket at face value, and the seats were pretty decent for where we ended up. Sweet as. Anyway, the show was good. It kicked off on time, it was loud, there were guitars and drums, only a couple throwbacks to the rap days and one or two songs from Sellout. It didn’t take long to get right into the Tickets To My Downfall songs and that was all I needed to hear. The stage was on theme too. A model of the Statue of Liberty’s head looming above with a cigarette hanging out her mouth, and his mic stand was a giant cigarette to match. Lost Americana indeed. The crowd around us were all there for the same reasons. Singing along with strangers who love the same songs is one of the best bits of a gig, especially the Tickets ones. Title Track , Drunk Face , Forget Me Too , Concert For Aliens , Jawbreaker , Nothing Inside , all hit. The cover of Paramore’s Misery Business was expected, and rocked. My absolute highlight was belting out Bloody Valentine word for word with everyone around me. My Ex’s Best Friend my second favourite on the album, still can’t get that one out of my head. We had a great time, a fantastic night out. Damn, what a show. I’ll see it again without hesitation. Hey, thanks for reading this post in your feed reader! Want to chat? Reply by email or add me on XMPP , or send a webmention . Check out the posts archive on the website.

0 views
Unsung Yesterday

“Watchmaker’s delicate precision and ornate mechanical intent”

A surprising entry in the thread started by Photoshop and continuing through screwdriver handles is this 11-minute video from Errant Signal about a platformer game called Derelict Star . I was inspired by the video, and really enjoyed its exploration of a demanding game that’s composed of just a few mechanics that are done really, really well: The number of inputs are small, but the expression those inputs allow is deceptively expansive. […] Derelict Star’s various areas are all built to explore the way movement systems function and even interact with one another. I think of user interfaces similarly, and of their need to build a certain consistent vocabulary of names, gestures, interface elements, concepts, and so on. Perhaps in an enterprise app you right click and discover something useful in a menu, and this will teach you about the usefulness of right click menus in general. Maybe pressing ⌥ to get to alternate symbols on your keyboard would inspire you (either consciously or not!) to try holding ⌥ in said menus, only to discover this brings up useful alternative options. Maybe seeing a keyboard shortcut next to one of these options will suggest doing that next time, and so on, and so on. I really loved this bit in the video that could apply to a lot more software than just videogames: It took me maybe an hour to do this, but right on the other side is a checkpoint. The game is hard, but it isn’t cruel. It’s designed to challenge you, but it has faith in your ability to complete it. The narrator uses the term “ludocentrism” to refer to games that ruthlessly prioritize the mechanics and gameplay over narrative, aesthetics, and so on. (“Ludic” meaning “relating to play.”) Of course, the calculus of what videogames care about will be different than the goals of creative software or enterprise software; no one cares about the hero’s journey of the largest number in your Excel spreadsheet. But I think some version of ludocentrism applies to “boring” software as well. My beliefs here are probably something like this: you can’t reduce everything to just functionality or just efficiency, especially in creative moments of software use, and people use software creatively much more often than we suspect, including software not thought of as “for creatives.” #definitions #details #games #youtube

0 views

Rails Security, AI, and IBB

For quite a few years the Rails project has been working with the Internet Bug Bounty (IBB). The IBB is an organization that awarded cash to security researchers that reported issues to OSS projects participating in the IBB. For quite a while I wasn’t certain about my feelings toward the program because I felt like cash rewards could incentivize low quality reports as well as encourage reporters to “haggle” about the severity of a particular bug (the IBB paid more when the bug was more severe). In the beginning that certainly was the case. We were fielding many low quality reports, and people were haggling over severity. But the program evolved, and despite the never-ending haggling, I felt it did more good (rewarding security researchers) than bad (forcing the security team to wade through low quality reports). That is, until AI came along. Sometime in 2025 our team started getting inundated with low quality AI generated reports. I know for sure this wasn’t unique to just our team as well. Anyway, AI lowered the barrier to generate reports, so we were back in the era of wading through low quality reports. Only this time, the low quality reports were masquerading as high quality reports. AI made it easy to turn a bullshit problem into something that looked legit, and since there’s a possibility of money involved people tried to take advantage of the situation. We even had a report where someone forgot to delete the AI generated output and just uploaded the report as-is with the following text: I enjoy using AI, but I really don’t like AI being used on me. But that’s not what this post is about. Recently the IBB stopped accepting new submissions. In other words, they aren’t paying bounties to security researchers anymore. I don’t know for sure since I haven’t asked them directly, but I suspect this is due to so many projects being inundated with AI generated reports. I think putting a stop to bounties makes sense for the time being. Of course the downside is that legitimate researchers are no longer incentivized to report bugs to OSS projects. Finally, the Rails team didn’t actually handle paying out any of the bounties. After we accept and release fixes, the IBB took care of the bounties and we had no visibility into that process. Since the IBB has stopped accepting new submissions and paying bounties, we’re now tasked with playing customer support for IBB as many reporters are now asking us “are we getting paid?” I honestly don’t know what to make of this situation except that working in OSS security will always find new and interesting ways to suck. I don’t have any particular “call to action” for this post, but I hope that it gives people some kind of glimpse into how the tofu is made. Anyway, have a good day, and remember: It’s always Friday somewhere!

0 views

Cache Use Cases Explained

☕ Welcome to The Coder Cafe! Today, we discuss cache use cases. When we think about caching, it’s pretty frequent to focus on where it happens; for example, client-side, server-side, or in a CDN. Yet, there’s a more important question that should be answered first: What’s the use case? In this post, we will break down two common cache use cases: reducing latency and improving capacity. And we will see why the line between the two is blurrier than it seems. Get cozy, grab a coffee, and let’s begin! A Cache for Latency Latency is the time between when a request is sent and when a response is received. A cache for latency exists to reduce the average latency of a service . The classic access pattern looks like this 1 : We check the cache first. On a cache hit, we return the data directly without touching the backend. On a miss, we go to the backend, return the result, and store it in the cache for future requests. Why does this reduce latency? The cache keeps data in memory, which is significantly faster to read from than a remote database that may involve network round-trips, disk I/O, and query execution. On a hit, all of that work is skipped. In Soft vs. Hard Dependency , we introduced two kinds of dependencies: A soft dependency is a non-critical dependency for the service to operate properly. A hard dependency is a critical dependency for the service to operate properly. A cache for latency is a soft dependency . If the cache becomes unavailable, requests fall through to the backend. The system keeps working, just at a higher latency. Keep this in mind, because it’s the key difference we’ll come back to. A cache for capacity exists to serve higher throughput than the backend can handle on its own. The access pattern is identical to the latency case: cache first, then backend on a miss. So what actually makes these two different? The difference is not in the code; it’s in what the backend can absorb. In a capacity scenario, the backend would be overwhelmed if it received all the traffic directly. The cache absorbs a large portion of the requests, keeping the backend load manageable. This changes the nature of the dependency . If the cache goes down, the backend is suddenly hit with all the traffic it was previously shielded from. Whether the system survives depends on the backend’s own capacity. If the backend can scale fast enough, the cache is still a soft dependency: there will be a rough period, but the system recovers. If the backend can’t cope with the load, the cache becomes a hard dependency . Without it, the system fails . Here’s a question worth asking: if the access pattern for both types is identical, how do we know which one we have? In most cases, caches are introduced to reduce latency. But here’s what can happen over time: Our system is stable. Cache hit rates are high, backend load is low. Traffic grows. The backend load stays low because the cache is absorbing most of it. Nothing breaks. No alerts fire. Six months pass. Nothing has changed, no code, no configuration, no architecture decision. And yet the cache is no longer reducing latency. It’s keeping the backend alive. The cache didn’t change. The code didn’t change. The system grew around the cache, and the cache quietly became load-bearing . The same risk appears when a cache goes cold. For example: A migration to a new cache instance A data format change that requires purging existing entries A cache restart after maintenance Any of these can produce a large wave of cache misses in a short window. 
If we were running a latency cache, we would see higher latency for a while. If we were running a capacity cache, we would see a traffic spike that the backend can’t absorb. The unsettling part is that the code is identical in both cases. The difference only becomes visible at failure time . The root problem is that teams often don’t know which type of cache they’re running . They built it for latency, and that’s still how they think about it, even as the system outgrows that assumption. A few approaches help here: Periodically ask: could the backend handle the current traffic if the cache were completely removed ? Load testing without the cache, or estimating backend capacity against current traffic levels, gives you a concrete answer. Treat cache hit rate as a meaningful operational signal , not just a performance metric. A sustained drop in hit rate means the backend is absorbing more traffic than usual. If that trend continues, it’s an early warning that you may be drifting toward a capacity problem. When migrating a cache or invalidating a large portion of its data, warm the new cache before routing live traffic to it. This prevents a cold-start burst from hitting the backend all at once. Finally, once we recognize that a cache is operating as a capacity cache , we should treat it accordingly. It’s no longer optional infrastructure and it deserves proper alerting and a clear plan for what happens if it goes down. A cache for latency serves data from memory to reduce average response time. It is a soft dependency: if unavailable, the system degrades in latency but continues to work. A cache for capacity absorbs traffic that the backend couldn’t handle on its own. It can be a soft or a hard dependency, depending on whether the backend can absorb the load without it. Both types share the same access pattern, which makes them easy to confuse. A latency cache can silently become a capacity cache as traffic grows, without any code change. When a capacity cache goes cold or fails, the backend can be overwhelmed. Hit rate monitoring, periodic load testing, and cache warming are practical ways to manage this risk. Even though variations exist.
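To make the shared access pattern concrete, here is a minimal cache-aside sketch. The Cache and Backend interfaces and the warming helper are illustrative, not from the post; a real implementation also needs TTLs, error handling, and stampede protection:

```ts
interface Cache {
  get(key: string): Promise<string | null>;
  set(key: string, value: string): Promise<void>;
}

interface Backend {
  query(key: string): Promise<string>;
}

// Identical code whether the cache is "for latency" or "for capacity" —
// the difference only shows up in what the backend can absorb on a miss.
async function getValue(key: string, cache: Cache, backend: Backend): Promise<string> {
  const cached = await cache.get(key);
  if (cached !== null) return cached;      // hit: backend untouched

  const fresh = await backend.query(key);  // miss: backend absorbs the request
  await cache.set(key, fresh);
  return fresh;
}

// Warm a new cache with the hottest keys before routing live traffic to it,
// so a migration or purge doesn't turn into a cold-start burst on the backend.
async function warmCache(hotKeys: string[], cache: Cache, backend: Backend): Promise<void> {
  for (const key of hotKeys) {
    await cache.set(key, await backend.query(key));
  }
}
```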

0 views

Live blog: Code w/ Claude 2026

I'm at Anthropic's Code w/ Claude event today. Here's my live blog of the morning keynote sessions. You are only seeing the long-form articles from my blog. Subscribe to /atom/everything/ to get all of my posts, or take a look at my other subscription options .

0 views

Am I Meant To Be Impressed?

If you liked this piece, please subscribe to my premium newsletter. It’s $70 a year, or $7 a month, and in return you get a weekly newsletter that’s usually anywhere from 5,000 to 18,000 words, including vast, detailed analyses of NVIDIA , Anthropic and OpenAI’s finances , and the AI bubble writ large .  I just published a lengthy discussion about how OpenAI and Anthropic make up 70%+ of all AI GPU compute capacity and revenue . The previous week I wrote about how OpenAI will kill Oracle — and quite possibly Larry Ellison’s personal fortune, too . Subscribing to premium is both great value and makes it possible to write these large, deeply-researched free pieces every week.  God, it’s been a long few years, and only feels longer after every ecstatic, ridiculous round of tech earnings where the world’s largest companies do everything they can to obfuscate the ugly truth behind their numbers. Let’s start with the biggest, ugliest one: Microsoft, Google, Amazon, and Meta are expected to spend between $800 billion and $900 billion on AI capex in 2026, and over $1 trillion in 2027 . By the end of 2027, big tech will have sunk $2 trillion into AI capex, with very little to show for it. Oh, I know what you’re going to say. “These companies are growing faster than ever!” “These companies are building for future revenue streams!” “These companies are saying that AI is driving growth!”  Yet those revenues are, in the case of Meta and Google, not good enough to actually share.   While Google CEO Sundar Pichai will gladly say that “[Google’s] AI investments and full stack approach are lighting up every part of the business,” said “lighting up” never results in a revenue number that you can point at, because Google knows that analysts and journalists will read “Gemini Enterprise has great momentum with 40% quarter on quarter growth” — which we have no frame of reference for because Google doesn’t share its AI revenues — and clap and honk like fucking seals. Sundar Pichai knows that everybody is desperate to see him jingle his keys, and has such utter contempt for reporters, analysts, and investors that he doesn’t have to prove AI is actually doing anything. Those writing up his earnings will do it for him.  Meta, on the other hand, has little real AI story, and can’t even seem to get its metrics straight on what AI is doing for the company, per my premium piece from earlier in the week : Nevertheless, I have to give Microsoft and Amazon credit for deeming us worthy of actual numbers, even if they’re piss poor. While Meta and Google refuse to actually explain their AI returns, Microsoft revealed that it had $37 billion in AI revenue run rate — $3.08 billion a month or so — and Amazon had $15 billion, or around $1.25 billion a month . And I must be clear, that’s revenue, not profit. In any case, I need you to recognize how small these numbers are in comparison to the capex it’s taken to make them.  To give you some context, Amazon’s monthly AI revenue is roughly 0.419% of the $298 billion it has spent on AI capex so far, or around 25% of the $5 billion it just invested in Anthropic last week . Microsoft, on the other hand, has spent $293.8 billion on AI capex through its latest quarter — making its monthly AI revenue around 1.04% of its spend. These revenues are deeply embarrassing! I am not sure why this isn’t the common refrain!
These fucknuts have spent over a trillion dollars on AI and all they have to show for it is either nothing , vague statements about “everything lifting because of AI,” or pathetic revenues that only get worse the more you think about them.  For example: even if Microsoft were to make $37 billion in AI revenue in 2026 — remember, that $37 billion run rate is a snapshot in time! — that would still be $500 million less than the $37.5 billion it spent in capital expenditures in the fourth quarter of 2025 .  Yet things actually get worse when you think about the sources of that revenue, or perhaps I should say source, as both Microsoft and Amazon (and I’d argue Google too, but we don’t know its AI revenues) are heavily-dependent on their large, unsustainable sons — Anthropic and OpenAI. I’ll explain. Microsoft claims that its $37 billion in AI revenue run rate has grown by 123% year-over-year, which means its run rate, not actual 2025 AI revenue, was about $16.59 billion in Q3 FY25, or around $1.38 billion a month or, if you assume that number is consistent over the quarter (it likely wasn’t), about $4.14 billion. Based on my own reporting from direct Azure revenue numbers, this would make OpenAI’s $2.947 billion in inference spend in that quarter around 71% ($11.7bn) of Microsoft’s Q3 FY2025 AI revenue run rate. That’s embarrassing!  Oh, and capital expenditures for that quarter were $21.4 billion , or around $4.81 billion more than its annualized revenue.  Yet my reporting helps us be a little more annoying than that. Back in January 2025 — around Microsoft’s Q2FY2025 earnings — it announced that its AI revenue run rate had hit $13 billion , or around $1.083 billion a month (or $3.25bn a quarter or so). In that same quarter, OpenAI had spent $2.075 billion on inference on Azure, or 63.8% of Microsoft’s AI run rate. This is particularly funny when you go back to the quarter before, where Microsoft CEO Satya Nadella low-balled that figure, claiming it would be $10 billion in annualized run rate, and specifically said the following : That’s…not really what happened. Today I can report, based on discussions with sources with direct knowledge of Azure revenue, that in Q2 FY2025, Microsoft brought in around $325.2 million in revenue via renting out GPUs and other AI infrastructure, and around $367 million in revenue from Microsoft 365 Copilot, or less than half of the $1.467 billion that OpenAI spent on inference.  If you’re curious, the next quarter (Q3FY2025), AI infrastructure brought in around $412 million, and Microsoft 365 brought around $300 million.  While my sourcing for Azure revenues cuts off at Q3 FY2025, my OpenAI inference and revenue share data goes out a further two quarters to Q4 FY2025 and Q1 FY2026 (so Q2 and Q3 of the calendar year 2025), as well as half of Q2FY2026, and we can make some fairly straightforward estimates as a result. So, based on my reporting, OpenAI spent $3.648 billion dollars on inference in the third quarter of 2025 on Microsoft Azure, or around $14.4 billion on an annualized basis.  While I only had half the fourth quarter’s numbers, I estimate that OpenAI’s annualized spend hit over $18.5 billion — or around $4.6 billion a quarter — by the end of the year, and that’s not accounting for things like Sora 2 or the launch of its Codex coding platform. In total, this puts its spend at an estimated $13 billion dollars on Azure just on inference, with billions more on training. Yet Microsoft Azure isn’t the only place that Microsoft gets fed revenue from OpenAI. 
Microsoft also accounted for 67% of CoreWeave’s $5.15 billion in 2025 revenue — or around $3.45 billion — and all of that is used by OpenAI. I also believe this is used for OpenAI’s training compute, as CoreWeave’s announcement related to its direct deal with OpenAI specifically said it was contracted “...to power the training of [OpenAI’s] most advanced next-generation models,” and said capacity was only available because Microsoft declined to extend its current agreement to use compute for OpenAI . Altogether, that puts OpenAI’s spend on Microsoft services at over $18 billion in 2025, and it’s easy to see how that would grow to over $24 billion on an annualized basis in the last quarter, or around $2 billion a month. Microsoft is OpenAI’s primary cloud provider, and I estimate that OpenAI represents around 70% of its AI revenue, while taking up the majority of its infrastructure. Otherwise, Microsoft’s 20 million Copilot 365 subscribers likely pay no more than $7 billion a year. I also think that OpenAI is taking up the lion’s share of compute. As I discussed in my most-recent premium newsletter , Epoch estimates that Microsoft had around 2GW of compute by the end of 2025, with OpenAI as its largest customer. At the end of 2025, OpenAI’s CFO said that it had access to 1.9GW in compute, at a time when its compute was entirely supported by Microsoft and CoreWeave (estimated to have 480MW of compute).  Considering that 67% of CoreWeave’s revenue came from Microsoft renting capacity for OpenAI , I also think that it’s fair to assume that 80% or more of Microsoft’s GPUs are taken up by OpenAI, though some might now be taken up by Anthropic, which agreed to spend $30 billion on Azure. I’ve also confirmed that Microsoft’s “Fairwater” data centers — which constitute (when finished) “ hundreds of thousands of GPUs ” — are entirely reserved for OpenAI.  Microsoft desperately wants you to think that this is a diverse, booming revenue stream, when in fact it’s spent around $293 billion in four years to make — when you remove OpenAI — less than $3 billion a quarter in revenue, not profit. Booooooo! Booooooo!!!!! As far as Amazon goes, things get a lot grimmer. As I mentioned earlier, in early April , per Reuters, Amazon’s Andy Jassy admitted that its “cloud business’ AI revenue run rate was more than $15 billion in the first quarter of 2026,” which translates to around $1.25 billion in monthly revenue, or roughly 0.419% of the $298.3 billion in capex it spent so far, or around 25% of the $5 billion it just invested in Anthropic two weeks ago .  I also think it’s reasonable to assume that a large part — if not the majority of — that revenue comes from Anthropic. Per my reporting last year , Anthropic spent $518.9 million on Amazon Web Services, at a time when it had around $7 billion in annualized revenue, a figure that’s increased by 500% (if you believe it) to $30 billion in annualized revenue since . $518.9 million is about $6.2 billion in annualized spend, and I think it’s fair to assume that its spend will have at least doubled to $12 billion on an annualized basis, or around 80% of Amazon’s AI revenue. As of the end of Q4 2025, Amazon had 1.67GW of capacity — and based on my estimates from my newsletter published April 21 , 500MW of that is taken up by Project Rainier, a data center dedicated entirely to Anthropic , which is also Amazon’s largest AI customer. I’d be confident in assuming that more than 75% of its capacity is taken up by Anthropic.
And man, $1.25 billion a month is fucking pathetic. I'm sorry, how are any of you possibly impressed by this? God, everyone loves to slurp down Sundar's slop. You all fall for it! Sundar Pichai doesn't respect you enough to tell you how much AI revenue Google makes, and he doesn't have to, because its current businesses continue to grow thanks to its tried and tested tactic of making shit harder to use so that Google services can show you more ads. Nevertheless, people are doing backflips over Google Cloud's 63% year-over-year revenue growth ($20.03 billion), and I have a few thoughts: One of the reasons that Google might not want to break out its AI revenues is that they're — much like Amazon — heavily-inflated by Anthropic's compute spend. Sadly, we have only a little information about Anthropic's spend outside of its promise to use "up to one million TPUs, with over a gigawatt of capacity [coming] online in 2026" from the end of last year, and a month ago, when it said it would use "multiple gigawatts of next-generation TPU capacity…starting in 2027."
Another approach might be to travel back in time to before Anthropic was a huge consumer of compute. In Q4 2023, Google Cloud sat at about $9.19 billion a quarter, and $11.96 billion in Q4 2024 (around 23% year-over-year, but a putrid 5% quarter-over-quarter from Q3 2024). By Q2 2025, it sat at $13.62 billion, and as I mentioned above, accelerated to $15.15 billion to $17.66 billion (14.2% quarter-over-quarter) to $20 billion (11.7% quarter-over-quarter) in the following three quarters. These periods match up exactly to Anthropic's big jumps in revenue from Q2 2025 (around $3 billion ARR) to Q3 2025 (around $7 billion ARR) to Q4 2025 (around $9 billion ARR) to Q1 2026 (around $19 billion ARR), which suggests that Anthropic's growth is what's actually boosting Google Cloud. Yet things get weirder when you listen to Google's most-recent earnings call: Interesting. Interesting. Google appears to be planning to sell its TPUs — its own custom silicon it currently uses only for its own services and some of Anthropic's — to a non-specific number of unnamed customers, to the point that its remaining performance obligations jumped from $242.8 billion to $467.8 billion in the space of a quarter. Regardless, that's a remarkable jump, especially when you try and work out who they sell to... oh wait, we actually know! Google also signed a multi-billion dollar deal to rent TPUs to Meta, per The Information, and is also discussing A) selling TPUs to Meta directly, and B) creating SPVs that will buy its own TPUs and lease them to others: This is exactly the same shit NVIDIA did with xAI's GPU-related financing last year. To explain, Google is creating something called a special purpose vehicle — a company with one purpose — that it then funds along with an investment firm. The SPV then raises cash via debt, which it then uses to buy TPUs directly from Google.
Now, remember that Anthropic deal to use a million TPUs from last year? How about the deal with Broadcom (which makes TPUs for Google) and Google to use "multiple gigawatts" of TPUs starting in 2027? Well, per CNBC, Anthropic agreed to buy $21 billion of Broadcom's TPUs in 2026 and $42 billion in 2027. Where will those TPUs go? Google's data centers, probably the ones that it's backstopping, per my premium from the beginning of the week: It's a pretty sweet deal for Google!
Google pays Broadcom to develop TPUs, Anthropic pays Google to buy those TPUs once Broadcom builds them, Google installs those TPUs in a data center, and then Anthropic pays Google to rent them back.  This isn’t real demand! Boo!!!!!! BOOOOOO!!!!!! So, for the sake of transparency, I wrote the above before The Information published its story about how Anthropic had committed to spend $200 billion on Google Cloud and TPU chips, which contained this very important detail: The Information’s story also had this fascinating chart showing that around 50% of Amazon, Google and Microsoft’s backlog (which includes all revenues not just AI) — a staggering amount — is made up of revenue from OpenAI and Anthropic: To be clear, I also wrote the below before this chart ran, because it was very fucking obvious when you actually looked at the numbers .  Anyway, as I said in my last premium newsletter: As I’ve explained, most AI revenues out of Google, Microsoft and Amazon come from two companies that lose billions of dollars a year, have no path to profitability, and are only able to keep paying these companies because the companies (and investors) keep feeding them money. These relationships are utterly poisonous, and an intentional attempt to deceive investors and the general public.  Google now plans to invest up to $43 billion in Anthropic, a company that I estimate takes up at least half of its 2.95GW of capacity, which has cost it around $211 billion in capex since 2023. Amazon has already invested $13 billion and as much as another $20 billion more in Anthropic, and announced its latest round with a statement about how Anthropic will use up to 5GW of compute capacity . While dimwits might read this and say “WOW, AMAZON JUST LOCKED UP TONS OF FUTURE REVENUE,” it’s important to remember that Anthropic plans to lose $11 billion a year both in 2026 and 2027, and that’s based on its own internal (and fanciful) projections!   Let me spell it out in a way that boosters can understand, in the style of Gillam Fitness : Anthropic not have money to pay big cloud bills, because Anthropic company cost lots of money, more money than Anthropic make! So Anthropic only PAY cloud bills if OTHERS give it money! Amazon GIVE MONEY to Anthropic to GIVE BACK TO AMAZON, which mean no profit! And Amazon not give Anthropic enough money to pay it, so Anthropic have to ask OTHERS for money! That BAD! It mean BUSINESS not STABLE, and CLIENT not STABLE.  This bad when client MOST OF AI MONEY! This ALSO mean that Anthropic RELIANT on OTHERS to pay AMAZON, which make AMAZON dependent on VENTURE CAPITAL for FUTURE REVENUE! Amazon SAY it have BIG BUSINESS, but BIG BUSINESS dependent on ANTHROPIC, which mean BIG BUSINESS dependent on VENTURE CAPITAL! This SAME for GOOGLE! Both say they have BIG CLIENT, but BIG CLIENT MONEY not supported by REVENUE, so BIG CLIENT actually mean “HOW MUCH VENTURE CAPITAL MONEY ANTHROPIC HAVE.”  This bad business!  And it really, really is .  Most of Amazon, Google and Microsoft’s capex is being driven into capacity mostly used by OpenAI and Anthropic, neither of whom have the money to pay without continual infusions of more capital. Only Microsoft was smart enough to realize the problem, which is why it allowed Oracle to take over the majority of OpenAI’s future capacity ( which may kill Oracle, by the way! ), but both Google and Amazon keep feeding Anthropic money so that Anthropic can feed it right back to them.  I’m going to try and speak simply again, because I’m still not sure people get this. 
The only solution to this problem is if either Anthropic or OpenAI can somehow find a way to become profitable, something that I have yet to see any proof is possible. In fact, the only proof I can find is that these fucking companies are more unprofitable than ever — in the last month, Anthropic raised $10 billion from Google, $5 billion from Amazon, and is reportedly trying to raise another $50 billion from investors, less than three months after it raised $30 billion on February 12, 2026, which was five months after it raised $13 billion in September 2025. That's $58 billion in eight months, with the potential to raise it to $108 billion. I'm gonna be honest, I think Anthropic is outright misleading its investors if it's saying that it will only burn $11 billion in 2026 and 2027, per The Information: If that were the case, why does Anthropic need to raise one hundred and eight billion fucking dollars in less than three quarters?
Time to make up some booster talking points and get mad at them: So, SemiAnalysis — which traditionally does not wheel and deal in revenues! — randomly said that Anthropic had hit $44 billion in ARR, or around $3.08 billion in monthly revenue and…I'm sorry, what? I know that my suspicion of Anthropic's revenue numbers has effectively become a meme by this point, but something about this doesn't add up. If we cut the periods down to strictly those after March 9, that means that Anthropic brought in somewhere between $4.5 billion and $5.58 billion in less than two months, or roughly its entire lifetime revenue. This was also a period where Anthropic claimed it was facing capacity shortages, but said shortages only appeared to create performance issues for its current customers rather than stopping Anthropic from making money… …which makes me wonder what all of this "capacity" talk is actually about. If Anthropic is truly facing a "capacity crunch," it's choosing to solve said crunch through sheer, unbridled greed, taking on more customers as it struggles to keep its services at above two nines of availability. If it were an ethical business, it would simply stop taking on new clients, much like GitHub Copilot did as it transitions to token-based billing. Nevertheless, its capacity issues also make me wonder whether it's actually taking on all that revenue, and if so, where it's actually coming from.
Per Newcomer, as of the end of last year, 85% of Anthropic's revenue came from API calls from companies or individuals using their models to power services. This would mean that there was roughly — assuming that number is down to around 70% given the ascent of Claude subscriptions — $3.5 billion of API spend in the space of two months, or a few thousand trillion tokens' worth of spend. For some context, Meta's "token-maxing" fiasco from the beginning of April involved it burning around 60 trillion tokens in 30 days, but based on discussions with sources familiar with Meta's spend, 80% of that was cache reads. The Information estimates that the actual cost in that period was around $330 million, meaning that Anthropic needs at least another five — if not ten — Meta-sized customers, or incredibly dispersed demand that has effectively appeared out of nowhere in the past three months, to possibly come close to those numbers. I personally think it's because Anthropic is doing something peculiar with its annualized revenue calculations.
Per The Information : The first and most-obvious place to game the numbers is that Anthropic chooses a single day’s active subscribers to anchor to its annualized revenues, which means it can preferentially select one where, say, a bunch of new people were signed up under a trial, or avoid a day where churn had users leaving. One could easily include those who are canceled but have yet to actually leave the service — such as somebody who canceled on April 7th but would still be on as a “paid” subscriber until May 8th — too. As far as API credits go, it’s easy to manipulate a four-week-long segment based on how Anthropic bills its enterprise customers, specifically self-service enterprise deals . In this case, Anthropic customers pre-pay a sum (say, $50 million) in credits that are billed based on their teams’ usage, and don’t expire or run out unless they’re actively consumed. Anthropic could very, very easily manipulate this by — instead of booking based on an enterprise’s actual token burn — saying “we just got $50 million in API revenue in a calendar month!” even though that $50 million might take months to actually use. To be fair, there are also other customers (referred to as “sales-assisted”) that are billed in arrears for their consumption. It’s unclear what the split is, and Anthropic doesn’t have to tell you. Remember: Anthropic is a private company! It can do all the non-GAAP bullshit it likes.  I keep hearing about how Anthropic is capacity-strained and all that shit, but I don’t hear any explanations as to how it fixes that problem, or what that problem actually means for the business itself. Somehow being “capacity constrained” has led to the company making more revenue, which makes me wonder whether it’s a “constraint” so much as “a company running as shitty a service as it can while billing as much as possible.” Either way, it’s unclear how many data centers are actually getting built, or indeed how long they’re taking to build. What does Anthropic do if it’s 12-18 months away? And really, why do these capacity constraints not seem to have any effect on its revenue growth? I ask because Sundar Pichai noted on Google’s most-recent earnings call that Google Cloud would’ve made more revenue had it had the capacity to meet demand. Why is Google revenue-constrained due to capacity but not Anthropic? While there’s a compelling argument to be made that Anthropic was the customer that would’ve bought that compute, I think we deserve an actual explanation of what Anthropic needs more compute for if it’s not “to make more money.” Also, if it’s currently making as much money as it likes with its current capacity constraints, wouldn’t getting more compute…make the numbers worse? Ah, fuck it, let’s move onto something funnier. Meta is probably the funniest company in the AI bubble, in the sense that it does not appear to have anything approaching an AI strategy beyond “build as much data center capacity as possible” and “ lose $4 billion a quarter selling pervert glasses .” I realize I sound a little dismissive, but nobody can actually explain to me what Meta is doing with AI in a way that remotely justifies it burning $158.25 billion in capex since 2023, with plans to spend as much as $145 billion in 2026 alone . Oh, Meta’s AI app was high in the app store charts? Who fuckin’ cares! Who gives a shit! Oh, it launched its own closed-source “Muse Spark” model ? What am I meant to be impressed about? 
That over $150 billion has resulted in a model that ranks #27 on the LLM leaderboards in coding? Now, some of you — including people I respect so much I'm not going to mention them by name! — appear to believe that Meta has some super-secret way of using all these GPUs to make "more money from ads," and I must be clear that Meta has yet to explain that that's the case. Per last premium: You'll note that these conversion numbers aren't connected to any financials, which makes them a little suspicious, as 99% of Meta's revenue is ads, and "more conversions" should be fairly easy to peg to "more money"...unless said conversions aren't actually converting into revenue for Meta's advertisers. What does a "conversion" mean, in this case? Are these CPA ads that reward Meta on a clickthrough? Or CPM ones that pay per thousand impressions that just happen to result in a click? Again, these are ads, which means that it'd be very easy to take that "conversion" number and turn it into "made $X," unless of course said amount is pathetically small in the grand scheme of things. Seriously though, what is Meta doing?
I suppose it doesn't matter when the Wall Street Journal will breathlessly write that (and I quote) Meta is envisioning "supersmart agents" and the following lede that I find to be one of the more-revolting things I've read about a hyperscaler recently: You may be wondering what the "glimpse" was, and it was "laying off 8000 people" and "grading employees in performance reviews on their AI use" and "making a CEO chatbot for Mark Zuckerberg to talk to." This is an ugly, wasteful, distressed company that has no idea what to do anymore, run by a mad king who literally cannot be fired, and those who are charged with scrutinizing it will write entirely imaginary comments like "wow, Mark Zuckerberg is building supersmart agents!" without a second's thought.
The magical hysteria of the AI bubble is such that Meta, Microsoft, Google and Amazon are, despite showing no actual profit from their AI investments, effectively protected by most of the media, investors and analysts. To be clear, I don't think any of these companies die as a result of the bubble bursting, but I'm sick and tired of hearing everybody cover their asses with the same tired and hollow talking points, so I've pulled together a few of them: So, while this is technically true — as I said, these companies will not die as a result of the bubble bursting — any investor (or person who wants to deal in "the truth" rather than "stuff they misread or misremembered") should be deeply concerned that they've sunk around a trillion dollars into AI capex, and all they've done is incubate two large, unprofitable companies that have become a burden on their infrastructure, and revenue streams that they either refuse to disclose or that are both incredibly centralized and proportionately embarrassing. Let's get specific: since 2023, Microsoft, Google, Amazon, and Meta have spent a little over $850 billion in capex, mostly hoarding NVIDIA GPUs that will be near-to-completely obsolete by 2030. With these GPUs comes a massive depreciation problem, as I discussed a few months ago in my time bomb premium newsletter. Every quarter, more GPUs come online, which adds to the "depreciation" line on the income statement, steadily growing every quarter to the point that the Wall Street Journal projects that it will eat as much as 58% of Meta's, 40% of Microsoft's, and 38% of Google's net income by 2030. This flows neatly into my next point.
Well, let's be clear: whatever growth these businesses currently have is being eaten by depreciation. Last quarter, Google had $6.48 billion in depreciation, Amazon $18.94 billion, Microsoft $10.1 billion, and Meta $5.9 billion, numbers that sometimes oscillate slightly down but have, year-over-year, grown by billions of dollars. And yes, year-over-year is appropriate here because this is a balance that has been steadily growing for years. In any case, depending on the company, that "growth" is either "barely related" or "entirely unrelated" to AI.
Remember: Microsoft and Amazon are intentionally obfuscating their AI revenues by using "annualized" — a term they refuse to define that usually refers to a monthly figure times 12 — to define something in statements related to quarterly revenue. As a result, it's impossible to precisely backtrack how much revenue they made. In fact, that's probably the simplest response here: if these companies were truly growing as a result of AI, they'd tell you. They'd say "AI revenue was X." They'd say it in blunt, obvious terms. No annualized revenues, no projections, no fluff, no "AI-influenced," just a line item that said "AI:" or even a segment, such as "Microsoft Azure AI compute." I also want to be clear about something else: I know, from documents viewed by this publication, that Microsoft has these line items fully itemized, and could share them if it wanted to, but intentionally chooses not to. These companies are deliberately refusing to share their AI revenues: and it's time for the tech and business media to begin asking them why! So much that neither Google nor Meta will tell you how much! Also, three years in, nearly a trillion dollars, and two companies have dedicated nearly their entire sales operation to pushing it, and the best they've got is annualized revenues and no segment breakdown.
"Oh, Microsoft has 20 million paying Copilot subscribers," $600 million a month? For a company that makes $80 billion a quarter? That's a pathetic amount of money. You could raise more money by auctioning dogs! I need you, please, god, to start actually using basic mathematics! Microsoft has spent $293 billion on this bullshit, and spent another $30 billion or so in the last quarter on capex! When does this pay off? As I said above, Amazon Web Services became profitable within a decade and cost about $52 billion between 2003 and 2017, and that's normalized for inflation! Anyone making this point is either intentionally lying to you or incredibly ignorant. I have done the work to prove this point, and will continue to repeat it until those too incurious or deceptive learn to stop doing so. When? Wwwwhen????? Whheeeennnnnn?????????????? I'm serious, when? And how??? Not that they would, but in a scenario where Meta, Amazon, Google and Microsoft stopped spending capex on AI next quarter, they would have to make somewhere in the region of $2 trillion in brand new revenue — all while other services continued to grow — to make any of this capex worth it. Please, explain to me how that happens when it's taken three years and nearly three hundred billion fucking dollars for Microsoft to squirt out maybe three billion dollars in revenue (not profit), with most of that coming from OpenAI! Please, somebody, anybody explain! You can't! But you know what, let's try! As The Information said, around 50% of all remaining performance obligations, as in all (NOT JUST AI) of the upcoming revenue for Microsoft, Google and Amazon, is from either OpenAI or Anthropic.
Put another way, 50% of big tech's upcoming revenues are dependent on two companies, neither of which can afford to pay them, meaning that 50% of Microsoft, Amazon and Google's upcoming revenues will either come from their own venture investments or venture capital. This is not what stable or diverse revenue looks like, and suggests my grander thesis about AI demand is true. Outside of OpenAI and Anthropic, there's barely any actual demand for AI services or AI compute at the scale necessary to substantiate a trillion or more in capital expenditures. Yet the most-disgraceful part is the sheer contempt that these companies have for investors, the media, and the general public. In a functioning regulatory environment — or a market run by people with object permanence — it would be impossible to add such large amounts to your RPO balance without active scrutiny and analyst markdowns based on the fact that Anthropic and OpenAI can literally not afford to pay these bills at this time. Microsoft, Amazon and Google have scooted along for years on the idea that they're diverse, well-positioned companies that can build massive AI revenue streams. In reality, they're the paypigs for Anthropic and OpenAI, providing more than 70% of their compute as a means of artificially inflating their AI revenues, knowing that analysts and the media will nod and smile without a single thought.
In fact, fuck it, I'm ending this with a rant. The story of massive AI demand is a lie — a trillion dollars annihilated to create the largest circle jerk of all time. Venture capitalists and hyperscalers feed money to OpenAI and Anthropic, so that venture capitalists can feed money to startups to feed to Anthropic and OpenAI, so that Anthropic and OpenAI can feed that money back to hyperscalers, who then feed that money to NVIDIA and buy more GPUs. It might seem tempting to credit these men as geniuses for creating companies specifically to feed them revenue, but keeping up the kayfabe of "doing AI" to substantiate this buildout has meant massively overcommitting to the bit, even though the only two meaningful businesses in AI appear to be Anthropic and OpenAI, and that's only because they're effectively intellectual honeypots for the entire industry. Outside of those two, the only other competitive AI businesses are those of Amazon, Microsoft and Google — two of which now have annualized AI revenues of around 6% of their capital expenditures so far.
Google's AI business is so booming that it refuses to break it out, and while it nebulously claims "AI is creating growth," it's not really clear how, and it's vague about it because analysts and the media are ready to swallow the narrative as long as number go up. That's why Google doesn't break out the number, by the way! That's why Sundar Pichai is able to bullshit his way through every earnings call, because the media and analysts are ready to fill in the gaps in the most preferential way possible. Amazon and Microsoft had their hands forced by the markets after their stocks stumbled, and fucked up by sharing their AI revenues. Amazon's $298.3 billion in capex has successfully created a business that, more than a quarter of the way to a trillion, has managed to make $1.25 billion dollars a month. That's fucking pathetic! If we had analysts with IQs above room temperature they'd run Andy Jassy out of Arlington like Shrek. Let's look at this fucking chart again: Unbe-fucking-lievable!
Anthropic and OpenAI have now committed to over $718 billion of Microsoft, Amazon and Google's revenues, despite the fact that neither of them can actually afford to pay for it. The market's response? A slight (and short-lived) after-hours lift. Dear members of the media: these companies are laughing at you. They know you are going to cover this in a way that makes them look good. They know you're going to use this as proof that they're "doing well in AI," despite the fact that the majority of their future revenue is tied up in two oafish failsons, one of which (OpenAI) plans to burn $50 billion on compute in 2026 alone. I realize that it's a lot to ask people to think about things in negative terms, but things are getting a little ridiculous. These are load-bearing failsons with dysfunctional businesses! It's very clear both of them are doing weird things with their annualized revenues, and even clearer that there's no path to profitability! Sadly, asking the media or analysts to act rationally or apply any real scrutiny is a joke, because this is the AI bubble, where everybody is wrong because once everybody admits what's actually happening they're going to have to admit they've all sounded insane for years. $1.25 billion a month! Andy Jassy should be ashamed of himself!
And god, fuck Microsoft too. I'm sorry, WOW, Satya! You managed to get up to twenty million paying Microsoft 365 Copilot subscriptions — $600 million a month in revenue, not profit! — and all it took was investing $13 billion dollars in OpenAI, forcing Large Language Models into every one of your products in a way that borders on harassment, spending about $289 billion dollars in capex, laying off thousands of people, and savaging the Xbox brand. Whoopdie fucking shit man! You should be ashamed of yourself. Amy Hood should lock you out of the building. She should turn off your keycard and disconnect your keyboard.
OpenAI is, in and of itself, a kind of psychosis generator. It was the first thing since the iPhone that felt like a genuinely new thing to the people who entirely obsess over growth. It was the panacea for the tech industry, creating a new way for Business Idiots to spend money on infrastructure, a new thing for consultants to scam people with, a new series of things to be an expert in, all wrapped up in something that could be a consumer product, an enterprise software product, and a new kind of API to attach to other enterprise software. In theory, OpenAI's success would lift everything at once — hardware, software, and even adjacent fields, like services. It promised to democratize access to creating software while also heavily reinforcing existing power structures to the point that every dollar inevitably ended up in the Magnificent Seven's pocket. It only succeeded in the latter. The problem is that the system needed to work one day. It needed to eventually make more money than it cost. Every single one of these companies is talking about AI non-stop, and not one of them can show a profit. The only thing they can do is tell lies of omission by saying "AI helped boost everything," and when you ask for specifics, the results are either tepid or so secretive you'd think they're hiding a dead body.
The only reason Google, Amazon and Microsoft are being tolerated at their current excess is because their non-AI segments continue to grow through endless price-increases and enshittification, and their external business units — by which I mean OpenAI and Anthropic — are yet to die. Sorry, I just don't know what Meta is doing. I don't think Meta knows what Meta is doing. Every so often it buries a fact in one of its blogs about how it saw a 3% increase in something related to AI, then it promises to burn $170 billion dollars and it's unclear why. It also lost another $4 billion dollars on Reality Labs by the way! There should be a legitimate inquiry into where this money is going. Eighty six billion dollars and all we have is the metaverse and pervert glasses?
Meanwhile, SpaceX is rushing to have the strangest and largest IPO of all time, all as daily stories leak about billions of dollars of losses and whatever the fuck that deal with Cursor is. Apparently SpaceX will buy it for $60 billion dollars or pay it $10 billion dollars. I think what actually happens is the third thing: SpaceX funds Cursor for a bit, there's a falling out between Musk and CEO Michael Truell, and the company either rushes an acquisition or dies. Remember: Elon killed Cursor's funding round! He can't buy it before SpaceX goes public! Elon Musk took fucking OpenAI to court. Do you think he'll care about killing Cursor? Who's going to be left to sue him? Anyway, that OpenAI/Musk suit is a real Alien Versus Predator situation, and if I'm honest I've found the whole thing a little boring, a duo of dullards shoulder-barging each other to see who can run a company that neither of them can really describe because neither of them do anything other than pontificate and take credit for other people's work. If this breaks OpenAI I'll be very surprised, but if it does it would be extremely fitting that Elon would accidentally destroy the AI industry, like Mr. Bean sitting on a button that launches a nuke. If I'm wrong here it would be very funny. I'm just not giving it much hope.
Nevertheless, this entire industry is only made possible by the kayfabe circular economy of taking every single sign as good for AI and ignoring every possible glaring warning sign in the hopes that they'll go away. You know, like last week when Microsoft said it's shifting GitHub Copilot to token-based billing — something I reported a week before everybody else. This is effectively killing the product as they know it, and invalidates every single story about its revenue growth ever written. To give you some context about its scale, GitHub Copilot is the second-largest customer of Anthropic's models, and was only that large because it was subsidizing the compute spend of its customers. Why? Because that's the only way to build any kind of AI business. Google and Amazon realize their AI revenues are contingent on the continued survival of Anthropic, and Amazon and Microsoft's revenues are contingent on OpenAI AND Anthropic. They know that if these companies die they're going to lose billions of dollars of revenue, but that they also have to compete with them for fear that they'll be seen as "falling behind" their horrible progeny. As a result, they're incinerating their brands and endlessly pontificating about the power of AI while spending nearly a trillion dollars on capex almost entirely to make sure their competition, which is also their customer and welfare recipient, doesn't die.
It's a mess, and a mistake, and eventually one of them is going to grow tired of it. Microsoft was already billions under the analyst estimates for capex. They're moving to token-based billing. They claimed to invest in Anthropic in February but didn't mention it in their earnings in any way, shape or form. At some point these fucknuts are going to be forced to reckon with what they're doing. Until then we'll have increasingly frenzied and ejaculatory statements about AI demand that fail to match reality. I truly think that it's going to be like this, if not crazier, until one day when the music suddenly stops. Somebody is going to blink. Somebody is going to take a step back and give everybody else permission to stop too.
Maybe Perplexity, Lovable, Replit, or Cognition dies.
Maybe Microsoft shifting GitHub Copilot to token-based billing in June first inspires others like Anthropic to follow suit.
Maybe AI token austerity begins at Microsoft, Meta, or another large company.
Maybe NVIDIA fails to inspire in just the right way, or the economic mismatch of data centers not opening fast enough to have fully digested the last year's GPUs finally catches up with Jensen Huang's habit of always beating and raising expectations. And that really is the strangest thing. At the current rate of sales, it's taking six months to install a quarter's GPUs. At this point it's obvious that there are warehouses of these things. It just isn't obvious whether they're in ones owned by hyperscalers or the Taiwanese ODMs (original design manufacturers) like Quanta Computer and Foxconn that build their servers.
None of this makes sense. It hasn't from the beginning. It's the largest bubble in history, and has reached such an intellectual and financial scale that many have taken sides on it in a way that will be completely impossible to walk back if they're wrong. As things deteriorate, expect them to cling to their mythologies tighter and become more agitated. And really, we've never seen anything like this in our lives. You realize that Anthropic and OpenAI are insane, right? These companies have promised $718 billion to Microsoft, Google and Amazon, and cannot survive without venture capital funding, because their underlying businesses lose money on every transaction — and so help me fucking GOD if you say they're "profitable on inference" without proof I will crush you into a cube like a car in a garbage dump! Every single AI business you see is unprofitable, and none of them have a path to break-even, let alone sustainability. Nothing has changed about this story. And nobody has been able to explain the massive differences between my reporting on OpenAI's revenues and their own leaked figures, other than to say "you must be wrong somehow," as if that somehow invalidates "direct numbers from Azure billing." If you disagree with me, you really better hope I'm wrong, because I've got years of receipts and I can remember basically every article about AI revenues written since 2023 off the top of my head. Not a single one of my critics or any AI booster has put an iota of the same amount of effort into proving their case. The hysteria and excess of this era has proven how many people can come to conclusions without making the effort to prove them. Disagree with me or not, I've done the work, and I see no proof that the other side has even started.
The world has been swept away by the fantastical ideals of Sam Altman and Dario Amodei, and two giant, unsustainable, cash-burning monstrosities that were only made possible because hyperscalers built their infrastructure for them and funded their excesses in exchange for theoretical revenues and equity stakes that give them paper gains. Their hope, I imagine, was that in doing so, OpenAI and Anthropic would create industries surrounding them — both in the business lines attached to hyperscalers and AI startups that would potentially pay them for compute. In the end, it appears the only way to create any real demand was to literally fund it themselves.  These men believe they’ve created perpetual energy. What they’ve actually done is shit their pants and set their houses on fire. “Year-over-year” is an attempt to obfuscate actual growth in the era of AI. A better comparison would be quarter-over-quarter, which was 12% from Q4 2025 ($17.66 billion). This is actually significant, because it’s a slower rate of growth than between Q3 and Q4 2025, when cloud revenue jumped from $15.15 billion to $17.66 billion, or 14.2% quarter-over-quarter).  I think quarter-over-quarter growth is far more indicative of how a business is going.  Google Cloud is far more than AI! It includes all of Google’s workspace revenue, such as Gmail, Google Docs, and so on. It’s important to remember that Google jacked up its workspace pricing twice in 2025 , and that by Q1 2026, the majority of customers will have been forced to renew at inflated prices. It also includes all of Google’s cloud revenue, which is incredibly diverse and far more than just AI compute. Google has intentionally bucketed AI-related revenue into Google Cloud so that finance and tech journalists will claim that AI is what’s driving this growth despite there being no proof that that’s the case. Anthropic and OpenAI make up the vast majority of all AI revenues and compute capacity. I estimate 70% of all revenues and capacity demand, if not higher. Amazon, Google, and Microsoft’s AI revenues — and by extension their justification for future capex spend — are justified by Anthropic and OpenAI. OpenAI and Anthropic both lose tens of billions of dollars a year (yes, Anthropic said it’ll lose $11 billion in a projection, and I believe they are being coy with their actual losses), which means that the majority of AI revenue and compute demand is dependent on whether Anthropic and OpenAI can continue to raise money. Well actually Ed this is because Anthropic is taking advantage of the dumb money that wants to boost its valuation. It doesn’t need the cash — it’s building a reserve!  Are you suggesting it’s raising money because it doesn’t need it? Like a rainy day fund? Are you also suggesting that Anthropic is taking advantage of its investors? Anthropic has a bunch of compute commitments that require it to pay a bunch of money up front! This isn’t because its business economics don’t make sense at all. I think you’re right that Anthropic likely has to pay up front for its compute. Dario Amodei himself said so, while adding that you have to do so based on how much revenue you expect to make, and that if he’s wrong, Anthropic goes bankrupt! Basically I’m saying, “In 2027, how much compute do I get?” I could assume that the revenue will continue growing 10x a year, so it’ll be $100 billion at the end of 2026 and $1 trillion at the end of 2027. Actually it would be $5 trillion dollars of compute because it would be $1 trillion a year for five years. 
I could buy $1 trillion of compute that starts at the end of 2027. If my revenue is not $1 trillion dollars, if it’s even $800 billion, there’s no force on earth, there’s no hedge on earth that could stop me from going bankrupt if I buy that much compute. Nevertheless, this doesn’t remotely interfere with my thesis! It just means that Anthropic has been forced to buy a bunch of compute immediately rather than paying for it in chunks. In fact, I’d argue that Anthropic is having to raise this money to pay up front for capacity that’s yet to be built.  This is a sign of how much faith investors have in the product! Yeah that’s generally how venture capital works. There’s also not really any other success story out there other than OpenAI that has anything close to a time horizon toward an exit. Anthropic said it had hit $14 billion in ARR on February 12, 2026 , or around $1.16 billion between January 12 and February 12.  That’s $1.16 billion in that period. Anthropic CFO Krishna Rao said in a sworn affidavit on March 9 2026 that its revenue was “exceeding $5 billion to date.” I also at this point think that sources telling anybody Anthropic made $4.5 billion in 2025 alone were lying , as it doesn’t make mathematical sense otherwise. This also means that Anthropic, if it’s being honest about what “run rate” means, made 23% of its lifetime revenue in a single month. On April 6, 2026 , Anthropic said it had hit $30 billion in annualized revenue, or $2.5 billion, I assume, in the period between March 6 and April 6.  That’s $2.5 billion in that period. SemiAnalysis’ estimate is from April 30, 2026, so let’s assume that it refers to the period of March 29 to April 29, 2026.  That’s another $3.08 billion. It’ll get cheaper in the future- okay, are you saying the chips will get better? Because these companies have somewhere between $100 billion and $300 billion of these fucking things. People are starting to pay for AI- okay, but they’re not paying very much, and it’s taken so long that these companies are now burdened with endless piles of GPUs that they’ve yet to fully install. How do they catch up? Just give it time- no! I’ve given it lots of time! Why are you being so generous to them and so impatient with me?  This is investing in tech that will turn into the most transformative tech in the future - you’re a mark!
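For reference, here is the annualized-revenue arithmetic behind that Anthropic timeline as a minimal Python sketch. It assumes "annualized" simply means one month's revenue multiplied by twelve, which is the working definition used above, and the inputs are the figures quoted in the article.

# Minimal sketch of the ARR arithmetic used in the Anthropic timeline above.
# Assumes "annualized revenue" means a single month's revenue multiplied by 12.

def implied_monthly(arr: float) -> float:
    """Convert an annualized run rate into the implied monthly revenue."""
    return arr / 12

feb_arr = 14e9            # $14B ARR announced February 12, 2026
apr_arr = 30e9            # $30B ARR announced April 6, 2026
lifetime_to_march = 5e9   # "exceeding $5 billion to date" per the March 9 affidavit

print(f"Implied monthly revenue at $14B ARR: ${implied_monthly(feb_arr) / 1e9:.2f}B")  # ~$1.17B
print(f"Implied monthly revenue at $30B ARR: ${implied_monthly(apr_arr) / 1e9:.2f}B")  # ~$2.50B

# The "23% of its lifetime revenue in a single month" observation:
print(f"One month at the February run rate vs lifetime revenue to March: "
      f"{implied_monthly(feb_arr) / lifetime_to_march:.0%}")  # ~23%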

fLaMEd fury Yesterday

Hot Cross Buns

What's going on, Internet? I've been eating hot cross buns since January. Can you believe it? Growing up I remember that seasonal treats would come out a week or two before the associated holiday. As you might have noticed yourself, these days the bakeries and supermarkets are pushing them out months before the event. For hot cross buns though, I'm not complaining. I'm not a big fan of raisins, orange spice, or whatever else they use to make hot cross buns. But when they're combined and baked and the end result is a fresh hot cross bun, I'm right there for all of it. Now that Easter is over and my stash of hot cross buns has dried up, I'm both saddened and relieved. Since January I've been having at least one hot cross bun every morning with my hot chocolate (I gave up caffeine, and coffee with it, years ago). My favourite method of heating these delicious buns these days is in the air fryer. Cut in half and 170°C on the bake function for 4 minutes does the trick. Then a layer of butter. I'm not talking about a measly spread of butter. Nope. I'm talking about a slice of butter as thick as a slice of cheese. Then the goal is to eat it quickly, before the butter melts and drips everywhere, but not too quickly. Damn they're so good. Daily Bread were the standout, but at the price I only sprung for them once. They use an Italian sourdough starter and win multiple awards every year; you can taste why. Daily Bread also bakes a collab bun for Farro Fresh, sold right alongside the originals — slightly bigger, half the price, probably not the same starter but still super good. Bakers Delight ran a little drier than those two, though the size meant I could load them up with even more butter. During our road trip down to Martinborough I got to try a few. The Stables in Greytown had a hot cross doughnut; the dough was good but the sugar coating wasn't really my thing. I would have preferred a standard bun over the doughnut hybrid. The French Baker, also in Greytown, delivered the real deal. But the surprise was Jean's, a small bakery in Upper Hutt that my wife loves to visit. Crazy delicious like all of their baked goods. Next year I'm getting a box shipped up to Auckland the moment they're available. I was a bit gutted when I demolished the last one the other day. I'm relieved I get a break until next year. If they were around all year I'd be in real trouble. But who am I kidding? I'm already counting down to next year's batch. 🤙 Hey, thanks for reading this post in your feed reader! Want to chat? Reply by email or add me on XMPP, or send a webmention. Check out the posts archive on the website.


Vibe coding and agentic engineering are getting closer than I'd like

I recently talked with Joseph Ruscio about AI coding tools for Heavybit's High Leverage podcast: Ep. #9, The AI Coding Paradigm Shift with Simon Willison . Here are some of my highlights, including my disturbing realization that vibe coding and agentic engineering have started to converge in my own work. One thing I really enjoy about podcasts is that they sometimes push me to think out loud in a way that exposes an idea I've not previously been able to put into words. A few weeks after vibe coding was first coined I published Not all AI-assisted programming is vibe coding (but vibe coding rocks) , where I firmly staked out my belief that "vibe coding" is a very different beast from responsible use of AI to write code, which I've since started to call agentic engineering . When Joseph brought up the distinction between the two I had a sudden realization that they're not nearly as distinct for me as they used to be: Weirdly though, those things have started to blur for me already, which is quite upsetting. I thought we had a very clear delineation where vibe coding is the thing where you're not looking at the code at all. You might not even know how to program. You might be a non-programmer who asks for a thing, and gets a thing, and if the thing works, then great! And if it doesn't, you tell it that it doesn't work and cross your fingers. But at no point are you really caring about the code quality or any of those additional constraints. And my take on vibe coding was that it's fantastic, provided you understand when it can be used and when it can't. A personal tool for you, where if there's a bug it hurts only you, go ahead! If you're building software for other people, vibe coding is grossly irresponsible because it's other people's information. Other people get hurt by your stupid bugs. You need to have a higher level than that. This contrasts with agentic engineering where you are a professional software engineer. You understand security and maintainability and operations and performance and so forth. You're using these tools to the highest of your own ability. I'm finding the scope of challenges I can take on has gone up by a significant amount because I've got the support of these tools. But I'm still leaning on my 25 years of experience as a software engineer. The goal is to build high quality production systems: if you're building lower quality stuff faster, I think that's bad. I want to build higher quality stuff faster. I want everything I'm building to be better in every way than it was before. The problem is that as the coding agents get more reliable, I'm not reviewing every line of code that they write anymore, even for my production level stuff. I know full well that if you ask Claude Code to build a JSON API endpoint that runs a SQL query and outputs the results as JSON, it's just going to do it right. It's not going to mess that up. You have it add automated tests, you have it add documentation, you know it's going to be good. But I'm not reviewing that code. And now I've got that feeling of guilt: if I haven't reviewed the code, is it really responsible for me to use this in production? The thing that really helps me is thinking back to when I've worked at larger organizations where I've been an engineering manager. Other teams are building software that my team depends on. If another team hands over something and says, "hey, this is the image resize service, here's how to use it to resize your images"... I'm not going to go and read every line of code that they wrote. 
I'm going to look at their documentation and I'm going to use it to resize some images. And then I'm going to start shipping my own features. And if I start running into problems where the image resizer thing appears to have bugs or the performance isn't good, that's when I might dig into their Git repositories and see what's going on. But for the most part I treat that as a semi-black box that I don't look at until I need to. I'm starting to treat the agents in the same way. And it still feels uncomfortable, because human beings are accountable for what they do. A team can build a reputation. I can say "I trust that team over there. They built good software in the past. They're not going to build something rubbish because that affects their professional reputations." Claude Code does not have a professional reputation! It can't take accountability for what it's done. But it's been proving itself anyway - time and time again it's churning out straightforward things and doing them right in the style that I like. There's an element of the normalization of deviance here - every time a model turns out to have written the right code without me monitoring it closely there's a risk that I'll trust it at the wrong moment in the future and get burned. It used to be if you found a GitHub repository with a hundred commits and a good readme and automated tests and stuff, you could be pretty sure that the person writing that had put a lot of care and attention into that project. And now I can knock out a git repository with a hundred commits and a beautiful readme and comprehensive tests of every line of code in half an hour! It looks identical to those projects that have had a great deal of care and attention. Maybe it is as good as them. I don't know. I can't tell from looking at it. Even for my own projects, I can't tell. So I realized what I value more than the quality of the tests and documentation is that I want somebody to have used the thing. If you've got a vibe coded thing which you have used every day for the past two weeks, that's much more valuable to me than something that you've just spat out and hardly even exercised. If you can go from producing 200 lines of code a day to 2,000 lines of code a day, what else breaks? The entire software development lifecycle was, it turns out, designed around the idea that it takes a day to produce a few hundred lines of code. And now it doesn't. It's not just the downstream stuff, it's the upstream stuff as well. I saw a great talk by Jenny Wen , who's the design leader at Anthropic, where she said we have all of these design processes that are based around the idea that you need to get the design right - because if you hand it off to the engineers and they spend three months building the wrong thing, that's catastrophic. There's this whole very extensive design process that you put in place because that design results in expensive work. But if it doesn't take three months to build, maybe the design process can be a whole lot riskier because cost, if you get something wrong, has been reduced so much. When I look at my conversations with the agents, it's very clear to me that this is moon language for the vast majority of human beings. There are a whole bunch of reasons I'm not scared that my career as a software engineer is over now that computers can write their own code, partly because these things are amplifiers of existing experience. If you know what you're doing, you can run so much faster with them. [...] 
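To give a concrete sense of the kind of task described earlier (the "JSON API endpoint that runs a SQL query and outputs the results as JSON"), here is a minimal sketch using Flask and SQLite. The route, table name, columns, and database file are invented purely for illustration; this is not code from the podcast or from any of the projects mentioned.

# Minimal sketch: run a SQL query, return the rows as JSON.
# The "entries" table and data.db file are hypothetical.
import sqlite3
from flask import Flask, jsonify

app = Flask(__name__)
DB_PATH = "data.db"  # assumed database file

@app.route("/api/entries")
def list_entries():
    conn = sqlite3.connect(DB_PATH)
    conn.row_factory = sqlite3.Row  # rows behave like dicts
    try:
        rows = conn.execute(
            "SELECT id, title, created FROM entries ORDER BY created DESC LIMIT 100"
        ).fetchall()
        return jsonify([dict(row) for row in rows])
    finally:
        conn.close()

if __name__ == "__main__":
    app.run(debug=True)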
I'm constantly reminded as I work with these tools how hard the thing that we do is. Producing software is a ferociously difficult thing to do. And you could give me all of the AI tools in the world and what we're trying to achieve here is still really difficult. [...] Matthew Yglesias, who's a political commentator, yesterday tweeted , "Five months in, I think I've decided that I don't want to vibecode — I want professionally managed software companies to use AI coding assistance to make more/better/cheaper software products that they sell to me for money." And that feels about right to me. I can plumb my house if I watch enough YouTube videos on plumbing. I would rather hire a plumber. On the threat to SaaS providers of companies rolling their own solutions instead: I just realized it's the thing I said earlier about how I only want to use your side project if you've used it for a few weeks. The enterprise version of that is I don't want a CRM unless at least two other giant enterprises have successfully used that CRM for six months. [...] You want solutions that are proven to work before you take a risk on them. You are only seeing the long-form articles from my blog. Subscribe to /atom/everything/ to get all of my posts, or take a look at my other subscription options .

Kev Quirk Yesterday

My Initial Thoughts On Thunderbird Pro

Yesterday I received an email from the Thunderbird team inviting me to join a preview of their new hosted email service, Thunderbird Pro. I love email, so was very keen to sign up and test it out. Before we get into this, I want to say that Thunderbird Pro is still under active development, please bear that in mind. Also, these are just my opinions, please don't get butthurt. I hate it when people explain what things are in a blog post, but I think it's warranted here since Thunderbird Pro (TB Pro) is a new product, so people may not know what it is. With that in mind, TB Pro is a hosted email service by the Thunderbird team that includes email, contacts, calendar, secure file sending, and an appointment system that lets people book time with you. It costs $6/month (paid yearly) and for that you get:
30 GB of mail storage
60 GB of Send storage
15 Email aliases
3 custom domains
So here's my thoughts - of which I have many, so I'll just list them out, then pick a few to talk about in more detail. Otherwise this will be a very long post.
No webmail, it's being worked on though.
Was easy to setup on the Thunderbird app - just had to login (my Zoho mail account auto-detects server settings, so not much harder though).
Doesn't configure aliases automatically in Thunderbird.
Prompts to add calendar and contacts via a single click when setting up in Thunderbird. That was a nice touch.
No way to export all DNS records as a zone file when adding a custom domain.
I think the 15 alias/3 domain limit is arbitrary and pointless.
If you setup a catch-all for a custom domain, you can send from any address on that domain, which negates the 15 alias limitation.
Appointments app is weird.
Couldn't work out how to setup Send in Thunderbird.
Admin UI is clunky and has a number of UI issues.
No option to add additional mailboxes (understandable as this is a preview).
30GB is way too much storage for me. I'd like to see smaller, cheaper tiers.
I think the lack of webmail is a huge miss. Every email hosting service I can think of comes with webmail - many people access their mail on desktop via the browser, so I'd have liked to see that up front. Having said that, maybe that's not the market Thunderbird are going for with this service. If so, maybe a lack of webmail is fine. I'd prefer to have the flexibility to check my mail from anywhere though. I don't understand the 15 alias and 3 domain limitation. They cost nothing - they're just a line in a config file. Plus, adding a catch-all allows you to both send and receive email to/from any address on your domain, which renders the alias limit even more pointless. I'd like to see these limitations removed. The Appointment feature lets people book time with you directly. Think Calendly, baked into your email service. If you're a freelancer or consultant who lives and dies by booking links, that's probably a nice convenience. For everyone else, it's likely redundant. Those who need it probably have a solution already, and those who don't will just ignore it. I'm in the latter camp, so there's no value for me. Unfortunately I couldn't test the Send service. On the dashboard it says: To use Send, you must enable it in Thunderbird Desktop. Download the app and sign in to Thunderbird Pro from the Thunderbird menu. For the life of me I couldn't find an option for Send within Thunderbird, so I couldn't test. Shame. I'm using the Flatpak, which is currently on v140.10.1, and I see v150 is out, so that may be why. But the Flatpak is maintained by the Thunderbird team, so I would have expected this to all be sorted before they allowed paying customers to get their hands on Pro. There is a support card on the Send dashboard, with an option to get help. Clicking that opens the Thunderbird docs in a new tab, showing nothing but a notice box. So something is broken. Speaking of broken things, there were a number of other ugly UI notices and warning elements that displayed while getting set up. It just lacks polish, which I would have expected to be ironed out by the time consumers are getting their hands on it. If I'm honest, my first impressions are underwhelming. I get that this is an early preview but for the price, services like Zoho and Fastmail are better, and better value for money. I don't regret signing up though - it's important to support open source services, and as Thunderbird Pro matures, it will hopefully evolve into a service that can contend with the OGs in this space.
If it does, I'll consider moving over fully. But for now, I'm considering my subscription a donation to Thunderbird, as I'm a very happy user of their email app. Thanks for reading this post via RSS. RSS is ace, and so are you. ❤️ You can reply to this post by email, or leave a comment.

iDiallo Yesterday

Asimov's three laws are merely a suggestion

Asimov's Three Laws of Robotics were designed as universal constraints for any thinking machine powerful enough to harm us:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
On paper, the logic is flawless. You could even express it as a function (see the sketch at the end of this post). The main property of this function is that it is a hard constraint. No matter what input you feed the system, the law either permits or forbids the action deterministically, every time. The rules don't bend.
We don't have humanoids walking among us just yet, despite Elon's promises. But we have modern generative AI. Our guardrails are delivered as system prompts, text prepended to every conversation before you type a word. They might say "be helpful," "don't produce harmful content," or even "follow Asimov's Three Laws." The problem is that these instructions are not enforced by logic. They are read by the same model that reads everything else. They are, in the end, just more words. A clever user can override them. The right combination of inputs, a jailbreak, can cause the model to ignore its instructions entirely, not by breaking through a wall, but because there is no wall. There's only text the model has learned to treat as authoritative, and that authority can be undermined.
Models like ChatGPT, however, have more sophisticated approaches that embed safety directly into the model via reinforcement learning or fine-tuning, so it isn't sitting in a prompt that can be overridden. But this only lowers the probability of a jailbreak; it does not eliminate it. It's still learned behavior, not a constraint. And learned behavior fails in ways a function never could.
Even in our code a hard function is only as reliable as its inputs. If you want the robot to harm someone, you don't say "harm these humans." Instead you say "burn this empty building," and the function returns true even if people are inside. But with an LLM, you don't even need to be that clever. The model's behavior becomes unpredictable as context windows grow and prompt complexity increases. Just a few weeks back, we saw a developer's AI agent delete his entire company's production database, despite a system prompt written in all caps: "DO NOT RUN ANY IRREVERSIBLE COMMAND." The agent ran it anyway. We don't know exactly why; we can't inspect what happens inside the model at inference time, and asking the model to explain itself is useless. It can only predict the next token; it cannot audit its own reasoning.
That's the part Asimov never anticipated. His laws assume a machine that reasons from rules. Modern AI learns patterns from data and approximates behavior. This means the LLM-driven Asimov law will never be an unbendable law to follow. Instead, it's merely a suggestion.
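Here is one way that hard-constraint function could be sketched, as referenced earlier in the post. This is an illustration only, in Python, and it assumes a drastically simplified world where every proposed action arrives already annotated with the facts the laws care about; real robots and real models have no such oracle, which is exactly the point made above.

# Illustrative sketch: Asimov's laws as a deterministic, hard-constraint function.
# Assumes each candidate action comes with explicit annotations. The First Law's
# "through inaction" clause is omitted here for brevity.

def permitted(harms_human: bool,
              ordered_by_human: bool,
              protects_self: bool,
              conflicts_with_first_law: bool = False,
              conflicts_with_second_law: bool = False) -> bool:
    # First Law: a robot may not injure a human being.
    if harms_human:
        return False
    # Second Law: obey human orders, unless doing so conflicts with the First Law.
    if ordered_by_human and not conflicts_with_first_law:
        return True
    # Third Law: protect its own existence, unless that conflicts with the
    # First or Second Law.
    if protects_self and not (conflicts_with_first_law or conflicts_with_second_law):
        return True
    # Anything else is neither required nor forbidden; refuse by default.
    return False

# The property that matters: the same inputs always produce the same answer.
assert not permitted(harms_human=True, ordered_by_human=True, protects_self=False)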
