Latest Posts (20 found)

Ideological Resistance to Patents, Followed by Reluctant Pragmatism

Naresh Jain has long been uncomfortable with software patents. But a direct experience of patent aggression, together with the practical constraints faced by startups, led him to resort to defensive patenting as a shield in this asymmetric legal environment.

0 views

An Interview with Gregory Allen About Anthropic and the U.S. Government

An interview with Gregory Allen about Anthropic's dispute with the U.S. government.

0 views
Evan Schwartz Yesterday

Scour - February Update

Hi friends,

In February, Scour scoured 647,139 posts from 17,766 feeds (1,211 were newly added). Also, 917 new users signed up, so welcome everyone who just joined!

Here's what's new in the product:

- If you subscribe to specific feeds (as opposed to scouring all of them), Scour can now infer topics you might be interested in from them. You can click the link that says "Suggest from my feeds" on the Interests page. Thank you to the anonymous user who requested this!
- The onboarding experience is simpler. Instead of typing out three interests, you can now describe yourself and your interests in free-form text, and Scour extracts a set of interests from what you write. Thank you to everyone who let me know that they were a little confused by the onboarding process.
- I made two subtle changes to the ranking algorithm. First, the scoring algorithm now ranks posts by how well they match your closest interest and gives a slight boost if the post matches multiple interests. That was the intended design from the start, but I realized that multiple weaker matches were pulling scores down rather than boosting them. Second, I finally retired the machine learning text quality classifier model that Scour had been using. The final straw was when a blog post I had written (and worked hard on!) wasn't showing up on Scour; the model had classified it as low quality 😤. I had known for a while that what the model was optimizing for was somewhat orthogonal to my idea of text quality, but that was it. For the moment, Scour relies on a large domain blocklist (of just under 1 million domains) to keep low-quality content and spam out of your feed. I'm also investigating other ways of assessing quality without relying on social signals, but more on that to come in the future.

I've always been striving to make Scour fast, and it got much faster this past month. My feed, which compares about 35,000 posts against 575 interests, now loads in around 50 milliseconds.
Even comparing all 600,000+ posts from the last month across all feeds takes only 180 milliseconds. This graph shows the 99th percentile latency (the slowest requests) dropping from the occasional 10 seconds down to under 400 milliseconds (lower is better).

For those interested in the technical details, this speedup came from two changes. First, I switched from scanning through post embeddings streamed from SQLite (which was already quite fast because the data is local) to keeping all the relevant details in memory. The in-memory snapshot is rebuilt every 15 minutes, when the scraper finishes polling all of the feeds for new content. This change resulted in the very nice combination of much higher performance and lower memory usage, because SQLite connections have independent caches. The second change came from another round of optimization on the library I use to compute the Hamming distance between each post's embedding and the embeddings of each of your interests. You can read more about this in an upcoming blog post, but I was able to speed up the comparisons by around another 40x, so Scour can now do around 1.6 billion comparisons per second. Together, these changes make loading the feed feel instantaneous, even though your whole feed is ranked on the fly when you load the page.

Here were some of my favorite posts that I found on Scour in February. Happy Scouring!

Scour is built on vector embeddings, so I'm especially excited when someone releases a new and promising-sounding embedding model. I get particularly excited by those that are explicitly trained to support binary quantization, like this one from Perplexity: pplx-embed: State-of-the-Art Embedding Models for Web-Scale Retrieval. I also spend a fair amount of time thinking about optimizing Rust code, especially using SIMD, so this was an interesting write-up from TurboPuffer: Rust zero-cost abstractions vs. SIMD.
This was an interesting write-up comparing what different coding agents do under the hood: I Intercepted 3,177 API Calls Across 4 AI Coding Tools. Here's What's Actually Filling Your Context Window. And finally, this one is on a very different topic but has some nice animations that demonstrate why boarding airplanes is slow: The Fastest Way to Board an Airplane.
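The closest-interest ranking over binary embeddings described above can be sketched in a few lines of Python. This is my own illustration, not Scour's actual Rust code; the function names and toy 8-bit embeddings are made up for demonstration:

```python
# Sketch of ranking posts by Hamming distance between bit-packed
# binary-quantized embeddings (the approach the post describes).

def hamming(a: bytes, b: bytes) -> int:
    """Number of differing bits between two packed binary embeddings."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def rank(posts: dict[str, bytes], interests: list[bytes]) -> list[str]:
    """Score each post by its closest interest (smaller distance = better match)."""
    return sorted(posts, key=lambda p: min(hamming(posts[p], i) for i in interests))

# Toy 8-bit embeddings:
posts = {"rust-simd": b"\xf0", "gardening": b"\x0f"}
interests = [b"\xf1"]  # one bit away from "rust-simd"
print(rank(posts, interests))  # → ['rust-simd', 'gardening']
```

A production version would replace the per-byte popcount loop with SIMD, which is where the 40x speedup in the post comes from.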

0 views

Something is afoot in the land of Qwen

I'm behind on writing about Qwen 3.5, a truly remarkable family of open weight models released by Alibaba's Qwen team over the past few weeks. I'm hoping that the 3.5 family doesn't turn out to be Qwen's swan song, seeing as that team has had some very high profile departures in the past 24 hours.

It all started with this tweet from Junyang Lin ( @JustinLin610 ): "me stepping down. bye my beloved qwen."

Junyang Lin was the lead researcher building Qwen, and was key to releasing their open weight models from 2024 onwards. As far as I can tell, a trigger for this resignation was a re-org within Alibaba in which a new researcher hired from Google's Gemini team was put in charge of Qwen, but I've not confirmed that detail. More information is available in this article from 36kr.com. Here's Wikipedia on 36Kr, confirming that it's a credible media source established in 2010 with a good track record reporting on the Chinese technology industry. The article is in Chinese - here are some quotes translated via Google Translate:

At approximately 1:00 PM Beijing time on March 4th, Tongyi Lab held an emergency All Hands meeting, where Alibaba Group CEO Wu Yongming spoke frankly to Qwen employees. Twelve hours earlier (at 0:11 AM Beijing time on March 4th), Lin Junyang, the technical lead for Alibaba's Qwen large models, had suddenly announced his resignation on X. Lin Junyang was a key figure in promoting Alibaba's open-source AI models and one of Alibaba's youngest P10 employees. Amidst the industry uproar, many members of the Qwen team were also unable to accept the sudden departure of their team's key figure. "Given far fewer resources than competitors, Junyang's leadership is one of the core factors in achieving today's results," multiple Qwen members told 36Kr. [...] Regarding Lin Junyang's whereabouts, no new conclusions were reached at the meeting. However, around 2 PM, Lin Junyang posted again on his WeChat Moments, stating, "Brothers of Qwen, continue as originally planned, no problem," without explicitly confirming whether he would return. [...]

That piece also lists several other key members who have apparently resigned:

With Lin Junyang's departure, several other Qwen members also announced their departures, including core leaders responsible for various sub-areas of the Qwen models, such as:

- Binyuan Hui: led Qwen code development, principal of the Qwen-Coder series models, responsible for the entire agent training process from pre-training to post-training, and recently involved in robotics research.
- Bowen Yu: led Qwen post-training research, graduated from the University of Chinese Academy of Sciences, leading the development of the Qwen-Instruct series models.
- Kaixin Li: core contributor to Qwen 3.5/VL/Coder, PhD from the National University of Singapore.

Besides the aforementioned individuals, many young researchers also resigned on the same day.

Based on the above, it looks to me like everything is still very much up in the air. The presence of Alibaba's CEO at the "emergency All Hands meeting" suggests that the company understands the significance of these resignations and may yet retain some of the departing talent.

This story hits particularly hard right now because the Qwen 3.5 models appear to be exceptionally good. I've not spent enough time with them yet, but the scale of the new model family is impressive. They started with Qwen3.5-397B-A17B on February 17th - an 807GB model - and then followed with a flurry of smaller siblings in 122B, 35B, 27B, 9B, 4B, 2B, and 0.8B sizes. I'm hearing positive noises about the 27B and 35B models for coding tasks that still fit on a 32GB/64GB Mac, and I've tried the 9B, 4B and 2B models and found them to be notably effective considering their tiny sizes.
That 2B model is just 4.57GB - or as small as 1.27GB quantized - and is a full reasoning and multi-modal (vision) model. It would be a real tragedy if the Qwen team were to disband now, given their proven track record in continuing to find new ways to get high quality results out of smaller and smaller models. If those core Qwen team members either start something new or join another research lab I'm excited to see what they do next. You are only seeing the long-form articles from my blog. Subscribe to /atom/everything/ to get all of my posts, or take a look at my other subscription options .

0 views
David Bushell Yesterday

Bunny.net shared storage zones

Whilst moving projects off Cloudflare and migrating to Bunny I discovered a neat ‘Bunny hack’ to make life easier. I like to explicitly say “no” to AI bots using AI robots.txt † . Updating this file across multiple websites is tedious. With Bunny it’s possible to use a single file. † I’m no fool, I know the AI industry has a consent problem, but the principle matters.

My solution was to create a new storage zone as a single source of truth. In the screenshot above I’ve uploaded my common robots.txt file to its own storage zone. This zone doesn’t need any “pull zone” (CDN) connected; the file doesn’t need to be publicly accessible by itself here. With that ready, I next visited each pull zone that will share the file. Under “CDN > Edge rules” in the menu I added the following rule. I chose the action “Override Origin: Storage Zone” and selected the new shared zone. Under conditions I added a “Request URL” match for the robots.txt path. Using a wildcard makes it easier to copy & paste. I tried dynamic variables but they don’t work for conditions. I added an identical edge rule for all websites I want to share the robots.txt file. Finally, I made sure the CDN cache was purged for those URLs.

This technique is useful for other shared assets like a favicon, for example. Neat, right? One downside to this approach is vendor lock-in. If or when Bunny hops the shark and I migrate elsewhere, I must find a new solution. My use case for robots.txt is not critical to my websites functioning, so it’s fine if I forget. Thanks for reading! Follow me on Mastodon and Bluesky . Subscribe to my Blog and Notes or Combined feeds.
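For reference, a shared robots.txt of the kind described might look like the following. The bot names here are illustrative examples (two well-known AI crawlers), not the author's actual blocklist:

```text
# Shared robots.txt served from the single storage zone.
# Block specific AI crawlers (example entries; real lists are longer):
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

# Everyone else may crawl normally:
User-agent: *
Disallow:
```

Because every pull zone's edge rule points at the same storage-zone file, editing this one file updates the policy for all sites at once.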

0 views
iDiallo Yesterday

Interruption-Driven Development

I have a hard time listening to music while working. I know a lot of people do it, but whenever I need to focus on a problem, I have to hunt down the tab playing music and pause it. And yet I still wear my headphones. Not to listen to anything, but to signal to whoever is approaching my desk that I am working. It doesn't deter everyone, but it buys me the time I need to stay focused a little longer.

I don't mind having a conversation with coworkers. What I mind is the interruption itself, especially when I'm in the middle of a task. Sometimes I'm debugging an issue in a legacy application: building a mental model of the workflow, reading a comment that describes an exception, following a function declaration. Then, right when I'm on the verge of the next clue, I hear a voice: "Hey! What's going on? I haven't seen you in a while. What have you been up to?" The conversation is never long. But when it's over, my thoughts are gone. Where was I? Right, the function declaration. But where was it being called? What was that exception the comment described? Where did I even see that comment? I have to retrace every step just to rebuild the mental state I was in before I can move forward again.

Working remotely helps, to a point. Interruptions via Slack can be muted until I'm ready to respond. But remote work isn't immune. You're still expected to be in meetings. As a lead, I'm frequently pulled into calls because "everything is on fire." Often, my presence isn't to put out the fire; it's to hold someone's hand. An hour later, I can barely remember what I was working on.

The cost of interruption falls entirely on the person being interrupted. You lose your place, your focus, and eventually your ability to finish anything on time. For the person doing the interrupting, though, it's often a positive experience. The manager who constantly pulls the team into status updates feels productive. They're in the loop, they're present, they're on top of things.
They schedule daily standups, attend every scrum ceremony, and expect developers to translate their work-in-progress into business-friendly language on demand. Meanwhile, the developer spends their day sitting in calls, reassuring, explaining, and planning, but never actually building anything. When they push back, the manager doesn't cancel the meetings; instead, they trim them from 30 minutes to 15. It feels like progress. But the length of the meetings was never the problem. Three meetings a day means three interruptions, regardless of how short they are.

Being constantly interrupted at work reminds me of being in a hospital. Doctors prescribe rest, but hospitals are among the worst places to actually get any. Before our kids were born, my wife spent close to a month in the hospital. I had a small corner of the room, a chair and a desk, where I'd work on my laptop by her side. Every 20 minutes, the door would swing open, a nurse would bustle in and out, and the door would be left wide open behind her. It didn't matter that the doctor had ordered rest. Her sleep was interrupted every single time.

That's what interruption-driven development looks like in practice. The work requires uninterrupted effort to actually happen. You can have the right tools, the right team, the right intentions, and still produce nothing. The work environment itself is working against you. My headphones might keep those eager to converse at bay. But what we really need is time to get work done without constant interruption. It should be part of the software development lifecycle.

0 views
Stratechery Yesterday

Anthropic’s Skyrocketing Revenue, A Contract Compromise?, Nvidia Earnings

Anthropic's enterprise business is reaching escape velocity, which increases the importance of finding a compromise with the government. Then, agents dramatically increase demand for Nvidia chips, even if they threaten software.

0 views
Rik Huijzer Yesterday

Granting Explicit ACL Access to a File on Linux

Say there is a file, `openui/open-webui/webui.db`, and you want to have write access to it without using `sudo`. The most reliable way is to not use various `chown` and `chmod` commands, but instead use `setfacl`, which is available on Debian via `apt install acl`. To first check the permissions, run `namei`:

```text
$ namei -mo openui/open-webui/webui.db
f: openui/open-webui/webui.db
 drwxrwxr-x rik rik openui
 drwxrwxr-x 777 rik open-webui
                    webui.db - Permission denied
```

It looks like permissions to enter the `openui/open-webui` dir are missing. This can be fixed by...
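The excerpt cuts off here, but the fix it points toward presumably uses `setfacl` on the blocking directory and the file. A sketch of what that could look like for this layout (my assumption, not the post's actual commands; the one-time grant needs root, after which `rik` can write the file without `sudo`):

```text
# Grant rik traverse ("x") on the directory owned by another user,
# then read/write on the file itself:
$ sudo setfacl -m u:rik:x openui/open-webui
$ sudo setfacl -m u:rik:rw openui/open-webui/webui.db

# Verify; the output should now include an entry like "user:rik:rw-":
$ getfacl openui/open-webui/webui.db
```

Unlike `chmod 777`, this grants access to one specific user without loosening permissions for everyone else.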

0 views
Brain Baking Yesterday

Favourites of February 2026

A sudden burst of Japanese cherry flowers sparkling in the sun brings much-needed lightheartedness into our late February lives. Before we know it, the garden will be littered with these little pink petals, and the very short blossom season will be behind us. Our cherry tree has always had a tendency to be early, eager, and then run out of steam. It’s weird to have temperatures reach almost twenty degrees Celsius while a few weeks ago it was still freezing. No wonder the tree is confused. A deep blue sky overlooking the cherry blossom in our garden. In case you were wondering: no, this weather is not normal: it’s yet another noticeable temperature spike. Our local (retired) weatherman Frank explains the spikes and provides evidence of upward rather than downward temperature peaks (in Dutch). At this point, I’m just grateful for the much-needed sunshine. Previous month: January 2026 .

I’m giving up on Ruffy. It’s just unplayable on the Switch, which is a damn shame, as the N64 throwback collect-a-thon 3D platformer with rough edges looks like the perfect fit for the Switch—and it should be. It’s far from a demanding game, so the only conclusion I can draw is that it was poorly optimized for my platform of choice. And I bought the Limited Run Games physical version… Instead, I’ve turned to Gobliins 6 , a quirky French adventure game made by just one guy. It has equally frustrating moments and rough edges, but I can more easily forgive it for its faults: it’s Gobliins! The fact that after 34 years (!!), there’s an official sequel to Gobliins 2: The Prince Buffoon is just crazy. I have fond memories of that game as I used to play it together with my dad on his brand new 486. I didn’t understand English, nor was I able to solve most time-based puzzles, but the Gobliins exposure got permanently burned into my brain—so much so that its pixel art became a basis for my retro blog .
Even though it’s advertised to be a Windows-only game, ScummVM has got you covered: In the Fox Bar just after Fingus reunites with Winkle. If Gob6 sells well, Pierre might go ahead and make Gob7 a direct sequel to Goblins Quest 3 . Fingus—err, fingers crossed for Blount’s return! Related topics: / metapost / By Wouter Groeneveld on 4 March 2026.  Reply via email . Let’s start with more Gobliins stuff: Michael Klamerus summarized the history of the games to bring you up to speed. Mark self-hosted a book library tool called Booklore that links to your Kobo account. Michał Sapka nuances the “ I hate genAI ” screams of late. Elmine Wijnia writes in De Stadsbron (in Dutch) about OpenStreetMap and wonders whether we can finally get rid of Google Maps. Space Panda continues fighting against bots on their site . It’s fun to see the bot honey pots working, but aren’t we now wasting even more resources doing nothing? Arjan van der Gaag shares how he uses snippets in Emacs with Yasnippet . I think I’m going to migrate to Tempel.el instead, but that’s for another story. There’s an interesting thread on ResetERA about old games that have yet to be replicated . Someone mentioned Magic the Gathering: Shandalar ! Jeff Kaufman shared a photo of two chairs placed on a snowy parking space . Apparently, that’s customary to “reserve” your spot. I haven’t seen such a ridiculously selfish act in a while. Is this a typical USA thing? Wolfgang Ziegler continues his Game Boy modding spree, this time with an IPS screen mod . The result looks stunning! Hamilton Greene shares his adventure with programming languages and talks about the “missing language”. I don’t agree with his stance but it’s interesting nonetheless. Scott Nesbitt writes on an old Singer desk ! Greg Newman organized the Emacs writing carnival challenge and shares links to others’ writing experiences with their favourite editor (25 entries). Greg also designed the Org-mode unicorn logo!
Speaking of which: James Dyer shows his streamlined Eshell configuration that inspired me to hack together my own. To be continued in a future blog post, whether you’ll like it or not. Markus Dosch shares his journey from Bash to Zsh and now Fish . I’m slowly but surely getting fed up with Zsh and all those semi-required plugins, so I might switch to Fish as well. But actually… I switched to Eshell. You didn’t see that coming, did you? Henrique Dias redesigned his website and the result looks very good, congrats! I especially like the fact that the new theme takes advantage of wide screens (note to self). Michael Stapelberg tried out Wayland and concludes that it’s still not ready yet. X11 is not dead yet. I found the Lockfile Explorer documentation on pnpm lockfiles to be very thorough and insightful. Feishin is a modern rewrite of Sonixd, a Subsonic-compatible music desktop client that looks promising. I’ve been a Navidrome user for five years now but am looking for a good client that supports offline playback. It doesn’t (yet) . Related: the Symfonium Android app that does do caching. I’m using Substreamer for that and it works well enough. scrcpy is a tiny screen sharing tool that I use in classes to project my Android screen. Handy! Another tool for presenting: keycastr helped me teach students how to use shortcuts. I might have already shared this, but you should replace pip with uv: it’s +10x faster and can also manage your project’s . Oh, and in case you haven’t already, replace npm with bun . Discord’s age verification facial recognition tool got bypassed pretty fast —rightfully so.

0 views
Martin Fowler Yesterday

Humans and Agents in Software Engineering Loops

There's been much talk recently about how AI agents affect the workflow loops of software development. Kief Morris believes the answer is to focus on the goal of turning ideas into outcomes. The right place for us humans is to build and manage the working loop rather than either leaving the agents to it or micromanaging what they produce.

0 views
./techtipsy Yesterday

I gave the MacBook Pro a try

I got the opportunity to try out a MacBook Pro with the M3 Pro with 18GB RAM (not Pro). I’ve been rocking a ThinkPad P14s gen 4 and am reasonably happy with it, but after realizing that I am the only person in the whole company not on a MacBook, and one was suddenly available for use, I set one up for work duties to see if I could ever like using one. It’s nice. I’ve used various flavours of Linux on the desktop since 2014, starting with Linux Mint. 2015 was the year I deleted the Windows dual boot partition. Over those years, the experience on Linux and especially Fedora Linux has improved a lot, and for some reason it’s controversial to say that I love GNOME and its opinionated approach to building a cohesive and yet functional desktop environment. When transitioning over to macOS, I went in with an open mind. I won’t heavily customise it, won’t install Asahi Linux on it, or make it do things it wasn’t meant to do. This is an appliance, I will use it to get work done and that’s it. With this introduction out of the way, here are some observations I’ve made about this experience so far. The first stumbling block was an expected one: all the shortcuts are wrong, and the Ctrl-Super-Alt friendship has been replaced with these new weird ones. With a lot of trial and error, it is not that difficult to pick it up, but I still stumble around with copy-paste, moving windows around, or operating my cursor effectively. It certainly doesn’t help that in terminal windows, Ctrl is still king, while elsewhere it’s Cmd. Mouse gestures are nice, and not that different from the GNOME experience. macOS has window snapping by default, but only using the mouse. I had to install a specific program to enable window moving and snapping with keyboard shortcuts (Rectangle) , which is something I use heavily in GNOME. Odd omission by Apple. 
For my Logitech keyboard and mouse to do the right thing, I did have to install the Logitech Logi+ app, which is not ideal, but is needed to have an acceptable experience using my MX series peripherals, especially the keyboard, where it needs to remap some keys for them to properly work in macOS. I still haven’t quite figured out why Page up/down and Home/End keys are not working as they should be. Also, give my Delete key back! Opening the laptop with Touch ID is a nice bonus, especially on public transport where I don’t really want my neighbour to see me typing in my password. The macOS concept of showing open applications that don’t have windows on them as open in the dock is a strange choice that has caused me to look for those phantom windows and is generally misleading. Not being able to switch between open windows instead of applications echoes the same design choice that GNOME made, and I’m not a big fan of it here either. But at least in GNOME you can remap the Alt+Tab shortcut to fix it. The default macOS application installation process of downloading a .dmg file, then opening it, then dragging an icon in a window to the Applications folder feels super odd. Luckily I was aware of the tool and have been using that heavily to get everything that I need installed, in a Linux-y way. I appreciate the concern that macOS has about actions that I take on my laptop, but my god, the permission popups get silly sometimes. When a CLI app is doing things and accessing data on my drive, I can randomly be presented with a permissions pop-up, stealing my focus from writing a Slack message. Video calls work really well, I can do my full stack engineer things, and overall things work, even if it is sometimes slightly different. The default Terminal app is not good; I’m still not quite sure why it does not close the window when I exit the shell, and that “Process exited” message is not helpful.
No contest, the hardware on a MacBook Pro feels nice and premium compared to the ThinkPad P14s gen 4. The latter now feels like a flexible plastic piece of crap. The screen is beautiful and super smooth due to the higher refresh rate. The MacBook does not flex when I hold it. Battery life is phenomenal, the need to have a charger is legitimately not a concern in 90% of the situations I use a MacBook in. Keyboard is alright, good to type on, but layout is not my preference. M3 Pro chip is fast as heck. 18 GB of memory is a solid downgrade from 32 GB, but so far it has not prevented me from doing my work. I have never heard the fan kick on, even when testing a lot of Go code in dozens of containers, pegging the CPU at 100%, using a lot of memory, and causing a lot of disk writes. I thought that I once heard it, but no, that fan noise was coming from a nearby ThinkPad. The aluminium case does have one downside: the MacBook Pro is incredibly slippery. I once put it in my backpack and it made a loud thunk as it hit the table that the backpack was on. Whoops. macOS does not provide scaling options on my 3440x1440p ultra-wide monitor. Even GNOME has that, with fractional scaling! The two alternatives are to use a lower resolution (disgusting), or increase the text size across the OS so that I don’t suffer with my poor eyesight. Never needed those. I like that. Having used an iPhone for a while, I sort of expected this to be a requirement, but no, you can completely ignore those aspects of macOS and work with a local account. Even Windows 11 doesn’t want to allow that! Switching the keyboard language using the keyboard shortcut is broken about 50% of the time, which feels odd given that it’s something that just works on GNOME.
This is quite critical for me since I shift between the Estonian and US keyboard a lot when working, as the US layout has the brackets and all the other important characters in the right places for programming and writing, while Estonian keyboard has all the Õ Ä Ö Ü-s that I need. I upgraded to macOS 26.3 Tahoe on 23rd of February. SSH worked in the morning. Upgrade during lunch, come back, bam, broken. The SSH logins would halt at the part where public key authentication was taking place, the process just hung. I confirmed that by adding into the SSH command. With some vibe-debugging with Claude Code, I found that something with the SSH agent service had broken after the upgrade. One reasonably simple fix was to put this in your : Then it works in the shell, but all other git integrations, such as all the repos I have cloned and am using via IntelliJ IDEA, were still broken. Claude suggested that I build my own SSH agent, and install that until this issue is fixed. That’s when I decided to stop. macOS was supposed to just work, and not get into my way when doing work. This level of workaround is something I expect from working with Linux, and even there it usually doesn’t get that odd, I can roll back a version of a package easily, or fix it by pulling in the latest development release of that particular package. I went into this experiment with an open mind, no expectations, and I have to admit that a MacBook Pro with M3 Pro chip is not bad at all, as long as it works. Unfortunately it doesn’t work for me right now. I might have gotten very unlucky with this issue and the timing, but first impressions matter a lot. The hardware can be nice and feel nice, but if the software lets me down and stops me from doing what’s more important, then it makes the hardware useless. It turns out that I like Linux and GNOME a lot. 
Things are simple, improvements are constant and iterative in nature, so you don’t usually notice it (with Wayland and Pipewire being rare exceptions), and you have more control when you need to fix something. Making those one-off solutions, like a DIY coding agent sandbox, a backup script, or setting up snapshots on my workstation, is also super easy. If Asahi Linux had 100% compatibility on all modern M-series MacBooks, then that would be a killer combination. 1 Until then, back to the ol’ reliable ThinkPad P14s gen 4 I go. I can live with fan noise, Bluetooth oddities and Wi-Fi roaming issues, but not with something as basic as SSH not working one day. 2

1. Any kind billionaires want to bankroll the project? Oh wait, that’s an oxymoron.  ↩︎
2. The fan noise can actually be fixed quite easily by setting a lower temperature target on the Ryzen APU and tuning the fan to only run at the lowest speed after a certain temperature threshold.  ↩︎

0 views
xenodium Yesterday

Bending Emacs - Episode 13: agent-shell charting

Time for a new Bending Emacs episode. This one is a follow-up to Episode 12 , where we explored Claude Skills as emacs-skills . Bending Emacs Episode 13: agent-shell + Claude Skills + Charts This time around, we look at inline image rendering in agent-shell and how it opens the door to charting. I added a handful of new charting skills to emacs-skills : /gnuplot , /mermaid , /d2 , and /plantuml . The agent extracts or fetches data from context, generates the charting code, saves it as a PNG, and agent-shell renders it inline. Cherry on top: the generated charts match your Emacs theme colors by querying them via . Hope you enjoyed the video! Liked the video? Please let me know. Got feedback? Leave me some comments . Please like my video , share with others, and subscribe to my channel . As an indie dev, I now have a lot more flexibility to build Emacs tools and share knowledge, but it comes at the cost of not focusing on other activities that help pay the bills. If you benefit or enjoy my work please consider sponsoring .

0 views
Jim Nielsen 2 days ago

w0rdz aRe 1mpoRtAnt

The other day I was looking at the team billing section of an AI product. They had a widget labeled “Usage leaderboard”. For whatever reason, that phrase at that moment made me pause and reflect — and led me here to this post. It’s an interesting label. You could argue the widget doesn’t even need a label. You can look at it and understand at a glance: “This is a list of people sorted by their AI usage, greatest to least.” But it has that label. It could have a different label. Imagine, for a moment, different names for this widget — each one conjuring different meanings for its purpose and use: Usage leaderboard implies more usage is better. Who doesn’t want to be at or near the top of a leaderboard at work? If you’re not on the leaderboard, what’s that mean for your standing in the company? You better get to work! Calling it a leaderboard imbues the idea of usage with meaning — more is better! All of that accomplished solely via a name. Usage dashboard seems more neutral. It’s not implying that usage is good or bad. It just is , and this is where you can track it. Usage wall of shame sounds terrible! Who wants to be on the wall of shame? That would incentivize people to not have lots of usage. Again, all through the name of the thing! It’s worth noting that individuals and companies are incentivized to choose words designed to shape our thinking and behavior in their interest. The company who makes the widget from my example is incentivized to call this a “Usage leaderboard” because more usage by us means more $$$ for them. I’m not saying that is why they chose that name. There may not be any malicious or greedy intent behind the naming. Jim’s law is a variation on Hanlon’s razor: Don’t attribute to intent that which can be explained by thoughtlessness. I do find it fascinating how little thought we often give to the words we use when they can have such a profound impact on shaping our own psychology, perception, and behavior.
I mean, how many “word experts” are on your internal teams? Personally, I know I could do better at choosing my words more thoughtfully.

Rik Huijzer 2 days ago

Ani Ma'amin

I came across a video of a Purim celebration in Tel Aviv on Mar 14 2025. The party looks like any generic non-religious party you would expect. To my surprise, however, the crowd was singing something about the _mashiach_ (messiah) around 0:32. After a bit of searching, it turns out the crowd is most likely singing the _Ani Ma'amin_ (1915) song by Simeon Singer. The lyrics that the crowd sings between 0:37 and 0:49 are _Ani ma'amin \ b'emunah sh'leimah \ b'viat hamashiach, \ Ani ma'amin. \ mashiach, mashiach, mashiach_ where the first three lines mean: _I believe with perfect faith in the coming...


The AI Bubble Is An Information War

Editor's Note: Apologies if you received this email twice - we had an issue with our mail server that meant it was hitting spam in many cases! Hi! If you like this piece and want to support my work, please subscribe to my premium newsletter. It’s $70 a year, or $7 a month, and in return you get a weekly newsletter that’s usually anywhere from 5000 to 185,000 words, including vast, extremely detailed analyses of NVIDIA , Anthropic and OpenAI’s finances , and the AI bubble writ large . I just put out a massive Hater’s Guide To Private Equity and one about both Oracle and Microsoft in the last month. I am regularly several steps ahead in my coverage, and you get an absolute ton of value, several books’ worth of content a year in fact!. In the bottom right hand corner of your screen you’ll see a red circle — click that and select either monthly or annual. Next year I expect to expand to other areas too. It’ll be great. You’re gonna love it.  Soundtrack - The Dillinger Escape Plan - Unretrofied  So, last week the AI boom wilted brutally under the weight of an NVIDIA earnings that beat earnings but didn’t make anybody feel better about the overall stability of the industry . Worse still, NVIDIA’s earnings also mentioned $27bn in cloud commitments — literally paying its customers to rent the chips it sells, heavily suggesting that there isn’t the underlying revenue. A day later, CoreWeave posted its Q4 FY2025 earnings , where it posted a loss of 89 cents per share, with $1.57bn in revenue and an operating margin of negative 6% for the quarter. Its 10-K only just came out the day before I went to press, and I’ve been pretty sick , so I haven’t had a chance to look at it deeply yet. That being said, it confirms that 67% of its revenue comes from one customer (Microsoft).  Yet the underdiscussed part of CoreWeave’s earnings is that it had 850MW of power at the end of Q4, up from 590MW in Q3 2025 — an increase of 260MW…and a drop in revenue if you actually do the maths.  
While this is a somewhat-inexact calculation — we don’t know exactly how much compute was producing revenue in the period, or when new capacity came online — it shows that CoreWeave’s underlying business appears to be weakening as it adds capacity, which is the opposite of how a business should run. It also suggests CoreWeave’s customers — which include Meta, OpenAI, Microsoft (for OpenAI), Google, and a $6.3bn backstop from NVIDIA for any unsold capacity through 2032 — are paying like absolute crap.

CoreWeave, as I’ve been warning since March 2025, is a time bomb. Its operations are deeply unprofitable and require massive amounts of capital expenditures ($10bn in 2025 alone just to exist, a number that’s expected to double in 2026). It is burdened with punishing debt to make negative-margin revenue, even when that revenue is being earned from the wealthiest and most prestigious names in the industry. Now it has to raise another $8.5bn just to fulfil its $14bn contract with Meta.

For FY2025, CoreWeave made $5.13bn in revenue, taking a $46m loss in the process. The temptation is to suggest that margins might improve at some point, but considering they’ve dropped from 17% (without debt) for FY2024 to negative 1% for FY2025, I only see proof to the contrary. CoreWeave’s quarterly operating margins have swung from negative 3%, to 2%, to 4%, and now back down to negative 6%.

This suggests a fundamental weakness in the business model of renting out GPUs, which calls into question the value of NVIDIA’s $68.13bn in Q4 FY2026 revenue, or indeed CoreWeave’s $66.8bn revenue backlog. Remember: CoreWeave is an NVIDIA-backed (and backstopped, to the point that NVIDIA is guaranteeing CoreWeave’s lease payments) neocloud with every customer it could dream of. I think it’s reasonable to ask whether NVIDIA might have sold hundreds of billions of dollars of GPUs that only ever lose money.
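The per-megawatt claim is easy to sanity-check. Here is a minimal Python sketch using the quarterly revenue and capacity figures cited in this piece; note that “revenue per megawatt” is the author’s napkin metric, not a CoreWeave disclosure:

```python
# CoreWeave quarterly revenue vs. contracted power, per the figures in this piece.
quarters = {
    "Q3 2025": {"revenue_bn": 1.36, "power_mw": 590},
    "Q4 2025": {"revenue_bn": 1.57, "power_mw": 850},
}

for name, q in quarters.items():
    # Revenue per megawatt, in millions of dollars
    per_mw = q["revenue_bn"] * 1000 / q["power_mw"]
    print(f"{name}: ~${per_mw:.3f}m of revenue per MW")

# Q3 2025: ~$2.305m per MW; Q4 2025: ~$1.847m per MW, a drop of roughly 20%
```

The inexactness the author flags is real: new capacity rarely produces revenue from day one, so the Q4 denominator is probably too big. But even a generous reading leaves per-megawatt revenue flat-to-down while capacity grows.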
Nebius — which counts Microsoft and Meta as customers — lost $249.6m on $227.7m of revenue in FY2025. No hyperscaler discloses its actual revenues from renting out these GPUs (or its own silicon), which is not something you do when things are going well.

Lots of people have come up with very complex ways of arguing we’re in a “supercycle” or “AI boom” or some such bullshit, so I’ve condensed some of those talking points, and the ways to counteract them, at the end of this piece. Anyway, let’s talk about how much OpenAI has raised, and how none of that makes sense either.

Great news! If you don’t think about it for a second or read anything, OpenAI raised $110bn, with $50bn from Amazon, $30bn from NVIDIA and $30bn from SoftBank. Well, okay, not really. Per The Information:

Yet again, the media is simply repeating what it’s been told versus reading publicly available information. Talking of The Information, it also reported that OpenAI intends to raise another $10bn from other investors, including selling the shares from the nonprofit entity:

It’s so cool that OpenAI is just looting its non-profit! Nobody seems to mind.

Talking of things that nobody seems to mind, on Friday Sam Altman accidentally said the quiet part out loud, live on CNBC, when asked about the very obviously circular deals with NVIDIA, Amazon and Microsoft (emphasis mine):

Hey Sam, what does “the whole thing” refer to here? Because I know you probably mean the AI industry, but this sounds exactly like a Ponzi scheme!

Now, jokes aside, Ponzi schemes work entirely by feeding investor money to other investors. OpenAI and AI companies are not a Ponzi scheme. There are real revenues; people are paying them money. Much like NVIDIA isn’t Enron, OpenAI isn’t a Ponzi scheme. However, the way that OpenAI describes the AI industry sure does sound like a scam.
It’s very obvious that neither OpenAI nor its peers have any plan to make any of this work beyond saying “well, we’ll just keep making more money,” and I’m being quite literal, per The Information:

That’s right: by the end of 2026 OpenAI will make as much money as PayPal, by the end of 2027 it’ll make $20bn more than SAP, Visa, and Salesforce, and by the end of 2028 it’ll make more than TSMC, the company that builds all the crap that runs OpenAI’s services. By the end of 2030, OpenAI will, apparently, make nearly as much annual revenue as Microsoft ($305.45 billion). It’s just that easy. And all it’ll take is for OpenAI to burn another $230 billion…though I think it’ll need far more than that.

Please note that I am going to humour some numbers that I have serious questions about, but they still illustrate my point. Per The Information, OpenAI had around $17.5bn in cash and cash equivalents at the end of June 2025 on $4.3bn of revenue, with $2.5bn in inference spend and $6.7bn in training compute. Per CNBC in February, OpenAI (allegedly!) pulled in $13.1bn in revenue in 2025, and only had a loss of $8bn, but this doesn’t really make sense at all!

Please note, I doubt these numbers! I think they are very shifty! My own numbers say that OpenAI only made $4.3bn through the end of September, and it spent $8.67bn on inference! Nevertheless, I can still make my point.

Let’s be real simple for a second: suppose we are to believe that in the first half of the year, it cost $2.5bn in inference to make $4.3bn in revenue — around 58 cents per dollar. For OpenAI to make another $8.8bn — the distance between $4.3bn and $13.1bn — that’s another $5.1bn in inference, and keep in mind that OpenAI launched Sora 2 in September 2025 and made massive pushes around its Codex platform, guaranteeing higher inference costs. Then there’s the issue of training. For $2.5bn of revenue, OpenAI spent $6.7bn in training costs — or around $2.68 per dollar of revenue.
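That extrapolation can be written out in a few lines of Python. This is a sketch of the piece’s own napkin maths, using its stated ratios (58 cents of inference per revenue dollar, and the ~$2.68 training figure applied per revenue dollar); these are the author’s estimates, not reported numbers:

```python
# Napkin extrapolation of OpenAI's second-half 2025 compute burn,
# using the first-half figures and ratios cited above.
h1_revenue = 4.3      # $bn, H1 2025 revenue per The Information
h1_inference = 2.5    # $bn, H1 2025 inference spend
h1_training = 6.7     # $bn, H1 2025 training spend

h2_revenue = 13.1 - h1_revenue   # ~$8.8bn needed to hit the claimed $13.1bn year

inference_rate = h1_inference / h1_revenue   # ~$0.58 of inference per revenue dollar
training_rate = h1_training / h1_inference   # the piece's ~$2.68 figure, applied per revenue dollar

h2_inference = h2_revenue * inference_rate   # ~$5.1bn
h2_training = h2_revenue * training_rate     # ~$23.6bn
print(f"Estimated H2 compute burn: ~${h2_inference + h2_training:.1f}bn")  # ~$28.7bn
```

As the next paragraph concedes, training spend isn’t necessarily linear in revenue, so treat this strictly as a back-of-the-envelope reading of the piece’s own logic, not a cost model.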
At that rate, OpenAI spent a further $23.58bn on training, bringing us to around $28.7bn in burn just for the back half of 2025. Now, you might think I’m being a little unfair here — training costs aren’t necessarily linear with revenues like inference is — but there’s a compelling argument to be made that costs are far higher than we thought.

Now, I want to be clear that on February 20 2026, The Information reported that OpenAI had “about $40 billion in cash at the end of 2025,” but that doesn’t really make sense! Assuming $17.5bn in cash and cash equivalents at the end of June 2025, plus $8.8bn in revenue, plus $8.3bn in venture funding, plus $22.5bn from Masayoshi Son…that’s $57.1bn. If OpenAI burned $8bn of cash, that would leave $49.1bn, and no, I’m sorry, “about $40 billion in cash” cannot be rounded down from $49.1bn! In my mind, it’s far more likely that OpenAI’s losses were in excess of $10bn or even $20bn, especially when you factor in that OpenAI is paying an average of $1.5 million per employee in yearly stock-based compensation, per the Wall Street Journal.

There’s also another possible answer: I think OpenAI is lying to the media, because it knows the media won’t think too hard about the numbers or compare them. I also want to be clear that this is not me bagging on The Information — they just happen to be reporting these numbers the most. I think they do a great job of reporting, I pay for their subscription out of my own pocket, and my only problem is that there don’t seem to be efforts made to talk about the inconsistency of OpenAI’s numbers. I get that it’s difficult, too. You want to keep access. Reporting this stuff is important and relevant.

The problem is — and I say this as somebody who has read every single story about OpenAI’s funding and revenues! — that this company is clearly just…lying? Sure, you can say “it’s projections,” but there is a clear attempt to use the media to misinform investors and the general public.
For example, OpenAI claimed SoftBank would spend $3bn a year on agents in 2025. That never happened!

Anyway, let’s get to it: what I’m trying to get at is that OpenAI (and, for that matter, Anthropic) has spent the last two years increasingly obfuscating the truth through leak after leak to the media. The numbers do not make any sense when you actually put them together, and the reason these companies continue to do this is that they’re confident these outlets will never say a thing, or will cover for the discrepancies by saying “these are projections!”

These are projections, and I think it’s a noteworthy story that these companies either wildly miss their projections (i.e., costs) or almost exactly make their projections (revenues), which is even weirder. But the biggest thing to take away from this is that one of the classic arguments against my work is that “costs will just come down,” yet the costs never come down. That, and it appears that both of these companies are deliberately obfuscating their real numbers as a means of making themselves look better.

Well, leaking and outright posting it. On December 17 2025, OpenAI’s Twitter account posted the following:

These numbers are, of course, bullshit. OpenAI may have hit $6bn ARR in 2024 ($500m in a 30-day period, though OpenAI has never defined this number) or $20bn ARR in 2025 ($1.67bn in a 30-day period), but this is specifically diagrammed to make you think “$20bn in 2025” and “$6bn in 2024.” There are members of the media who defend OpenAI by saying that “these are annualized figures,” but OpenAI does not state that, because OpenAI loves to lie.

Anthropic isn’t much better, as I discussed a few weeks ago in the Hater’s Guide. Chief Executive Dario Amodei has spent the last few years massively overstating what LLMs can do in the pursuit of eternal growth. He’s also framed himself as a paragon of wisdom and Anthropic as a bastion of safety and responsibility.
There appears to be some confusion around what happened in the last few days that I’d like to clear up, especially after the outpouring of respect for Anthropic “doing the right thing” when the Department of Defense threatened to label it a supply chain risk for not agreeing to its terms. Per Anthropic, on Friday February 27 2026:

Anthropic, of course, leaves out one detail: Hegseth said that “...effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.” If Hegseth follows through, Anthropic’s business will collapse, though Anthropic and its partners are ignoring this statement, as a supply chain risk designation only forbids Anthropic from working with the US government itself.

When the US military attacked Iran a day later, people quickly interpreted Anthropic’s narrow (by its own words) and specific limitations as some sort of anti-war position. Claude quickly rocketed to the top of the iOS app charts, I assume because people believed that Dario Amodei was saying “I don’t want the war in Iran!” versus “I fully support the war in Iran and any uses you might need my software for other than the two I’ve mentioned, let me or support know if you have any issues!”

To be clear, these were the only issues that Anthropic had with the contract. Whether or not these are things that an LLM is actually good at, Anthropic (and I quote!) “...[supports] all lawful uses of AI for national security aside from the two narrow exceptions above.” The military’s demands were for “all lawful uses,” though I don’t think Anthropic really gives a shit about whether the war in Iran is legal, because if it did, it would have shut down the chatbot rather than supported the conflict.

Just as a note: Anthropic’s Claude also appears to be the only AI model available for classified military operations.
Let’s be explicit: Anthropic’s Claude (and its various models) is fully approved for use in the military and, to quote its own blog post, “has supported American warfighters since June 2024 and has every intention of continuing to do so.” To be explicit about what “support” means, I’ll quote the Wall Street Journal:

In reality, Claude is likely being used to go through a bunch of images and to answer questions about particular scenarios. There is very little specialized military training data, and I imagine many of the demands for “full access to powerful AI” have come as a result of Amodei and Altman’s bloviating about the “incredible power of AI.” More than likely, Centcom and the rest of the military pepper it with questions that allow it to justify acts that blow up schools, kill US servicemembers and threaten to continue the forever war that has killed millions of people and thrown the Middle East into near-permanent disarray.

Nevertheless, Dario Amodei gets fawning press about being a patriot who deeply cares about safety, less than a week after Anthropic dropped its safety pledge not to train an AI system unless it could guarantee in advance that its safety measures were accurate. Here are some other facts about Dario Amodei from his interview with CBS!

“What’s right,” to be clear, involves allowing Claude to choose who lives or dies and to be used to plan and execute armed conflicts. Let’s stop pretending that Anthropic is some sort of ethical paragon! It’s the same old shit!

In any case, it’s unclear what happens next. Anthropic appears ready to challenge the supply chain risk designation in court, and said designation doesn’t kick in immediately, requiring a series of procedures including an inquiry into whether there are other ways to reduce the associated risk. Regardless, the DoD has a six-month taper-off period with Anthropic’s software.
The real problem will be if Hegseth is serious about the stuff that isn’t legally within his power — namely limiting contractors, suppliers or partners from working with Anthropic entirely. While no legal authority exists to carry this through, seemingly every tech CEO has lined up to kiss up to the Trump Administration .  If Hegseth and the administration were to truly want to punish Anthropic, they could put pressure on Amazon, Microsoft and Google to cut off Anthropic, which would cut it off from its entire compute operation — and yes, all three of them do business with the US military, as does Broadcom , which is building $21 billion in TPUs for it . While I think it’s far more likely that the US government itself shuts the door on Anthropic working with it for the foreseeable future even without the supply chain risk designation, it’s worth noting that Hegseth was quite explicit — “no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.”  The reality of the negotiations was a little simpler, per the Atlantic . The Department of Defense had agreed to terms around not using Claude for mass domestic surveillance or fully autonomous killing machines (the former of which it’s not particularly good at and the latter of which it flat out cannot do), but, well, actually very much intended to use Claude for domestic surveillance anyway: Now, I’m about to give you another quote about autonomous weapons, and I really want you to pay attention to where I emphasize certain things for a subtle clue about Anthropic’s ethics: So, let’s be clear: Anthropic wants to help the military make more accurate kill drones , and in fact loves them . 
One might take this to be somewhat altruistic — Dario Amodei doesn’t want the US military to hit civilians — but remember: Anthropic is totally fine with the US military using Claude for anything else, even though hallucinations are an inevitable result of using a Large Language Model. Any dithering around the accuracy of a drone exists only to obfuscate that Anthropic sells software that helps militaries hand over messy ethical decisions to a chatbot that exists specifically to tell you what you want to hear.

Stinky, nasty, duplicitous conman Sam Altman smelled blood amidst these negotiations and went in for the kill, striking a deal with the Pentagon on Friday for ChatGPT and OpenAI’s other models to be used in the military’s classified systems, with initial reports saying that it had “similar guardrails to those requested by Anthropic.” In a post about the contract, Clammy Sammy said that the DoD displayed “a deep respect for safety and a desire to partner to achieve the best possible outcome,” adding:

Undersecretary Jeremy Levin almost immediately countered this notion, saying that the contract “...flows from the touchstone of ‘all lawful use.’” This quickly created a diplomatic incident, where OpenAI decided that the best time to discuss the contract was an entire Saturday, and that the way to discuss it was posting.
It shared some details on the contract, which included the fatal phrase that the Department of Defense “...may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols.” Per The Verge’s Hayden Field:

As questions mounted about the actual terms of the deal, Sam Altman realized that his only solution was to post, and at 4:13PM PT on Saturday February 28 2026, he sat down to make things significantly worse in a brief-yet-chaotic AMA, including:

All of this is to say that Altman definitely, absolutely loves war and wants OpenAI to make money off of it, though according to OpenAI NatSec head Katrina Mulligan, said contract is only worth a few million dollars.

It’s unclear. A late-evening story from Axios on Monday reported that “OpenAI and the Pentagon have agreed to strengthen their recently agreed contract, following widespread backlash that domestic mass surveillance was still a real risk under the deal — though the language has not been formally signed.” The language seen by Axios states:

One has to wonder how different this is from what Anthropic wanted, but if I had to guess, it’s those words “intentionally” and “deliberate.” The same goes for “consistent with applicable laws.” One useful thing that Altman confirmed was that ChatGPT will not be used with the NSA…and that any services to those agencies would require a follow-on modification to the contract. That doesn’t mean they won’t sign one!

Forgive me for being cynical about something from Sam fucking Altman, but I just don’t trust the guy, and this is an (as of writing this sentence) unsigned contract with bus-sized loopholes. Per Tyson Brody (who has a great thread breaking down the issues), these weasel words allow the DoD to surveil Americans as long as the data is collected “incidentally,” per Section 702 of FISA.
This announcement gives OpenAI the air cover to pretend it got exactly the same deal as Anthropic, even though those nasty little words allow the DoD to do just about anything it wants. Oh, it wasn’t deliberate surveillance, we just looked up whether some people had said stuff about the administration. Oh, it wasn’t deliberately looking, I just asked it to find suspicious people, of whom domestic people happened to be a part! Whoopsie!

This is ultimately a PR move to make Altman seem more ethical, and to position Amodei as a pedant who rejects his patriotism and prioritizes legalese over freedom. If it kills Anthropic, we must memorialize this as one of the most underhanded and outright nasty things in the history of Silicon Valley. If it doesn’t, we should memorialize it as two men desperately trying to pretend they crave peace and democracy as they spar for the opportunity to monetize death and destruction.

The funniest outcome of this chaos is that many people are very, very angry at Sam Altman and OpenAI, assuming that ChatGPT was somehow used in the conflict in Iran, and that Amodei and Anthropic somehow took a stand against a war they used as a means of generating revenue. In reality, we should loathe both Altman and Amodei for their natural jingoism and continual deception.

Amodei and Anthropic timed their defiance of the Department of Defense to make it seem like their “red lines” were related to the war. I think it’s good they have those red lines, but remember: those red lines do not involve stopping a war that threatens the lives of millions of people. Amodei supports that. Anthropic both supports and enables that. Altman, on the other hand, is a slimy little creep who wants you to believe that he signed the same deal Anthropic wanted, but actually signed one that allows “any lawful use.” And in both cases, these men are enthusiastic to work with a part of the government calling itself the Department of War.
Both of them are willing and able to provide technology that will surveil or kill people, and while Amodei may have blushed at something to do with autonomous weapons or domestic surveillance, neither appears to have an issue with the actual harms that their models perpetuate. Remember: Anthropic just pitched its technology as part of an ongoing Department of Defense drone swarm contest. It loves war! Its only issue was that there wasn’t a human in the loop somewhere. Neither of these men deserves a shred of credit or celebration. Both of them were and are ready and willing to monetize war, as long as it sort-of-kind-of follows the law.

And rattling around at the bottom of this story is a dark problem caused by the fanciful language of both Altman and Amodei. When it’s about cloud software, Dario Amodei is more than willing to say that it will cause a “mass elimination of jobs across technology, finance, law and consulting,” and that it will replace half of all white collar labor. When it’s time to raise money, Altman is excited to tell us that AI will surpass human intelligence in the next four years. Now that lives are theoretically at stake, Altman vaguely cares about the things that an LLM “isn’t very good at.” Once Claude is used to choose places to bomb and people to kill, suddenly Anthropic cares that “frontier AI systems are simply not reliable enough,” and even then not so much as to stop a chatbot that hallucinates from being used in military scenarios.

Altman and Amodei want it both ways. They want to be pop culture icons who go on Jimmy Fallon and thought leaders who tell ghost stories about indeterminately powerful software they sell through deceit and embellishment. They want to be pontificators and spokespeople, elder statesmen that children look up to, with the specious profiles and glowing publicity to boot.
They want Claude and ChatGPT to be seen as capable of doing anything any white collar worker is capable of, even if they have to lie to do so, helped by a tech and business media asleep at the wheel. They also want to be as deeply connected to the military industrial complex as Lockheed Martin or RTX (née Raytheon). Anthropic has been working with the DoD since 2024, and OpenAI was so desperate to take its place that Altman has immolated part of his reputation to do so.

Both of these companies are enthusiastic parts of America’s war machine. This is not an overstatement — Dario Amodei and Anthropic “believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.” OpenAI and Sam Altman are “terrified of a world where AI companies act like they have more power than the government.” For all the stories about Anthropic creating a “nation of benevolent AI geniuses,” Dario Amodei seems far more interested in creating a world dictated by what the United States of America deems to be legal or just, and in providing services to help pursue those goals, as does OpenAI and, I’d argue, basically every AI lab.

We’re barely two weeks divorced from the agonizing press around Amanda Askell, Anthropic’s “resident philosopher,” whose job, per the Wall Street Journal, is to “teach Claude how to be good.” There are no mentions in any story I can find of what she might teach Claude about which targets are considered fair game in military combat. WIRED’s profile of her starts with a title that has aged like milk in the sun, asking whether “the only thing standing between humanity and an AI apocalypse…is Claude?” Tell that to the people in Tehran. I wonder what Askell taught Claude to say about war? I wonder what she taught Claude to say about democracy? I wonder if she even gives a shit. I doubt it.
—

Generative AI isn’t intelligent, but it allows people to pretend that it is, especially when the people selling the software — Altman and Amodei — so regularly overstate what it can do. By giving warmongers and jingoists the cover to “trust” this “authoritative” service — whether or not that trust is warranted, they can simply point to the specious press — the question of whether an attack was ethical is now, whenever any western democracy needs it to be, something that can be handed off to Claude and justified with the cold, logical framing of “intelligence” and “data.”

None of this would be possible without the consistent repetition of the falsehoods peddled by OpenAI and Anthropic. Without the endless puffery and overstatement about the “power of AI,” we wouldn’t have armed conflicts dictated by what a chatbot can burp up from the files it’s fed. The deaths that follow will be a direct result of those who choose to continue to lie about what an LLM does.

Make no mistake: LLMs are still incapable of unique ideas and are still, outside of coding (which requires massive subsidies to even be kind of useful), questionable in their efficacy and untrustworthy in their outputs. Nothing about the military’s use of Claude makes it more useful or powerful than it was before — they’re probably just loading files into it, asking it long questions about things, and going “huh” at the end.

The vulgar dishonesty of Altman and Amodei puts blood on both of their hands, and it’s the duty of every single member of the media to remind people of this whenever they discuss their software. I get that you probably think I’m being dramatic, but tell me — do you think the US military would’ve trusted LLMs had they not been marketed as capable of basically anything? Do you think any of this would’ve happened had there been an honest, realistic discussion of what AI can do today, and what it might do tomorrow?
I guess we’ll never know, and the people blown to bloody pieces at the other end of an LLM-generated stratagem won’t be alive to find out either.

In Q3 2025, CoreWeave had $1.36bn in revenue on 590MW of compute, working out to $2.3m per megawatt. In Q4 2025, CoreWeave had $1.57bn in revenue on 850MW of compute, working out to $1.847m per megawatt.

OpenAI Had $13.1bn In Revenue In 2025! They Only Lost $8bn!

Did it? Based on my own reporting, which has been ignored (I guess it’s easier to do that than think about it?) by much of the press, OpenAI made $4.33bn through the end of September, and spent $8.67bn on inference in that period. Notice how I said “inference.” Training costs, data costs and, simply, the costs of doing business are in addition to that.

OpenAI Has 900m Weekly Active Users!

Yeah, everybody is talking about AI 24/7 and ChatGPT is the one everybody talks about. Google Gemini has 750m — Google changed Google Assistant to Gemini on literally everything, including Google Home, and force-fed it to users of Google Docs and Google Search.

Claude Code Is Changing The World! It’s Writing SaaS Now! It’s Replacing All Coders!

As I discussed both at the beginning of the Hater’s Guide To Private Equity and in my free newsletter last week, software is not as simple as spitting out code, nor is it able to automatically clone the SaaS experience. Midwits and the illiterate claim that this somehow defeats my previous theses, where I allegedly said the word “useless.” While I certainly goofed claiming in March 2024 that generative AI had three quarters left, my argument was that I thought that “generative AI [wouldn’t become] a society-altering technology, but another form of efficiency-driving cloud computing software that benefits a relatively small niche of people,” and I have said that people really do use them for coding.
Even Claude Code, the second coming of Christ in the minds of some of Silicon Valley’s most concussed boosters, only made $203m in revenue ($2.5bn ARR) for a product that at times involves Anthropic spending anywhere from $8 to $13.50 for every dollar it makes.

People Doubted Amazon But It Made Lots Of Money In The End!

No, they didn’t. Benedict Evans defended Amazon’s business model. Jay Yarow of Business Insider defended it too. Practical Ecommerce called Amazon Web Services “Amazon’s cash cow” in October 2013. In April 2013, WIRED’s Marcus Wohlsen managed to name one skeptic — Paulo Santos, based in Portugal, who appears to have dropped off the map after 2024, but remained a hater long after AWS hit profitability in 2009. I cannot find any other skeptics of Amazon, and I cannot for the life of me find a single skeptic of AWS itself.

AWS Cost A Lot Of Money So We Should Spend So Much Money On AI!

I’m sick and fucking tired of this point, so I went and did the work, which you can view here, to find every single year of capex that Amazon spent. When you add together all of Amazon’s capital expenditures between 2002 and 2017, which encompasses AWS’s internal launch, its 2006 public launch, and it becoming profitable in 2015, you get $37.8bn in total capex (or $52.1bn adjusted for inflation). For some context, OpenAI raised around $42bn in 2025 alone. The fact that we have multiple supposedly well-informed journalists making the “Amazon spent lots of money!” point to this day is a sign that we’re fundamentally living in hell.

OpenAI raised $15bn from Amazon, with $35bn contingent on AGI or an IPO. OpenAI got commitments from SoftBank and NVIDIA, who may or may not have committed to $30bn each, and will be paying in three installments. Please note that CNBC authoritatively reported in September that “the initial $10 billion tranche locked in at a $500 billion valuation was expected to close within a month” for a deal that was only ever a Letter of Intent.
This is why it’s important not to report things as closed before they’re closed. As of right now, evidence suggests that nobody has actually sent OpenAI any money. Per NVIDIA’s 10-K filed last week, it is (and I quote) “...finalizing an investment and partnership agreement with OpenAI [and] there is no assurance that we will enter into an investment and partnership agreement with OpenAI or that a transaction will be completed.”

It’s going to be interesting seeing how SoftBank funds this. It funded OpenAI’s last $7.5bn check with part of the proceeds from a $15bn, one-year-long bridge loan, and the remaining $22.5bn by selling its $5.83bn in NVIDIA stock and its $13.5bn margin loan using its ARM stock. Nevertheless, per its own statement, SoftBank intends to pay OpenAI $10bn on April 1 2026, July 1 2026, and October 1 2026, all out of the Vision Fund 2. Its statement also adds that “the Follow-on Investment is expected to be financed initially through bridge loans and other financing arrangements from major financial institutions, and subsequently replaced over time through the utilization of existing assets and other financing measures.”

Per The Information, OpenAI was at $17.5bn in cash and cash equivalents at the end of June 2025. It had just raised $10bn from SoftBank and other investors. OpenAI would raise another $8.3bn on August 1 2025, bringing that cash and equivalents pile to $25.8bn, assuming it remained untouched. OpenAI would raise another $22.5bn from SoftBank on December 31 2025, bringing up the total to $48.3bn. In the second half of the year, OpenAI would (allegedly) make another $8.8bn, which would bring us up to $57.1bn — with a total year loss of either $9bn or $8bn depending on whether you believe The Information or CNBC.

But wait, that doesn’t make sense as a total year loss!

Let’s look at the first half numbers again.
When we take the raw cost of inference ($2.5bn) and training ($6.7bn) and subtract revenue ($4.3 billion), we’re left with a $4.9bn loss just for the first half, and that’s before you include things like headcount, sales and marketing, and general operating expenses, which (per The Information) amounted to $2bn in the first half of the year.

Now, let’s run these numbers again but with my napkin math estimates — $23.58bn in training costs and $5.1bn in inference costs, for a total of $28.42bn. Add another $2bn in sales and marketing costs, $1.76bn in revenue share to Microsoft (20% of $8.8bn), guesstimating the cash salaries of OpenAI’s staff (based on them being around 17.5% of the company’s revenue in 2024) at $1.54bn, SG&A costs (about 15% in 2024) of $1.32bn, data costs (12.5% in 2024) of about $1.1bn, and hosting costs (10% in 2024) of about $880m, and we’re at around $37bn — leaving OpenAI with about…$20bn in cash at the end of the year.

In October 2024, The Information reported that OpenAI only burned $340m in the first half of 2024, that its “cash burn has been lower than previously thought,” that it “projected total losses from 2023 to 2028 to be $44 billion,” and that it would be EBITDA profitable (minus training costs, lol) in 2026. The piece also says OpenAI would make $14bn in profit in 2029, and somehow also burn $200bn by 2030. Confusingly, this piece said net losses for 2024 were $3bn through the first half of 2024, but would go on to project a net loss for the year of $5.5bn!

In February 2025, The Information reported that OpenAI would make $12.7bn in 2025, with $3bn of that coming from SoftBank spending $3bn a year on its “agents,” something that never happened and nobody talks about anymore. The same piece said OpenAI would burn $7bn in 2025, and now expected to spend $320bn on compute between 2025 and 2030. Burn for 2026 is estimated at $8bn, and $20bn in 2027. Revenue for 2026 is estimated at $28bn. The maths does not make a lick of sense here.
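The napkin-math estimates above can be sanity-checked with a few lines of Python. All figures are the article’s own (in billions of dollars); the percentage ratios are applied to the article’s second-half revenue figure of $8.8bn.

```python
# Second-half 2025 cost estimate, replaying the article's napkin math.
# All figures in $bn, taken from the article; nothing here is new data.
revenue_h2 = 8.8
compute = 28.42                  # the article's stated training + inference total
sales_marketing = 2.0
msft_share = 0.20 * revenue_h2   # 20% revenue share to Microsoft -> 1.76
salaries = 0.175 * revenue_h2    # ~17.5% of revenue, per the 2024 ratio -> 1.54
sga = 0.15 * revenue_h2          # ~15% -> 1.32
data = 0.125 * revenue_h2        # ~12.5% -> 1.10
hosting = 0.10 * revenue_h2      # ~10% -> 0.88

total = compute + sales_marketing + msft_share + salaries + sga + data + hosting
print(round(total, 2))           # -> 37.02, i.e. "around $37bn"

cash_pile = 57.1                 # cash plus H2 revenue, per the article
print(round(cash_pile - total, 2))  # -> 20.08, i.e. "about $20bn" left
```

The components do sum to roughly $37bn, and subtracting that from the $57.1bn cash-plus-revenue pile leaves roughly $20bn, matching the article’s conclusion.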
In April 2025, The Information reported that OpenAI projected $174bn in revenue through 2030 and said that gross margins were 40% in 2024, and would be 48% in 2025, and hit 69% in 2029. Confusingly, the same piece says that OpenAI expects to burn $46bn in cash between 2025 and 2029, which does not make sense if you factor in any of the previously-discussed compute costs.

In early September 2025, The Information would report that — psyche! — OpenAI would actually burn $115bn through 2029, with the plan to burn $35bn in 2027 and $45bn in 2028, which is a lot higher than “$44bn in five years.” Revenue for 2026 is now $30bn, and burn for 2027 is now $35bn.

In late September 2025, The Information would report that OpenAI had a net loss of $13.5bn in the first half of 2025, with revenue estimates of $30bn in 2026 and $62bn in 2027.

On February 20, 2026, The Information reported that OpenAI would actually burn $230bn through 2030, cloud costs would be $665bn, and that gross margins got worse (33%! Down from the “46% it had set for itself,” or 48% if you count previously published projections), and that it would burn $26bn in 2026 alone, or more than half of the October 2024 projections for its burn rate between 2023 and 2028!

On having political views: “We don't-- we don't have views-- we don't think about general political issues, and we try to work together whenever there's common ground.”

On being “woke”: “So this idea that we've somehow been partisan or that we haven't been evenhanded, we've been studiously evenhanded. And-- and again, we can't control if someone, even-- even the president, you know, ha-- has an opinion about us. That's not under our control. What's under our control is that we can be reasonable. We can be neutral. And we can stand up for what we believe.”

On what Anthropic believes: “We believe in-- defeating our autocratic adversaries. We believe in defending America.
The red lines we have drawn, we drew because we-- we-- we-- we believe that crossing those red lines is-- is contrary to American values. And we wanted to stand up for American values.”

On the US government’s handling of the situation: “And that's why we're committed to standing up to-- you know, actions that we think are not in line with the values of this country. It's-- it's not about any particular person. It's not about any particular administration. It's about the principle of standing up for what's right.”

Altman approving of non-domestic AI surveillance, saying he “didn’t like it” but “accepted it,” echoing The Day Today’s Peter O'Hanraha-hanrahan.

Altman saying that the supply chain risk designation would be “very bad for our industry and our country,” that “successfully building safe superintelligence and widely sharing the benefits is way more important that any company competition,” and that he “saw in some other tweet that [he] must not be willing to criticize the DoW (it said something about sucking their dick too hard to be able to say anything critical, but I assume this was the intent).”

Altman saying that the deal was rushed “as an attempt to de-escalate matters at a time when it felt like things could get extremely hot.” Yeah man, you’re really de-escalating the Anthropic situation by providing a replacement for its software.

Altman saying he was prepared to go to jail if OpenAI was asked to do something unconstitutional or illegal.

Altman saying that “...the people in our military are far more committed to the constitution than an average person off the streets,” and that he “didn’t think OpenAI was above the constitution either.”

Altman declaring that he did “...not believe unelected leaders of private companies should have as much power as our democratically elected government,” and that we should have sympathy for the Department of Defense because Anthropic had refused to help them and called them “kind of evil.”

0 views

Scalar Interpolation: A Better Balance between Vector and Scalar Execution for SuperScalar Architectures

Scalar Interpolation: A Better Balance between Vector and Scalar Execution for SuperScalar Architectures
Reza Ghanbari, Henry Kao, João P. L. De Carvalho, Ehsan Amiri, and J. Nelson Amaral
CGO'25

This paper serves as a warning: don’t go overboard with vector instructions. There is a non-trivial amount of performance to be had by balancing compute between scalar and vector instructions. Even if you fear that automatic vectorization is fragile, this paper has some interesting lessons.

Listing 1 contains a vectorizable loop and listing 2 shows a vectorized implementation:

Source: https://dl.acm.org/doi/10.1145/3696443.3708950
Source: https://dl.acm.org/doi/10.1145/3696443.3708950

After achieving this result, one may be tempted to pat oneself on the back and call it a day. If you were a workaholic, you might profile the optimized code. If you did, you would see something like the data in table 1:

Source: https://dl.acm.org/doi/10.1145/3696443.3708950

And you could conclude that this algorithm is compute-bound. But what do we really mean by “compute-bound”? A processor contains many execution ports, each with a unique set of capabilities. In the running example, the execution ports capable of vector multiplication and addition are fully booked, but the other ports are sitting mostly idle!

Listing 3 shows a modified loop which tries to balance the load between the vector and scalar execution ports. Each loop iteration processes 9 elements (8 via vector instructions, and 1 via scalar instructions). This assumes that the processor supports fast unaligned vector loads and stores.

Source: https://dl.acm.org/doi/10.1145/3696443.3708950

Section 3 has details on how to change LLVM to get it to do this transformation. Fig. 3 shows benchmark results. By my calculations, the geometric mean of the speedups is 8%.

Source: https://dl.acm.org/doi/10.1145/3696443.3708950

Dangling Pointers

This paper builds on top of automatic vectorization.
In other words, the input source code is scalar and the compiler vectorizes loops while balancing the workload. An alternative would be to have the source code in a vectorized form and then let the compiler “devectorize” where it makes sense.
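The shape of the transformation can be sketched in plain C. This is a hypothetical saxpy-style kernel with names of my own, not code from the paper: each outer iteration covers 9 elements, where the fixed 8-iteration inner loop is what a vectorizing compiler is expected to turn into vector instructions, while the 9th element stays scalar so that vector and scalar execution ports share the work.

```c
/* Hypothetical sketch of the paper's "scalar interpolation" idea.
   Each outer iteration handles 9 floats: the 8-wide inner loop is a
   candidate for the compiler's vectorizer (vector ports), and the
   trailing statement is left scalar (scalar ports), instead of the
   vector units being the lone bottleneck. */
void saxpy_interleaved(float *y, const float *x, float a, int n) {
    int i = 0;
    for (; i + 9 <= n; i += 9) {
        for (int j = 0; j < 8; j++)          /* 8 elements -> vector units */
            y[i + j] = a * x[i + j] + y[i + j];
        y[i + 8] = a * x[i + 8] + y[i + 8];  /* 9th element -> scalar units */
    }
    for (; i < n; i++)                       /* scalar remainder */
        y[i] = a * x[i] + y[i];
}
```

Whether the 8-wide chunk actually becomes vector code depends on the compiler and flags; the paper instead modifies LLVM to emit this interleaving directly during vectorization.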

0 views
Stratechery 2 days ago

Technological Scale and Government Control, Paramount Outbids Netflix for Warner Bros.

Why government is not the primary customer for tech companies, and is Netflix relieved that they were outbid for Warner Bros.?

0 views
(think) 2 days ago

Learning OCaml: String Interpolation

Most programming languages I’ve used have some form of string interpolation. Ruby has #{} interpolation, Python has f-strings, JavaScript has template literals, even Haskell has a few popular interpolation libraries. It’s one of those small conveniences you don’t think about until it’s gone.

OCaml doesn’t have built-in string interpolation. And here’s the funny thing – I didn’t even notice when I was first learning the language. Looking back at my first impressions article, I complained about the comment syntax, the semicolons in lists, the lack of list comprehensions, and a dozen other things – but never once about string interpolation. I was happily concatenating strings with ^ and using Printf.sprintf without giving it a second thought.

I only started thinking about this while working on my PPX article and going through the catalog of popular PPX libraries. That’s when I stumbled upon ppx_string and thought “wait, why doesn’t OCaml have interpolation?”

The short answer: OCaml has no way to generically convert a value to a string. There’s no universal to_string method, no typeclass, no runtime reflection that would let the language figure out how to stringify an arbitrary expression inside a string literal. In Ruby, every object responds to to_s. In Python, everything has __str__. These languages can interpolate anything because there’s always a fallback conversion available at runtime. OCaml’s type information is erased at compile time, so the compiler would need to know at compile time which conversion function to call for each interpolated expression – and the language has no mechanism for that. [1]

OCaml does have Printf.sprintf, which is actually quite nice and type-safe: The format string is statically checked by the compiler – if you pass an int where %s expects a string, you get a compile-time error, not a runtime crash. That’s genuinely better than what most dynamically typed languages offer. But it’s not interpolation – the values aren’t inline in the string, and for complex expressions it gets unwieldy fast.
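The original code sample did not survive extraction, so here is a minimal sketch of what type-checked Printf.sprintf formatting looks like (the greet function and strings are mine, not the post’s):

```ocaml
(* Printf.sprintf with a compile-time-checked format string: swapping the
   arguments, or passing an int where %s expects a string, is a type
   error rather than a runtime crash. *)
let greet name count =
  Printf.sprintf "Hello, %s! You have %d new messages." name count

let () = print_endline (greet "Alice" 3)
```

The compiler infers `greet : string -> int -> string` directly from the format string, which is the type safety the post is describing.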
There’s also plain string concatenation with ^: This works, but it’s ugly and error-prone for anything beyond trivial cases.

ppx_string is a Jane Street PPX that adds string interpolation to OCaml at compile time. The basic usage is straightforward: For non-string types, you specify the module whose to_string function should be used: The #Int suffix tells the PPX to call Int.to_string on the value, and #Float calls Float.to_string on it. Note that these to_string conventions come from Jane Street’s Base / Core libraries – the standard library uses string_of_int and so on, which won’t work with this syntax. This is another reason ppx_string really only makes sense within the Jane Street ecosystem. Any module that exposes a to_string function works here – including your own: You can also use arbitrary expressions inside the interpolation braces: Though at that point you might be better off with a separate let binding for readability.

A few practical things worth knowing:

- You need a (preprocess (pps ppx_string)) stanza in your dune file.
- String values interpolate directly, everything else needs a conversion suffix. Unlike Ruby where to_s is called implicitly, ppx_string requires you to be explicit about non-string types. This is annoying at first, but it’s consistent with OCaml’s philosophy of being explicit about types.
- It’s a Jane Street library. If you’re already in the Jane Street ecosystem (Base, Core, etc.), adding ppx_string is trivial. If you’re not, pulling in a Jane Street dependency just for string interpolation might feel heavy. In that case, Printf.sprintf is honestly fine.
- It doesn’t work with the Format module. If you’re building strings for pretty-printing, you’ll still want Format. ppx_string is for building plain strings, not format strings.
- Nested interpolation doesn’t work – you can’t nest one interpolation inside another. Keep it simple.

Honestly? Probably not as much as you think. I’ve been writing OCaml for a while now without it, and it rarely bothers me. Here’s why:

- Printf.sprintf is good. It’s type-safe, it’s concise enough for most cases, and it’s available everywhere without extra dependencies.
- Most string building in OCaml happens through Format. If you’re writing pretty-printers (which you will be), you’re using Format, not string concatenation or interpolation.
- OCaml code tends to be more compute-heavy than string-heavy. Compared to, say, a Rails app or a shell script, the typical OCaml program just doesn’t build that many ad-hoc strings.

That said, when you do need to build a lot of human-readable strings – error messages, log output, CLI formatting – interpolation is genuinely nicer than sprintf. If you’re in the Jane Street ecosystem, there’s no reason not to use ppx_string.

The lack of string interpolation in OCaml is one of those things that sounds worse than it actually is. In practice, Printf.sprintf and ^ cover the vast majority of use cases, and the code you write with them is arguably clearer about types than magical interpolation would be. It’s also a nice example of OCaml’s general philosophy: keep the language core small, provide solid primitives (Printf, Format), and let the PPX ecosystem fill in the syntactic sugar for those who want it. The same pattern plays out with ppx_deriving for printing, ppx_let for monadic syntax, and many other conveniences.

Will OCaml ever get built-in string interpolation? Maybe. There have been discussions on the forums over the years, and the language did absorb binding operators (let*, let+) from the PPX world. But I wouldn’t hold my breath – and honestly, I’m not sure I’d even notice if it landed.

That’s all I have for you today. Keep hacking!

[1] This is the same fundamental problem that makes printing data structures harder than in dynamically typed languages.

0 views
André Arko 2 days ago

Four months of Ruby Central moving Ruby backward

From the moment RubyGems was first created in 2004, Ruby Central provided governance without claiming ownership, to support the Ruby community. Providing governance meant creating processes to provide stability and predictability. Avoiding ownership meant allowing the community to contribute, to the point where unpaid volunteers created and controlled the entirety of RubyGems.org for many years.

Last year, Ruby Central flipped that successful formula on its head. They now claim ownership of both Bundler and RubyGems, but refuse to provide governance. Ruby Central now claims sole control over all code and decisions, despite paying for only a few percent of the work required to create and sustain the projects across 22 years. Instead of providing stable and predictable processes, Ruby Central suddenly hijacked the Bundler and RubyGems codebases away from the existing maintainers, shut out the community, and started issuing threats to sue.

When confronted by the former maintainers after the hijacking, Marty Haught of Ruby Central stated (in a recorded video call) on September 17 that “yeah, we shouldn’t have changed that”. On September 18, Marty went on to write:

In the past, we’ve made the mistake of conflating ownership of the code with ownership of the infra, and vice versa, and we’d like to straighten this out so that we aren’t put in a legal bind that requires us to take control of the entire codebase when, we all agree, that is not proper or correct given the existing model.

In the words of Ruby Central itself, “we all agree, [taking control of the entire codebase] is not proper or correct.” Since the beginning of this conflict, Ruby Central has privately admitted it was wrong to hijack the GitHub organization and steal the repos, but has refused to acknowledge this in public. Unfortunately, despite privately admitting their actions were wrong, Ruby Central has publicly continued to dig their hole deeper.
Instead of owning up to their mistake, they secretly negotiated a deal with Matz for ruby-core to take over the stolen RubyGems and Bundler repository, further violating the project governance policies.

If this situation were just about me personally, I could believe it sprang from individual disagreements. Ruby Central claims they had good reasons to unilaterally kick me out of the project, even though I don’t think their claims hold water. With that said, regardless of what you think about me personally, the other five long-term maintainers have never gotten any explanation of why they were suddenly kicked out or bypassed entirely, all in violation of existing project governance.

In her only public interview about the situation, Ruby Central Executive Director Shan Cureton defended stealing Bundler from its team of fifteen years by saying the removed team “didn’t need to have the story, and it wasn’t their story to have”. Ruby Central has made their position clear: if they steal your project, you are not entitled to know their reasons, and neither is anyone else. There is nothing “community-oriented” about stealing the most-used gem in Ruby and refusing to share your reasons with the community.

Despite Ruby Central’s unacceptable treatment of both projects and maintainers, the former RubyGems and Bundler team said we want to move Ruby forward. We offered Ruby Central a path to move past their illegitimate GitHub takeover, past their vicious personal attacks, and past their threats to sue us. It has been four months since we made that offer, and Ruby Central has not accepted.

While declining to accept our offer, Ruby Central has nonetheless found the time to propose new governance documents for RubyGems. In those documents, they explicitly require that existing maintainers approve adding or removing team members. That rule was already present in the previous governance, and is the exact rule that Ruby Central violated to execute their takeover.
When asked why they violated the previous governance, and why the new governance would be any more trustworthy, Ruby Central refused to respond substantively, and then the question itself was hidden by marking it “off topic”.

Instead of working to resolve the situation, Ruby Central has spent 4 months rejecting requests for an explanation, while repeatedly threatening to sue me personally. After Ruby Central suddenly took over the Bundler repo, I sent them a standard trademark notice. They replied with a threat to sue me. When I later informed Ruby Central I had learned they violated state employment law, they simply replied with the same threat to sue me again. They are threatening to sue me for “hacking” them, despite their own analysis publicly concluding “no evidence that user data or production operations were harmed”.

Without seeking common ground, or even looking for some sort of resolution we can just live with and move on from, Ruby Central has offered all of us — nothing. Ruby Central has made no offer in reply to outreach from the other five maintainers. To me, after four grueling months of private “negotiation”, their entire offer is nothing more than to refrain from suing. But only if I agree to everything that they want.

They say I must agree that I have no claim on the name Bundler, despite helping create it and leading the Bundler team for the last 15 years. They say I must agree I was paid legally and fairly, when California law clearly states I was not. They say I must agree that Ruby Central can take over open source projects they host, any time they feel like it, with no explanation, and no consequences.

I don’t agree. Letting this situation stay unaddressed sets a dangerous precedent for all open source projects written in Ruby. Ruby Central has resolved nothing. Don’t let their delaying tactics convince you otherwise.
The Ruby community cannot trust Ruby Central with control over our gems until there is accountability for destroying the very governance they were supposed to be providing.

Until accountability arrives, take action. Tell Ruby Central they owe everyone an explanation for violating the project governance around six long-term maintainers, not just me. Don’t sponsor, attend, or speak at RubyConf. Contribute to projects that aren’t controlled by Ruby Central.

The exiled maintainers are working on new projects, with a focus on clear governance, long-term financial sustainability, and community input: Join the gem.coop beta, and stop using RubyGems.org. Use jwl instead of RubyGems. Use Ruby Butler instead of Bundler.

A better world is possible! Ruby Central might want to keep Ruby in the past, but we can work together to build Ruby a future.

0 views
neilzone 2 days ago

I'm struggling to think of any online services for which I'd be willing to verify my identity or age

Identity verification and age verification is an increasingly common policy conversation at the moment, in numerous countries. Often, this is in combination with proposals to ban children from varying concepts of “social media”, which generally means that everyone would have to prove that they were not a child.

I have yet to see a well-considered proposal. Worse, the question that they are trying to answer is rarely stated clearly and concisely. And it is unusual to see any consideration of broader sociological issues, let alone an emphasis on this, with a focus instead on perceived “quick win” technosolutionism.

But anyway… I was pondering last night for which services I, personally, would actually be willing to verify my age or identity. And… the answer is “none”. At least, none that I can think of at the moment.

I appreciate that I compute in an unusual way (when compared with most computer users), and that much of what I do online is about accessing my own services. Some of those - my fedi server, my RSS server, my messaging services - are built around enjoying stuff from other people’s services.

Would I be willing to verify my identity or age to read someone’s RSS feed? No. While I enjoy the myriad blogs that I follow, none are crucial to me.

I occasionally watch videos (which started on YouTube, but which I download into my Jellyfin instance), and perhaps YouTube will be forced to do age verification. It would be a shame, but again, I’ll just not watch YouTube videos. Not a big loss. Mostly, I buy secondhand DVDs, rip them, and watch them from my Jellyfin instance. I haven’t been asked to verify my age for a DVD purchase (online or offline) in a very long time.

Friends have had to attempt to block access to their sites from the UK. While I can still access their sites via Tor, that’s what I tend to do.
I feel sorry for them for the likely significant drop in visitors, likely affecting their enjoyment and in some cases their revenue, and, probably, their incentive to continue to write / post / record stuff.

I don’t use any individual forums any more (their demise is a shame; I’d prefer this over centralised discussion sites), nor do I use Reddit. I occasionally look at the comments on HN if one of my posts is surfaced there, but if HN forced identity or age verification, I’d just stop doing it. No big deal for me.

Websites with comments sections? I don’t want to see the comments anyway, so I block those, which makes for a very pleasant browsing experience. I don’t comment myself.

Code forges / places to contribute to FOSS? Most of my FOSS contributions are non-code, but even so, I use some organisations’ GitLab repos, and occasionally I contribute to projects on other forges. I doubt that my contributions are meaningful in themselves, and it may not be an option to switch infrastructure in any case (that might not make the requirement go away), but since I am not a massive, or particularly valuable, contributor, I’d feel less bad about simply stepping away.

For Wikipedia, I’d probably rebuild my Kiwix instance and use that instead. Yes, articles would not be quite so up to date, but I rarely access Wikipedia for rapidly-changing information. In any case, there are tradeoffs, and personally I would prefer my privacy, the security of my personal data, and, well, just not being part of this kind of censorship.

Signal? That would be a pain. I don’t have a workaround for that. I’m happily using XMPP, but as a complement to Signal, not an alternative.

Teams/Zoom? I don’t have accounts on those services, but I do join, via my browser, when a client sends me a link. If I was faced with a choice of having to verify my identity/age for these services, then I’d have to consider the position carefully.
Realistically, I am not in a position to say “no, I will not use Teams”, as some long-term clients are not going to change their corporate approach just because Neil doesn’t like something, and I’d rather not lose them as clients. So that could be a pain, if those services were within scope.

I’ll still object to these measures - “I’m okay, Jack” would be a selfish stance - but, in practice, yes, I’d be surprised if they impacted me. Self-imposed (or, at least, self-controlled) digital isolationism, perhaps. Or perhaps, in the future, some service will pop up that I will really, really want to use, despite it requiring identity / age verification.

0 views