Latest Posts (20 found)

Prototyping with LLMs

Did you know that Jesus gave advice about prototyping with an LLM? Here’s Luke 14:28-30:

Suppose one of you wants to build a tower. Won’t you first sit down and estimate the cost to see if you have enough money to complete it? For if you lay the foundation and are not able to finish it, everyone who sees it will ridicule you, saying, ‘This person began to build and wasn’t able to finish.’

That pretty much sums me up when I try to vibe a prototype. Don’t get me wrong, I’m a big advocate of prototyping. And LLMs make prototyping really easy and interesting. And because it’s so easy, there’s a huge temptation to jump straight to prototyping.

But what I’ve been finding in my own behavior is that I’ll be mid-prototyping with the LLM and asking myself, “What am I even trying to do here?” And the thought I have is: “I’d be in a much more productive place right now if I’d put a tiny bit more thought upfront into what I am actually trying to build.” Instead, I just jumped right in, chasing a fuzzy feeling or idea, only to end up in a place where I’m more confused about what I set out to do than when I started.

Don’t get me wrong, that’s fine. That’s part of prototyping. It’s inherent to the design process to get more confused before you find clarity. But there’s an alternative to LLM prototyping that’s often faster and cheaper: sketching. I’ve found many times that if I start an idea by sketching it out, do you know where I end up? At a place where I say, “Actually, I don’t want to build this.” And in that case, all I have to do is take my sketch and throw it away. It didn’t cost me any tokens or compute to figure that out. Talk about efficiency!

I suppose what I’m saying here is: it’s good to think further ahead than the tracks you’re laying out immediately in front of you. Sketching is a great way to do that. (Thanks to Facundo for prompting these thoughts out of me.)

Reply via: Email · Mastodon · Bluesky


I Hate Insurance!

So yesterday I received an email from Admiral, our insurance provider, where we have a combined policy for both our cars and our home. Last year this cost £1,426.00, but this year the renewal had gone up by a huge 33%, to £1,897.93, broken down as follows:

Wife's car - £339.34
My car - £455.68
Our home (building & contents) - £1,102.91

Even at last year's price this was a shit tonne of money, so I started shopping around and here's what I ended up with:

Wife's car - £300.17
My car - £402.22
Our home (building and contents) - £533.52
Total: £1,056.86 (44% reduction!)

These policies have at least the same cover as Admiral. In some cases, better. I knew it would be cheaper shopping around, but I didn't think it would be nearly half. So, I called Admiral to see what they could do for me, considering I've been a loyal customer for 7 years. They knocked £167.83 (8.8%) off the policy for me, bringing the revised total to £1,730.10. Nice to see that long-term customers are rewarded with the best price! 🤷🏻‍♂️

So I obviously went with the much cheaper option and renewed with 3 different companies. It's a pain, as I'll now need to renew 3 policies at the same time every year, but if it means saving this much money, I'm happy to do it. Next year I'll get a multi-quote from Admiral to see if they're competitive. Something tells me they won't be, as with most things these days, getting new customers is more important than retaining existing ones.

Unfortunately having car and home insurance is a necessary evil in today's world, but I'm glad I was able to make it a little more palatable by saving myself over £700! If your insurance is up for renewal, don't just blindly renew - shop around as there's some serious savings to be had.

Thanks for reading this post via RSS. RSS is ace, and so are you. ❤️ You can reply to this post by email, or leave a comment.

iDiallo Today

AI Did It in 12 Minutes. It Took Me 10 Hours to Fix It

I've been working on personal projects since the 2000s. One thing I've always been adamant about is understanding the code I write. Even when Stack Overflow came along, I was that annoying guy who told people not to copy and paste code into their repos. Instead, they should read it and adapt it to their specific case. On personal projects, I've applied this to a fault. Projects never get done because I'm reading and editing code to make it work exactly as I want.

I am by no means trying to convince you that my code is high quality. Every day, I regret the design choices I made for this very blog. But at the very least, I like to understand the code that powers my projects. So you can imagine how I struggle with the reviewing part when AI writes a large chunk of our daily work. Large language models are just so verbose, and often produce large blocks of code that don't even get used. I don't want to attribute it to malice (wasting your tokens) when I know this is an emergent technology we are all still adapting to. But it doesn't help that there is just so much code to review. What I tell myself when I review an AI-generated PR is: if I don't have a mental model of how the application works, how can I be of any use when it fails?

This weekend, I decided to tackle a project I've been postponing since I created this blog over a decade ago. I needed a nice interface to upload assets, such as images, to go with each blog post. According to my git history, I started work on it 11 years ago but never finished. Yet I've still been uploading images regularly to go with blog posts. How? Like a caveman. Whenever I finish writing a blog post, I create images in Photoshop or Photopea, fire up FileZilla to connect to my file server, and drag and drop my assets. Then, of course, I have to log back into the blog's admin panel and provide the correct URLs for those assets. It's as primitive as it gets. This was a perfect project to tackle with AI.
The good news is that I was able to complete it in a single day. The bad news? Well, I read through close to 5,000 lines of code.

This is the first time I've used an LLM to write PHP code. I used the GLM-5 model from z.ai, which I've often played with for generating graphs from data. I'll have to try it on a few more PHP projects before I commit to this opinion, but it consistently wrote spaghetti code. Spaghetti code, to me, is anything where logic is embedded directly in a template. HTML, CSS, and database queries all crammed into the same file. That's really hard to maintain.

I did realize, though, that my instructions were in direct contradiction with the agent's system prompt. It was explicit in the thinking process:

This is NOT a Next.js project - the user explicitly wants a PHP app. So I should NOT use the Next.js project that's already initialized. I should create a PHP application. Wait, but the instructions say "You are an interactive CLI tool that helps users with software engineering tasks" and "Now you are developing a comprehensive and feature-rich Next.js project from scratch." But the user is explicitly asking for a PHP app. I should follow the user's request.

It did generate a bunch of Node.js code, which I was able to remove manually. Luckily, it kept the PHP project in its own folder. If you're wondering how 12 files contain ~5,000 lines of code, I wondered the same. But that's what spaghetti code does.

I set it up locally, ran the install script, and a few more files and folders were generated. When I finally ran the application, it didn't work. I spent a few hours working through permissions, updating the install script, and modifying the SQLite setup. I thought StackOverflow was dead, but I don't think I would have gotten SQLite working without it. One error, for example, was that SQLite kept throwing a warning that it was running in read-only mode. Apparently, you have to make the parent folder writable (not just the database file) to enable write mode.
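That directory-permissions gotcha generalizes beyond PHP: SQLite creates journal/WAL files next to the database file, so the parent directory itself must be writable, not just the .db file. Here is a minimal sketch in Python (illustrative only; the blog's project is PHP, and the open_db helper name is my own invention):

```python
import os
import sqlite3
import tempfile

def open_db(path):
    """Open an SQLite database, first checking that the *parent directory*
    is writable. SQLite creates journal/WAL files alongside the .db file,
    so a read-only directory makes writes fail even when the database
    file itself is writable."""
    parent = os.path.dirname(os.path.abspath(path))
    if not os.access(parent, os.W_OK):
        raise PermissionError(f"SQLite needs write access to directory {parent!r}")
    return sqlite3.connect(path)

# Works: a fresh temp directory is writable, so writes succeed.
with tempfile.TemporaryDirectory() as d:
    conn = open_db(os.path.join(d, "assets.db"))
    conn.execute("CREATE TABLE assets (name TEXT)")
    conn.execute("INSERT INTO assets VALUES ('header.png')")
    conn.commit()
    conn.close()
```

Checking the directory up front turns a confusing "read-only database" warning at write time into an immediate, explicit error at startup.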
It had been a long time since I'd manually included files in PHP. I normally use namespaces and autoload. Since this project was generated from scratch, I had to hunt down various require statements that all had incorrect paths. Once I sorted those out, I had to deal with authentication. PHP sessions come with batteries included: you call session_start() and you can read and write session variables via the $_SESSION superglobal. But I couldn't figure out why it kept failing. When I created a standalone test file, sessions worked fine. But when loaded through the application, values weren't being saved. I spent a good while debugging before I found the call that was missing from the login success flow. When I logged in, the page redirected to the dashboard, but every subsequent action that required authentication immediately kicked me out.

Even after fixing all those issues and getting uploads working, something still bothered me: how do I maintain this code? How do I add new pages to manage uploaded assets? Do I add meatballs directly to the spaghetti? Or do I just trust the AI agent to know where to put new features? Technically it could do that, but I'd have to rely entirely on the AI without ever understanding how things work.

So I did the only sane thing: I rewrote a large part of the code and restructured the project. Maybe I should have started there, but I didn't know what I wanted until I saw it. Which is probably why I had been dragging this project along for 11 years. Yes, now I have 22 files, almost double the original count. But the code is also much simpler at just 1,254 lines. There's far less cognitive load when it comes to fixing bugs. There's still a lot to improve, but it's a much leaner foundation.

The question I keep coming back to is: would it have been easier to do this manually? Well, the timeline speaks for itself. I had been neglecting this project for years. Without AI, I probably never would have finished it. That said, it would have been easier to build on my existing framework.
My blog's framework has been tested for years and has accumulated a lot of useful features: a template engine, a working router, an auth system, and more. All things I had to re-engineer from scratch here. If I'd taken the time to work within my own framework, it probably would have taken less time overall. But AI gave me the illusion that the work could be done much faster. Z.ai generated the whole thing in just 12 minutes. It took an additional 10 hours to clean it up and get it working the way I wanted.

This reminds me of several non-technical friends who built/vibe-coded apps last year. The initial results looked impressive. Most of them don't have a working app anymore, because they realized that the cleanup is just as important as the generation if you want something that actually holds together. I can only imagine what "vibe-debugging" looks like.

I'm glad I have a working app, but I'm not sure I can honestly call this vibe-coded. Most, if not all, of the files have been rewritten. When companies claim that a significant percentage of their code is AI-generated, do their developers agree? For me, it's unthinkable to deploy code I haven't vetted and understood. But I'm not the benchmark.

In the meantime, I think I've earned the right to say this the next time I ship an AI-assisted app: "I apologize for so many lines of code - I didn't have time to write a shorter app."


The difference between a company that makes money and a company that makes something worth caring about

David Sparks blogs that companies whose leaders actually give a damn about the products are the ones worth watching: You could argue that’s unhealthy. Maybe it is. But there’s something about a CEO who feels physical pain when the product falls short. That energy flows downhill. When the person at the top cares that much, everyone else figures out pretty quickly that they’d better care too. […] You can spot it pretty easily. When a CEO talks about their company, do they talk about the product or the business? Walt talked about the park. Steve talked about the iPhone. Jensen talks about the chip. The ones who love the product can’t help themselves. The ones who don’t talk about market share and strategic initiatives. Sparks’ sentiment pairs well with Marco Arment’s letter to presumed future Apple CEO John Ternus: Apple doesn’t settle for fine, functional, or good enough in its hardware (and thanks for your incredible work on that). We love making and using products that aren’t just great, but greater than they need to be, always raising the bar of greatness for its own sake. Software, services, revenue sources, and world impact need to be held to that same standard. Focus on making great computers with great user experiences above all else, and you can trust that every other major goal will follow: profit, market share, expansion, impact, and benefit to the world. We have high expectations for Ternus. I hope he can live up to them. HeyDingus is a blog by Jarrod Blundy about technology, the great outdoors, and other musings. If you like what you see — the blog posts , shortcuts , wallpapers , scripts , or anything — please consider leaving a tip , checking out my store , or just sharing my work. Your support is much appreciated! I’m always happy to hear from you on social , or by good ol' email .


OpenAI Buys TBPN, Tech and the Token Tsunami

OpenAI's purchase of TBPN makes no sense, which may be par for the course for OpenAI. Then, AI is breaking stuff, starting with tech services.


2 museums, 3 exhibitions - work culture, oceans, sex work

Used the free days to go to two museums; ended up visiting an exhibition about work culture, one about the oceans, and one about sex work. All museum texts include an English portion, so don't get discouraged when you spot German words; feel free to look for the translation if you do not understand the German portion :)

This exhibition reminded me again why museums are so great. I can go years without stepping foot in a museum at times, and some bad and disappointing experiences make it harder to justify. This one made me happy to go again :) Not only was the art really interesting and inspiring, but the participation options were varied and engaging. Lots of stats, options to discuss, being able to put stickers on what applies to you, rating different activities on whether they qualify as free time or labour by using red or green felt balls, using string to vote on labour strikes, adding your own thoughts on a little paper you stick on a board with questions... it's so cool when museum visitors become part of the exhibition.

I learned about Taylorism too. They had various books and ads in the exhibition that were used to share that model back then. It was praised as revolutionary, as the new way forward... and the optimization was intense. Everything had to be normed and unified, every production step analyzed, broken down and written down... Searching for Taylorism online turns up very watered-down, basic-productivity-type stuff; it seemed to be much more hardcore in these materials in Germany back then. It wasn't just applied to things like building machinery and conveyor-belt work, where it genuinely made some work faster, safer, easier to reproduce and higher in quality, but was also imported into the private sphere. They had a book there that was about "the new way to run the household", which applied Taylorism to household chores and even the design and layout of the house.
They had whole kitchen layouts that were optimized so that the path between different items and kitchen devices was short and not blocked by the table or chairs; the housewife operating like a worker in a factory, learning specific steps and paths by heart to be the most efficient.

It reminded me so much of the culture and language around AI. Ask anyone nowadays and probably no one is doing a Taylorist layout of their home or directly referencing Taylorism in their own self-improvement journey; still, Taylorism changed work and work culture. Seeing the book saying Taylorism is the new way forward in the home, and knowing how it actually is now, felt very similar to the marketing around smart homes, and AI bros telling everyone it will run everything, and that you should let it run all your personal projects and self-improvement or else you'll get left behind both privately and professionally. Pretty interesting!

Also, look at this caricature predicting Zoom/Teams calls as early as 1926:

Have some of the interesting explanation signs I saw: The worker as Christ, sacrificing himself for capitalism (the glass art not pictured). Some stats: As a surprise to no one, Germans would like to sleep and hike more. Hurts to see how proud we used to be about our social security systems; the spending was seen as progressive, positive, a sign of wealth and power. Now we starve these systems to death. We went out to eat after:

I was a little let down by this one because of my own expectations. I had expected more focus on the actual ocean, instead of centering the human so hard. It was all: We sent this device down there to do research, we built boats, we use things to cross the ocean, we deliver stories and ideas via crossing the ocean, we make up creepy stories about the ocean; transatlantic slave trade, migration, etc.
and some of our impact on destroying the ocean, climate change, overfishing (while not really daring to criticize fishing, because they don't want to offend the visitors who still eat fish, I guess). It was depressing, but accidentally so; it didn't feel like they actually wanted to teach people anything about the ocean itself, or what they can change to not contribute to the issues the ocean faces. It was more a shrine, an altar to human intervention, celebrating oil rigs and the extraction of resources from the ocean. It didn't seem to celebrate the animals and other organisms much beyond using them to gawk at or eat. So I didn't take many pictures... The highlight was definitely the huge yarn corals (bigger than just the part in the picture).

Next up is the exhibition on sex work! This one was emptier and nicer to visit, and definitely worth it. Beautiful, interesting, very good graphics about legislation around the world, notable sex work spots over the course of history in Germany, big events and personalities in the sex work scene. I was surprised that digital modes of sex work were mentioned on one text sign and otherwise not shown or discussed; it was very, very focused on street sex work, bars, clubs, brothels. I think instead of a section covering witches (for some reason?), I'd have appreciated a section on ""modern"" sex work, in which people livestream, sell pictures and videos, and make custom content. OF especially has changed the respectability of some sex work and has enabled many sex workers to do it from the safety of their own home, more comfortably, and to reach more customers all around the globe. People who would not otherwise have done sex work now do sex work due to these platforms (even I did, in 2019). I think that deserves to be covered and discussed.

Look at this beautiful but very sad quilt about sex workers facing police violence: Here's Marsha: Lots of the art pieces had stories included with them.
I liked how it humanized sex work and its workers; everyone can relate to weird, funny, odd, dangerous or morally grey work experiences, especially with customers.

AIDS and Covid are very difficult topics. The distrust in governments due to the AIDS epidemic is justified; it was handled badly. And you can see how Covid measures were also used to punish the unwanted, the criminalized, the ones without a lobby. All kinds of companies in a variety of industries got financial support and workarounds to remain in business, while sex workers were left to fend for themselves; no support, just prohibition. No dialogue with the workers on how to make their work safe, just seeing them as a danger. Covid of course posed different challenges than HIV transmission, but it could have been handled better. It shows you how the government handles crises for people it cannot milk for money.

Sex workers are generally overlooked as victims of the Nazis. Of course, sometimes their other identities overlap with groups that have been honored and have public memorials (Jewish, Sinti and Roma, queer, disabled etc.), but the part of them that was targeted for their sex work never got any justice or memorial, and neither did people who were targeted only for their sex work and not for other parts of their identities. Sex workers were regarded as "asocial" and "degenerate", were institutionalized, and were also subjected to involuntary hospitalizations, forced labour and more. Just as the NS regime tried to argue for born killers and other supposed signs that someone was going to become a criminal or otherwise "undesirable", it argued that some women are just "born prostitutes".

The exhibition had different maps of bigger German cities like Cologne, Hamburg and Berlin and their popular cruising and sex work spots. This piece of info stood out to me: The biggest, most well-known trans and gay bar in Berlin was converted into the headquarters of the SA.
Other than that, the exhibition had some examples of makeshift dildos, the first condoms, and some amazing video interviews. A little chapel with a water fountain serves as a memorial for all the sex workers who have been killed. Cool that you made it this far. Reply via email Published 06 Apr, 2026


Germany Doxes “UNKN,” Head of RU Ransomware Gangs REvil, GandCrab

An elusive hacker who went by the handle “UNKN” and ran the early Russian ransomware groups GandCrab and REvil now has a name and a face. Authorities in Germany say 31-year-old Russian Daniil Maksimovich Shchukin headed both cybercrime gangs and helped carry out at least 130 acts of computer sabotage and extortion against victims across the country between 2019 and 2021.

Shchukin was named as UNKN (a.k.a. UNKNOWN) in an advisory published by the German Federal Criminal Police (the “Bundeskriminalamt” or BKA for short). The BKA said Shchukin and another Russian — 43-year-old Anatoly Sergeevitsch Kravchuk — extorted nearly 2 million euros across two dozen cyberattacks that caused more than 35 million euros in total economic damage.

Daniil Maksimovich SHCHUKIN, a.k.a. UNKN, and Anatoly Sergeevitsch Kravchuk, alleged leaders of the GandCrab and REvil ransomware groups.

Germany’s BKA said Shchukin acted as the head of GandCrab and REvil, two of the largest ransomware groups operating worldwide, which pioneered the practice of double extortion — charging victims once for a key needed to unlock hacked systems, and a separate payment in exchange for a promise not to publish stolen data.

Shchukin’s name appeared in a Feb. 2023 filing (PDF) from the U.S. Justice Department seeking the seizure of various cryptocurrency accounts associated with proceeds from the REvil ransomware gang’s activities. The government said the digital wallet tied to Shchukin contained more than $317,000 in ill-gotten cryptocurrency.

The GandCrab ransomware affiliate program first surfaced in January 2018, and paid enterprising hackers huge shares of the profits just for hacking into user accounts at major corporations. The GandCrab team would then try to expand that access, often siphoning vast amounts of sensitive and internal documents in the process.
The malware’s curators shipped five major revisions to the GandCrab code, each corresponding with sneaky new features and bug fixes aimed at thwarting the efforts of computer security firms to stymie the spread of the malware. On May 31, 2019, the GandCrab team announced the group was shutting down after extorting more than $2 billion from victims.

“We are a living proof that you can do evil and get off scot-free,” GandCrab’s farewell address famously quipped. “We have proved that one can make a lifetime of money in one year. We have proved that you can become number one by general admission, not in your own conceit.”

The REvil ransomware affiliate program materialized around the same time as GandCrab’s demise, fronted by a user named UNKNOWN who announced on a Russian cybercrime forum that he’d deposited $1 million in the forum’s escrow to show he meant business. By this time, many cybersecurity experts had concluded REvil was little more than a reorganization of GandCrab.

UNKNOWN also gave an interview to Dmitry Smilyanets, a former malicious hacker hired by Recorded Future, wherein UNKNOWN described a rags-to-riches tale unencumbered by ethics and morals. “As a child, I scrounged through the trash heaps and smoked cigarette butts,” UNKNOWN told Recorded Future. “I walked 10 km one way to the school. I wore the same clothes for six months. In my youth, in a communal apartment, I didn’t eat for two or even three days. Now I am a millionaire.”

As described in The Ransomware Hunting Team by Renee Dudley and Daniel Golden, UNKNOWN and REvil reinvested significant earnings into improving their success and mirroring practices of legitimate businesses. The authors wrote: “Just as a real-world manufacturer might hire other companies to handle logistics or web design, ransomware developers increasingly outsourced tasks beyond their purview, focusing instead on improving the quality of their ransomware.
The higher quality ransomware—which, in many cases, the Hunting Team could not break—resulted in more and higher pay-outs from victims. The monumental payments enabled gangs to reinvest in their enterprises. They hired more specialists, and their success accelerated.” “Criminals raced to join the booming ransomware economy. Underworld ancillary service providers sprouted or pivoted from other criminal work to meet developers’ demand for customized support. Partnering with gangs like GandCrab, ‘cryptor’ providers ensured ransomware could not be detected by standard anti-malware scanners. ‘Initial access brokerages’ specialized in stealing credentials and finding vulnerabilities in target networks, selling that access to ransomware operators and affiliates. Bitcoin “tumblers” offered discounts to gangs that used them as a preferred vendor for laundering ransom payments. Some contractors were open to working with any gang, while others entered exclusive partnerships.” REvil would evolve into a feared “big-game-hunting” machine capable of extracting hefty extortion payments from victims, largely going after organizations with more than $100 million in annual revenues and fat new cyber insurance policies that were known to pay out. Over the July 4, 2021 weekend in the United States, REvil hacked into and extorted Kaseya , a company that handled IT operations for more than 1,500 businesses, nonprofits and government agencies. The FBI would later announce they’d infiltrated the ransomware group’s servers prior to the Kaseya hack but couldn’t tip their hand at the time. REvil never recovered from that core compromise, or from the FBI’s release of a free decryption key for REvil victims who couldn’t or didn’t pay. Shchukin is from Krasnodar, Russia and is thought to reside there, the BKA said. “Based on the investigations so far, it is assumed that the wanted person is abroad, presumably in Russia,” the BKA advised. 
“Travel behaviour cannot be ruled out.”

There is little that connects Shchukin to UNKNOWN’s various accounts on the Russian crime forums. But a review of the Russian crime forums indexed by the cyber intelligence firm Intel 471 shows there is plenty connecting Shchukin to a hacker identity called “Ger0in” who operated large botnets and sold “installs” — allowing other cybercriminals to rapidly deploy malware of their choice to thousands of PCs in one go. However, Ger0in was only active between 2010 and 2011, well before UNKNOWN’s appearance as the REvil front man.

A review of the mugshots released by the BKA at the image comparison site Pimeyes found a match on this birthday celebration from 2023, which features a young man named Daniel wearing the same fancy watch as in the BKA photos.

Images from Daniil Shchukin’s birthday party celebration in Krasnodar in 2023.

Update, April 6, 12:06 p.m. ET: A reader forwarded this English-dubbed audio recording from a ccc.de (37C3) conference talk in Germany from 2023 that previously outed Shchukin as the REvil leader (Shchukin is mentioned at around 24:25).

HeyDingus Yesterday

7 Things (Which Are Songs I’ve Been Obsessed With) This Week [#185]

A weekly list of interesting things I found on the internet, posted on Sundays. Sometimes themed, often not. 1️⃣ “ Badlands” by Mumford & Sons & Gracie Abrams 2️⃣ “ Easier Gone” by Jason Aldean & Brittany Aldean 3️⃣ “ Grace Kelly” by Piper.Ally 4️⃣ “ Forever Start (Stripped)” by Ryan Nealon & Jillian Rossi 5️⃣ “ FTS ” by The Summer Set & Travie McCoy 6️⃣ “ Opalite” by Taylor Swift 7️⃣ “ Angels Like You” by Miley Cyrus Thanks for reading 7 Things . If you enjoyed these links or have something neat to share, please let me know . And remember that you can get more links to internet nuggets that I’m finding every day by following me @jarrod on the social web.


What next for the compute crunch?

I thought it'd be a good time to continue on the same theme as my previous two articles, The Coming AI Compute Crunch and Is the AI Compute Crunch Here?, given that both OpenAI and Anthropic are now publicly agreeing they are (very?) compute starved.

I came across this really interesting tweet from the COO of GitHub, which underlines the scale of change the world is seeing now: This shows that GitHub in the last 3 months (!) has seen a ~14x annualised increase in the number of commits. Commits are a crude proxy for inference demand - but even directionally, if we assume that most of the increase is due to coding agents hitting the mainstream, it points to an outrageously large increase in compute requirements for inference. If anything, this is probably a huge undercount - many people new to "vibe coding" are unlikely to get their heads round Git(Hub) quickly - distributed source control is quite confusing to non-engineers (and, at least for me, took longer than I'd like to admit to get totally fluent with as an engineer). Plus this doesn't include all the Cowork usage, which is very unlikely to go anywhere near GitHub.

OpenAI's Thibault Sottiaux (head of the Codex team) also tweeted recently that AI companies are going through a phase of demand outstripping supply: It's been rumoured - and indeed in my opinion highly likely given how compute intensive video generation is - that Sora was shut down to free up compute for other tasks. All AI companies are feeling this intensely. Even worse, there is a domino effect: when Claude Code starts tightening usage limits or experiencing compute-related outages, people start switching to e.g. Codex or OpenCode, putting increased pressure on them.

As I mentioned in my last articles, I believe everyone was looking at these "crazy" compute deals that OpenAI, Anthropic, Microsoft etc. were signing like they were going out of fashion back in ~2025 the wrong way.
Signing a $100bn "commitment" to buy a load of GPU capacity does not suddenly create said capacity. Concrete needs to be poured, power needs to be connected, natural gas turbines need to be ordered [1] and GPUs need to be fabricated, racked and networked. All of these products are in short supply, as is the labour required.

One of the key points I think worth highlighting, and that often gets overlooked, is how difficult the rollout of GB200 (NVidia's latest chips) has been. Unlike previous generations of GPUs from NVidia, the GB200 series is fully liquid cooled - not air cooled as before. Liquid cooling at gigawatt scale just hasn't really been done in datacentres before. From what I've heard it's been unbelievably painful. Liquid cooling significantly increases the power density per m², which makes the electrical engineering required harder - and a real shortage of skilled labour [2] to plumb it all together, plus shortages of various high-end plumbing components, has left most (all?) of the GB200 rollout vastly behind schedule. While no doubt these issues will get resolved - and the supply chains will gain experience and velocity in delivering liquid cooled parts - this has no doubt put even more pressure on what compute is available in the short to medium term.

Even worse, Stargate's 1GW under-construction datacentre in the UAE is now a chess piece in the geopolitical tensions of the recent US/Iran conflict, with the Iranian government putting out a video featuring the construction site.

The longer-term issue I wrote about in my previous articles on this subject is the hard constraint on DRAM fabrication. While SK Hynix recently signed an $8bn deal for more EUV production equipment from ASML, it's unlikely to come online for another couple of years. Indeed, I noticed Sundar Pichai specifically called out memory as a significant constraint on his recent appearance on the Stripe podcast.
While recent innovations like TurboQuant are extremely promising in driving memory requirements down via KV cache compression, given the pace at which AI usage is growing it at best buys a small window of breathing room. I believe the next 18-24 months are going to be defined by compute shortages. When you have exponential demand increases and ~linear additions on the supply side, the market is going to be pretty volatile, to say the least. The cracks are already showing. Anthropic's uptime is now famously at "one nine" of reliability, and doesn't seem to be getting any better. I don't envy the pressure on SRE teams trying to scale these systems dramatically while deploying new models and efficiency strategies. We've seen Anthropic introduce increasingly heavy-handed measures on the Claude subscription side - starting with "peak time" usage limits being cut significantly, and now moving to ban even usage from 3rd party agent harnesses - no doubt to try and reduce demand. The issue is that if my guesswork at the start of the article is correct and Anthropic is seeing ~10x Q/Q inference demand, there is only so much you can do by banning 3rd party use of the product - 1st party use will quickly eat that up. And time-based rationing - while extremely useful for smoothing out the peaks and troughs - can only go so far. Eventually you incentivise it enough that you max out your compute 24/7. That's not to say there isn't a lot more they can (and will) do here, but when you are facing those kinds of demand increases it doesn't get you to a steady state. That really only leaves one lever to pull - price. I was hesitant in my previous articles to suggest major price increases, as gaining marketshare is so important to everyone involved in this trillion dollar race, but if all AI providers are compute starved then I think the game theory involved changes.
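The "exponential demand, ~linear supply" dynamic can be made concrete with a toy model. The growth numbers below (demand doubling per quarter, one fixed unit of capacity added per quarter) are illustrative assumptions of mine, not forecasts from the article:

```go
package main

import "fmt"

// shortfallAfter models demand that doubles every quarter against
// supply that grows by a fixed increment per quarter, and returns
// demand minus supply. All units are arbitrary.
func shortfallAfter(quarters int) float64 {
	demand, supply := 1.0, 2.0
	for q := 0; q < quarters; q++ {
		demand *= 2 // exponential demand growth (assumed rate)
		supply += 1 // ~linear capacity additions (assumed rate)
	}
	return demand - supply
}

func main() {
	for q := 1; q <= 8; q++ {
		fmt.Printf("Q%d: shortfall %.0f units\n", q, shortfallAfter(q))
	}
}
```

Whatever the exact rates, the shape is the same: supply keeps pace for a quarter or two, then the gap compounds away from it.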
The paradox of this though is that as models get better and better - and the rumours around the new "Spud" and "Mythos" models from OpenAI and Anthropic point that way - users get less price sensitive. While spending $200/month when ChatGPT first brought out their Pro subscription seemed almost comically expensive for the value you could get out of it, I class my $200/month Anthropic subscription as some of the best value going and would probably pay a lot more for it if I had to, even with current models. We're in completely uncharted territory as far as I can tell. I've recently been doing a lot of reading about the initial electrification of Europe and North America in the late 1800s/early 1900s, but the analogy quickly breaks down - the demand growth now is so much steeper and the supply issues then were far less concentrated. So, we're about to find out what people will actually pay for intelligence on tap. My guess is a lot more than most expect - which is both extremely bullish for the industry and going to be extremely painful for users in the short term. [3] Fundamentally, I believe there is a near-infinite demand for machines approaching or surpassing human cognition, even if that capability is spread unevenly across domains. The supply will catch up eventually. But it's the "eventually" that's going to hurt. Increasingly large AI datacentres are skipping grid connections (too slow to come online) and connecting straight to natural gas pipelines and installing their own gas turbines and generation sets ↩︎ I've also read that various manufacturing problems from NVidia have led to parts leaking, which famously does not combine well with high voltage electrical systems. ↩︎ One flip side of this is how much better the small models have got. I'll be writing a lot more on this, but Gemma 4 26b-a4b running locally is hugely impressive for software engineering.
It's not quite good enough, but perhaps we are only a few months off local models on consumer hardware being "good enough". Maybe it's worth buying that Mac or GPU you were thinking about as a hedge? ↩︎

0 views

Binary Lambda Calculus is Hard

Read on the website: Binary Lambda Calculus is a really alluring idea. But it’s also hard to grasp and use! Here’s my list of complaints and obstacles to using BLC.

0 views
Jim Nielsen Yesterday

I Tried Vibing an RSS Reader and My Dreams Did Not Come True

Simon Willison wrote about how he vibe coded his dream presentation app for macOS . I also took a stab at vibe coding my dream app: an RSS reader. To clarify: Reeder is my dream RSS app and it already exists, so I guess you could say my dreams have already come true? But I’ve kind of always wanted to try an app where my RSS feed is just a list of unread articles and clicking any one opens it in the format in which it was published (e.g. the original website). So I took a stab at it. (Note: the backend portion of this was already solved, as I simply connected to my Feedbin account via the API .) First I tried a macOS app, something I never would’ve attempted before. Xcode, Swift, a Developer Account? All completely outside my wheelhouse. But AI helped me get past that hurdle of going from nothing to something . It was fun to browse articles and see them in situ . A lot of folks have really great personal websites so it’s fun to see their published articles in that format. This was pretty much pure vibes. I didn’t really look at the code at all because I knew I wouldn’t understand any of it. I got it working the first night I sat down and tried it. It was pretty crappy but it worked. From there I iterated. I’d use it for a day, fix things that were off, keep using it, etc. Eventually I got to the point where I thought: I’m picky about software, so the bar for my dreams is high. But I’m also lazy, so my patience is quite low. The intersection of: the LLM failing over and over + my inability to troubleshoot any of it + not wanting to learn = a bad combination for persevering through debugging. Which made me say: “Screw it, I’ll build it as a website!” But websites don’t really work for this kind of app because of cross-origin restrictions. I can’t just stick an article’s URL in an iframe and preview it, because certain sites send headers (like X-Frame-Options) that don’t allow them to display under another domain. But that didn’t stop me. I tried building the idea anyway as just a list view.
I could install this as a web app on my Mac and I'd get a simple list view: Anytime I clicked on a link, it would open in my default browser. Actually not a bad experience. It worked pretty decently on my phone too. Once I visited my preview deploy, I could "install" it to my home screen and then when I opened it, I'd have my latest unread articles. Clicking on any of them would open a webview that I could easily dismiss and get back to my list. Not too bad. But not what I wanted, especially on desktop. It seemed like the only option to 1) get exactly what I wanted, and 2) distribute it — all in a way that I could understand in case something went wrong or I had to overcome an obstacle — was to make a native app. At this point, I was thinking: “I’m too tired to learn Apple development right now, and I’ve worked for a long time on the web, so I may as well leverage the skills that I got.” So I vibed an Electron app because Electron will let me get around the cross site request issues of a website. This was my very first Electron app and, again, the LLM helped me go from nothing to something quite quickly (but this time I could understand my something way better). The idea was the same: unread articles on the left, a preview of any selected articles on the right. Here’s a screenshot: It’s fine. Not really what I want. But it’s a starting point. Is it better than Reeder? Hell no. Is it my wildest dreams realized? Also no. But it’s a prototype of an idea I’ve wanted to explore. I’m not sure I’ll go any further on it. It’s hacky enough that I can grasp a vision for what it could be. The question is: do I actually want this? Is this experience something I want in the long run? I think it could be. But I have to figure out exactly how I want to build it as a complementary experience to my preferred way of going through my RSS feed. Which won't be your preference. Which is why I'm not sharing it. So what’s my takeaway from all this? I don’t know.
That’s why I’m typing this all out in a blog post. Vibe coding is kinda cool. It lets you go from “blank slate” to “something” way faster and easier than before. But you have to be mindful of what you make easy . You know what else is easy? Fast food. But I don’t want that all the time. In fact, vibe coding kinda left me with that feeling I get after indulging in social media, like “What just happened? Two hours have passed and what did I even spend my time doing? Just mindlessly chasing novelty?” It’s fun and easy to mindlessly chase your whims. But part of me thinks the next best step for this is to sit and think about what I actually want, rather than just yeeting the next prompt out. I’ve quipped before that our new timelines are something like: The making from nothing isn't as hard anymore. But everything after that still is. Understanding it. Making it good. Distributing it. Supporting it. Maintaining it. All that stuff. When you know absolutely nothing about those — like I did with macOS development — things are still hard. After all this time vibing, instead of feeling closer to my dream, I actually kinda feel further from it. Like the LLM helped close the gap in understanding what it would actually take for me to realize my dreams. Which made me really appreciate the folks who have poured a lot of time and thought and effort into building RSS readers I use on a day-to-day basis. Thank you makers of Feedbin & Reeder & others through the years. I’ll gladly pay you $$$ for your thought and care. In the meantime, I may or may not be over here slowly iterating on my own supplemental RSS experience. In fact, I might’ve just found the name: RxSSuplement. Reply via: Email · Mastodon · Bluesky Ok, I could use this on my personal computer.
I don’t know that I’ll be able to iterate on this much more because it’s getting more complicated and failing more and more with each ask (I was just trying to move some stupid buttons around in the UI and the AI was like, “Nah bro, I can’t.”) I have no idea how I’d share this with someone. I don’t think I’d be comfortable sharing this with someone (even though I think I did things like security right by putting credentials in the built-in keychain, etc.) I guess this is where the road stops. Nothing -> Something? 1hr. Something -> Something Good ? 1 year.

0 views
Rik Huijzer Yesterday

COVID vs Oil Crisis

COVID declared a pandemic on 11/3 (11*3=33) 2020. On 11/3 2026, the IEA wrote _"The IEA Secretariat will provide further details of how this collective action will be implemented in due course. It will also continue to closely monitor global oil and gas markets and to provide recommendations to Member governments, as needed."_

0 views
Brain Baking Yesterday

Remakes And Remasters Of Old DOS Games: A Small 2026 Update

It’s been two years since the Remakes And Remasters Of Old DOS Games article. Nostalgia still sells handsomely, thus our favourite remaster studios (hello Nightdive) are cranking out hit after hit. It’s time for a small 2026 update. I’ve also updated the original article just in case you might find your way here through that one. Below is a list of remakes and remasters announced and/or released since April 2024: Guess what, Nightdive is still running the show here: At this point I don’t even know where to start! Monster Bash HD is still being worked on (I hope?). Did I miss something? Let me know! Related topics: / games / dos / engines / By Wouter Groeneveld on 5 April 2026. Reply via email . Little Big Adventure : Twinsen’s Quest released in November 2024 is a complete graphical overhaul of the original. Not a remake but still noteworthy; Gobliins 6 is a sequel to a 34 year old DOS game ! Star Wars: Dark Forces got a remaster ; Although not a DOS game, Outlaws got the remaster treatment as well. Oh, and yes, DOOM I + II is another masterpiece ; As is the Heretic + Hexen package ; As did Blood as Refreshed Supply (again?). BioMenace : Remastered by the same devs that did the Duke Nukem 1 & 2 remasters on Evercade. I enjoyed it, it’s good! A Halloween Harry -inspired top-down 3D version is currently being made that only shares the name & style of the original—luckily, not the crappy level design. Ubisoft remastered the original Rayman ( 30th Anniversary Edition ) but it wasn’t met with much success. They changed the included GBA music—that’s what SEGA would have done, right? I found a Master of Magic remake (2022) on Steam that’s been met with some positive reception. I didn’t play the original so can’t say how faithfully it’s related to the DOS version. Blizzard also decided to cash in with the Warcraft I+II remaster bundle . I was mostly a Warcraft III person so I can’t comment on this. Someone did a Wacky Wheels HD Remake on ? Wow!
Best to approach this carefully, though: it looks to have its own technical problems.

0 views
Chris Coyier Yesterday

Help Me Understand How To Get Jetpack Search to Search a Custom Post Type

I’ve got a Custom Post Type in WordPress. It’s called because it’s for documentation pages. This is for the CodePen 2.0 Docs . The Classic Docs are just “Pages” in WordPress, and that works fine, but I thought I’d do the correct WordPress thing and make a unique kind of content a Custom Post Type. This works quite nicely, except that they don’t turn up at all in Jetpack Search . I like Jetpack Search. It works well. It’s got a nice UI. You basically turn it on and forget about it. I put it on CSS-Tricks, and they still use it there. I put it on the Frontend Masters blog. It’s here on this blog. It’s a paid product, and I pay for it and use it because it’s good. I don’t begrudge core WordPress for not having better search, because raw MySQL search just isn’t very good. Jetpack Search uses Elasticsearch, a product better-suited for full-blown site search. That’s not a server requirement they could reasonably bake into core. But the fact that it just doesn’t index Custom Post Types is baffling to me. I suspect it’s just something I’m doing wrong. I can tell it doesn’t work with basic tests. For example, I’ve got a page called “Inline Block Processing” but if you search for “Inline Block Processing” it returns zero results . In the Customizing Jetpack Search area, I’m specifically telling Jetpack Search not to exclude “Docs”. That very much feels like it will include it. I’ve tried manually reindexing a couple of times, both from SSHing into Pressable and using WP-CLI to reindex, and from the “Manage Connections” page on WordPress.com. No dice. I contacted Jetpack Support, and they said: Jetpack Search handles Custom Post Types individually, so it may be that the slug for your post type isn’t yet included in the Jetpack Search index.
We have a list of slugs we index here:   https://github.com/Automattic/jetpack/blob/trunk/projects/packages/sync/src/modules/class-search.php#L691   If the slug isn’t on the list, please submit an issue here so that our dev team can add it: Where they sent me on GitHub was a bit confusing. It’s the end of a variable called , which doesn’t seem quite right, as that seems like, ya know, post metadata that shouldn’t be indexed, which isn’t what’s going on here. But it’s also right before a variable called private static $taxonomies_to_sync, which feels closer, but I know what a taxonomy is, and this isn’t that. A taxonomy is categories, tags, and stuff (you can make your own), but I’m not using any custom taxonomies here; I’m using a Custom Post Type. They directed me to open a GitHub Issue, so I did that . But it’s sat untouched for a month. I just need to know whether Jetpack Search can handle Custom Post Types. If it does, what am I doing wrong to make it not work? If it can’t, fine, I just wanna know so I can figure out some other way to handle this. Unsearchable docs are not tenable.

0 views

Stamp It! All Programs Must Report Their Version

Recently, during a production incident response, I guessed the root cause of an outage correctly in under an hour (cool!) and submitted a fix just to rule it out, only to then spend many hours fumbling in the dark because we lacked visibility into version numbers and rollouts… 😞 This experience made me think about software versioning again, or more specifically about build info (build versioning, version stamping, whatever you want to call it) and version reporting. I realized that for the i3 window manager, I had solved this problem well over a decade ago, so it was really unexpected that the problem was decidedly not solved at work. In this article, I’ll explain how 3 simple steps (Stamp it! Plumb it! Report it!) are sufficient to save you hours of delays and stress during incident response. Every household appliance has incredibly detailed versioning! Consider this dishwasher: (Thank you Feuermurmel for sending me this lovely example!) I observed a couple of household appliance repairs and am under the impression that if a repair person cannot identify the appliance, they would most likely refuse to even touch it. So why are our standards so low in computers, in comparison? Sure, consumer products are typically versioned somehow and that’s typically good enough (except for, say, USB 3.2 Gen 1×2!). But recently, I have encountered too many developer builds that were not adequately versioned! Unlike a physical household appliance with a stamped metal plate, software is constantly updated and runs in places and structures we often cannot even see. Let’s dig into what we need to raise our versioning standard! Usually, software has a name and some version number of varying granularity: All of these identify the Chrome browser on my computer, but each at a different granularity. All are correct and useful, depending on the context.
Here’s an example for each: After creating the i3 window manager , I quickly learned that for user support, it is very valuable for programs to clearly identify themselves. Let me illustrate with the following case study. When running i3 --version, you will see output like this: Each word was carefully deliberated and placed. Let me dissect: When doing user support, there are a couple of questions that are conceptually easy to ask the affected user and produce very valuable answers for the developer: Based on my experiences with asking these questions many times, I noticed a few patterns in how these debugging sessions went. In response, I introduced another way for i3 to report its version in i3 v4.3 (released in September 2012): a flag! Now I could ask users a small variation of the first question: What is the output of ? Note how this also transfers well over spoken word, for example at a computer meetup: Michael: Which version are you using? User: How can I check? Michael: Run this command: User: It says 4.24. Michael: Good, that is recent enough to include the bug fix. Now, we need more version info! Run i3 --moreversion please and tell me what you see. When you run i3 --moreversion, it does not just report the version of the i3 program you called, it also connects to the running i3 window manager process in your X11 session using its IPC (interprocess communication) interface and reports the running i3 process’s version, alongside other key details that are helpful to show the user, like which configuration file is loaded and when it was last changed: This might look like a lot of detail at first glance, but let me spell out why this output is such a valuable debugging tool: Connecting to i3 via the IPC interface is an interesting test in and of itself. If a user sees output, that implies they will also be able to run debugging commands like (for example) to capture the full layout state.
During a debugging session, running i3 --moreversion is an easy check to see if the version you just built is actually effective (see the line). Showing the full path to the loaded config file will make it obvious if the user has been editing the wrong file. If the path alone is not sufficient, the modification time (displayed both absolute and relative) will flag editing the wrong file. I use NixOS, BTW, so I automatically get a stable identifier ( ) for the specific build of i3. To see the build recipe (“derivation” in Nix terminology) which produced this Nix store output ( ), I can run : Unfortunately, I am not aware of a way to go from the derivation to the source, but at least one can check that a certain source results in an identical derivation. The versioning I have described so far is sufficient for most users, who will not be interested in tracking intermediate versions of software, but only the released versions. But what about developers, or any kind of user who needs more precision? When building i3 from git, it reports the git revision it was built from, using git describe: A modified working copy gets represented by a suffix after the revision: Reporting the git revision (or VCS revision, generally speaking) is the most useful choice. This way, we catch the following common mistakes: As we have seen above, the single most useful piece of version information is the VCS revision. We can fetch all other details (version numbers, dates, authors, …) from the VCS repository. Now, let’s demonstrate the best case scenario by looking at how Go does it! Go has become my favorite programming language over the years, in big part because of the good taste and style of the Go developers, and of course also because of the high-quality tooling: I strive to respect everybody’s personal preferences, so I usually steer clear of debates about which is the best programming language, text editor or operating system.
However, recently I was asked a couple of times why I like and use Go so much, so here is a coherent article to fill in the blanks of my ad-hoc in-person ramblings :-). Read more → Therefore, I am pleased to say that Go implements the gold standard with regard to software versioning: it stamps VCS buildinfo by default! 🥳 This was introduced in Go 1.18 (March 2022) : Additionally, the go command embeds information about the build, including build and tool tags (set with -tags), compiler, assembler, and linker flags (like -gcflags), whether cgo was enabled, and if it was, the values of the cgo environment variables (like CGO_CFLAGS). Both VCS and build information may be read together with module information using go version -m (for binaries) or runtime/debug.ReadBuildInfo (for the currently running binary) or the new debug/buildinfo package. Note: Before Go 1.18, the standard approach was to use -ldflags with -X to inject a version string, or similar explicit injection. This setup works (and can still be seen in many places) but requires making changes to the application code, whereas the Go 1.18+ stamping requires no extra steps. What does this mean in practice? Here is a diagram for the common case: building from git: This covers most of my hobby projects! Many tools I just go install, or go build if I want to easily copy them around to other computers. Although, I am managing more and more of my software in NixOS. When I find a program that is not yet fully managed, I can use and the tool to identify it: It’s very cool that Go does the right thing by default! Systems that consist of 100% Go software (like my gokrazy Go appliance platform ) are fully stamped! For example, the gokrazy web interface shows me exactly which version and dependencies went into the build on my scan2drive appliance .
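To make the runtime side concrete, here is a short sketch of reading the stamped buildinfo with runtime/debug.ReadBuildInfo. Only ReadBuildInfo, the "vcs.revision" setting key, and the pseudo-version format are standard Go; the helper function names and the fallback logic are my own framing of the approach:

```go
package main

import (
	"fmt"
	"runtime/debug"
	"strings"
)

// revision returns the VCS revision stamped into the binary by the Go
// toolchain, or, as a fallback, the commit hash embedded in a module
// pseudo-version like v0.0.0-20240101000000-abcdef123456.
func revision(bi *debug.BuildInfo) string {
	for _, s := range bi.Settings {
		if s.Key == "vcs.revision" {
			return s.Value
		}
	}
	return revisionFromPseudoVersion(bi.Main.Version)
}

// revisionFromPseudoVersion extracts the trailing commit hash from a
// Go module pseudo-version, which ends in "-<timestamp>-<hash>".
func revisionFromPseudoVersion(v string) string {
	parts := strings.Split(v, "-")
	if len(parts) >= 3 {
		return parts[len(parts)-1]
	}
	return v // e.g. "(devel)" or a tagged release version
}

func main() {
	if bi, ok := debug.ReadBuildInfo(); ok {
		fmt.Println("revision:", revision(bi))
	}
}
```

Depending on how the binary was built (from a git checkout, from a module, or via a package manager that strips the repository), either the vcs.revision setting or the pseudo-version path will carry the information.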
Despite being fully stamped, note that gokrazy only shows the module versions, and no VCS buildinfo, because it currently suffers from the same gap as Nix: For the gokrazy packer, which follows a rolling release model (no version numbers), I ended up with a few lines of Go code (see below) to display a git revision, no matter if you installed the packer from a Go module or from a git working copy. The code either displays (the easy case; built from git) or extracts the revision from the Go module version of the main module ( ): What are the other cases? These examples illustrate the scenarios I usually deal with: This is what it looks like in practice: But a version built from git has the full revision available (→ you can tell them apart): When packaging Go software with Nix, it’s easy to lose Go VCS revision stamping: So the fundamental tension here is between reproducibility and VCS stamping. Luckily, there is a solution that works for both: I created the Nix overlay module that you can import to get working Go VCS revision stamping by default for your Nix expressions! Tip: If you are not a Nix user, feel free to skip over this section. I included it in this article so that you have a full example of making VCS stamping work in the most complicated environments. Packaging Go software in Nix is pleasantly straightforward. For example, the Go Protobuf generator plugin is packaged in Nix with <30 lines: official nixpkgs package.nix . You call , supply as the result from and add a few lines of metadata. But getting developer builds fully stamped is not straightforward at all! When packaging my own software, I want to package individual revisions (developer builds), not just released versions. I use the same , or if I need the latest Go version. Instead of using , I provide my sources using Flakes, usually also from GitHub or from another Git repository. 
For example, I package like so: The comes from my : Go stamps all builds, but it does not have much to stamp here: Here’s a full example of gokrazy/bull: To fix VCS stamping, add my overlay to your : (If you are using , like I am, you need to apply the overlay in both places.) After rebuilding, your Go binaries should newly be stamped with buildinfo: Nice! 🥳 But… how does it work? When does it apply? How do you know how to fix your config? I’ll show you the full diagram first, and then explain how to read it: There are 3 relevant parts of the Nix stack that you can end up in, depending on what you write into your files: For the purpose of VCS revision stamping, you should: Hence, we will stick to the left-most column: fetchers. Unfortunately, by default, with fetchers, the VCS revision information, which is stored in a Nix attrset (in-memory, during the build process), does not make it into the Nix store, hence, when the Nix derivation is evaluated and Go compiles the source code, Go does not see any VCS revision. My Nix overlay module fixes this, and enabling the overlay is how you end up in the left-most lane of the above diagram: the happy path, where your Go binaries are now stamped! How does the overlay work? It functions as an adapter between Nix and Go: So the overlay implements 3 steps to get Go to stamp the correct info: For the full source, see . See Go issue #77020 and Go issue #64162 for a cleaner approach to fixing this gap: allowing package managers to invoke the Go tool with the correct VCS information injected. This would allow Nix (or also gokrazy) to pass along buildinfo cleanly, without the need for workarounds like my adapter . At the time of writing, issue #77020 does not seem to have much traction and is still open. My argument is simple: Stamping the VCS revision is conceptually easy, but very important! 
For example, if the production system from the incident I mentioned had reported its version, we would have saved multiple hours of mitigation time! Unfortunately, many environments only identify the build output (useful, but orthogonal), but do not plumb the VCS revision (much more useful!), or at least not by default. Your action plan to fix it is just 3 simple steps: Implementing “version observability” throughout your system is a one-day high-ROI project. With my Nix example, you saw how the VCS revision is available throughout the stack, but can get lost in the middle. Hopefully my resources help you quickly fix your stack(s), too: Now go stamp your programs and data transfers! 🚀 Chrome 146.0.7680.80 Chrome f08938029c887ea624da7a1717059788ed95034d-refs/branch-heads/7680_65@{#34} “This works in Chrome for me, did you test in Firefox?” “Chrome 146 contains broken middle-click-to-paste-and-navigate” “I run Chrome 146.0.7680.80 and cannot reproduce your issue” “Apply this patch on top of Chrome f08938029c887ea624da7a1717059788ed95034d-refs/branch-heads/7680_65@{#34} and follow these steps to reproduce: […]” : I could have shortened this to or maybe , but I figured it would be helpful to be explicit because is such a short name. Users might mumble aloud “What’s an i-3-4-2-4?”, but when putting “version” in there, the implication is that i3 is some computer thing (→ a computer program) that exists in version 4.24. is the release date so that you can immediately tell if “ ” is recent. signals when the project was started and who is the main person behind it. gives credit to the many people who helped. i3 was never a one-person project; it was always a group effort. Question: “Which version of i3 are you using?” Since i3 is not a typical program that runs in a window (but a window manager / desktop environment), there is no Help → About menu option. Instead, we started asking: What is the output of ? Question: “ Are you reporting a new issue or a preexisting issue? 
To confirm, can you try going back to the version of i3 you used previously?”. The technical terms for “going back” are downgrade, rollback or revert. Depending on the Linux distribution, this is either trivial or a nightmare. With NixOS, it’s trivial: you just boot into an older system “generation” by selecting that version in the bootloader. Or you revert in git, if your configs are version-controlled. With imperative Linux distributions like Debian Linux or Arch Linux, if you did not take a file system-level snapshot, there is no easy and reliable way to go back after upgrading your system. If you are lucky, you can just the older version of i3. But you might run into dependency conflicts (“version hell”). I know that it is possible to run older versions of Debian using snapshot.debian.org , but it is just not very practical, at least when I last tried. Can you check if the issue is still present in the latest i3 development version? Of course, I could also try reproducing the user issue with the latest release version, and then one additional time on the latest development version. But this way, the verification step moves to the affected user, which is good because it filters for highly-motivated bug reporters (higher chance the bug report actually results in a fix!) and it makes the user reproduce the bug twice, figuring out if it’s a flaky issue, hard to reproduce, if the reproduction instructions are correct, etc. A natural follow-up question: “Does this code change make the issue go away?” This is easy to test for the affected user who now has a development environment. Note that checking whether the version you just built is actually effective is the same check that is relevant during production incidents: verifying that the effectively running versions match the supposedly running versions. People build from the wrong revision. People build, but forget to install. People install, but their session does not pick it up (wrong location?). Nix fetchers like are implemented by fetching an archive ( ) file from GitHub - the full repository is not transferred, which is more efficient. Even if a repository is present, Nix usually intentionally removes it for reproducibility: directories contain packed objects that change across runs (for example), which would break reproducible builds (different hash for the same source). We build from a directory, not a Go module, so the module version is (devel). The stamped buildinfo does not contain any VCS information. Fetchers. These are what Flakes use, but also non-Flake use-cases. Fixed-output derivations (FOD). This is how is implemented, but the constant hash churn (updating the line) inherent to FODs is annoying. Copiers. These just copy files into the Nix store and are not git-aware. Avoid the Copiers! If you use Flakes: ❌ do not use as a Flake input ✅ use instead for git awareness I avoid the fixed-output derivation (FOD) as well. Fetching the git repository at build time is slow and inefficient. Enabling , which is needed for VCS revision stamping with this approach, is even more inefficient because a new Git repository must be constructed deterministically to keep the FOD reproducible. Nix tracks the VCS revision in the in-memory attrset. Go expects to find the VCS revision in a repository, accessed via file access and git commands. It synthesizes a file so that Go’s detects a git repository.
It injects a git command into the PATH that implements exactly the two git commands used by Go and fails loudly on anything else (in case Go updates its implementation). It sets the VCS revision in an environment variable.

Stamp it! Include the source VCS revision in your programs. This is not a new idea: i3 builds have included their revision since 2012!

Plumb it! When building / packaging, ensure the VCS revision does not get lost. My “VCS rev with NixOS” case study section above illustrates several reasons why the VCS rev could get lost, which paths can work, and how to fix the missing plumbing.

Report it! Make your software print its VCS revision on every relevant surface, for example:

Executable programs: report the VCS revision when run with --version. For Go programs, you can always use runtime/debug.ReadBuildInfo.
Services and batch jobs: include the VCS revision in the startup logs.
Outgoing HTTP requests: include the VCS revision in a request header.
HTTP responses: include the VCS revision in a header (internally).
Remote Procedure Calls (RPCs): include the revision in RPC metadata.
User Interfaces: expose the revision somewhere visible for debugging.

My overlay for Nix / NixOS
My repository is a community resource to collect examples (as markdown content) and includes a Go module with a few helpers to make version reporting trivial.

Lalit Maganti Yesterday

Eight years of wanting, three months of building with AI

For eight years, I’ve wanted a high-quality set of devtools for working with SQLite. Given how important SQLite is to the industry 1, I’ve long been puzzled that no one has invested in building a really good developer experience for it 2. A couple of weeks ago, after ~250 hours of effort over three months 3 on evenings, weekends, and vacation days, I finally released syntaqlite ( GitHub ), fulfilling this long-held wish. And I believe the main reason this happened was because of AI coding agents 4. Of course, there’s no shortage of posts claiming that AI one-shotted their project, or pushing back and declaring that AI is all slop. I’m going to take a very different approach and, instead, systematically break down my experience building syntaqlite with AI, both where it helped and where it was detrimental. I’ll do this while contextualizing the project and my background so you can independently assess how generalizable this experience was. And whenever I make a claim, I’ll try to back it up with evidence from my project journal, coding transcripts, or commit history 5.

iDiallo 2 days ago

It's not that deep

I have these Sunday evenings where I find myself sitting alone at the kitchen table, thinking about my life and how I got here . Usually, these sessions end with an inspiring idea that makes me want to get up and build something. I remember the old days when I couldn't even sleep because I had all these ideas bubbling in my head, and I could just get up and do it because I had no familial responsibility. I still have that flare in me, but I also don't always give in to those ideas. Instead, sometimes I choose to do something much simpler. I read. Sometimes it's a book, sometimes it's a blog post. But always, it's something that stimulates my mind more than any startup idea or tech disruption.

There is a blog I follow; I'm not even sure how I stumbled upon it. I don't think it has a newsletter, and it's not in my RSS feed. But it's in my mind. It's as if I can just feel it when the author posts something new. Right there at the kitchen table, I load it up, and I get a glimpse into someone else's life. I don't know much about this person, but reading her writing is soothing. It's not commercialized; the most I can say about it is, well, it is human .

When I read something that is written, anything that's written, I expect to hear the voice of the person behind it . Whether it is a struggle, a victory, or just a small remark. It only makes sense when there is a person behind it. Sometimes people write, and in order to sound professional, they remove their voice from it. It becomes like reading a Corporate Memphis blog. Devoid of any humanity.

It's weird how I have these names in my head. Keenen Charles . I check his blog on Sunday evenings as well. In fact, here are a few I read recently in no particular order:

Jerry (I particularly liked this specific article)
Don't know her name

This isn't to tell you that you need to read those articles or you will be left behind. It's not that deep. You can find things you like, and enjoy them at your own comfort. They don't have to be world changing, they don't have to turn you into a millionaire, they just have to make you smile or nod for a moment. The world is constantly trying to remind us that we are at the edge of destruction. But you, the person sitting there, reading a random blog post from this random Guinean guy, yes you. Take it easy for the rest of the day.

fLaMEd fury 2 days ago

Link Dump: March 2026

What’s going on, Internet? Trying something different. All the pages I bookmarked this month, no life updates in between. Want more? Check out all my bookmarks at /bookmarks/ and subscribe to the bookmarks feed . Hey, thanks for reading this post in your feed reader! Want to chat? Reply by email or add me on XMPP , or send a webmention . Check out the posts archive on the website. Scroll trīgintā ūnus by Shellsharks - Sharing the latest edition of scrolls, posting online without overthinking. I Am Happier Writing Code by Hand by Abhinav Omprakash - Letting AI write his code kills the satisfaction that made programming worth doing. You Are The Driver (The AI Is Just Typing) by Keith - AI coding tools are only useful once you already know what you’re doing. They automate typing, not thinking. Oceania Web Atlas by Zachary Kai - Collects personal websites from across Oceania into one tidy, human-scaled directory. Building the Good Web by Brennan - Building for users instead of against them is what separates the good web from everything else. Unpolished human websites by Joel - Keep your website messy and human. What is Digital Garage? - Digital Tinker’s website is a workshop built for joy, not productivity. Creation without pressure. How to feel at home on the Internet by Jatan Mehta - Is having your own domain really the only way to truly “own” your online space? Endgame for the Open Web by Anil Dash - Is 2026 the last year we have a chance to put a stop to the dismantling of the open web?


How do you compute?

A recent Tildes thread about computer monitor usage made me wonder what kind of setup others are using, so I spun up a survey! I have a theory about how readers of this blog will most likely respond, but I'm very curious to see the reality. You can take the 3-question survey here: surveys.darnfinesoftware.com The survey will be open for ~7 days before it auto-implodes and all responses are deleted. I'll post a follow-up on what the data looks like (or you can view it yourself at the above link).

Kev Quirk 2 days ago

AMA: Can One Setup Their Digital Life to Be Subscription Free?

Sanjay asked me in a comment on my AMA post :

I am a fellow reader of multiple blogs of yours and others. But somehow I have been searching for any article where any one can setup of his entire digital life using subscription free model. I am not talking about to get everything FREE and become a PRODUCT. If you think you can setup everything using opensource then how would you setup all of your essentials. You can write a post anytime when you have a time. For example:

Free domain based email via MX Routing
Hosting on Github or Cloudflare Pages
OS - most important using Linux
Document, Spreadsheet, Presentation
Video Editing

And so on.. There may be many more things. I always think what would happen to my subscriptions if I will no more or I will have some issue or financial constraint. Will the subscription be a burden to my family when I will not be there. Or any of my important services will stop working for not paying suddenly? Currently I am not paying any subscription for any of my services as I have reduced as minimum services I can opt.

I think the short answer to your question, Sanjay, is mostly yes. But I'd advise against it for some things*. Some of the items on your list are really easy to get without a subscription, for example:

RSS feed reader - there are many feed readers you can install locally for free. Vivaldi has one built right into their browser, for example. Or you could self-host something like FreshRSS , or Miniflux .
Notes app - my recommendation here would be Obsidian . I personally sync via WebDAV to my server at home. If you don't have the ability to do that, most operating systems have a note taking app pre-installed.
Reminders - you can use the calendar app on your device, or on mobile, the built-in reminders/to-do apps.
Document editing - LibreOffice is great, as is Only Office if you want something more modern looking.
Operating system - Ubuntu for the win. It's what I use.
Video editing - Kdenlive is available for all major operating systems, and works really well.

Unfortunately, some things on your list are either going to cost you money, privacy, or time somewhere along the line. Domains cost money. I know some don't, but they tend to be very spammy and have poor email delivery as a result. Also, any email service worth their salt will require you to pay. If not, they're probably sniffing your mail. You could self-host your email at home, but there's then a cost associated with the hardware to host the mail server, or your time administering the system. Email is notoriously difficult for self-hosters too.

As with most things that are free on the web, if it's free, you're probably the product. And that's true with both GitHub and Cloudflare, in my opinion. You can host a site for free on either service, but you would either need to buy a domain, or be happy using one of their free sub-domains. There's also the technical debt required to create the static sites that these services support. So there's a time cost. Again, you can host at home, but there are the same hardware or time costs that are associated with self-hosting email.

Like email hosting, any service worth their salt is going to charge. Some may have initial tiers that are free, but I doubt they will be very generous. I personally use Bunny for my CDN needs. They're reasonably priced and have a pay-as-you-go model, so no subscription involved. Obviously you can't host a CDN at home, as that would defeat the object of the whole thing.

For databases: same story as above. You can host at home, but there's a hardware/time cost associated, or you can pay for a reputable host to do it for you.

I think this one is easy. Your options are threefold:

A self-hosted media library that will consist of: ripped music from a physical collection; digital music bought from services like Bandcamp where you actually own the music, but this can get expensive; or pirated music 🏴‍☠️.
A free account on a streaming service like Spotify , but it will be riddled with ads.
A paid subscription to a streaming service.

I think these decisions ultimately come down to personal preference, and a compromise in one of three things - cost, time, or privacy:

A service can be free and private, but it will be time consuming to manage.
It can be quick to get started (hosted) and private, but it won't be free.
It can be quick to get started (hosted) and free, but it won't respect your privacy.

There's always a trade off with this stuff. It just boils down to what you're willing to trade off, personally. Thanks for reading this post via RSS. RSS is ace, and so are you. ❤️ You can reply to this post by email , or leave a comment .
