Latest Posts (20 found)

How to Host your Own Email Server

I recently started a new platform where I sell my books and courses, and on this website I needed to send account-related emails to my users for things such as email address verification and password reset requests. The reasonable option that is often suggested is to use a paid email service such as Mailgun or SendGrid; sending emails on your own is, according to the Internet, too difficult. Because the prospect of adding yet another dependency on Big Tech is depressing, I decided to go against the general advice and roll my own email server. And sure, it wasn't trivial, but it wasn't all that hard either! Are you interested in hosting your own email server, like me? In this article I'll tell you how to go from nothing to being able to send emails that are accepted by all the big email players. My main concern is sending, but I will also cover the simple solution that I'm using to receive emails and replies.

0 views

How Many Holes Does a Straw Have?

I was recently listening to an episode of The Rest Is Science, specifically the episode The Evolution Of The Butthole. As always, Hannah and Michael put on a great show and I came away thinking about its contents. In it, they asked: how many holes does a straw have? And my default response was something like: Why, they have 2 holes, silly! One at each end. You probably don't need it, dear reader, but here's a handy-dandy diagram of what I'm talking about... 2 holes, right? Then Michael asked "okay, how many holes does a doughnut have?" Bah! More simple questions! A doughnut obviously has 1 hole, right? RIGHT?! Here's another diagram (look, I know you're a clever person, and you don't need a diagram of a bloody straw, or a doughnut, but we're going with it, okay). We're all on the same page here, right folks? A straw clearly has 2 holes, and a doughnut obviously has 1. This is where it gets interesting. Michael now flips the script and, quite frankly, blows my fucking mind. He said: But isn't a straw just an elongated doughnut? What. The. Actual. Fuck? A straw is just an elongated doughnut (albeit not as tasty). So does a straw have 1 hole? Does a doughnut have 2 holes? I don't know. I'm questioning my life decisions at this point. It's all too hard. Can any of you tell me how many holes a straw (or a doughnut) has? Thanks for reading this post via RSS. RSS is ace, and so are you. ❤️ You can reply to this post by email, or leave a comment.


Main Character 🦸‍♂️

I’m working on a new app called Main Character. It’s a gamified productivity app where you earn XP and level up for completing tasks & tracking habits. Tasks run on a kanban board and habits show up on a GitHub-style consistency graph. Basic tasks + habit tracking are live today and I use it daily. Long term I’m turning it into an AI orchestration…


Are Design Tools Relevant Anymore?

I was a product designer for a few years. I had switched careers to design after suffering burnout as a software engineer. During those years, my entire day was spent in Figma, building high-fidelity mockups, leading workshops and creating prototypes. While Figma helped me move quickly, rapidly iterating after receiving user feedback, the engineer part of me always felt it was a throwaway step. You build something, only to then have somebody else build it again in code. I recently had to put on my design hat again, putting together interactive prototypes around a few redesign ideas. At first, I reached for Figma, but after fiddling around for an hour, decided to go a different route. While prototyping in Figma used to be faster than building in code, that's no longer true. With Claude Code, building out frontend components is fast. Much faster than messing with layers, frames and symbols in Figma. Let me explain. Enterprise apps have well-defined brand guidelines: colors, type, scale. They are often built off an existing component library (think Bootstrap, shadcn). This means you can use Claude in a way that follows the look and feel of your application, and is constrained to the components the development team leverages. The rails help keep Claude from going off into the deep end. Design then becomes focused on solving the user's problem through UX, with less fiddling around with UI. I can open Freeform on my iPad, sketch something out, and prompt Claude to leverage our foundation to make my sketch a reality. Then, I can dig into the code and tweak things to be just right. The result is a more interactive, true-to-life prototype that gives your engineering team a head start with coded components. You get better feedback from users and stakeholders as it's easier to visualize what the final product will look like. You discover pitfalls that might not have shown up until an engineer was halfway into the card.
On top of all that, you move a lot faster: you're designing and building in 1 step rather than 2, giving your engineering team a head start once designs are finalized. So then, what's the point of Figma and Sketch? You can tell Figma is battling with this reality by pushing Figma Make. The issue is, it's too constrained and produces poor results. You can't link it to existing coded components, Tailwind configs, etc. On the other hand, using my approach requires a technical background. You need to guide it with framework suggestions and foundational setup, and be able to take over and tweak things yourself; otherwise your results will be all over the place, and small tweaks will be next to impossible. That said, in the shorter term there's likely still a place for Figma and Sketch at the table. As the technology gets better, though, I'll be surprised if they survive the next couple of years.

Carlos Becker Yesterday

You'll never see my child's face

I became a dad recently, and I’m not publishing a bunch of photos of my kid like most parents do. Some people started asking me why, so here it is.

ava's blog Yesterday

[bearblog carnival] my favorite meme

For the Bearblog Carnival of March, I wanna briefly add in my own favorite meme! Or at least, one of them. There are so many I could add... I'm choosing a specific YouTube video, a YouTube Poop. A YouTube Poop (or YTP, often shortened to that in the title) is a type of video remixing that edits pre-existing media like ads, movies, TV series, game cutscenes, and so on. The point is to edit the video and sound so that the material suddenly shows or says new things. They usually have some crass or silly humor, other memes, and vulgar, immature and nonsensical jokes. The format has existed since 2004, and new ones are still being made! The skill lies in cutting it so that the new sentence sounds as if it was really said, or almost, while it stays obvious that it was cut. Basically, making it credible via amazing (non-AI) editing skills (correct intonation, not too choppy, finding creative ways to string sounds and words together), while also showing via other means that it is not the original and not meant to be taken seriously. YTPs even reference each other or each others' creators sometimes, and a popular sound to edit in is 'soooos' or 'jooj'. Kami also picked a YTP, but by very tall bart, whom I also enjoy as a creator; I really love DaThings and cs188. Specifically, my favorite YTP is Wonder Bros. (You can turn subtitles on; they are always properly subtitled by hand!) This YTP edits a Nintendo ad for the Super Mario Bros Wonder Switch game to make silly statements about the game's contents - new characters, features, maps. The Urineurineurineurineurine badge, being in grill form to bust out of prison... I have watched this YTP so many times, I know it by heart at this point. It's also frequently referenced by me and my wife in real life. For example, as we have been on a bread-baking journey recently, I usually say "Bowser spreads his new bread across the land!" whenever a new bread is finished.
Whenever a nun pops up anywhere (visually or as a word), one of us says: "Oh! A nun! Interesting!". For a while, we have also just randomly said "Standees nuts". When something goes wrong, my wife says "Dangnabbit, Yoshi.", and when I feel silly, I try to emulate the motion of Elephant Mario and make the zazazazaoowie-wowie sound at 03:05 (as best as I can). Whenever it fits, usually because of a sound or seeing the word, I'll say "You can also eeuurgh." or "You can use it to bust out of prison! Nifty." We no longer call mushrooms mushrooms, we say shushrooms, even in our grocery list. Whenever someone is wearing a good outfit, we say "Mario's wylin. Just look at that drip!". Whenever work is weird or I feel awkward about an email I sent or something, I say "This [word that fits] is normal." in the same tone. I know I even said "Up to four people can breathe the air for a bit." at some point. Writing this all out, I wasn't even aware of just how much it has infiltrated my life! I thought it was just 3-4 things; now this is slightly embarrassing even! But it's funny, and I love it. It's not even the only YTP we reference. We also reference this Garfield YTP from cs188, specifically presentspresentspresentspresentspresentspresentspresentspresentspresentspresentspresentspresentspresents, and opening the door just to cough (we even have that as a soundbite to play in voice calls). I even sing the song that starts at 2:50, and the one at 4:20 - last time it was while we were walking on the street :D Maybe in some future post or carnival, I'll focus more on the written/image memes I like! Reply via email Published 05 Mar, 2026


Can coding agents relicense open source through a “clean room” implementation of code?

Over the past few months it's become clear that coding agents are extraordinarily good at building a weird version of a "clean room" implementation of code. The most famous version of this pattern is when Compaq created a clean-room clone of the IBM BIOS back in 1982. They had one team of engineers reverse engineer the BIOS to create a specification, then handed that specification to another team to build a new ground-up version. This process used to take multiple teams of engineers weeks or months to complete. Coding agents can do a version of this in hours - I experimented with a variant of this pattern against JustHTML back in December. There are a lot of open questions about this, both ethical and legal. These appear to be coming to a head in the venerable chardet Python library. chardet was created by Mark Pilgrim back in 2006 and released under the LGPL. Mark retired from public internet life in 2011 and chardet's maintenance was taken over by others, most notably Dan Blanchard, who has been responsible for every release since 1.1 in July 2012. Two days ago Dan released chardet 7.0.0 with the following note in the release notes: Ground-up, MIT-licensed rewrite of chardet. Same package name, same public API — drop-in replacement for chardet 5.x/6.x. Just way faster and more accurate! Yesterday Mark Pilgrim opened #327: No right to relicense this project: [...] First off, I would like to thank the current maintainers and everyone who has contributed to and improved this project over the years. Truly a Free Software success story. However, it has been brought to my attention that, in the release 7.0.0, the maintainers claim to have the right to "relicense" the project. They have no such right; doing so is an explicit violation of the LGPL. Licensed code, when modified, must be released under the same LGPL license. Their claim that it is a "complete rewrite" is irrelevant, since they had ample exposure to the originally licensed code (i.e. 
this is not a "clean room" implementation). Adding a fancy code generator into the mix does not somehow grant them any additional rights. Dan's lengthy reply included: You're right that I have had extensive exposure to the original codebase: I've been maintaining it for over a decade. A traditional clean-room approach involves a strict separation between people with knowledge of the original and people writing the new implementation, and that separation did not exist here. However, the purpose of clean-room methodology is to ensure the resulting code is not a derivative work of the original. It is a means to an end, not the end itself. In this case, I can demonstrate that the end result is the same — the new code is structurally independent of the old code — through direct measurement rather than process guarantees alone. Dan goes on to present results from the JPlag tool - which describes itself as "State-of-the-Art Source Code Plagiarism & Collusion Detection" - showing that the new 7.0.0 release has a max similarity of 1.29% with the previous release and 0.64% with the 1.1 version. Other release versions had similarities more in the 80-93% range. He then shares critical details about his process, highlights mine: For full transparency, here's how the rewrite was conducted. I used the superpowers brainstorming skill to create a design document specifying the architecture and approach I wanted based on the following requirements I had for the rewrite [...] I then started in an empty repository with no access to the old source tree, and explicitly instructed Claude not to base anything on LGPL/GPL-licensed code . I then reviewed, tested, and iterated on every piece of the result using Claude. [...] I understand this is a new and uncomfortable area, and that using AI tools in the rewrite of a long-standing open source project raises legitimate questions. But the evidence here is clear: 7.0 is an independent work, not a derivative of the LGPL-licensed codebase. 
The MIT license applies to it legitimately. Since the rewrite was conducted using Claude Code there are a whole lot of interesting artifacts available in the repo. 2026-02-25-chardet-rewrite-plan.md is particularly detailed, stepping through each stage of the rewrite process in turn - starting with the tests, then fleshing out the planned replacement code. There are several twists that make this case particularly hard to confidently resolve:

- Dan has been immersed in chardet for over a decade, and has clearly been strongly influenced by the original codebase. There is one example where Claude Code referenced parts of the codebase while it worked, as shown in the plan - it looked at metadata/charsets.py, a file that lists charsets and their properties expressed as a dictionary of dataclasses.
- More complicated: Claude itself was very likely trained on chardet as part of its enormous quantity of training data - though we have no way of confirming this for sure. Can a model trained on a codebase produce a morally or legally defensible clean-room implementation?
- As discussed in this issue from 2014 (where Dan first openly contemplated a license change), Mark Pilgrim's original code was itself a manual port from C to Python of Mozilla's MPL-licensed character detection library.
- How significant is it that the new release of chardet used the same PyPI package name as the old one? Would a fresh release under a new name have been more defensible?

I have no idea how this one is going to play out. I'm personally leaning towards the idea that the rewrite is legitimate, but the arguments on both sides are entirely credible. I see this as a microcosm of the larger question around coding agents producing fresh implementations of existing, mature code. This question is hitting the open source world first, but I expect it will soon start showing up in Compaq-like scenarios in the commercial world. Once commercial companies see that their closely held IP is under threat I expect we'll see some well-funded litigation.

Martin Fowler Yesterday

Ideological Resistance to Patents, Followed by Reluctant Pragmatism

Naresh Jain has long been uncomfortable with software patents. But a direct experience of patent aggression, together with the practical constraints faced by startups, led him to resort to defensive patenting as a shield in this asymmetric legal environment.

neilzone Yesterday

My resolutions for International Women's Day

Each year, 8th March is International Women’s Day. (Yes, yes, since someone asks Every. Single. Time., there is also an International Men’s Day.) This year, IWD is on a Sunday. I saw an interesting toot in the fediverse from Eliza, asking men about their resolutions for IWD. I had a think about this. I work for myself, on my own, so things about “being more aware of things in an office environment” are less applicable to me. I’m married, and Sandra and I share things pretty equally. It really should go without saying, but nevertheless: I cook, clean, do food shopping, wash clothes, tidy up (I’m the tidy one!), and so on. Sometimes one of us does more of one thing than the other, depending on what is going on in our lives. Other things are split based on enjoyment from doing it, or just plain interest and skill. Sandra enjoys planning holidays more than I do. I have no objection to sorting out the gardening, or doing “handyman” jobs around the house. Sandra is better at choosing presents for people; I’ll sort out the car servicing and maintenance. We communicate about this kind of thing quite a lot - we make a good team, IMHO, and that means genuinely working together and supporting each other - but one resolution for me, this IWD, is that I will take the opportunity to talk to Sandra explicitly about how we, as a couple, handle these things. (“Explicit” as in “clear, intentional”, rather than “overly sexy”. Probably.) We can replan accordingly. I’m on the fence about this one, as it could be merely performative, and I already boost a lot. But it is something that I can do, and raising awareness does have a value. So, perhaps… And perhaps especially toots about women’s equality / rights / contributions etc. Obviously, this would be based on “to the best of my knowledge” anyway. Not everyone wants to share what gender(s) they are, or are not, and that is absolutely their choice. Perhaps. I will give this some more thought. 
But I wanted to post this sooner rather than later, so I could also draw inspiration from what other men are planning on doing.


Hapax Locks: Scalable Value-Based Mutual Exclusion

Hapax Locks: Scalable Value-Based Mutual Exclusion, by Dave Dice and Alex Kogan, PPoPP'26. This paper describes a locking algorithm intended for cases where spinning is acceptable (e.g., one-thread-per-core systems). It is similar to a ticket lock but generates less coherence traffic: each lock/unlock operation causes a constant number of cache lines to move between cores, regardless of the number of cores involved or how long they spin. As we’ve seen in a previous paper, polling a value in memory is cheap if the cache line is already local to the core which is polling. A Hapax lock comprises two 64-bit fields: Arrive and Depart. Additionally, there is a global (shared among all Hapax locks) 64-bit sequence number. Each time a thread attempts to lock a Hapax lock, it generates a Hapax value which uniquely identifies the locking episode. A locking episode is a single lock/unlock sequence performed by a specific thread. A Hapax value is generated by atomically incrementing the sequence number. It is assumed that the 64-bit counter additions will never overflow. Next, the locking thread atomically exchanges the value of Arrive with the Hapax value it just generated. This exchange operation generates a total ordering among Hapax values. It is a way for threads to cooperatively decide the order in which they will acquire the lock. Say thread A generates a Hapax value and stores it into Arrive (via an atomic exchange operation). Next, thread B generates its own Hapax value and atomically exchanges the value of Arrive with it. The result of the exchange operation performed by B will be thread A's value. At this point, thread B knows that it is directly behind thread A in the queue and must wait for thread A to release the lock. To finish acquiring the lock, threads continually poll Depart, waiting for it to equal the Hapax value of the preceding locking episode. In the example above, thread B polls until it sees thread A's value. At this point, the lock has been acquired. Unlocking is implemented by storing the Hapax value used by the unlocking thread into Depart.
In the running example, thread B would unlock the lock by storing its Hapax value into Depart. This baseline algorithm generates a lot of coherence traffic. In particular, the cache line which holds the sequence number would move between cores each time a new Hapax value is generated. Also, each store to Depart would send coherence traffic to each core which had recently polled the value of Depart. The paper has two techniques to address these issues. While the sequence number monotonically increases, the values stored in Arrive and Depart do not. There are two reasons for this. First, a single sequence number is shared among all Hapax locks. The second reason is that multiple threads can generate Hapax values and then race to perform the atomic exchange operation. For example, thread A could generate one Hapax value while thread B generates the next, larger one. They then race each other to atomically exchange their Hapax value with the value of Arrive. If thread B wins the race, then Arrive will first take on B's larger value, and only later A's smaller value. Once you realize that the values of Arrive and Depart are not monotonically increasing, it is straightforward to see how the generation of Hapax values can be made cheap. A thread can hoard a batch of Hapax values with a single atomic add operation. For example, a thread could atomically increase the value of the sequence number by 1024. At this point, the thread has allocated 1024 Hapax values for itself that it can use in the future without accessing the cache line which holds the shared sequence number. The paper proposes allocating Hapax values in blocks of 64K. The paper also proposes adding an additional array which serves a similar role as Depart. The number of elements in the array should be greater than the number of cores (the paper uses an array of 4096 values). Like the sequence number, this array is shared among all Hapax locks. When a thread writes its Hapax value into Depart, the thread also stores its Hapax value into one of the 4096 elements. 
The array index is determined by the Hapax value. Many potential hash functions could be used; the paper proposes hashing bits [27:16] of the Hapax value (the 16 is related to the 64K allocator block size). In the locking sequence, a thread loads the value of Depart once. If the value of Depart does not match the expected Hapax value, then the locking thread polls the appropriate element of the shared array. The thread polls this element until its value changes. If the new value is the expected Hapax value, then the lock has been acquired. If not, then a hash collision has occurred (e.g., a locking episode associated with a different Hapax lock caused the value to be updated). In this case, the thread starts over by checking Depart and then polling the array element again if necessary. This scheme minimizes coherence traffic associated with polling. When an unlocking core stores a value into an array element, the associated cache line will typically be present only in the cache of the next core in line, so coherence traffic is only generated between the locking and unlocking cores. Other threads (which are further back in the line) will be polling other array elements, and thus loading from other cache lines, so the cores those threads are running on won’t see the coherence messages. Fig. 3 has results from a microbenchmark: Hapax locks scale much better than ticket locks and go head-to-head with other state-of-the-art locking algorithms. The Hapax implementation is so concise (about 100 lines) that the authors included C++ source code in the paper. Source: https://dl.acm.org/doi/10.1145/3774934.3786443

Dangling Pointers

The big downside of spinning is that it wastes cycles in the case where there are other threads that the OS could schedule. I wonder if there is a lightweight coordination mechanism available. For example, the OS could write scheduling information into memory that is mapped read-only into user space. 
This could be used to communicate to the spinning code whether or not there are other threads ready to run.

Stratechery Yesterday

An Interview with Gregory Allen About Anthropic and the U.S. Government

An interview with Gregory Allen about Anthropic's dispute with the U.S. government.

Kev Quirk Yesterday

📚 Flybot

by Dennis E. Taylor Physicist Philip Moray is having a good day. He’s chipping away at his big work project. The lunch in the cafeteria is at least edible. And he’s looking forward to his end-of-the-day drink and a soak in the hot tub. Then, a strange device turns up in his office. A piece of technology he has never seen before–and shouldn’t even exist. Suddenly, corpses start turning up, eco-activists go on the attack, random people suffer bizarre symptoms. And every time the authorities get a lead, it traces right back to Philip and his colleague, Celia Hunt. Then, a mysterious caller contacts Philip–and, suddenly, staying out of jail is the very least of his problems. Apparently, that hot tub’s going to have to wait. 📖 Learn more on Goodreads… I'm a big fan of Taylor's work, but Flybot didn't really hit the mark for me as much as other books from Taylor have. I felt like the story lost its way in the middle; it came together okay in the end, with an interesting (but predictable) twist. Not the best book I've ever read.

Kelly Sutton Yesterday

AI Retrospective, Predictions

We’ve entered the 4th year of the Slop Wars. We have colorful shorthand like clanker, vibe-code, one-shot, and you’re absolutely right!. These phrases capture the zeitgeist. Emphasis on geist. It’s been more than 3 years since the release of OpenAI’s ChatGPT, the inciting incident that upended the world economy and changed how we work. This blog post provides some miscellaneous observations on AI, how it’s being used, and how it might be used going forward. I’m writing this mostly for myself to organize my thoughts, but it might be useful to others. These ideas are mostly drawn from what I’ve seen at my company, Scholarly. Between 2023 and 2025, every interface with AI was a chat interface. LLMs are next-token predictors, and the hello world of a next-token predictor is a chat interface. Words go in, words come out. We flew past the Turing Test with hardly a wave. But a chat interface is only one way of interacting with a next-token predictor. As context windows have grown and model quality has increased, we can trust the model with more than the call-and-response staccato. We can ask it to go do things that take longer. The chat surface might not withstand the test of time if it’s not the most appropriate tool for the task. Chat right now is similar to 3D websites, Flash, or an “under construction” GIF. Novel, but potentially pointless. It ultimately comes down to this: users don’t care if you use chat or the latest models. They care if you solve their problem. I predict the AI-in-your-face becomes muted. Certain parts of applications might feel more probabilistic because they are driven by LLMs, but still belong to the same application. You won’t be able to tell where discrete ends and probability begins. White collar jobs in general have come to the realization that software engineers have had for a few years: this is going to change how we work dramatically. 
At the time of writing, the Claude Code/Codex interface seems like the sweet spot of software engineering. This is higher level than where we collectively were last year, when the focus was tab completion. As the models have gotten more capable, we’ve started to trust them with more. What used to be a novel time saver (tab completion) is now unnecessarily slow. The entire nature of software engineering has changed, with far less time spent hands-on-keyboard entering syntax. We’ve gained a very capable author, so much of our time is now spent reviewing, tweaking, and asking it to come back with changes. It’s no surprise that the landscape is incredibly messy. Incomplete integrations (why can’t I tag Claude in my Linear ticket?) and half-baked products abound. There are companies that build something compelling, only to be obviated by an Anthropic blog post. Observing this, there will be a field of dead companies in the middle with few survivors at the edges. The model makers (OpenAI, Anthropic) will stick around, with third place subjected to the power-law dropoff of consumer choice (not good!). The systems of record that participate in the AI ecosystem will be rewarded handsomely, as OpenAI or Anthropic becomes the conduit for white collar work. Chat has limited us to asking for tasks to be completed inline. If something was too long for the early meager context windows, it just couldn’t be done. At the time of writing, we rarely think about context windows any more. Claude Code and techniques like chain-of-thought blew the doors off of the context window. By keeping small artifacts and summarizing as they go (compacting), these models are able to stay on a task for much longer through self-regulation. A strange loop? There’s lots of alpha in just making webhooks work (OpenAI) or providing them at all (Anthropic). 
As models have gotten better and we can trust them more, we need better ways of sending them off and having them come back with a work product for inspection sooner. I predict we’ll see more ways of asynchronously interacting with models. This is also what makes it feel like a coworker replacement: we’re not chit-chatting, they are going off and accomplishing a task. Agents, skills, etc. feel like fertile ground. Agents communicating with agents, working together to accomplish a task? I’m not sold yet, but that could be where we see the next model improvements take us. Right now when I ask Claude to do something, talented engineers still stand a chance to complete the task before it does. These are engineers that know the codebase, have a high WPM, and know exactly what they are going to do. If we play the tape, I predict the models will eventually start competing on latency. Indeed, there are some things that we use AI for in our application where something taking 30 minutes is not particularly impressive. That same task taking 180ms, now that’s impressive. Between Cerebras and Taalas , there are some promising options out there. As model latency decreases, this will put strange and foreign pressure on conventional hardware. Is there a future where Claude Code is operating on my machine (or a cloud VM) and idling waiting for disk I/O? What if it’s already thought through its next 10 steps, and it’s just waiting for the host to catch up? You should know that I’m financially incentivized to believe that SaaS is not dead as we know it. Scholarly is a SaaS company for colleges and universities. If I believed that LLMs posed an existential threat to SaaS, I should get out of this business. There was a market swing this year with the ethos being: “AI makes it easy to create your own applications, and we believe companies will do that instead of going with a vendor. Therefore existing SaaS has become less valuable.” I think that’s dumb. 
Mostly because the running code that you buy is just a small piece of software. Are you telling me that an enterprise is going to vibe-upgrade their bespoke application to the latest operating system/library/underlying dependency? I don’t think so. If anything, there are some really exciting properties about LLMs that make being a new entrant into a space great. We are unencumbered by the past, with powerful tools for helping us plug in to existing systems. So it’s not that SaaS is dead; it’s that SaaS that doesn’t adapt is dead. But that’s always been the case, maybe just more urgent now. AI is having profound impacts on white collar work. Many of the layoffs we are seeing in 2026 may be intertwined with the hiring glut of ~2020. Extrapolating my own behavior, I suspect many service aspects of white collar industries are seeing a softening in demand. Lawyers, for example, I bet aren’t hearing from their clients as much. They are still fielding the important interactions where expertise is required, but a quick turn on a simple contract or a proof-read of an NDA might just go to Opus 4.6 or GPT 5.2 for many people. Suddenly or slowly, the billable hours slip. Nothing concrete here, just a hunch. Not sure how it plays out. I think MCP is dumb, but it’s what we’re using. I expect it to stick around for a few years, and then we’ll fall back to more traditional REST APIs. Kind of similar to what happened with GraphQL. One of my favorite podcast episodes ever is the episode on spreadsheets from Planet Money. In it, they discuss how the invention of the digital spreadsheet put bookkeepers (the keepers of paper spreadsheets) out of work. But out of that pain new work was born around financial modeling. Rather than taking a day, it took seconds to answer the question, “What if we decreased our costs by 5%?” Work in predictive financial modeling was born. An entire new job came to be, and with it many positions. Even more than bookkeepers. 
We’re experiencing a similar creative destruction currently. Lots of what we knew is being destroyed before our eyes and new opportunities are being born. Special thanks to Claude Code for suggesting a few edits to this post.

Karan Sharma 2 days ago

A Web Terminal for My Homelab with ttyd + tmux

I wanted a browser terminal that works from laptop, tablet, and phone without special client setup. The stack that works cleanly for this is ttyd + tmux. Two decisions matter most: ttyd handles terminal-over-WebSocket behavior well, and enforcing a single active client avoids cross-tab resize contention.

Caddy reverse proxies to ttyd with TLS via a Cloudflare DNS challenge. Because ttyd uses WebSockets heavily, reverse proxy support for upgrades is essential.

Why each flag matters:

- writable shell
- matches my existing Caddy upstream
- one active client only (no resize fight club)
- real host shell from inside the container
- correct login environment and tmux config loading
- persistent attach/re-attach

I tuned tmux for long-running agent sessions, not just manual shell use:

- status line shows host + session + path + time
- pane border shows pane number + current command
- active pane is clearly highlighted

Key bindings cover: create/attach named session, create named window, rename window, session/window picker, pane movement, and pane resize.

Copying was a big pain point, so I added both workflows. Browser-native copy: turn tmux mouse off, drag-select and use the browser copy shortcut, then turn tmux mouse back on. tmux copy mode: enter copy mode, select, copy, or exit. On mobile, ttyd's top-left menu (special keys) makes prefix navigation workable.

This is tailnet-only behind Tailscale. No public exposure. Still, the container runs with broad host access, so the trust boundary matters. If you expose anything like this publicly, add auth in front and treat it as high-risk infrastructure.

The terminal is now boring in the best way: stable, predictable, and fast to reach from any device.
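A minimal launch command in the spirit of the setup above; the flag spellings and the session name are my assumptions, not necessarily the author's exact invocation:

```shell
# ttyd serves the terminal over HTTP/WebSocket (default port 7681).
#   --writable        allow keyboard input (ttyd is read-only by default)
#   --max-clients 1   one active client only, so tabs don't fight over resize
# tmux new-session -A attaches to the named session if it exists,
# otherwise creates it, which gives persistent attach/re-attach.
ttyd --writable --max-clients 1 tmux new-session -A -s main
```

Put the reverse proxy (Caddy here) in front of ttyd's port with WebSocket upgrades passing through, and keep the whole thing reachable only over the tailnet.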


Emacs Philosophy and Infinite Depth with Protesilaos

I had the absolute pleasure of being joined by the great Protesilaos Stavrou for a conversation about Emacs, minimalism, life philosophy, interconnectedness, and infinite depth. Come along for the ~2 hour journey! As always, God bless, and until next time. If you enjoyed this post, consider Supporting my work , Checking out my book , Working with me , or sending me an Email to tell me what you think.

Armin Ronacher 2 days ago

AI And The Ship of Theseus

Because code gets cheaper and cheaper to write, this includes re-implementations. I mentioned recently that I had an AI port one of my libraries to another language, and it ended up choosing a different design for that implementation. In many ways the functionality was the same, but the path it took to get there was different. That port worked by going via the test suite.

Something related, but different, happened with chardet . The current maintainer reimplemented it from scratch by only pointing an agent at the API and the test suite. The motivation: enabling relicensing from LGPL to MIT. I personally have a horse in the race here because I too wanted chardet to be under a non-GPL license for many years, so consider me a very biased person in that regard. Unsurprisingly, that new implementation caused a stir. In particular, Mark Pilgrim, the original author of the library, objects to the new implementation and considers it a derived work. The new maintainer, who has maintained it for the last 12 years, considers it a new work and instructed his coding agent to create precisely that. According to the author, validation with JPlag shows the new implementation is distinct. If you actually consider how it works, that's not too surprising: it's significantly faster than the original implementation, supports multiple cores, and uses a fundamentally different design.

What I think is more interesting about this question is the consequences of where we are. Copyleft code like the GPL heavily depends on copyright and friction to enforce it. But because it's fundamentally in the open, with or without tests, you can trivially rewrite it these days. I myself have been intending to do this for a little while now with some other GPL libraries. In particular, I started a re-implementation of readline a while ago for similar reasons, because of its GPL license. There is an obvious moral question here, but that isn't necessarily what I'm interested in.

Just as GPL software might re-emerge as MIT software, so might proprietary abandonware. For me personally, what is more interesting is that we might not even be able to copyright these creations at all. A court still might rule that all AI-generated code is in the public domain, because there was not enough human input in it. That's quite possible, though probably not very likely. But this all causes some interesting new developments we are not necessarily ready for. Vercel, for instance, happily re-implemented bash with Clankers but got visibly upset when someone re-implemented Next.js in the same way.

There are huge consequences to this. When the cost of generating code goes down that much, and we can re-implement it from test suites alone, what does that mean for the future of software? Will we see a lot of software re-emerging under more permissive licenses? Will we see a lot of proprietary software re-emerging as open source? Will we see a lot of software re-emerging as proprietary? It's a new world and we have very little idea of how to navigate it. In the interim we will have some fights about copyrights, but I have the feeling very few of those will go to court, because everyone involved will actually be somewhat scared of setting a precedent. In the GPL case, though, I think it warms up some old fights about copyleft vs. permissive licenses that we have not seen in a long time.

It probably does not feel great to have one's work rewritten with a Clanker and one's authorship eradicated. Unlike the Ship of Theseus , though, this seems more clear-cut: if you throw away all the code and start from scratch, even if the end result behaves the same, it's a new ship. It only continues to carry the name. Which may be another argument for why authors should hold on to trademarks rather than rely on licenses and contract law.

I personally think all of this is exciting. I'm a strong supporter of putting things in the open with as little license enforcement as possible. I think society is better off when we share, and I consider the GPL to run against that spirit by restricting what can be done with it. This development plays into my worldview. I understand, though, that not everyone shares that view, and I expect more fights over the emergence of slopforks as a result. After all, it combines two very heated topics, licensing and AI, in the worst possible way.

matklad 2 days ago

JJ LSP Follow Up

In Majjit LSP , I described an idea for implementing a Magit-style UX for jj once and for all by leveraging the LSP protocol. I've learned today that the upcoming 3.18 version of LSP has a feature that makes this massively less hacky: the Text Document Content Request. LSP can now provide virtual documents, which aren't actually materialized on disk. So that Magit-style buffer can now be such a virtual document, where highlighting is provided by semantic tokens, things like "check out this commit" are code actions, and "goto definition" jumps from the diff in the virtual file to a real file in the working tree.
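Roughly, the new request as drafted for LSP 3.18 looks like this; the jj:// URI scheme and the placeholder payload are my invented example, not part of the spec:

```json
// client → server: resolve the content of a non-file URI
{ "method": "workspace/textDocumentContent",
  "params": { "uri": "jj://status" } }

// server → client: the generated text, never materialized on disk
{ "result": { "text": "<rendered jj status buffer>" } }
```

The 3.18 draft also pairs this with a server-initiated refresh request, so the virtual buffer can be kept in sync as the repo changes.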

Evan Schwartz 2 days ago

Scour - February Update

Hi friends, In February, Scour scoured 647,139 posts from 17,766 feeds (1,211 were newly added). Also, 917 new users signed up, so welcome everyone who just joined! Here's what's new in the product:

If you subscribe to specific feeds (as opposed to scouring all of them), Scour can now infer topics you might be interested in from them. You can click the link that says "Suggest from my feeds" on the Interests page . Thank you to the anonymous user who requested this!

The onboarding experience is simpler. Instead of typing out three interests, you can now describe yourself and your interests in free-form text. Scour extracts a set of interests from what you write. Thank you to everyone who let me know that they were a little confused by the onboarding process.

I made two subtle changes to the ranking algorithm. First, the scoring algorithm now ranks posts by how well they match your closest interest and gives a slight boost if the post matches multiple interests. That was the intended design from earlier, but I realized that multiple weaker matches were pulling down the scores rather than boosting them. The second change was that I finally retired the machine-learning text quality classifier model that Scour had been using. The final straw was when a blog post I had written (and worked hard on!) wasn't showing up on Scour. The model had classified it as low quality 😤. I had known for a while that what the model was optimizing for was somewhat orthogonal to my idea of text quality, but that was it. For the moment, Scour relies on a large domain blocklist (of just under 1 million domains) to keep low-quality content and spam out of your feed. I'm also investigating other ways of assessing quality without relying on social signals , but more on that to come in the future.

I've always been striving to make Scour fast, and it got much faster this past month. My feed, which compares about 35,000 posts against 575 interests, now loads in around 50 milliseconds. Even comparing all the 600,000+ posts from the last month across all feeds takes only 180 milliseconds. This graph shows the 99th-percentile latency (the slowest requests) dropping from the occasional 10 seconds down to under 400 milliseconds (lower is better):

For those interested in the technical details, this speedup came from two changes. First, I switched from scanning through post embeddings streamed from SQLite, which was already quite fast because the data is local, to keeping all the relevant details in memory. The in-memory snapshot is rebuilt every 15 minutes when the scraper finishes polling all of the feeds for new content. This change resulted in the very nice combination of much higher performance and lower memory usage, because SQLite connections have independent caches. The second change came from another round of optimization on the library I use to compute the Hamming distance between each post's embedding and the embeddings of each of your interests. You can read more about this in an upcoming blog post, but I was able to speed up the comparisons by around another 40x, making it so Scour can now do around 1.6 billion comparisons per second. Together, these changes make loading the feed feel instantaneous, even though your whole feed is ranked on the fly when you load the page.

Here were some of my favorite posts that I found on Scour in February: Scour is built on vector embeddings, so I'm especially excited when someone releases a new and promising-sounding embedding model. I get particularly excited by those that are explicitly trained to support binary quantization, like this one from Perplexity: pplx-embed: State-of-the-Art Embedding Models for Web-Scale Retrieval . I also spend a fair amount of time thinking about optimizing Rust code, especially using SIMD, so this was an interesting write-up from TurboPuffer: Rust zero-cost abstractions vs. SIMD . This was an interesting write-up comparing what different coding agents do under the hood: I Intercepted 3,177 API Calls Across 4 AI Coding Tools. Here's What's Actually Filling Your Context Window. . And finally, this one is on a very different topic but has some nice animations that demonstrate why boarding airplanes is slow and shows The Fastest Way to Board an Airplane .

Happy Scouring!

Kev Quirk 2 days ago

Another New Lick of Paint

Around a month ago I switched this blog to Pure Blog , and at the same time I decided to simplify the design and give it a new lick of paint. Here's what it looked like:

It was okay . But I've done the thing before, and I really wanted something different. The problem was, I didn't know what I wanted. My wife and I recently went away for the weekend. While away, we stopped off at a lovely little coffee shop where they served us water and a pot of tea from these beautifully coloured pots. The mustard yellow and the steel blue are just beautiful; they work so well together, and I immediately decided I wanted to use this kind of palette for my next website design.

Since Monday I've been working on the redesign (something that's really simple to do with Pure Blog ). It's now ready, and I launched the new site this evening. Here's what it looks like now:

I thought about using the mustard colour for the entire background, but since this is a blog, reading experience is very important, and I felt I was straining my eyes when reading in full mustard mode. So I toned it down to this nice cream colour, and stuck with mustard for the header and footer only. While I was there I also got rid of the effect to simplify the site header even more. I have to say, I'm really happy with the result. There's bound to be the odd little bug or caching issue here and there, which I'll mop up as I discover them. If you find an issue, please drop me an email or leave a comment, and I'll get it sorted.

Thanks for reading this post via RSS. RSS is ace, and so are you. ❤️ You can reply to this post by email , or leave a comment .
