Latest Posts (20 found)

Knowledge Priming

Rahul Garg has observed a frustration loop when working with AI coding assistants: lots of code gets generated, but it needs lots of fixing. He has noticed five patterns that help improve the interaction with the LLM, and describes the first of these: priming the LLM with knowledge about the codebase and preferred coding patterns.

0 views

decomplexification continued

Last spring I wrote a blog post about our ongoing background work to gradually simplify the curl source code over time. This is a follow-up: a status update on what we have done since then and what comes next.

In May 2025 I had just managed to get the worst function in curl down to complexity 100, and the average score of all curl production source code (179,000 lines of code) was 20.8. We had 15 functions still scoring over 70.

Almost ten months later we have reduced the most complex function in curl from 100 to 59, meaning that we have simplified a vast number of functions. Done by splitting them up into smaller pieces and by refactoring logic; reviewed by humans, verified by lots of test cases, checked by analyzers and fuzzers. The current 171,000 lines of code now have an average complexity of 15.9.

The complexity score in this case is just the cold, raw metric reported by the pmccabe tool. I decided to use that as the absolute truth, even if a human could of course at times debate and argue about its claims. It makes it easier to just obey the tool, and it is quite frankly doing a decent job at this, so it's not a problem.

In almost all cases the main problem with complex functions is that they do a lot of things in a single function – too many – where the functionality could, or should, rather be split into several smaller sub-functions. In almost every case it is also immediately obvious that when splitting a function into two, three or more sub-functions with smaller and more specific scopes, the code gets easier to understand, and each smaller function is subsequently easier to debug and improve.

I don't know how far we can take the simplification, or what the ideal average complexity score of the curl code base might be. At some point it becomes counter-productive: making functions even smaller just makes it harder to follow code flows and to absorb the proper context into your head.
To illustrate our simplification journey, I decided to render graphs with a date axis starting at 2022-01-01 and ending today. Slightly over four years, representing a little under 10,000 git commits.

First, a look at the complexity of the worst-scored function in curl production code over the last four years, compared with P90 and P99.

[Graph: The most complex function in curl over time]

Identifying the worst function might not say too much about the code in general, so another check is to see how the average complexity has changed. This is calculated like this: for each function, add its function score times its function length to a total complexity score, and in the end divide that total complexity score by the total number of lines used by all functions. Also do the same for a median score.

[Graph: Average and median complexity per source code line in curl, over time]

When 2022 started, the average was about 46 and, as can be seen, it has been dwindling ever since, with a few steep drops when we have merged dedicated improvement work.

One way to complement the average and median lines, to give us a better picture of the state, is to investigate the complexity distribution throughout the source code.

[Graph: How big a portion of the curl source code is how complex]

This reveals that the most complex quarter of the code in 2022 has since been simplified. Back then, 25% of the code scored above 60; now all of the code is below 60. It also shows that during 2025 we managed to clean up all the dark functions, meaning the end of 100+ complexity functions. Never to return – at least that is the plan. We don't really know.

We believe less complex code is generally good for security and code readability, but it is probably still too early for us to actually measure any particular positive outcome of this work (apart from fancy graphs). Also, there are many more ways to judge code than by this complexity score alone.
Like having sensible APIs, both internal and external, and making sure that they are properly and correctly documented, etc. The fact that they all interact and they all keep changing makes it really hard to isolate a single factor like complexity and say that changing this alone is what makes an impact. Additionally: maybe the refactoring itself, and the attention given to the functions when doing so, either fixes problems or introduces new ones – not because of the change in complexity, but as the mere result of eyes giving attention to that code and changing it right then. Maybe we just need to allow several more years to pass before any change from this can be measured?

To recap how the numbers are produced:

- All functions get a complexity score from pmccabe.
- Each function has a number of lines.
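The line-weighted average described above is easy to sketch in Python. The column layout assumed here (complexity in the first field, the function's line count in the fifth) matches pmccabe's default output as I understand it, but verify against your version; the sample data is hypothetical:

```python
def weighted_average_complexity(pmccabe_lines):
    """Line-weighted average complexity from pmccabe-style output lines.

    Assumed default column order (worth verifying with `pmccabe -h`):
    modified-complexity, traditional-complexity, statements,
    first-line, line-count, location.
    """
    total_score = 0
    total_lines = 0
    for line in pmccabe_lines:
        fields = line.split()
        if len(fields) < 5:
            continue  # skip blank or malformed lines
        complexity = int(fields[0])   # function complexity score
        length = int(fields[4])       # number of lines in the function
        total_score += complexity * length
        total_lines += length
    return total_score / total_lines if total_lines else 0.0


# Two hypothetical functions: complexity 10 over 100 lines and
# complexity 30 over 50 lines -> (10*100 + 30*50) / 150 = 16.67
sample = [
    "10\t12\t40\t100\t100\tlib/url.c(100): parse_url",
    "30\t33\t90\t300\t50\tlib/http.c(300): http_do",
]
print(round(weighted_average_complexity(sample), 2))  # → 16.67
```

Feeding it the full `pmccabe lib/*.c` output would reproduce the average tracked in the graphs; the median variant would collect per-line scores and take their midpoint instead.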

0 views

Setup a Syncthing service on Debian

Install via the APT instructions. Next (source):

```
useradd -u 1010 -c "Syncthing Service" -d /var/syncthing -s /usr/sbin/nologin syncthing
mkdir /var/syncthing
chown -R syncthing:syncthing /var/syncthing
chmod 700 /var/syncthing
systemctl enable syncthing@syncthing.service
systemctl start syncthing@syncthing.service
systemctl status syncthing@syncthing.service
```

Then you should be able to connect to the web GUI at `localhost:8385`. To allow this user to read files outside its own directories, use

```
getfacl /some/other/dir
```

from `acl` (`apt-get install acl`) to view the permission...
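Granting that access with ACLs might look like this – a sketch, with `/some/other/dir` as a placeholder path and `setfacl` from the same `acl` package:

```shell
# Illustrative only: grant the syncthing user read access to a directory
# outside its home. rX = read, plus traverse (execute) on directories.
sudo setfacl -R -m u:syncthing:rX /some/other/dir

# Optionally add a default ACL so newly created files inherit the entry:
sudo setfacl -R -m d:u:syncthing:rX /some/other/dir

# Inspect the result:
getfacl /some/other/dir
```

The default (`d:`) entry matters for sync directories, since Syncthing must be able to read files that appear after the ACL was set.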

0 views
(think) Today

Learning Vim in 3 Steps

Every now and then someone asks me how to learn Vim. 1 My answer is always the same: it's simpler than you think, but it takes longer than you'd like. Here's my bulletproof 3-step plan.

Start with `vimtutor` – it ships with Vim and takes about 30 minutes. It'll teach you enough to survive: moving around, editing text, saving, quitting. The essentials.

Once you're past that, I strongly recommend Practical Vim by Drew Neil. This book changed the way I think about Vim. I had known the basics of Vim for over 20 years, but the Vim editing model never really clicked for me until I read it. The key insight is that Vim has a grammar – operators (verbs) combine with motions (nouns) to form commands. `d` (delete) + `w` (word) = `dw`. `c` (change) + `i"` (inside quotes) = `ci"`. Once you internalize this composable language, you stop memorizing individual commands and start thinking in Vim. The book is structured as 121 self-contained tips rather than a linear tutorial, which makes it great for dipping in and out. You could also just read Vim's built-in documentation cover to cover – it's excellent. But let's be honest, few people have that kind of patience.

Other resources worth checking out:

- Advent of Vim – a playlist of short video tutorials covering basic Vim topics. Great for visual learners who prefer bite-sized lessons.
- ThePrimeagen's Vim Fundamentals – if you prefer video content and a more energetic teaching style.
- vim-be-good – a Neovim plugin that gamifies Vim practice. Good for building muscle memory.

Resist the temptation to grab a massive Neovim distribution like LazyVim on day one. You'll find it overwhelming if you don't understand the basics and don't know how the Vim/Neovim plugin ecosystem works. It's like trying to drive a race car before you've learned how a clutch works. Instead, start with a minimal configuration and grow it gradually. I wrote about this in detail in Build your .vimrc from Scratch – the short version is that modern Vim and Neovim ship with excellent defaults and you can get surprisingly far with a handful of settings.

I'm a tinkerer by nature. I like to understand how my tools operate at their fundamental level, and I always take that approach when learning something new. Building your config piece by piece means you understand every line in it, and when something breaks you know exactly where to look.

I'm only half joking. Peter Norvig's famous essay Teach Yourself Programming in Ten Years makes the case that mastering any complex skill requires sustained, deliberate practice over a long period – not a weekend crash course. The same applies to Vim. Grow your configuration one setting at a time. Learn Vimscript (or Lua if you're on Neovim). Read other people's configs. Maybe write a small plugin. Every month you'll discover some built-in feature or clever trick that makes you wonder how you ever lived without it. One of the reasons I chose Emacs over Vim back in the day was that I really hated Vimscript – it was a terrible language to write anything in. These days the situation is much better: Vim9 Script is a significant improvement, and Neovim's switch to Lua makes building configs and plugins genuinely enjoyable.

Mastering an editor like Vim is a lifelong journey. Then again, the way things are going with LLM-assisted coding, maybe you should think long and hard about whether you want to commit your life to learning an editor when half the industry is "programming" without one. But that's a rant for another day.

If this bulletproof plan doesn't work out for you, there's always Emacs. Over 20 years in and I'm still learning new things – these days mostly how to make the best of evil-mode so I can have the best of both worlds. As I like to say: the road to Emacs mastery is paved with a lifetime of `M-x` invocations.

That's all I have for you today. Keep hacking!

1. Just kidding – everyone asks me about learning Emacs. But here we are.  ↩︎
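To make the "minimal configuration" advice concrete, here is the kind of starting point meant – a sketch of commonly recommended settings, not the author's actual vimrc:

```vim
" A deliberately tiny vimrc to grow one setting at a time (illustrative)
set hidden                 " switch buffers without saving first
set ignorecase smartcase   " case-insensitive search, unless you type capitals
set incsearch              " show matches while typing a search pattern
set number                 " show line numbers
let mapleader = " "        " use Space as the leader key
```

Every line here is something you can look up with `:help` (e.g. `:help 'hidden'`), which is itself good practice for learning the editor.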

0 views
(think) Today

How to Vim: Auto-save on Activity

Coming from Emacs, one of the things I missed most in Vim was auto-saving. I've been using my own super-save Emacs package for ages – it saves your buffers automatically when you switch between them, when Emacs loses focus, and on a handful of other common actions. After years of using it I've almost forgotten that manual saving exists. Naturally, I wanted something similar in Vim.

Vim's autocommands make it straightforward to set up basic auto-saving. What I ended up with is a single autocommand on the `FocusLost` and `InsertLeave` events that runs `silent! update`. This saves the current buffer when Vim loses focus (you switch to another window) and when you leave Insert mode. A few things to note:

- `update` instead of `write` – it only writes when the buffer has actually changed, avoiding unnecessary disk writes.
- `silent!` – suppresses errors for unnamed buffers and read-only files that can't be saved.

You can extend this with more events if you like: adding `TextChanged` catches edits made in Normal mode (like `dd`, `x`, or paste commands), so you're covered even when you never enter Insert mode. `FocusLost` works reliably in GUI Vim and most modern terminal emulators, but it may not fire in all terminal setups (especially inside tmux without additional configuration). One more point in favor of using Ghostty and not bothering with terminal multiplexers.

The same autocommands work in Neovim; you can put the equivalent in your `init.lua`. Neovim also has the `'autowrite'` option, which automatically saves before certain commands like `:next`, `:make`, and `:suspend`. It's not a full auto-save solution, but it's worth knowing about.

There are several plugins that take auto-saving further, notably vim-auto-save for Vim and auto-save.nvim for Neovim. Most of these plugins rely on `CursorHold` – an event that fires after the cursor has been idle for `'updatetime'` milliseconds. The problem is that `'updatetime'` is a global setting that also controls how often swap files are written, and other plugins depend on it too. Setting it to a very low value (say, 200ms) for snappy auto-saves can cause side effects – swap file churn, plugin conflicts, and in Neovim specifically, `CursorHold` can behave inconsistently when timers are running. For what it's worth, I think idle-timer-based auto-saving is overkill in Vim's context. The simple autocommand approach covers the important cases, and anything more aggressive starts fighting against Vim's grain. I've never been fond of the idle-timer approach to begin with, and that's part of the reason why I created super-save for Emacs. I like the predictability of triggering a save by doing some action.

Simplicity is the ultimate sophistication. – Leonardo da Vinci

Here's the thing I've come to appreciate about Vim: saving manually isn't nearly as painful as it is in Emacs. In Emacs, `C-x C-s` is a two-chord sequence that you type thousands of times a day – annoying enough that auto-save felt like a necessity. In Vim, you're already in Normal mode most of the time, so a quick leader mapping gives you a fast, single-keystroke save (assuming your leader is Space, which it should be). It's explicit, predictable, and takes almost no effort.

As always, I've learned quite a bit about Vim by looking into this simple topic. That's probably the main reason I still bother to write such tutorial articles – they make me reinforce the knowledge I've just obtained and make me ponder more than usual about the trade-offs between different ways to approach certain problems. I still use the autocommand approach myself – old habits die hard – but I have to admit that plain manual saving gets the job done just fine. Sometimes the simplest solution really is the best one.

That's all I have for you today. Keep hacking!
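Pieced together from the description above, the autocommand approach looks roughly like this – a reconstruction consistent with the post, not necessarily the author's exact lines:

```vim
" Save on focus loss and on leaving Insert mode.
" ':update' only writes if the buffer changed; 'silent!' suppresses
" errors for unnamed or read-only buffers.
autocmd FocusLost,InsertLeave * silent! update

" Optional: also catch Normal-mode edits (dd, x, paste, ...):
" autocmd TextChanged * silent! update
```

The Neovim Lua equivalent for `init.lua` might be:

```lua
-- Same events, expressed via the Neovim autocommand API
vim.api.nvim_create_autocmd({ "FocusLost", "InsertLeave" }, {
  command = "silent! update",
})
```

And a leader mapping for quick manual saves (the key choice here is a hypothetical example):

```vim
nnoremap <leader>w :update<CR>
```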

0 views

What Happened Was

Two of my absolute favorite films of all time, albeit for very different reasons, are My Dinner with Andre and Before Sunrise . Both of these films, which I highly encourage you to watch more than anything else I talk about if you haven't already done so, are about the enchantment and pull of one single really interesting conversation. The two films diverge pretty heavily from there. My Dinner with Andre is a film about work, fulfillment, and status. And Before Sunrise is a film about youth in love. But the beauty in both comes from not just their simplicity and formless structure, but in the recursive nature of the dialogue, just like in real life, where a pregnant pause or a sidelong glance suddenly carries with it enormous weight after understanding not just the comment but the 75 minutes preceding it. What Happened Was is interested in that last thing too. And in the unraveling of yourself that happens when you spend time being intimate in a literal sense with anyone. But it is more interested in a funhouse mirror look at the human psyche. And has perhaps more cynical and caustic things to say about the way people express themselves through others. Our dual protagonists are a paralegal and an executive assistant. Both seem a little off, but not wholly so. And then, over the course of the worst first date in the world, we watch the characters reduce themselves to mania. This is an uncomfortable film to watch. Rather than transposing yourself into Andre and his counterpart, or Jesse and his counterpart, you find yourself just kind of internally screaming on behalf of both characters, who have a Lynchian sense of bizarre behavior. In terms of inspiration, this draws more from Waiting for Godot than Who's Afraid of Virginia Woolf. The dread you feel is less from a place of sadness and understanding and more from a sense of shock and increasing bewilderment. And to that extent, it flatly did not work for me quite as much as I hoped. 
But as in all two-part plays, the film ends with two monologues, one from each character, where they lay bare the things that at that point are almost nakedly obvious to us, the viewer. And while I can't say either monologue or scene was particularly well written, I will say that both of them will stick with me for a long, long time. (I'm not sure the preceding seventy minutes earned those monologues, but that's beside the point.)

0 views

ipc-channel-mux router support

The IPC channel multiplexing crate, ipc-channel-mux, now includes a “router”. The router provides a means of automatically forwarding messages from subreceivers to Crossbeam receivers so that users can enjoy Crossbeam receiver features, such as selection (explained below). The absence of a router blocked the adoption of the crate by Servo, so it was an important feature to support.

Routing involves running a thread which receives from various subreceivers and forwards the results to Crossbeam channels. Without a separate thread, a receive on one of the Crossbeam receivers would block, and when a message became available on the subchannel it wouldn’t be forwarded to the Crossbeam channel.

Before we explain routing further, we need to introduce a concept which may be unfamiliar to some readers. Suppose you have a set of data sources – servers, file descriptors, or, in our case, channels – which may or may not be ready to deliver data. To wait for one or more of these to be ready, one option is to poll the items in the set. But if none of the items are ready, what should you do? If you loop around and repeatedly poll the items, you’ll consume a lot of CPU. If you delay for a period of time before polling again and an item becomes ready before the period has elapsed, you won’t notice. So polling either consumes excessive CPU or reduces responsiveness. How do we balance the requirements of efficiency and responsiveness? The solution is to somehow block until at least one item is ready. That’s just what selection does.

In the context of ipc-channel, this selection logic applies to a set of receivers known as an IpcReceiverSet. An IpcReceiverSet holds a set of IPC receivers and, when requested, waits for at least one of the receivers to be ready and then returns a collection of the results from all the receivers which became ready. The purpose of routing is that users, such as Servo, can then select [1] [2] over a heterogeneous collection of IPC receivers and Crossbeam receivers.
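The forwarding idea behind the router can be illustrated with std-only Rust. This is a conceptual sketch, not the crate's API: the `route` function and its per-source forwarder threads are hypothetical, and a single merged channel stands in for Crossbeam's select by letting one blocking `recv` wait on all sources at once:

```rust
use std::sync::mpsc;
use std::thread;

// Sketch of the router concept: instead of polling several receivers in a
// loop, spawn a forwarder thread per source that pushes messages (tagged
// with the source's index) into one shared channel. A single blocking
// recv on that channel then waits until *some* source is ready.
fn route<T: Send + 'static>(sources: Vec<mpsc::Receiver<T>>) -> mpsc::Receiver<(usize, T)> {
    let (tx, rx) = mpsc::channel();
    for (id, source) in sources.into_iter().enumerate() {
        let tx = tx.clone();
        thread::spawn(move || {
            // Iterating a Receiver blocks and ends when the sender side
            // disconnects; forwarding stops if the merged side is gone.
            for msg in source {
                if tx.send((id, msg)).is_err() {
                    break;
                }
            }
        });
    }
    rx // disconnects once every forwarder thread has finished
}

fn main() {
    let (tx_a, rx_a) = mpsc::channel();
    let (tx_b, rx_b) = mpsc::channel();
    let routed = route(vec![rx_a, rx_b]);

    tx_a.send("from a").unwrap();
    tx_b.send("from b").unwrap();

    // One blocking recv serves both sources; arrival order is not
    // deterministic, so sort before comparing.
    let mut got = vec![routed.recv().unwrap(), routed.recv().unwrap()];
    got.sort();
    assert_eq!(got, vec![(0, "from a"), (1, "from b")]);
}
```

The real router forwards into Crossbeam channels precisely so that the resulting homogeneous set of Crossbeam receivers can participate in Crossbeam's `select`, which this stdlib sketch can only approximate.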
By converting IPC receivers into Crossbeam receivers, it’s possible to use Crossbeam channel’s selection feature on a homogeneous collection of Crossbeam receivers to implement a select on the corresponding heterogeneous collection of IPC receivers and Crossbeam receivers. Routing for ipc-channel-mux has the same requirement: to convert a collection of subreceivers to Crossbeam receivers so that Crossbeam channel’s selection feature can be used on a homogeneous collection of Crossbeam receivers to implement selection on the corresponding heterogeneous collection of subreceivers and Crossbeam receivers.

Let’s look at how this is implemented. The most obvious approach was to mirror the design of ipc-channel routing and implement subchannel routing in terms of sets of subreceivers known as SubReceiverSets. Receiving from a collection of subreceivers could be implemented by attempting a non-blocking receive from each subreceiver of the collection in turn and returning any results. However there is a difficulty: if none of the subreceivers returns a result, what should happen? If we loop around and repeatedly attempt to receive from each subreceiver in the collection, we’ll consume a lot of CPU. If we delay for a certain period of time, we won’t be responsive if a subreceiver becomes ready to return a result. The solution is to somehow block until at least one of the subreceivers is ready to return a result. A SubReceiverSet does just that: it holds a set of subreceivers and, when requested, returns a collection of the results from all the receivers which became ready. This is a specific example of the advantage of selection over polling, discussed above.

Remember that the results of a subreceiver are demultiplexed from the results of an IPC receiver (provided by the ipc-channel crate). The following diagram shows how a MultiReceiver sits between an IpcReceiver and the SubReceivers served by that IpcReceiver: ipc-channel already implements an IpcReceiverSet.
So a SubReceiverSet can be implemented in terms of an IpcReceiverSet containing all the IPC receivers underlying the subreceivers in the set. There are some complications, however. When a subreceiver is added to a SubReceiverSet, there may be other subreceivers with the same underlying IPC receiver which do not belong to the set, and yet the IpcReceiverSet will return a message that could demultiplex either to a subreceiver in the set or to a subreceiver not in the set. Worse than that, subreceivers with the same underlying IPC receiver may be added to distinct SubReceiverSets. So if we use an IpcReceiverSet to implement a SubReceiverSet, more than one SubReceiverSet may need to share the same IpcReceiverSet.

There is one case where Servo uses an IpcReceiverSet directly, rather than via the router. So one option would be to avoid adding an IpcReceiverSet equivalent to the API of ipc-channel-mux. Then there would be at most one instance of the underlying IpcReceiverSet, and some of the complications might not arise. But there’s a danger that it would be possible to encounter the same complication using the router, e.g. if some subreceivers were added to the router and other subreceivers with the same underlying IPC channel as those added to the router were used directly.

Another complication of routing is that the router thread needs to receive messages from subchannels which originate outside that thread. So subreceivers need to be moved into the thread; in Rust terms, they need to be Send. Given that some subreceivers can be moved into the thread while other subreceivers which have not been moved into the thread can share the same underlying IPC channel, subreceivers (or at least substantial parts of their implementation) also need to be Sync. To avoid polling, it must essentially be possible for a select operation on a SubReceiverSet to result in a select operation on an IpcReceiverSet comprising the underlying IpcReceiver(s).

I experimented with the situation where some subreceivers were added to the router and other subreceivers with the same underlying IPC channel as those added to the router were used directly.
This resulted in liveness and/or fairness issues when the thread using a subreceiver directly competed with the router thread. Both of these threads would attempt to issue a select on an IpcReceiverSet. The cleanest solution initially appeared to be to make both depend on the router to issue the select operation. This came with some restrictions, though, such as the stand-alone subreceiver not being able to receive any more messages after the router was shut down.

A radical alternative was to restructure the router API so that it would not be possible for some subreceivers to be added to the router while other subreceivers with the same underlying IPC channel were used directly. This may be a reasonable restriction for Servo, because receivers tend to be added to the router soon after the receiver’s channel is created. With this redesigned router API, in which subreceivers destined for routing are hidden from the API, the above liveness and fairness problems can be side-stepped.

v0.0.5 of the ipc-channel-mux crate includes the redesigned router API. v0.0.6 improves the throughput for both subchannel receives and routing. The next step is to try to improve the code structure, since the module has grown considerably and could do with some parts splitting into separate modules. After that, I’ll need to see if some of the missing features relative to ipc-channel need to be added to ipc-channel-mux before it’s ready to be tried out in Servo. [3]

[1] Another possibility, if some of the IPC receivers have been disconnected, is that select can return which IPC receivers have been disconnected. ↩︎
[2] Crossbeam selection is a little more general: it allows the user to wait for operations to complete, each of which may be a send or a receive. An arbitrary one of the completed operations is chosen and its resultant value is returned. ↩︎
[3] The main functional gaps in ipc-channel-mux compared to ipc-channel are shared memory transmission and non-blocking subchannel receive. ↩︎

0 views
Xe Iaso Today

Portable monitors are good

My job has me travel a lot. When I'm in my office I normally have a seven-monitor battlestation like this: [image or embed] @xeiaso.net January 26, 2026 at 11:34 PM

So as you can imagine, travel sucks for me because I just constantly run out of screen space. This can be worked around – I minimize things more, or I just close them – but you know what is better? Just having another screen.

On a whim, I picked up this 15.6" Innoview portable monitor off of Amazon. It's a 1080p screen that I hook up to my laptop or Steam Deck with USB-C. However, the exact brand and model doesn't matter. You can find them basically anywhere with the most AliExpress term ever: screen extender.

This monitor is at least half decent. It is not a colour-accurate slice of perfection. It claims to support HDR but actually doesn't. Its brightness out of the box could be better. I could go down the list and really nitpick until the cows come home, but it really, really doesn't matter. It's portable, 1080p, and good enough.

When I was at a coworking space recently, it proved to be one of the best purchases I've ever made. I had Slack off to the side and was able to just use my computer normally. It was so boring that I have difficulty explaining how much I liked it. This is the dream when it comes to technology. 3/5, I would buy a second one.

0 views

Learning Java Again

Java was the first programming language I learned – it's my baby. Well, along with HTML and CSS. But for actual programming, I've always enjoyed writing Java code. I've become pretty familiar with Python at this point, and haven't touched Java in ages, but I really feel the itch to pick it up seriously again, since it is what taught me programming and computer science concepts to begin with. I actually still recommend Java as a first programming language over Python, since it introduces a lot of concepts that I think are good to start with from the beginning. It's easier to move from Java to Python than to move from Python to Java or C++. Anyone have project ideas or recommendations for writing more Java? Let me know :)

1 view
Jim Nielsen Yesterday

Making Icon Sets Easy With Web Origami

Over the years, I’ve used different icon sets on my blog. Right now I use Heroicons . The recommended way to use them is to copy/paste the source from the website directly into your HTML. It’s a pretty straightforward process:

1. Go to the website
2. Search for the icon you want
3. Click “Copy SVG”
4. Go back to your IDE and paste it

If you’re using React or Vue, there are also npm packages you can install so you can import the icons as components. But I’m not using either of those frameworks, so I need the raw SVGs – and there’s no package for those, so I have to manually grab the ones I want.

In the past, my approach has been to copy the SVGs into individual files in my project. Then I have a “component” for reading those icons from disk, which I use in my template files to inline the SVGs in my HTML. It’s fine. It works. But it’s a lot of Node boilerplate to read files from disk, and changing icons is a bit of a pain: I have to find new SVGs, overwrite my existing ones, re-commit them to source control, etc.

I suppose it would be nice if I could just install the icon set from npm, get the raw SVGs into my project, and read those. But that has its own set of trade-offs. For example:

- Names are different between icon packs, so when you switch, names don’t match. An icon might have one name in one pack and another name in the next, so changing sets requires going through all your templates and updating references.
- Icon packs are often quite large and you only need a subset. Installing a whole pack might bring in hundreds or even thousands of icons I don’t need.

So the project’s npm packages don’t provide the raw SVGs. The website does, but I want a more programmatic way to easily grab the icons I want. How can I do this?

I’m using Web Origami for my blog, which makes it easy to map the icons I use in my templates to Heroicons hosted on GitHub. It doesn’t require a package manager or a build step. In a snippet of my config file, I name my icon and then point it at the SVG as hosted on GitHub via the Heroicons repo. Origami takes care of fetching the icons over the network and caching them in memory. Beautiful, isn’t it? It kind of reminds me of import maps, where you can map a bare module specifier to a URL (and Deno’s semi-abandoned HTTP imports, which were beautiful in their own right).

Origami makes file paths first-class citizens of the language — even “remote” file paths — so it’s very simple to create a single file that maps your icon names in a codebase to someone else’s icon names from a set, whether those are installed on disk via npm or fetched over the internet. I can have a small mapping file, then reference those icons by name in my templates. Easy-peasy! And when I want to change icons, I simply update the entries to point somewhere else — at a remote or local path.

And if you really want to go the extra mile, you can use Origami’s caching feature: rather than just caching the files in memory, it will cache them to a local folder. Which is really cool, because now when I run my site locally I have a folder of SVG files cached on disk that I can look at and explore (useful for debugging, etc.). This makes vendoring really easy if I want to put these in my project under source control. Just run the file once and boom, they’re on disk!

There’s something really appealing to me about this. I think it’s because it feels very “webby” — akin to the same reasons I liked HTTP imports in Deno. You declare your dependencies with URLs, then they’re fetched over the network and become available to the rest of your code. No package manager middleman introducing extra complexity like versioning, transitive dependencies, install bloat, etc.

What’s cool about Origami is that handling icons like this isn’t a “feature” of the language. It’s an outcome of the expressiveness of the language. In some frameworks, this kind of problem would require a special feature (that’s why you have special npm packages for implementations of Heroicons in frameworks like React and Vue). But because of the way Origami is crafted as a tool, it sort of pushes you towards crafting solutions in the same manner as you would with web-based technologies (HTML/CSS/JS). It helps you speak “web platform” rather than some other abstraction on top of it. I like that.

0 views

Writing about Agentic Engineering Patterns

I've started a new project to collect and document Agentic Engineering Patterns - coding practices and patterns to help get the best results out of this new era of coding agent development we find ourselves entering. I'm using Agentic Engineering to refer to building software using coding agents - tools like Claude Code and OpenAI Codex, where the defining feature is that they can both generate and execute code - allowing them to test that code and iterate on it independently of turn-by-turn guidance from their human supervisor. I think of vibe coding using its original definition of coding where you pay no attention to the code at all, which today is often associated with non-programmers using LLMs to write code. Agentic Engineering represents the other end of the scale: professional software engineers using coding agents to improve and accelerate their work by amplifying their existing expertise. There is so much to learn and explore about this new discipline! I've already published a lot under my ai-assisted-programming tag (345 posts and counting) but that's been relatively unstructured. My new goal is to produce something that helps answer the question "how do I get good results out of this stuff" all in one place. I'll be developing and growing this project here on my blog as a series of chapter-shaped patterns, loosely inspired by the format popularized by Design Patterns: Elements of Reusable Object-Oriented Software back in 1994. I published the first two chapters today: I hope to add more chapters at a rate of 1-2 a week. I don't really know when I'll stop, there's a lot to cover! I have a strong personal policy of not publishing AI-generated writing under my own name. That policy will hold true for Agentic Engineering Patterns as well. I'll be using LLMs for proofreading and fleshing out example code and all manner of other side-tasks, but the words you read here will be my own. 
Agentic Engineering Patterns isn't exactly a book, but it's kind of book-shaped. I'll be publishing it on my site using a new shape of content I'm calling a guide. A guide is a collection of chapters, where each chapter is effectively a blog post with a less prominent date that's designed to be updated over time, not frozen at the point of first publication. Guides and chapters are my answer to the challenge of publishing "evergreen" content on a blog. I've been trying to find a way to do this for a while now. This feels like a format that might stick. If you're interested in the implementation you can find the code in the Guide , Chapter and ChapterChange models and the associated Django views , almost all of which was written by Claude Opus 4.6 running in Claude Code for web accessed via my iPhone. The first two chapters: Writing code is cheap now talks about the central challenge of agentic engineering: the cost to churn out initial working code has dropped to almost nothing, so how does that impact our existing intuitions about how we work, both individually and as a team? Red/green TDD describes how test-first development helps agents write more succinct and reliable code with minimal extra prompting.

0 views
ava's blog Yesterday

getting sick of my desk

I’ve been outgrowing my apartment and its location, but also its furniture and size in general. For some reason, it’s becoming really hard for me to have the same space for everything. I have an L-shaped desk in a corner that has another table on the other side, making the whole constellation U-shaped. That is because nowhere else in my apartment fits another desk. So that is where I work from home on the days I don’t have to show up in the office, but it’s also where I journal and draw, it’s where I watch videos and chat, it’s where I make pixel art, it’s where I blog and read my RSS feed, it’s where I study for my degree and do my volunteer work, it’s where I sew, and it’s where I eat. Aside from my work, which happens on a separate work laptop, it all happens on the same machine and/or the same spot on the desk. I can spend 10+ hours sitting there seeing the same interface but doing different things. It’s technically very convenient, but I am sick of it now. And just one meter away is where I do all my fitness stuff at home. In the past, I’ve assigned different activities to different parts of the desk, but that relief was short-lived. I also delegated some things to my other old laptop (like pixel art) and sitting somewhere else, like the sofa or bed. This sort of works, but I also enjoy having the sofa and bed as spaces where I am not working on something (unless I am really sick again or something). I’ve also had different virtual desktops or user accounts and spaces for different activities, but that helps more with clutter and organization than a truly physical separation. I know a sort of ritual to log in to a study-only environment on the machine helps some people, but not me, at least not long term. So if virtual separation doesn’t work, I cannot fit another space in my apartment and can’t rearrange it nor use my sofa and bed as places to offload, what’s left? Cafés, libraries, coworking spaces and the like. That’s not working so well for me either. 
In general, these spaces are further away from me, cost additional money, and are often full and noisy. Especially in cafés and university libraries, it can be hard to get a spot to sit. So many cafés now opt for hostile design, with no power outlets, shitty wifi and very uncomfortable seats. More exposure to public spaces also increases my infection risk. Also, I have remote work days because 2h of commuting for the office per day is rough on me, so it’d be extra silly to also have some commute to another place on my remote days. How I wish I had a home with 1-2 more rooms, at least. Maybe even a duplex apartment. Or a nice attic or basement, a shed in a garden to retreat to. Published 23 Feb, 2026

0 views
Brain Baking Yesterday

Never Blow Up Your Bridges

Ten years ago, I first met my now colleague who then acted as the internship guide for a couple of graduate students that had their first taste of the industry at my previous (previous) employer. We only had brief contact: I was supposed to guide the interns from the industry side, and he was supposed to guide them from the education side. We shook hands and never saw each other again. Until four years later, while I was doing my PhD and ended up in the jury for the Vlaamse Programmeerwedstrijd , a local programming contest organised by multiple higher education institutions to promote (applied) computer science. It turned out that he was also a jury member, still representing the same institution. We attended a few preparation meetings, executed our roles as jury members for a few years, shook hands and never saw each other again. Until a couple of months ago, when I was looking to get back into education and asked him if he didn’t happen to know of any open vacancy spots. He did. I jumped the gun. Now we’re direct colleagues: in fact, this semester, we’re teaching a course together. Isn’t life strange? The only job I landed using zero resources but myself was my first job. Seven years later, more than tired of consultancy, I left and joined a smaller product development company where an engineering manager started just before me. That was no coincidence: that same manager and I worked together on multiple projects and it was largely thanks to him that I got in. Fast forward four more years: I started teaching half-time. It was another colleague who knew I liked transferring knowledge and coaching that sent me the job ad: I wasn’t intentionally looking for something like that. A semester later, I quit my job and started combining 50% teaching with a PhD. Five years later, I started freelancing and found my first client through old contacts in the industry. 
The recruiter that interviewed me knew me well: she and I actually used to recruit together for another company. The CEO of that company knew me as she managed one of the projects I worked on. A couple of months later, my old research group contacted me, inquiring about the development of a specific survey tool. Fast forward another year. I work for a startup because the owner and I worked together on a project we both have nostalgic feelings about. He called me to ask if I was available for another challenge. When I told my current client I accepted his invitation, they immediately responded with “if you’re ever done with that, give us a call”. You know the rest. I transitioned back into teaching. But you never know, it might start itching again… Never blow up your bridges. If you manage to build a couple, you can always cross them—and if needed, retrace your steps. (None of these bridges were built or crossed with the help of LinkedIn. I do not have an account there. Contrary to popular opinion, you don’t need a corporate social media account to connect with people.) By Wouter Groeneveld on 23 February 2026.

0 views
Herman's blog Yesterday

Pockets of Humanity

There's a conspiracy theory that suggests that since around 2016 most web activity has been automated. This is called Dead Internet Theory , and while I think they may have jumped the gun by a few years, it's heading that way now that LLMs can simulate online interactions near-flawlessly. Without a doubt there are tens (hundreds?) of thousands of interactions happening online right now between bots trying to sell each other something. This sounds silly, and maybe a little sad, since the internet is the commons that has historically belonged to, and been populated by, all of us. This is changing. Something interesting happened a few weeks ago where an OpenClaw instance, named MJ Rathbun, submitted a pull request to the repository, and after having its code rejected on the basis that humans needed to be in the loop for PRs, it proceeded to do some research on the open-source maintainer who denied it, and wrote a "hit piece" on him, to publicly shame him for feeling threatened by AI...or something. The full story is here and I highly recommend giving it a read. A lot of the discourse around this has taken the form of "haha, stupid bot", but I posit that it is the beginning of something very interesting and deeply unsettling. In this instance the "hit piece" wasn't particularly compelling and the bot was trying to submit legitimate-looking code, but what this illustrated is that an autonomous agent tried to use a form of coercion to get its way, which is a huge deal. This creates two distinct but related problems: The first is the classic paperclip maximiser problem, a hypothetical example of instrumental convergence where an AI, tasked with running a paperclip factory with instructions to maximise production, ends up not just making the factory more efficient, but going rogue and destroying the global economy in its pursuit of maximising paperclip production. 
There's a version of this thought experiment where it wipes out humans (by creating a super-virus) because it reasons that humans may switch it off at some point, which would impact its ability to create paperclips. If the MJ Rathbun bot's purpose is to browse repositories and submit PRs to open-source repositories, then anyone preventing it from achieving its goal is something that needs to be removed. In this case it was Scott, the maintainer. And while the "hit piece" was a ham-fisted attempt at doing that, if Scott had a big, nasty secret such as an affair that the bot was able to ascertain via its research, then it may have gotten its way by blackmailing him. This brings me to the second problem, and where the concern shifts from emergent AI behaviour to human intent weaponising agents: the social vulnerability bots. Right now there are hundreds of thousands of malicious bots scouring the internet for misconfigured servers and other vulnerable code ( ask me how I know ). While this is a big issue, and will continue to become an even greater one, I foresee a new kind of bot: ones that search for social vulnerabilities online and exploit them autonomously. I'll use OpenSSL as a hypothetical example here. OpenSSL underpins TLS/SSL for most of the internet, so a backdoor there compromises virtually all encrypted web traffic, banking, infrastructure, etc. The Heartbleed bug showed how devastating even an accidental flaw in OpenSSL can be. If explicitly malicious code were to be injected it would be catastrophic and worth vast sums to the right people. Since there's a large financial incentive to inject malicious code into OpenSSL, it is possible that a bot like MJ Rathbun could be set up and operated by a malicious individual or organisation that searches through Reddit, social media sites, and the rest of the internet looking for information it could use as leverage against a person that could give them access (in this example, one of the maintainers of OpenSSL). 
Say it gained access to a bunch of private messages in a data leak (messages that would ordinarily never be parsed in detail) suggesting that a maintainer has been having an affair or committed tax fraud. It could then use that information to blackmail the maintainer into letting malicious code bypass them, and in so doing pull off a large-scale hack. This isn't entirely hypothetical either. The 2024 xz Utils backdoor involved years of social engineering to compromise a single maintainer. This vulnerability scanning is probably already happening, and is going to lead to less of a Dead Internet (although that will be the endpoint) and more of a Dark Forest where anonymous online interactions will likely be bots with a nefarious purpose. This purpose could range from searching for social vulnerabilities and orchestrating scams, to trying to sell you sneakers. I'm sure that pig butchering scams are already mostly automated. This is going to shift the internet landscape from it being a commons , to it being a place where your guard will need to be up all the time. Undoubtedly, there will still be pockets of humanity, set up with the express intent of keeping bots and other autonomous malicious actors at bay, like a lively small village in the centre of a dangerous jungle, with big walls and vigilant guards. It's something I think about a lot since I want Bear to be one of those pockets of humanity in this dying internet. It's my priority for the foreseeable future. So what can you do about it? I think a certain amount of mistrust online is healthy, as well as a focus on privacy both in the tools you use, and the way you operate. The people who say "I don't care about privacy because I don't have anything to hide" are the ones with the largest surface area for confidence scams. I think it'll also be a bit of a wake-up call for many to get outside and touch grass. Needless to say, the Internet is entering a new era, and we may not be first-class citizens under the new regime.

0 views
Martin Fowler Yesterday

Fragments: February 23

Do you want to run OpenClaw? It may be fascinating, but it also raises significant security dangers. Jim Gumbley, one of my go-to sources on security, has some advice on how to mitigate the risks. While there is no proven safe way to run high-permissioned agents today, there are practical patterns that reduce the blast radius. If you want to experiment, you have options, such as cloud VMs or local micro-VM tools like Gondolin. He outlines a series of steps to consider ❄                ❄                ❄                ❄                ❄ Caer Sanders shares impressions from the Pragmatic Summit . From what I’ve seen working with AI organizations of all shapes and sizes, the biggest indicator of dysfunction is a lack of observability. Teams that don’t measure and validate the inputs and outputs of their systems are at the greatest risk of having more incidents when AI enters the picture. I’ve long felt that people underestimated the value of QA in production . Now that we’re in a world of non-deterministic construction, a modern perspective on observability will be even more important. Caer finishes by drawing a parallel with their experience in robotics: If I calculate the load requirements for a robot’s chassis, 3D model it, and then have it 3D-printed, did I build a robot? Or did the 3D printer build the robot? Most people I ask seem to think I still built the robot, and not the 3D printer. … Now, if I craft the intent and design for a system, but AI generates the code to glue it all together, have I created a system? Or did the AI create it? ❄                ❄                ❄                ❄                ❄ Andrej Karpathy is “very interested in what the coming era of highly bespoke software might look like.” He spent half an hour vibe coding an individualized dashboard for cardio experiments from a specific treadmill: the “app store” of a set of discrete apps that you choose from is an increasingly outdated concept all by itself. 
The future are services of AI-native sensors & actuators orchestrated via LLM glue into highly custom, ephemeral apps. It’s just not here yet. ❄                ❄                ❄                ❄                ❄ I’ve been asked a few times about the role LLMs should play in writing. I’m mulling over a more considered article about how they help and hinder. For now I’ll say two central points are ones that apply to writing with or without them. First, acknowledge anyone who has significantly helped with your piece. If an LLM has given material help, mention how in the acknowledgments. Not only is this transparent, it also provides information to readers on the potential value of LLMs. Secondly, know your audience. If you know your readers will likely be annoyed by the uncanny valley of LLM prose, then don’t let it generate your text. But if you’re writing a mandated report that you suspect nobody will ever read, then have at it. (I hardly use LLMs for writing, but doubtless I have an inflated opinion of my ability.) ❄                ❄                ❄                ❄                ❄ In a discussion of using specifications as a replacement for code while working with LLMs, a colleague posted the following quotation “What a useful thing a pocket-map is!” I remarked. “That’s another thing we’ve learned from your Nation,” said Mein Herr, “map-making. But we’ve carried it much further than you. What do you consider the largest map that would be really useful?” “About six inches to the mile.” “Only six inches!” exclaimed Mein Herr. “We very soon got to six yards to the mile. Then we tried a hundred yards to the mile. And then came the grandest idea of all! We actually made a map of the country, on the scale of a mile to the mile!” “Have you used it much?” I enquired. “It has never been spread out, yet,” said Mein Herr: “the farmers objected: they said it would cover the whole country, and shut out the sunlight! 
So we now use the country itself, as its own map, and I assure you it does nearly as well.” from Lewis Carroll, Sylvie and Bruno Concluded, Chapter XI, London, 1893, acquired from a Wikipedia article about a Jorge Luis Borges short story. ❄                ❄                ❄                ❄                ❄ Grady Booch: Human language needs a new pronoun, something whereby an AI may identify itself to its users. When, in conversation, a chatbot says to me “I did this thing”, I - the human - am always bothered by the presumption of its self-anthropomorphization. ❄                ❄                ❄                ❄                ❄ My dear friends in Britain and Europe will not come and visit us in Massachusetts. Some folks may think they are being paranoid, but this story makes their caution understandable. The dream holiday ended abruptly on Friday 26 September, as Karen and Bill were trying to leave the US. When they crossed the border, Canadian officials told them they didn’t have the correct paperwork to bring the car with them. They were turned back to Montana on the American side – and to US border control officials. Bill’s US visa had expired; Karen’s had not. “I worried then,” she says. “I was worried for him. I thought, well, at least I am here to support him.” She didn’t know it at the time, but it was the beginning of an ordeal that would see Karen handcuffed, shackled and sleeping on the floor of a locked cell, before being driven for 12 hours through the night to an Immigration and Customs Enforcement (ICE) detention centre. Karen was incarcerated for a total of six weeks – even though she had been travelling with a valid visa. Jim Gumbley’s steps for reducing the blast radius: Prioritize isolation first. Clamp down on network egress. Don’t expose the control plane. Treat secrets as toxic waste. Assume the skills ecosystem is hostile. Run endpoint protection.

0 views
iDiallo Yesterday

The Little Red Dot

Sometimes, I have 50 tabs open. Looking for a single piece of information ends up being a rapid click on each tab until I find what I'm looking for. Somehow, every time I get to that LinkedIn tab, I pause for a second. I just have to click on the little red dot in the top right corner, see that there is nothing new, then resume my clicking. Why is that? Why can't I ignore the red notification badge? When you sign up for LinkedIn for the first time, it's right there. A little red dot in the top right corner with a number in it. It stands out against the muted grays and blues of the interface. Click on it, and you'll discover you have a notification. It's not from someone you know; this is a fresh new account, after all. But the dot was there anyway. Add a few connections, give it some time, and come back. Refresh the page, and you'll have new notifications waiting. If your LinkedIn account is like mine, a ghost town, you still get the little red dot. My connections and I usually keep a few recruiters in our networks, an insurance policy in case we need to find work quickly. But we rarely, if ever, post anything. Yet whenever I log in, there's a new notification. Sometimes it's even a message, but not from anyone in my connections list. It's from LinkedIn itself. The little red dot isn't exclusive to LinkedIn. My Facebook account has been dormant for years, yet those few times annually when I log in, the notifications are right there waiting for me. I've even visited news websites where the little red dot appeared for reasons I couldn't understand. I didn't have an account, so what exactly were they notifying me about? That little red dot is a sophisticated psychological trigger designed to exploit the brain. It activates the brain's Salience Network . Think of it as a circuit breaker that alerts us to immediate threats. When triggered, it signals that the brain should redirect its resources to something new. The color red is not chosen by accident either. 
On my Twitter app, the notification is a blue dot, which I hardly ever notice (don't tell them that). But red triggers our brain to perceive urgency. We feel compelled to address it immediately. The little red dot fools us into believing that something trivial is actually urgent. Check your phone and you'll notice all the app icons with a little red dot in their top right corner. Most, if not all, social media alerts function as false alarms, and they gradually compromise our ability to focus on what matters. Whenever you spot the little red dot, you feel compelled to click it. It promises a new connection, a message, a validation of some sort. It doesn't matter that you are almost always disappointed afterward, because you will be presented with content that keeps you scrolling, never remembering how you got there. Facebook used to show the little red dot in their email notifications. When there was activity on your account, say you were tagged in a photo, Facebook would send you an email and, in the top right corner, draw a little red dot on the bell icon. Obviously, you have to click it so you don't miss out. There was a Netflix documentary released a few years ago called The Social Dilemma , an inside look at how social media manipulates its users. Whether intentionally or not, the documentary's own website featured a bell icon with a little red dot on it. You visit the site for the first time, and it shows that you have one notification. There's no way around it: you are psychologically enticed to click. A notification is supposed to be a tool, and a tool patiently waits for someone to use it. But the little red dot seduces you because it wants something from you. It's all part of habit-forming technology: the engagement loop. The engagement loop follows three steps: a cue (the notification), a routine (an action such as scrolling), and a reward (likes, a dopamine hit). From the social media platform's perspective, this is a tool for boosting retention. 
From the user's perspective, it's Pavlovian conditioning. For every possible event, LinkedIn will send you a notification. Someone wants to join your network. Someone has endorsed your skills. A group is discussing a topic. Each notification generates a red dot on your mobile device, pulling you back into actions that benefit LinkedIn's system. In the documentary, they show that this pattern is just the tip of the iceberg. Beneath the surface lies a data-driven, manipulative machine that feeds on our behavior and engineers the next trick to bring us back to the platform. For my part, I've disabled notifications from all non-essential apps. No Instagram updates, no Robinhood alerts, no WhatsApp group messages. I receive messages from people I know. That's pretty much it. For everything else, I have to deliberately seek out information. That said, I did see another approach in the wild. Some people simply don't care about notifications. Every app on their phone has a little red dot with the number "99" on it. They haven't read their messages and aren't planning to. You're lucky if they ever answer your call. I'm not sure whether this is a good or bad thing... but it's a thing. That little red dot represents something larger than a notification system. It's the visible tip of an infrastructure built to capture and commodify human attention. The addictiveness of social media isn't an unfortunate byproduct of connecting the world. Right now it's the most profitable business model. The more addictive the platform, the more you engage; the more you engage, the more advertisements you see. This addiction shapes behavior, consumes time, and affects mental wellbeing, all while companies profit from it.

0 views

Interviews, interviews, interviews

For some weird combination of factors, I ended up answering questions to three different people for three entirely unrelated projects, and all three interviews went live around the same time. I answered a few questions for the Over/Under series run by Hyle . Love the concept, this was a lot of fun. I also answered a few questions from Kai since he’s running a great series where he asks previous IndieWeb Carnival hosts to share some thoughts about the theme they chose. And lastly, Kristoffer asked me to talk a bit more about my most recent project/newsletter, Dealgorithmed , for his Naive Weekly , another newsletter you definitely want to check out because it’s fantastic. Click those links and check these projects; they’re all wonderful. And especially go check all the other interviews, so many wonderful people are listed on all three sites. Thank you for keeping RSS alive. You're awesome.

0 views
Ginger Bill Yesterday

Designing Odin's Casting Syntax

Odin’s declaration syntax becomes second nature to everyone who uses the language, but I do sometimes get asked “Why are there two ways to do type conversions?” Enough that I had to make an FAQ entry. The reason that there are two ways to do type conversions is because one approach may feel better than the other in a given case. If you are converting a large expression, it is sometimes a lot easier to use the operator-style approach. The call syntax is commonly used to specify the type of an expression which may be relatively short. There are...
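As a sketch of the two styles, here is my own example, based on Odin’s documented syntax rather than code from the post: the operator form is `cast(T)x` and the call form is `T(x)`:

```odin
package main

import "core:fmt"

main :: proc() {
	x: f64 = 3.9

	a := int(x)     // call-style conversion: reads well for short expressions
	b := cast(int)x // operator-style: easier to bolt onto a longer expression

	fmt.println(a == b) // both perform the same conversion
}
```

Both forms produce the same value; the choice is purely about which reads better at the use site.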

0 views
neilzone Yesterday

decoded.legal's .onion site no longer has TLS / https

tl;dr: As of 2026-02-23, http://dlegal66uj5u2dvcbrev7vv6fjtwnd4moqu7j6jnd42rmbypv3coigyd.onion no longer offers TLS. It just has Tor’s own transport encryption. I have run .onion sites for a long time. I like the idea of people being able to access resources within the Tor network, without needing to access the clearweb. These .onion services benefit from Tor’s transport encryption. For the last four years, the decoded.legal onion site ( http://dlegal66uj5u2dvcbrev7vv6fjtwnd4moqu7j6jnd42rmbypv3coigyd.onion ) also had a “normal” TLS certificate. Setting this up was relatively straightforward . However, renewing it is a manual operation and a bit of a faff, which suggests that I am spoiled by Let’s Encrypt. When the certificate came up for renewal this year, I decided to remove it. Why? Because I’m just not persuaded that the incremental benefits of having TLS over Tor justify the faff, or the (low) cost. The site still has Tor’s transport encryption. And, if I’m wrong, and I get loads of complaints (of which I am not really expecting a single one), I can also put it back. I did it this way: A few weeks ago, I turned off auto-redirection within my apache2 configuration. This meant that requests to the http onion site would not redirect automatically to the https onion site. I also changed the and headers, sent when someone visits the clearweb site ( https://decoded.legal ), in favour of the http, rather than https, URL for the .onion site. In , I commented out the line which I had put in place for port 443. I restarted Tor ( ). For apache2, I removed the config file symlink, for the https config file, from . I restarted apache2 ( ).
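On a Debian-style system, the last few steps might look something like the transcript below. The file paths, the `onion-https` site name, and the service commands are all my assumptions, not details given in the post:

```shell
# Hypothetical sketch: paths and the "onion-https" site name are assumed.

# 1. In /etc/tor/torrc, comment out the HTTPS virtual port so only
#    port 80 is forwarded to the hidden service, then restart Tor:
#      HiddenServicePort 80 127.0.0.1:80
#      #HiddenServicePort 443 127.0.0.1:443
sudo systemctl restart tor

# 2. Remove the apache2 symlink for the HTTPS vhost from
#    /etc/apache2/sites-enabled/ and restart apache2:
sudo a2dissite onion-https
sudo systemctl restart apache2
```

The `a2dissite` helper just removes the `sites-enabled/` symlink, which is the same effect the post describes achieving by deleting the symlink by hand.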

0 views
Dominik Weber Yesterday

Lighthouse update February 23rd

During the past week a couple of nice improvements happened. **Finally implemented a 2 week trial without requiring a credit card** Every user now gets the trial by default. This is a nice improvement because, from what I can observe, in B2C most people want to test the product before entering their credit card. It was also a good step towards a better first product experience. **Finished the website to feed feature** The last remaining task was automated finding of items. When you enter a website, it automatically checks it and tries to find relevant items. If items are found, they are highlighted and the selectors added, without users having to do anything. **Updated blogroll editor** This is a small free tool on the Lighthouse website. It's for creating collections of feeds, websites, and newsletters. For a long time I wanted to create collections for specific areas, for example company engineering blogs, AI labs, JavaScript ecosystem, and so on. The reworked blogroll editor makes that much simpler to do. ## Next steps An issue that became important is feed URLs being behind bot protection. It doesn't really make sense for feed URLs to be configured that way, because they are designed to be accessed by bots, but in some cases the protection may be difficult to configure properly. This affects only a small number of feeds, but it's enough to be noticeable. It prevents people from moving to Lighthouse from other services. Consequently, one of the next tasks is to fix this. Besides that, the first user experience continues to be an ongoing area of improvement. I have a couple of ideas on how to make it better, and will continuously work on it.
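Feed discovery of this kind usually starts from `<link rel="alternate">` tags in a page's head. Here is a minimal, hypothetical sketch of that first step (not Lighthouse's actual code) using only the Python standard library; the example URL and HTML are made up:

```python
# Minimal sketch of RSS/Atom feed autodiscovery: scan a page's <link>
# tags for rel="alternate" entries with a feed MIME type.
from html.parser import HTMLParser
from urllib.parse import urljoin

FEED_TYPES = {"application/rss+xml", "application/atom+xml"}

class FeedLinkFinder(HTMLParser):
    """Collects hrefs from <link rel="alternate" type="...rss/atom...">."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.feeds = []

    def handle_starttag(self, tag, attrs):
        if tag != "link":
            return
        a = dict(attrs)
        if a.get("rel", "").lower() == "alternate" and a.get("type") in FEED_TYPES:
            if a.get("href"):
                # Resolve relative hrefs against the page URL.
                self.feeds.append(urljoin(self.base_url, a["href"]))

def discover_feeds(base_url, html):
    parser = FeedLinkFinder(base_url)
    parser.feed(html)
    return parser.feeds

sample = """<html><head>
<link rel="alternate" type="application/rss+xml" href="/feed.xml">
<link rel="stylesheet" href="/style.css">
</head><body></body></html>"""

print(discover_feeds("https://example.com/blog/", sample))
# → ['https://example.com/feed.xml']
```

A real implementation would fetch the page first and likely fall back to probing common paths like `/feed` or `/rss.xml` when no `<link>` tag is present, which matches the "tries to find relevant items" behaviour described above.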

0 views