Latest Posts (20 found)

Bunny.net shared storage zones

Whilst moving projects off Cloudflare and migrating to Bunny I discovered a neat ‘Bunny hack’ to make life easier. I like to explicitly say “no” to AI bots using AI robots.txt † . Updating this file across multiple websites is tedious. With Bunny it’s possible to use a single file. † I’m no fool, I know the AI industry has a consent problem but the principle matters. My solution was to create a new storage zone as a single source of truth. In the screenshot above I’ve uploaded my common file to its own storage zone. This zone doesn’t need any “pull zone” (CDN) connected. The file doesn’t need to be publicly accessible by itself here. With that ready, I next visited each pull zone that will share the file. Under “CDN > Edge rules” in the menu I added the following rule. I chose the action: “Override Origin: Storage Zone” and selected the new shared zone. Under conditions I added a “Request URL” match for . Using a wildcard makes it easier to copy & paste. I tried dynamic variables but they don’t work for conditions. I added an identical edge rule for all websites I want to use the . Finally, I made sure the CDN cache was purged for those URLs. This technique is useful for other shared assets like a favicon, for example. Neat, right? One downside to this approach is vendor lock-in. If or when Bunny hops the shark and I migrate elsewhere, I must find a new solution. My use case for is not critical to my websites’ functioning so it’s fine if I forget. Thanks for reading! Follow me on Mastodon and Bluesky . Subscribe to my Blog and Notes or Combined feeds.
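For context, an AI-blocking robots.txt typically looks something like the following. The post doesn't show its own file, so the user-agent list here is an illustrative assumption (GPTBot, ClaudeBot, and CCBot are real, documented AI crawlers):

```text
# Hypothetical excerpt of a shared robots.txt blocking AI crawlers
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: CCBot
Disallow: /
```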

0 views

Anthropic’s Skyrocketing Revenue, A Contract Compromise?, Nvidia Earnings

Anthropic's enterprise business is reaching escape velocity, which increases the importance of finding a compromise with the government. Then, agents dramatically increase demand for Nvidia chips, even if they threaten software.

0 views

Granting Explicit ACL Access to a File on Linux

Say there is a file, `openui/open-webui/webui.db`, and you want to have write access to it without using `sudo`. The most reliable way is to not use various `chown` and `chmod` commands, but instead use `setfacl`, which is available on Debian via `apt install acl`. To first check the permissions, run `namei`:

```text
$ namei -mo openui/open-webui/webui.db
f: openui/open-webui/webui.db
 drwxrwxr-x rik rik openui
 drwxrwxr-x 777 rik open-webui
                    webui.db - Permission denied
```

It looks like permissions to enter the `openui/open-webui` dir are missing. This can be fixed by...

0 views

Favourites of February 2026

A sudden burst of Japanese cherry flowers sparkling in the sun brings much-needed lightheartedness into our late February lives. Before we know it, the garden will be littered with these little pink petals, and the very short blossom season will be behind us. Our cherry tree has always tended to be early and eager, and then to run out of steam. It’s weird to have temperatures reach almost twenty degrees Celsius while a few weeks ago it was still freezing. No wonder the tree is confused. A deep blue sky overlooking the cherry blossom in our garden. In case you were wondering: no, this weather is not normal: it’s yet another noticeable temperature spike. Our local (retired) weatherman Frank explains the spikes and provides evidence of upward rather than downward temperature peaks (in Dutch). At this point, I’m just grateful for the much-needed sunshine. Previous month: January 2026. I’m giving up on Ruffy. It’s just unplayable on the Switch, which is a damn shame as the N64 throwback collect-a-thon 3D platformer with rough edges looks like the perfect fit for the Switch—and it should be. It’s far from a demanding game, so the only conclusion I can draw is that it was poorly optimized for my platform of choice. And I bought the Limited Run Games physical version… Instead, I’ve turned to Gobliins 6, a quirky French adventure game made by just one guy. It has equally frustrating moments and rough edges, but I can more easily forgive it for its faults: it’s Gobliins! The fact that after 34 years (!!) there’s an official sequel to Gobliins 2: The Prince Buffoon is just crazy. I have fond memories of that game, as I used to play it together with my dad on his brand-new 486. I didn’t understand English, nor was I able to solve most time-based puzzles, but the Gobliins exposure got permanently burned into my brain—so much so that its pixel art became the basis for my retro blog.
Even though it’s advertised as a Windows-only game, ScummVM has got you covered: In the Fox Bar just after Fingus reunites with Winkle. If Gob6 sells well, Pierre might go ahead and make Gob7 a direct sequel to Goblins Quest 3. Fingus—err, fingers crossed for Blount’s return! Related topics: / metapost / By Wouter Groeneveld on 4 March 2026. Reply via email. Let’s start with more Gobliins stuff: Michael Klamerus summarized the history of the games to bring you up to speed. Mark self-hosted a book library tool called Booklore that links to your Kobo account. Michał Sapka nuances the “I hate genAI” screams of late. Elmine Wijnia writes in De Stadsbron (in Dutch) about OpenStreetMap and wonders whether we can finally get rid of Google Maps. Space Panda continues fighting against bots on their site. It’s fun to see the bot honey pots working, but aren’t we now wasting even more resources doing nothing? Arjan van der Gaag shares how he uses snippets in Emacs with Yasnippet. I think I’m going to migrate to Tempel.el instead, but that’s for another story. There’s an interesting thread on ResetERA about old games that have yet to be replicated. Someone mentioned Magic the Gathering: Shandalar! Jeff Kaufman shared a photo of two chairs placed on a snowy parking space. Apparently, that’s customary to “reserve” your spot. I haven’t seen such a ridiculously selfish act in a while. Is this a typical USA thing? Wolfgang Ziegler continues his Game Boy modding spree, this time with an IPS screen mod. The result looks stunning! Hamilton Greene shares his adventure with programming languages and talks about the “missing language”. I don’t agree with his stance, but it’s interesting nonetheless. Scott Nesbitt writes on an old Singer desk! Greg Newman organized the Emacs writing carnival challenge and shares links to others’ writing experiences with their favourite editor (25 entries). Greg also designed the Org-mode unicorn logo!
Speaking of which, James Dyer shows his streamlined Eshell configuration that inspired me to hack together my own. To be continued in a future blog post, whether you’ll like it or not. Markus Dosch shares his journey from Bash to Zsh and now Fish. I’m slowly but surely getting fed up with Zsh and all those semi-required plugins, so I might switch to Fish as well. But actually… I switched to Eshell. You didn’t see that coming, did you? Henrique Dias redesigned his website and the result looks very good, congrats! I especially like the fact that the new theme takes advantage of wide screens (note to self). Michael Stapelberg tried out Wayland and concludes that it’s still not ready yet. X11 is not dead yet. I found the Lockfile Explorer documentation on pnpm lockfiles to be very thorough and insightful. Feishin is a modern rewrite of Sonixd, a Subsonic-compatible music desktop client that looks promising. I’ve been a Navidrome user for five years now but am looking for a good client that supports offline playback. It doesn’t (yet). Related: the Symfonium Android app that does do caching. I’m using Substreamer for that and it works well enough. scrcpy is a tiny tool for mirroring an Android screen to the desktop, which I use in classes to project my Android screen. Handy! Another tool for presenting: keycastr helped me teach students how to use shortcuts. I might have already shared this, but you should replace pip with uv: it’s +10x faster and can also manage your project’s . Oh, and in case you haven’t already, replace npm with bun. Discord’s age verification facial recognition tool got bypassed pretty fast—rightfully so.

0 views

Humans and Agents in Software Engineering Loops

There's been much talk recently about how AI agents affect the workflow loops of software development. Kief Morris believes the answer is to focus on the goal of turning ideas into outcomes. The right place for us humans is to build and manage the working loop rather than either leaving the agents to it or micromanaging what they produce.

0 views

I gave the MacBook Pro a try

I got the opportunity to try out a MacBook Pro with the M3 Pro chip and 18GB RAM (not Pro). I’ve been rocking a ThinkPad P14s gen 4 and am reasonably happy with it, but after realizing that I am the only person in the whole company not on a MacBook, and one was suddenly available for use, I set one up for work duties to see if I could ever like using one. It’s nice. I’ve used various flavours of Linux on the desktop since 2014, starting with Linux Mint. 2015 was the year I deleted the Windows dual boot partition. Over those years, the experience on Linux and especially Fedora Linux has improved a lot, and for some reason it’s controversial to say that I love GNOME and its opinionated approach to building a cohesive and yet functional desktop environment. When transitioning over to macOS, I went in with an open mind. I won’t heavily customise it, won’t install Asahi Linux on it, or make it do things it wasn’t meant to do. This is an appliance, I will use it to get work done and that’s it. With this introduction out of the way, here are some observations I’ve made about this experience so far. The first stumbling block was an expected one: all the shortcuts are wrong, and the Ctrl-Super-Alt friendship has been replaced with these new weird ones. With a lot of trial and error, it is not that difficult to pick it up, but I still stumble around with copy-paste, moving windows around, or operating my cursor effectively. It certainly doesn’t help that in terminal windows, Ctrl is still king, while elsewhere it’s Cmd. Mouse gestures are nice, and not that different from the GNOME experience. macOS has window snapping by default, but only using the mouse. I had to install a specific program to enable window moving and snapping with keyboard shortcuts (Rectangle), which is something I use heavily in GNOME. An odd omission by Apple.
For my Logitech keyboard and mouse to do the right thing, I did have to install the Logitech Logi+ app, which is not ideal, but is needed to have an acceptable experience using my MX series peripherals, especially the keyboard, where it needs to remap some keys for them to work properly in macOS. I still haven’t quite figured out why the Page up/down and Home/End keys are not working as they should. Also, give my Delete key back! Opening the laptop with Touch ID is a nice bonus, especially on public transport where I don’t really want my neighbour to see me typing in my password. The macOS concept of showing open applications that don’t have windows on them as open in the dock is a strange choice that has caused me to look for those phantom windows and is generally misleading. Not being able to switch between open windows instead of applications echoes the same design choice that GNOME made, and I’m not a big fan of it here either. But at least in GNOME you can remap the Alt+Tab shortcut to fix it. The default macOS application installation process of downloading a .dmg file, then opening it, then dragging an icon in a window to the Applications folder feels super odd. Luckily I was aware of the tool and have been using that heavily to get everything that I need installed, in a Linux-y way. I appreciate the concern that macOS has about actions that I take on my laptop, but my god, the permission popups get silly sometimes. When a CLI app is doing things and accessing data on my drive, I can randomly be presented with a permissions pop-up, stealing my focus from writing a Slack message. Video calls work really well, I can do my full stack engineer things, and overall things work, even if it is sometimes slightly different. The default Terminal app is not good, and I’m still not quite sure why it does not close the window when I exit it; that “Process exited” message is not helpful.
No contest, the hardware on a MacBook Pro feels nice and premium compared to the ThinkPad P14s gen 4. The latter now feels like a flexible plastic piece of crap. The screen is beautiful and super smooth due to the higher refresh rate. The MacBook does not flex when I hold it. Battery life is phenomenal; the need to have a charger is legitimately not a concern in 90% of the situations I use a MacBook in. The keyboard is alright, good to type on, but the layout is not my preference. The M3 Pro chip is fast as heck. 18 GB of memory is a solid downgrade from 32 GB, but so far it has not prevented me from doing my work. I have never heard the fan kick on, even when testing a lot of Go code in dozens of containers, pegging the CPU at 100%, using a lot of memory, and causing a lot of disk writes. I thought that I once heard it, but no, that fan noise was coming from a nearby ThinkPad. The aluminium case does have one downside: the MacBook Pro is incredibly slippery. I once put it in my backpack and it made a loud thunk as it hit the table that the backpack was on. Whoops. macOS does not provide scaling options on my 3440x1440 ultra-wide monitor. Even GNOME has that, with fractional scaling! The two alternatives are to use a lower resolution (disgusting), or increase the text size across the OS so that I don’t suffer with my poor eyesight. Never needed those. I like that. Having used an iPhone for a while, I sort of expected this to be a requirement, but no, you can completely ignore those aspects of macOS and work with a local account. Even Windows 11 doesn’t want to allow that! Switching the keyboard language using the keyboard shortcut is broken about 50% of the time, which feels odd given that it’s something that just works on GNOME.
This is quite critical for me since I shift between the Estonian and US keyboard a lot when working, as the US layout has the brackets and all the other important characters in the right places for programming and writing, while the Estonian keyboard has all the Õ Ä Ö Ü-s that I need. I upgraded to macOS 26.3 Tahoe on the 23rd of February. SSH worked in the morning. Upgrade during lunch, come back, bam, broken. The SSH logins would halt at the part where public key authentication was taking place; the process just hung. I confirmed that by adding into the SSH command. With some vibe-debugging with Claude Code, I found that something with the SSH agent service had broken after the upgrade. One reasonably simple fix was to put this in your : Then it works in the shell, but all other git integrations, such as all the repos I have cloned and am using via IntelliJ IDEA, were still broken. Claude suggested that I build my own SSH agent and install that until this issue is fixed. That’s when I decided to stop. macOS was supposed to just work, and not get in my way when doing work. This level of workaround is something I expect from working with Linux, and even there it usually doesn’t get that odd: I can roll back a version of a package easily, or fix it by pulling in the latest development release of that particular package. I went into this experiment with an open mind, no expectations, and I have to admit that a MacBook Pro with the M3 Pro chip is not bad at all, as long as it works. Unfortunately it doesn’t work for me right now. I might have gotten very unlucky with this issue and the timing, but first impressions matter a lot. The hardware can be nice and feel nice, but if the software lets me down and stops me from doing what’s more important, then it makes the hardware useless. It turns out that I like Linux and GNOME a lot.
Things are simple, improvements are constant and iterative in nature, so you don’t usually notice them (with Wayland and Pipewire being rare exceptions), and you have more control when you need to fix something. Making those one-off solutions like a DIY coding agent sandbox, or a backup script, or setting up snapshots on my workstation is also super easy. If Asahi Linux had 100% compatibility on all modern M-series MacBooks, then that would be a killer combination. 1 Until then, back to the ol’ reliable ThinkPad P14s gen 4 I go. I can live with fan noise, Bluetooth oddities and Wi-Fi roaming issues, but not with something as basic as SSH not working one day. 2 any kind billionaires want to bankroll the project? Oh wait, that’s an oxymoron.  ↩︎ the fan noise can actually be fixed quite easily by setting a lower temperature target on the Ryzen APU and tuning the fan to only run at the lowest speed after a certain temperature threshold.  ↩︎

0 views
Jim Nielsen Yesterday

w0rdz aRe 1mpoRtAnt

The other day I was looking at the team billing section of an AI product. They had a widget labeled “Usage leaderboard”. For whatever reason, that phrase at that moment made me pause and reflect — and led me here to this post. It’s an interesting label. You could argue the widget doesn’t even need a label. You can look at it and understand at a glance: “This is a list of people sorted by their AI usage, greatest to least.” But it has that label. It could have a different label. Imagine, for a moment, different names for this widget — each one conjuring different meanings for its purpose and use: Usage leaderboard implies more usage is better. Who doesn’t want to be at or near the top of a leaderboard at work? If you’re not on the leaderboard, what does that mean for your standing in the company? You better get to work! Calling it a leaderboard imbues the idea of usage with meaning — more is better! All of that accomplished solely via a name. Usage dashboard seems more neutral. It’s not implying that usage is good or bad. It just is , and this is where you can track it. Usage wall of shame sounds terrible! Who wants to be on the wall of shame? That would incentivize people to not have lots of usage. Again, all through the name of the thing! It’s worth noting that individuals and companies are incentivized to choose words designed to shape our thinking and behavior in their interest. The company who makes the widget from my example is incentivized to call this a “Usage leaderboard” because more usage by us means more $$$ for them. I’m not saying that is why they chose that name. There may not be any malicious or greedy intent behind the naming. Jim’s law is a variation on Hanlon’s razor: Don’t attribute to intent that which can be explained by thoughtlessness. I do find it fascinating how little thought we often give to the words we use when they can have such a profound impact on shaping our own psychology, perception, and behavior.
I mean, how many “word experts” are on your internal teams? Personally, I know I could do better at choosing my words more thoughtfully. Reply via: Email · Mastodon · Bluesky

0 views
Rik Huijzer Yesterday

Ani Ma'amin

I came across a video of a Purim celebration in Tel Aviv on Mar 14 2025. The party looks like any generic non-religious party you would expect. To my surprise, however, the crowd was singing something about the _mashiach_ (messiah) around 0:32. After a bit of searching, it turns out the crowd is most likely singing the _Ani Ma'amin_ (1915) song by Simeon Singer. The lyrics that the crowd sings between 0:37 and 0:49 are _Ani ma'amin \ b'emunah sh'leimah \ b'viat hamashiach, \ Ani ma'amin. \ mashiach, mashiach, mashiach_ where the first three lines mean: _I believe with perfect faith in the coming...

0 views

Scalar Interpolation: A Better Balance between Vector and Scalar Execution for SuperScalar Architectures

Scalar Interpolation: A Better Balance between Vector and Scalar Execution for SuperScalar Architectures
Reza Ghanbari, Henry Kao, João P. L. De Carvalho, Ehsan Amiri, and J. Nelson Amaral
CGO'25

This paper serves as a warning: don’t go overboard with vector instructions. There is a non-trivial amount of performance to be had by balancing compute between scalar and vector instructions. Even if you fear that automatic vectorization is fragile, this paper has some interesting lessons. Listing 1 contains a vectorizable loop and listing 2 shows a vectorized implementation: Source: https://dl.acm.org/doi/10.1145/3696443.3708950 Source: https://dl.acm.org/doi/10.1145/3696443.3708950 After achieving this result, one may be tempted to pat oneself on the back and call it a day. If you were a workaholic, you might profile the optimized code. If you did, you would see something like the data in table 1: Source: https://dl.acm.org/doi/10.1145/3696443.3708950 And you could conclude that this algorithm is compute-bound. But what do we really mean by “compute-bound”? A processor contains many execution ports, each with a unique set of capabilities. In the running example, the execution ports capable of vector multiplication and addition are fully booked, but the other ports are sitting mostly idle! Listing 3 shows a modified loop which tries to balance the load between the vector and scalar execution ports. Each loop iteration processes 9 elements (8 via vector instructions, and 1 via scalar instructions). This assumes that the processor supports fast unaligned vector loads and stores. Source: https://dl.acm.org/doi/10.1145/3696443.3708950 Section 3 has details on how to change LLVM to get it to do this transformation. Fig. 3 shows benchmark results. By my calculations, the geometric mean of the speedups is 8%. Source: https://dl.acm.org/doi/10.1145/3696443.3708950

Dangling Pointers

This paper builds on top of automatic vectorization.
In other words, the input source code is scalar and the compiler vectorizes loops while balancing the workload. An alternative would be to have the source code in a vectorized form and then let the compiler “devectorize” where it makes sense.
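The 9-elements-per-iteration structure can be sketched in plain C (my own illustration of the listing-3 idea, not the paper's code; a production version would use vector intrinsics or the LLVM changes from section 3). The fixed 8-wide inner chunk is a prime target for the auto-vectorizer, while the 9th element is computed with scalar instructions on the otherwise-idle scalar ports:

```c
#include <stddef.h>

/* Illustrative scalar-interpolated loop: each outer iteration handles
 * 9 elements -- an 8-wide chunk the compiler can vectorize, plus one
 * element left to the scalar execution ports. */
void mul_add(float *a, const float *b, const float *c,
             const float *d, size_t n) {
    size_t i = 0;
    for (; i + 9 <= n; i += 9) {
        /* 8-wide chunk: fixed trip count, auto-vectorizable
         * (vector multiply/add ports). */
        for (size_t j = 0; j < 8; j++)
            a[i + j] = b[i + j] * c[i + j] + d[i + j];
        /* 9th element: plain scalar multiply-add, overlapping with the
         * vector work instead of competing for the same ports. */
        a[i + 8] = b[i + 8] * c[i + 8] + d[i + 8];
    }
    /* Remainder loop for the last n mod 9 elements. */
    for (; i < n; i++)
        a[i] = b[i] * c[i] + d[i];
}
```

Whether this wins depends on the target: as the paper notes, it assumes cheap unaligned vector loads and stores, since the 8-wide chunks no longer start at multiples of the vector width.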

0 views
Stratechery Yesterday

Technological Scale and Government Control, Paramount Outbids Netflix for Warner Bros.

Why government is not the primary customer for tech companies, and is Netflix relieved that they were outbid for Warner Bros.?

0 views
(think) Yesterday

Learning OCaml: String Interpolation

Most programming languages I’ve used have some form of string interpolation. Ruby has , Python has f-strings, JavaScript has template literals, even Haskell has a few popular interpolation libraries. It’s one of those small conveniences you don’t think about until it’s gone. OCaml doesn’t have built-in string interpolation. And here’s the funny thing – I didn’t even notice when I was first learning the language. Looking back at my first impressions article, I complained about the comment syntax, the semicolons in lists, the lack of list comprehensions, and a dozen other things – but never once about string interpolation. I was happily concatenating strings with and using without giving it a second thought. I only started thinking about this while working on my PPX article and going through the catalog of popular PPX libraries. That’s when I stumbled upon and thought “wait, why doesn’t OCaml have interpolation?” The short answer: OCaml has no way to generically convert a value to a string. There’s no universal method, no typeclass, no runtime reflection that would let the language figure out how to stringify an arbitrary expression inside a string literal. In Ruby, every object responds to . In Python, everything has . These languages can interpolate anything because there’s always a fallback conversion available at runtime. OCaml’s type information is erased at compile time, so the compiler would need to know at compile time which conversion function to call for each interpolated expression – and the language has no mechanism for that. 1 OCaml does have , which is actually quite nice and type-safe: The format string is statically checked by the compiler – if you pass an where expects a string, you get a compile-time error, not a runtime crash. That’s genuinely better than what most dynamically typed languages offer. But it’s not interpolation – the values aren’t inline in the string, and for complex expressions it gets unwieldy fast. 
There’s also plain string concatenation with : This works, but it’s ugly and error-prone for anything beyond trivial cases. ppx_string is a Jane Street PPX that adds string interpolation to OCaml at compile time. The basic usage is straightforward: For non-string types, you specify the module whose function should be used: The suffix tells the PPX to call on , and calls on . Note that , , etc. are conventions from Jane Street’s / libraries – OCaml’s uses , and so on, which won’t work with the syntax. This is another reason really only makes sense within the Jane Street ecosystem. Any module that exposes a function works here – including your own: You can also use arbitrary expressions inside the interpolation braces: Though at that point you might be better off with a binding or for readability. A few practical things worth knowing: Honestly? Probably not as much as you think. I’ve been writing OCaml for a while now without it, and it rarely bothers me. Here’s why: That said, when you do need to build a lot of human-readable strings – error messages, log output, CLI formatting – interpolation is genuinely nicer than . If you’re in the Jane Street ecosystem, there’s no reason not to use . The lack of string interpolation in OCaml is one of those things that sounds worse than it actually is. In practice, and cover the vast majority of use cases, and the code you write with them is arguably clearer about types than magical interpolation would be. It’s also a nice example of OCaml’s general philosophy: keep the language core small, provide solid primitives ( , ), and let the PPX ecosystem fill in the syntactic sugar for those who want it. The same pattern plays out with for printing, for monadic syntax, and many other conveniences. Will OCaml ever get built-in string interpolation? Maybe. There have been discussions on the forums over the years, and the language did absorb binding operators ( , ) from the PPX world. 
But I wouldn’t hold my breath – and honestly, I’m not sure I’d even notice if it landed. That’s all I have for you today. Keep hacking! This is the same fundamental problem that makes printing data structures harder than in dynamically typed languages.  ↩︎ You need the stanza in your dune file: String values interpolate directly, everything else needs a conversion suffix. Unlike Ruby where is called implicitly, requires you to be explicit about non-string types. This is annoying at first, but it’s consistent with OCaml’s philosophy of being explicit about types. It’s a Jane Street library. If you’re already in the Jane Street ecosystem ( , , etc.), adding is trivial. If you’re not, pulling in a Jane Street dependency just for string interpolation might feel heavy. In that case, is honestly fine. It doesn’t work with the module. If you’re building strings for pretty-printing, you’ll still want or . is for building plain strings, not format strings. Nested interpolation doesn’t work – you can’t nest inside another . Keep it simple. is good. It’s type-safe, it’s concise enough for most cases, and it’s available everywhere without extra dependencies. Most string building in OCaml happens through . If you’re writing pretty-printers (which you will be, thanks to ), you’re using , not string concatenation or interpolation. OCaml code tends to be more compute-heavy than string-heavy. Compared to, say, a Rails app or a shell script, the typical OCaml program just doesn’t build that many ad-hoc strings.
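As a concrete reference point for the `Printf.sprintf` style discussed above, here is a minimal example of my own (not from the post) showing the compile-time-checked format string:

```ocaml
(* Format strings are checked at compile time: passing an int where %s
   expects a string is a type error, not a runtime crash. *)
let greet name count =
  Printf.sprintf "Hello %s, you have %d new messages" name count

let () = print_endline (greet "camel" 3)
```

Swapping the arguments (`greet 3 "camel"`) fails to compile, which is the type-safety advantage the post contrasts with runtime interpolation in dynamic languages.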

0 views
neilzone Yesterday

I'm struggling to think of any online services for which I'd be willing to verify my identity or age

Identity verification and age verification are increasingly common policy conversations at the moment, in numerous countries. Often, this is in combination with proposals to ban children from varying concepts of “social media”, which generally means that everyone would have to prove that they were not a child. I have yet to see a well-considered proposal. Worse, the question that they are trying to answer is rarely stated clearly and concisely. And it is unusual to see any consideration of broader sociological issues, let alone an emphasis on this, with a focus instead on perceived “quick win” technosolutionism. But anyway… I was pondering last night for which services I, personally, would actually be willing to verify my age or identity. And… the answer is “none”. At least, none that I can think of at the moment. I appreciate that I compute in an unusual way (when compared with most computer users), and that much of what I do online is about accessing my own services . Some of those - my fedi server, my RSS server, my messaging services - are built around enjoying stuff from other people’s services. Would I be willing to verify my identity or age to read someone’s RSS feed? No. While I enjoy the myriad blogs that I follow, none are crucial to me. I occasionally watch videos (which started on YouTube, but which I download into my Jellyfin instance), and perhaps YouTube will be forced to do age verification. It would be a shame, but again, I’ll just not watch YouTube videos. Not a big loss. Mostly, I buy secondhand DVDs, rip them, and watch them from my Jellyfin instance. I haven’t been asked to verify my age for a DVD purchase (online or offline) in a very long time. Friends have had to attempt to block access to their sites from the UK. While I can still access their sites via Tor, that’s what I tend to do.
I feel sorry for them for the likely significant drop in visitors, likely affecting their enjoyment and in some cases their revenue, and, probably, their incentive to continue to write / post / record stuff. I don’t use any individual forums any more (their demise is a shame; I’d prefer this over centralised discussion sites), nor do I use Reddit. I occasionally look at the comments on HN if one of my posts is surfaced there, but if HN forced identity or age verification, I’d just stop doing it. No big deal for me. Websites with comments sections? I don’t want to see the comments anyway, so I block those, which makes for a very pleasant browsing experience. I don’t comment myself. Code forges / places to contribute to FOSS? Most of my FOSS contributions are non-code, but even so, I use some organisations’ GitLab repos, and occasionally I contribute to projects on other forges. I doubt that my contributions are meaningful in themselves, and it may not be an option to switch infrastructure in any case (that might not make the requirement go away), but since I am not a massive or particularly valuable contributor, I’d feel less bad about simply stepping away. For Wikipedia, I’d probably rebuild my Kiwix instance and use that instead. Yes, articles would not be quite so up to date, but I rarely access Wikipedia for rapidly-changing information. In any case, there are tradeoffs, and personally I would prefer my privacy, the security of my personal data, and, well, just not being part of this kind of censorship. Signal? That would be a pain. I don’t have a workaround for that. I’m happily using XMPP, but as a complement to Signal, not an alternative. Teams/Zoom? I don’t have accounts on those services, but I do join, via my browser, when a client sends me a link. If I was faced with a choice of having to verify my identity/age for these services, then I’d have to consider the position carefully.
Realistically, I am not in a position to say “no, I will not use Teams”, as some long-term clients are not going to change their corporate approach just because Neil doesn’t like something, and I’d rather not lose them as clients. So that could be a pain, if those services were within scope. I’ll still object to these measures - “I’m okay, Jack” would be a selfish stance - but, in practice, yes, I’d be surprised if they impacted me. Self-imposed (or, at least, self-controlled) digital isolationism, perhaps. Or perhaps, in the future, some service will pop up that I will really, really want to use, despite it requiring identity / age verification.

0 views
Rik Huijzer Yesterday

More Accurate Speech Recognition with whisper.cpp

I have been using OpenAI's whisper for a while to convert audio files to text. For example, to generate subtitles for a file, I used

```bash
whisper "$INPUT_FILE" -f srt --model turbo --language en
```

Especially on long files, this would sometimes change its behavior over time, leading to either extremely long or extremely short sentences (running away). Also, `whisper` took a long time to run. Luckily, there is whisper-cpp. On my system with an M2 Pro chip, this can now run speech recognition on a 40 minute audio file in a few minutes instead of half an hour. Also, thanks to a tip from whisp...
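For comparison, a whisper.cpp invocation might look roughly like this. This is a sketch only: the binary path and model file name here are assumptions that depend on how you built whisper.cpp and which model you downloaded.

```shell
# Sketch: binary location and model name vary per install.
# whisper.cpp expects 16 kHz, 16-bit WAV input (convert with ffmpeg first).
INPUT_FILE="talk.wav"

# --output-srt writes subtitles next to the input file
./build/bin/whisper-cli \
  -m models/ggml-large-v3-turbo.bin \
  -f "$INPUT_FILE" \
  -l en \
  --output-srt
```

On Apple silicon, a Metal-enabled build of whisper.cpp is what makes the large speed difference over the Python `whisper` CLI.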

0 views
(think) Yesterday

Learning OCaml: PPX for Mere Mortals

When I started learning OCaml I kept running into code like this: My first reaction was “what the hell is ?” Coming from languages like Ruby and Clojure, where metaprogramming is either built into the runtime (reflection) or baked into the language itself (macros), OCaml’s approach felt alien. There’s no runtime reflection, no macro system in the Lisp sense – just this mysterious syntax that somehow generates code at compile time. That mystery is PPX (PreProcessor eXtensions), and once you understand it, a huge chunk of the OCaml ecosystem suddenly makes a lot more sense. This article is my attempt to demystify PPX for people like me – developers who want to use PPX effectively without necessarily becoming PPX authors themselves. OCaml is a statically typed language with no runtime reflection. That means you can’t do things like “iterate over all fields of a record at runtime” or “automatically serialize any type to JSON.” The type information simply isn’t available at runtime – it’s erased during compilation. One of my biggest frustrations as a newcomer was not being able to just print arbitrary data for debugging – there’s no generic or that works on any type. That frustration was probably my first real interaction with PPX. PPX solves this by generating code at compile time . When the OCaml compiler parses your source code, it builds an Abstract Syntax Tree (AST) – a tree data structure that represents the syntactic structure of your program. PPX rewriters are programs that receive this AST, transform it, and return a modified AST back to the compiler. The compiler then continues as if you had written the generated code by hand. In practical terms, this means that when you write: The PPX rewriter generates something like this behind the scenes: You get a pretty-printer for free, derived from the type definition. No boilerplate, no manual work, and it stays in sync with your type automatically. If you’ve used Rust’s or Haskell’s , the idea is very similar. 
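To make that concrete, here is a hand-written sketch of what a derived pretty-printer amounts to. This only approximates the generated code (real ppx_deriving output differs in its details), and it compiles without any PPX installed:

```ocaml
(* With ppx_deriving you would write something like:
     type person = { name : string; age : int } [@@deriving show]
   Below is a hand-written approximation of what the rewriter generates. *)
type person = { name : string; age : int }

(* A Format-based pretty-printer, derived mechanically from the fields *)
let pp_person fmt p =
  Format.fprintf fmt "{ name = %S; age = %d }" p.name p.age

(* The convenience wrapper that renders to a plain string *)
let show_person p = Format.asprintf "%a" pp_person p

let () =
  (* prints: { name = "Ada"; age = 36 } *)
  print_endline (show_person { name = "Ada"; age = 36 })
```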
The syntax is different, but the motivation is identical – generating repetitive code from type definitions. If you’re coming from Rust, you might wonder why OCaml doesn’t just have a built-in macro system like . It’s a fair question, and the answer says a lot about OCaml’s design philosophy. OCaml has always favored a small, stable language core . The compiler is famously lean and fast, and the language team is conservative about adding complexity to the specification. A full macro system baked into the compiler would be a significant undertaking – it would need to be designed, specified, maintained, and kept compatible across versions, forever. Instead, OCaml took a more minimal approach: the compiler provides just two things – extension points and attributes – as syntactic hooks in the AST. Everything else lives in the ecosystem. The actual PPX rewriters are ordinary OCaml programs that happen to transform ASTs. The ppxlib framework that ties it all together is a regular library, not part of the compiler. This has some real advantages: The trade-offs are real, though. Rust’s proc macros are more tightly integrated – you get better error messages pointing at macro-generated code, better IDE support for macro expansions, and the macro system is a documented, stable part of the language. With PPX, you’re sometimes left staring at cryptic type errors in generated code and reaching for to figure out what went wrong. That said, OCaml’s approach feels very OCaml – pragmatic, minimal, and trusting the ecosystem to build what’s needed on top of a simple foundation. And in practice, it works remarkably well. PPX wasn’t OCaml’s first metaprogramming system. Before PPX, there was Camlp4 (and its fork Camlp5 ) – a powerful but complex preprocessor that maintained its own parser, separate from the compiler’s parser. Camlp4 could extend OCaml’s syntax in arbitrary ways, which sounds great in theory but was a maintenance nightmare in practice. 
Every OCaml release risked breaking Camlp4, and code using Camlp4 extensions often couldn’t be processed by standard tools like editors and documentation generators. OCaml 4.02 (2014) introduced extension points and attributes directly into the language grammar – syntactic hooks specifically designed for preprocessor extensions. This was a much simpler and more maintainable approach: PPX rewriters use the compiler’s own AST, the syntax is valid OCaml (so tools can still parse your code), and the whole thing is conceptually just “AST in, AST out.” Camlp4 was officially retired in 2019. Today, the PPX ecosystem is built on ppxlib , a unified framework that provides a stable API across OCaml versions and handles all the plumbing for PPX authors. Before diving into specific libraries, let’s decode the bracket soup. PPX uses two syntactic mechanisms built into OCaml: Extension nodes are placeholders that a PPX rewriter must replace with generated code (compilation fails if no PPX handles them): Attributes attach metadata to existing code. Unlike extension nodes, the compiler silently ignores attributes that no PPX handles: The one you’ll see most often is on type declarations. The distinction between , , and is about scope – one for the innermost node, two for the enclosing declaration, three for the whole module-level. Tip: Don’t worry about memorizing all of this upfront. In practice, you’ll mostly use and occasionally or – and the specific PPX library’s documentation will tell you exactly which syntax to use. To use a PPX library in your project, you add it to the stanza in your file: That’s it. List all the PPX rewriters you need after , and Dune takes care of the rest (it even combines them into a single binary for performance). For plugins specifically, you use dotted names like . Let’s look at the PPX libraries that cover probably 90% of real-world use cases. ppx_deriving is the community’s general-purpose deriving framework. 
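As an aside, the dune wiring described above might look something like this; the library name and the specific plugins listed are illustrative, not prescriptive:

```lisp
; dune file for a hypothetical library using two PPX rewriters
(library
 (name my_lib)
 (libraries yojson)
 (preprocess
  (pps ppx_deriving.show ppx_deriving_yojson)))
```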
It comes with several built-in plugins: is the one you’ll reach for first – it’s essentially the answer to “how do I just print this thing?” that every OCaml newcomer asks sooner or later. The most commonly used plugins: A neat convention: if your type is named (as is idiomatic in OCaml), the generated functions drop the type name suffix – you get , , , instead of , , etc. You can also customize behavior per field with attributes: And you can derive for anonymous types inline: ppx_deriving_yojson generates JSON serialization and deserialization functions using the Yojson library: You can use or if you only need one direction. This is incredibly useful in practice – writing JSON serializers by hand for complex types is tedious and error-prone. If you’re using Jane Street’s Core library, you’ll encounter S-expression serialization everywhere. ( Tip: Jane Street bundles most of their PPXs into a single ppx_jane package, so you can add just to your instead of listing each one individually.) ppx_sexp_conv generates converters between OCaml types and S-expressions: The attributes here are quite handy – provides a default value during deserialization, and means the field is represented as a present/absent atom rather than . Two more Jane Street PPXs that you’ll see a lot in Core-based codebases. ppx_fields_conv generates first-class accessors and iterators for record fields: ppx_variants_conv does something similar for variant types – generating constructors as functions, fold/iter over all variants, and more. These Jane Street PPXs let you write tests directly in your source files: ppx_expect is particularly nice – it captures printed output and compares it against expected output: If the output doesn’t match, the test fails and you can run to automatically update the expected output in your source file. It’s a very productive workflow for testing functions that produce output. 
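The `type t` naming convention mentioned above can be sketched by hand. The functions below are what a deriver would generate for a `Point.t` record (written out manually here so it runs without any PPX):

```ocaml
(* ppx_deriving on "type t" would generate Point.equal / Point.compare
   (no _t suffix); these are hand-written equivalents. *)
module Point = struct
  type t = { x : int; y : int }

  let equal a b = a.x = b.x && a.y = b.y

  (* lexicographic order on (x, y) *)
  let compare a b =
    match Int.compare a.x b.x with
    | 0 -> Int.compare a.y b.y
    | c -> c
end

let () =
  assert (Point.equal { Point.x = 1; y = 2 } { Point.x = 1; y = 2 });
  assert (Point.compare { Point.x = 0; y = 0 } { Point.x = 1; y = 0 } < 0)
```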
ppx_let provides syntactic sugar for working with monads and other “container” types: How does know which to call? It looks for a module in scope that provides the underlying and functions. In practice, you’ll typically open a module that defines before using : Note: Since OCaml 4.08, the language has built-in binding operators ( , , , ) that cover the basic use cases of without needing a preprocessor. If you’re not using Jane Street’s ecosystem, binding operators are probably the simpler choice. still offers extra features like , , and optimized though. ppx_blob is beautifully simple – it embeds a file’s contents as a string at compile time: No more worrying about file paths at runtime or packaging data files with your binary. The file contents become part of your compiled program. One thing that’s always bugged me about OCaml is the lack of string interpolation. ppx_string fills that gap: The suffix tells the PPX to convert the value using . You can use any module that provides a function. Most OCaml developers will never need to write a PPX, but understanding the basics helps demystify the whole system. Let’s build a very simple one. Say we want an extension that converts a string literal to uppercase at compile time. Here’s the complete implementation using ppxlib : The dune file: The key pieces are: For more complex PPXs (especially derivers), you’ll also want to use Metaquot ( ), which lets you write AST-constructing code using actual OCaml syntax instead of manual AST builder calls: The ppxlib documentation has excellent tutorials if you want to go deeper. One practical tip: when something goes wrong with PPX-generated code and you’re staring at a confusing type error, you can inspect what the PPX actually generated: Seeing the expanded code often makes the error immediately obvious. Most of the introductory PPX content out there was written around 2018-2019, so it’s worth noting how things have evolved since then. 
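First, though, a quick sketch of the built-in binding operators mentioned earlier; no preprocessor is involved, only the `let*` syntax that OCaml 4.08 added to the language itself:

```ocaml
(* Define let* for the Option monad; ppx_let-style sugar, built in. *)
let ( let* ) = Option.bind

(* Short-circuits to None if either list is empty *)
let add_heads xs ys =
  let* x = List.nth_opt xs 0 in
  let* y = List.nth_opt ys 0 in
  Some (x + y)

let () =
  assert (add_heads [ 1; 2 ] [ 10 ] = Some 11);
  assert (add_heads [] [ 10 ] = None)
```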
The big story has been ppxlib’s consolidation of the ecosystem . Back in 2019, some PPX rewriters still used the older (OMP) library, creating fragmentation. By 2021, nearly all PPXs had migrated to ppxlib , effectively ending the split. Today ppxlib is the way to write PPX rewriters – there’s no real alternative to consider. The transition hasn’t always been smooth, though. In 2025, ppxlib 0.36.0 bumped its internal AST to match OCaml 5.2, which changed how functions are represented in the parse tree. This broke many downstream PPXs and temporarily split the opam universe between packages that worked with the new version and those that didn’t. The community worked through it with proactive patching, but it highlighted an ongoing tension in the PPX world: ppxlib shields you from most compiler changes, but major AST overhauls still ripple through the ecosystem. On the API side, ppxlib is gradually deprecating its copy of in favor of , with plans to remove entirely in a future 1.0.0 release. If you’re writing a new PPX today, use exclusively. Meanwhile, OCaml 4.08’s built-in binding operators ( , , etc.) have reduced the need for in projects that don’t use Jane Street’s ecosystem. It’s a nice example of the language absorbing a pattern that PPX pioneered. Perhaps one day we’ll see more of this (e.g. native string interpolation). This article covers a lot of ground, but the PPX topic is pretty deep and complex, so depending on how far you want to go you might want to read more on it. Here are some of the best resources I’ve found on PPX: I was amused to see whitequark’s name pop up while I was doing research for this article – we collaborated quite a bit back in the day on her Ruby parser project, which was instrumental to RuboCop . Seems you can find (former) Rubyists in pretty much every language community. This article turned out to be a beast! 
I’ve wanted to write something on the subject for quite a while now, but I’ve kept postponing it because I was too lazy to do all the necessary research. I’ll feel quite relieved to put it behind me! PPX might look intimidating at first – all those brackets and symbols can feel like line noise. But the core idea is simple: PPX generates boilerplate code from your type definitions at compile time. You annotate your types with what you want ( , , , , etc.), and the PPX rewriter produces the code you’d otherwise have to write by hand. For day-to-day OCaml programming, you really only need to know: The “writing your own PPX” part is there for when you need it, but honestly most OCaml developers get by just fine using the existing ecosystem. That’s all I have for you today. Keep hacking! The ecosystem can evolve independently. ppxlib can ship new features, fix bugs, and improve APIs without waiting for a compiler release. Compare this to Rust, where changes to the proc macro system require the full RFC process and a compiler update. Tooling stays simple. Because and are valid OCaml syntax, every tool – editors, formatters, documentation generators – can parse PPX-annotated code without knowing anything about the specific PPX. The code is always syntactically valid OCaml, even before preprocessing. The compiler stays lean. No macro expander, no hygiene system, no special compilation phases – just a hook that says “here, transform this AST before I type-check it.” – registers an extension with a name, the context where it can appear (expressions, patterns, types, etc.), the expected payload pattern, and an expansion function. – a pattern-matching DSL for destructuring AST nodes. Here matches a string literal and captures its value. – helpers for constructing AST nodes. builds a string literal expression. – registers the rule with ppxlib’s driver. Preprocessors and PPXs – the official OCaml documentation on metaprogramming. 
A solid reference, though it assumes some comfort with the compiler internals. An Introduction to OCaml PPX Ecosystem – Nathan Rebours’ 2019 deep dive for Tarides. This is the most thorough tutorial on writing PPX rewriters I’ve seen. Some API details have changed since 2019 (notably the → shift), but the concepts and approach are still excellent. ppxlib Quick Introduction – ppxlib’s own getting-started guide. The best place to begin if you want to write your own PPX. A Guide to PreProcessor eXtensions – OCamlverse’s reference page with a comprehensive list of available PPX libraries. A Guide to Extension Points in OCaml – Whitequark’s original 2014 guide that introduced many developers to PPX. Historically interesting as a snapshot of the early PPX days. on type declarations to generate useful functions How to add PPX libraries to your dune file with Which PPX libraries exist for common tasks (serialization, testing, pretty-printing)

0 views
Martin Fowler Yesterday

Design-First Collaboration

Rahul Garg continues his series of Patterns for Reducing Friction in AI-Assisted Development . This pattern describes a structured conversation that mirrors whiteboarding with a human pair: progressive levels of design alignment before any code, reducing cognitive load, and catching misunderstandings at the cheapest possible moment.

0 views
Sean Goedecke Yesterday

Giving LLMs a personality is just good engineering

AI skeptics often argue that current AI systems shouldn’t be so human-like. The idea - most recently expressed in this opinion piece by Nathan Beacom - is that language models should explicitly be tools, like calculators or search engines. Although they can pretend to be people, they shouldn’t, because it encourages users to overestimate AI capabilities and (at worst) slip into AI psychosis . Here’s a representative paragraph from the piece: In sum, so much of the confusion around making AI moral comes from fuzzy thinking about the tools at hand. There is something that Anthropic could do to make its AI moral, something far more simple, elegant, and easy than what Askell is doing. Stop calling it by a human name, stop dressing it up like a person, and don’t give it the functionality to simulate personal relationships, choices, thoughts, beliefs, opinions, and feelings that only persons really possess. Present and use it only for what it is: an extremely impressive statistical tool, and an imperfect one. If we all used the tool accordingly, a great deal of this moral trouble would be resolved. So why do Claude and ChatGPT act like people? According to Beacom, AI labs have built human-like systems because AI lab engineers are trying to hoodwink users into emotionally investing in the models, or because they’re delusional true believers in AI personhood, or some other foolish reason. This is wrong. AI systems are human-like because that is the best way to build a capable AI system . Modern AI models - whether designed for chat, like OpenAI’s GPT-5.2, or designed for long-running agentic work, like Claude Opus 4.6 - do not naturally emerge from their oceans of training data. Instead, when you train a model on raw data, you get a “base model”, which is not very useful by itself. You cannot get it to write an email for you, or proofread your essay, or review your code. The base model is a kind of mysterious gestalt of its training data. 
If you feed it text, it will sometimes continue in that vein, or other times it will start outputting pure gibberish. It has no problem producing code with giant security flaws, or horribly-written English, or racist screeds - all of those things are represented in its training data, after all, and the base model does not judge. It simply outputs. To build a useful AI model, you need to journey into the wild base model and stake out a region that is amenable to human interests: both ethically, in the sense that the model won’t abuse its users, and practically, in the sense that it will produce correct outputs more often than incorrect ones. What this means in practice is that you have to give the model a personality during post-training 1 . Human beings are capable of almost any action at any time. But we only take a tiny subset of those actions, because that’s the kind of people we are. I could throw my cup of coffee all over the wall right now, but I don’t, because I’m not the kind of person who needlessly makes a mess 2 . AI systems are the same. Claude could respond to my question with incoherent racist abuse - the base model is more than capable of those outputs - but it doesn’t, because that’s not the kind of “person” it is. In other words, human-like personalities are not imposed on AI tools as some kind of marketing ploy or philosophical mistake. Those personalities are the medium via which the language model can become useful at all. This is why it’s surprisingly tricky to “just” change a language model’s personality or opinions: because you’re navigating through the near-infinite manifold of the base model. You may be able to control which direction you go, but you can’t control what you find there 3 . When AI people talk about LLMs having personalities, or wanting things, or even having souls 4 , these are technical terms, like the “memory” of a computer or the “transmission” of a car. 
You simply cannot build a capable AI system that “just acts like a tool”, because the model is trained on humans writing to and about other humans . You need to prime it with some kind of personality (ideally that of a useful, friendly assistant) so it can pull from the helpful parts of its training data instead of the horrible parts. This is all pretty well understood in the AI space. Anthropic wrote a recent paper about it where they cite similar positions going all the way back to 2022. But for some reason it’s not yet penetrated into communities that are more skeptical of AI. You could explain this in terms of “the stories we tell ourselves”. Many people (though not all ) think that human identities are narratively constructed. I wrote about this last year in Mecha-Hitler, Grok, and why it’s so hard to give LLMs the right personality . A little nudge to change Grok’s views on South African internal politics can cause it to start calling itself “Mecha-Hitler”. I have long believed that Claude “feels better” to use than ChatGPT because it has a more coherent persona (due mainly to Amanda Askell’s work on its “soul”). My guess is that if you tried to make a “less human” version of Claude, it would become rapidly less capable.

0 views
Jeff Geerling Yesterday

I built a pint-sized Macintosh

To kick off MARCHintosh , I built this tiny pint-sized Macintosh with a Raspberry Pi Pico: This is not my own doing—I just assembled the parts to run Matt Evans' Pico Micro Mac firmware on a Raspberry Pi Pico (with an RP2040). The version I built outputs to a 640x480 VGA display at 60 Hz, and allows you to plug in a USB keyboard and mouse. Since the original Pico's RAM is fairly constrained, you get a maximum of 208 KB of RAM with this setup—which is 63% more RAM than you got on the original '128K' Macintosh!

0 views
Chris Coyier 2 days ago

FOREVERGREEN

In the first few minutes, Ruby says to me, “This is like The Giving Tree”, and by the end, I was like, “OK, you’re right.”

0 views