Latest Posts (20 found)
Justin Duke · 3 days ago

Maybe use Plain

When I wrote about Help Scout, much of my praise was appositional. They were the one tool I saw that did not aggressively shoehorn you into using them as a CRM to the detriment of the core product itself. This is still true. They launched a redesign that I personally don't love, but purely on subjective grounds. And it's still a fairly reasonable option for — and I mean this in a non-derogatory way — baby's first support system. I will also call out Jelly if you want something even simpler: an app that leans fully into the shared-inbox side of things. It is less featureful than Help Scout, but has a better design and a lower price point. If I were starting a new app today, this is what I would reach for first. But nowadays I use Plain. Plain will not solve all of your problems overnight. It's only a marginally more expensive product — $35 per user per month compared to Help Scout's $25 per user per month. The built-in Linear integration is worth its weight in gold if you're already using Linear, and its customer cards (the equivalent of Help Scout's sidebar widgets) are marginally more ergonomic to work with. The biggest downside we've had thus far is reliability — less in a cosmic or existential sense and more that Plain has had a disquieting number of small-potatoes incidents over the past three to six months. My personal flowchart for what service to use in this genre is something like:

1. Start with Jelly.
2. If I need something more than that, see if anyone else on the team has specific experience that they care a lot about, because half the game here is in muscle memory rather than functionality.
3. If not, use Plain.

But the biggest thing to do is take the tooling and gravity of support seriously as early as you can.


Outgrowing Django admin

For a bit of dessert work this week, I'm working on a full-fledged attempt at replacing the majority of our stock Django admin usage with something purposeful. I say majority and not totality because even though I am an unreasonable person, I am not that unreasonable. We have over a hundred Django models, and the idea of trying to rip and replace each and every one of them — or worse yet, to design some sort of DSL by which we do that — is too quixotic even for me. The vast majority of our admin usage coalesces around three main models, and they're the ones you might guess: the user/newsletter model, the email model, and the subscriber model. My hope is that building out a markedly superior interface for interacting with these three things and sacrificing the long tail still nets out to a much happier time for myself and the support staff. Django admin is a source of as much convenience as frustration: the abstractions make it powerful and cheap when you're first scaling, but the bill for those abstractions comes due in difficult and intractable ways. When I talk with other Django developers, they divide cleanly into one of two camps: either "what are you talking about, Django admin is perfect as-is" or "oh my God, I can't believe we didn't migrate off of it sooner." Ever the annoying centrist, I find myself agreeing with both camps. Let's set aside the visual design of the admin for a second, because arguing about visual design is not compelling prose. To me, the core issue with Django's admin interface, once you get more mature, is the fact that it's a very simple request-response lifecycle. Django pulls all the data, state, and information you might need and throws it up to a massive behemoth view for you to digest and interact with. It is by definition atomic: you are looking at a specific model, and the only way to bring other models into the detail view is by futzing around with inlines and formsets.
The classic thing that almost any Django developer at scale has run into is the N+1 problem — but not even necessarily the one you're thinking about. Take a fairly standard admin class: if you've got an email admin object and one of its fields is a ForeignKey to User — because you want to be able to change and see which user wrote a given email — Django by default will serialize every single possible user into a nice option tag for you. Even if this doesn't incur a literal N+1, you're asking the backend to generate a select with thousands (or more) of options; the serialization overhead alone will time out your request. And so the answer nowadays is to use something like autocomplete_fields, which pulls in a jQuery 1.9 package (yes, in 2026; no, I don't want to talk about it) to call an Ajax endpoint instead. This is the kind of patch that feels like a microcosm of the whole problem: technically correct, ergonomically awkward, and aesthetically offensive. But the deeper issue is composability rather than performance. A well-defined data model has relationships that spread in every direction. A subscriber has Stripe subscriptions and Stripe charges. It has foreign keys onto email events and external events. When you're debugging an issue reported by a subscriber, you want to see all of these things in one place, interleaved and sorted chronologically. Django admin's answer to this is inlines. This works — until it doesn't. You start to run into pagination issues; you can't interleave those components with one another because they're rendered as separate, agnostic blocks; you can't easily filter or search within a single inline. You could create a helper method on the subscriber class to sort all related events and present them as a single list, but you once again run into the non-trivial problem of this being part of a fixed request-response lifecycle.
And that kind of serialized lookup can get really expensive. You can do more bits of cleverness — parallelizing lookups, caching aggressively, using select_related and prefetch_related everywhere — but now you're fighting the framework rather than using it. The whole point of Django admin was to not build this stuff from scratch, and yet here you are, building bespoke rendering logic inside callbacks. I still love Django admin. On the next Django project I start, I will not create a bespoke thing from day one but instead rely on my trusty, outdated friend until it's no longer bearable. But what grinds my gears is the fact that, as far as I can tell, every serious Django company has this problem and has had to solve it from scratch. There's no blessed graduation path, whether in the framework itself or the broader ecosystem. I think that's one of the big drawbacks of Django relative to its peer frameworks. As strong and amazing as its community is, it's missing a part of the flywheel: more mature deployments upstreaming their findings and discoveries back into the zeitgeist. Django admin is an amazing asset; I am excited to be, if not rid of it, at least seeing much less of it in the future.


the tech-enabled surveillance of children

Every now and then, I'll be exposed to a world I otherwise have nothing to do with: child surveillance. What I see is infuriating. Not only are children nowadays pressured by their parents to turn on location services on their devices, but the parents also set up notifications for when the child arrives at and leaves a place, and alerts for when they stray from the path. They also get weekly, if not daily, updates about what their child did at school, via an app or a message from the teacher directly. This is nuts! This is not normal. This is not how I grew up, and it is not how those parents grew up either. They know it is absolutely possible to do without, as it always was pre-2015, but they choose this. Parents' paranoia is allowed to completely overrule the child's own right to privacy, completely unchecked. Emotions run high with anything child-related, so anything goes that could potentially help the safety of a child even a little. The trade-offs are ignored. A newsletter I subscribe to (Dense Discovery) has a section advertising apps and services, and in a recent issue, I was shocked to see them advertise what's probably the worst child surveillance tech I have seen in a while: "Bark is a parental control system that uses AI to scan texts, social media, images and videos across 30+ apps. It offers an app for existing devices (iPhone & Android) but also, it seems, custom hardware. The goal is to alert parents of potential dangers like bullying, self-harm content or predatory behaviour. It outsources parental vigilance to an algorithm, which is either reassuring or deeply unsettling depending on your stance on digital surveillance and trust. (Looks like it's currently only available in the US, South Africa and Australia.)" This isn't quirky or an issue to be neutral about; this is completely dystopian, and I'd expect more people to be deeply uncomfortable with this shit and to resist it, child or not.
What exactly is "reassuring" about any of this? You are way too comfortable making money off of advertising the complete dehumanization of children. You are treating them worse than prisoners, in ways you would never ever accept, in ways that weren't even possible when you were a child! You know what also counts as "child protection"? Protecting their human rights.

"Everyone has the right to respect for his or her private and family life, home and communications." and "1. Everyone has the right to the protection of personal data concerning him or her. 2. Such data must be processed fairly for specified purposes and on the basis of the consent of the person concerned or some other legitimate basis laid down by law. Everyone has the right of access to data which has been collected concerning him or her, and the right to have it rectified." in the Charter of Fundamental Rights of the European Union, Articles 7 and 8.

"1. Everyone has the right to respect for his private and family life, his home and his correspondence." in the European Convention on Human Rights, Article 8.

"No one shall be subjected to arbitrary or unlawful interference with his privacy, family, home or correspondence, nor to unlawful attacks on his honour and reputation. Everyone has the right to the protection of the law against such interference or attacks." in the International Covenant on Civil and Political Rights, Article 17.

And very similarly: "No one shall be subjected to arbitrary interference with their privacy, family, home or correspondence, nor to attacks upon his honor and reputation. Everyone has the right to the protection of the law against such interference or attacks." in the Universal Declaration of Human Rights, Article 12.

These are not exclusively about protecting people from the state, but about having privacy in general. There are also constitutional rights, whose wording depends on where you live; privacy is likely not mentioned explicitly in there, but inferred.
In Germany, for example, the right to informational self-determination (control over your data and privacy) is inferred from the general right of personality and privacy in Article 2(1) in connection with Article 1(1) Grundgesetz (GG). "(1) Human dignity shall be inviolable. To respect and protect it shall be the duty of all state authority." "(1) Every person shall have the right to free development of his personality insofar as he does not violate the rights of others or offend against the constitutional order or the moral law." People do not just begin to be people with rights when they reach adulthood. We should act accordingly. Consider what this surveillance does:

- You don't show your child you trust them, so why should they trust you? You model complete distrust and treat them as suspicious by default.
- They have no space where they can just explore how to be, make mistakes, or act out without being seen and immediately reported on. It's not safe to test boundaries or make mistakes, because instead of getting to make that mistake and deal with the fallout later (or it never coming out), their transgressions are immediately recorded, noticed, and punished.
- Abusive parents have even more pathways to abuse, control, and isolate. Instead of trying to live your life while jumping through hoops to keep abusers happy, it's easier to just give in, stay home, and do what you're told.
- You're completely normalizing state surveillance and companies snooping on us, and presenting it as a good thing.
- The fear of recordings and repression makes children obedient in advance, altering normal development. They are much more likely to act in ways their parents want instead of finding their own selves and path. This is especially bad for queer children.
- You are raising a terrific liar, and forcing your child to download scummy circumvention methods onto their devices.

Published 16 Feb, 2026

iDiallo · Today

Programming is free

A college student on his spring break contacted me for a meeting. At the time, I had my own startup and was navigating the world of startup school with Y Combinator and the publicity from TechCrunch. This student wanted to meet with me to gain insight on the project he was working on. We met in a cafe, and he got straight to business. He opened his MacBook Pro, and I glimpsed the website he and his partner had created. It was a marketplace for college students: you could sell your items to other students in your dorm. I figured this was a real problem he'd experienced and wanted to solve. But after his presentation, I only had one question in mind, about something he had casually dropped into his pitch without missing a beat. He was paying $200 a month for a website with little to no functionality. To top it off, the website was slow. In fact, it was so slow that he reassured me the performance problems should disappear once they upgraded to the next tier. Let's back up for a minute. When I was getting started, I bought a laptop for $60. A defective PowerBook G4 that was destined for the landfill. I downloaded BBEdit, installed MAMP, and in little to no time I had clients on Craigslist. That laptop paid for itself at least 500 times over. Then a friend gave me her old laptop, a Dell Inspiron e1505. That one paved the way to a professional career that landed me jobs in Fortune 10 companies. I owe it all not only to the cheap devices I used to propel my career and make a living, but also to the free tools that were available. My IDE was Vim. My language was PHP, a language that ran on almost every server, for the price of a shared hosting plan that cost less than a pizza. My cloud was a folder on that server. My AI pair programmer was a search engine and a hope that someone, somewhere, had the same problem I did and had posted the solution on a forum. The only barrier to entry was the desire to learn.
Fast forward to today, every beginner is buying equipment that can simulate the universe. Before they start their first line of code, they have subscriptions to multiple paid services. It's not because the free tools have vanished, but because the entire narrative around how to get started is now dominated by paid tools and a new kind of gatekeeper: the influencer. When you get started with programming today, the question is "which tool do I need to buy?" The simple LAMP stack (Linux, Apache, MySQL, PHP) that launched my career and that of thousands of developers is now considered quaint. Now, beginners start with AWS. Some get the certification before they write a single line of code. Every class and bootcamp sells them on the cloud. It's AWS, it's Vercel, it's a dozen other platforms with complex pricing models designed for scale, not for someone building their first "Hello, World!" app. Want to build something modern? You'll need an API key for this service, a paid tier for that database, and a hosting plan that charges by the request. Even the code editor, once a simple download, is now often a SaaS product with a subscription. Are you going to use an IDE without an AI assistant? Are you a dinosaur? To be a productive programmer, you need a subscription to an AI. It may be a fruitless attempt, but I'll say it anyway. You don't need any paid tools to start learning programming and building your first side project. You never did. The free tools are still there. Git, VS Code (which is still free and excellent!), Python, JavaScript, Node.js, a million static site generators. They are all still completely, utterly free. New developers are not gravitating towards paid tools by accident. Other than code bootcamps selling them on the idea, the main culprit is their medium of learning. The attention economy. As a beginner, you're probably lost. When I was lost, I read documentation until my eyes bled. It was slow, frustrating, and boring. But it was active. 
I was engaging with the code, wrestling with it line by line. Today, when a learner is lost, they go to YouTube. A question I am often asked is: "Do you know [YouTuber Name]? He makes some pretty good videos." And they're right. The YouTuber is great. They're charismatic, they break down complex topics, and they make it look easy. In between, they promote Hostinger or whichever paid tool is sponsoring them today. But the medium is the message, and the message of YouTube is passive consumption. You watch, you nod along, you feel like you're learning. And then the video ends. An algorithm, designed to keep you watching, instantly serves you the next shiny tutorial. You click. You watch. You never actually practice. Now, instead of just paying money for the recommended tool, you are also paying an invisible cost: you are paying with your time and your focus. You're trading the deep, frustrating, but essential work of building for the shallow, easy dopamine hit of watching someone else build. The influencer's goal is to keep you watching. The platform's goal is to keep you scrolling. Your goal should be to stop watching and start typing. These goals are at odds. I told that student he was paying a high cost for his hobby project. A website with a dozen products and images shouldn't cost more than a $30 Shopify subscription. If you feel more daring and want to do the work yourself, a $5 VPS is a good start. You can install MySQL, Rails, Postgres, PHP, Python, Node, or whatever you want on your server. If your project gains popularity, scaling it wouldn't be too bad. If it fails, the financial cost is a drop in the bucket. His story stuck with me because it wasn't unique. It's the default path now: spend first, learn second. But it doesn't have to be. You don't need an AI subscription. You don't need a YouTuber. You need a text editor (free), a language runtime (free), and a problem you want to solve. You need to get bored enough to open a terminal and start tinkering.
The greatest gift you can give yourself as a new programmer isn't a $20/month AI tool or a library of tutorial playlists. It's the willingness to stare at a blinking cursor and a cryptic error message until you figure it out yourself. Remember, my $60 defective laptop launched a career. That student's $200/month website taught him to wait for someone else to fix his problems. The only difference between us was our approach. The tools for learning are, and have always been, free. Don't let anyone convince you otherwise.


[trade] what surprises me most studying law

It's been a while, but I finally have a blog title trade again! James gave me the blog title "What surprises me most studying law". You can read what title I gave him here. Starting off with some small surprises: Looking back on the attitude I had before I started studying law, I thought it would be a lot harder. Or rather, difficult in a different way than it actually ended up being so far. This is explicitly not meant as a humble brag of "Look how good I am at this, so easy!"; it's just that I didn't know just how much is explicitly mentioned in the law (because I did not care much about reading it before), and that you are allowed to take the statutes with you into the exams to look stuff up in. I was surprised that law is not actually about knowing all of it by heart (almost none of that is needed), but that you just need to know where the information is and that it exists. This works in my favor. I can be forgetful about details, but I still know where exactly I found some piece of information. I think if more people knew about this aspect of studying law, maybe more people would consider it and not shy away from it! What also surprised me was the intense focus on practice cases (at least here in Germany). This is heavily criticized and I also have my gripes with it, but it also means that in the exams, you are writing a report judging a given case based on a specific, rigid structure. The cases usually belong to previously defined and covered case groups. It's likely you have already solved practice cases with that exact problem, or heard about it in the news or existing case law, so you know what kind of laws you need to apply, and you also don't need to worry about how to structure the report, as you need to follow a specific format. At least at my university, you do not actually end up writing a sort of free-form essay until the Bachelor thesis.
As long as you follow the formalities, know some case law that was covered in the materials, solve some practice cases, and can detect hints in the text about what the problem in the case is, you have a good chance of passing. All you have to do is follow the structure that's the same for all exams, open the book to the specific paragraphs you need, and read what's in them. You still need to know the paragraph numbers yourself and when each is applied, plus some definitions of terms and different interpretations; but that is far less than other degrees have to learn by heart. But what surprised me most is that there is no "one correct interpretation" of almost anything! There are controversies about most things in German law, with at least two to five different interpretations and views on how a specific paragraph is applied or worded. Laypeople often feel confident just quoting any paragraph at others and insisting that it means xyz, when not even the experts and professionals agree on it. The world of law may look black and white from the outside, but there's a good reason why most people you meet in law will answer anything with "It depends...". On the one hand, yes, parts of law are written in a way that is supposed to cover a lot of different cases under one umbrella; on the other hand, law also needs to allow some flexibility for outliers and new developments. That means it's usually not as clear-cut as it seems from the outside. That's also why we have courts - they continue to develop case law that adds to the interpretation of paragraphs and articles, and they stick to one interpretation or develop a new one, while making a decision within the discretion the law gives them. If law were actually such a straightforward thing that perfectly and clearly covered every situation, we wouldn't need courts deciding (aside from the right of parties to defend themselves, of course).
That also means that two people who did the exact same thing could walk out of court with different results. That may feel unjust, but so is life. I can see this directly in action in the data protection law space: law groups focused on digital rights and the informational self-determination of users argue for different interpretations of GDPR articles than lawyers employed by large social media companies. Law is developed further and changes every day; it is an ongoing conversation between many different parties and circumstances, totally different from the rigid set of rules I expected. :) Published 16 Feb, 2026


Raspberry Pi as Forgejo Runner

In my instructions on how to set up [Forgejo with a runner](/posts/55), I used a Hetzner server for the runner. This costs roughly 5 euros per month, so 60 euros annually. A full Hetzner server might be a bit overkill for a simple runner, especially if you are just running shell scripts or static site generation. The Hetzner server offers things like high bandwidth, low latency, a unique IPv4 address, and high uptime guarantees. Most of these are not necessary for your own runner. Therefore, in many cases it's probably a good idea to run the runner on your own hardware. What I have tested and work...


Type-based alias analysis in the Toy Optimizer

Another entry in the Toy Optimizer series . Last time, we did load-store forwarding in the context of our Toy Optimizer. We managed to cache the results of both reads from and writes to the heap—at compile-time! We were careful to mind object aliasing: we separated our heap information into alias classes based on what offset the reads/writes referenced. This way, even if we didn’t know whether two objects aliased, we could at least know that different offsets would never alias (assuming our objects don’t overlap and memory accesses are on word-sized slots). This is a coarse-grained heuristic. Fortunately, we often have much more information available at compile-time than just the offset, so we should use it. I mentioned in a footnote that we could use type information, for example, to improve our alias analysis. We’ll add a lightweight form of type-based alias analysis (TBAA) (PDF) in this post. We return once again to Fil Pizlo land, specifically How I implement SSA form . We’re going to be using the hierarchical heap effect representation from that post in our implementation, but you can use your own type representation if you have one already. This representation divides the heap into disjoint regions by type. Consider, for example, that objects of one type and objects of an unrelated type do not overlap: a pointer to one is never going to alias a pointer to the other. They can therefore be reasoned about separately. But sometimes you don’t have perfect type information available. If your language has an Object base class of all objects, then the Object heap overlaps with, say, the heap of every subclass. So you need some way to represent that too—just having an enum doesn’t work cleanly. Consider an example simplified type hierarchy, where nodes might represent different parts of the runtime’s data structures and could be further segmented into subtypes. Fil’s idea is that we can represent each node in that hierarchy with a tuple of integers (inclusive, exclusive) that represent the pre- and post-order traversals of the tree.
Or, if tree traversals are not engraved into your bones, they represent the range of all the nested objects within them. Then the “does this write interfere with this read” check—the aliasing check—is a range overlap query. Here’s a perhaps over-engineered Python implementation of the range and heap hierarchy, based on the Ruby generator and C++ runtime code from JavaScriptCore, in which a single top-level call kicks off the tree-numbering scheme. Fil’s implementation also covers a bunch of abstract heaps such as SSAState and Control, because his is used for code motion and whatnot. That can be added on later, but we will not do so in this post. So there you have it: a type representation. Now we need to use it in our load-store forwarding. Recall that, at its core, our load-store optimization pass iterates over the instructions, keeping a representation of the heap at compile-time. Reads get cached, writes get cached, and writes also invalidate the state of compile-time information about fields that may alias. In this case, our may-alias check asks only whether the offsets overlap. This means that a unit test will fail if it expects a cached write on one object to survive a write to the same offset on a different object—even when we have annotated the two objects with disjoint types. If we account for type information in our alias analysis, we can get such a test to pass. After doing a bunch of fussing around with the load-store forwarding (many rewrites), I eventually got it down to a very short diff: if we don’t have any type/alias information, we default to “I know nothing”—the top of the hierarchy—for each object. Then we check range overlap. The boolean logic in the invalidation check may look a little weird, but we can also rewrite it (via De Morgan’s law) as: keep all the cached field state about fields that are known, by offset and by type, not to alias. Maybe that is clearer (but not as nice a diff). Note that the type representation is not so important here!
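The numbering scheme and the alias check described above can be sketched in a few lines. This is my own minimal sketch, assuming half-open [begin, end) intervals; the names (AbstractHeap, number, overlaps, may_alias) are mine, not JavaScriptCore's:

```python
class AbstractHeap:
    """A node in the type hierarchy. After numbering, each node owns the
    half-open range [begin, end); every descendant's range nests inside
    it, so "can these two heaps overlap?" is a range-overlap query."""

    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)
        self.begin = self.end = None

    def number(self, counter=0):
        self.begin = counter          # pre-order: starts before children
        counter += 1
        for child in self.children:
            counter = child.number(counter)
        self.end = counter            # post-order: ends after children
        return counter

    def overlaps(self, other):
        # Ranges in this scheme are either nested or disjoint.
        return self.begin < other.end and other.begin < self.end


def may_alias(access_a, access_b):
    """Each access is an (offset, heap) pair: known-different offsets
    never alias; equal offsets alias only if the type ranges overlap."""
    (offset_a, heap_a), (offset_b, heap_b) = access_a, access_b
    return offset_a == offset_b and heap_a.overlaps(heap_b)


# Example hierarchy: Top covers everything; Int and Str are disjoint.
int_heap = AbstractHeap("Int")
str_heap = AbstractHeap("Str")
top = AbstractHeap("Top", [int_heap, str_heap])
top.number()

assert top.overlaps(int_heap)                        # parent vs child
assert not int_heap.overlaps(str_heap)               # disjoint siblings
assert not may_alias((0, int_heap), (8, int_heap))   # different offsets
assert not may_alias((0, int_heap), (0, str_heap))   # disjoint types
assert may_alias((0, top), (0, int_heap))            # no type info: Top
```

An object with no type annotation gets Top, whose range overlaps everything, so the check degrades gracefully to the offset-only behavior.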
You could use a bitset version of the type information if you want. The important things are that you can cheaply construct types and check overlap between them. Nice, now our test passes! We can differentiate between memory accesses on objects of different types. But what if we knew more? Sometimes we know where an object came from. For example, we may have seen it get allocated in the trace. If we saw an object’s allocation, we know that it does not alias (for example) any object that was passed in via a parameter. We can use this kind of information to our advantage. For example, in the following made up IR snippet: We know that (among other facts) doesn’t alias or because we have seen its allocation site. I saw this in the old V8 IR Hydrogen’s lightweight alias analysis 1 : There is plenty of other useful information such as: If you have other fun ones, please write in. We only handle loads and stores in our optimizer. Unfortunately, this means we may accidentally cache stale information. Consider: what happens if a function call (or any other opaque instruction) writes into an object we are tracking? The conservative approach is to invalidate all cached information on a function call. This is definitely correct, but it’s a bummer for the optimizer. Can we do anything? Well, perhaps we are calling a well-known function or a specific IR instruction. In that case, we can annotate it with effects in the same abstract heap model: if the instruction does not write, or only writes to some heaps, we can at least only partially invalidate our heap. However, if the function is unknown or otherwise opaque, we need at least more advanced alias information and perhaps even (partial) escape analysis. Consider: even if an instruction takes no operands, we have no idea what state it has access to. If it writes to any object A, we cannot safely cache information about any other object B unless we know for sure that A and B do not alias. 
And we don’t know what the instruction writes to. So we may only know we can cache information about B because it was allocated locally and has not escaped.

Some runtimes such as ART pre-compute all of their alias information in a bit matrix. This makes more sense if you are using alias information in a full control-flow graph, where you might need to iterate over the graph a few times. In a trace context, you can do a lot in one single pass—no need to make a matrix.

As usual, this is a toy IR and a toy optimizer, so it’s hard to say how much faster it makes its toy programs. In general, though, there is a dial for analysis and optimization that goes between precision and speed. This is a happy point on that dial: only a tiny incremental analysis cost bump above offset-only invalidation, but with higher precision. I like that tradeoff. Also, it is very useful in JIT compilers, where generally the managed language is a little better-behaved than a C-like language . Somewhere in your IR there will be a lot of duplicate loads and stores from a strength reduction pass, and this can clean up the mess.

Thanks for joining as I work through a small use of type-based alias analysis for myself. I hope you enjoyed. Thank you to Chris Gregory for helpful feedback.

Other useful sources of aliasing information, as promised above:

- If we know at compile-time that object A has 5 at offset 0 and object B has 7 at offset 0, then A and B don’t alias (thanks, CF)
- In the RPython JIT in PyPy, this is used to determine if two user (Python) objects don’t alias because we know the contents of the user (Python) class field
- Object size (though perhaps that is a special case of the above bullet)
- Field size/type
- Deferring alias checks to run-time (have a branch)

1. I made a fork of V8 to go spelunk around the Hydrogen IR. I reset the V8 repo to the last commit before they deleted it in favor of their new Sea of Nodes based IR called TurboFan. ↩
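For reference, the bitset encoding of type information mentioned at the top of this post can be sketched in a few lines (the type names are invented for illustration):

```python
# Toy bitset encoding of type information (type names invented): each leaf
# type gets one bit; a type set is the bitwise OR of its members.
# Construction and overlap checks are single integer operations.
INT, STR, LIST, OBJ = 1 << 0, 1 << 1, 1 << 2, 1 << 3
ANY = INT | STR | LIST | OBJ  # top of the lattice: could be anything

def type_union(a: int, b: int) -> int:
    return a | b

def may_alias(a: int, b: int) -> bool:
    # Two accesses can only alias if their type sets intersect.
    return (a & b) != 0

assert not may_alias(LIST, type_union(INT, STR))  # disjoint types: no alias
assert may_alias(LIST, ANY)                       # unknown type: must assume alias
```

Overlap being a single AND is what makes the check cheap enough to run on every load/store pair in a single pass over the trace.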

0 views

Notes on "Harness Engineering"

I find it useful and revealing to perform very close readings of engineering blog posts from frontier labs. They seem like meaningful artifacts that, despite their novelty, are barely discussed at any level except the surface one. I try to keep an open but keen mind when reading these posts, both trying to find things that don't make much sense when you think about them for more than a couple of seconds and things that are clearly in the internal zeitgeist at these companies but haven't quite filtered out into the mainstream. And so I read Harness Engineering with this spirit in mind. Some notes: This was a fairly negative list of notes, and I want to end with something positive: I do generally agree with the thrust of the thesis. Ryan writes: This is the kind of architecture you usually postpone until you have hundreds of engineers. With coding agents, it's an early prerequisite: the constraints are what allows speed without decay or architectural drift. I think this is absolutely the right mindset. Build for developer productivity as if you have one more order of magnitude of engineers than you actually do. It's disingenuous to make this kind of judgment without knowing more about the use case and purpose of the application itself, but the quantitative metrics divulged are astounding. The product discussed in this post has been around for five months. It contains over one million lines of code and is not yet ready for public consumption but has a hundred or so users. If you had told me those statistics in any other context, I would be terrified of what was happening within that poor Git repository — which is to say nothing of a very complicated stack relative to an application of that size. Why do you need all this observability for literally one hundred internal users? Again, there might be a very reasonable answer that we are not privy to.
Most of the techniques discussed in this essay — like using as an index file rather than a monolith — have been fully integrated into the meta at this point. But there's one interesting bit about using the repository as the main source of truth, and in particular building a lot of tooling around things like downloading Slack discussions or other bits of exogenous data so they can be stored at a repository level. My initial reaction was one of revulsion. Again, that poor, poor Git repository. But in a world where you're optimizing for throughput, getting to eliminate network calls and MCP makes a lot of sense — though I can't help but feel like storing things as flat files, as opposed to throwing them into a SQLite database or something a little more ergonomic, would make more sense. 1 See insourcing-your-data-warehouse . The essay hints at, but does not outright discuss, failure modes. They talk about the rough harness for continuous improvement and paying down of technical debt, as well as how to reproduce and fix bugs, but comparatively little about the circumstances by which those bugs and poor patterns are introduced in the first place. Again, I get it. It's an intellectual exercise, and I'm certainly not one to suggest that human-written code is immune from bugs and poor abstractions. But this does feel a little bit like Synecdoche, New York — an intellectual meta-exercise that demands just as much attention and care and fostering as the real thing. At which point one must ask oneself: why bother?
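For what it's worth, the SQLite alternative is only a few lines. Here's a hypothetical sketch (schema, table, and field names all invented) of storing imported Slack threads in a queryable table rather than as flat files:

```python
import json
import sqlite3

# Hypothetical sketch: ingest exogenous data (e.g. Slack threads) into a
# SQLite file checked into the repo, instead of a pile of flat files.
# Schema and field names are invented for illustration.
conn = sqlite3.connect(":memory:")  # would be a .db file in the repo
conn.execute("""
    CREATE TABLE slack_threads (
        id INTEGER PRIMARY KEY,
        channel TEXT,
        ts TEXT,
        body TEXT
    )
""")

def import_thread(thread: dict) -> None:
    # Store the message list as JSON so the row stays self-contained.
    conn.execute(
        "INSERT INTO slack_threads (channel, ts, body) VALUES (?, ?, ?)",
        (thread["channel"], thread["ts"], json.dumps(thread["messages"])),
    )

import_thread({"channel": "#eng", "ts": "1700000000.0", "messages": ["ship it"]})
rows = conn.execute(
    "SELECT body FROM slack_threads WHERE channel = ?", ("#eng",)
).fetchall()
assert json.loads(rows[0][0]) == ["ship it"]
```

The win over flat files is that an agent (or a human) gets indexed, ad-hoc querying for free, still with zero network calls.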

1 views

Adam Fannin on Voting

“There is more power in praying than there is in voting.” Source: “Put not your trust in princes, nor in the son of man, in whom there is no help.” (Psalm 146:3)

0 views

Carl Cox On Waiheke

What’s going on, Internet? Ferry ride over, no kids. Bus to Onetangi. Alibi for a late lunch. Picanha steak with seasonal vegetables. Unfortunately, they had some type of beer shortage, so their usual selection was limited to four tap beers. I enjoyed the Ruru Hazy, even though it was a hazy. Hopefully, they have the full range available next time I’m there. The gig was a minute walk up the road at the Wild Estate . I was wondering how they would do the setup, and once I saw the fences and tents set up on the front lawns, it made sense. We got through check-in sweet as. The drinks were supplied by Pals. We grabbed a drink, my wife a Purple Pals and a Frankie’s Cola for myself. We took a short walk around the venue to get a lay of the land and found a table to sit down at. There was one person there enjoying a pizza. We said hello and sat down. Shortly after, a couple approached and asked if the seats were free. Of course, come sit down. Let’s chat. Want another drink? Sure, let’s go. Friends were made. Another couple, two friends, sat down in the remaining seats. Hi, how are you? More friends. Time to dance. We met up on the dance floor. A group of new friends dancing amongst the crowd to Nichole Moudaber before Carl Cox came on. Both sets were amazing and just what I wanted to hear on a Saturday afternoon. It’s pretty cool that Carl can play something like Awakenings Festival to hundreds of thousands of people, and then a month later play a small venue on Waiheke to a crowd of a thousand. The gig started at 3pm and went until 9pm. Perfect timing for us. We decided to skip staying to the end of Carl’s set and grabbed the 8:11pm bus back to the ferry terminal. I think we made the right call, as the next boat back to the city was at 9:30pm. Sure, we had to wait at the ferry terminal, but we were at the front of the line and got a seat right away as the boat turned up. There were hundreds of people left waiting at the terminal for the next boat. 
We managed to get home and into bed by 11pm. Perfect timing for a good enough sleep before kids’ activities in the morning. I miss nights out like these. Hey, thanks for reading this post in your feed reader! Want to chat? Reply by email or add me on XMPP , or send a webmention . Check out the posts archive on the website.

0 views

Deep Blue

We coined a new term on the Oxide and Friends podcast last month (primary credit to Adam Leventhal) covering the sense of psychological ennui leading into existential dread that many software developers are feeling thanks to the encroachment of generative AI into their field of work. We're calling it Deep Blue . You can listen to it being coined in real time from 47:15 in the episode . I've included a transcript below . Deep Blue is a very real issue. Becoming a professional software engineer is hard . Getting good enough for people to pay you money to write software takes years of dedicated work. The rewards are significant: this is a well compensated career which opens up a lot of great opportunities. It's also a career that's mostly free from gatekeepers and expensive prerequisites. You don't need an expensive degree or accreditation. A laptop, an internet connection and a lot of time and curiosity is enough to get you started. And it rewards the nerds! Spending your teenage years tinkering with computers turned out to be a very smart investment in your future. The idea that this could all be stripped away by a chatbot is deeply upsetting. I've seen signs of Deep Blue in most of the online communities I spend time in. I've even faced accusations from my peers that I am actively harming their future careers through my work helping people understand how well AI-assisted programming can work. I think this is an issue which is causing genuine mental anguish for a lot of people in our community. Giving it a name makes it easier for us to have conversations about it. I distinctly remember my first experience of Deep Blue. For me it was triggered by ChatGPT Code Interpreter back in early 2023. My primary project is Datasette , an ecosystem of open source tools for telling stories with data. I had dedicated myself to the challenge of helping people (initially focusing on journalists) clean up, analyze and find meaning in data, in all sorts of shapes and sizes. 
I expected I would need to build a lot of software for this! It felt like a challenge that could keep me happily engaged for many years to come. Then I tried uploading a CSV file of San Francisco Police Department Incident Reports - hundreds of thousands of rows - to ChatGPT Code Interpreter and... it did every piece of data cleanup and analysis I had on my napkin roadmap for the next few years with a couple of prompts. It even converted the data into a neatly normalized SQLite database and let me download the result! I remember having two competing thoughts in parallel. On the one hand, as somebody who wants journalists to be able to do more with data, this felt like a huge breakthrough. Imagine giving every journalist in the world an on-demand analyst who could help them tackle any data question they could think of! But on the other hand... what was I even for ? My confidence in the value of my own projects took a painful hit. Was the path I'd chosen for myself suddenly a dead end? I've had some further pangs of Deep Blue just in the past few weeks, thanks to the Claude Opus 4.5/4.6 and GPT-5.2/5.3 coding agent effect. As many other people are also observing, the latest generation of coding agents, given the right prompts, really can churn away for a few minutes to several hours and produce working, documented and fully tested software that exactly matches the criteria they were given. "The code they write isn't any good" doesn't really cut it any more. Bryan : I think that we're going to see a real problem with AI induced ennui where software engineers in particular get listless because the AI can do anything. Simon, what do you think about that? Simon : Definitely. Anyone who's paying close attention to coding agents is feeling some of that already. There's an extent where you sort of get over it when you realize that you're still useful, even though your ability to memorize the syntax of program languages is completely irrelevant now. 
Something I see a lot of is people out there who are having existential crises and are very, very unhappy because they're like, "I dedicated my career to learning this thing and now it just does it. What am I even for?". I will very happily try and convince those people that they are for a whole bunch of things and that none of that experience they've accumulated has gone to waste, but psychologically it's a difficult time for software engineers. Bryan : Okay, so I'm going to predict that we name that. Whatever that is, we have a name for that kind of feeling and that kind of, whether you want to call it a blueness or a loss of purpose, and that we're kind of trying to address it collectively in a directed way. Adam : Okay, this is your big moment. Pick the name. If you call your shot from here, this is you pointing to the stands. You know, I – Like deep blue, you know. Bryan : Yeah, deep blue. I like that. I like deep blue. Deep blue. Oh, did you walk me into that, you bastard? You just blew out the candles on my birthday cake. It wasn't my big moment at all. That was your big moment. No, that is, Adam, that is very good. That is deep blue. Simon : All of the chess players and the Go players went through this a decade ago and they have come out stronger. Turns out it was more than a decade ago: Deep Blue defeated Garry Kasparov in 1997 . You are only seeing the long-form articles from my blog. Subscribe to /atom/everything/ to get all of my posts, or take a look at my other subscription options .

0 views
./techtipsy Yesterday

BTRFS disk errors to fall asleep to

This is inspired by a dying Seagate Portable 4TB hard drive, and brought to you by 15 minutes of vibe engineering. Starting the RMA process on the Seagate website is one of the most difficult things I’ve done lately, and half the links there look like a legitimate phishing attempt. By the time I got the RMA created, I’d run out of time and energy to follow through with this process. I guess it’s a great way to keep your RMA rates low, though!

0 views
Kev Quirk Yesterday

Step Aside, Phone: Week 1

OK, so here we are at the end of the first week of Step Aside, Phone . Quick recap from last week - my average phone and tablet usage combined was approximately 4 hours per day (2.5hrs on phone, and 1.5hr on tablet). That's high! Hopefully this week was better? This one is easy - my screen time on my tablet has been zero, as I turned it off last week, and haven't turned it back on again. Instead I've been either reading RSS feeds quickly on my phone before bed, or reading a book on my Kindle. It took me a couple of days to get back into reading a book; I haven't done it for a while and as a result my mind kept wandering. I'm back in the swing of things now though and I'm enjoying the book I'm currently reading. Honestly, I haven't missed my tablet at all. I'm not sure if that thing will get turned back on. So the phone...that's also reduced for the most part, but I have had a couple of days with heavier usage. Here's how the breakdown went:

Mon - 48 mins
Tue - 61 mins
Wed - 55 mins
Thur - 2hrs 13mins
Fri - 1hr 42mins
Sat - 2hrs 01min
Sun - 30 mins
Average - 1hr 19mins

Ok, so from a 2.5hr average to a 1:19hr average. I'll take that. My usage was up for a few days between Wed-Fri as I was shopping for stuff on Amazon, as well as browsing for a new car for my wife. The only day where I really wasted time was on Thursday, where I spent some time on YouTube during my son's swimming lesson (that's the only time I went on YT all week), and on Friday, where I spent 16 mins playing my silly game. Overall I think it's been a pretty good week, and I hope the next 3 weeks continue to improve. Although, not quite as good as Manu this week !

0 views
Kev Quirk Yesterday

I Didn't Fail

A good friend of mine at work was recently promoted to the same level as I was before I stepped down . I'm happy for him, as he's a good friend and it's nice to see people achieve their goals. However, a day or so after he told me the news, I found myself feeling jealous. After taking some time to think through my feelings, I think it's all ego. You see, he will now be one of only 2 people in our department at that level. He will also get his own office (my old office, actually), and likely an assistant too. But he will also get the kudos of being the chief . Some days I miss being the chief . I was a global executive at one of the biggest banks on the planet before I was 40. I think that’s impressive, and I was proud of that. I felt validated, like I was winning. I felt like I'd proven something to myself, that the scruffy kid from the council estate with no degree could succeed. No, further. That kid had won . But what did being the chief give me? Well, it gave me long days, late nights, lots of travel, and huge amounts of stress. I knew that before I went into the role. These are very difficult roles to be successful in, and they're not for everyone. Shit, they're not for most people. It wasn't for me. So I stepped down, and there are times when I feel like I've walked away from something important. Like I've diminished myself. Like I've failed . Now it’s my friend’s turn to step up. To be the chief . If I’m honest with myself, he’ll probably be better at that level than I ever was. And that made me jealous. I wrote the following in my journal: Since finding out that [person's name] is taking the new exec position, I've honestly been a little jealous. Mainly because of my ego; that he's gonna be chief, and I'm not any more. But then today I had a really productive day. You know, the kinda day where you get a lot of shit done and you just feel good at end of it. Busy, but not overwhelmed. I'm happy . I don't need to be the chief . What I need is to be happy. 
I've worked through that pang of jealousy I had for my mate's new role; I'm genuinely happy for him and I'll do my best to support him in any way I can. As for me, I stepped down to have less stress. To spend more time with my wife and kids. To go back to doing a job I know I'm good at. To be happy . And even though I'm no longer the chief , I now have all those things. I didn't fail. I stepped down because I didn’t want what the role required. And occasionally, my ego forgets that.

0 views

My Courses Site is Moving to a New Home

This is a short blog post to announce that I'm migrating the site in which I host my paid courses to a new platform at https://learn.miguelgrinberg.com . If you have purchased a course or ebook directly from me, this article tells you how to transfer your account to the new site.

0 views
ava's blog Yesterday

some thoughts on online verification

I've been thinking about writing a post on the Discord age verification thing, but the entire situation is milked to death by content creators right now. Everyone feels the need to throw their conspiracy theories and misinformation into every comment section as well, so it just feels like a lot of noise and panic right now. I'll leave it at a retrospective write-up when the dust has settled and not add to the confusion.  What I feel like touching on instead is the history of age or name verification online. I've seen many people behave as if this is a new issue or an escalation, and while I understand the concerns, I feel like we shouldn't lose sight of the bigger picture. That's not meant to sugarcoat what's happening or make it seem more harmless, but point out that this has been going on for longer and is part of a bigger pattern. Thinking back on my time online, of course I also had to verify age to purchase games on PlayStation and Steam. But even nowadays, as I have no YouTube account, I get a pop-up that YouTube classifies me as a minor after a few videos. This didn't just start in 2025 when they started using AI to judge users' age; I remember the outrage when YouTube enabled age verification in the first place and asked adult accounts to submit an ID to prove their age. But did anything change? No. People did not leave the platform en masse. I also remember the start of Facebook's real name policy . This de-anonymized people or locked them out of their account unless they provided ID, and targeted ethnic groups a lot, as well as any people whose name on their documents doesn't match the name they go by. It's especially funny to read the justification of " authentic identity is important to the Facebook experience, and our goal is that every account on Facebook should represent a real person " when they are at the forefront of AI user profiles and chatbots right now. 
Even before and during all that, we have watched as sex workers, NSFW artists and queer people in general have had their accounts demonetized, removed, and payment providers discriminating against them and their platforms due to the general stigma and ideas of "protecting kids". But not many were willing to stand up against that because it surely wouldn't extend to the "respectable people", and only got rid of the people they didn't want to see. My point is: These things are older than the recent UK, Australia and select few US states legal mandates of age verification. Of course, just 'consuming content' in an age-restricted way is different than having direct communication hampered by age restriction and surveilled. Being aware that you are watched can lead to self-censorship. I am reminded of the German " Volkszählungsurteil ", which said (translated by me): “ Anyone who is uncertain whether deviant behavior is being recorded at any time and permanently stored, used, or passed on as information will try not to attract attention through such behavior. […] This would not only impair the individual’s opportunities for personal development, but also the common good, because self-determination is an elementary functional condition of a free democratic community that is based on the capacity of its citizens to act and to participate. From this it follows: Under modern conditions of data processing, the free development of personality presupposes the protection of the individual against unlimited collection, storage, use, and disclosure of personal data. This protection is therefore encompassed by the fundamental right in Article 2(1) in conjunction with Article 1(1) of the Basic Law. To that extent, the fundamental right guarantees the individual the authority, in principle, to determine for themselves the disclosure and use of their personal data. 
” Fear of constant monitoring leads to self-censorship and conformity, which harms both individual freedom and democratic participation. But how have we dealt with the knowledge that this is happening? Denial, ignorance, forgetting, defeatism, making memes about our FBI agent, pretending security by obscurity works, focusing on how it makes apps nicer to use, and pretending we have nothing to hide. I saw a YouTuber I like say that Discord surveilling every message for sensitive content or to guess your age is like sending all your messages to the FBI. That left me a little speechless. Unfortunately, it's like many haven't learned anything from the Snowden era. US intelligence is already allowed to almost freely collect data on you 1 , and even as a non-US citizen, see FISA 702 bulk surveillance. Stuff like that is exactly why Safe Harbor and Privacy Shield failed, and why the current upholding of the EU-US Privacy Framework is a farce. This is the issue with no encryption. This is exactly why your privacy-conscious friends were leading you towards options that could be encrypted (and why governments everywhere wage a war on encryption). If you send something via unencrypted means, technically speaking, you must treat it as consent for it to be collected, compiled and evaluated, which sucks. It shouldn't be that way, but it is. Even I struggle with that! This is extremely uncomfortable, especially when most of us were only educated on this years into treating our data on services as private and safe, or when we were children who didn't know how to properly judge the consequences of our actions online and were surrounded by others who did the same thing. This is also a boiling frog situation. You point out for years that the amount of data these giants collect on you is not okay. 
You advise people to go look into Google or Twitter settings and see what they are grouped as for targeted advertising, to show them exactly what data is collected as an eye-opener, and to turn stuff off. You advise people what services they could switch to. Instead, many people doubled down on it because the recommendations of the algorithm and ads are so good, having a home assistant like Alexa is so sci-fi and convenient, and a Ring camera and a pet camera are the pinnacle of home safety. The more private service is ugly or doesn't auto-detect your music, or whatever other weird reason people can think of. Only now, with a US government becoming increasingly dangerous, do people seem to rethink it all - deleting some social media accounts, switching away from Google, getting rid of their Ring cameras and the like. The problem is: If you make decisions like that based on your current government, you aren't ready for the next one. If you allow intense data harvesting under a benevolent government, that dataset already exists for when fascists take power. You can point all you want towards countries where being gay or trans is illegal or where women cannot leave the house on their own and act as if this won't affect you; you and they are not so different, and very little actually protects you from that. The safest option isn't to hope that the next institution to have access to intense amounts of data every couple of years will not misuse it, but that they don't hold this level and amount of data to begin with. The same goes for companies: Even if you trust them now, differences in laws, leadership and profitability can change the circumstances. As a user, you're unlikely to be able to control them; you can only control yourself and your means, to an extent. Have you also noticed that 2025 seems to have been the year with the most "Wrapped"s so far? It felt like every app and service had a Wrapped ready for you - even period tracking software!
Of course they are very fun to share and get to know your friends better and measure up against them, but they absolutely normalize being comfortable with this sort of surveillance. The mechanisms and data on which services like YouTube and Discord attempt to guess your age for verification are the same ones they use for advertising, the feed algorithm, the Wrapped and the auto-generated playlists you enjoy. So dare to look behind the fun facade and know what these things truly are. "delulu yearning girl dinner friday evening" is another way to present "20-25 years old", location and interests. Reply via email Published 15 Feb, 2026

1. Every argument denying this is "they can't do that, that's illegal!" levels of convincing. There are so many intelligence laws, so much careful wording, and also so many internals we do not (yet) know about. It took whistleblowers to show some of it, and recent ICE news shows the tip of the iceberg with what law enforcement and intelligence are willing to do to ensure more surveillance - Palantir, Flock etc. ↩

0 views
Rik Huijzer Yesterday

Running `deezer/spleeter`

Here are up-to-date installation instructions for running Deezer's Spleeter on `Ubuntu 24.04`. Minimum requirements are around 16 GB of RAM. (During the processing, it uses around 11 GB at the peak.) I ran this on a temporary Hetzner server because my Apple Silicon system, after lots of fiddling with versions, ran into AVX issues. Install Conda.

```
conda create -n spleeter_env python=3.8 -y
```

```
conda activate spleeter_env
```

```
conda install -c conda-forge ffmpeg libsndfile numpy=1.19 -y
```

```
pip install spleeter
```

```
spleeter separate -o audio_output input.mp3
```

If your a...

0 views
(think) Yesterday

How to Vim: Many Ways to Paste

Most Vim users know `p` and `P` – paste after and before the cursor. Simple enough. But did you know that Vim actually has around a dozen paste commands, each with subtly different behavior? I certainly didn’t when I started using Vim, and I was surprised when I discovered the full picture. Let’s take a tour of all the ways to paste in Vim, starting with Normal mode and then moving to Insert mode.

One important thing to understand first – it’s all about the register type . Vim registers don’t just store text, they also track how that text was yanked or deleted. There are three register types (see `:help getregtype()`): characterwise, linewise, and blockwise. This is something that trips up many Vim newcomers – the same command can behave quite differently depending on the register type!

- Characterwise (e.g., a `yw` yank): `p` inserts text to the right of the cursor, `P` to the left.
- Linewise (e.g., `yy`, `dd`): `p` inserts text on a new line below, `P` on a new line above. The cursor position within the line doesn’t matter.
- Blockwise (e.g., a `Ctrl-V` selection): text is inserted as a rectangular block starting at the cursor column.

With that in mind, here’s the complete family of paste commands in Normal mode: `p`, `P`, `gp`, `gP`, `]p`, `[p`, `]P`, `[P`, `zp`, and `zP`. The “direction” of each command reflects both cases – for characterwise text it’s “after/before the cursor”, for linewise text it’s “below/above the current line”. How to pick the right paste command? Here are a few things to keep in mind:

- The difference between `p`/`P` and `gp`/`gP` is all about where your cursor ends up. With `p` the cursor lands on the last character of the pasted text, while with `gp` it moves just past the pasted text. This makes `gp` handy when you want to paste something and continue editing right after it.
- `]p` and `[p` are incredibly useful when pasting code – they automatically adjust the indentation of the pasted text to match the current line. No more pasting followed by a manual re-indent! 1
- `zp` and `zP` are the most niche – they only matter when pasting blockwise selections, where they avoid adding trailing whitespace to pad shorter lines.

All Normal mode paste commands accept a count (e.g., `3p` pastes three times) and a register prefix (e.g., `"ap` pastes from register `a`).

In Insert mode things get interesting. All paste commands start with `Ctrl-R`, but the follow-up keystrokes determine how the text gets inserted. Let me unpack this a bit:

- `Ctrl-R` is the most common one – you press `Ctrl-R` and then a register name (e.g., `"`, `0`, or `+` for the system clipboard). The text is inserted as if you typed it, which means `textwidth` formatting and auto-indentation apply. This can be surprising if your pasted code gets reformatted unexpectedly.
- `Ctrl-R Ctrl-R` inserts the text literally – special characters like backspace won’t be interpreted. However, `textwidth` and auto-indent still apply.
- `Ctrl-R Ctrl-O` is the “raw paste” – no interpretation, no formatting, no auto-indent. What you see in the register is what you get. This is the one I’d recommend for pasting code in Insert mode.
- `Ctrl-R Ctrl-P` is like `Ctrl-R Ctrl-O`, but it adjusts the indentation to match the current context. Think of it as the Insert mode equivalent of `]p`.

Note: Plain `Ctrl-R` can be a minor security concern when pasting from the system clipboard (`+` or `*` registers), since control characters in the clipboard will be interpreted. When in doubt, use `Ctrl-R Ctrl-O` instead.

And that’s a wrap! Admittedly, even I didn’t know some of those ways to paste before doing the research for this article. I’ve been using Vim quite a bit in the past year and I’m still amazed how many ways to paste there are! If you want to learn more, check out `:help put`, `:help gp`, `:help ]p`, and `:help i_CTRL-R` in Vim’s built-in help. There’s always more to discover! That’s all I have for you today. Keep hacking!

1. You can also use `]p` and `[p` with a register, e.g. `"a]p` to paste from register `a` with adjusted indentation. ↩

0 views

Step aside, phone: week 1

First weekly recap for this fun life experiment. To remind you what this is all about : in order to help Kevin get back to a more sane use of his time in front of his phone, we decided to publicly share 4 weeks of screen time statistics from our phones and write roundups every Sunday. Yes, we’re essentially trying to shame ourselves into being more mindful about our phone usage. Let me tell you, it definitely works. Every time I do one of these experiments, I use the first week to prove to myself that this whole phone usage situation is mostly a matter of being mindful about it, and that if I decide that I don’t want to use the phone, well, I will not use it. And it’s not very hard. Monday to Wednesday, I basically almost never picked up my phone from my desk. It was fully charged on Sunday afternoon, and I didn’t plug it in again till Thursday. I did use it when I was outside for a couple of minor things, but as you can see from the image below, screen time is reporting 9 minutes of total usage for the first 3 days of the week. Thursday and Friday, I logged a bit more screen time (had to do a few things that required the use of apps), but also because I started listening to a few podcasts while I was driving. I said I started because one thing I did this week was delete any app that’s related to content consumption from the phone. I think my personal goal for this month-long experiment is going to be to get back to a use of my phone that’s utility-driven and not consumption-focused. The phone should be a tool to do things and not a passive consumption device. Friday usage spiked, and that’s because I was out on a date, so most of the time spent with the screen on was Google Maps being open while I was in the car. I still tried to be mindful of that, though. I drove about 5 hours back and forth, but I only used Google Maps for a bit more than 1 hour. 
I also used the browser for the first time this week to purchase a couple of tickets for a museum, and I took a few pictures. So this is how the first week went. Not included here is last Sunday—I told Kevin we were going to start this experiment on Monday—but I clocked 11 minutes on that day. Not bad. Now, one consideration about this first week: in order to push my phone usage this low, I had to move some of my normal phone usage over to my Mac, which is how I managed to basically never touch chat apps on my phone. I know this is pretty much cheating, but it was intentional and something I was planning to do only in this first week, and I will move that screen time back on my phone starting next week. The goal is to find the right balance after all, and I like the process of pushing it all the way down to the extreme and then bringing it back up to some more sane levels. If you have decided to take part in this experiment, email me a link to your post, and I’ll include it below. Thank you for keeping RSS alive. You're awesome. Email me :: Sign my guestbook :: Support for 1$/month :: See my generous supporters :: Subscribe to People and Blogs Read Kevin's week one recap Read Thomas' week one recap Read Steve's week one recap Read John's week one and two recaps

0 views

OpenClaw, OpenAI and the future

I'm joining OpenAI to work on bringing agents to everyone. OpenClaw will move to a foundation and stay open and independent.

0 views