Posts in Web-development (20 found)
Justin Duke Yesterday

Unshipping Keystatic

Two years after initially adopting it, we've formally unshipped Keystatic. Our CMS, such as it is, is now a bunch of Markdoc files and a TypeScript schema organizing the front matter — which is to say, it's not really a CMS at all. There were a handful of reasons for this move, in no specific order:

Our team's use of Keystatic as an actual front-end CMS had dropped to zero. All of the non-coders have grown sufficiently adept with Markdown that the GUI was gathering dust; Keystatic had become a pure schema validation and rendering tool, and offered fairly little beyond what we were already getting from our build step.

Some of the theoretically nice things — image hosting, better previewing — either didn't work as smoothly as we'd like or were supplanted entirely by Vercel's built-in features.

The project appears to have atrophied a little bit, commits dwindling into a one-per-quarter frequency despite a healthy number of open issues. This is not to besmirch the lovely maintainers, who have many other things going on. But it's harder to stick around on a library you're not getting much value from when you're also worried there's not a lot of momentum down the road.

That last point is basically what I wrote about Invoke — it's a terrible heuristic, judging a project by its commit frequency, and I know that. Things can and should be finished! And yet. When you're already on the fence, a quiet GitHub graph is the thing that tips you over.

To Keystatic's credit, it was tremendously easy to extricate. The whole migration was maybe two hours of work, most of which was just deleting code. That's the sign of a well-designed library — one that doesn't metastasize into every corner of your codebase. I wish more tools were this easy to leave.

0 views
Jim Nielsen 2 days ago

Computers and the Internet: A Two-Edged Sword

Dave Rupert articulated something in “Priority of idle hands” that’s been growing in my subconscious for years: I had a small, intrusive realization the other day that computers and the internet are probably bad for me […] This is hard to accept because a lot of my work, hobbies, education, entertainment, news, communities, and curiosities are all on the internet. I love the internet, it’s a big part of who I am today. Hard same. I love computers and the internet. Always have. I feel lucky to have grown up in the late 90s / early 00s, when I was exposed to the fascination, excitement, and imagination of PCs, the internet, and then “mobile”. What a time to make websites! Simultaneously, I’ve seen how computers and the internet are a two-edged sword for me: I’ve cut out many great opportunities with them, but I’ve also cut myself a lot (and continue to). Per Dave’s comments, I have this feeling somewhere inside of me that the internet and computers don’t necessarily align in support of my own, personal perspective of what a life well lived is for me. My excitement and draw to them also often leave me with a feeling of “I took that too far.” I still haven’t figured out a completely healthy balance (but I’m also doing ok). Dave comes up with a priority of constituencies to deal with his own realization. I like his. Might steal it. But I also think I need to adapt it, make it my own — though I don’t know what that looks like yet. To be honest, I don’t think I was ready to confront any of this, but reading Dave’s blog forced it out of my subconscious and into the open, so now I gotta deal. Thanks Dave. Reply via: Email · Mastodon · Bluesky

0 views
Manuel Moreale 2 days ago

Dominik Schwind

This week on the People and Blogs series we have an interview with Dominik Schwind, whose blog can be found at lostfocus.de. Tired of RSS? Read this in your browser or sign up for the newsletter. People and Blogs is supported by the "One a Month" club members. If you enjoy P&B, consider becoming one for as little as 1 dollar a month. My name is Dominik Schwind and I'm from Lörrach, a small town on the German side of the tri-border area with Switzerland and France. I've been a web developer for a really long time now, mostly server-side and just occasionally dabbling in what is showing up in the browser. Annoyingly, that's a hobby that I turned into work, so I guess that's ruined now. (Which doesn't stop me, though: I have too many half-finished side-project websites and apps to count.) Besides that I also really like to take photos, and after a few years of being frozen in place I started to travel again, which is always nice. I do like watching motorsports of almost all types, I can easily get sucked into computer games like Factorio, and I like to listen to podcasts, top of the list being the Omnibus Project, Do Go On and Roderick on the Line. I've had a website since before I had internet access - some computer game I had in the mid-90s had the manual included as HTML and I used it to learn how to make basic websites. The very first day my father came home with a modem, I signed up for GeoCities, and when I found a webhost that would allow me to run CGI scripts, I installed NewsPro, an early proto-blog system before blogging was even a thing. And while these early iterations of my website(s) are long gone, I haven't stopped since. The name came from an unease I started to feel in my final year of high school: once I finished school, I didn't know where to direct my energy and attention. That feeling hasn't really left since then. Mostly there is none - when I think of something that I want to communicate to someone, anyone, I try to put it online.
Quite often it ends up on Mastodon but I do try to put things on my blog, especially when I know it is something future me would appreciate. A few years ago I noticed that I had been neglecting my blog in favour of other ways of communicating, and I started a pact with a couple of friends to write weeknotes. We're in our fourth year now, which feels like an accomplishment. I try to write those posts first thing on a Sunday morning, if possible. I write most of my posts in Markdown in iA Writer, which is probably the most arrogant Markdown editing app in the world. But I paid for it at some point, so I better use it, too. I basically only need a computer and a place to sit and I'm fine. I've tried to find ways to blog from my phone but in the end, I prefer a proper keyboard and a bigger screen. While I never observed any difference in blogging creativity depending on the physical space, I actually quite enjoy writing in places other than my desk. This one is actually pretty simple: I run WordPress, currently on a DigitalOcean VM. One of the points on my long to-do list for my web stuff is to move it to Hetzner, which probably would only take an evening. And yet, I procrastinate. I've (more or less) jokingly said I'd replace WordPress with a CMS of my own making for years now, but at some point I resigned, even though my database is a mess. Probably not. Ever since the beginning I wrote for two audiences: my friends and future me. I'm really happy when someone else finds my blog and might turn into an internet friend, but I wouldn't know how else to achieve that other than what I've been doing for all these years now. .de domains are pretty affordable, so it is that plus the server, which is around €100 per year. The blog doesn't generate any revenue; in many ways it's "only" a journal.
When it comes to other bloggers, I'd say: go for it if you think your writing (or your photography or whatever it might be you're sharing on your website) is something that can be turned into revenue, one way or another. In many ways I'm a bit bummed that Flattr (or something similar) never really took off; I would happily use a service like that. Of course I need to mention my friends and fellow weeknoters: Martin (blogs in German) and Teymur (NSFW). Three of the people whose blogs I read have been interviewed here already: Ahn ( Interview ), Jeremy Keith ( Interview ) and Winnie Lim ( Interview ). Some other people whose blogs I read and who might be interesting people to answer your questions would be Jennifer Mills (who has the best take on weekly blog posts I have ever seen), Nikkin (he calls it a newsletter, but there is an RSS feed), Roy Tang and Ruben Schade. If you don't have one yet, go start a personal website! Don't take it too seriously; try things, and it can be a nice, meditative hobby that helps against the urge to doomscroll. Also, you never know: your kind of people might find it and connect with you. Now that you're done reading the interview, go check the blog and subscribe to the RSS feed. If you're looking for more content, go read one of the previous 130 interviews. People and Blogs is possible because kind people support it.

0 views
David Bushell 2 days ago

Croissant and CORS proxy update

Croissant is my home-cooked RSS reader. I wish it was only a progressive web app (PWA) but due to missing CORS headers, many feeds remain inaccessible. My RSS feeds have the header and so should yours! Blogs Are Back has a guide to enable CORS for your blog . Bypassing CORS requires some kind of proxy. Other readers use a custom browser extension. That is clever, but extensions can be dangerous. I decided on two solutions. I wrapped my PWA in a Tauri app . This is also dangerous if you don’t trust me. I also provided a server proxy for the PWA. A proxy has privacy concerns but is much safer. I’m sorry if anyone is using Croissant as a PWA because the proxy is now gone. If a feed has the correct CORS headers it will continue to work. Sorry for the abrupt change. That’s super lame, I know! To be honest I’ve lost a bit of enthusiasm for the project and I can’t maintain a proxy. Croissant was designed to be limited in scope to avoid too much burden. In hindsight the proxy was too ambitious. Technically, yes! But you’ll have to figure that out by yourself. If you have questions, such as where to find the code, how the code works etc, the answer is no. I don’t mean to be rude, I just don’t have any time! You’re welcome to ask for support but unless I can answer in 30 seconds I’ll have to decline. Croissant is feature complete! It does what I set out to achieve. I have fixed several minor bugs and tweaked a few styles. Until inspiration (or a bug) strikes I won’t do another update anytime soon. Maybe later in the year I’ll decide to overhaul it? Who can predict! Thanks for reading! Follow me on Mastodon and Bluesky . Subscribe to my Blog and Notes or Combined feeds.

0 views
Kev Quirk 2 days ago

Quick Clarification on Pure Comments

A couple of people have reached out to me asking if I can offer a version of Pure Comments for their site, as they don't run Pure Blog . I obviously didn't make this clear in the announcement , or on the (now updated) Pure Comments site. Pure Comments can be used on ANY website. It's just an embed script, just like Disqus (only with no bloat or tracking). So you just have to upload the files to wherever you want to host Pure Comments, then add the following embed code wherever you want comments to display (replacing the example domain): You can use Pure Comments on WordPress, Bear Blog, Jekyll, 11ty, Hugo, Micro.blog, Kirby, Grav, and even Pure Blog! Anywhere you can inject the little snippet of code above, Pure Comments will work. Thanks for reading this post via RSS. RSS is ace, and so are you. ❤️ You can reply to this post by email , or leave a comment .
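As an illustration only — the real snippet lives on the Pure Comments site — a script embed of this kind generally takes a shape like the following (the domain, file name, and element id here are hypothetical, not Pure Comments' actual markup):

```html
<!-- Hypothetical illustration — see the Pure Comments site for the real snippet -->
<div id="pure-comments"></div>
<script src="https://comments.example.com/embed.js" defer></script>
```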

0 views
Robin Moffatt 2 days ago

Interesting links - February 2026

Phew, what a month! February may be shorter, but that hasn’t diminished the wealth of truly interesting posts I’ve found to share with you this month.

0 views
fLaMEd fury 3 days ago

Making fLaMEd fury Glow Everywhere With an Eleventy Transform

What’s going on, Internet? I originally added a CSS class as a fun way to make my name (fLaMEd) and site name (fLaMEd fury) pop on the homepage. shellsharks gave me a shoutout for it, which inspired me to take it further and apply the effect site-wide. Site-wide was the original intent; the problem was that it was only being applied manually in a handful of places, and I kept forgetting to add it whenever I wrote a new post or created a new page. Classic. Instead of hunting through templates and markdown files, I’ve added an Eleventy HTML transform that automatically applies the glow up. I had Claude Code help me figure out the regex and the transform config. This allowed me to get this done before the kids came home. Don't @ me. The effect itself is a simple utility class using : Swap for whatever gradient custom property you have defined. The repeats the gradient across the text for a more dynamic flame effect. The transform lives in its own plugin file and gets registered in . It runs after Eleventy has rendered each page, tokenises the HTML by splitting on tags, tracks a skip-tag stack, and only replaces text in text nodes. Tags in the set, along with any span already carrying the class, push onto the stack. No replacement happens while the stack is non-empty, so link text, code examples, the page , and already-wrapped instances are all left alone. HTML attributes like and are never touched because they sit inside tag tokens, not text nodes. A single regex handles everything in one pass. The optional group matches " fury" (with space) or “fury” (without), so “flamed fury” and “flamedfury” (as it appears in the domain name) are both wrapped as a unit. The flag covers every capitalisation variant (“fLaMEd fury”, “Flamed Fury”, “FLAMED FURY”) with the original casing preserved in the output. This helps because I can be inconsistent with the styling at times. Export the plugin from wherever you manage your Eleventy plugins: Then register it in .
Register it before any HTML prettify transform so the spans are in place before reformatting runs: That’s it. Any mention of the site name (fLaMEd fury) in body text gets the gradient automatically, in posts, templates, data-driven content, wherever. Look out for the easter egg I’ve dropped in. Later. Hey, thanks for reading this post in your feed reader! Want to chat? Reply by email or add me on XMPP , or send a webmention . Check out the posts archive on the website.

0 views
Kev Quirk 4 days ago

Introducing Pure Comments (and Pure Commons)

A few weeks ago I introduced Pure Blog, a simple PHP-based blogging platform that I've since moved to, and I'm very happy with it. Once Pure Blog was done, I shifted my focus to start improving my commenting system. I ended that post by saying: At this point it's battle tested and working great. However, there's still some rough edges in the code, and security could definitely be improved. So over the next few weeks I'll be doing that, at which point I'll probably release it to the public so you too can have comments on your blog, if you want them. I've now finished that work and I'm ready to release Pure Comments to the world. 🎉 I'm really happy with how Pure Comments has turned out; it slots in perfectly with Pure Blog, which got me thinking about creating a broader suite of apps under the Pure umbrella. I've had Simple.css since 2022, and now I've added Pure Blog and Pure Comments to the fold. So I decided I needed an umbrella to house these disparate projects. That's where Pure Commons comes in. My vision for Pure Commons is to build it into a suite of simple, privacy-focussed tools that are easy to self-host, and have just what you need and no more. Well, concurrent with working on Pure Comments, I've also started building a fully managed version that people will be able to use for a small monthly fee. That's about 60% done at this point, so I should be releasing it over the next few weeks. In the future I plan to add a managed version of Pure Blog too, but that will be far more complex than a managed version of Pure Comments. So I think that will take some time. I'm also looking at creating Pure Guestbook, which will obviously be a simple, self-hosted guestbook in the same vein as the other Pure apps. This should be relatively simple to build, as a guestbook is basically a simplified commenting system, so most of the code already exists in Pure Comments. Looking beyond Pure Guestbook I have some other ideas, but you will have to wait and see...
In the meantime, please take a look at Pure Comments - download the source code, take it for a spin, and provide any feedback/bugs you find. If you have any ideas for apps I could add to the Pure Commons family, please get in touch. Thanks for reading this post via RSS. RSS is ace, and so are you. ❤️ You can reply to this post by email , or leave a comment .

0 views
Evan Schwartz 4 days ago

Great RSS Feeds That Are Too Noisy to Read Manually

Some RSS feeds are fantastic but far too noisy to add to most RSS readers directly. Without serious filtering, you'd get swamped with more posts than you could possibly read, while missing the hidden gems. I built Scour specifically because I wanted to find the great articles I was missing in noisy feeds like these, without feeling like I was drowning in unread posts. If you want to try it, you can add all of these sources in one click . But these feeds are worth knowing about regardless of what reader you use. Feed: https://hnrss.org/newest Thousands of posts are submitted to Hacker News each week. While the front page gives a sense of what matches the tech zeitgeist, there are plenty of interesting posts that get buried simply because of the randomness of who happens to be reading the Newest page and voting in the ~20 minutes after posts are submitted. (You can try searching posts that were submitted but never made the front page in this demo I built into the Scour docs.) Feed: https://feeds.pinboard.in/rss/recent/ Pinboard describes itself as "Social Bookmarking for Introverts". The recent page is a delightfully random collection of everything one of the 30,000+ users has bookmarked. Human curated, without curation actually being the goal. Feed: https://bearblog.dev/discover/feed/?newest=True Bear is "A privacy-first, no-nonsense, super-fast blogging platform". This post is published on it, and I'm a big fan. The Discovery feed gives a snapshot of blogs that users have upvoted on the platform. But, even better than that, the Most Recent feed gives you every post published on it. There are lots of great articles, and plenty of blogs that are just getting started. Feed: https://feedle.world/rss Feedle is a search engine for blogs and podcasts. You can search for words or phrases among their curated collection of blogs, and every search can become an RSS feed. An empty search will give you a feed of every post published by any one of their blogs. 
Feed: https://kagi.com/api/v1/smallweb/feed/ Kagi, the search engine, maintains an open source list of around 30,000 "small web" websites that are personal and non-commercial sites. Their Small Web browser lets you browse random posts one at a time. The RSS feed gives you every post published by any one of those websites. Feed: https://threadreaderapp.com/rss.xml Thread Reader is a Twitter/X bot that lets users "unroll" threads into an easier-to-read format. While getting RSS feeds out of Twitter/X content is notoriously difficult, Thread Reader provides an RSS feed of all threads that users have used them to unroll. Like the content on that platform, the threads are very hit-or-miss, but there are some gems in there. Not an RSS feed: https://minifeed.net/global Minifeed is a nice "curated blog reader and search engine". They have a Global page that shows every post published by one of the blogs they've indexed. While this isn't technically an RSS feed, I thought it deserved a mention. Note that Scour can add some websites that don't have RSS feeds. It treats pages with repeated structures that look like blogs (e.g. they have links, titles, and publish dates) as if they were RSS feeds. Minifeed's Global view is one such page, so you can also get every post published from any one of their collected blogs. Feeds galore: https://info.arxiv.org/help/rss.html arXiv has preprint academic articles for technical fields ranging from Computer Science and Mathematics to Physics and Quantitative Biology. Like many of the feeds listed above, most of the categories are very noisy. But, if you're into reading academic articles, there is also plenty of great new research hidden in the noise. Every field and sub-field has its own RSS feed. (You can browse them and subscribe on Scour here ). 
While reading my Scour feed, I'll often check which feeds an article I liked came from (see what this looks like here ), and I'm especially delighted when it comes from some source I had no idea existed. These types of noisy feeds are great ways of discovering new content and new blogs, but you definitely need some good filters to make use of them. I hope you'll give Scour a try! P.S. Scour makes all of the feeds it creates consumable as RSS/Atom/JSON feeds , so you can add your personalized feed or each of your interests-specific feeds to your favorite feed reader. Read more in this guide for RSS users .
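Scour's own ranking is more involved, but the basic idea of making a firehose feed usable can be sketched in a few lines — the keyword-matching approach here is my illustration, not Scour's algorithm:

```javascript
// Minimal keyword filter over parsed feed items: keep an item only if it
// mentions enough of your interests (an illustration, not Scour's method).
function filterItems(items, keywords, minHits = 1) {
  return items.filter((item) => {
    const text = `${item.title} ${item.summary || ""}`.toLowerCase();
    const hits = keywords.filter((k) => text.includes(k.toLowerCase())).length;
    return hits >= minHits;
  });
}

// Example: only the item matching an interest survives.
const kept = filterItems(
  [
    { title: "Async Rust patterns", summary: "futures and executors" },
    { title: "My cat's birthday", summary: "" },
  ],
  ["rust", "rss"]
);
```

Even something this crude turns a thousands-of-posts-per-week feed into a skimmable list; ranking by score instead of thresholding is the natural next step.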

0 views
Andre Garzia 5 days ago

Building your own blogging tools is a fun journey

I read a very interesting blog post today: ["So I've Been Thinking About Static Site Generators" by PolyWolf](https://wolfgirl.dev/blog/2026-02-23-so-ive-been-thinking-about-static-site-generators/) in which she goes in depth about her quest to create a 🚀BLAZING🔥 fast [static site generator](https://en.wikipedia.org/wiki/Static_site_generator). It was a very good read and I'm amazed at how fast she got things running. The [conversation about the post on Lobste.rs](https://lobste.rs/s/pgh4ss/so_i_ve_been_thinking_about_static_site) is also full of gems. Seeing so many people pouring energy into the specific problem of making SSGs very fast feels to me pretty much like modders getting the utmost performance out of their CPUs or car engines. It is fun to see how they are doing and how fast they can make clean and incremental builds go. > No one will ever complain about their SSG being too fast. As someone who used [a very slow SSG](https://docs.racket-lang.org/pollen/) for years and eventually migrated to [my own homegrown dynamic site](/2025/03/why-i-choose-lua-for-this-blog.html), I understand how frustrating slow site generation can be. In my own personal case, I decided to go with an old-school dynamic website using old 90s tech such as *cgi-bin* scripts in [Lua](https://lua.org). That eliminates the need for rebuilds of the site as it is generated at runtime. One criticism I keep hearing is about the scalability of my approach, people say: *"what if one of your posts goes viral and the site crashes?"*, well, that is ok for me cause if I get a cold or flu I crash too, why would I demand of my site something I don't demand of myself? Jokes aside, the problem of scalability can be dealt with by having some heuristic figuring out when a post is getting hot and then generating a static version of that post while keeping posts that are not hot dynamic. I'm not worried about it.
Instead of devoting my time to the engineering problem of making my SSG fast, I decided to put my energy elsewhere. A point that is often overlooked by many people developing blogging systems is the editing and posting workflow. They'll have really fast SSGs and then let the user figure out how to write the source files using whatever tool they want. Nothing wrong with that, but I want something better than launching $EDITOR to write my posts. In my case, what prevented me from posting more was not how long my SSG took to rebuild my site, but the friction between wanting to post and having the post written. What tools to use, how to handle file uploads, etc. So I began optimising and developing tools to help me with that. First, I [made a simple posting interface](/2025/01/creating-a-simple-posting-interface.html). This is not a part of the blogging system, it is an independent tool that shares the code base with the rest of the blog (just so I have my own CGI routines available). Internally it uses [micropub](https://www.w3.org/TR/micropub/) to publish. After that, I made it into a Firefox Add-on. The add-on is built for ad-hoc distribution and not shared on the store, it is just for me. Once installed, I get a sidebar that allows me to edit or post. ![Editor](/2026/02/img/c6e6afba-3141-4eca-a0f4-3425f7bea0d8.png) This is part of making my web browser of choice not only a web browser but a web making tool. I'm integrating all I need to write posts into the browser itself and thus diminishing the distance between browsing the web and making the web. I added features to the add-on to help me quote posts, get addresses as I browse them, and edit my own posts. It is all there right in the browser. ![quoting a post](/2026/02/img/b6d2abc1-c4ae-42c5-931f-d390fc9b793f.png) Like PolyWolf, I am passionate about my tools and blogging. I think we should take upon ourselves to build the tools we need if they're not available already (or just for the fun of it).
Even though I'm no longer on the SSG bandwagon, I'm deeply interested in blogging and would like to see more people experimenting with building their own tools, especially if their focus is on interesting UX and writing workflows.
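The posting interface above publishes via Micropub. Per the W3C spec, creating a simple note is just a form-encoded POST with `h=entry`; a minimal sketch (the endpoint and auth token are left to the caller):

```javascript
// Build the method/headers/body for a minimal Micropub "create note"
// request, per the W3C Micropub spec (form-encoded h=entry).
function buildMicropubPost(content) {
  return {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({ h: "entry", content }).toString(),
  };
}

// Usage (hypothetical endpoint and token):
// const req = buildMicropubPost("Hello from my browser add-on!");
// req.headers.Authorization = `Bearer ${token}`;
// fetch("https://example.com/micropub", req);
```

Real servers also expect an `Authorization: Bearer` token obtained via IndieAuth, which is why the sketch leaves authentication to the caller.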

0 views
Jim Nielsen 6 days ago

Making Icon Sets Easy With Web Origami

Over the years, I’ve used different icon sets on my blog. Right now I use Heroicons. The recommended way to use them is to copy/paste the source from the website directly into your HTML. It’s a pretty straightforward process:

Go to the website
Search for the icon you want
Click to “Copy SVG”
Go back to your IDE and paste it

If you’re using React or Vue, there are also npm packages you can install so you can import the icons as components. But I’m not using either of those frameworks, so I need the raw SVGs and there’s no for those so I have to manually grab the ones I want. In the past, my approach has been to copy the SVGs into individual files in my project, like: Then I have a “component” for reading those icons from disk which I use in my template files to inline the SVGs in my HTML. For example: It’s fine. It works. It’s a lot of node boilerplate to read files from disk. But changing icons is a bit of a pain. I have to find new SVGs, overwrite my existing ones, re-commit them to source control, etc. I suppose it would be nice if I could just and get the raw SVGs installed into my folder and then I could read those. But that has its own set of trade-offs. For example:

Names are different between icon packs, so when you switch, names don’t match. For example, an icon might be named in one pack and in another. So changing sets requires going through all your templates and updating references.
Icon packs are often quite large and you only need a subset. might install hundreds or even thousands of icons I don’t need.

So the project’s npm packages don’t provide the raw SVGs. The website does, but I want a more programmatic way to easily grab the icons I want. How can I do this? I’m using Web Origami for my blog which makes it easy to map icons I use in my templates to Heroicons hosted on Github. It doesn’t require an or a . Here’s a snippet of my file: As you can see, I name my icon (e.g. ) and then I point it to the SVG as hosted on Github via the Heroicons repo. Origami takes care of fetching the icons over the network and caching them in-memory. Beautiful, isn’t it? It kind of reminds me of import maps where you can map a bare module specifier to a URL (and Deno’s semi-abandoned HTTP imports which were beautiful in their own right). Origami makes file paths first-class citizens of the language — even “remote” file paths — so it’s very simple to create a single file that maps your icon names in a codebase to someone else’s icon names from a set, whether those are being installed on disk via npm or fetched over the internet. To simplify my example earlier, I can have a file like : Then I can reference those icons in my templates like this: Easy-peasy! And when I want to change icons, I simply update the entries in to point somewhere else — at a remote or local path. And if you really want to go the extra mile, you can use Origami’s caching feature: Rather than just caching the files in memory, this will cache them to a local folder like this: Which is really cool because now when I run my site locally I have a folder of SVG files cached locally that I can look at and explore (useful for debugging, etc.) This makes vendoring really easy if I want to put these in my project under source control. Just run the file once and boom, they’re on disk! There’s something really appealing to me about this. I think it’s because it feels very “webby” — akin to the same reasons I liked HTTP imports in Deno. You declare your dependencies with URLs, then they’re fetched over the network and become available to the rest of your code. No package manager middleman introducing extra complexity like versioning, transitive dependencies, install bloat, etc. What’s cool about Origami is that handling icons like this isn’t a “feature” of the language. It’s an outcome of the expressiveness of the language. In some frameworks, this kind of problem would require a special feature (that’s why you have special npm packages for implementations of Heroicons in frameworks like react and vue). But because of the way Origami is crafted as a tool, it sort of pushes you towards crafting solutions in the same manner as you would with web-based technologies (HTML/CSS/JS). It helps you speak “web platform” rather than some other abstraction on top of it. I like that. Reply via: Email · Mastodon · Bluesky

0 views
Dominik Weber 6 days ago

Lighthouse update February 23rd

During the past week a couple of nice improvements happened. **Finally implemented a 2-week trial without requiring a credit card** Every user now gets the trial by default. This is a nice improvement because, from what I can observe, in B2C most people want to test the product before entering their credit card. It was also a good step toward a better first product experience. **Finished the website-to-feed feature** The last remaining task was automated finding of items. When you enter a website, it automatically checks it and tries to find relevant items. If items are found, they are highlighted and the selectors added, without users having to do anything. **Updated blogroll editor** This is a small free tool on the Lighthouse website. It's for creating collections of feeds, websites, and newsletters. For a long time I wanted to create collections for specific areas, for example company engineering blogs, AI labs, JavaScript ecosystem, and so on. The reworked blogroll editor makes that much simpler to do. ## Next steps An issue that became important is feed URLs being behind bot protection. It doesn't really make sense to configure feeds that way, because feed URLs are designed to be accessed by bots, but in some cases it may be difficult to configure properly. This affects only a small number of feeds, but it's enough to be noticeable. It prevents people from moving to Lighthouse from other services. Consequently, one of the next tasks is to fix this. Besides that, the first user experience continues to be an ongoing area of improvement. I have a couple of ideas on how to make it better, and will continuously work on it.

0 views
Justin Duke 6 days ago

Golinks

If you've never encountered golinks before: they're short, memorable URLs that redirect to longer ones. Instead of telling a coworker "the dashboard is at ," you just say . Instead of bookmarking seventeen different Notion pages, you type or or and trust that you'll end up in the right place. I discovered them at Stripe, though I believe they were invented at Google, and I have not stopped using them since.

One thing leads to another. You decide that you no longer need Tailscale because the main reason you spun up Tailscale was for a project that ended up shipping — and therefore paying per-seat pricing for a service that you literally only use for golinks seems a bit silly and prohibitive. (Side note: I still really love Tailscale and think it's a great product, and I would be shocked if we aren't using it again by the end of the year.) But! And then you need to find a replacement for golinks, and you cannot get dragged back to golinks.io or Trotto, both of which are slow, cumbersome, and expensive besides. So what was I to do?

First, I looked at the open source options, none of which struck me as particularly compelling. I have a set of requirements that I don't think are esoteric, but others might:

- A reasonable price
- Persistence
- The ability to use golinks without a Chrome extension
- Performance

And nothing quite fit the bill. Then I had a revelation: you could use a default search engine as the routing proxy instead of or DNS interception like Tailscale's MagicDNS. For a week or two, I had this sitting within Django in our monorepo out of convenience — simply intercept any incoming search query, redirect it if something's already in the database, and if it's not but it looks like it could be a golink, send the user to an empty state prompting them to create one. But frankly, this was just slower than I wanted. Not for any interesting reason, but just the usual Python request-response lifecycle stuff.

I could, of course, invest in making it better and faster and was planning on doing so, but figured I would take one last trip around the internet to see if there was some other solution that I somehow missed. And that's when I discovered GoToTools . There is nothing really interesting to say about this product besides the fact that it is very good at what it does. Its author appears to have built it out of the same frustration that I had. And the highest compliment I can give it is that in a year where I've already cut down substantially on the number of services I pay for — in favor of those that I vend — I have absolutely no compunction about starting to use this. The pricing is extraordinary. The performance is really good. It works and is fast and lets me not spend time thinking about golinks and instead spend time using them.
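The default-search-engine trick described above needs surprisingly little logic. A hedged sketch of the routing (the slug rule, backing store, and URLs here are all invented for illustration):

```python
import re
from urllib.parse import quote

LINKS = {"dashboard": "https://example.com/dashboard"}  # hypothetical backing store

def resolve(query: str) -> str:
    slug = query.strip().lower()
    if slug in LINKS:
        return LINKS[slug]                    # known golink: redirect straight there
    if re.fullmatch(r"[a-z0-9-]+", slug):
        return f"/create?slug={slug}"         # looks like a golink: offer to create it
    return "https://duckduckgo.com/?q=" + quote(query)  # ordinary search: pass through

print(resolve("dashboard"))  # → https://example.com/dashboard
```

Point the browser's default search engine at an endpoint like this and a bare slug typed in the address bar becomes a redirect, while normal multi-word searches fall through to a real search engine untouched.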

0 views

Which web frameworks are most token-efficient for AI agents?

I benchmarked 19 web frameworks on how efficiently an AI coding agent can build and extend the same app. Minimal frameworks used up to 2.9x fewer tokens than full-featured ones.

0 views

My RSS Feed should now be working

Apparently my RSS Feed was not displaying full post content - just the title - forcing people to click through to the actual post on site. It should now be fixed, and full posts should be available in your feed reader of choice (you are using Elfeed in Emacs , right?) Thank you to katabex, Sneed1911, and cyberarboretum in the #technicalrenaissance IRC channel for bringing it to my attention. If anyone has any further issues, feel free to email/@me in the IRC. As always, God bless, and until next time. If you enjoyed this post, consider Supporting my work , Checking out my book , Working with me , or sending me an Email to tell me what you think.

0 views
Simon Willison 1 week ago

Adding TILs, releases, museums, tools and research to my blog

I've been wanting to add indications of my various other online activities to my blog for a while now. I just turned on a new feature I'm calling "beats" (after story beats, naming this was hard!) which adds five new types of content to my site, all corresponding to activity elsewhere. Here's what beats look like: Those three are from the 30th December 2025 archive page. Beats are little inline links with badges that fit into different content timeline views around my site, including the homepage, search and archive pages. There are currently five types of beats:

- Releases are GitHub releases of my many different open source projects, imported from this JSON file that was constructed by GitHub Actions.
- TILs are the posts from my TIL blog, imported using a SQL query over JSON and HTTP against the Datasette instance powering that site.
- Museums are new posts on my niche-museums.com blog, imported from this custom JSON feed.
- Tools are HTML and JavaScript tools I've vibe-coded on my tools.simonwillison.net site, as described in Useful patterns for building HTML tools.
- Research is for AI-generated research projects, hosted in my simonw/research repo and described in Code research projects with async coding agents like Claude Code and Codex.

That's five different custom integrations to pull in all of that data. The good news is that this kind of integration project is the kind of thing that coding agents really excel at. I knocked most of the feature out in a single morning while working in parallel on various other things. I didn't have a useful structured feed of my Research projects, and it didn't matter because I gave Claude Code a link to the raw Markdown README that lists them all and it spun up a parser regex . Since I'm responsible for both the source and the destination I'm fine with a brittle solution that would be too risky against a source that I don't control myself. Claude also handled all of the potentially tedious UI integration work with my site, making sure the new content worked on all of my different page types and was handled correctly by my faceted search engine . I actually prototyped the initial concept for beats in regular Claude - not Claude Code - taking advantage of the fact that it can clone public repos from GitHub these days. I started with: And then later in the brainstorming session said: After some iteration we got to this artifact mockup , which was enough to convince me that the concept had legs and was worth handing over to full Claude Code for web to implement.

If you want to see how the rest of the build played out, the most interesting PRs are Beats #592 , which implemented the core feature, and Add Museums Beat importer #595 , which added the Museums content type.
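The throwaway README parser mentioned above might look something like this sketch (the Markdown list-item format here is an assumption for illustration, not taken from the actual repo):

```python
import re

# Match Markdown list items of the form "- [Title](https://url) …"
LINK_ITEM = re.compile(r"^[-*]\s+\[(?P<title>[^\]]+)\]\((?P<url>[^)]+)\)")

def parse_readme(markdown: str) -> list[dict]:
    """Pull title/URL pairs out of a Markdown bullet list of links."""
    return [m.groupdict()
            for line in markdown.splitlines()
            if (m := LINK_ITEM.match(line.strip()))]

md = "# Projects\n- [Alpha](https://example.com/alpha) - notes\n- [Beta](https://example.com/beta)"
print(parse_readme(md))  # two dicts, each with "title" and "url" keys
```

Brittle by design, as the post says — but when the same person controls both the README and the importer, brittle is fine.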

0 views
David Bushell 1 week ago

Everything you never wanted to know about visually-hidden

Nobody asked for it but nevertheless, I present to you my definitive “it depends” tome on visually-hidden web content. I’ll probably make an amendment before you’ve finished reading. If you enjoy more questions than answers, buckle up! I’ll start with the original premise, even though I stray off-topic on tangents and never recover. I was nerd-sniped on Bluesky. Ana Tudor asked : Is there still any point to most styles in visually hidden classes in ’26? Any point to shrinking dimensions to and setting when to nothing via / reduces clickable area to nothing? And then no dimensions = no need for . @anatudor.bsky.social Ana proposed the following: Is this enough in 2026? As an occasional purveyor of the class myself, the question wriggled its way into my brain. I felt compelled to investigate the whole ordeal. Spoiler: I do not have a satisfactory yes-or-no answer, but I do have a wall of text! I went so deep down the rabbit hole I must start with a table of contents: I’m writing this based on the assumption that a class is considered acceptable for specific use cases . My final section on native visually-hidden addresses the bigger accessibility concerns. It’s not easy to say where this technique is appropriate. It is generally agreed to be OK but a symptom of — and not a fix for — other design issues. Appropriate use cases for are far fewer than you think. Skip to the history lesson if you’re familiar. , — there have been many variations on the class name. I’ve looked at popular implementations and compiled the kitchen sink version below. Please don’t copy this as a golden sample. It merely encompasses all I’ve seen. There are variations on the selector using pseudo-classes that allow for focus. Think “skip to main content” links, for example. What is the purpose of the class? The idea is to hide an element visually, but allow it to be discovered by assistive technology. Screen readers being the primary example. The element must be removed from layout flow. 
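For reference, the kitchen-sink class described here is usually some variation of the following — a representative reconstruction assembled from the properties this article walks through, not any one project's canonical code:

```css
/* Representative "kitchen sink" visually-hidden class —
   an illustrative reconstruction, not a golden sample */
.visually-hidden {
  position: absolute;    /* remove the element from layout flow */
  width: 1px;            /* near-zero size; 0px broke some screen readers */
  height: 1px;
  margin: -1px;          /* offset the 1px box */
  padding: 0;            /* strip anything that adds dimensions */
  border: 0;
  overflow: hidden;      /* no pixels escape the 1px box */
  clip: rect(0 0 0 0);   /* long-deprecated fallback */
  clip-path: inset(50%); /* modern replacement: crop to nothing */
  white-space: nowrap;   /* avoid text "smushing" inside the 1px box */
}
```

Each declaration earns its keep by plugging a different historical leak, which is exactly the archaeology the rest of this post digs into.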
It should leave no render artefacts and have no side effects. It does this whilst trying to avoid the bugs and quirks of web browsers. If this sounds and looks just a bit hacky to you, you have a high tolerance for hacks! It’s a massive hack! How was this normalised? We’ll find out later. I’ll whittle down the properties for those unfamiliar. Absolute positioning is vital to remove the element from layout flow. Otherwise the position of surrounding elements will be affected by its presence. This crops the visible area to nothing. remains as a fallback but has long been deprecated and is obsolete. All modern browsers support . These two properties remove styles that may add layout dimensions. This group effectively gives the element zero dimensions. There are reasons for instead of and negative margin that I’ll cover later. Another property to ensure no visible pixels are drawn. I’ve seen the newer value used but what difference that makes if any is unclear. This was added to address text wrapping inside the square (I’ll explain later). So basically we have and a load of properties that attempted to make the element invisible. We cannot use or or because those remove elements from the accessibility tree. So the big question remains: why must we still ‘zero’ the dimensions? Why is not sufficient? To make sense of this mystery I went back to the beginning. It was tricky to research this topic because older articles have been corrected with modern information. I recovered many details from the archives and mailing lists with the help of those involved. They’re cited along the way. Our journey begins November 2004. A draft document titled “CSS Techniques for WCAG 2.0” edited by Wendy Chisholm and Becky Gibson includes a technique for invisible labels. 
While it is usually best to include visual labels for all form controls, there are situations where a visual label is not needed due to the surrounding textual description of the control and/or the content the control contains. Users of screen readers, however, need each form control to be explicitly labeled so the intent of the control is well understood when navigated to directly. Creating Invisible labels for form elements ( history ) The following CSS was provided: Could this be the original class? My research jumped through decades but eventually I found an email thread “CSS and invisible labels for forms” on the W3C WAI mailing list. This was a month prior, a prelude to the WCAG draft. A different technique from Bob Easton was noted: The beauty of this technique is that it enables using as much text as we feel appropriate, and the elements we feel appropriate. Imagine placing instructive text about the accessibility features of the page off left (as well as on the site’s accessibility statement). Imagine interspersing “start of…” landmarks through a page with heading tags. Or, imagine parking full lists off left, lists of access keys, for example. Screen readers can easily collect all headings and read complete lists. Now, we have a made for screen reader technique that really works! Screenreader Visibility - Bob Easton (2003) Easton attributed both Choan Gálvez and Dave Shea for their contributions. In the same thread, Gez Lemon proposed to ensure that text doesn’t bleed into the display area . Following up, Becky Gibson shared a test case covering the ideas. Lemon later published an article “Invisible Form Prompts” about the WCAG plans which attracted plenty of commenters including Bob Easton. The resulting WCAG draft guideline discussed both the and ideas. Note that instead of using the nosize style described above, you could instead use position:absolute; and left:-200px; to position the label “offscreen”. This technique works with the screen readers as well.
Only position elements offscreen in the top or left direction, if you put an item off to the right or the bottom, many browsers will add scroll bars to allow the user to reach the content. Creating Invisible labels for form elements Two options were known and considered towards the end of 2004. Why not both? Indeed, it appears Paul Bohman on the WebAIM mailing list suggested such a combination in February 2004. Bohman even discovered possibly the first zero width bug. I originally recommended setting the height and width to 0 pixels. This works with JAWS and Home Page Reader. However, this does not work with Window Eyes. If you set the height and width to 1 pixel, then the technique works with all browsers and all three of the screen readers I tested. Re: Hiding text using CSS - Paul Bohman Later in May 2004, Bohman along with Shane Anderson published a paper on this technique. Citations within included Bob Easton and Tom Gilder . Aside note: other zero width bugs have been discovered since. Manuel Matuzović noted in 2023 that links in Safari were not focusable . The zero width story continues as recently as February 2026 (last week). In browse mode in web browsers, NVDA no longer treats controls with 0 width or height as invisible. This may make it possible to access previously inaccessible “screen reader only” content on some websites. NVDA 2026.1 Beta TWO now available - NV Access News Digging further into WebAIM’s email archive uncovered a 2003 thread in which Tom Gilder shared a class for skip navigation links . I found Gilder’s blog in the web archives introducing this technique. I thought I’d put down my “skip navigation” link method down in proper writing as people seem to like it (and it gives me something to write about!).
Try moving through the links on this page using the keyboard - the first link should magically appear from thin air and allow you to quickly jump to the blog tools, which modern/visual/graphical/CSS-enabled browsers (someone really needs to come up with an acronym for that) should display to the left of the content. Skip-a-dee-doo-dah - Tom Gilder Gilder’s post links to a Dave Shea post which in turn mentions the 2002 book “Building Accessible Websites” by Joe Clark . Chapter eight discusses the necessity of a “skip navigation” link due to table-based layout but advises: Keep them visible! Well-intentioned developers who already use page anchors to skip navigation will go to the trouble to set the anchor text in the tiniest possible font in the same colour as the background, rendering it invisible to graphical browsers (unless you happen to pass the mouse over it and notice the cursor shape change). Building Accessible Websites - 08. Navigation - Joe Clark Clark expressed frustration over common tricks like the invisible pixel. It’s clear no class existed when this was written. Choan Gálvez informed me that Eric Meyer would have the css-discuss mailing list. Eric kindly searched the backups but didn’t find any earlier discussion. However, Eric did find a thread on the W3C mailing list from 1999 in which Ian Jacobs (IBM) discusses the accessibility of “skip navigation” links. The desire to visually hide “skip navigation” links was likely the main precursor to the early techniques. In fact, Bob Easton said as much: As we move from tag soup to CSS governed design, we throw out the layout tables and we throw out the spacer images. Great! It feels wonderful to do that kind of house cleaning. So, what do we do with those “skip navigation” links that used to be attached to the invisible spacer images? Screenreader Visibility - Bob Easton (2003) I had originally missed that in my excitement seeing the class. I reckon we’ve reached the source of the class. 
At least conceptually. Technically, the class emerged from several ideas, rather than a “eureka” moment. Perhaps more can be gleaned from other CSS techniques such as the desire to improve accessibility of CSS image replacement . Bob Easton retired in 2008 after a 40 year career at IBM. I reached out to Bob who was surprised to learn this technique was still a topic today † . Bob emphasised the fact that it was always a clumsy workaround and something CSS probably wasn’t intended to accommodate . I’ll share more of Bob’s thoughts later. † I might have overdone the enthusiasm Let’s take an intermission! My contact page is where you can send corrections by the way :) The class stabilised for a period. Visit 2006 in the Wayback Machine to see WebAIM’s guide to invisible content — Paul Bohman’s version is still recommended. Moving forward to 2011, I found Jonathan Snook discussing the “clip method”. Snook leads us to Drupal developer Jeff Burnz the previous year. […] we still have the big problem of the page “jump” issue if this is applied to a focusable element, such as a link, like skip navigation links. WebAim and a few others endorse using the LEFT property instead of TOP, but this no go for Drupal because of major pain-in-the-butt issues with RTL. In early May 2010 I was getting pretty frustrated with this issue so I pulled out a big HTML reference and started scanning through it for any, and I mean ANY property I might have overlooked that could possible be used to solve this thorny issue. It was then I recalled using clip on a recent project so I looked up its values and yes, it can have 0 as a value. Using CSS clip as an Accessible Method of Hiding Content - Jeff Burnz It would seem Burnz discovered the technique independently and was probably the first to write about it. Burnz also notes a right-to-left (RTL) issue. This could explain why pushing content off-screen fell out of fashion.
2010 also saw the arrival of HTML5 Boilerplate along with issue #194 in which Jonathan Neal plays a key role in the discussion and comments: If we want to correct for every seemingly-reasonable possibility of overflow in every browser then we may want to consider [code below] This was their final decision. I’ve removed for clarity. This is very close to what we have now, no surprise since HTML5 Boilerplate was extremely popular. I’m inclined to conclude that the additional properties are really just there for the “possibility” of pixels escaping containment as much as fixing any identified problem. Thierry Koblentz covered the state of affairs in 2012 noting that: Webkit, Opera and to some extent IE do not play ball with [clip] . Koblentz prophesies: I wrote the declarations in the previous rule in a particular order because if one day clip works as everyone would expect, then we could drop all declarations after clip, and go back to the original Clip your hidden content for better accessibility - Thierry Koblentz Sound familiar? With those browsers obsolete, and if behaves itself, can the other properties be removed? Well we have 14 years of new bugs features to consider first. In 2016, J. Renée Beach published: Beware smushed off-screen accessible text . This appears to be the origin of (as demonstrated by Vispero .) Over a few sessions, Matt mentioned that the string of text “Show more reactions” was being smushed together and read as “Showmorereactions”. Beach’s class did not include the kitchen sink. The addition of became standard alongside everything else. Aside note: the origin of remains elusive. One Bootstrap issue shows it was rediscovered in 2018 to fix a browser bug. However, another HTML5 Boilerplate issue dated 2017 suggests negative margin broke reading order. Josh Comeau shared a React component in 2024 without margin. One of many examples showing that it has come in and out of fashion. We started with WCAG so let’s end there.
The latest WCAG technique for “Using CSS to hide a portion of the link text” provides the following code. Circa 2020 the property was added as browser support increased and became deprecated. An obvious change I’m not sure warrants investigation (although someone had to be first!) That brings us back to what we have today. Are you still with me? As we’ve seen, many of the properties were thrown in for good measure. They exist to ensure absolutely no pixels are painted. They were adapted over the years to avoid various bugs, quirks, and edge cases. How many such decisions are now irrelevant? This is a classic Chesterton’s Fence scenario. Do not remove a fence until you know why it was put up in the first place. Well we kinda know why but the specifics are practically folklore at this point. Despite all that research, can we say for sure if any “why” is still relevant? Back to Ana Tudor’s suggestion. How do we know for sure? The only way is extensive testing. Unfortunately, I have neither the time nor skill to perform that adequately here. There is at least one concern with the code above: Curtis Wilcox noted that in Safari the focus ring behaves differently. Other minimum viable ideas have been presented before. Scott O’Hara proposed a different two-liner using . JAWS, Narrator, NVDA with Edge all seem to behave just fine. As do Firefox with JAWS and NVDA, and Safari on macOS with VoiceOver. Seems also fine with iOS VO+Safari and Android TalkBack with Firefox or Chrome. In none of these cases do we get the odd focus rings that have occurred with other visually hidden styles, as the content is scaled down to zero. Also because not hacked into a 1px by 1px box, there’s no text wrapping occurring, so no need to fix that issue. transform scale(0) to visually hide content - Scott O’Hara Sounds promising! It turns out Katrin Kampfrath had explored both minimum viable classes a couple of years ago, testing them against the traditional class.
I am missing the experience and moreover actual user feedback, however, i prefer the screen reader read cursor to stay roughly in the document flow. There are screen reader users who can see. I suppose, a jumping read cursor is a bit like a shifting layout. Exploring the visually-hidden css - Katrin Kampfrath Kampfrath’s limited testing found the read cursor size differs for each class. The technique was favoured but caution is given. A few more years ago, Kitty Giraudel tested several ideas concluding that was still the most accessible for specific text use. This technique should only be used to mask text. In other words, there shouldn’t be any focusable element inside the hidden element. This could lead to annoying behaviours, like scrolling to an invisible element. Hiding content responsibly - Kitty Giraudel Zell Liew proposed a different idea in 2019. Many developers voiced their opinions, concerns, and experiments over at Twitter. I wanted to share with you what I consolidated and learned. A new (and easy) way to hide content accessibly - Zell Liew Liew’s idea was unfortunately torn asunder. Although there are cases like inclusively hiding checkboxes where near-zero opacity is more accessible. I’ve started to go back in time again! I’m also starting to question whether this class is a good idea. Unless we are capable and prepared to thoroughly test across every combination of browser and assistive technology — and keep that information updated — it’s impossible to recommend anything. This is impossible for developers! Why can’t browser vendors solve this natively? Once you’ve written 3000 words on a twenty year old CSS hack you start to question why it hasn’t been baked into web standards by now. Ben Myers wrote “The Web Needs a Native .visually-hidden” proposing ideas from HTML attributes to CSS properties. Scott O’Hara responded noting larger accessibility issues that are not so easily handled. 
O’Hara concludes: Introducing a native mechanism to save developers the trouble of having to use a wildly available CSS ruleset doesn’t solve any of those underlying issues. It just further pushes them under the rug. Visually hidden content is a hack that needs to be resolved, not enshrined - Scott O’Hara Sara Soueidan had floated the topic to the CSS working group back in 2016. Soueidan closed the issue in 2025, coming to a similar conclusion. I’ve been teaching accessibility for a little less than a decade now and if there’s one thing I learned is that developers will resort to using utility to do things that are more often than not just bad design decisions. Yes, there are valid and important use cases. But I agree with all of @scottaohara’s points, and most importantly I agree that we need to fix the underlying issues instead of standardizing a technique that is guaranteed to be overused and misused even more once it gets easier to use. csswg-drafts comment - Sara Soueidan Adrian Roselli has a blog post listing priorities for assigning an accessible name to a control. Like O’Hara and Soueidan, Roselli recognises there is no silver bullet. Hidden text is also used too casually to provide information for just screen reader users, creating overly-verbose content . For sighted screen reader users , it can be a frustrating experience to not be able to find what the screen reader is speaking, potentially causing the user to get lost on the page while visually hunting for it. My Priority of Methods for Labeling a Control - Adrian Roselli In short, many believe that a native visually-hidden would do more harm than good. The use-cases are far more nuanced and context sensitive than developers realise. It’s often a half-fix for a problem that can be avoided with better design. I’m torn on whether I agree that it’s ultimately a bad idea. 
A native version would give software an opportunity to understand the developer’s intent and define how “visually hidden” works in practice. It would be a pragmatic addition. The technique has persisted for over two decades and is still mentioned by WCAG. Yet it remains hacks upon hacks! How has it survived for so long? Is that a failure of developers, or a failure of the web platform? The web is overrun with inaccessible div soup . That is inexcusable. For the rest of us who care about accessibility — who try our best — I can’t help but feel the web platform has let us down. We shouldn’t be perilously navigating code hacks, conflicting advice, and half-supported standards. We need more energy and money dedicated to accessibility. Not all problems can be solved with money. But what of the thousands of unpaid hours, whether volunteered or solicited, from those seeking to improve the web? I risk spiralling into a rant about browser vendors’ financial incentives, so let’s wrap up! I’ll end by quoting Bob Easton from our email conversation: From my early days in web development, I came to the belief that semantic HTML, combined with faultless keyboard navigation were the essentials for blind users. Experience with screen reader users bears that out. Where they might occasionally get tripped up is due to developers who are more interested in appearance than good structural practices. The use cases for hidden content are very few, such as hidden information about where a search field is, when an appearance-centric developer decided to present a search field with no visual label, just a cute unlabeled image of a magnifying glass. […] The people promoting hidden information are either deficient in using good structural practices, or not experienced with tools used by people they want to help. Bob ended with: You can’t go wrong with well crafted, semantically accurate structure. Ain’t that the truth. Thanks for reading! Follow me on Mastodon and Bluesky . 
Subscribe to my Blog and Notes or Combined feeds.

6 views
ava's blog 1 week ago

[event] my bearblog creations

As part of the Grizzly Gazette event , I thought I'd make some buttons, a forum signature, and a little joke graphic. Feel free to use. Reply via email Published 17 Feb, 2026

0 views
Jimmy Miller 1 week ago

Untapped Way to Learn a Codebase: Build a Visualizer

The biggest shock of my early career was just how much code I needed to read that others wrote. I had never dealt with this. I had a hard enough time understanding my own code. The idea of understanding hundreds of thousands or even millions of lines of code written by countless other people scared me. What I quickly learned is that you don't have to understand a codebase in its entirety to be effective in it. But just saying that is not super helpful. So rather than tell, I want to show. In this post, I'm going to walk you through how I learn an unfamiliar codebase. But I'll admit, this isn't precisely how I would do it today. After years of working on codebases, I've learned quite a lot of shortcuts. Things that come with experience that just don't translate for other people. So what I'm going to present is a reconstruction. I want to show bits and parts of how I go from knowing very little to gaining knowledge and ultimately, asking the right questions. To do this, I will use just a few techniques: I want to do this on a real codebase, so I've chosen one whose purpose and scope I'm generally familiar with. But one that I've never contributed to or read, Next.js . But I've chosen to be a bit more particular than that. I'm particularly interested in learning more about the Rust bundler setup (turbopack) that Next.js has been building out. So that's where we will concentrate our time. Trying to learn a codebase is distinctly different from trying to simply fix a bug or add a feature. In this post, we may use bugs, talk about features, make changes, etc. But we are not trying to contribute to the codebase, yet. Instead, we are trying to get our mind around how the codebase generally works. We aren't concerned with things like coding standards, common practices, or the development roadmap. We aren't even concerned with correctness. The changes we make are about seeing how the codebase responds so we can make sense of it. 
I find starting at to be almost always completely unhelpful. From main, yes, we have a single entry point, but now we are asking ourselves to understand the whole. But things actually get worse when dealing with a large codebase like this. There isn't even one main. Which main would we choose? So instead, let's start by figuring out what our library even consists of. A couple of things to note. We have packages, crates, turbo, and turbopack. Crates are relevant here because we know we are interested in some of the Rust code, but we are also interested in turbopack in particular. A quick look at these shows that turbo, packages, and crates are probably not our target. Why do I say that? Because turbopack has its own crates folder. So there are 54 crates under turbopack.... This is beginning to feel a bit daunting. So why don't we take a step back and find a better starting point? One starting point I find particularly useful is a bug report . I found this by simply looking at recently opened issues. When I first found it, it had no comments on it. In fact, I find bug reports with only reproducing instructions to be the most useful. Remember, we are trying to learn, not fix a bug. So I spent a little time looking at the bug report. It is fairly clear. It does indeed reproduce. But it has a lot of code. So, as is often the case, it is useful to reduce it to the minimal case. So that's what I did. Here is the important code and the problem we are using to learn from. MyEnum here is dead code. It should not show up in our final bundle. But when we do and look for it, we get: If we instead do The code is completely gone from our build. So now we have our bug. But remember. Our goal here is not to fix the bug. But to understand the code. So our goal is going to be to use this little mini problem to understand what code is involved in this bug. To understand the different ways we could fix this bug. To understand why this bug happened in the first place. 
To understand some small slice of the turbopack codebase. So at each junction, we are going to resist the urge to simply find the offending code. We are going to take detours. We are going to ask questions. We hope that from the start of this process to the end, we no longer think of the code involved in this action as a black box. But we will intentionally leave ourselves with open questions. As I write these words, I have no idea where this will take us. I have not prepared this ahead of time. I am not telling you a fake tale from a codebase I already know. Yes, I will simplify and skip parts. But you will come along with me. The first step for understanding any project is getting some part of it running. Well, I say that, but in my day job, I've been at companies where this is a multi-day or week-long effort. Sometimes because of a lack of access, sometimes from unclear instructions. If you find yourself in that situation, you now have a new task: understand it well enough to get it to build. Well, unfortunately, that is the scenario we find ourselves in. I can't think of a single one of these endeavors I've gone on to learn a codebase that didn't involve a completely undesirable, momentum-stopping side quest. For this one, it was as soon as I tried to make changes to the turbopack Rust code and get it working in my test project. There are instructions on how to do this . In fact, we even get an explanation on why it is a bit weird. Since Turbopack doesn't support symlinks when pointing outside of the workspace directory, it can be difficult to develop against a local Next.js version. Neither nor imports quite cut it. An alternative is to pack the Next.js version you want to test into a tarball and add it to the pnpm overrides of your test application. The following script will do it for you: Okay, straightforward enough. I start by finding somewhere in the turbopack repo that I think will be called more than once, and I add the following: Yes. 
Very scientific, I know. But I've found this to be a rather effective method of making sure my changes are picked up. So I do that, make sure I've built and done the necessary things, and I run

Then that script tells me to add some overrides and dependencies to my test project. I go to build my project and HERERE!!!!!!! does not show up at all...

I will save you the fun details of looking through this system, but I think it's important to mention a few things. First, being a dependency immediately stood out to me. In my day job, I maintain a fork of swc (WHY???) for some custom stuff. I definitely won't pretend to be an expert on swc, but I know it's written in Rust, and I know it's a native dependency. The changes I'm making are to a native dependency, yet I see no mention at all of turbopack. In fact, if I search in my test project, I get the following:

So I have a sneaking suspicion my turbopack code should be in that tar. So let's look at the tar.

Ummm. That seems a bit small... Let's look at what's inside.

Okay, I think we found our problem. There's really nothing in this at all. Definitely no native code. After lots of searching, the culprit came down to:

In our case, the input came from this file and f was . Unfortunately, this little set + regex setup causes it to be filtered out. Why? Because it doesn't match the regex. This regex is looking for a / with characters after it. We have none. So, since we are already in the set (we just added ourselves), we filter ourselves out.

How do we solve this problem? There are countless answers, really. I had Claude whip me up one without regex, but my gut says the sorting lets us do this much simpler. After this fix, let's look at the tar now:

Much better. After this change, we can finally see HERERE!!!!!!! a lot.

Update: As I wrote this article, someone fixed this in a bit of a different way, keeping the regex and just changing to . A fairly practical decision.

Okay, we now have something we can test.
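As an aside, the shape of that filtering bug can be sketched like this (a hypothetical reconstruction with invented names and an invented regex, not the actual script from the repo). The intent is to drop entries already covered by a previously seen directory; the bug is that a bare directory entry has no characters after its trailing slash, fails the parent match, falls back to itself, and filters itself out:

```javascript
// Hypothetical reconstruction of the filter bug (names/regex invented).
function buggyFilter(files) {
  const seen = new Set();
  return files.filter((f) => {
    seen.add(f);
    // Find f's parent: a '/' with at least one character after it.
    // "native/" has no characters after its '/', so the match fails and
    // `parent` falls back to f itself -- which we just added to the set.
    const m = f.match(/^(.*\/).+$/);
    const parent = m ? m[1] : f;
    return !seen.has(parent);
  });
}

// One possible fix: only drop f when it has a *proper* parent directory
// that is already in the set.
function fixedFilter(files) {
  const seen = new Set();
  return files.filter((f) => {
    seen.add(f);
    const m = f.match(/^(.*\/).+$/);
    return m === null || !seen.has(m[1]);
  });
}
```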
But where do we even begin? This is one reason we chose this bug: it gives a few avenues to go down.

First, the report says that these enums are not being "tree-shaken." Is that the right term? One thing I've learned from experience is to never assume that the end user is using terms in the same manner as the codebase. So this can be a starting point, but it might be wrong. With some searching around, we can actually see that there is a configuration for turning turbopackTreeShaking on or off. It was actually a bit hard to find exactly where the default for this was set; it isn't documented. So let's just enable it and see what we get.

Well, I think we figured out that the default is off. So one option is that we never "tree shake" anything. But that seems wrong. At this point, I looked into tree shaking a bit in the codebase, and while I started to understand a few things, I've been at this point before. Sometimes it is good to go deep. But how much of this codebase do I really understand? If tree shaking is our culprit (which seems unlikely at this point), it might be good to know how the code gets there. Here, we of course found a bug. But it is an experimental feature. Maybe we can come back and fix it? Maybe we can file a bug? Maybe this code just isn't at all ready for primetime. It's hard to know as an outsider.

Our "search around the codebase" strategy failed. So now we try a different tactic. We know a couple of things:

- Our utilities.ts file is read and parsed.
- It ends up in a file under a "chunks" directory.

These give us two points we can use to try to trace what happens. Let's start with parsing. Luckily, here it is straightforward: when we look at this code, we can see that swc does the heavy lifting. First, it parses it into a TypeScript AST, then applies transforms to turn it into JavaScript. At this point, we don't write to a string, but if you edit the code and use an emitter, you see this:

Now, to find where we write the chunks. In most programs, this would be pretty easy.
Typically, there is a linear flow somewhere that just shows you the steps. Or, if you can't piece one together, you can simply set a breakpoint and follow the flow. But Turbopack is a rather advanced system involving async Rust (more on this later). So, in keeping with the tradition of not relying too heavily on my prior knowledge, I did the tried and true thing: log random things until they look relevant. And what I found made me realize that logging was not going to be enough. It was time for my tried and true learning technique: visualization.

Ever since my first job, I have been building custom tools to visualize codebases. Perhaps this is due to my aphantasia; I'm not really sure. Some of these visualizers make their way into general use for me, but more often than not, they are a means of understanding. When I applied for a job at Shopify working on YJIT, I built a simple visualizer but never got around to making it more useful than a learning tool. The same thing is true here, but this time, thanks to AI, it looks a bit more professional.

This time, we want a bit more structure than we'd get from a simple print. 1 We are trying to get events out that carry a bunch of information. Mostly, we are interested in files and their contents over time. Looking through the codebase, we find that one key abstraction is an ident; this will help us identify files. We will simply find points that seem interesting, make a corresponding event, make sure it has idents associated with it, and send that event over a WebSocket. Then, with that raw information, we can have our visualizer stitch together what exactly happens.

If we take a look, we can see our code step through the process, and ultimately end up in the bundle despite not being used. If you notice, though, between steps 3 and 4, our code changed a bit. We lost this PURE annotation. Why? Luckily, we had tried to capture as much context as we could.
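Concretely, the raw events can look something like this (a sketch; every field name here is invented, not turbopack's API or the real tool's wire format). Each interesting point in the pipeline emits one event over the WebSocket, and the browser side stitches a per-file timeline together by ident:

```javascript
// Sketch of the visualizer's raw events (all field names invented).
function makeEvent(stage, ident, content, context = {}) {
  return {
    stage,   // e.g. "parse" or "chunk"
    ident,   // identifies the file/module this event concerns
    content, // snapshot of the code at this point
    context, // extra flags captured at the emit site
  };
}

// Group events by ident so the UI can render each file's history in order.
function stitch(events) {
  const byIdent = new Map();
  for (const e of events) {
    if (!byIdent.has(e.ident)) byIdent.set(e.ident, []);
    byIdent.get(e.ident).push(e);
  }
  return byIdent;
}
```

Capturing context flags alongside each code snapshot, rather than just the code itself, is what pays off next.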
We can see that a boolean "Scope Hoisting" has been enabled. Could that be related? If we turn it off, we instead see:

Our pure annotation is kept around, and as a result, our code is eliminated.

If we take a step back, this can make sense. Something during the parse step is creating a closure around our enum code, and when it does so, it is marking that closure as "pure," meaning it has no side effects. Later, because this annotation is dropped, the minifier doesn't know that it can just get rid of the closure. As I've been trying to find time to write this up, people on the bug report have found this on their own as well.

So we've found the behavior of the bug. Now we need to understand why it is happening. Remember, we are fixing a bug as a means of understanding the software, not just to fix a bug.

So what exactly is going on? Well, we are trying to stitch together two libraries, and software bugs are far more likely to occur on these seams. In this case, after reading the code for a while, the problem becomes apparent. SWC parses our code and turns it into an AST. But if you take a look at an AST, comments are not part of it. So instead, SWC stores comments off in a hashmap where it can look them up by byte position. For each node in the AST, it can check whether there is a comment attached.

But for the PURE comment, it doesn't actually need to look the comment up. It is not a unique comment that was in the source code; it is a pre-known meta comment. So rather than store each instance in the map, SWC uses a special value.

Now, this encoding scheme causes some problems for turbopack. Turbopack does not act on a single file; it acts across many files. In fact, for scope hoisting, we are trying to take files across modules and condense them into a single scope. So now, when we encounter one of these BytePos encodings, how do we know what module it belongs to?
The obvious answer to many might be to simply make a tuple like , and while that certainly works, it comes with tradeoffs. One of these is memory footprint. I didn't find an exact reason, but given turbopack's focus on performance, I'd imagine this is one of the main motivations. Instead, we get a fairly clever encoding of module and byte position into a single BytePos, aka a u32. I won't get into the details of the representation here; it involves some conditional stuff. But needless to say, now that we are going from something focused on one file to something focused on many, and trying to smuggle this module_id into our BytePos, we ended up missing one detail: PURE. Now our pure value is being interpreted as some module at some very high position instead of the proper bytes.

To fix this bug, I found the minimal fix was simply the following:

With this, our enum is properly marked as PURE and disappears from the output!

Now remember, we aren't trying to make a bug fix; we are trying to understand the codebase. Is this the right fix? I'm not sure. I looked around the codebase, and there are a number of other swc sentinel values that I think also need to be handled (PLACEHOLDER and SYNTHESIZED). There is also the decoding path. For dummy, the decoding path panics. Should we do the same? Should we be handling PURE at a higher level, where it never even goes through the encoder?

Update: As I was writing this, someone else proposed a fix. I did see that others had started to figure out the things I had determined from my investigation, but I was not confident enough that it was the right fix yet. In fact, the PR differs a bit from my local fix. It does handle the other sentinel, but at a different layer. It also chooses to decode with module 0, which felt a bit wrong to me. But again, these are decisions that people who work on this codebase long-term are better equipped to make than me.
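The general shape of the encoding, and of the fix, can be sketched like this (the constants, the bit split, and the sentinel values are all invented here; SWC's and turbopack's real representations differ):

```javascript
// Invented sketch: pack (moduleId, pos) into one u32, but pass sentinel
// positions (like SWC's PURE marker) through untouched. Forgetting the
// sentinel check is the shape of the bug described above: PURE gets
// reinterpreted as a huge module id plus a bogus offset.
const PURE = 0xfffffffd;          // hypothetical sentinel, not SWC's real value
const SENTINEL_MIN = 0xfffffff0;  // hypothetical start of the sentinel range
const POS_BITS = 24;              // invented split: 8 bits module, 24 bits pos

function encode(moduleId, pos) {
  if (pos >= SENTINEL_MIN) return pos; // the fix: sentinels pass through
  return ((moduleId << POS_BITS) | pos) >>> 0;
}

function decode(enc) {
  if (enc >= SENTINEL_MIN) return { sentinel: enc };
  return { moduleId: enc >>> POS_BITS, pos: enc & ((1 << POS_BITS) - 1) };
}
```

Without the sentinel branch in encode, PURE would come out as moduleId 255 at some enormous position, which is exactly the "some module at some very high position" failure mode.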
I must admit that simply fixing this bug didn't quite help me understand the codebase. Not just because it is a fairly good size, but because I couldn't see the fundamental unit everything is composed of.

In some of the code snippets above, you will see types that mention Vc. This stands for ValueCell. There are a number of ways to try to understand these: you can check out the docs for turbo engine for some details, or you can read the high-level overview that mostly skips the implementation details. You can think of these cells like the cells in a spreadsheet. They provide a level of incremental computation: when the value of some cell updates, we can invalidate stuff. Unlike a spreadsheet, though, the turbo engine is lazy.

I've worked with these kinds of systems before. Some are very explicitly modeled after spreadsheets; others are based on rete networks or propagators. I am also immediately reminded of salsa from the rust-analyzer team. I've also worked with big, complicated non-computational graphs. But even with that background, I know myself: I've never been able to really understand these things until I can visualize them. And while a general network visualizer can be useful (and might actually be quite useful if I used the aggregate graph), I've found that for my understanding, I vastly prefer starting small and exploring out the edges of the graph. So that's what I did.

But before we get to that visualization, I want to highlight something fantastic in the implementation: a central place for controlling a ton of the decisions that go into this system. The backend here lets us decide so many things about how the execution of our tasks will run. Because of this, we have one place where we can insert a ton of tooling and begin to understand how this system works. As before, we are going to send things over a WebSocket. But unlike last time, our communication will be two-way: we are going to be controlling how the tasks run.
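To make the spreadsheet-but-lazy idea concrete, here is a toy cell (my illustration only; nothing like the real Vc API). Invalidation just marks the cell dirty; the recomputation doesn't happen until the next time someone actually reads the value:

```javascript
// Toy lazy cell: invalidate() marks dirty, compute() only reruns on read.
// Real Vc also tracks dependencies between cells so invalidation can
// propagate through the graph; this sketch shows just the laziness.
function lazyCell(compute) {
  let cached;
  let dirty = true;
  return {
    get() {
      if (dirty) {
        cached = compute();
        dirty = false;
      }
      return cached;
    },
    invalidate() {
      dirty = true;
    },
  };
}
```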
In my little test project, I edited a file, and my visualizer displayed the following.

Admittedly, it is a bit janky, but there are some nice features. First, on the left, we can see all the pending tasks. In this case, something has marked our file read as dirty, so we are trying to read the file. We can see the contents of a cell that this task has, and we can see the dependents of this task. Here is what it looks like once we release that task to run.

We can now see that 3 parse tasks have kicked off. Why 3? I'll be honest, I haven't looked into it. But a good visualization is about provoking questions, not only answering them. Did I get my visualization wrong because I misunderstood something about the system? Are there three different subsystems that want to parse, and we want to do them in parallel? Have we just accidentally triggered more parses than we should? This is precisely what we want out of a visualizer. Is it perfect? No. Would I ship this as a general visualizer? No. Am I happy with the style? Not in the least. But already it enables a look into the project I couldn't see before.

Here we can actually watch the graph unfold as I execute more steps. What a fascinating view of a once-opaque project. With this visualizer, I was able to make changes to my project and watch values flow through the system. I made simple views for looking at code. If I extended this, I can imagine it being incredibly useful for debugging general issues, for seeing the ways in which things are scheduled, and for finding redundancies in the graph. Once I was able to visualize this, I really started to understand the codebase better. I was able to see all the values that didn't need to be recomputed when we made changes. The whole thing clicked.

This was an exercise in exploring a new codebase, a process a bit different from the one I see others take. It isn't an easy process, and it isn't quick. But I've found myself repeating the same steps over and over again:

- Setting a goal
- Editing randomly
- Fixing things I find that are broken
- Reading to answer questions
- Making a visualizer

For the turborepo codebase, this is just the beginning. This exploration was done over a few weekends in the limited spare time I could find. But already I can start to put the big picture together. I can start to see how I could shape my tools to help me answer more questions. If you've never used tool building as a way to learn a codebase, I highly recommend it.

One thing I always realize as I go through this process is just how hard it is to work interactively with our current software. Our languages, our tools, our processes are all written without ways to live code, without ways to inspect their inner workings. It is also incredibly hard to find a productive UI environment for this kind of live exploration. The running state of the visualizer contains all the valuable information; any system that makes you retrace your steps to get the UI back into a previous state before you can visualize more is sorely lacking. So I always find myself in the browser, but immediately I am having to worry about performance. We have made massive strides in so many aspects of software development. I hope that we will fix this one as well.

Justin Duke 1 week ago

Outgrowing Django admin

For a bit of dessert work this week, I'm working on a full-fledged attempt at replacing the majority of our stock Django admin usage with something purposeful. I say majority and not totality because even though I am an unreasonable person, I am not that unreasonable. We have over a hundred Django models, and the idea of trying to rip and replace each and every one of them (or worse yet, to design some sort of DSL by which we do that) is too quixotic even for me.

The vast majority of our admin usage coalesces around three main models, and they're the ones you might guess: the user/newsletter model, the email model, and the subscriber model. My hope is that building out a markedly superior interface for interacting with these three things and sacrificing the long tail still nets out to a much happier time for myself and the support staff.

Django admin is a source of as much convenience as frustration: the abstractions make it powerful and cheap when you're first scaling, but the bill for those abstractions comes due in difficult and intractable ways. When I talk with other Django developers, they divide cleanly into one of two camps: either "what are you talking about, Django admin is perfect as-is" or "oh my God, I can't believe we didn't migrate off of it sooner." Ever the annoying centrist, I find myself agreeing with both camps.

Let's set aside the visual design of the admin for a second, because arguing about visual design is not compelling prose. To me, the core issue with Django's admin interface, once you get more mature, is the fact that it's a very simple request-response lifecycle. Django pulls all the data, state, and information you might need and throws it up to a massive behemoth view for you to digest and interact with. It is by definition atomic: you are looking at a specific model, and the only way to bring in other models to the detail view is by futzing around with inlines and formsets.
The classic thing that almost any Django developer at scale has run into is the N+1 problem, but not even necessarily the one you're thinking about. Take a fairly standard admin class:

If you've got an email admin object and one of the fields on the is a (because you want to be able to change and see which user wrote a given email), Django by default will serialize every single possible user into a nice tag for you. Even if this doesn't incur a literal N+1, you're asking the backend to generate a select with thousands (or more) of options; the serialization overhead alone will time out your request. And so the answer, nowadays, is to use or , which pulls in a jQuery 1.9 package (yes, in 2026; no, I don't want to talk about it) to call an Ajax endpoint instead:

This is the kind of patch that feels like a microcosm of the whole problem: technically correct, ergonomically awkward, and aesthetically offensive.

But the deeper issue is composability rather than performance. A well-defined data model has relationships that spread in every direction. A subscriber has Stripe subscriptions and Stripe charges. It has foreign keys onto email events and external events. When you're debugging an issue reported by a subscriber, you want to see all of these things in one place, interleaved and sorted chronologically. Django admin's answer to this is inlines:

This works, until it doesn't. You start to run into pagination issues; you can't interleave those components with one another because they're rendered as separate, agnostic blocks; you can't easily filter or search within a single inline. You could create a helper method on the subscriber class to sort all related events and present them as a single list, but you once again run into the non-trivial problem of this being part of a fixed request-response lifecycle.
And that kind of serialized lookup can get really expensive:

You can do more bits of cleverness (parallelizing lookups, caching aggressively, using and everywhere), but now you're fighting the framework rather than using it. The whole point of Django admin was to not build this stuff from scratch, and yet here you are, building bespoke rendering logic inside callbacks.

I still love Django admin. On the next Django project I start, I will not create a bespoke thing from day one but instead rely on my trusty, outdated friend until it's no longer bearable. But what grinds my gears is the fact that, as far as I can tell, every serious Django company has this problem and has had to solve it from scratch. There's no blessed graduation path, whether in the framework itself or the broader ecosystem. I think that's one of the big drawbacks of Django relative to its peer frameworks. As strong and amazing as its community is, it's missing the part of the flywheel that comes from more mature deployments upstreaming their findings and discoveries back into the zeitgeist.

Django admin is an amazing asset; I am excited to be, if not rid of it, at least seeing much less of it in the future.
