Posts in Web-development (20 found)
Chris Coyier Yesterday

AI is my CMS

I mean… it’s not really, of course. I just thought such a thing would start to trickle out to people’s minds as agentic workflows start to take hold. Has someone written "AI is my CMS" yet? Feels inevitable. Like why run a build tool when you can just prompt another page? AI agents are already up in your codebase fingerbanging whole batches of files on command. What’s the difference between a CMS taking some content and smashing it into some templates and an AI doing that same job instead? Isn’t less tooling good?

I had missed that this particular topic already had quite a moment in the sun this past December. Lee Robinson wrote Coding Agents & Complexity Budgets. Without calling it out by name, Lee basically had a vibe-coding weekend where he ripped out Sanity from cursor.com. I don’t think Lee is wrong for this choice. Spend some money to save some money. Remove some complexity. Get the code base more AI-ready. Yadda yadda.

Even though Lee didn’t call out Sanity, they noticed and responded. They also make some good and measured points, I think. Which makes this a pretty great blog back-and-forth, by the way, which you love to see. Some of their argument as to why it can be the right choice to have Sanity is that some abstraction and complexity can be good, actually, because building websites from content can be complicated, especially as time and scale march on. And if you rip out a tool that does some of it, only to re-build many of those features in-house, what have you really gained?

TIME FOR MY TWO CENTS. The language feels a little wrong to me. I think if you’re working with Markdown files as content in a Next.js app… that’s already a CMS. You didn’t rip out a CMS, you ripped out a cloud database. Yes, that cloud database does binary assets also, and handles user management, and has screens for CRUDing the content, but to me it’s more of a cloud data store than a CMS. The advantage Lee got was getting the data and assets out of the cloud data store.
I don’t think they were using stuff like the fancy GROQ language to get at their content in fine-grained ways. It’s just that cursor.com happened to not really need a database, and in fact was using it for things they probably shouldn’t have been (like video hosting). Me, I don’t think there is one right answer. If keeping content in Markdown files and building sites by smashing those into templates is wrong, then every static site generator ever built is wrong (🙄). But keeping content in databases isn’t wrong either. I tend to lean that way by default, since the power you get from being able to query is so obviously and regularly useful. Maybe they are both right in that having LLM tools that have the power to wiggleworm their way into the content no matter where it is, is helpful. In the codebase? Fine. In a DB that an MCP can access? Fine.

0 views
Kev Quirk Yesterday

My WordPress - A Private In-Browser WordPress Install

I saw this while perusing my RSS feeds last night, and thought it was interesting. In all honesty, I've completely moved away from WordPress since all the drama a while ago. But this is quite cool - My WordPress is basically a version of WordPress that runs entirely in your browser. You visit my.wordpress.net, it downloads some files to your machine, and you have WordPress - no install, no sign up. Just a private WordPress instance in your browser that only you can visit. Obviously if you reset your browser, or switch to another browser, you will lose your instance, but there are backup/restore options available. I think it might be good as a private journal or something, but I'm sure other people will find some interesting use cases for it. Either way, pretty cool. Read more about My WordPress. Thanks for reading this post via RSS. RSS is ace, and so are you. ❤️ You can reply to this post by email, or leave a comment.

0 views
Justin Duke 2 days ago

Archiving the roadmap

Pour one out for Buttondown's transparent roadmap, which I formally archived yesterday evening after a year or so of informal archival. This felt like the journey so many other companies have been on: trying to keep a public roadmap, then for one reason or another getting rid of it. Mine had nothing to do with transparency. It was entirely due to the fact that Linear now makes a much better product than GitHub does — at least for the kind of project management I need — and if there was a way to easily make our Linear publicly visible, I would be happy to do so. The third-party services and integrations which purport to offer such functionality (Productlane being the most notable) seem like more trouble and money than they're worth. More than anything, the reason I dithered about this for so long was a false sense of worry that there would be a backlash. Around 100 or so folks have commented, watched, or reacted to various issues over the years, which is not a huge amount but not a small one either, and it felt faintly bad to leave them all out in the cold. But in reality, no one has minded or noticed that much. And whatever negative goodwill we generate from no longer having this public repository is offset by the negative goodwill we avoid from having that public repository look so obviously abandoned.

0 views
David Bushell 3 days ago

SvelteKit i18n and FOWL

Perhaps my favourite JavaScript APIs live within the Intl (Internationalization) namespace. A few neat things the global allows:

- Natural alphanumeric sorting
- Relative dates and times
- Currency formatting

It’s powerful stuff and the browser or runtime provides locale data for free! That means timezones, translations, and local conventions are handled for you. Remember moment.js? That library with locale data is over 600 KB (uncompressed). That’s why JavaScript now has the Internationalization API built-in.

SvelteKit and similar JavaScript web frameworks allow you to render a web page server-side and “hydrate” in the browser. In theory, you get the benefits of an accessible static website with the progressively enhanced delights of a modern “web app”. I’m building attic.social with SvelteKit. It’s an experiment without much direction. I added a bookmarks feature and used Intl.DateTimeFormat to format dates. Perfect! Or was it? Disaster strikes! See this GIF:

What is happening here? Because I don’t specify any locale argument in the constructor it uses the runtime’s default. When left unconfigured, many environments will default to en-US. I spotted this bug only in production because I’m hosting on a Cloudflare worker. SvelteKit’s first render is server-side using the worker’s default but subsequent renders use my own locale in the browser. My eyes are briefly sullied by the inferior US format! Is there a name for this effect? If not I’m coining: “Flash of Wrong Locale” (FOWL).

To combat FOWL we must ensure that SvelteKit has the user’s locale before any templates are rendered. Browsers may request a page with the Accept-Language HTTP header. The place to read headers is hooks.server.ts. I’ve vendored the @std/http negotiation library to parse the request header. If no locales are provided it returns a fallback value, which I swap for my preferred default. SvelteKit’s locals is an object to store custom data for the lifetime of a single request. Event locals are not directly accessible to SvelteKit templates. That could be dangerous. We must use a page or layout load function to forward the data. Now we can update the original example to use the data.

I don’t think the rune is strictly necessary but it stops a compiler warning. This should eliminate FOWL unless the Accept-Language header is missing. Privacy focused browsers like Mullvad Browser use a generic header to avoid fingerprinting. That means users opt-out of internationalisation but FOWL is still gone. If there is a cache in front of the server, it must vary based on the Accept-Language header. Otherwise one visitor defines the locale for everyone who follows unless something like a session cookie bypasses the cache. You could provide a custom locale preference to override browser settings. I’ve done that before for larger SvelteKit projects. Link that to a session and store it in a cookie, or database. Naturally, someone will complain they don’t like the format they’re given. This blog post is guaranteed to elicit such a comment. You can’t win!

Why can’t you be normal, Safari? Despite using the exact same locale, Safari still commits FOWL by using an “at” word instead of a comma. Whose fault is this? The ECMAScript standard recommends using data from Unicode CLDR. I don’t feel inclined to dig deeper. It’s a JavaScriptCore quirk because Bun does the same. That is unfortunate because it means the standard is not quite standard across runtimes.

By the way, the i18n and l10n abbreviations are kinda lame to be honest. It’s a fault of my design choices that “internationalisation” didn’t fit well in my title. Thanks for reading! Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.
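The Accept-Language negotiation described above can be sketched framework-agnostically. This is a minimal illustration, not the vendored @std/http code: the pickLocale name and the en-GB fallback are my assumptions.

```javascript
// Minimal sketch of Accept-Language negotiation (illustrative, not the
// @std/http implementation). Parses headers like "en-GB,en;q=0.8,fr;q=0.5"
// and returns the highest-quality language tag, or a fallback.
function pickLocale(header, fallback = "en-GB") {
  if (!header || header.trim() === "" || header.trim() === "*") return fallback;
  const candidates = header
    .split(",")
    .map((part) => {
      const [tag, ...params] = part.trim().split(";");
      const qParam = params.find((p) => p.trim().startsWith("q="));
      const q = qParam ? parseFloat(qParam.trim().slice(2)) : 1; // default quality is 1
      return { tag: tag.trim(), q: Number.isNaN(q) ? 0 : q };
    })
    .filter((c) => c.tag && c.tag !== "*")
    .sort((a, b) => b.q - a.q); // highest quality first
  return candidates.length > 0 ? candidates[0].tag : fallback;
}
```

In SvelteKit you would call something like this in a hooks.server.ts handle function with the request's accept-language header, stash the result on the per-request locals object, and forward it to templates via a page or layout load function.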

0 views

I built a site for hosting events

Recently I started the Columbus Vintage Computing Club, it's a group for nerds like me that enjoy classic computers and software. Our first meeting was a week ago and it was an absolute blast. We had a Tandy and Vic20 on display, and lengthy discussions down memory lane!

While I was planning the group, I knew I needed somewhere to post about the club and see who was interested in attending. I've run groups before (previously the Frontend Columbus group), so I went to the platform I knew best, Meetup. The first thing I noticed was Meetup absolutely chugged on my laptop. A site for posting groups and events should not bring my laptop to its knees, but I guess that's the JavaScript-filled world we live in. The next thing I noticed (after investing the time in creating my group and event) was that as a free-tier peasant, I was only allowed to have 10 people RSVP to my event. Absolutely insane! Everywhere Meetup was shoving their subscription in front of me, begging for my cash, just so I can run a free event for computer nerds.

So fuck Meetup, I built Kai. Kai is a free and open-source platform for posting groups and organizing events. Open to anyone, no ads, no tracking. You can check it out at KaiMeet.com. It's beta, it's rough around the edges, and I only loaded in support for U.S. cities at the moment (want to use it somewhere else, email me!). But it works for my club, maybe it'll work for yours? More features will be coming as needed for the club I host, or if people reach out and ask for things. You can also open a PR on Github. Kai will be run as Donationware (BuyMeACoffee link). If ya like it, help me pay the server cost.

Kai represents something interesting that, to be honest, I'm excited to watch: the fall of SaaS companies that have ruined the internet. I'm not saying Kai will be disruptive, but somebody's version of what I'm doing will be. And people will keep doing this now that creating software is accessible. Those services that prey on being the only game in town, that sell your data to advertising partners and shove subscriptions in your face? They should be freaking the fuck out right now.

0 views
Chris Coyier 4 days ago

Claude is an Electron App

Juicy intro from Nikita Prokopov: In “Why is Claude an Electron App?” Drew Breunig wonders: Claude spent $20k on an agent swarm implementing (kinda) a C-compiler in Rust, but desktop Claude is an Electron app. If code is free, why aren’t all apps native? And then argues that the answer is that LLMs are not good enough yet. They can do 90% of the work, so there’s still a substantial amount of manual polish, and thus, increased costs. But I think that’s not the real reason. The real reason is: native has nothing to offer.

0 views
David Bushell 4 days ago

Building on AT Protocol

AT Protocol has got me! I’m morphing into an atmosphere nerd. AT Protocol — atproto for short — is the underlying tech that powers Bluesky and new social web apps. Atproto as I understand it is largely an authorization and data layer. All atproto data is inherently public. In theory it can be encrypted for private use but leaky metadata and de-anonymisation is a whole thing. Atproto users own the keys to their data, which is stored on a Personal Data Server (PDS). You don’t need to manage your own. If you don’t know where your data is stored, good chance it’s on Bluesky’s PDS. You can move your data to another PDS like Blacksky or Eurosky. Or if you’re a nerd like me, self-host your own PDS. You own your data and no PDS can stop you moving it.

Atproto provides OAuth; think “Sign in with GitHub”. But instead of an account being locked behind the whims of proprietary slopware, user identity is proven via their PDS. Social apps like Bluesky host a PDS allowing users to create a new account. That account can be used to log in to other apps like pckt, Leaflet, or Tangled. You could start a new account on Tangled’s PDS and use that for Bluesky. Atproto apps are not required to provide a PDS but it helps to onboard new users.

Of course I did. You can sign in at attic.social. Attic is a cozy space with lofty ambitions. What does Attic do? I’m still deciding… it’ll probably become a random assortment of features. Right now it has bookmarks. Bookmarks will have search and tags soon.

Technical details: to keep the server stateless I borrowed ideas from my old SvelteKit auth experiment. OAuth and session state is stored in encrypted HTTP-only cookies. I used the atcute TypeScript libraries to do the heavy atproto work. I found @flo-bit’s projects which helped me understand implementation details. Attic is on Cloudflare workers for now. When I’ve free time I’ll explore the SvelteKit Bunny adapter. I am busy on client projects so I’ll be scheming Attic ideas in my free time.
What’s so powerful about atproto is that users can move their account/data. Apps write data to a PDS using a lexicon ; a convention to say: “this is a Bluesky post”, for example. Other apps are free to read that data too. During authorization, apps must ask for permission to write to specific lexicons. The user is in control. You may have heard that Bluesky is or isn’t “decentralised”. Bluesky was simply the first atproto app. Most users start on Bluesky and may never be aware of the AT Protocol. What’s important is that atproto makes it difficult for Bluesky to “pull a Twitter”, i.e. kill 3rd party apps, such as the alternate Witchsky . If I ever abandon attic.social your data is still in your hands. Even if the domain expires! You can extract data from your PDS. You can write a new app to consume it anytime. That’s the power of AT Protocol. Thanks for reading! Follow me on Mastodon and Bluesky . Subscribe to my Blog and Notes or Combined feeds.
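Because atproto data is public and written against shared lexicons, any client can read another app's records straight off a user's PDS over XRPC. A small sketch of building such a request: com.atproto.repo.listRecords is a real atproto endpoint, while the helper name and example handle/collection below are illustrative.

```javascript
// Sketch: any client can list public records from a user's PDS via XRPC.
// The endpoint com.atproto.repo.listRecords is part of atproto; the helper
// function and the example values in the comment are illustrative.
function listRecordsUrl(pdsOrigin, repo, collection, limit = 25) {
  const url = new URL("/xrpc/com.atproto.repo.listRecords", pdsOrigin);
  url.searchParams.set("repo", repo);             // DID or handle of the account
  url.searchParams.set("collection", collection); // lexicon record type
  url.searchParams.set("limit", String(limit));
  return url.toString();
}

// e.g. fetch(listRecordsUrl("https://bsky.social", "alice.example", "app.bsky.feed.post"))
```

The same call works against any PDS, which is the portability point: if an app disappears, the records are still sitting in the user's repo for the next app to read.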

0 views
Kev Quirk 4 days ago

Pure Blog Is Now Feature Complete...ish

I've just released v1.8.0 of Pure Blog, which was the final big feature I wanted to add 1. At this point, Pure Blog does all the things I would want a useful CMS to do, such as:

- Storing content in plain markdown, just like an SSG.
- Easy theme customisations.
- Hooks for doing clever things when something happens.
- Data files so I can loop through data to produce pages where I don't have to duplicate effort, like on my blogroll.
- A couple of simple shortcodes to make my life easier.
- Layout partials so I can customise certain parts of the site.
- Custom routes so I can add little extra features, like a discover page, or the ability to visit a random post.
- Caching because no-one wants a slow site 2.
- Custom layouts and functions so I can go even deeper with my customisations without touching the core code base.

The result is a tool that works exactly how I want it to work. It's very simple to customise through the admin GUI, but there are also lots of advanced options available to more tech-savvy folk. Someone reached out to me recently and told me that their non-technical grandfather is running Pure Blog with no issues. Equally, I've had developers reach out to say that they're enjoying the flexibility of Pure Blog too. This is exactly why I created Pure Blog - to create a tool that can be used by anyone.

My original plan was to just make a simple blogging platform, but I've ended up creating a performant platform that can be used for all kinds of sites, not just a blog. At this point I'm considering Pure Blog to be feature complete*. But there is an asterisk there, because you never know what the future holds. Right now it supports everything I want it to support, but my needs may change in the future. If they do, I'll develop more features.

In the meantime I'm going to enjoy what I've built by continuing to produce content in this lovely little CMS (even if I do say so myself). I know there's a few people using Pure Blog out there, so I hope you're enjoying it as much as I am. If you want to try Pure Blog yourself, you can download the source code from here, and this post should get you up and running in just a few minutes.

1. One could argue that previous versions were just development releases, and this is really v1.0, but I've gone with the versioning I went with, and I can't be bothered changing that now. :-) ↩
2. This site scores a 96 on Google's Pagespeed Insights. Pretty impressive for a dynamic PHP-based site. ↩

Thanks for reading this post via RSS. RSS is ace, and so are you. ❤️ You can reply to this post by email, or leave a comment.

0 views
matduggan.com 4 days ago

Update to the Ghost theme that powers this site

I added a few modifications to the OSS Ghost theme that powers this site. You can get it here: https://gitlab.com/matdevdug/minimal-ghost-theme

- Added better image caption support.
- Added the cool Mastodon feature outlined here to attribute posts from your site back to your Mastodon username by following the instructions here.

I tried to make it pretty easy to customize, but if you need something changed feel free to open an issue on the repo. Thanks for all the feedback!

0 views
Kev Quirk 1 weeks ago

Sunsetting The 512kb Club

All good things must come to an end, and today is that day for one of my projects, the 512kb Club. I started the 512kb Club back in November 2020, so it's been around 5.5 years. It's become a drain and I'm ready to move on. As of today I won't be accepting any new submissions to the project. At the time of writing this, there are 25 PRs open for new submissions; I'll work through them, then will disable the ability to submit pull requests.

Over the years there have been nearly 2,000 pull requests, and there are currently around 950 sites listed on the 512kb Club. Pretty cool, but it's a lot of work to manage - there's reviewing new submissions (which is a steady stream of pull requests), cleaning up old sites, updating sites, etc. It's more than I have time to do. I'm also trying to focus my time on other projects, like Pure Commons. It's sad to see this kind of project fall by the wayside, but life moves on.

Having said that, if you think you want to take over 512kb Club, let's have a chat, there are some pre-requisites though:

- We need to know each other. I'm not going to hand the project over to someone I don't know, sorry.
- You probably need to be familiar with Jekyll and Git.

I'm probably going to get a lot of emails with offers to help (which is fantastic), but if we've never interacted before, I won't be moving forward with your kind offer. After reading the above, if we know each other, and you're still interested, use the email button below and we can have a chat about you potentially taking over. By taking over, I will expect you to:

- Take ownership of the domain, so you will be financially responsible for renewals.
- Take ownership of the GitHub repo, so you will be responsible for all pull requests, issues and anything else Git related.
- Be responsible for all hosting and maintenance of the project - the site is currently hosted on my personal Vercel account, which I will be deleting after handing off.
- Be a good custodian of the 512kb Club and continue to maintain it in its current form.

If you're just looking to take over and use it as a means to slap ads on it, and live off the land, I'd rather it go to landfill, and will just take the site down. That's why I only want someone I know and trust to take it over. I think I've made my point now. 🙃

If there's no-one prepared to take over, I plan to do one final export of the source from Jekyll, then upload that to my web server, where it will live until I decide to no longer renew the domain. I'll also update the site with a message stating that the project has been sunset and there will be no more submissions. If you don't wanna see that happen, please get in touch.

Thanks for reading this post via RSS. RSS is ace, and so are you. ❤️ You can reply to this post by email, or leave a comment.

1 views
Susam Pal 1 weeks ago

HN Skins 0.3.0

HN Skins 0.3.0 is a minor update to HN Skins, a web browser userscript that adds custom themes to Hacker News and allows you to browse HN with a variety of visual styles. This release includes fixes for a few issues that slipped through earlier versions. For example, the comment input textbox now uses the same font face and size as the rest of the active theme. The colour of visited links has also been slightly muted to make it easier to distinguish them from unvisited links. In addition, some skins have been renamed: Teletype is now called Courier and Nox is now called Midnight. Further, the font face of several monospace based themes is now set to the generic monospace keyword instead of a specific font. This allows the browser's preferred monospace font to be used. The font face of the Courier skin (formerly known as Teletype) remains set to Courier. This will never change because the sole purpose of this skin is to celebrate this legendary font. To view screenshots of HN Skins or install it, visit github.com/susam/hnskins. Read on website | #web | #programming | #technology

0 views

How to Host your Own Email Server

I recently started a new platform where I sell my books and courses, and in this website I needed to send account related emails to my users for things such as email address verification and password reset requests. The reasonable option that is often suggested is to use a paid email service such as Mailgun or SendGrid. Sending emails on your own is, according to the Internet, too difficult. Because the prospect of adding yet another dependency on Big Tech is depressing, I decided to go against the general advice and roll my own email server. And sure, it wasn't trivial, but it wasn't all that hard either! Are you interested in hosting your own email server, like me? In this article I'll tell you how to go from nothing to being able to send emails that are accepted by all the big email players. My main concern is sending, but I will also cover the simple solution that I'm using to receive emails and replies.

0 views

Are Design Tools Relevant Anymore

I was a product designer for a few years. I had switched careers to design after suffering burn out as a software engineer. During those years, my entire day was spent in Figma, building high fidelity mockups, leading workshops and creating prototypes. While Figma helped me move quickly, rapidly iterating after receiving user feedback, the engineer part of me always felt it was a throwaway step. You build something, only to then have somebody else build it again in code. I recently had to put on my design hat again, putting together interactive prototypes around a few redesign ideas. At first, I reached for Figma, but after fiddling around for an hour, decided to go a different route. While prototyping in Figma used to be faster than building in code, that’s no longer true. With Claude Code, building out frontend components is fast . Much faster than messing with layers, frames and symbols in Figma. Let me explain. Enterprise apps have well defined brand guidelines. Colors, type, scale. They are often built off an existing component library (think Bootstrap, shadcn). This means you can use Claude in a way that follows the look and feel of your application, and is constrained to the components the development team leverages. The rails help keep Claude from going off into the deep end. Design then becomes focused on solving the user’s problem through UX, less fiddling around with UI. I can open Freeform on my iPad, sketch something out, and prompt Claude to leverage our foundation to make my sketch a reality. Then, I can dig into the code and tweak things to be just right. The result is a more interactive, true to life prototype that gives your engineering team a head start with coded components. You get better feedback from users and stakeholders as it’s easier to visualize what the final product looks like. You discover pitfalls that might not have shown up until an engineer was halfway into the card. 
On top of all that, you move a lot faster: you’re designing and building in one step rather than two, giving your engineering team a head start once designs are finalized. So then, what’s the point of Figma and Sketch? You can tell Figma is battling with this reality by pushing Figma Make. The issue is, it’s too constrained and produces poor results. You can’t link it to existing coded components, Tailwind configs, etc. On the other hand, using my approach requires a technical background. You need to guide with framework suggestions, handle the foundational setup, and be able to take over and tweak things yourself. That said, in the shorter term there’s likely still a place for Figma and Sketch at the table, because without that technical background your results will be all over the place, and small tweaks will be next to impossible. As the technology gets better though, I’ll be surprised if Figma and Sketch survive the next couple of years.

0 views
Karan Sharma 1 weeks ago

A Web Terminal for My Homelab with ttyd + tmux

I wanted a browser terminal that works from laptop, tablet, and phone without special client setup. The stack that works cleanly for this is ttyd + tmux. Two decisions matter most: ttyd handles terminal-over-websocket behavior well, and enforcing a single active client avoids cross-tab resize contention (no resize fight club).

The ttyd flags give me: a writable shell, a port that matches my existing Caddy upstream, one active client only, a real host shell from inside the container, a correct login environment with tmux config loading, and persistent attach/re-attach. Caddy reverse proxies to ttyd with TLS via Cloudflare DNS challenge. Because ttyd uses WebSockets heavily, reverse proxy support for upgrades is essential.

I tuned tmux for long-running agent sessions, not just manual shell use: the status line shows host + session + path + time, the pane border shows pane number + current command, and the active pane is clearly highlighted. Keybindings cover creating/attaching a named session, creating a named window, renaming a window, a session/window picker, pane movement, and pane resize.

Copy/paste was a big pain point, so I added both workflows: browser-native copy (turn tmux mouse off, drag-select and copy with the browser shortcut, then turn tmux mouse back on) and tmux copy mode (enter copy mode, select, then copy or exit). On mobile, ttyd’s top-left menu (special keys) makes prefix navigation workable.

This is tailnet-only behind Tailscale. No public exposure. Still, the container runs with broad host privileges, which is a strong trust boundary concern. If you expose anything like this publicly, add auth in front and treat it as high-risk infrastructure. The terminal is now boring in the best way: stable, predictable, and fast to reach from any device.

0 views
Kev Quirk 1 weeks ago

Another New Lick of Paint

Around a month ago I switched this blog to Pure Blog, and at the same time, I decided to simplify the design and give it a new lick of paint. Here's what it looked like: It was okay. But I've done the thing before, and I really wanted something different. The problem was, I didn't know what I wanted. My wife and I recently went away for the weekend. While away, we stopped off at a lovely little coffee shop where they served us water and a pot of tea from these beautifully coloured pots. The mustard yellow and the steel blue are just beautiful; they work so well together, and I immediately decided I wanted to use this kind of palette for my next website design. Since Monday I've been working on the re-design (something that's really simple to do with Pure Blog). It's now ready and I've launched the new site this evening. Here's what it looks like now: I thought about using the mustard colour for the entire background, but since this is a blog, reading experience is very important, and I felt I was straining my eyes when reading in full mustard mode. So I toned it down to this nice cream colour, and stuck with mustard for the header and footer only. While I was there I also got rid of the effect to simplify the site header even more. I have to say, I'm really happy with the result. There's bound to be some little bug or caching issues here and there, which I'll mop up as I discover them. If you find an issue, please drop me an email or leave a comment, and I'll get it sorted. Thanks for reading this post via RSS. RSS is ace, and so are you. ❤️ You can reply to this post by email, or leave a comment.

0 views
Lea Verou 1 weeks ago

External import maps, today!

A few weeks ago, I posted Web dependencies are broken. Can we fix them?. Today’s post is a little less gloomy: Turns out that the major limitation that would allow centralized set-it-and-forget-it import map management can be lifted today, with excellent browser support! The core idea is that you can use DOM methods to inject an import map dynamically, by literally creating a <script type="importmap"> element in a classic (blocking) script and appending it after the injector script. 💡

This is a gamechanger. It makes external import maps nice-to-have sugar instead of the only way to have centralized import map management decoupled from HTML generation. All we need to do is build a little injector script, no need for tightly coupled workflows that take over everything. Once you have that, it takes a single line of HTML to include it anywhere. If you’re already using a templating system, great! You could add it to your template for every page. But you don’t need a templating system: even if you’re rawdogging HTML (e.g. for a simple SPA), it’s no big deal to just include a script tag in there manually.

This is not even new: when the injector is a classic (non-module) script placed before any modules are fetched, it works in every import map implementation, all the way back to Chrome 89, Safari 16.4+, and Firefox 108+! Turns out, JSPM made the same discovery: JSPM v4 uses the same technique. It is unclear why it took all of us so long to discover it but I’m glad we got there.

First, while there is some progress around making import maps more resilient, your best bet for maximum compatibility is for the injector script to be a good ol’ blocking <script> that comes before everything else. This means no async, no defer, no type="module" — you want to get it in before any modules start loading or many browsers will ignore it. Then, you literally use DOM methods to create a <script type="importmap"> and append it after the script that is injecting the import map (which you can get via document.currentScript).

This is a minimal example: Remember, this literally injects inline import maps in your page. This means that any relative URLs will be interpreted relative to the current page! If you’re building an SPA or your URLs are all absolute or root-relative, that’s no biggie. But if these are relative URLs, they will not work as expected across pages. You need to compute the absolute URL for each mapped URL and use that instead. This sounds complicated, but it only adds about 5 more lines of code.

Note that document.currentScript is null in module scripts, since the same module can be loaded from different places and different scripts. Once it becomes possible to inject import maps from a module script, you could use import.meta.url to get the URL of the current module. Until then, you can use a bit of error handling to catch mistakes. This is the minimum, since the script literally breaks if document.currentScript is null. You could get more elaborate and warn about async / defer attributes, or if module scripts are present before the current script. These are left as an exercise for the reader.

While this alleviates the immediate need for external import maps, the DX and footguns make it a bit gnarly, so having first-class external import map support would still be a welcome improvement. But even if we could do <script type="importmap" src> today, the unfortunate coupling with HTML is still at the receiving end of all this, and creates certain limitations, such as bare specifiers not working in worker scripts. My position remains that HTML being the only way to include import maps is a hack. I’m not saying this pejoratively. Hacks are often okay — even necessary! — in the short term. This particular hack allowed us to get import maps out the door and shipped quickly, without getting bogged down into architecture astronaut style discussions that can be non-terminating. But it also created architectural debt. These types of issues can always be patched ad hoc, but that increases complexity, both for implementers and web developers.
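The injector technique described above can be sketched like this. This is an illustration, not Lea's exact code; the mapped package name and URLs are made up, and the DOM part is guarded so the URL-resolution helper stands on its own.

```javascript
// Sketch of an import map injector (illustrative; package names/URLs are
// made up). First, absolutize any relative targets against a base URL so
// the injected inline map behaves identically on every page.
function absolutizeImportMap(map, base) {
  const imports = {};
  for (const [specifier, target] of Object.entries(map.imports ?? {})) {
    imports[specifier] = new URL(target, base).href; // resolve relative targets
  }
  return { ...map, imports };
}

// Browser-only part: a classic, blocking script builds a
// <script type="importmap"> and inserts it right after itself.
if (typeof document !== "undefined" && document.currentScript) {
  const injector = document.currentScript; // null inside module scripts!
  const resolved = absolutizeImportMap(
    { imports: { "my-lib": "./vendor/my-lib/index.js" } },
    injector.src || location.href
  );
  const mapScript = document.createElement("script");
  mapScript.type = "importmap";
  mapScript.textContent = JSON.stringify(resolved);
  injector.after(mapScript);
}
```

Because the injector runs as a classic script before any modules are fetched, the map is in place by the time the first import resolves.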
Ultimately, we need deeper integration of bare specifiers and import maps across the platform . <script type="importmap"> (with or without a src attribute) should become a shortcut , not the only way mappings can be specified. In my earlier post, I outlined a few ideas that could help get us closer to that goal and make import maps ubiquitous and mindless . Since they were well received, I opened issues for them:

- Linking to import maps via an HTTP header
- URLs to bridge the gap between bare specifiers and URLs
- A synchronous import map API, since import maps as an import attribute proved to be tricky

I’m linking to issues in standards repos in the interest of transparency. Please don’t spam them, even with supportive comments (that’s what reactions are for). Also keep in mind that the vast majority of import map improvements are meant for tooling authors and infrastructure providers — nobody expects regular web developers to author import maps by hand. The hope is also that better platform-wide integration can pave the way for satisfying the (many!) requests to expand bare specifiers beyond JS imports . Currently, the platform has no good story for importing non-JS resources from a package, such as styles, images, icons, etc.

But even without any further improvement, simply the fact that injector scripts are possible opens up so many possibilities! The moment I found out about this, I started working on the tool I wished had existed to facilitate end-to-end dependency management without a build process (piggybacking on the excellent JSPM Generator for the heavy lifting), which I will announce in a separate post very soon [1] . Stay tuned!

[1] But if you’re particularly curious and driven, you can find it even before then: both the repo and npm package are already public 😉🤫 ↩︎
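For reference, the HTML-coupled form that the post argues should become just a shortcut looks like this: an inline import map that must appear in every page's HTML, before any module scripts (the mapping shown is an illustrative placeholder):

```html
<!-- Must come before any <script type="module"> on every page. -->
<script type="importmap">
{
  "imports": {
    "lodash": "https://cdn.example.com/lodash-es@4.17.21/lodash.js"
  }
}
</script>
<script type="module">
  // Thanks to the map above, this bare specifier resolves.
  import _ from "lodash";
</script>
```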

0 views
Jeff Geerling 1 week ago

Expert Beginners and Lone Wolves will dominate this early LLM era

After migrating this blog from a static site generator into Drupal in 2009, I noted: As a sad side-effect, all the blog comments are gone. Forever. Wiped out. But have no fear, we can start new discussions on many new posts! I archived all the comments from the old 'Thingamablog' version of the blog, but can't repost them here (at least, not with my time constraints... it would just take a nice import script, but I don't have the time for that now).

0 views
Hugo 2 weeks ago

Dogfooding: Why I Migrated My Own Blog to Writizzy

In 2022, I created an open-source static blog generator: Bloggrify . It’s conceptually similar to Hugo —it generates a static site (just a bunch of HTML files) that you can host for free on Cloudflare, GitHub Pages, or Bunny.net . Before that, I had tried everything: WordPress, Joomla, Medium. I wanted to regain flexibility and customize my blog exactly how I wanted. But let’s be honest: I’m a developer, and I mainly wanted a new technical playground. Fast forward to 2026, and I have to admit: using a static blog has become a major friction point for my writing. So, I decided to migrate again, this time to a managed platform: Writizzy , another product I’m building. This move is a great opportunity to talk about several things:

- Dogfooding: Why you absolutely must use your own products.
- The harsh reality of Open Source: Why it’s harder than it looks.
- Product Satisfaction: The joy of building something people actually use.
- The future of my projects: Bloggrify, Writizzy, and Hakanai.io .

Bloggrify started as a love letter to the Nuxt ecosystem, specifically Nuxt Content. Back when I migrated from WordPress, my criteria were simple:

- A simple templating language (Markdown).
- Extensibility (RSS feeds, sitemaps, etc.).
- Low carbon footprint (static sites are incredibly efficient).

In 2022, it wasn't a "product" yet—just my personal blog code made public. It only became a full-fledged open-source project in 2024, with a dedicated site and a proper README to encourage contributions. I wanted the product to be "opinionated." Nuxt Content does 90% of the heavy lifting, but it’s a generic tool. For a real blog, you still need to build the RSS feed, sitemap, robots.txt, comments, table of contents, share buttons, newsletter integration, analytics, and SEO. That’s what Bloggrify is: a "starter pack" that comes with everything pre-configured. Think of it as Docus , but for blogs instead of documentation.

I’m a numbers person. When I launch a project, I want to see usage. It might sound trivial, but considering the effort it takes to manage npm releases (which is honestly a nightmare), handle versioning, and maintain themes, you expect a minimum return on investment. Bloggrify reached 164 stars on GitHub and sits somewhere in the middle of the pack on Jamstack.org . That’s... okay, I guess. But in reality, I have almost zero feedback on its actual usage. A few rare GitHub issues, one contributor who was active for a few weeks before vanishing, and then silence. I only know of one blog that used it before switching back to Hugo. The experience has been bittersweet. Building in the dark is demotivating. However, it did lead me to launch two other side-products:

- broadcast.hakanai.io : A newsletter system for static blogs based on RSS feeds.
- pulse.hakanai.io : A specialized analytics tool for bloggers (not just generic web traffic).

I launched Broadcast and Pulse in 2024 and 2025. They’re living a quiet life, but they aren't "exploding." My target audience is static bloggers—mostly developers. And as we know, developers are the hardest group to convince to pay for a service! Still, I’m satisfied. These products taught me how to build a SaaS, handle subscriptions, and find my ideal tech stack. My own newsletters were sent via Broadcast (reaching about 150 subscribers), and I used Pulse to track which articles were actually being read. The reality? These two tools generate about €100 in Monthly Recurring Revenue (MRR) . Not enough to retire on, but a great learning experience.

And that brings us to Writizzy. With Bloggrify, I realized my writing workflow had become painful. Between maintaining the framework, jumping between spell-checkers, writing in Markdown, spinning up a local server to check for broken links, and waiting for build and deployment times... I was losing hours. For my last article, someone pointed out a few typos. It took me 20 minutes between editing the file and seeing the fix live. Add to that the friction of managing images in an IDE, and the recent Nuxt 4 / Nuxt Content updates which, while I love them, have made the dev experience slightly slower for simple blogging. To be honest, I wasn't really aware of any of this. I put up with these inconveniences and was still very happy to have “flexibility” in what I could do with my blog. I wasn't fully aware of this "friction" until I built Writizzy . Writizzy is the synthesis of my blogging experience. It’s a mix of Substack, Ghost, and Medium, but built as a European alternative with four core pillars:

- Sustainability : Focusing on reversibility and interoperability.
- Discoverability.
- Economic accessibility : Implementing Purchasing Power Parity (PPP).
- Transparency.

I moved my English blog to Writizzy first (this one), with no intention of moving the French one. But I soon noticed I was writing much faster on the English site. The workflow was just... better. Copy-pasting images directly into the editor, instant previews, no server to launch. It was a joy. I hesitated for a long time before eventually migrating eventuallycoding.com . I knew that by doing so, I was taking the risk of killing Bloggrify. If even I don't use it anymore, the project enters a danger zone. When you don’t use your own product daily—when you’re no longer obsessed with the problem it solves—it’s almost impossible to stay attached to it. This is a symptom I see in so many "disposable" projects across the internet: built by people who flutter from one idea to the next without any real skin in the game.

So yes, moving away from Bloggrify is a risk. But I’ve come to terms with it. Today, I have almost zero evidence that Bloggrify is being used. Meanwhile, Writizzy already has 314 blogs and 11 paying users (€135 MRR) in just four months. Why stubbornly cling to Bloggrify? Ultimately, I believe I’m solving the same problem with Writizzy, but in a much better way. I receive feedback emails and feature requests every single week. I get constant positive reinforcement from people actually using it. The product isn’t perfect, but it improves every day. It improves because real users are pushing me to refine the site, fix what’s broken, and add the features that absolutely need to be there. And it also improves because I use it constantly. This is the massive benefit of dogfooding : every day, I am confronted with my own software, so I know exactly what needs to change.

So yes, Bloggrify is moving to maintenance mode. I’m taking this opportunity to release all its templates as open source. Two of them were "premium," but it wouldn't make sense to keep them that way today. I tell myself I’ll still evolve it from time to time, but honestly, I wonder if I’m just lying to myself. As for Hakanai.io , I’m definitely continuing. The problem it solves still fascinates me. I get great feedback, especially on Broadcast. Pulse , however, suffers from being misunderstood. It’s a "blog analytics" product, and people don't really grasp what that entails—SEO advice, outlier detection, evergreen content tracking. I’m not great at marketing, so it mostly flies under the radar, except for the readers of this blog who took the time to test it. But I’m motivated to keep them alive.

As for Writizzy , there is no doubt. The product is incredibly exciting to build. The stakes are high: building a platform for expression that exists outside the US-centric giants. The traction is there, and the numbers follow (+45% MoM user growth). Welcome to this blog, now officially on Writizzy. As a reader, you can already test several things:

- The Discover feed , to read other articles from Writizzy bloggers. We’ve handpicked a few to start with, and this feed will become even more customizable in the future.
- The comments section .
- The newsletter subscription (if you haven’t already).

Welcome home.

0 views