Latest Posts (20 found)

MOOving to a self-hosted Bluesky PDS

Bluesky is a “Twitter clone” that runs on the AT Protocol. I have to be honest, I’d struggle to explain how atproto works. I think it’s similar to Nostr but like, good? When atproto devs talk about The Atmosphere they sound like blockchain bros. The marketing needs work. Bluesky, however, is a lot of fun. Feels like early Twitter. Nobody cool uses Twitter anymore. It’s a cesspit of racists asking Grok to undress women.

Mastodon and Bluesky are the social platforms I use. I’ve always been tempted to self-host my own Mastodon instance but the requirements are steep. I use the omg.lol server instead. Self-hosting the Bluesky PDS is much less demanding.

My setup includes a Raspberry Pi as the host machine; I glued an NVMe onto the underside. All services run as Docker containers for easy security sandboxing. I say easy but it took many painful years to master Docker. I have the Pi on a firewalled VLAN because I’m extra paranoid.

I set up my Bluesky PDS using the official Docker container. It’s configured with environment variables and has a single data volume mounted. I back up that volume to my NAS.

I’ve put Caddy in front of the PDS container. Right now it just acts as a reverse proxy. This gives me flexibility later if I want to add access logs, rate limiting, or other plugins.

The tunnel links Caddy to the outside world via Cloudflare to avoid exposing my home IP address. Cloudflare also adds an extra level of bot protection. Booo! If you know a good European alternative please let me know! The guides I followed suggest adding wildcard DNS for the tunnel. Cloudflare has shuffled the dashboard for the umpteenth time and I can’t figure out how. I think sub-domains are only used for user handles anyway. I use a different custom domain for my handle with a manual TXT record to verify.

Allowing the PDS to send emails isn’t strictly necessary. It’s useful for password resets and I think it’ll send a code if I migrate PDS again.
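Email is configured through environment variables like the rest of the PDS. A hedged sketch of the SMTP settings (variable names as used by the official PDS distribution; host and credentials are placeholders):

```shell
# Sketch: PDS SMTP configuration via environment variables.
# Values are placeholders; variable names per the official bluesky-social/pds repo.
# The username and password are embedded in the smtps:// URL itself.
PDS_EMAIL_SMTP_URL="smtps://username:password@smtp.example.com:465/"
PDS_EMAIL_FROM_ADDRESS="no-reply@example.com"
```

The non-obvious part is that the credentials live inside the URL rather than in separate variables.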
I went through the hassle of adding my PDS domain to Proton Mail and followed their SMTP guide. The fiddly part is how the PDS environment variables are formatted. It took me forever to figure out where the username and password went.

PDS MOOver by Bailey Townsend is the tool that does the data migration. It takes your Bluesky password and probably sees your private key, so use at your own risk! I set up a new account to test it before I YOLO’d my main. MOOve successful!

I still log in as usual but I now select “custom account provider” and enter my PDS domain. SkyTools has a tool that confirms it. Bluesky Debug can check handles are verified correctly. PDSls.dev is a neat atproto explorer.

I cross-referenced the following guides for help:

- Notes on Self Hosting a Bluesky PDS Alongside Other Services
- Self-host federated Bluesky instance (PDS) with CloudFlare Tunnel
- Host a PDS via a Cloudflare Tunnel
- Self-hosting Bluesky PDS

Most of the Cloudflare stuff is outdated because Cloudflare rolls dice every month.

Bluesky is still heavily centralised but the atproto layer allows anyone to control their own data. I like doing that on principle. I don’t like maintenance, but I’ve heard that’s minimal for a PDS. Supposedly it’s possible to migrate back to Bluesky’s PDS if I get bored. I’m tempted to build something in The Atmosphere. Any ideas?

Thanks for reading! Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.

David Bushell 2 days ago

Croissant and CORS proxy update

Croissant is my home-cooked RSS reader. I wish it was only a progressive web app (PWA) but due to missing CORS headers, many feeds remain inaccessible. My RSS feeds have the Access-Control-Allow-Origin header and so should yours! Blogs Are Back has a guide to enable CORS for your blog.

Bypassing CORS requires some kind of proxy. Other readers use a custom browser extension. That is clever, but extensions can be dangerous. I decided on two solutions. I wrapped my PWA in a Tauri app. This is also dangerous if you don’t trust me. I also provided a server proxy for the PWA. A proxy has privacy concerns but is much safer.

I’m sorry if anyone is using Croissant as a PWA because the proxy is now gone. If a feed has the correct CORS headers it will continue to work. Sorry for the abrupt change. That’s super lame, I know! To be honest I’ve lost a bit of enthusiasm for the project and I can’t maintain a proxy. Croissant was designed to be limited in scope to avoid too much burden. In hindsight the proxy was too ambitious.

Could you run the proxy yourself? Technically, yes! But you’ll have to figure that out by yourself. If you have questions, such as where to find the code, how the code works etc, the answer is no. I don’t mean to be rude, I just don’t have any time! You’re welcome to ask for support but unless I can answer in 30 seconds I’ll have to decline.

Croissant is feature complete! It does what I set out to achieve. I have fixed several minor bugs and tweaked a few styles. Until inspiration (or a bug) strikes I won’t do another update anytime soon. Maybe later in the year I’ll decide to overhaul it? Who can predict!

Thanks for reading! Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.

David Bushell 1 week ago

Everything you never wanted to know about visually-hidden

Nobody asked for it but nevertheless, I present to you my definitive “it depends” tome on visually-hidden web content. I’ll probably make an amendment before you’ve finished reading. If you enjoy more questions than answers, buckle up! I’ll start with the original premise, even though I stray off-topic on tangents and never recover.

I was nerd-sniped on Bluesky. Ana Tudor (@anatudor.bsky.social) asked whether there is still any point to most styles in visually-hidden classes in ’26. If clipping the visible area to nothing already reduces the clickable area to nothing, is there any point to shrinking dimensions? And with no dimensions, there’s no need for the text wrapping fix. Ana proposed a stripped-back version of the class. Is this enough in 2026?

As an occasional purveyor of the class myself, the question wriggled its way into my brain. I felt compelled to investigate the whole ordeal. Spoiler: I do not have a satisfactory yes-or-no answer, but I do have a wall of text! I went so deep down the rabbit hole I must start with a table of contents.

I’m writing this based on the assumption that a visually-hidden class is considered acceptable for specific use cases. My final section on native visually-hidden addresses the bigger accessibility concerns. It’s not easy to say where this technique is appropriate. It is generally agreed to be OK but a symptom of — and not a fix for — other design issues. Appropriate use cases for it are far fewer than you think. Skip to the history lesson if you’re familiar.

sr-only, visually-hidden — there have been many variations on the class name. I’ve looked at popular implementations and compiled the kitchen sink version below. Please don’t copy this as a golden sample. It merely encompasses all I’ve seen. There are variations on the selector using pseudo-classes that allow for focus. Think “skip to main content” links, for example.

What is the purpose of the class? The idea is to hide an element visually, but allow it to be discovered by assistive technology. Screen readers being the primary example. The element must be removed from layout flow.
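Pulled together from those popular implementations, the kitchen-sink ruleset looks something like this (a representative reconstruction, not a canonical source):

```css
.visually-hidden {
  position: absolute;
  width: 1px;
  height: 1px;
  padding: 0;
  margin: -1px;
  border: 0;
  overflow: hidden;
  clip: rect(0, 0, 0, 0);
  clip-path: inset(50%);
  white-space: nowrap;
}
```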
It should leave no render artefacts and have no side effects. It does this whilst trying to avoid the bugs and quirks of web browsers. If this sounds and looks just a bit hacky to you, you have a high tolerance for hacks! It’s a massive hack! How was this normalised? We’ll find out later.

I’ll whittle down the properties for those unfamiliar. Absolute positioning is vital to remove the element from layout flow. Otherwise the position of surrounding elements will be affected by its presence. Clipping crops the visible area to nothing. The clip property remains as a fallback but has long been deprecated and is obsolete. All modern browsers support clip-path. The border and padding declarations remove styles that may add layout dimensions. The width, height, and margin group effectively gives the element zero dimensions. There are reasons for 1px instead of zero, and for the negative margin, that I’ll cover later. Overflow is another property to ensure no visible pixels are drawn. I’ve seen the newer clip value used but what difference that makes, if any, is unclear. White-space was added to address text wrapping inside the square (I’ll explain later).

So basically we have absolute positioning and a load of properties that attempt to make the element invisible. We cannot use display: none or visibility: hidden or aria-hidden because those remove elements from the accessibility tree. So the big question remains: why must we still ‘zero’ the dimensions? Why is clipping alone not sufficient?

To make sense of this mystery I went back to the beginning. It was tricky to research this topic because older articles have been corrected with modern information. I recovered many details from the archives and mailing lists with the help of those involved. They’re cited along the way.

Our journey begins November 2004. A draft document titled “CSS Techniques for WCAG 2.0” edited by Wendy Chisholm and Becky Gibson includes a technique for invisible labels.
While it is usually best to include visual labels for all form controls, there are situations where a visual label is not needed due to the surrounding textual description of the control and/or the content the control contains. Users of screen readers, however, need each form control to be explicitly labeled so the intent of the control is well understood when navigated to directly.

Creating Invisible labels for form elements (history)

The following CSS was provided. Could this be the original class? My research jumped through decades but eventually I found an email thread “CSS and invisible labels for forms” on the W3C WAI mailing list. This was a month prior, preluding the WCAG draft. A different technique from Bob Easton was noted:

The beauty of this technique is that it enables using as much text as we feel appropriate, and the elements we feel appropriate. Imagine placing instructive text about the accessibility features of the page off left (as well as on the site’s accessibility statement). Imagine interspersing “start of…” landmarks through a page with heading tags. Or, imagine parking full lists off left, lists of access keys, for example. Screen readers can easily collect all headings and read complete lists. Now, we have a made for screen reader technique that really works!

Screenreader Visibility - Bob Easton (2003)

Easton attributed both Choan Gálvez and Dave Shea for their contributions. In the same thread, Gez Lemon proposed an addition to ensure that text doesn’t bleed into the display area. Following up, Becky Gibson shared a test case covering the ideas. Lemon later published an article “Invisible Form Prompts” about the WCAG plans which attracted plenty of commenters including Bob Easton. The resulting WCAG draft guideline discussed both the zero-dimension and off-screen ideas.

Note that instead of using the nosize style described above, you could instead use postion:absolute; and left:-200px; to position the label “offscreen”. This technique works with the screen readers as well.
Only position elements offscreen in the top or left direction, if you put an item off to the right or the bottom, many browsers will add scroll bars to allow the user to reach the content.

Creating Invisible labels for form elements

Two options were known and considered towards the end of 2004. Why not both? Indeed, it appears Paul Bohman on the WebAIM mailing list suggested such a combination in February 2004. Bohman even discovered possibly the first zero width bug.

I originally recommended setting the height and width to 0 pixels. This works with JAWS and Home Page Reader. However, this does not work with Window Eyes. If you set the height and width to 1 pixel, then the technique works with all browsers and all three of the screen readers I tested.

Re: Hiding text using CSS - Paul Bohman

Later in May 2004, Bohman along with Shane Anderson published a paper on this technique. Citations within included Bob Easton and Tom Gilder.

Aside note: other zero width bugs have been discovered since. Manuel Matuzović noted in 2023 that links in Safari were not focusable. The zero width story continues as recently as February 2026 (last week).

In browse mode in web browsers, NVDA no longer treats controls with 0 width or height as invisible. This may make it possible to access previously inaccessible “screen reader only” content on some websites.

NVDA 2026.1 Beta TWO now available - NV Access News

Digging further into WebAIM’s email archive uncovered a 2003 thread in which Tom Gilder shared a class for skip navigation links. I found Gilder’s blog in the web archives introducing this technique.

I thought I’d put down my “skip navigation” link method down in proper writing as people seem to like it (and it gives me something to write about!).
Try moving through the links on this page using the keyboard - the first link should magically appear from thin air and allow you to quickly jump to the blog tools, which modern/visual/graphical/CSS-enabled browsers (someone really needs to come up with an acronym for that) should display to the left of the content.

Skip-a-dee-doo-dah - Tom Gilder

Gilder’s post links to a Dave Shea post which in turn mentions the 2002 book “Building Accessible Websites” by Joe Clark. Chapter eight discusses the necessity of a “skip navigation” link due to table-based layout but advises: Keep them visible!

Well-intentioned developers who already use page anchors to skip navigation will go to the trouble to set the anchor text in the tiniest possible font in the same colour as the background, rendering it invisible to graphical browsers (unless you happen to pass the mouse over it and notice the cursor shape change).

Building Accessible Websites - 08. Navigation - Joe Clark

Clark expressed frustration over common tricks like the invisible pixel. It’s clear no visually-hidden class existed when this was written. Choan Gálvez informed me that Eric Meyer would have the css-discuss mailing list archives. Eric kindly searched the backups but didn’t find any earlier discussion. However, Eric did find a thread on the W3C mailing list from 1999 in which Ian Jacobs (IBM) discusses the accessibility of “skip navigation” links. The desire to visually hide “skip navigation” links was likely the main precursor to the early techniques. In fact, Bob Easton said as much:

As we move from tag soup to CSS governed design, we throw out the layout tables and we throw out the spacer images. Great! It feels wonderful to do that kind of house cleaning. So, what do we do with those “skip navigation” links that used to be attached to the invisible spacer images?

Screenreader Visibility - Bob Easton (2003)

I had originally missed that quote in my excitement at seeing the class. I reckon we’ve reached the source of the class.
At least conceptually. Technically, the class emerged from several ideas, rather than a “eureka” moment. Perhaps more can be gleaned from other CSS techniques such as the desire to improve accessibility of CSS image replacement.

Bob Easton retired in 2008 after a 40 year career at IBM. I reached out to Bob who was surprised to learn this technique was still a topic today†. Bob emphasised the fact that it was always a clumsy workaround and something CSS probably wasn’t intended to accommodate. I’ll share more of Bob’s thoughts later.

† I might have overdone the enthusiasm

Let’s take an intermission! My contact page is where you can send corrections by the way :)

The class stabilised for a period. Visit 2006 in the Wayback Machine to see WebAIM’s guide to invisible content — Paul Bohman’s version is still recommended. Moving forward to 2011, I found Jonathan Snook discussing the “clip method”. Snook leads us to Drupal developer Jeff Burnz the previous year.

[…] we still have the big problem of the page “jump” issue if this is applied to a focusable element, such as a link, like skip navigation links. WebAim and a few others endorse using the LEFT property instead of TOP, but this no go for Drupal because of major pain-in-the-butt issues with RTL. In early May 2010 I was getting pretty frustrated with this issue so I pulled out a big HTML reference and started scanning through it for any, and I mean ANY property I might have overlooked that could possible be used to solve this thorny issue. It was then I recalled using clip on a recent project so I looked up its values and yes, it can have 0 as a value.

Using CSS clip as an Accessible Method of Hiding Content - Jeff Burnz

It would seem Burnz discovered the technique independently and was probably the first to write about it. Burnz also notes a right-to-left (RTL) issue. This could explain why pushing content off-screen fell out of fashion.
2010 also saw the arrival of HTML5 Boilerplate along with issue #194 in which Jonathan Neal plays a key role in the discussion and comments:

If we want to correct for every seemingly-reasonable possibility of overflow in every browser then we may want to consider [code below]

This was their final decision, trimmed here for clarity. This is very close to what we have now, no surprise since HTML5 Boilerplate was extremely popular. I’m leaning to conclude that the additional properties are really just there for the “possibility” of pixels escaping containment as much as fixing any identified problem.

Thierry Koblentz covered the state of affairs in 2012 noting that: Webkit, Opera and to some extent IE do not play ball with [clip]. Koblentz prophesies:

I wrote the declarations in the previous rule in a particular order because if one day clip works as everyone would expect, then we could drop all declarations after clip, and go back to the original

Clip your hidden content for better accessibility - Thierry Koblentz

Sound familiar? With those browsers obsolete, and if clip-path behaves itself, can the other properties be removed? Well we have 14 years of new bugs and features to consider first.

In 2016, J. Renée Beach published: Beware smushed off-screen accessible text. This appears to be the origin of the text wrapping fix (as demonstrated by Vispero).

Over a few sessions, Matt mentioned that the string of text “Show more reactions” was being smushed together and read as “Showmorereactions”.

Beach’s class did not include the kitchen sink. The addition of white-space: nowrap became standard alongside everything else.

Aside note: the origin of the negative margin remains elusive. One Bootstrap issue shows it was rediscovered in 2018 to fix a browser bug. However, another HTML5 Boilerplate issue dated 2017 suggests negative margin broke reading order. Josh Comeau shared a React component in 2024 without margin. One of many examples showing that it has come in and out of fashion.

We started with WCAG so let’s end there.
The latest WCAG technique for “Using CSS to hide a portion of the link text” provides the following code. Circa 2020 the clip-path property was added as browser support increased, and clip became deprecated. An obvious change I’m not sure warrants investigation (although someone had to be first!) That brings us back to what we have today.

Are you still with me? As we’ve seen, many of the properties were thrown in for good measure. They exist to ensure absolutely no pixels are painted. They were adapted over the years to avoid various bugs, quirks, and edge cases. How many such decisions are now irrelevant? This is a classic Chesterton’s Fence scenario. Do not remove a fence until you know why it was put up in the first place. Well we kinda know why but the specifics are practically folklore at this point. Despite all that research, can we say for sure if any “why” is still relevant?

Back to Ana Tudor’s suggestion. How do we know for sure? The only way is extensive testing. Unfortunately, I have neither the time nor skill to perform that adequately here. There is at least one concern with the code above: Curtis Wilcox noted that in Safari the focus ring behaves differently.

Other minimum viable ideas have been presented before. Scott O’Hara proposed a different two-liner using transform: scale(0).

JAWS, Narrator, NVDA with Edge all seem to behave just fine. As do Firefox with JAWS and NVDA, and Safari on macOS with VoiceOver. Seems also fine with iOS VO+Safari and Android TalkBack with Firefox or Chrome. In none of these cases do we get the odd focus rings that have occurred with other visually hidden styles, as the content is scaled down to zero. Also because not hacked into a 1px by 1px box, there’s no text wrapping occurring, so no need to fix that issue.

transform scale(0) to visually hide content - Scott O’Hara

Sounds promising! It turns out Katrin Kampfrath had explored both minimum viable classes a couple of years ago, testing them against the traditional class.
I am missing the experience and moreover actual user feedback, however, i prefer the screen reader read cursor to stay roughly in the document flow. There are screen reader users who can see. I suppose, a jumping read cursor is a bit like a shifting layout.

Exploring the visually-hidden css - Katrin Kampfrath

Kampfrath’s limited testing found the read cursor size differs for each class. The technique was favoured but caution is given.

A few more years back, Kitty Giraudel tested several ideas, concluding that the traditional class was still the most accessible for specific text use.

This technique should only be used to mask text. In other words, there shouldn’t be any focusable element inside the hidden element. This could lead to annoying behaviours, like scrolling to an invisible element.

Hiding content responsibly - Kitty Giraudel

Zell Liew proposed a different idea in 2019.

Many developers voiced their opinions, concerns, and experiments over at Twitter. I wanted to share with you what I consolidated and learned.

A new (and easy) way to hide content accessibly - Zell Liew

Liew’s idea was unfortunately torn asunder. Although there are cases like inclusively hiding checkboxes where near-zero opacity is more accessible. I’ve started to go back in time again! I’m also starting to question whether this class is a good idea. Unless we are capable and prepared to thoroughly test across every combination of browser and assistive technology — and keep that information updated — it’s impossible to recommend anything. This is impossible for developers! Why can’t browser vendors solve this natively?

Once you’ve written 3000 words on a twenty year old CSS hack you start to question why it hasn’t been baked into web standards by now. Ben Myers wrote “The Web Needs a Native .visually-hidden” proposing ideas from HTML attributes to CSS properties. Scott O’Hara responded noting larger accessibility issues that are not so easily handled.
O’Hara concludes:

Introducing a native mechanism to save developers the trouble of having to use a widely available CSS ruleset doesn’t solve any of those underlying issues. It just further pushes them under the rug.

Visually hidden content is a hack that needs to be resolved, not enshrined - Scott O’Hara

Sara Soueidan had floated the topic to the CSS working group back in 2016. Soueidan closed the issue in 2025, coming to a similar conclusion.

I’ve been teaching accessibility for a little less than a decade now and if there’s one thing I learned is that developers will resort to using this utility to do things that are more often than not just bad design decisions. Yes, there are valid and important use cases. But I agree with all of @scottaohara’s points, and most importantly I agree that we need to fix the underlying issues instead of standardizing a technique that is guaranteed to be overused and misused even more once it gets easier to use.

csswg-drafts comment - Sara Soueidan

Adrian Roselli has a blog post listing priorities for assigning an accessible name to a control. Like O’Hara and Soueidan, Roselli recognises there is no silver bullet.

Hidden text is also used too casually to provide information for just screen reader users, creating overly-verbose content. For sighted screen reader users, it can be a frustrating experience to not be able to find what the screen reader is speaking, potentially causing the user to get lost on the page while visually hunting for it.

My Priority of Methods for Labeling a Control - Adrian Roselli

In short, many believe that a native visually-hidden would do more harm than good. The use-cases are far more nuanced and context sensitive than developers realise. It’s often a half-fix for a problem that can be avoided with better design. I’m torn on whether I agree that it’s ultimately a bad idea.
A native version would give software an opportunity to understand the developer’s intent and define how “visually hidden” works in practice. It would be a pragmatic addition. The technique has persisted for over two decades and is still mentioned by WCAG. Yet it remains hacks upon hacks! How has it survived for so long? Is that a failure of developers, or a failure of the web platform?

The web is overrun with inaccessible div soup. That is inexcusable. For the rest of us who care about accessibility — who try our best — I can’t help but feel the web platform has let us down. We shouldn’t be perilously navigating code hacks, conflicting advice, and half-supported standards. We need more energy and money dedicated to accessibility. Not all problems can be solved with money. But what of the thousands of unpaid hours, whether volunteered or solicited, from those seeking to improve the web? I risk spiralling into a rant about browser vendors’ financial incentives, so let’s wrap up!

I’ll end by quoting Bob Easton from our email conversation:

From my early days in web development, I came to the belief that semantic HTML, combined with faultless keyboard navigation were the essentials for blind users. Experience with screen reader users bears that out. Where they might occasionally get tripped up is due to developers who are more interested in appearance than good structural practices. The use cases for hidden content are very few, such as hidden information about where a search field is, when an appearance-centric developer decided to present a search field with no visual label, just a cute unlabeled image of a magnifying glass. […] The people promoting hidden information are either deficient in using good structural practices, or not experienced with tools used by people they want to help.

Bob ended with: You can’t go wrong with well crafted, semantically accurate structure. Ain’t that the truth.

Thanks for reading! Follow me on Mastodon and Bluesky.
Subscribe to my Blog and Notes or Combined feeds.

David Bushell 1 week ago

Web font choice and loading strategy

When I rebuilt my website I took great care to optimise fonts for both performance and aesthetics. Fonts account for around 50% of my website (bytes downloaded on an empty cache). I designed and set a performance budget around my font usage. I use three distinct font families and three different methods to load them.

Web fonts are usually defined by the CSS @font-face rule. The font-display property allows us some control over how fonts are loaded. The swap value has become somewhat of a best practice — at least the most common default. The CSS spec says:

Gives the font face an extremely small block period (100ms or less is recommended in most cases) and an infinite swap period. In other words, the browser draws the text immediately with a fallback if the font face isn’t loaded, but swaps the font face in as soon as it loads.

CSS Fonts Module Level 4 - W3C

That small “block period”, if implemented by the browser, renders an invisible font temporarily to minimise FOUC. Personally I default to swap and don’t change unless there are noticeable or measurable issues.

Most of the time you’ll use swap. If you don’t know which option to use, go with swap. It allows you to use custom fonts and tip your hand to accessibility.

font-display for the Masses - Jeremy Wagner

Google Fonts defaults to swap, which has performance gains.

In effect, this makes the font files themselves asynchronous—the browser immediately displays our fallback text before swapping to the web font whenever it arrives. This means we’re not going to leave users looking at any invisible text (FOIT), which makes for both a faster and more pleasant experience.

Speed Up Google Fonts - Harry Roberts

Harry further notes that a suitable fallback is important, as I’ll discover below.

My three fonts in order of importance are:

Ahkio for headings. Its soft brush stroke style has a unique hand-drawn quality that remains open and legible. As of writing, I load three Ahkio weights at a combined 150 KB. That is outright greed!
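As a concrete sketch, this is roughly what such a @font-face rule looks like (file path and weight are illustrative, not the site’s actual code):

```css
@font-face {
  font-family: "Ahkio";
  src: url("/fonts/ahkio-bold.woff2") format("woff2");
  font-weight: 700;
  /* a longer block period hides fallback text while the font loads */
  font-display: block;
}
```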
Ahkio is core to my brand so it takes priority in my performance budget (and financial budget, for that matter!) Testing revealed the 100ms† block period was not enough to avoid FOUC, despite optimisation techniques like preload. Ahkio’s design is more condensed so any fallback can wrap headings over additional lines. This adds significant layout shift.

† The Chrome blog mentions a zero second block period. Firefox has a config preference default of 100ms.

My solution was to use block instead of swap, which extends the block period from a recommended 0–100ms up to a much longer 3000ms.

Gives the font face a short block period (3s is recommended in most cases) and an infinite swap period. In other words, the browser draws “invisible” text at first if it’s not loaded, but swaps the font face in as soon as it loads.

CSS Fonts Module Level 4 - W3C

This change was enough to avoid ugly FOUC under most conditions. Worst case scenario is three seconds of invisible headings. With my website’s core web vitals a “slow 4G” network can beat that by half. For my audience an extended block period is an acceptable trade-off. Hosting on an edge CDN with good cache headers helps minimise the cost.

Update: Richard Rutter suggested an approach that gives more fallback control than I knew. I shall experiment and report back!

Atkinson Hyperlegible Next for body copy. It’s classed as a grotesque sans-serif with interesting quirks such as a serif on the lowercase ‘i’. I chose this font for both its accessible design and technical implementation as a variable font. One file at 78 KB provides both weight and italic variable axes. This allows me to give links a subtle weight boost. For italics I just go full-lean.

I currently load Atkinson Hyperlegible with swap out of habit but I’m strongly considering why I don’t use fallback.

Gives the font face an extremely small block period (100ms or less is recommended in most cases) and a short swap period (3s is recommended in most cases).
In other words, the font face is rendered with a fallback at first if it’s not loaded, but it’s swapped in as soon as it loads. However, if too much time passes, the fallback will be used for the rest of the page’s lifetime instead.

CSS Fonts Module Level 4 - W3C

The browser can give up and presumably stop downloading the font. The spec actually says that fallback and optional “[must/should] only be used for small pieces of text.” Although it notes that most browsers implement the default with similar strategies to block.

0xProto for code snippets. If my use of Ahkio was greedy, this is gluttonous! A system default would be acceptable. My justification is that controlling presentation of code on a web development site is reasonable. 0xProto is designed for legibility with a personality that complements my design.

I don’t specify 0xProto with a CSS @font-face rule. Instead I use the JavaScript font loading API to conditionally load it when a code block is present. Note the name change because some browsers aren’t happy with a numeric first character. Not shown is the event wrapper around this code. I also load the script with both async and defer attributes. This tells the browser the script is non-critical and avoids render blocking. I could probably defer loading even later without readers noticing the font pop in.

Update: for clarity, browsers will already load declared fonts conditionally, but JavaScript can purposefully delay the loading further to avoid fighting for bandwidth. When JavaScript is not available the system default is fine.

There we have it: three fonts, three strategies, and a few open questions and decisions to make. Those may be answered when CrUX data catches up. My new website is a little chunkier than before but it’s well within reasonable limits. I’ll monitor performance and keep turning the dials. Web performance is about priorities. In isolation it’s impossible to say exactly how an individual asset should be loaded. There are upper limits, of course. How do you load a one megabyte font? You don’t.
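The conditional loading described above can be sketched with the CSS Font Loading API. Font name, file path, and selector here are assumptions, not the site’s exact code:

```javascript
// Sketch: conditionally load a code font with the CSS Font Loading API.

function pageHasCode(doc) {
  // Only pay the download cost when a code snippet is actually present
  return doc.querySelector('pre, code') !== null;
}

async function loadCodeFont() {
  // 'ZeroXProto' rather than '0xProto': some browsers reject
  // font-family names with a leading digit
  const face = new FontFace('ZeroXProto', "url('/fonts/0xproto.woff2')");
  await face.load();
  document.fonts.add(face);
}

// Guarded so the module is inert outside a browser environment
if (typeof document !== 'undefined' && pageHasCode(document)) {
  loadCodeFont();
}
```

The script itself would be loaded with async and defer so it never blocks rendering; without JavaScript the monospace system stack simply applies.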
Unless you’re a font studio providing a complete type specimen. But even then you could split the font and progressively load different unicode ranges. I wonder if anyone does that? Anyway, I’m rambling now, bye.

Thanks for reading! Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.

David Bushell 2 weeks ago

Declarative Dialog Menu with Invoker Commands

The off-canvas menu — aka the Hamburger, if you must — has been hot ever since Jobs invented the mobile web and Ethan Marcotte put a name to responsive design. Making an off-canvas menu free from heinous JavaScript has always been possible, but not ideal. I wrote up one technique for Smashing Magazine in 2013. Later I explored the idea in an absurdly titled post where I used the new Popover API.

I strongly push clients towards a simple, always visible, flex-box-wrapping list of links. Not least because leaving the subject unattended leads to a multi-level monstrosity. I also believe that good design and content strategy should allow users to navigate and complete primary goals without touching the “main menu”. However, I concede that Hamburgers are now mainstream UI. Jason Bradberry makes a compelling case.

This month I redesigned my website. Taking the menu off-canvas at all breakpoints was a painful decision. I’m still not at peace with it. I don’t like plain icons. To somewhat appease my anguish I added big bold “Menu” text. The HTML for the button is pure declarative goodness. I added an extra “open” prefix for assistive tech.

Side note: Ana Tudor asked do we still need all those “visually hidden” styles? I’m using them out of an abundance of caution but my feeling is that Ana is on to something.

The menu HTML is just as clean. It’s that simple! I’ve only removed the opinionated class names I use to draw the rest of the owl. I’ll explain more of my style choices later.

This technique uses the wonderful new Invoker Command API for interactivity. It is similar to the Popover API I mentioned earlier. With a real dialog we get free focus management and more, as Chris Coyier explains. I made a basic CodePen demo for the code above.

So here’s the bad news. Invoker commands are so new they must be polyfilled for old browsers. Good news: you don’t need a hefty script. Feature detection isn’t strictly necessary.
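A sketch along these lines covers the one dialog use case (this is illustrative, not my exact code — the invokerAction helper name is my own, and the command names follow the Invoker Commands draft):

```javascript
// Map a declarative command attribute to the dialog method it invokes.
// Only the two commands this menu needs are handled.
function invokerAction(command) {
  switch (command) {
    case 'show-modal': return 'showModal';
    case 'close': return 'close';
    default: return null;
  }
}

// Only wire up the fallback listener in browsers without native support.
if (typeof HTMLButtonElement !== 'undefined' && !('command' in HTMLButtonElement.prototype)) {
  document.addEventListener('click', (event) => {
    const button = event.target.closest('button[commandfor]');
    if (!button) return;
    const target = document.getElementById(button.getAttribute('commandfor'));
    const method = invokerAction(button.getAttribute('command'));
    if (target && method && typeof target[method] === 'function') {
      target[method]();
    }
  });
}
```

Browsers with native support skip the listener entirely, so behaviour stays consistent either way.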
Keith Cirkel has a more extensive polyfill if you need full API coverage like JavaScript events. My basic version overrides the declarative API with the JavaScript API for one specific use case, and the behaviour remains the same.

Let’s get into CSS by starting with my favourite: a strong contrast outline around buttons and links with room to breathe. This is not typically visible for pointer events. For other interactions like keyboard navigation it’s visible. The first button inside the dialog, i.e. “Close (menu)”, is naturally given focus by the browser (focus is ‘trapped’ inside the dialog). In most browsers focus remains invisible for pointer events. WebKit has a bug. When opening via script or invoker commands the focus style is visible on the close button for pointer events. This seems wrong, it’s inconsistent, and clients absolutely rage at seeing “ugly” focus — seriously, what is their problem?!

I think I’ve found a reliable ‘fix’. Please do not copy this untested. From my limited testing with Apple devices and macOS VoiceOver I found no adverse effects. Below I’ve expanded the ‘not open’ condition within the event listener. First I confirm the event is relevant. I can’t check for a specific event instance because of the handler. I’d have to listen for keyboard events and that gets murky. Then I check if the focused element has the visible style. If both conditions are true, I remove and reapply focus in a non-visible manner. The boolean option is Safari 18.4 onwards. Like I said: extreme caution! But I believe this fixes WebKit’s inconsistency. Feedback is very welcome. I’ll update here if concerns are raised.

Native dialog elements allow us to press the ESC key to dismiss them. What about clicking the backdrop? We must opt in to this behaviour with the closedby attribute. Chris Ferdinandi has written about this and the JavaScript fallback.

That’s enough JavaScript! My menu uses a combination of both basic CSS transitions and cross-document view transitions.
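To make the CSS concrete, the sort of setup I mean looks roughly like this (a simplified sketch of the common dialog transition pattern, not my exact production styles):

```css
/* Fade the dialog; allow-discrete lets display and overlay
   wait until the fade has finished */
dialog {
  opacity: 0;
  transition:
    opacity 200ms ease,
    display 200ms allow-discrete,
    overlay 200ms allow-discrete;
}

dialog[open] {
  opacity: 1;
}

/* The entry animation needs a starting style
   (I like my at-rules top level) */
@starting-style {
  dialog[open] {
    opacity: 0;
  }
}
```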
For on-page transitions I fade opacity in and out as an example. How you choose to use nesting selectors and at-rules is a matter of taste. I like my at-rules top level.

My menu also transitions out when a link is clicked. This does not trigger the closing dialog event. Instead the closing transition is mirrored by a cross-document view transition, which handles the fade out for page transitions. Note that I only transition the old view state for the closing menu. The new state is hidden (“off-canvas”). Technically it should be possible to use view transitions to achieve the on-page open and close effects too. I’ve personally found browsers to still be a little janky around view transitions — bugs, or skill issue?

It’s probably best to wrap a prefers-reduced-motion media query around transitions. “Reduced” is a significant word. It does not mean “no motion”. That said, I have no idea how to assess what is adequately reduced! No motion is a safe bet… I think?

So there we have it! A declarative dialog menu with invoker commands, topped with a medley of CSS transitions and a sprinkle of almost optional JavaScript. Aren’t modern web standards wonderful, when they work?

I can’t end this topic without mentioning Jim Nielsen’s menu. I won’t spoil the fun, take a look! When I realised how it works, my first reaction was “is that allowed?!” It works remarkably well for Jim’s blog. I don’t recall seeing that idea in the wild elsewhere.

Thanks for reading! Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.

David Bushell 2 weeks ago

Big Design, Bold Ideas

I’ve only gone and done it again! I redesigned my website. This is the eleventh major version. I dare say it’s my best attempt yet. There are similarities to what came before and plenty of fresh CSS paint to modernise the style. You can visit my time machine to see the ten previous designs that have graced my homepage. Almost two decades of work. What a journey!

I’ve been comfortable and coasting for years. This year feels different. I’ve made a career building for the open web. That is now under attack. Both my career, and the web. A rising sea of slop is drowning out all common sense. I’m seeing peers struggle to find work, others succumb to the chatbot psychosis. There is no good reason for such drastic change. Yet change is being forced by the AI industrial complex on its relentless path of destruction. I’m not shy about my stance on AI. No thanks! My new homepage doubles down. I won’t be forced to use AI but I can’t ignore it. Can’t ignore the harm. Also I just felt like a new look was due.

Last time I mocked up a concept in Adobe XD. Adobe is now unfashionable and Figma, although swank, has that Silicon Valley stench. Penpot is where the cool kids paint pretty pictures of websites. I’m somewhat of an artist myself so I gave Penpot a go.

My current brand began in 2016 and evolved in 2018. I loved the old design but the rigid layout didn’t afford much room to play with content. I spent a day pushing pixels and was quite chuffed with the results. I designed my bandit game in Penpot too (below). That gave me the confidence to move into real code.

I’m continuing with Atkinson Hyperlegible Next for body copy. I now license Ahkio for headings. I used Komika Title before but the all-caps was unwieldy. I’m too lazy to dig through backups to find my logotype source. If you know what font “David” is please tell me!

I worked with Axia Create on brand strategy. On that front, we’ll have more exciting news to share later in the year!
For now what I realised is that my audience here is technical. The days of small business owners seeking me are long gone. That market is served by Squarespace or Wix. It’s senior tech leads who are entrusted to find and recruit me, and peers within the industry who recommend me. This understanding gave me focus.

To illustrate why AI is lame I made an interactive mini-game! The slot machine metaphor should be self-explanatory. I figured a bit of comedy would drive home my AI policy. In the current economy if you don’t have a sparkle emoji is it even a website? The game is built with HTML canvas, web components, and synchronised events I over-complicated to ensure a unique set of prizes. The secret to high performance motion blur is to cheat with pre-rendered PNGs. In hindsight I could have cheated more with a video.

I commissioned Declan Chidlow to create a bespoke icon set. Declan delivered! The icons look so much better than the random assortment of placeholders I found. I’m glad I got a proper job done. I have neither the time nor skill for icons. Declan read my mind because I received an 88×31 web badge bonus gift. I had mocked up a few badges myself in Penpot. Scroll down to see them in the footer. Declan’s badge is first and my attempts follow. I haven’t quite nailed the pixel look yet.

My new menu is built using a native dialog with invoker commands and view transitions for a JavaScript-free experience. Modern web standards are so cool when they work together! I do have a tiny JS event listener to polyfill old browsers.

The pixellated footer gradient is done with a WebGL shader. I had big plans but after several hours and too many Stack Overflow tabs, I moved on to more important things. This may turn into something later but I doubt I’ll progress trying to learn WebGL.

Past features like my Wasm static search and speech synthesis remain on the relevant blog pages. I suspect I’ll be finding random one-off features I forgot to restyle.
My homepage ends with another strong message. The internet is dominated by US-based big tech. Before backing powers across the Atlantic, consider UK and EU alternatives. The web begins at home. I remain open to working with clients and collaborators worldwide. I use some ‘big tech’ but I’m making an effort to push for European alternatives. US-based tech does not automatically mean “bad” but the absolute worst is certainly thriving there! Yeah I’m English, far from the smartest kind of European, but I try my best. I’ve been fortunate to find work despite the AI threat. I’m optimistic and I refuse to back down from calling out slop for what it is! I strongly believe others still care about a job well done. I very much doubt the touted “10x productivity” is resulting in 10x profits. The way I see it, I’m cheaper, better, and more ethical than subsidised slop. Let me know on the socials if you love or hate my new design :) P.S. I published this Sunday because Heisenbugs only appear in production. Thanks for reading! Follow me on Mastodon and Bluesky . Subscribe to my Blog and Notes or Combined feeds.

David Bushell 1 month ago

Mozilla Slopaganda

Mozilla published a new State of Mozilla. It’s absolute slopaganda. A mess of trippy visuals and corpo-speak that’s been through the slop wringer too many times. I read it so you don’t have to.

⚠️ Warning: the State of Mozilla website has flashing graphics and janky animations.

The website opens with a faux terminal console that logs some nonsense that’s too fast to read. It includes a weird line about “synth-mushroom pop”. This is not the only bizarre reference to magic mushrooms as we’ll see later. What other connection am I missing here? If you’re lucky, next you’ll be treated to a fake CAPTCHA including the words “HU MAN”, “INT ERNET”, and “FU TUR E”.

The leading headline is:

Doing for AI what we did for the web

Which I take to mean, lighting a fire under Microsoft and then fading into obscurity. They directly call out Microsoft later. Back then Mozilla had a competitive web product: Firefox. Mozilla has no AI product today to do anything.

Mozilla’s opening sales pitch — which is certified slop — includes this bullet point:

We Choose a Different Economic Model: A ‘double bottom line’ — advancing our mission and shaping markets — continues to guide everything we do.

If I may direct you to Mozilla’s financial report PDF:

Approximately 86% and 85% of Mozilla’s revenues from customers with contracts were derived from one customer for the years ended December 31, 2024 and 2023, respectively.
2024 Audited Financial Statement (page 19)

That one customer is of course: Google. I wonder why Mozilla are painting Microsoft as the enemy? Mozilla’s idea of a double bottom line is explained further on the “LEDGER” page †. Although extracting meaning from the words is beyond me.

† I can’t link to anything because of the stupid splash screen and CAPTCHAs.

The stakes section is wild. We’re asked to choose one of two futures. Future A is choosing Microsoft. This is represented by hot people making out and a creepy robot driver.
Twenty-five years ago, when Microsoft controlled 95% of browsers, they defined how people accessed information, who could build what, and on what terms.

Mozilla did topple Microsoft’s Internet Explorer reign. Then both lost the browser war to Google’s Chrome. But no mention of Google; can’t bite the hand that feeds you. Future B is choosing Mozilla. And more shrooms, apparently.

Imagine it’s 2030. You wake up and your digital world feels different — familiar, but freer.

This future is AI fan fiction because Mozilla doesn’t actually have any AI products. So which future do you choose, dogging — don’t google it, or shrooms? Seriously, what is with the mushroom references? Is this part of the edgy “fellow kids” rebranding?

The page is also adorned with a marquee repeating: “DO NOT ACCEPT DEFAULT SETTINGS”. This must be a subtle reference to Firefox’s on-by-default telemetry, “privacy-preserving” tracking, Google search, and new AI integration? This section is slop.

If the future of AI and the web are still up for grabs, the tools we build - our products, programs and investments - are our most powerful levers in shaping how things work out.

So surely Mozilla has an AI product to showcase now, right? Wrong. They have Firefox and Thunderbird. It’s nice they remembered about Thunderbird though that wasn’t a given. The rest of the page is vague vacuous promises about AI investment. Who is this written for, the ever expanding board and c-suite?

Mozilla was born to challenge a tech monopoly […]

— just not papa Google’s —

[…] and we succeeded not by becoming a significant player in the browser market […]

— again leaning on history. Firefox has not been “significant” in years. Mozilla leadership has watched Firefox market share plummet and been helpless. Even if I was an AI lover, Mozilla has nothing to compete in the AI space. They just say things. With mushrooms.

We’re focusing our ~$1.4B in reserves

Mozilla claim $1.4 billion in reserves (and no debt).
They’re funded by over half a billy annually from Google.

👏 Stop 👏 donating 👏 to 👏 Mozilla 👏

Mozilla is the same Big Tech they pretend to rebel against. Donate your money to a worthy open source independent project before it’s drowned by slop.

State of Mozilla ends by covering the same empty AI-infused promises. Mozilla talk about the past because that’s all they have. Mozilla fantasise about the future because they have nothing in the present. My favourite part:

Also, launch “AI controls” into Firefox, giving people a clear way to turn AI off entirely - current and future AI features.

One of the few clear deliverable goals is an AI “kill switch”. Mozilla aren’t exactly sure what their AI future will be but at least you can say no. That’s something!

As fun as it is to rib on Mozilla, the web needs Firefox. I feel for the Firefox developers who actually care. State of Mozilla will inspire no one. The sloppy prose is borderline unreadable. The presentation is designed to stop you reading.

THE FUTURE IS EXPERIMENTAL SYNTH-MUSHROOM POP

The future is Microsoft.

Thanks for reading! Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.

David Bushell 1 month ago

AI Policy and The Inevitable

I’ve made minor updates to my AI Policy but you probably don’t care because you’re tired of reading criticism. You’ve dismissed all that because AI is inevitable. If you do care, and you should, you are not taking crazy pills! Billions pumped into the relentless marketing machine make for a very effective counter to one’s sanity.

This post is primarily written towards web and software developers. All em-dashes were written by a human.

The reality is you just don’t care about the harm done by the AI industrial complex. You only care about the perceived harm to your ego and reputation. So you compartmentalise because pulling the one-armed code bandit is a rush.

I’m not an “AI hater” despite calling myself that once. I don’t affiliate myself with any anti-AI movement (if those exist). I’m just looking at reality and seeing two things. None of this stuff works half as well as promised. Seems to me the hallucinations have spread from machine to human. Either that or I just have higher standards. It’s easy to be confused until you notice a high percentage of proponents are in on selling it. What bothers me most though is the way it’s made and sold. That’s the brunt of my AI Policy. But you don’t care, so moving on.

I have no desire whatsoever to make a career babysitting a chat box — excuse me, Orchestration Engineering — regardless of how well it works. Sounds like mind-numbing drudgery by all accounts I’ve read that claim it’s their new nine-to-five. Last time I said no thanks I was mocked by one guy on Hacker News — well played sir — the implication being that I was entitled. That I should suck it up and accept change. Must I? In this timeline, wouldn’t further advancements in AI only increase the menialness of my labour? I’m sure that’ll pay well. Or am I supposed to be selling the AI too — that could be my misunderstanding here. What does the future hold for all of us?
Spurious claims of “90% of my code is written by AI” and “AI has made me 10x more productive” are only ever backed up by vibes. Question: what’s your methodology, what metrics are you measuring? Do you have any comparable numbers prior to AI? Are we simply counting green squares on the GitHub contribution chart?

Let’s put aside doubts around current quality and results. What about tomorrow? There’s no guarantee AI will get better. Training costs are exponential. The world’s data has been vacuumed and laundered. We’re seeing new AI products look suspiciously like old models wrapped in a trench coat. Stagnation is a real possibility. AI companies are teetering on the edge of a fiscal event horizon, and that’s the optimistic view.

Nothing is ever said about the financial barrier to entry. The $200 per month heavily subsidised subscription excludes all but the privileged. I can already hear replies of “It’ll pay for itself with 10x productivity” — show me your 10x profits — “Costs will come down” — they’re going up. Let’s be honest this is white tech for the one percent.

The only inevitability I see is that you’ll continue to ignore very real criticism because the train is leaving; buy a first class ticket or get left behind. What about those adversely affected? Sorry, I can’t hear you! Choo Choooo! Next stop: Gas Town.

As things stand clients still can’t pay me to prompt. That’s all covered by my AI Policy. I moved the “Morals and Ethics” category first and extended it with recent events. I added “Economy” alongside “Employment and Education”. A few more additions were made elsewhere.

My policy is not intended to be an academic essay. Some statements may need better sources. If you genuinely think I’m misinformed on certain topics I’m tentatively open to feedback. Please don’t @ me about energy usage, linking napkin math that ignores half of it. Yes my rhetoric is colourful at times.
I’m not obliged to be civil when force-fed by the billion dollar machine that won’t take “no” for an answer — that wasn’t about one email, by the way. Taking a neutral and friendly stance on AI is not the middle ground when the scales are tipped by economic powers trying to destroy my career. AI criticism is not a personal attack. That is unless AI has replaced your personality, as well as your code. To my fellow level-headed developers, stay strong!

Thanks for reading! Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.

David Bushell 1 month ago

Hmmarkdown 2

Everyone has an opinion on markdown but why stop there? Write your own parser, make those opinions reality! That’s what I did with Hmmarkdown — my HTML-aware markdown library. It has built my website content for the past year. Turns out parsing markdown (with HTML) isn’t easy. My original approach evolved into a game of whac-a-mole to quash edge case bugs.

Last week I began a new parsing experiment I’d been mulling over. The idea proved workable and I finished the job. Hmmarkdown 2 was born! The new codebase is still rough around the edges but already an upgrade. I used my original test suite to ensure the same output. My primary goal was a more maintainable, extendable, and faster library, which it should be soon.

Markdown is best in its simplest form. Complex extensions to the syntax make no sense. If HTML is the target and easier to markup, just write HTML! Existing markdown libraries allow HTML but they skip past it. The purpose of Hmmarkdown is to allow me to write primarily markdown but interweave HTML where it makes sense. Along with my original example, here’s a common pattern I use: The mix of HTML and markdown above is transformed into the HTML below. In practice I mix little markup but it’s extremely useful to have the ability. Was this worth the investment? Probably not, but I’m in too deep!

My old parser separated the input into lines, then grouped lines by block: paragraph, blockquote, heading, list, etc. Those blocks were then parsed as HTML (crudely). Text nodes were parsed for inline markdown: links, bold, italic, etc. A subset of block-level HTML elements were passed back through the parser. Regular expressions did the heavy lifting. That was the old architecture. It worked but it got messy.

The new parser begins with a more traditional tokenizer. The tokenizer iterates the input character by character to generate an array of tokens, namely: exclamation marks, parentheses, square brackets, angle brackets, plus signs †, and Tag and Text tokens.

† Wait a minute, “+” is not markdown!
I’ll explain later… Every token is a single ASCII character except for Tag and Text. Tag tokens are HTML tags. Text tokens are a unicode string of everything else. From the tokens, I generate a basic DOM-like tree with a “root” node and child tokens. ‡

‡ I ignore carriage returns (macOS user), they’re probably dealt with in Text nodes and eventually trimmed?

Using my HTML + markdown input example, the initial token tree state is a flat list of children under the root. With this tree I recursively parse the open tag nodes where the tag name is in an allowed set. This lets me ignore hard-coded HTML tags that I never want to parse or modify.

Using one tag node as an example, I iterate the children to generate a new array of children. Plain tokens are appended to the new array unchanged. When a token that could begin markdown syntax is found, a matching function is called. It will return a node (or nothing, for false positives). If that fails the next matcher is tried. If nothing matches the tokens are appended without change. In this example the asterisks from the original input match bold formatting and the tree state updates accordingly.

The next step wraps text and inline tags with HTML paragraphs. This is probably the ropiest area of my code but it works (mostly). Newline tokens play a key role and they’re removed at this stage. Excess whitespace is also trimmed. Next I merge adjacent text tokens before applying SmartyPants replacement, and finally HTML entities are escaped. In this example because the exclamation token did not match markdown image syntax it is merged as text.

The final HTML output is generated by a simple recursive function over tree nodes to build a string. HTML attributes are never parsed; they just come along for the ride.

And that is how Hmmarkdown 2 works! Or at least should work. We’ll see if any formatting bugs appear on my website. The new tokenizer approach means I can largely avoid regex. The supported markdown syntax is still punishingly strict and opinionated.
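The character-by-character pass can be pictured with a sketch like this (the token shapes and the special character set here are illustrative, and real Tag token handling is skipped):

```javascript
// Illustrative tokenizer sketch: single special ASCII characters become
// their own tokens; everything else accumulates into Text tokens.
// (Hmmarkdown also emits Tag tokens for HTML tags, skipped here.)
const special = new Set(['!', '(', ')', '[', ']', '<', '>', '*', '+', '\n']);

function tokenize(input) {
  const tokens = [];
  let text = '';
  for (const char of input) {
    if (special.has(char)) {
      // Flush any accumulated text as a single Text token first
      if (text) tokens.push({ type: 'Text', value: text });
      text = '';
      tokens.push({ type: 'Char', value: char });
    } else {
      text += char;
    }
  }
  if (text) tokens.push({ type: 'Text', value: text });
  return tokens;
}
```

From a token array like this the parser can build the root tree and run the look-ahead matching described above.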
The code repo is public but I have plenty to tidy up and optimise. There is no validation nor error reporting. I wouldn’t advise using it unless you’re me!

So about that “+” token. Whereas unordered lists start with a single marker character, ordered lists are written with numeric prefixes like “1.” followed by “2.”. In fact, the numeric order and value doesn’t matter to markdown. Both examples should output identical items marked 1 and 2 sequentially. (Some libraries do add a start attribute. That’s a thing I don’t need.)

Anyway, this is a pain to parse. I would need eleven additional tokens for the digits 0 to 9 and the period. That adds overhead and a lot of false positives in the look-ahead matching. To avoid this entirely I do a cheeky bit of regex pre-processing. List lines are replaced with a “+” marker before I tokenize. Now ordered and unordered lists share the exact same parsing logic.

Hmmarkdown has never supported nested lists because in over a decade of blogging I’ve never nested a list. That saves me another headache.

Tune in next year when I throw this all away and announce Hmmarkdown 3!

Thanks for reading! Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.

David Bushell 1 month ago

Proton Spam and the AI Consent Problem

On Jan 14th Proton sent out an email newsletter with the subject line:

Introducing Projects - Try Lumo’s powerful new feature now

Lumo is Proton’s “AI” offering. There is a problem with this email. And I’m not talking about the question of how exactly AI aligns with Proton’s core values of privacy and security. The problem is I had already explicitly opted out of Lumo emails. That toggle for “Lumo product updates” is unchecked. Lumo is the only topic I’m not subscribed to. Proton has over a dozen newsletters, including some crypto nonsense. I opt in to everything but Lumo. I gave an undeniable no to Lumo emails.

So the email I received from Proton is spam, right? My understanding is that spam is a violation of GDPR and UK data protection laws. Regardless, Proton’s email is a clear abuse of their own service towards a paying business customer.

Before grabbing my pitchfork I emailed Proton support. Despite the subject line and contents, and despite the “From Lumo” name and address, maybe this was an honest mistake? Proton’s first reply explained how to opt out:

Hello David,
Thank you for contacting us. You can unsubscribe from the newsletters if you do the following:
- Log in to your account at https://account.protonvpn.com/login
- Navigate to the Account category
- Disable the check-marks under “Email subscriptions”
- If you need additional assistance, let me know.
[screenshot of the same opt-out toggle]
Have a nice day.
John

Support directs me to the exact same “Lumo product updates” toggle I had already unchecked. I replied explaining that I had already opted out. Support replies saying they’re “checking this with the team” then later replies again asking for screenshots:

Can you make sure to send me a screenshot of this newsletter option disabled, as well as the date when the last message was sent to you regarding the Lumo offer? You can send me a screenshot of the whole message, including the date. Is it perhaps 14 January 2026 that you received the message?
I found that last line curious, are they dealing with other unhappy customers? Maybe I’m reading too much into it. I sent the screenshots and signed off with “Don’t try to pretend this fits into another newsletter category.” After more “checking this with the team” I got a response today:

In this case, the mentioned newsletter is for promoting Lumo Business Suit to Business-related plans. Hence, why you received it, as Product Updates and Email Subscription are two different things. In the subscription section, you will see the “Email Subscription” category, where you can disable the newsletter in order to avoid getting it in the future.

If I understand correctly, Proton are claiming this email is the “Proton for Business newsletter”. Not the “Lumo product updates” newsletter. I don’t know about you, but I think that’s baloney. Proton Support had five full business days to come up with a better excuse. Please tell me, how can I have been any more explicit about opting out of Lumo emails, only to receive “Try Lumo” “From Lumo”, and be told that is not actually a Lumo email?

Has anyone else noticed that the AI industry can’t take “no” for an answer? AI is being force-fed into every corner of tech. It’s unfathomable to them that some of us aren’t interested. The entire AI industry is built upon a common principle of non-consent. They laugh in the face of IP and copyright law. AI bots DDoS websites and lie about user-agents. Can it get worse than the sickening actions of Grok? I dread to think.

As Proton has demonstrated above, and Mozilla/Firefox recently too, the AI industry simply will not accept “no” as an answer. Some examples like spam are more trivial than others, but the growing trend is vile and disturbing. I do not want your AI.

Thanks for reading! Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.

David Bushell 4 months ago

Better Alt Text

It’s been a rare week where I was able to (mostly) ignore client comms and do whatever I wanted! That means perusing my “todo” list, scoffing at past me for believing I’d ever do half of it, and plucking out a gem. One of those gems was a link to “Developing an alt text button for images on [James’ Coffee Blog]”. I like this feature. I want it on my blog!

My blog wraps images and videos in a figure element with an optional caption. How to add visible alt text? I decided to use a declarative popover. I used popover for my glossary web component but that implementation required JavaScript. This new feature can be done script-free!

Below is an example of the end result. Click the “ALT” button to reveal the text popover (unless you’re in RSS land, in which case visit the example, and if you’re not in Chrome, see below).

To implement this I appended two extra elements with the declarative popover attributes after the image. I generate unique popover and anchor names in my build script. I can’t define them as inline custom properties because of my locked down content security policy. Instead I use the attr() function in CSS. Anchor positioning allows me to place these elements over the image. I could have used absolute positioning inside the figure if not for the caption extending the parent block.

Sadly, using anchor positioning means only one thing… my visible alt text feature is Chrome-only! I’ll pray for Interop 2026 salvation and call it progressive enhancement for now.

To position the popover I first tried the default anchor placement but that sits the popover around/outside the image. Instead I need to sit inside/above the image. The position-area property allows that. The button is positioned in a similar way.

Aside from being Chrome-only I think this is a cool feature. Last time I tried to use anchor positioning I almost cried in frustration… so this was a success! It will force me to write better alt text. How do I write alt text good? Advice is welcome.

Thanks for reading! Follow me on Mastodon and Bluesky.
Subscribe to my Blog and Notes or Combined feeds.

David Bushell 4 months ago

Croissant Favicons and Tauri Troubles

Croissant v0.4 is out! I fixed a few minor bugs and added favicons. I’ve had a surprising amount of feedback. I wasn’t expecting anyone to care about an app I designed for myself. Thanks to all who managed to navigate my contact form.

Croissant’s design philosophy is vague because I’m just making it up as I go along. Essentially it’s an experiment in keeping it simple. Not “MVP” because MVP is nonsense — and not “minimalism” because that does not mean good. Croissant is just basic and unbloated.

The three most requested features have been folders, bookmarks, and favicons.

Folders is never going to happen, sorry! That would literally double the codebase for a feature I’d never use myself but have to maintain.

Bookmarks is possible. Croissant is just a reader not an organisation tool but I see the value of “read later”. Not sure how this will work yet. I do not want to build a bookmark manager.

Favicons has happened! When I declared “no icons” I was talking about the craze of UI icons everywhere. Icons without labels! Meaningless béziers from self-important designers that leave the rest of us saying “WTF does this button do?” Favicons actually serve a purpose and improve the design.

Favicons are a simple feature but were not easy to implement. Tauri is causing me headaches. I’m starting to rethink if I should continue the native app wrapper or focus solely on the PWA. The web platform always wins. How many cautionary tales must I read before I accept the truth! Why am I wasting time debugging Tauri and Apple’s webview for issues that don’t even exist in Safari? Wasted time and energy. I’m accruing non-transferable knowledge in my (very) limited brain capacity. Croissant v0.4 might be the last native macOS version. It only exists because the PWA requires a server proxy (CORS) that has privacy concerns. Maybe I can add a “bring your own proxy” feature?

Podcast feeds include an image tag but basic RSS does not. There are standardised ways to provide an image/icon with HTML link tags and web manifests.
These both require parsing the website’s HTML to discover. I’m relying on “known” root locations; namely: These locations aren’t required for either icon but browsers check there by default so it’s a good place to guess. For the 200-ish blogs I subscribe to I get a ~65% success rate. Not ideal but good enough for now. I really want to avoid HTML spelunking but I may have to. Expect an improvement in the next update. For now a croissant emoji is used for missing icons.

I’m using the OffscreenCanvas API to generate a standard image size to cache locally. Favicons are currently cached for a week before refreshing. First I tried using a service worker to cache. Tauri was not happy. Second I tried using OPFS with the File System API. Tauri was not happy. I dumped Base64 into local storage and Tauri was OK with that but I wasn’t, because that’s horrible. Finally I went back to IndexedDB which is perfectly happy storing binary blobs.

So you can see why Tauri is on thin ice! I don’t want tech choices dictating what parts of the web platform I can use without jumping through non-standard hurdles. That’s all for now. I hope to have another update this year!

Visit CroissantRSS.com to download or install Croissant as a progressive web app. Oh yeah… and I kinda messed up deployment of the PWA service worker so you may need to back up, remove, and reinstall… sorry! Thanks for reading! Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.
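The “known locations” guessing and weekly cache refresh described in the post above could be sketched roughly like this. The specific paths and helper names are my assumptions; the post doesn’t list the exact locations Croissant checks:

```javascript
// Default spots browsers probe for favicons and Apple touch icons.
// These exact paths are an assumption, not Croissant's actual list.
const KNOWN_ICON_PATHS = [
  "/favicon.ico",
  "/favicon.png",
  "/favicon.svg",
  "/apple-touch-icon.png",
];

// Build candidate icon URLs for a feed's website without parsing HTML.
function iconCandidates(siteUrl) {
  const origin = new URL(siteUrl).origin;
  return KNOWN_ICON_PATHS.map((path) => origin + path);
}

// Expiry check for the "cached for a week before refreshing" behaviour.
const WEEK_MS = 7 * 24 * 60 * 60 * 1000;

function isStale(cachedAtMs, nowMs = Date.now()) {
  return nowMs - cachedAtMs > WEEK_MS;
}
```

Candidates that fail to fetch would fall back to the croissant emoji; anything found would be resized via OffscreenCanvas and stored in IndexedDB as a binary blob.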

0 views
David Bushell 4 months ago

RSS Club #004: Ghost of Autumn

The summer solstice has long passed which means it’s Christmas soon if the local supermarkets are to be believed. I refuse to eat a mince pie before November at the earliest. Daylight saving time will come to an end (impossible to know exactly when). For the UK that means dark mornings, dark evenings, and grey skies around noon. It’s time to hibernate! I can recommend a bit of entertainment to while away the winter.

Imperium by Robert Harris is the first book of the Cicero trilogy. Although fiction, this novel is based upon real events at the end of the Roman Republic. The series follows the political career of Marcus Tullius Cicero. A fascinating era of human history.

Move over Wordle, Connections is the new daily brain teaser. New to me anyway. If the puzzle numbers are anything to go by it’s been around for years. Presumably inspired by the Connecting Wall you must make 4 groups from 16 words. Green is supposed to be the most obvious but I keep finding the blue group first.

I’ve just finished playing Ghost of Yōtei the spiritual sequel to Ghost of Tsushima. If I rated Tsushima 5 stars I’d give Yōtei 4 stars. I achieved the platinum trophy for 100% completion in both games. The game is beautifully designed and fun to explore. Fair warning: moderate spoilers ahead.

Yōtei is a great game but the story doesn’t hit the same emotional level as Tsushima. The ending fell flat for me and overstayed its welcome. The antagonists progressively lost their mystique until they became boring. Their repeated escapes were eye-rolling. It made Atsu look dumb and the Matsumae clan comically inept. Plot points are forced and pacing is criminally ruined by bad open world design. They front-load the starting area with the most side activities and then almost immediately move the main quest elsewhere. Ignore content, or ignore story? You can fast travel back and forth of course but it ruins the immersion. I played 20 hours and only saw two cutscenes.
Most side characters are relegated to vendor NPC level which was disappointing. I’m left confused as to what purpose the wolf served? Despite these issues it was an experience worthy of the hours invested. As we know the true game is finding the tengai hat and fundoshi armour and terrifying the local samurai. I’m afraid I did not dare witness the final cutscene in this attire. Thanks for reading! Follow me on Mastodon and Bluesky . Subscribe to my Blog and Notes or Combined feeds.

0 views
David Bushell 4 months ago

What is a Linux?

Do you build websites like me? Maybe you’re an Apple user, by fandom, or the fact that macOS is not Windows. You’ve probably heard about this Linux thing. But what is it? In this post I try to explain what is a Linux and how Linux does what it be. ⚠️ This will be a blog post where the minutest of details is well actually-ied by orange site dwelling vultures. I’ll do my best to remain factual. At a high level Linux is best described as an OS (operating system) like Windows or macOS. Where Linux differs is that its components are all open source. Open source refers to the source code. Linux code is freely available. “Free” can mean gratis ; without payment. But open source licenses like GPL and MIT explicitly allow the sale of software. “Free” can also mean libre ; unrestricted, allowing users to modify and redistribute the code. Linux software is typically both free and free. You may see acronyms like OSS (open source software), and FOSS/FLOSS (free/libre and open source software), emphasising a more liberal ideology. Some believe that non-free JavaScript is nothing short of malware forced upon users. Think about the sins you’ve committed with JavaScript and ask yourself: are they wrong? Linux and OSS is a wonderful can of worms with polarising opinions. We can break down Linux usage into three categories. Linux can be “headless” meaning there is no desktop GUI. Headless systems are operated via the command line and keyboard (except for the occasional web control panel). This is the backbone of the Internet. The vast majority of web servers are headless Linux. “Desktop Linux” refers to the less nerdy experience of using a GUI with a mouse. Linux has never done well in this category. Depending on whom you ask, Windows (with a capital W) dominates. Steam survey puts Windows at 95% for gaming. Other sources are more favourable towards macOS reporting upwards of 15%. Linux is niche for desktop. Some will claim success for Linux in the guise of Android OS . 
Although technically based on Linux, much of Android and Google’s success is antithetical to FOSS principles. SteamOS from Valve is a gaming Linux distro making moves in this category.

Embedded systems are things like factory robots, orbital satellites, smart fridges, fast food kiosks, etc. There’s a good chance these devices run Linux. If it’s Windows you’ll know by the blue screen and horrendous input latency. That was four categories, sorry.

Linux is not one operating system but many serving different requirements. If Bill Gates created Windows and Steve Jobs oversaw macOS, who’s the Linux mastermind? Linux is named after Linus Torvalds, who is still the lead developer of the Linux kernel. But there is no Microsoft or Apple of Linux. Due to its open source nature, Linux is more like a collection of interchangeable pieces working together.

There is no default Linux install. You must choose a distribution like a starter Pokémon. Linux distros differ in their choice of core pieces like:

- The Linux kernel
- A package manager
- A boot and init system
- Network utilities
- Desktop experience

The Linux kernel includes the low-level services common to all Linux systems. The kernel also has drivers for hardware and file systems. Each distro typically compiles its own kernel which means hardware support can vary out of the box. It’s possible to recompile the kernel to include modules specific to your needs.

Linux distros can exist for niche and specialised use cases. OpenWrt is a distro for network devices like wireless routers. DietPi is a lightweight choice for single board computers (a favourite of mine). Distros exist for seasoned nerds. Gentoo Linux is compiled from source with highly specific optimisation flags. NixOS provides an immutable base with declarative builds. If no distro meets your requirements, why not build Linux from scratch? You can find all sorts of weird and wonderful distros on Distro Watch. If you consult the distro timeline on Wikipedia you can see an extensive hierarchy. It’s overwhelming!

Know that most are hobbyist projects not maintained for long. They’re nothing more than pre-installed software, opinionated settings, and a wallpaper. Distros like Debian and Arch Linux offer a more generalised OS. They provide the base for most commonly used distros. RHEL (Red Hat Enterprise Linux) also exists for the corporate world. From Debian comes Ubuntu and Raspberry Pi OS. Ubuntu desktop is by far the most popular distro for day-to-day use. Ubuntu makes significant changes to Debian and provides its own downstream package repository.

Where should you start? You’ll get some crazy bad answers. Just try Ubuntu. It has the “network effect” and you’re more likely to find support online. This advice is likely to elicit the most comments!

Desktop Linux can look wildly different across distros. There is no universal desktop GUI like you’d find on Windows or macOS. KDE offers the classic Windows-like experience. Gnome is more akin to macOS. XFCE is a lightweight option. Hyprland strips back the GUI using a tiled window presentation.

There is a shortage of design and accessibility expertise within FOSS. Linux can be ugly and inaccessible at times. If you like design perfection Linux can make your eye twitch. On the plus side, you’re not stuck with a vendor-locked experience. Desktop environments provide hundreds of dials to customise their appearance. Want a start menu? You can add one! Hate the dock? Remove it! Some parts of Linux are even styled and scripted with CSS and JavaScript.

Distros come with a package manager (think NPM). This is the main source of system updates and software. On Debian-based systems you’ll find commands. Arch-based systems use . Distros may include a custom GUI and auto-update feature for those scared of the command line. Linux has multiple upstream package repositories. If you run on Debian you’ll get an old version (politely referred to as: “stable”).

In comparison, running on Arch gives you the cutting edge, likely compiled from GitHub last night. Remember that almost everything around Linux is open source. You’re free to compile and install software from anywhere. Software maintainers often provide an install script. See Node.js for example: To download and immediately execute a script from the Internet is insanely insecure! You’re supposed to vet the code first but nobody does.

Every Linux system is different so software support can be tricky. Containerised software has become a popular distribution method to solve compatibility issues. Flatpak is the leading choice and Flathub is a bountiful app store. AppImage is a similar project. Ubuntu is trying to make Snaps happen in this space.

Hopefully I’ve explained what Linux is! But is it for you? Linux can be a great OS if you’re a web developer writing code. All the familiar tools should be available. If you like to tinker, Linux will be a never-ending source of weekend projects. Linux has unrivalled backwards compatibility and avoids the comparable bloat of Windows and macOS. Older hardware can feel surprisingly fresh under Linux.

If you require access to proprietary design software like the Adobe suite you’re out of luck. This is why I’m stuck on macOS for my day job. Clients love to deliver vendor lock-in with their designs. There are often 3rd-party workarounds for apps like Figma. Unofficial apps are always buggy and prone to breakage.

Both the best and worst parts of Linux come down to too much choice. Everything can be modified, replaced, improved, and broken. I’ll end before this turns into a book. Let me know if you found this informative! Thanks for reading! Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.

1 views
David Bushell 4 months ago

Email: The Final Form

I’m taking one last stab at my contact form before it’s binned forever. Back in May I came up with a “progressive dehancement” technique to validate form submissions. For non-JavaScript ‘users’ I took the drastic measure of blocking all @gmail and @outlook addresses. Along with problem phrases like “SEO” and entire unicode ranges. Despite my best effort to maintain a JavaScript-free form I failed. Bots have ruined the web!

I’ve done the unthinkable and added a CAPTCHA thing. I chose the invisible Cloudflare Turnstile. It seems to be accessible and the least intrusive. I’m using to add the three worst words in web development. At least it’s not “Please use Chrome”. That’s worse I suppose.

Cloudflare is a controversial choice not least because some employees sponsor unsavoury projects. Is it time to deflare? I degoogled years ago. I degithubed a few months ago. I mostly deflared moving to Bunny CDN but my contact form remains on Cloudflare. Why is it so difficult to find reputable services? Just don’t be a bad guy!

I considered rolling my own invisible CAPTCHA thing. It’s security by obscurity when you think about it. Over summer I had a go at building Anubis at home. The “proof of work” is really just “proof of JavaScript” in practice. I don’t want JavaScript gatekeeping my entire website. Cloudflare’s secret sauce will be some combination of browser detection and heuristics.

If this fails I’m tapping out and you can email me directly. It’s weird that my address is unobscured yet I get more spam via the contact form! Update: Note for October 11th

Thanks for reading! Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.
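For anyone following along, server-side verification of the Turnstile token from the post above goes through Cloudflare’s siteverify endpoint. The endpoint URL and the `secret`/`response` field names are Cloudflare’s; the function names are my own sketch:

```javascript
// Cloudflare Turnstile server-side verification sketch.
const SITEVERIFY_URL =
  "https://challenges.cloudflare.com/turnstile/v0/siteverify";

// Build the form-encoded body (pure, easy to test).
function siteverifyBody(secret, token) {
  const body = new URLSearchParams();
  body.set("secret", secret);
  body.set("response", token);
  return body.toString();
}

// Verify the token the invisible widget added to the form submission.
async function verifyTurnstile(secret, token) {
  const res = await fetch(SITEVERIFY_URL, {
    method: "POST",
    headers: { "content-type": "application/x-www-form-urlencoded" },
    body: siteverifyBody(secret, token),
  });
  const data = await res.json();
  return data.success === true;
}
```

The form handler would reject the submission whenever `verifyTurnstile` returns false, keeping the old address and phrase checks as a second line of defence.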

0 views
David Bushell 5 months ago

Not My Cup of Tea

As blog topics go, last week’s Next.js clownshow was a freebie. A bit of front-end dev and a middle finger to fascism. I had it drafted before my tea went cold. I wasn’t expecting round two to hit harder. This week I momentarily lost the will to blog. Ok, I’m being very dramatic. It wasn’t writer’s block or imposter syndrome. Not the usual suspects. I was just stun-locked for a couple of days.

Like any self-respecting person with an ounce of sanity, I’ve been off Twitter X since it got all fashy. Nevertheless, it’s impossible to avoid the crazy stuff. And this week’s crazy was another level.

Enjoyed my discussion with PM Netanyahu on how AI education and literacy will keep our free societies ahead. We spoke about AI empowering everyone to build software and the importance of ensuring it serves quality and progress. Optimistic for peace, safety, and greatness for Israel and its neighbors. @rauchg Sep 29, 2025 (xcancel) - Guillermo Rauch

On seeing this I noted in anger followed by a hastily worded social post: I wonder if @svelte.dev is onboard with this? The obvious answer is: ‘no’. Everything I already know suggests the Svelte maintainers are antithetical to Rauch. My words were clumsy not malicious. I was merely wondering what on earth does Svelte do? WTF does anyone do when a major funding source does… that?

A quick catch-up for those unaware:

- Guillermo Rauch is CEO of Vercel
- Netanyahu is a wanted war criminal
- Vercel funds Svelte and employs creator Rich Harris
- Svelte remains open governance; not owned by Vercel

In the wake of this mess some people were keen to remind everyone of Svelte’s independence. Rich Harris and others were lost for words. Can you blame them? I called out Svelte specifically because it’s the one project with ties to Vercel I care about. Svelte is a shining light in a rather bleak JavaScript ecosystem. I try to avoid political discussion online. I’m a little ham-fisted in questioning the ethics of my tech stack. Some argue that taking Vercel’s money has moral baggage. That it makes the recipient complicit in brand-washing. Personally I’m not sure what to think.

To cut ties with Vercel would be morally courageous. Then what? There’s no easy alternative to keep the lights on. The day after Rauch’s infamous selfie Vercel announced $300M in series F funding. The “F” stands for “f*ck you” I’m told. Vercel’s pivot to AI banked them one third of a billy in a world where profit doesn’t matter. Until the “AI” bubble bursts Vercel are untouchable. Is it wrong to siphon off a little cheddar for better use? Let’s be honest, who else is funding open source software? Few users are willing to pay for it, developers included. The world revolves around load-bearing Nebraskans.

So what can projects like Svelte do?

- Cut ties and take the moral victory for a day and then be amazed by the magical vanishing act of an entire dev community when asked for spare change tomorrow?
- Continue discarding skeets until the drama blows over?
- Pivot to “AI”?

Does it matter? Nothing matters anymore. You can just say things these days. Make up your own truths. Re-roll the chatbot until it agrees. And if your own chatbot continues to fact-check you just rewrite history. We live in a reality where you can spew white supremacy fan fiction for the boys on X and Cloudflare will sponsor your side hustle a week later. Moral bankruptcy is a desirable trait in this economy. Is it any wonder open source maintainers with a backbone are shell-shocked? I’ll leave Svelte to figure out how to navigate impossible waters. Let’s hope that open governance remains intact, lest it go off the rails.

For the rest of us: Taking action and Doing The Right Thing is often difficult, always exhausting, but it is what we must do, together. We all deserve better. The world deserves better. It’ll take a little work to get there, but there is hope. We all have a choice - Salma Alam-Naylor

Amen to that. Thanks for reading! Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.

0 views
David Bushell 5 months ago

RSS Club #003: Silksong

This is an “RSS-only” post where I discuss topics tangentially related to my usual web design and development feed. Ignore or enjoy!

Before I delve into the dark corners of Pharloom I’d like to discuss one related topic. Silksong is a hard game that offers no affordances. There is no “easy mode”. There are no granular controls to tailor the experience to your individual ability or preference. Silksong is in stark contrast to games like Star Wars Outlaws by Ubisoft. Modern Ubisoft games take accessibility seriously. Outlaws has a huge list of accessibility options covering visuals, audio, controls, menus, and gameplay. Outlaws is a masterpiece of development that is unfortunately undone by being a rather boring game.

The Silksong meme is “get good” and it’s a great game if you’re able to “get good”. But I strongly disagree with the notion that it has to be that way. Any argument for single-player games not providing “difficulty” settings is inherently exclusionist. If a game offers one mode, that does not mean all players experience the same challenge. An abled twenty-year-old memer is playing an easier game than anyone with a disadvantage. Be that any disability or distraction. How others experience a game should have absolutely no bearing on your own enjoyment. That’s just pathetic. Online discussion tends to fall into shallow gate-keeping.

Providing more accessibility options would help ensure players get a comparable experience. Such options allow more players to find fun in a game. There’s a grey area between artistic choice and discrimination. I believe hard games should be allowed to exist in the same way comedians should be allowed to tell edgy jokes. The difference is games can have options, so why add self-imposed limits to exclude players?

In 2017 game devs Team Cherry published a five star game, Hollow Knight. I never played it on release.
I picked up the Voidheart Edition some years later and completed almost every challenge (can’t remember how far I got in the Pantheons). Silksong is the long awaited sequel. Hornet is the playable character replacing John Hollow Knight as Silksong’s protagonist. Hornet’s movement and play style is faster and more aggressive. If Hollow Knight was five stars I’d give Silksong three stars. A good game but nothing special and let down by lessons unlearned. Fair warning: spoilers galore below! Nothing is off limits. After 18 days and 52 in-game hours I rolled credits on Silksong. I immediately jumped back in for another week to tie up loose ends. After 68 hours, my Silksong adventure is over! I played the first Hollow Knight and shamelessly followed a walkthrough verbatim. Silksong was an amazing experience to explore blind and discover for myself. I followed every path and found every upgrade to make the main quest easier. I enjoyed the fast paced combat. That said, I can’t be bothered to 100% Silksong. I’ve seen two endings and I’ve watched the rest on YouTube. I’m in Act 3 and I joined the Flea Festival and I’m happy here! I’ve absolutely no desire to see more. A shame because there are cool boss fights I’m missing. But boss fights are what let this game down. Before I discuss the bad these are my top five in fight order: These were great fights I beat clean without spamming tools. Sadly, fun bosses were few and far between in Silksong. Too many bosses were giant amorphous blobs where contact damage from their fat-ass movement was more deadly than the attacks. Boring; forgettable. That’s my main criticism of Silksong; underwhelming boss fights and an over-reliance on harder gauntlet battles. The High Halls gauntlet would have felt more rewarding had I not been fatigued already. Most bosses I beat within a dozen tries. For me Silksong lacked a Soul Master moment where hours of practice and progress felt like a real achievement. 
First Sinner came close but as an optional hidden boss — I never got captured and had to break into The Slab — it didn’t feel like a major victory. I’m actually complaining about lack of — or the wrong kind of — difficulty in boss fights. Difficulty in a fun way where I learn to overcome a challenge. Not difficulty where I repeat the same madness until I get lucky and not caught in the air between fat bouncy blob, randomly spawning blob, and heat-seeking acid spew blob. Lazy game design is not fun. I know quite a lot of potentially cool bosses are in Act 3 along with possibly 20–30 hours of extra content. I wish I cared. It just feels like a chore to progress any further. Backloading the endgame is just bad design in my opinion. You might say I never finished the game and quit before the final act. Whatever, I got my money’s worth! Personally I think Silksong was comparably easier than I remembered Hollow Knight. Regardless, I found the difficulty in this game unfair. Deaths in Hollow Knight always felt preventable. Many deaths in Silksong came from gambling with flying enemies designed to punish no matter how you moved or attacked. Like many players I discovered poisoned Cogflys to be very effective against pesky flies. This made areas like Bilewater bearable. If everyone is using the same overpowered tactic that suggests poor game balance. Platforming challenges were fun once I mastered the 45° degree pogo. Longclaw made that easier. Climbing Mount Fay to unlock the double jump was a pleasant change of pace. I would have preferred a Path of Pain over Bilewater’s gimmick. The economic balance of rosaries and shards sucked and dissuaded me from experimenting. Many tools felt unnecessary and my load-out rarely changed. The Architect Crest looked interesting but farming shards was bad enough. I went 80% Hunter and 20% Reaper when I required longer attacks. I found Weavenest early which also locked in my choice. 
Silksong is a fantastic game overshadowed by the relative perfection that was the original Hollow Knight. In comparison I found Silksong a little too unfair and at times unrewarding. I don’t feel like I could play Silksong again but I could give Hollow Knight another run. Unlocking every skill in Hollow Knight felt like a big milestone. Silksong missed that mark. For me Silksong is a solid three stars that took risks but failed to improve on the original. I’ll be playing Ghost of Yōtei next! Thanks for reading! Follow me on Mastodon and Bluesky . Subscribe to my Blog and Notes or Combined feeds. First Sinner

0 views
David Bushell 5 months ago

How Much Does Freedom Cost?

Trump’s National Design Studio has an executive order to “modernize the interfaces that serve everyday citizens”. That means rich/white people (but not the ‘disabled’ kind). The US government had digital service agencies that cared about a performant and accessible web until they got the DOGE treatment. The NDS’ latest website trumpcard.gov is a Next.js disasterclass. Vercel’s CEO Guillermo Rauch thinks an endorsement by a friend of Epstein is… a good thing? Anyway, Trump invites you to “Submit Your Appl 🦅 tion”.

This side-eying American Bald Eagle is a 579-frame animation. Each frame is a 1671×1300 pixel PNG weighing 30 KB on average. Frames 261 through 320, where the eagle is looking straight forward, are replaced by frame 320 to save bandwidth. Despite this valiant effort the total size of these PNG files is 16.7 megabytes. PNG frames are requested by a Web Worker and saved using the CacheStorage API. The worker returns URLs for each frame. The React hook is used (very carefully) to trigger updates to an element’s source. And that is how you get 16.7 MB of freedom. Alternate text is seemingly used as a comment for developers.

Eagle-eyed readers will have noticed the eagle’s body is a static image. The PNG frames only contain the head which is no larger than 400×400 in the centre. A quick crop and squoosh suggests a 20% saving with no quality loss. Using a lossy codec like AVIF would allow for anywhere between 50–80% smaller images with little perceptual quality loss. I’m guessing the animation trickery is done to superimpose the eagle over the text “Submit Your Application”. Is it worth the cost? No. Just use a video!

You could just make the entire thing a video including the text (like my screen recording above). This would limit the responsive design and the initial text transition but would be much smaller than 16.7 MB. To retain separation of elements a video codec with alpha transparency can be used (see CSS-Tricks, Jake Archibald).
WebM/VP9 works in Chrome and Firefox and HEVC works for Safari/iOS. A quick test returns 500–800 KB depending on codec and quality. Using the HTML attribute allows to work in most browsers. These are rough numbers but suffice it to say a PNG-based animation is expensive. Then again, if you’re in the market for a $1 million card you can probably afford this too.

Decapitating America’s “national bird” is not the only sin committed by the National Design Studio. Trump’s gold card website is a treasure trove of bad development. View source and see what fun you can find. And remember, for fascist-friendly hosting™, think Vercel. Thanks for reading! Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.
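A back-of-the-envelope check of the numbers in the post above. The frame count, frame range, and 30 KB average come from the post; the dedupe arithmetic is mine:

```javascript
// Rough bandwidth arithmetic for the eagle animation.
const totalFrames = 579;

// Frames 261–320 are all served as a single copy of frame 320.
const dedupedFrames = 320 - 261; // 59 files saved by reuse
const uniqueFrames = totalFrames - dedupedFrames; // 520 unique PNGs

const avgFrameKB = 30;
const totalMB = (uniqueFrames * avgFrameKB) / 1000; // ~15.6 MB

// The head occupies roughly 400×400 of the full 1671×1300 canvas,
// which is why cropping alone promises a real saving.
const cropRatio = (400 * 400) / (1671 * 1300); // ~7% of the pixels
```

That estimate lands in the same ballpark as the measured 16.7 MB total, and makes the comparison stark: a 500–800 KB alpha video would be roughly a 20–30× saving.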

0 views
David Bushell 5 months ago

Let’s see Paul Allen’s CSS Reset

CSS “resets” are boilerplate code designed to remove or normalize browser defaults. They provide a solid foundation to build bespoke CSS upon. When utilised correctly they should be unobtrusive. Any quirks being ones of personal taste and flair. These quirks are why CSS programmers obsess over their reset stylesheets like the infamous business card scene (YouTube) †.

† ⚠️ beyond the satirical quotes, American Psycho is a brutal movie with extreme violence. Heed warning before watching, there are no further parallels to the comfort of web development!

CSS resets have matured along with the CSS spec(s). They’ve gotten complicated. Some incorporate choices that can have big consequences if not used with care. Perhaps a good reason to roll your own CSS reset; you must understand every line. Here are a few notable recent examples to learn from:

- Chris Coyier (Sep 2025)
- UA+ (Apr 2025)
- Piccalilli (Sep 2023)
- Josh W. Comeau (2021–2025)
- Elad Shechter (Oct 2021)

And saving the best for last: I won’t copy & paste it all because my CSS reset is a “living standard” like the HTML spec. You can find 👉 my CSS reset here 👈

You’ll have noticed I wrap all selectors in to zero out specificity, just in case. I will also typically import my CSS reset into the lowest cascade layer. If you don’t bikeshed those layer names in the comments I’ll be disappointed. I define my layers alongside top-level imports rather than wrapping large blocks of code. The only thing that gets imported earlier are declarations. Another “just in case”. Does this allow the browser to fetch fonts faster? No idea lol (ask Harry). I will preload critical fonts in the anyway. I just like putting fonts up top in CSS; they’re special. This combo of selector and cascade layers makes specificity a non-issue.

I’ve been on the logical properties everywhere train for years. This practice allows for almost free right-to-left support. For nuances see RTL Styling 101 by Ahmad Shadeed. You’re missing out on that free RTL styling if you forget the class. The class is added by Google Translate. Google doesn’t alter the attribute so it won’t naturally flip without help. Kagi Translate does add — one of a thousand reasons to stop using Google.

I prefix the naked tag name for clarity. Does that matter? Nope! My properties are ordered alphabetically because I’ve worked with Stu Robson. My haphazard aesthetic ragged edge ordering did not vibe with Stu’s sensible design system. I’ve been an alphabetical convert ever since. As an almost-40-year-old who still sings the alphabet to place letters, this is not easy for me. Siri, delete that last paragraph.

Okay, what’s all that about? I once had a client that resized the browser below 300px and complained. The is more interesting. This enables a full viewport “hero”, depending how you go about it. It also allows you to push the footer to the bottom on shorter pages. Nothing weirder than seeing a floating footer in the middle. Why ? The viewport height on iOS Safari is dynamic (maybe Android browsers too, I forget). Anything sized to it causes janky layout thrashing on scroll. This BBC profile on Armand Duplantis using is a good example of that issue. The initial size before scrolling is usually the smallest. This fits my use case.

I don’t actually reset stuff like the default heading margin. I always style those later. When arrives I may add: Just out of respect for the glorious new pseudo-class. I add to paragraphs and it does something in Safari. I don’t headings as some resetters do by default. I find that too opinionated. Only specific design patterns suit balanced headings.

I love a bit of hanging punctuation so I yolo’d it. Then I learned from Jeremy Keith (as one does) this practice has unwanted side effects. So I reset it back to on form elements. A reset within a reset, is that code smell? I may or may not have secretly patched half a dozen client websites. Speaking of fancy pants typography, please show support for Richard Rutter’s Interop 2026.

Prioritise long-standing bugs first Big Browser 🙏

For a long time I refused to use “font smoothing”. I was staunchly a “please don’t mess with text rendering” person. That was until I did a deep dive on: What’s the deal with WebKit Font Smoothing? — I’m not happy with that post, it’s confusing, but the takeaway is I now add the style to my reset.

So that’s my CSS reset. It has and will change over time. Let’s see yours! Do you keep it lean, or prefer a chonky reset? P.S. did I just big blog twice in one day? Is that allowed? Thanks for reading! Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.

0 views
David Bushell 5 months ago

I Shut The Emails Out

Last week I let the emails in by self-hosting an unencrypted SMTP server on port 25. That was a “fun” challenge but left me feeling exposed. I could host elsewhere but most reputable hosting providers block SMTP ports. I found another answer. I know. Bleh! Sellout to Big Tech why don’t I? I don’t see you opening ports into your home.

Anyway, Cloudflare has this thing called Email Routing and Email Workers. Cloudflare handles the SMTP and verification of incoming emails. Emails can be forwarded elsewhere or handled by a worker. Catch-all can be configured so it’s wise to do your own address validation. My worker takes the following actions:

- validate the address
- reject emails larger than one megabyte
- save the raw message to an R2 Storage bucket with metadata

I’m locked in to Cloudflare now for this side project. Might as well use the full suite. The basic worker script is simple. There is one issue I found. claims to be type: And claims to take type: — amongst others. In reality the email API does not play nicely with the storage API. My worker timed out after 5 minutes with an error.

“Provided readable stream must have a known length (request/response body or readable half of FixedLengthStream)”

That is why I’m using which I yoinked from @std/streams. I’d rather stream directly but this was a quick fix. Since I have a one megabyte limit it’s not a problem.

My original idea was to generate an RSS feed for my Croissant web app à la Kill the Newsletter (which I could just use instead of reinventing…) HTML emails though are a special class of disaster. Semantics and accessibility, anyone? No? Table layouts from the ’90s? Oh, okay… that’s another blog post. Actually hold up. Do we just lose all respect for accessibility when coding HTML emails? Apparently so. I built an HTML email once early in my career and I refused to do it ever again.

Here’s a screenshot of the web UI I was already working on to read emails. I’m employing similar tricks I learnt when sanitising RSS feeds. This time I allow inline styles. I remove scripts and images (no thanks). Content security policy is very effective at blocking any tracking attempts that might sneak through. I have a second proxy worker that receives a URL and resolves any HTTP redirects to return the real URL. For good measure, tracking params like are removed at every step.

Email truly is the cesspit of the internet. Dreadful code. Ruthless tracking. Why am I doing this again? Most of these newsletters have an RSS and web version available. I can’t believe I let this stuff into my home! Come to think of it, maybe I can pass the plain text versions through a Markdown parser? Thanks for reading! Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.
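An Email Worker like the one described above might look roughly like this sketch. The `email()` handler, `setReject()`, `rawSize`, and R2 `put()` are Cloudflare’s actual APIs; the `EMAIL_BUCKET` binding name, the allow-list, and helper names are my assumptions:

```javascript
// Hedged sketch of a Cloudflare Email Worker: validate, size-check, store.
const MAX_SIZE = 1024 * 1024; // the one megabyte limit

// Catch-all routing is enabled, so validate addresses ourselves.
function isValidAddress(to, allowed) {
  return allowed.includes(String(to).toLowerCase());
}

const worker = {
  // In a real Worker this object would be the default export.
  async email(message, env, ctx) {
    if (!isValidAddress(message.to, ["me@example.com"])) {
      message.setReject("Unknown address");
      return;
    }
    if (message.rawSize > MAX_SIZE) {
      message.setReject("Message too large");
      return;
    }
    // R2 wants a known content length, so buffer the raw stream first.
    // (The post yoinks toArrayBuffer() from @std/streams; buffering
    // via Response is one alternative.)
    const body = await new Response(message.raw).arrayBuffer();
    await env.EMAIL_BUCKET.put(`mail/${Date.now()}`, body, {
      customMetadata: { from: message.from, to: message.to },
    });
  },
};
```

Buffering the whole message is only reasonable because of the size check; without the one megabyte cap you would want a `FixedLengthStream` sized from `rawSize` instead.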

0 views