Latest Posts (20 found)

Warning: containment breach in cascade layer!

CSS cascade layers are the ultimate tool to win the specificity wars. Used alongside sensible selectors, specificity problems are a thing of the past. Or so I thought. Turns out cascade layers are leakier than a xenonite sieve. Cross-layer shenanigans can make bad CSS even badder. I discovered a whole new level of specificity hell. Scroll down if you dare!

There are advantages too, so I’ll start with a neat trick. To set up this trick I’ll quickly cover my favoured CSS methodology for a small website. I find defining three cascade layers is plenty. In the first I add my reset styles, custom properties, anything that touches a global element, etc. In the second I add the core of the website. In the third I add utility classes that look suspiciously like Tailwind, for pragmatic use. Visually-hidden is a utility class in my system.

I recently built a design where many headings and UI elements used an alternate font with a unique style. It made practical sense to use a utility class like the one below. This is but a tribute; the real class had more properties. The class is DRY and easily integrated into templates and content editors. Adding this to the highest cascade layer makes sense. I don’t have to worry about juggling source order or overriding properties on the class itself. I especially do not have to care about specificity or slap !important everywhere like a fool.

This worked well. Then I zoomed further into the Figma picture and was betrayed! The design had an edge case where letter-spacing varied for one specific component. It made sense for the design. It did not make sense for my system. If you remember, my third (utility) layer takes priority over my second layer so I can’t simply apply a unique style to the component. For the sake of a demo let’s assume my component has its own class in the markup. I want to change back to the normal letter-spacing. Oops, I’ve lost the specificity war regardless of what selector I use. The utility class wins because I set it up to win.

My “escape hatch” uses custom property fallback values.
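Here’s a rough sketch of the whole setup. The layer names, the class name, and the custom property are placeholders of my own invention, and the real class had more properties:

```css
/* Placeholder layer names; later layers win for normal declarations */
@layer base, page, utility;

@layer utility {
  /* A tribute to the real utility class */
  .fancy-font {
    font-family: "Alt Font", sans-serif;
    text-transform: uppercase;
    /* The escape hatch: read a custom property with a fallback value */
    letter-spacing: var(--fancy-spacing, 0.05em);
  }
}

@layer page {
  /* The edge-case component 'configures' the utility class
     without fighting any specificity war */
  .my-component {
    --fancy-spacing: normal;
  }
}
```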
In most cases the custom property is not defined and the fallback default is applied. For my edge case component I can ‘configure’ the utility class by setting the property. I’ve found this to be an effective solution that feels logical and intuitive. I’m working with the cascade. It’s a good thing that custom properties are not locked within cascade layers! I don’t think anyone would expect that to happen.

In drafting this post I was going to use an !important example to show the power of cascade layers. I was going to say that not even !important wins. Then I tested my example and found that !important does actually override higher cascade layers. It breaches containment too! What colour are the paragraphs? Suffice it to say that things get very weird. See my CodePen. Spoiler: blue wins.

I’m sure there is a perfectly cromulent reason for this behaviour but on face value I don’t like it! Bleh! I feel like !important should be locked within a cascade layer. I don’t even want to talk about the inversion… I’m sure there are GitHub issues, IRC logs, and cave wall paintings that discuss how cascade layers should handle !important — they got it wrong! The fools! We could have had something good here!

Okay, maybe I’m being dramatic. I’m missing the big picture; is there a real reason it has to work this way? It just feels… wrong? I’ve never seen a use case for !important that wasn’t tear-inducing technical debt. Permeating layers with !important feels wrong even though custom properties behaving similarly feels right. It’s hard to explain. I reckon if you’ve built enough websites you’ll get that sense too? Or am I just talking nonsense?

I subscribe to the dogma that says !important should never be used but it’s not always my choice. I build a lot of bespoke themes. The WordPress + plugin ecosystem is the ultimate specificity war. WordPress core laughs in the face of “CSS methodology” and loves to put styles where they don’t belong. Plugin authors are forced to write even gnarlier selectors. When I finally get to play, styles are an unmitigated disaster.
Cascade layers can curtail unruly WordPress plugins but if they use !important it’s game over; I’m back to writing even worse code.

Thanks for reading! Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.
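P.S. the containment breach in a nutshell. A reduced sketch with invented layer names; with !important the lower layer wins, so the paragraphs are blue:

```css
@layer base, page;

@layer base {
  p { color: blue !important; }
}

@layer page {
  /* Higher layer, higher specificity, still loses */
  body p { color: red !important; }
}
```

Without the !important flags, red would win as expected; importance reverses the layer order.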

David Bushell 5 days ago

No-stack web development

This year I’ve been asked more than ever before what web development “stack” I use. I always respond: none. We shouldn’t have a go-to stack! Let me explain why.

My understanding is that a “stack” is a choice of software used to build a website. That includes language and tooling, libraries and frameworks, and heaven forbid: subscription services. Text editors aren’t always considered part of the stack but integration is a major factor. Web dev stacks often manifest as a package manifest used to install hundreds of megs of JavaScript, Blazing Fast™ Rust binaries, and never ending supply chain attacks. A stack is also technical debt, non-transferable knowledge, accelerated obsolescence, and vendor lock-in. That means fragility and overall unnecessary complication. Popular stacks inevitably turn into cargo cults that build in spite of the web, not for it. Let’s break that down.

If you have a go-to stack, you’ve prescribed a solution before you’ve diagnosed a problem. You’ve automatically opted in to technical baggage that you must carry for the entire project. Project doesn’t fit the stack? Tough; shoehorn it to fit. Stacks are opinionated by design. To facilitate their opinions, they abstract away from web fundamentals. It takes all of five minutes for a tech-savvy person to learn JSON. It takes far, far longer to learn Webpack JSON. The latter becomes useless knowledge once you’ve moved on to better things. Brain space is expensive. Other standards like CSS are never truly mastered but learning an abstraction like Tailwind will severely limit your understanding.

Stacks are a collection of move-fast-and-break churnware; fleeting software that updates with incompatible changes, or deprecates entirely in favour of yet another Rust refactor. A basic HTML document written 20 years ago remains compatible today. A codebase built upon a stack 20 months ago might refuse to play. The cost of re-stacking is usually unbearable.
Stack-as-a-service is the endgame where websites become hopelessly trapped. Now you’re paying for a service that can’t fix errors. You’ve sacrificed long-term stability and freedom for “developer experience”.

I’m not saying you should code artisanal organic free-range websites. I’m saying be aware of the true costs associated with a stack. Don’t prescribe a solution before you’ve diagnosed a problem. Choose the right tool for each job only once the impact is known. Satisfy specific goals of the website, not temporary development goals. Don’t ask a developer what their stack is without asking what problem they’re solving. Be wary of those who promote or mandate a default stack. Be doubtful of those selling a stack.

When you develop for a stack, you risk trading the stability of the open web platform, that is to say: decades of broad backwards compatibility, for GitHub’s flavour of the month. The web platform does not require build toolchains. Always default to, and regress to, the fundamentals of CSS, HTML, and JavaScript. Those core standards are the web stack. Yes, you’ll probably benefit from more tools. Choose them wisely. Good tools are intuitive by being based on standards; they can be introduced and replaced with minimal pain.

My only absolute advice: do not continue with legacy frameworks like React. If that triggers an emotional reaction: you need a stack intervention! It may be difficult to accept but Facebook never was your stack; it’s time to move on. Use the tool, don’t become the tool.

Edit: forgot to say: for personal projects, the gloves are off. Go nuts! Be the churn. Learn new tools and even code your own stack. If you’re the sole maintainer the freedom to make your own mistakes can be a learning exercise in itself.

Thanks for reading! Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.

David Bushell 1 week ago

CSS subgrid is super good

I’m all aboard the CSS subgrid train. Now I’m seeing subgrid everywhere. Seriously, what was I doing before subgrid? I feel like I was bashing rocks together.

Consider the following HTML: the content could be simple headings and paragraphs. It could also be complex HTML patterns from a Content Management System (CMS) like the WordPress block editor, or ACF flexible content (a personal favourite). Typically when working with CMS output, the main content will be restricted to a maximum width for readable line lengths. We could use a CSS grid to achieve such a layout. Below is a visual example using the Chromium dev tools to highlight grid lines.

This example uses five columns with no gap, resulting in six grid lines. The two outermost columns are flexible, meaning they can expand to fill space or collapse to zero width. The two inner columns are fixed widths which act as a margin. The centre column is the smallest of two values: either the maximum readable width, or the full viewport width (minus the margins). Counting grid lines correctly requires embarrassing finger math and pointing at the screen. Thankfully we can name the lines. I set a default column for all child elements. Of course, we could have done this the old fashioned way with a max-width and auto margins. But grid has so much more potential to unlock!

What if a fancy CMS wraps a paragraph in a block with a full-width class? This block is expected to magically extend a background to the full width of the viewport like the example below. This used to be a nightmare to code but with CSS subgrid it’s a piece of cake. We break out of the content column by changing the grid-column to the name I chose for the outermost grid lines. We then inherit the parent grid using the subgrid template. Finally, the nested children are moved back to the content column. A zero-specificity selector keeps specificity low. This allows a single class to override the default column. CSS subgrid isn’t restricted to one level. We could keep nesting blocks inside each other and they would all break containment.
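Here’s a sketch of the whole technique in one place. The line names (full, boxed, content), the widths, and the class names are placeholders of my own choosing, not the real values:

```css
/* Five columns, six named grid lines */
.layout {
  display: grid;
  grid-template-columns:
    [full-start] 1fr
    [boxed-start] 2rem
    [content-start] min(60rem, 100% - 4rem)
    [content-end] 2rem
    [boxed-end] 1fr
    [full-end];
}

/* Default: every child sits in the readable centre column */
.layout > * {
  grid-column: content;
}

/* A CMS "full width" block breaks out to the outermost lines,
   then restores the parent grid for its own children via subgrid */
.layout > .is-full-width {
  grid-column: full;
  display: grid;
  grid-template-columns: subgrid;
}

/* Zero-specificity selector so a single class can still override */
:where(.is-full-width) > * {
  grid-column: content;
}

/* A "boxed" variant spans margin-to-margin instead of edge-to-edge */
.layout > .is-boxed {
  grid-column: boxed;
}
```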
If we wanted to create a “boxed” style we can simply change the grid-column to the margin lines instead of the outermost lines. This is why I put the margins inside the grid. In hindsight my grid line names are probably confusing, but I don’t have time to edit the examples so go paint your own bikeshed :) On smaller viewports, below a certain width, the outermost columns collapse to zero width and the “boxed” style looks exactly like the full-width style.

This approach is not restricted to one centred column. See my CodePen example and the screenshot below. I split the main content in half to achieve a two-column block where the text edge still aligns, but the image covers the available space.

CSS subgrid is perfect for WordPress and other CMS content that is spat out as a giant blob of HTML. We basically have to centre the content wrapper for top-level prose to look presentable. With the technique I’ve shown we can break out more complex block patterns and then use subgrid to align their contents back inside. It only takes a single class to start! Here’s the CodePen link again if you missed it. Look how clean that HTML is! Subgrid helps us avoid repetitive nested wrappers. Not to mention any negative margin shenanigans. Powerful stuff, right?

Browser support? Yes. Good enough that I’ve not had any complaints. Your mileage may vary, I am not a lawyer. Don’t subgrid and drive.

Thanks for reading! Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.

David Bushell 2 weeks ago

I quit. The clankers won.

… is what I’m reading far too often! Some of you are losing faith! A growing sentiment amongst my peers — those who haven’t already resigned themselves to an NPC career path † — is that blogging is over. Coding is cooked. What’s the point of sharing insights and expertise when the Cognitive Dark Forest will feed on our humanity? Before I’m dismissed as an ill-informed hater please note: I’ve done my research.

† To be fair it’s a valid choice in this economy. Clock in, slop around, clock out. Why not?

It’s never been more important to blog. There has never been a better time to blog. I will tell you why. We’re being starved for human conversation and authentic voices. What’s more: everyone is trying to take your voice away. Do not opt out of using it yourself.

First let’s accept the realities. The giant plagiarism machines have already stolen everything. Copyright is dead. Licenses are washed away in clean rooms. Mass surveillance and tracking are a feature, privacy is a bug. Everything is an “algorithm” optimised to exploit. How can we possibly combat that?

From a purely selfish perspective it’s never been easier to stand out and assert yourself as an authority. When everyone is deferring to the big bullshitter in the cloud your original thoughts are invaluable. Your brain is your biggest asset. Share it with others for mutual benefit. I find writing stuff down improves my memory and hardens my resolve. I bet that’s true for you too. It’s part rote learning, part rubberducking †. Writing publicly in blog form forces me to question assumptions. Even when research fails me, Cunningham’s Law saves me.

† Some will claim writing into a predictive chat box helps too, and sure, they’re absolutely right!

Blogging makes you a better professional. No matter how small your audience, someone will eventually stumble upon your blog and it will unblock their path. Don’t accept a fate being forced upon you.
The AI industry is 99% hype; a billion dollar industrial complex to put a price tag on creation. At this point if you believe AI is ‘just a tool’ you’re wilfully ignoring the harm. (Regardless, why do I keep being told it’s an ‘extreme’ stance if I decide not to buy something?) The 1% utility AI has is overshadowed by the overwhelming mediocrity it regurgitates.

We’re saying goodbye to Sora. To everyone who created with Sora, shared it, and built community around it: thank you. What you made with Sora mattered, and we know this news is disappointing.

@soraofficialapp - XCancel

Is there anything, in the entire recorded history of human creation, that could have possibly mattered less than the flatulence Sora produced? NFTs had more value. I’m not protective over the word “art”. Generative AI is art. It’s irredeemably shit art; end of conversation. A child’s crayon doodle is also lacking refined artistry but we hang it on our fridge because a human made it and that matters. We care and caring has a positive effect on our lives. When you pass human creativity through the slop wringer, or just prompt an incantation, the result is continuously mangled; a vapid mockery of the input. The garbage out no longer matters, nobody cares, nobody benefits.

I forgot where I was going with this… oh right: don’t resign yourself to the deskilling of our craft. You should keep blogging! Take pride in your ability and unique voice. But please don’t desecrate yourself with slop.

The only winning move is not to play.

WarGames (1983)

We’ve gotten too comfortable with the convenience of Big Tech. We do not have to continue playing their game. Don’t buy the narratives they’re selling. The AI industry is built on the predatory business model of casinos. Except they’ve forgotten the house is supposed to win. One upside of this looming economic and intellectual depression is that the media is beginning to recognise gatekeepers are no longer the hand that feeds them. Big Tech is not the web.
You don’t have to use it nor support it. Blog for the old web, the open web, the indie web — the web you want to see. And if you think I’m being dramatic and I’ve upset your new toys, you’re welcome to be left behind in the miasmatic dystopia these technofascists are racing to build.

Thanks for reading! Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.

David Bushell 3 weeks ago

Top ten Figma betrayals

Figma is the industry standard for painting pretty pictures of websites. It’s where designers spend my designated dev time pushing pixels around one too many artboards. Figma promises to remove the proverbial fence between design and development. In reality it provides the comfort of an ideal viewport that doesn’t exist. I don’t mind Figma (the software), although I prefer Penpot myself. I still dabble in the deceptive arts of web design. Don’t be thinking I’m out here hating on designers. I like to stick my nose inside a Figma file and point out issues before they escalate.

Below I cover classic Figma betrayals that I bet you’ve experienced. Betrayals happen when software promises more than it can deliver. Take a gander at this amazing website design I whipped up in Figma to illustrate the most common betrayals. I told you I was a designer! I’ll evolve this design throughout the post. Figma has deemed 1440×1024 to be “Desktop” resolution so I’ve started there. In this mockup I’ve added a full-width banner of our hero Johnny Business.

I’ve built this website more times than I care to remember. I’ll repeat here the same question I ask every time I build it: what happens at other viewport sizes? Do I scale the banner proportionally? On wider viewports this is likely to push content out of sight. It might even require scrolling to see the entire image on Johnny’s ultra-wide 8K. The phrase “above the fold” will be spoken in a Teams call; can we avoid that? Do I also set a maximum height on the banner? This is going to decapitate poor Johnny! He paid a lot for that haircut.

What are we doing below the “Desktop” viewport, by the way? Let’s design for the 402×874 resolution Figma calls “iPhone 17” because it was first on the list. Note the absolutely perfect crop of Johnny’s sockless businessing. Okay, next question: how do we move between “mobile” and “desktop”? That’s a very specific focal point. We can’t just change it willy-nilly! Code has rules; logic.
A website must be responsive between all breakpoints. Are we going to use multiple images? At what breakpoint do they swap? Because that perfectly cropped mobile image doesn’t scale up very far.

Hold the phone! A shadow stakeholder has asked for a redesign to “make it pop!” The ultra-wide problem has been solved with a centred fixed-width style. If that is the intention, does either the banner or header stretch to the edge of the viewport? More importantly, that image and text have no room to move. I’ve only reduced the viewport by 200 pixels and it’s already crashing into Johnny’s face. Are we expecting breakpoints every 100 pixels? — No, wait! Please don’t spend more time designing more breakpoints! Okay, I’ll hold until more breakpoints are designed. Are we extending my development deadline? No. Okay.

As development continues I’ve got more bad news to share. Figma is very happy allowing us to enter arbitrary line breaks for the perfect text fit. That’s not how the web works. One of these options is probably what we’ll see if text is left to break naturally. Yes, we can technically allow for a manual line break. That’s a pain in the content management system, but sure. Text is still forced to wrap on a smaller viewport, then what? Oh that? Now you want the manual line break to magically disappear? (╯°□°)╯︵ ┻━┻

I lied when I said “top ten” Figma betrayals. The issues above can appear in hundreds of guises across any component. If you’re betrayed once you’ll be hit again and again. Figma is not exactly conducive to responsive web design. Designing more breakpoints often leads to more questions, not fewer. Another betrayal I pull my hair out over is the three-card pattern packed with content. This leads to an immediate breakpoint where one card drops awkwardly below. I dread this because the word “carousel” will be uttered and my sobbing is heard far and wide. Carousels are not a content strategy.
I was once inspecting a Figma file only to witness the enemy cursor drive by and drop several dots underneath an image. The audacity!

Figma betrayals are classic waterfall mistakes that are solved by human conversation. Developers need to be part of the design process to ask these questions. Content authors should be involved before and not after a design is complete. You’ll note I never answered the questions above because what might work for my fictional design isn’t universal. On a tangential topic Matthias Ott notes:

Think about what actually happens when a designer and an engineer disagree about an interaction pattern. There’s a moment of tension – maybe even frustration. The engineer says it’ll be fragile. The designer says it’s essential for the experience. Neither is wrong, necessarily. But the conversation – if your process allows for it to happen – that back-and-forth where both sides have to articulate why they believe what they believe, is where the design becomes robust and both people gain experience. Not in the Figma file. Not in the pull request. In the friction between two people who care about different things and are forced to find a shared answer.

The Shape of Friction - Matthias Ott

Figma is not friction-free and that’s fine. We can’t expect any software in the hands of a single person to solve problems alone. Software doesn’t know what questions to ask. Not then with Clippy, not now with Copilot. Humans should talk to one another, not the software. Together we can solve things early the easy way, or later the hard way. One thing that has kept me employed is the ability to identify questions early and not allow Fireworks, Photoshop, Sketch, XD, and now Figma to lead a project astray.

Thanks for reading! Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.

David Bushell 3 weeks ago

I should build a game

I should build a game! I feel like that’s a common dream, right? Game development is what got me interested in design and programming to begin with. I learnt ECMAScript via Flash ActionScript many moons ago. Some time later “Thoughts on Flash” brought a swift demise and a ruined legacy to Flash. History is written by the winners, they say. Although Flash was largely proprietary software, and Adobe would have ruined it themselves, Flash was a wonderfully creative tool in its prime. I studied art and went into print/web design before transitioning almost entirely to front-end dev. I’ve been trapped here ever since! In that time, open web standards have become way more powerful than Flash ever was. Today HTML is the new Flash.

Over my winter break I created a new playground where I relearned old tricks by building fun little canvas prototypes. Just basic stuff. No libraries or game engines. This is my retreat of solace until the “AI” fallout blows over. I’ll be sharing my slop-free explorations into game dev. The purpose here is understanding and creativity. No amount of prompt-fondling can achieve that! Work got busy, which is a good thing I guess, and I haven’t had time to build more. If the web industry does fall apart, at least I have a fallback plan to keep me busy!

I’m going to build the games I always wanted to. Or at least try. I’ve been playing Slay the Spire 2 recently and I thought, “I could build that!” — I mean, I could technically build a shallow shitty clone. Nevertheless, it inspired me once again to consider if I really could design and build a game. I’ve set myself a personal goal of spending a few hours every week creating something game related. Maybe that’s sketching concept art, or plotting puzzles, or writing code, or researching, or just daydreaming ideas. Not with the grand plan of creating “the game”. I don’t know where it will lead but I know I’ll enjoy the process. Whether I share anything is unknown. Thanks for reading!
Follow me on Mastodon and Bluesky . Subscribe to my Blog and Notes or Combined feeds.

David Bushell 3 weeks ago

RSS Club #006: Burnout

This is an RSS-only post, which I like to do sporadically! Thank you for subscribing :) Am I burning out? Let me know what you think, internet doctors.

I work a four day week and have done so for many years. Fridays are mine to have fun. By fun I mean making my own websites without the pressure of clients. That helps me wind down. When the weekend arrives my mind is already stress free. At least it was! I’ve been struggling more than usual lately. My watch monitors heart rate, steps, sleep, etc. It has started to report a lower than average “body battery” — that’s what Garmin has trademarked to say: “sir, you look like shit.” A major factor here is definitely a hamstring tear that has kept me from running. Not long ago I was doing half-marathons every other week. Now I can only manage a light 5k or risk prolonged injury. Being stuck inside isn’t helping my mental or physical health. Hopefully before summer I’ll have recovered.

But there is more, I reckon. I’m fed up. Everything makes me grouchy. Is it too simple to say that the web industry, and tech at large, has lost its collective marbles? Not a week goes by where I don’t mute a word on social media, or unsubscribe from a blog. Everyone is talking nonsense. Everyone is grifting. It never used to be this way. What depresses me most though is how negative my own blog can be on occasions. Part of me wants to defend my career. To call out the ludicrous stuff that is said and done these days. I’m not worried about upsetting people. The clients that hire me don’t care that I dared mock an industry influencer or challenged one of the old boys’ club. I try to do that in a joking way but my tone has always been blunt. That has gotten me into a wee bit of trouble before. Lately though, I can’t help but feel I’ve been looking for trouble. Is it even possible to ‘fight back’ in a positive way? I’m not just talking about “AI” bollocks, I mean the general enshittification of the web industry and tech at large.
The hot drama and spicy takes are great for clickbait and like-farming. I’ve been guilty of that too. Even though I know for a fact that my most popular posts, over the long run, are topics like: Multiple Accounts and SSH Keys. That got zero attention the day I published it but I’ve received random “thank you” emails every year since.

Thing is though, I actually do get “thank you” emails for my stance against AI. There are a lot of developers who aren’t in a position to speak their mind. I don’t blame anyone for staying quiet when their job is on the line. I’m lucky I am my own boss. I’ve always blogged primarily for myself. That’s the secret to blogging, I think. Regardless, after so many years I have the power to reach a significant audience. I feel somewhat obliged to do something with that. I’m just not sure I’m venting my frustrations in the right way. Maybe I am burning out and it’s affecting my judgement?

I’m genuinely curious. Send me an email: [email protected] Are you burning out? Am I burning out? Or is the industry burning down around us? Feedback is always welcome. I can take criticism. I’ve received some absolute scorchers from anonymous cowards recently. I wish I could share those but I do respect my privacy policy. (That’s not an invitation for hate!)

Thanks for reading! Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.

David Bushell 3 weeks ago

404 Deno CEO not found

I visited deno.com yesterday. I wanted to know if the hundreds of hours I’d spent mastering Deno was a sunk cost. Do I continue building for the runtime, or go back to Node? Well, I guess that pretty much sums up why a good chunk of Deno employees left the company over the last week.

Layoffs are what American corpo culture calls firing half the staff. Totally normal practice for a sustainable business. Mass layoffs are deemed better for the morale of those who remain than a weekly culling before Friday beers. The Romans loved a good decimation. †

If I were a purveyor of slop and tortured metaphors, I’d have adorned this post with a deepfake of Ryan Dahl fiddling as Deno burned. But I’m not, so the solemn screenshot will suffice.

† I read Rome, Inc. recently. Not a great book, I’m just explaining the reference.

A year ago I wrote about Deno’s decline. The facts, undeterred by my subjective scorn, painted a harsh picture; Deno Land Inc. was failing. Deno incorporated with $4.9M of seed capital five years ago. They raised a further $21M series A a year later. Napkin math suggests a five year runway for an unprofitable company (I have no idea, I just made that up). Coincidentally, after my blog post topped Hacker News — always a pleasure for my inbox — Ryan Dahl (Deno CEO) clapped back on the official Deno blog:

There’s been some criticism lately about Deno - about Deploy, KV, Fresh, and our momentum in general. You may have seen some of the criticism online; it’s made the rounds in the usual places, and attracted a fair amount of attention. Some of that criticism is valid. In fact, I think it’s fair to say we’ve had a hand in causing some amount of fear and uncertainty by being too quiet about what we’re working on, and the future direction of our company and products. That’s on us.

Reports of Deno’s Demise Have Been Greatly Exaggerated - Ryan Dahl

Dahl mentioned that adoption had doubled following Deno 2.0.
Since the release of Deno 2 last October - barely over six months ago! - Deno adoption has more than doubled according to our monthly active user metrics.

User base doubling sounds like a flex for a lemonade stand unless you give numbers. I imagine Sequoia Capital expected faster growth regardless. The harsh truth is that Deno’s offerings have failed to capture developers’ attention. I can’t pretend to know why — I was a fanboy myself — but far too few devs care about Deno. On the rare occasions Deno gets attention on the orange site, the comments page reads like in memoriam.

I don’t even think the problem was that Deno Deploy, the main source of revenue, sucked. Deploy was plagued by highly inconsistent isolate start times. Solicited feedback was ignored. Few cared. It took an issue from Wes Bos, one of the most followed devs in the game, for anyone at Deno to wake up. Was Deploy simply a ghost town? Deno rushed the Deploy relaunch for the end of 2025 and it became “generally available” last month. Anyone using it? Anyone care? The Deno layoffs this week suggest only a miracle would have saved jobs. The writing was on the wall.

Speaking of ghost towns, the JSR YouTube channel is so lonely I feel bad for linking it. I only do because it shows just how little interest some Deno-led projects mustered. JSR floundered partly because Deno was unwilling, or couldn’t afford, to invest in better infrastructure. But like everything else in the Deno ecosystem, users just weren’t interested. What makes a comparable project like NPMX flourish so quickly? Evidently, developers don’t want to replace Node and NPM. They just want what they already have but better; a drop-in improvement without friction. To Deno and Dahl’s credit, they recognised this with the U-turn on HTTP imports. But the resulting packaging mess made things worse. JSR should have been NPMX. Deno should have gone all-in on npm compatibility but instead we got mixed messaging and confused docs.
I could continue but it would just be cruel to dissect further. I’ve been heavily critical of Deno in the past but I really wanted it to succeed. There were genuinely good people working at Deno who lost their jobs and that sucks. I hope the Deno runtime survives. It’s a breath of fresh air. B*n has far more bugs and compatibility issues than anyone will admit. Node still has too much friction around TypeScript and ECMAScript modules. So where does Deno go from here? Over to you, Ryan.

Where is Deno CEO, Ryan Dahl? Tradition dictates an official PR statement following layoffs. Seems weird not to have one prepared in advance. That said, today is Friday, the day to bury bad news. I may be publishing this mere hours before we hear what happens next… Given Dahl’s recent tweets and blog post, a pivot to AI might be Deno’s gamble.

By the way, it’s rather telling that all the ex-employees posted their departures on Bluesky. What that tells you depends on whether you enjoy your social media alongside Grok undressing women upon request. I digress. Idle speculation has led to baseless rumours of an OpenAI acquisition. I’m not convinced that makes sense but neither does the entire AI industry. I’m not trying to hate on Dahl but c’mon bro, you’re the CEO. What’s next for Deno? Give users (anyone!) a reason to care. Although if you’re planning a 10× resurgence with automated Mac Minis, I regret asking.

Thanks for reading! Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.

David Bushell 4 weeks ago

SMTP on the edge

Disclaimer: this post includes my worst idea yet! Until now my contact form submissions were posted to a Cloudflare worker. The worker encrypted the details with PGP encryption. It then used the Amazon AWS “Simple Email Service” API to send an email to myself. PGP encryption meant that any middleman after the worker, like Amazon, could not snoop. (TLS only encrypts in transit.) The setup was okay but involved too many services. If you thought that was over-engineered, get a load of my next idea.

My experiment with a self-hosted SMTP server was short-lived but I did learn to code the SMTP protocol with server-side JavaScript. During that tinkering I had issues upgrading TLS on the SMTP server for receiving email. In my recent AT Protocol PDS adventure I learned that Proton Mail can generate restricted tokens for SMTP client auth. I’ve also been slowly migrating from Cloudflare to Bunny in my spare time. I was reminded that Bunny has Deno edge workers. Lightbulb moment: can I rawdog SMTP in a Bunny worker? This cuts out the AWS middleman. Neither Bunny nor Proton ever see the unencrypted data. True end-to-end encryption for my contact form!

I threw together a proof-of-concept. My script opened a TCP connection to Proton using and sent the SMTP message. The connection was upgraded with to secure it. It then followed a very fragile sequence of SMTP messages to authenticate and send an email. If the unexpected happened it bailed immediately. Surprisingly this worked! I’m not sharing code because I don’t want to be responsible for any misuse. There is nothing in Bunny’s Terms of Service or Acceptable Use policy that explicitly prohibits sending email. Magic Containers do block ports but edge scripting doesn’t. I asked Bunny support who replied:

While Edge Scripting doesn’t expose the same explicit port limitation table as Magic Containers, it’s not intended to be used as a general-purpose SMTP client or email relay.
Outbound traffic is still subject to internal network controls, abuse prevention systems, and our Acceptable Use Policy. Even if SMTP connections may technically work in some cases, sending email directly from Edge Scripts (especially at scale) can trigger automated abuse protections. We actively monitor for spam and unsolicited email patterns, and this type of usage can be restricted without a specific “port block” being publicly documented. If you need to send transactional emails from your application, we strongly recommend using a dedicated email service provider (via API) rather than direct SMTP from Edge Scripting. bunny.net support …that isn’t an outright “no” but it’s obviously a bad idea. To avoid risking an account ban I decided to use the Bunny edge worker to forward the encrypted data to a self-hosted API. That service handles the SMTP. In theory I could decrypt and log locally, but I’d prefer to let Proton Mail manage security. I’m more likely to check my email inbox than a custom GUI anyway. The OpenPGP JavaScript module is a big boy at 388 KB (minified) and 144 KB (compressed). I load this very lazily after an event on my contact form. Last year in a final attempt to save my contact form I added a Cloudflare CAPTCHA to thwart bots. I’ve removed that now because I believe there is sufficient obfuscation and “proof-of-work” to deter bad guys. Binning both Cloudflare and Amazon feels good. I deleted my entire AWS account. My new contact form seems to be working. Please let me know if you’ve tried to contact me in the last two weeks and it errored. If this setup fails, I really will remove the form forever! Thanks for reading! Follow me on Mastodon and Bluesky . Subscribe to my Blog and Notes or Combined feeds. PGP encryption in the browser to Bunny edge worker SMTP directly to Proton
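The post withholds the actual worker code, so purely as a protocol illustration, here is a sketch of the kind of fragile SMTP command sequence described above. All names are assumptions; the transport layer (TCP connect plus STARTTLS upgrade, which happens before authentication on port 587) is deliberately omitted.

```typescript
// Hypothetical sketch: build the client side of an authenticated SMTP
// submission. The real worker also reads and validates each server reply,
// bailing immediately on anything unexpected.
type Envelope = { from: string; to: string; data: string };

function smtpCommands(user: string, pass: string, msg: Envelope): string[] {
  // AUTH PLAIN takes base64("\0user\0pass") per RFC 4616.
  const auth = btoa(`\u0000${user}\u0000${pass}`);
  return [
    "EHLO localhost",
    `AUTH PLAIN ${auth}`,
    `MAIL FROM:<${msg.from}>`,
    `RCPT TO:<${msg.to}>`,
    "DATA",
    `${msg.data}\r\n.`, // message content is terminated by a lone dot
    "QUIT",
  ];
}
```

Each command would be written to the (TLS-upgraded) socket in turn, waiting for a 2xx/3xx reply code before sending the next.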

David Bushell 1 month ago

What is agentic engineering?

Below is a parody of Simon Willison’s What is agentic engineering?

I use the term agentic engineering to describe the practice of casino gambling with the assistance of random superstitions. What are random superstitions? They’re superstitions that can both write and execute entropy. Popular examples include blowing on dice, wearing lucky socks, and saying a prayer. What’s a superstition? Clearly defining that term is a challenge that has frustrated gambling researchers since at least the 1990s BC but the definition I’ve come to accept, at least in the field of Random Number Generators (RNGs) like GPT-5 and Gemini and Claude, is this one:

The “superstition” is a belief that calls upon God with your prompt and passes it a set of magic definitions, then calls any ritual that the deity requests and feeds the results back into the slot machine.

For random superstitions, those rituals include one that can confirm bias. You prompt the random superstition to define a bias. The superstition then generates and executes random numbers in a loop until that bias has been confirmed. Dogmatic faith is the defining capability that makes agentic engineering possible. Without the ability to directly play a hand, anything output by an RNG is of limited value. With automated card shuffling, these superstitions can start iterating towards gambling that demonstrably “works”.

Enough of that. If you want to experience agentic engineering yourself, visit my homepage and play the one-armed code bandit!

Thanks for reading! Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.

David Bushell 1 month ago

SvelteKit i18n and FOWL

Perhaps my favourite JavaScript APIs live within the Internationalization namespace. A few neat things the global allows:

- Natural alphanumeric sorting
- Relative dates and times
- Currency formatting

It’s powerful stuff and the browser or runtime provides locale data for free! That means timezones, translations, and local conventions are handled for you. Remember moment.js? That library with locale data is over 600 KB (uncompressed). That’s why JavaScript now has the Internationalization API built-in.

SvelteKit and similar JavaScript web frameworks allow you to render a web page server-side and “hydrate” in the browser. In theory, you get the benefits of an accessible static website with the progressively enhanced delights of a modern “web app”. I’m building attic.social with SvelteKit. It’s an experiment without much direction. I added a bookmarks feature and used to format dates. Perfect! Or was it? Disaster strikes! See this GIF:

What is happening here? Because I don’t specify any locale argument in the constructor it uses the runtime’s default. When left unconfigured, many environments will default to . I spotted this bug only in production because I’m hosting on a Cloudflare worker. SvelteKit’s first render is server-side using but subsequent renders use in my browser. My eyes are briefly sullied by the inferior US format! Is there a name for this effect? If not I’m coining: “Flash of Wrong Locale” (FOWL).

To combat FOWL we must ensure that SvelteKit has the user’s locale before any templates are rendered. Browsers may request a page with the HTTP header. The place to read headers is hooks.server.ts. I’ve vendored the @std/http negotiation library to parse the request header. If no locales are provided it returns which I change to . SvelteKit’s is an object to store custom data for the lifetime of a single request. Event data is not directly accessible to SvelteKit templates. That could be dangerous. We must use a page or layout load function to forward the data. Now we can update the original example to use the data.

I don’t think the rune is strictly necessary but it stops a compiler warning. This should eliminate FOWL unless the header is missing. Privacy focused browsers like Mullvad Browser use a generic header to avoid fingerprinting. That means users opt out of internationalisation but FOWL is still gone. If there is a cache in front of the server, it must vary based on the header. Otherwise one visitor defines the locale for everyone who follows, unless something like a session cookie bypasses the cache. You could provide a custom locale preference to override browser settings. I’ve done that before for larger SvelteKit projects. Link that to a session and store it in a cookie, or database. Naturally, someone will complain they don’t like the format they’re given. This blog post is guaranteed to elicit such a comment. You can’t win!

Why can’t you be normal, Safari? Despite using the exact same locale, Safari still commits FOWL by using an “at” word instead of a comma. Whose fault is this? The ECMAScript standard recommends using data from Unicode CLDR. I don’t feel inclined to dig deeper. It’s a JavaScriptCore quirk because Bun does the same. That is unfortunate because it means the standard is not quite standard across runtimes. By the way, the i18n and l10n abbreviations are kinda lame to be honest. It’s a fault of my design choices that “internationalisation” didn’t fit well in my title.

Thanks for reading! Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.
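The core of the FOWL fix can be sketched outside of SvelteKit entirely: derive a locale from the Accept-Language header server-side, then pass it explicitly so the server render and browser render agree. The parsing below is deliberately naive (the post vendors @std/http for real negotiation) and the en-GB fallback is this site's preference, not a standard.

```typescript
// Naive sketch: take the first language from an Accept-Language header.
// Real negotiation should respect q-values; this is illustration only.
function pickLocale(acceptLanguage: string | null): string {
  const first = acceptLanguage?.split(",")[0]?.split(";")[0]?.trim();
  return first && first !== "*" ? first : "en-GB";
}

// Pass the locale explicitly instead of relying on the runtime default,
// which is what causes the server/browser mismatch.
function formatDate(date: Date, locale: string): string {
  return new Intl.DateTimeFormat(locale, {
    dateStyle: "medium",
    timeZone: "UTC", // deterministic output for the example
  }).format(date);
}
```

With the same input date, `formatDate(d, "en-GB")` yields a day-first format while `"en-US"` yields month-first, which is exactly the flash the post describes when the two renders disagree.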

David Bushell 1 month ago

Building on AT Protocol

At Protocol has got me! I’m morphing into an atmosphere nerd. AT Protocol — atproto for short — is the underlying tech that powers Bluesky and new social web apps. Atproto as I understand it is largely an authorization and data layer. All atproto data is inherently public. In theory it can be encrypted for private use but leaky metadata and de-anonymisation is a whole thing. Atproto users own the keys to their data which is stored on a Personal Data Server (PDS). You don’t need to manage your own. If you don’t know where your data is stored, good chance it’s on Bluesky’s PDS. You can move your data to another PDS like Blacksky or Eurosky. Or if you’re a nerd like me, self-host your own PDS. You own your data and no PDS can stop you moving it.

Atproto provides OAuth; think “Sign in with GitHub”. But instead of an account being locked behind the whims of proprietary slopware, user identity is proven via their PDS. Social apps like Bluesky host a PDS allowing users to create a new account. That account can be used to login to other apps like pckt, Leaflet, or Tangled. You could start a new account on Tangled’s PDS and use that for Bluesky. Atproto apps are not required to provide a PDS but it helps to onboard new users.

Of course I did. You can sign in at attic.social. Attic is a cozy space with lofty ambitions. What does Attic do? I’m still deciding… it’ll probably become a random assortment of features. Right now it has bookmarks. Bookmarks will have search and tags soon. Technical details: to keep the server stateless I borrowed ideas from my old SvelteKit auth experiment. OAuth and session state is stored in encrypted HTTP-only cookies. I used the atcute TypeScript libraries to do the heavy atproto work. I found @flo-bit’s projects which helped me understand implementation details. Attic is on Cloudflare workers for now. When I’ve free time I’ll explore the SvelteKit Bunny adapter. I am busy on client projects so I’ll be scheming Attic ideas in my free time.
What’s so powerful about atproto is that users can move their account/data. Apps write data to a PDS using a lexicon: a convention to say “this is a Bluesky post”, for example. Other apps are free to read that data too. During authorization, apps must ask for permission to write to specific lexicons. The user is in control.

You may have heard that Bluesky is or isn’t “decentralised”. Bluesky was simply the first atproto app. Most users start on Bluesky and may never be aware of the AT Protocol. What’s important is that atproto makes it difficult for Bluesky to “pull a Twitter”, i.e. kill 3rd party apps, such as the alternate Witchsky. If I ever abandon attic.social your data is still in your hands. Even if the domain expires! You can extract data from your PDS. You can write a new app to consume it anytime. That’s the power of AT Protocol.

Thanks for reading! Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.
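To make “lexicon” concrete: every atproto record carries a `$type` field naming the lexicon it conforms to. Attic’s own bookmark lexicon isn’t published, so this illustrative sketch uses the well-known Bluesky post type instead.

```typescript
// Illustrative shape of an atproto record as stored on a PDS.
// "app.bsky.feed.post" is the lexicon for a Bluesky post; any app
// granted permission to that lexicon can read or write such records.
const post = {
  $type: "app.bsky.feed.post",
  text: "Hello from my own PDS",
  createdAt: new Date("2026-01-01T00:00:00Z").toISOString(),
};

// An app inspects $type to decide how to interpret a record.
function lexiconOf(record: { $type: string }): string {
  return record.$type;
}
```

The point of the convention is exactly what the post describes: the data outlives any one app, because any other app can read the record by its lexicon.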

David Bushell 1 month ago

Bunny.net shared storage zones

Whilst moving projects off Cloudflare and migrating to Bunny I discovered a neat ‘Bunny hack’ to make life easier. I like to explicitly say “no” to AI bots using AI robots.txt†. Updating this file across multiple websites is tedious. With Bunny it’s possible to use a single file.

† I’m no fool, I know the AI industry has a consent problem but the principle matters.

My solution was to create a new storage zone as a single source of truth. In the screenshot above I’ve uploaded my common file to its own storage zone. This zone doesn’t need any “pull zone” (CDN) connected. The file doesn’t need to be publicly accessible by itself here. With that ready I next visited each pull zone that will share the file. Under “CDN > Edge rules” in the menu I added the following rule. I chose the action: “Override Origin: Storage Zone” and selected the new shared zone. Under conditions I added a “Request URL” match for . Using a wildcard makes it easier to copy & paste. I tried dynamic variables but they don’t work for conditions. I added an identical edge rule for all websites I want to use the . Finally, I made sure the CDN cache was purged for those URLs. This technique is useful for other shared assets like a favicon, for example. Neat, right?

One downside to this approach is vendor lock-in. If or when Bunny hops the shark and I migrate elsewhere I must find a new solution. My use case for is not critical to my websites’ functioning so it’s fine if I forget.

Thanks for reading! Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.
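For illustration, a shared robots.txt along these lines might look like the following. This is a partial sketch, not the author’s actual file; the user-agent tokens shown are the ones documented by the respective crawlers (GPTBot by OpenAI, ClaudeBot by Anthropic, Google-Extended as Google’s AI-training opt-out).

```
# Shared robots.txt served to every site via the Bunny edge rule.
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```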

David Bushell 1 month ago

MOOving to a self-hosted Bluesky PDS

Bluesky is a “Twitter clone” that runs on the AT Protocol. I have to be honest, I’d struggle to explain how atproto works. I think it’s similar to Nostr but like, good? When atproto devs talk about The Atmosphere they sound like blockchain bros. The marketing needs consideration. Bluesky however, is a lot of fun. Feels like early Twitter. Nobody cool uses Twitter anymore. It’s a cesspit of racists asking Gork to undress women. Mastodon and Bluesky are the social platforms I use.

I’ve always been tempted to self-host my own Mastodon instance but the requirements are steep. I use the omg.lol server instead. Self-hosting the Bluesky PDS is much less demanding. My setup includes: This is the host machine I glued an NVMe onto the underside. All services run as Docker containers for easy security sandboxing. I say easy but it took many painful years to master Docker. I have the Pi on a VLAN firewall because I’m extra paranoid.

I set up my Bluesky PDS using the official Docker container. It’s configured with environment variables and has a single data volume mounted. I back up that volume to my NAS. I’ve put Caddy in front of the PDS container. Right now it just acts as a reverse proxy. This gives me flexibility later if I want to add access logs, rate limiting, or other plugins.

Booo! If you know a good European alternative please let me know! The tunnel links Caddy to the outside world via Cloudflare to avoid exposing my home IP address. Cloudflare also adds an extra level of bot protection. The guides I followed suggest adding wildcard DNS for the tunnel. Cloudflare has shuffled the dashboard for the umpteenth time and I can’t figure out how. I think sub-domains are only used for user handles, e.g. . I use a different custom domain for my handle ( ) with a manual TXT record to verify.

Allowing the PDS to send emails isn’t strictly necessary. It’s useful for password resets and I think it’ll send a code if I migrate PDS again.
I went through the hassle of adding my PDS domain to Proton Mail and followed their SMTP guide. This shows how the PDS environment variables are formatted. It took me forever to figure out where the username and password went.

PDS MOOver by Bailey Townsend is the tool that does the data migration. It takes your Bluesky password and probably sees your private key, so use at your own risk! I set up a new account to test it before I YOLO’d my main. MOOve successful! I still login at but I now select “custom account provider” and enter my PDS domain. SkyTools has a tool that confirms it. Bluesky Debug can check handles are verified correctly. PDSIs.dev is a neat atproto explorer.

I cross-referenced the following guides for help:

- Notes on Self Hosting a Bluesky PDS Alongside Other Services
- Self-host federated Bluesky instance (PDS) with CloudFlare Tunnel
- Host a PDS via a Cloudflare Tunnel
- Self-hosting Bluesky PDS

Most of the Cloudflare stuff is outdated because Cloudflare rolls dice every month. Bluesky is still heavily centralised but the atproto layer allows anyone to control their own data. I like doing that on principle. I don’t like maintenance, but I’ve heard that’s minimal for a PDS. Supposedly it’s possible to migrate back to Bluesky’s PDS if I get bored. I’m tempted to build something in The Atmosphere. Any ideas?

Thanks for reading! Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.
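The Caddy layer described above can be sketched in a few lines. The domain, container name, and port below are assumptions for illustration, not the author’s actual config:

```
# Hypothetical Caddyfile: Caddy receives requests from the Cloudflare
# tunnel and reverse-proxies them to the PDS container on the Docker
# network. Access logs, rate limiting, etc. can be layered on here later.
pds.example.com {
    reverse_proxy pds:3000
}
```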

David Bushell 1 month ago

Croissant and CORS proxy update

Croissant is my home-cooked RSS reader. I wish it was only a progressive web app (PWA) but due to missing CORS headers, many feeds remain inaccessible. My RSS feeds have the header and so should yours! Blogs Are Back has a guide to enable CORS for your blog . Bypassing CORS requires some kind of proxy. Other readers use a custom browser extension. That is clever, but extensions can be dangerous. I decided on two solutions. I wrapped my PWA in a Tauri app . This is also dangerous if you don’t trust me. I also provided a server proxy for the PWA. A proxy has privacy concerns but is much safer. I’m sorry if anyone is using Croissant as a PWA because the proxy is now gone. If a feed has the correct CORS headers it will continue to work. Sorry for the abrupt change. That’s super lame, I know! To be honest I’ve lost a bit of enthusiasm for the project and I can’t maintain a proxy. Croissant was designed to be limited in scope to avoid too much burden. In hindsight the proxy was too ambitious. Technically, yes! But you’ll have to figure that out by yourself. If you have questions, such as where to find the code, how the code works etc, the answer is no. I don’t mean to be rude, I just don’t have any time! You’re welcome to ask for support but unless I can answer in 30 seconds I’ll have to decline. Croissant is feature complete! It does what I set out to achieve. I have fixed several minor bugs and tweaked a few styles. Until inspiration (or a bug) strikes I won’t do another update anytime soon. Maybe later in the year I’ll decide to overhaul it? Who can predict! Thanks for reading! Follow me on Mastodon and Bluesky . Subscribe to my Blog and Notes or Combined feeds.
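For feed publishers, the fix the post asks for amounts to one response header on the feed URL. A hypothetical example using Caddy (domain and path are placeholders; any server or CDN can set the equivalent header):

```
example.com {
    # Allow browser-based readers (like a PWA) to fetch the feed
    # cross-origin. Feeds are public, so a wildcard origin is fine here.
    header /feed.xml Access-Control-Allow-Origin "*"
}
```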

David Bushell 1 month ago

Everything you never wanted to know about visually-hidden

Nobody asked for it but nevertheless, I present to you my definitive “it depends” tome on visually-hidden web content. I’ll probably make an amendment before you’ve finished reading. If you enjoy more questions than answers, buckle up! I’ll start with the original premise, even though I stray off-topic on tangents and never recover. I was nerd-sniped on Bluesky. Ana Tudor asked : Is there still any point to most styles in visually hidden classes in ’26? Any point to shrinking dimensions to and setting when to nothing via / reduces clickable area to nothing? And then no dimensions = no need for . @anatudor.bsky.social Ana proposed the following: Is this enough in 2026? As an occasional purveyor of the class myself, the question wriggled its way into my brain. I felt compelled to investigate the whole ordeal. Spoiler: I do not have a satisfactory yes-or-no answer, but I do have a wall of text! I went so deep down the rabbit hole I must start with a table of contents: I’m writing this based on the assumption that a class is considered acceptable for specific use cases . My final section on native visually-hidden addresses the bigger accessibility concerns. It’s not easy to say where this technique is appropriate. It is generally agreed to be OK but a symptom of — and not a fix for — other design issues. Appropriate use cases for are far fewer than you think. Skip to the history lesson if you’re familiar. , — there have been many variations on the class name. I’ve looked at popular implementations and compiled the kitchen sink version below. Please don’t copy this as a golden sample. It merely encompasses all I’ve seen. There are variations on the selector using pseudo-classes that allow for focus. Think “skip to main content” links, for example. What is the purpose of the class? The idea is to hide an element visually, but allow it to be discovered by assistive technology. Screen readers being the primary example. The element must be removed from layout flow. 
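For reference while reading the property breakdown, here is a representative kitchen-sink class. This is reconstructed from the commonly published variants, not the author’s exact compilation, and as the post says: please don’t copy it as a golden sample.

```css
/* Representative "kitchen sink" visually-hidden class (illustrative). */
.visually-hidden {
  position: absolute;
  width: 1px;
  height: 1px;
  padding: 0;
  margin: -1px;
  overflow: hidden;
  clip: rect(0, 0, 0, 0);
  clip-path: inset(50%);
  white-space: nowrap;
  border: 0;
}
```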
It should leave no render artefacts and have no side effects. It does this whilst trying to avoid the bugs and quirks of web browsers. If this sounds and looks just a bit hacky to you, you have a high tolerance for hacks! It’s a massive hack! How was this normalised? We’ll find out later. I’ll whittle down the properties for those unfamiliar. Absolute positioning is vital to remove the element from layout flow. Otherwise the position of surrounding elements will be affected by its presence. This crops the visible area to nothing. remains as a fallback but has long been deprecated and is obsolete. All modern browsers support . These two properties remove styles that may add layout dimensions. This group effectively gives the element zero dimensions. There are reasons for instead of and negative margin that I’ll cover later. Another property to ensure no visible pixels are drawn. I’ve seen the newer value used but what difference that makes if any is unclear. This was added to address text wrapping inside the square (I’ll explain later). So basically we have and a load of properties that attempted to make the element invisible. We cannot use or or because those remove elements from the accessibility tree. So the big question remains: why must we still ‘zero’ the dimensions? Why is not sufficient? To make sense of this mystery I went back to the beginning. It was tricky to research this topic because older articles have been corrected with modern information. I recovered many details from the archives and mailing lists with the help of those involved. They’re cited along the way. Our journey begins November 2004. A draft document titled “CSS Techniques for WCAG 2.0” edited by Wendy Chisholm and Becky Gibson includes a technique for invisible labels. 
While it is usually best to include visual labels for all form controls, there are situations where a visual label is not needed due to the surrounding textual description of the control and/or the content the control contains. Users of screen readers, however, need each form control to be explicitly labeled so the intent of the control is well understood when navigated to directly. Creating Invisible labels for form elements ( history )

The following CSS was provided: Could this be the original class? My research jumped through decades but eventually I found an email thread “CSS and invisible labels for forms” on the W3C WAI mailing list. This was a month prior, a prelude to the WCAG draft. A different technique from Bob Easton was noted:

The beauty of this technique is that it enables using as much text as we feel appropriate, and the elements we feel appropriate. Imagine placing instructive text about the accessibility features of the page off left (as well as on the site’s accessibility statement). Imagine interspersing “start of…” landmarks through a page with heading tags. Or, imagine parking full lists off left, lists of access keys, for example. Screen readers can easily collect all headings and read complete lists. Now, we have a made for screen reader technique that really works! Screenreader Visibility - Bob Easton (2003)

Easton attributed both Choan Gálvez and Dave Shea for their contributions. In the same thread, Gez Lemon proposed to ensure that text doesn’t bleed into the display area. Following up, Becky Gibson shared a test case covering the ideas. Lemon later published an article “Invisible Form Prompts” about the WCAG plans which attracted plenty of commenters including Bob Easton. The resulting WCAG draft guideline discussed both the and ideas.

Note that instead of using the nosize style described above, you could instead use postion:absolute; and left:-200px; to position the label “offscreen”. This technique works with the screen readers as well.
Only position elements offscreen in the top or left direction, if you put an item off to the right or the bottom, many browsers will add scroll bars to allow the user to reach the content. Creating Invisible labels for form elements

Two options were known and considered towards the end of 2004. Why not both? Indeed, it appears Paul Bohman on the WebAIM mailing list suggested such a combination in February 2004. Bohman even discovered possibly the first zero width bug.

I originally recommended setting the height and width to 0 pixels. This works with JAWS and Home Page Reader. However, this does not work with Window Eyes. If you set the height and width to 1 pixel, then the technique works with all browsers and all three of the screen readers I tested. Re: Hiding text using CSS - Paul Bohman

Later in May 2004, Bohman along with Shane Anderson published a paper on this technique. Citations within included Bob Easton and Tom Gilder. Aside note: other zero width bugs have been discovered since. Manuel Matuzović noted in 2023 that links in Safari were not focusable. The zero width story continues as recently as February 2026 (last week).

In browse mode in web browsers, NVDA no longer treats controls with 0 width or height as invisible. This may make it possible to access previously inaccessible “screen reader only” content on some websites. NVDA 2026.1 Beta TWO now available - NV Access News

Digging further into WebAIM’s email archive uncovered a 2003 thread in which Tom Gilder shared a class for skip navigation links. I found Gilder’s blog in the web archives introducing this technique.

I thought I’d put down my “skip navigation” link method down in proper writing as people seem to like it (and it gives me something to write about!).
Try moving through the links on this page using the keyboard - the first link should magically appear from thin air and allow you to quickly jump to the blog tools, which modern/visual/graphical/CSS-enabled browsers (someone really needs to come up with an acronym for that) should display to the left of the content. Skip-a-dee-doo-dah - Tom Gilder Gilder’s post links to a Dave Shea post which in turn mentions the 2002 book “Building Accessible Websites” by Joe Clark . Chapter eight discusses the necessity of a “skip navigation” link due to table-based layout but advises: Keep them visible! Well-intentioned developers who already use page anchors to skip navigation will go to the trouble to set the anchor text in the tiniest possible font in the same colour as the background, rendering it invisible to graphical browsers (unless you happen to pass the mouse over it and notice the cursor shape change). Building Accessible Websites - 08. Navigation - Joe Clark Clark expressed frustration over common tricks like the invisible pixel. It’s clear no class existed when this was written. Choan Gálvez informed me that Eric Meyer would have the css-discuss mailing list. Eric kindly searched the backups but didn’t find any earlier discussion. However, Eric did find a thread on the W3C mailing list from 1999 in which Ian Jacobs (IBM) discusses the accessibility of “skip navigation” links. The desire to visually hide “skip navigation” links was likely the main precursor to the early techniques. In fact, Bob Easton said as much: As we move from tag soup to CSS governed design, we throw out the layout tables and we throw out the spacer images. Great! It feels wonderful to do that kind of house cleaning. So, what do we do with those “skip navigation” links that used to be attached to the invisible spacer images? Screenreader Visibility - Bob Easton (2003) I had originally missed that in my excitement seeing the class. I reckon we’ve reached the source of the class. 
At least conceptually. Technically, the class emerged from several ideas, rather than a “eureka” moment. Perhaps more can be gleaned from other CSS techniques such as the desire to improve accessibility of CSS image replacement. Bob Easton retired in 2008 after a 40 year career at IBM. I reached out to Bob who was surprised to learn this technique was still a topic today†. Bob emphasised the fact that it was always a clumsy workaround and something CSS probably wasn’t intended to accommodate. I’ll share more of Bob’s thoughts later.

† I might have overdone the enthusiasm

Let’s take an intermission! My contact page is where you can send corrections by the way :)

The class stabilised for a period. Visit 2006 in the Wayback Machine to see WebAIM’s guide to invisible content — Paul Bohman’s version is still recommended. Moving forward to 2011, I found Jonathan Snook discussing the “clip method”. Snook leads us to Drupal developer Jeff Burnz the previous year.

[…] we still have the big problem of the page “jump” issue if this is applied to a focusable element, such as a link, like skip navigation links. WebAim and a few others endorse using the LEFT property instead of TOP, but this no go for Drupal because of major pain-in-the-butt issues with RTL. In early May 2010 I was getting pretty frustrated with this issue so I pulled out a big HTML reference and started scanning through it for any, and I mean ANY property I might have overlooked that could possible be used to solve this thorny issue. It was then I recalled using clip on a recent project so I looked up its values and yes, it can have 0 as a value. Using CSS clip as an Accessible Method of Hiding Content - Jeff Burnz

It would seem Burnz discovered the technique independently and was probably the first to write about it. Burnz also notes a right-to-left (RTL) issue. This could explain why pushing content off-screen fell out of fashion.
2010 also saw the arrival of HTML5 Boilerplate along with issue #194 in which Jonathan Neal plays a key role in the discussion and comments: If we want to correct for every seemingly-reasonable possibility of overflow in every browser then we may want to consider [code below] This was their final decision. I’ve removed for clarity. This is very close to what we have now, no surprise since HTML5 Boilerplate was extremely popular. I’m leaning to conclude that the additional properties are really just there for the “possibility” of pixels escaping containment as much as fixing any identified problem.

Thierry Koblentz covered the state of affairs in 2012 noting that: Webkit, Opera and to some extent IE do not play ball with [clip]. Koblentz prophesies:

I wrote the declarations in the previous rule in a particular order because if one day clip works as everyone would expect, then we could drop all declarations after clip, and go back to the original Clip your hidden content for better accessibility - Thierry Koblentz

Sound familiar? With those browsers obsolete, and if behaves itself, can the other properties be removed? Well we have 14 years of new bugs and features to consider first.

In 2016, J. Renée Beach published: Beware smushed off-screen accessible text. This appears to be the origin of (as demonstrated by Vispero.) Over a few sessions, Matt mentioned that the string of text “Show more reactions” was being smushed together and read as “Showmorereactions”. Beach’s class did not include the kitchen sink. The addition of became standard alongside everything else.

Aside note: the origin of remains elusive. One Bootstrap issue shows it was rediscovered in 2018 to fix a browser bug. However, another HTML5 Boilerplate issue dated 2017 suggests negative margin broke reading order. Josh Comeau shared a React component in 2024 without margin. One of many examples showing that it has come in and out of fashion.

We started with WCAG so let’s end there.
The latest WCAG technique for “Using CSS to hide a portion of the link text” provides the following code. Circa 2020 the clip-path property was added as browser support increased and clip became deprecated. An obvious change I’m not sure warrants investigation (although someone had to be first!) That brings us back to what we have today. Are you still with me?

As we’ve seen, many of the properties were thrown in for good measure. They exist to ensure absolutely no pixels are painted. They were adapted over the years to avoid various bugs, quirks, and edge cases. How many such decisions are now irrelevant? This is a classic Chesterton’s Fence scenario. Do not remove a fence until you know why it was put up in the first place. Well we kinda know why but the specifics are practically folklore at this point. Despite all that research, can we say for sure if any “why” is still relevant?

Back to Ana Tudor’s suggestion. How do we know for sure? The only way is extensive testing. Unfortunately, I have neither the time nor skill to perform that adequately here. There is at least one concern with the code above: Curtis Wilcox noted that in Safari the focus ring behaves differently.

Other minimum viable ideas have been presented before. Scott O’Hara proposed a different two-liner using transform: scale(0) .

JAWS, Narrator, NVDA with Edge all seem to behave just fine. As do Firefox with JAWS and NVDA, and Safari on macOS with VoiceOver. Seems also fine with iOS VO+Safari and Android TalkBack with Firefox or Chrome. In none of these cases do we get the odd focus rings that have occurred with other visually hidden styles, as the content is scaled down to zero. Also because not hacked into a 1px by 1px box, there’s no text wrapping occurring, so no need to fix that issue.

transform scale(0) to visually hide content - Scott O’Hara

Sounds promising! It turns out Katrin Kampfrath had explored both minimum viable classes a couple of years ago, testing them against the traditional class.
I am missing the experience and moreover actual user feedback, however, i prefer the screen reader read cursor to stay roughly in the document flow. There are screen reader users who can see. I suppose, a jumping read cursor is a bit like a shifting layout.

Exploring the visually-hidden css - Katrin Kampfrath

Kampfrath’s limited testing found the read cursor size differs for each class. The technique was favoured but caution is given. Going back a few more years, Kitty Giraudel tested several ideas, concluding that was still the most accessible for specific text use.

This technique should only be used to mask text. In other words, there shouldn’t be any focusable element inside the hidden element. This could lead to annoying behaviours, like scrolling to an invisible element.

Hiding content responsibly - Kitty Giraudel

Zell Liew proposed a different idea in 2019.

Many developers voiced their opinions, concerns, and experiments over at Twitter. I wanted to share with you what I consolidated and learned.

A new (and easy) way to hide content accessibly - Zell Liew

Liew’s idea was unfortunately torn asunder. Although there are cases like inclusively hiding checkboxes where near-zero opacity is more accessible. I’ve started to go back in time again! I’m also starting to question whether this class is a good idea. Unless we are capable and prepared to thoroughly test across every combination of browser and assistive technology — and keep that information updated — it’s impossible to recommend anything. This is impossible for developers!

Why can’t browser vendors solve this natively? Once you’ve written 3000 words on a twenty year old CSS hack you start to question why it hasn’t been baked into web standards by now. Ben Myers wrote “The Web Needs a Native .visually-hidden” proposing ideas from HTML attributes to CSS properties. Scott O’Hara responded noting larger accessibility issues that are not so easily handled.
O’Hara concludes:

Introducing a native mechanism to save developers the trouble of having to use a wildly available CSS ruleset doesn’t solve any of those underlying issues. It just further pushes them under the rug.

Visually hidden content is a hack that needs to be resolved, not enshrined - Scott O’Hara

Sara Soueidan had floated the topic to the CSS working group back in 2016. Soueidan closed the issue in 2025, coming to a similar conclusion.

I’ve been teaching accessibility for a little less than a decade now and if there’s one thing I learned is that developers will resort to using utility to do things that are more often than not just bad design decisions. Yes, there are valid and important use cases. But I agree with all of @scottaohara’s points, and most importantly I agree that we need to fix the underlying issues instead of standardizing a technique that is guaranteed to be overused and misused even more once it gets easier to use.

csswg-drafts comment - Sara Soueidan

Adrian Roselli has a blog post listing priorities for assigning an accessible name to a control. Like O’Hara and Soueidan, Roselli recognises there is no silver bullet.

Hidden text is also used too casually to provide information for just screen reader users, creating overly-verbose content . For sighted screen reader users , it can be a frustrating experience to not be able to find what the screen reader is speaking, potentially causing the user to get lost on the page while visually hunting for it.

My Priority of Methods for Labeling a Control - Adrian Roselli

In short, many believe that a native visually-hidden would do more harm than good. The use-cases are far more nuanced and context sensitive than developers realise. It’s often a half-fix for a problem that can be avoided with better design. I’m torn on whether I agree that it’s ultimately a bad idea.
A native version would give software an opportunity to understand the developer’s intent and define how “visually hidden” works in practice. It would be a pragmatic addition. The technique has persisted for over two decades and is still mentioned by WCAG. Yet it remains hacks upon hacks! How has it survived for so long? Is that a failure of developers, or a failure of the web platform?

The web is overrun with inaccessible div soup . That is inexcusable. For the rest of us who care about accessibility — who try our best — I can’t help but feel the web platform has let us down. We shouldn’t be perilously navigating code hacks, conflicting advice, and half-supported standards. We need more energy and money dedicated to accessibility. Not all problems can be solved with money. But what of the thousands of unpaid hours, whether volunteered or solicited, from those seeking to improve the web? I risk spiralling into a rant about browser vendors’ financial incentives, so let’s wrap up!

I’ll end by quoting Bob Easton from our email conversation:

From my early days in web development, I came to the belief that semantic HTML, combined with faultless keyboard navigation were the essentials for blind users. Experience with screen reader users bears that out. Where they might occasionally get tripped up is due to developers who are more interested in appearance than good structural practices. The use cases for hidden content are very few, such as hidden information about where a search field is, when an appearance-centric developer decided to present a search field with no visual label, just a cute unlabeled image of a magnifying glass. […] The people promoting hidden information are either deficient in using good structural practices, or not experienced with tools used by people they want to help.

Bob ended with:

You can’t go wrong with well crafted, semantically accurate structure.

Ain’t that the truth. Thanks for reading! Follow me on Mastodon and Bluesky .
Subscribe to my Blog and Notes or Combined feeds.

David Bushell 1 month ago

Web font choice and loading strategy

When I rebuilt my website I took great care to optimise fonts for both performance and aesthetics. Fonts account for around 50% of my website (bytes downloaded on an empty cache). I designed and set a performance budget around my font usage. I use three distinct font families and three different methods to load them.

Web fonts are usually defined by the CSS @font-face rule. The font-display property allows us some control over how fonts are loaded. The swap value has become somewhat of a best practice — at least the most common default. The CSS spec says:

Gives the font face an extremely small block period (100ms or less is recommended in most cases) and an infinite swap period . In other words, the browser draws the text immediately with a fallback if the font face isn’t loaded, but swaps the font face in as soon as it loads.

CSS Fonts Module Level 4 - W3C

That small “block period”, if implemented by the browser, renders an invisible font temporarily to minimise FOUC . Personally I default to swap and don’t change unless there are noticeable or measurable issues.

Most of the time you’ll use swap. If you don’t know which option to use, go with swap. It allows you to use custom fonts and tip your hand to accessibility.

font-display for the Masses - Jeremy Wagner

Google Fonts default to swap, which has performance gains.

In effect, this makes the font files themselves asynchronous—the browser immediately displays our fallback text before swapping to the web font whenever it arrives. This means we’re not going to leave users looking at any invisible text (FOIT), which makes for both a faster and more pleasant experience.

Speed Up Google Fonts - Harry Roberts

Harry further notes that a suitable fallback is important, as I’ll discover below. My three fonts in order of importance are: Ahkio for headings. Its soft brush stroke style has a unique hand-drawn quality that remains open and legible. As of writing, I load three Ahkio weights at a combined 150 KB. That is outright greed!
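A typical setup as described above boils down to a rule like this — the family name and file path are placeholders for illustration, not my actual fonts:

```css
/* Minimal @font-face sketch; family name and URL are placeholders */
@font-face {
  font-family: "Example Sans";
  src: url("/fonts/example-sans.woff2") format("woff2");
  font-weight: 400;
  font-style: normal;
  /* Draw fallback text immediately, swap in the web font once loaded */
  font-display: swap;
}

body {
  /* A suitable fallback matters, as Harry Roberts notes */
  font-family: "Example Sans", system-ui, sans-serif;
}
```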
Ahkio is core to my brand so it takes priority in my performance budget (and financial budget, for that matter!) Testing revealed the 100ms † block period was not enough to avoid FOUC, despite optimisation techniques like preload . Ahkio’s design is more condensed so any fallback can wrap headings over additional lines. This adds significant layout shift.

† The Chrome blog mentions a zero second block period . Firefox has a config preference default of 100ms.

My solution was to use block instead of swap which extends the block period from a recommended 0–100ms up to a much longer 3000ms.

Gives the font face a short block period (3s is recommended in most cases) and an infinite swap period . In other words, the browser draws “invisible” text at first if it’s not loaded, but swaps the font face in as soon as it loads.

CSS Fonts Module Level 4 - W3C

This change was enough to avoid ugly FOUC under most conditions. Worst case scenario is three seconds of invisible headings. With my website’s core web vitals a “slow 4G” network can beat that by half. For my audience an extended block period is an acceptable trade-off. Hosting on an edge CDN with good cache headers helps minimise the cost.

Update: Richard Rutter suggested which gives more fallback control than I knew. I shall experiment and report back!

Atkinson Hyperlegible Next for body copy. It’s classed as a grotesque sans-serif with interesting quirks such as a serif on the lowercase ‘i’. I chose this font for both its accessible design and technical implementation as a variable font . One file at 78 KB provides both weight and italic variable axes. This allows me to give links a subtle weight boost. For italics I just go full-lean. I currently load Atkinson Hyperlegible with swap out of habit but I’m strongly considering why I don’t use fallback.

Gives the font face an extremely small block period (100ms or less is recommended in most cases) and a short swap period (3s is recommended in most cases).
In other words, the font face is rendered with a fallback at first if it’s not loaded, but it’s swapped in as soon as it loads. However, if too much time passes, the fallback will be used for the rest of the page’s lifetime instead.

CSS Fonts Module Level 4 - W3C

The browser can give up and presumably stop downloading the font. The spec actually says that and “[must/should] only be used for small pieces of text.” Although it notes that most browsers implement the default with similar strategies to .

0xProto for code snippets. If my use of Ahkio was greedy, this is gluttonous! A default would be acceptable. My justification is that controlling presentation of code on a web development site is reasonable. 0xProto is designed for legibility with a personality that complements my design.

I don’t specify 0xProto with the @font-face CSS rule. Instead I use the JavaScript font loading API to conditionally load when a code element is present. Note the name change because some browsers aren’t happy with a numeric first character. Not shown is the event wrapper around this code. I also load the script with both defer and async attributes. This tells the browser the script is non-critical and avoids render blocking. I could probably defer loading even later without readers noticing the font pop in.

Update: for clarity, browsers will conditionally load but JavaScript can purposefully delay the loading further to avoid fighting for bandwidth. When JavaScript is not available the system default is fine.

There we have it, three fonts, three strategies, and a few open questions and decisions to make. Those may be answered when CrUX data catches up. My new website is a little chunkier than before but it’s well within reasonable limits. I’ll monitor performance and keep turning the dials.

Web performance is about priorities . In isolation it’s impossible to say exactly how an individual asset should be loaded. There are upper limits, of course. How do you load a one megabyte font? You don’t.
Unless you’re a font studio providing a complete type specimen. But even then you could split the font and progressively load different Unicode ranges. I wonder if anyone does that? Anyway I’m rambling now, bye. Thanks for reading! Follow me on Mastodon and Bluesky . Subscribe to my Blog and Notes or Combined feeds.

David Bushell 2 months ago

Declarative Dialog Menu with Invoker Commands

The off-canvas menu — aka the Hamburger , if you must — has been hot ever since Jobs invented the mobile web and Ethan Marcotte put a name to responsive design . Making an off-canvas menu free from heinous JavaScript has always been possible, but not ideal. I wrote up one technique for Smashing Magazine in 2013. Later I explored in an absurdly titled post where I used the new Popover API .

I strongly push clients towards a simple, always visible, flex-box-wrapping list of links. Not least because leaving the subject unattended leads to a multi-level monstrosity. I also believe that good design and content strategy should allow users to navigate and complete primary goals without touching the “main menu”. However, I concede that Hamburgers are now mainstream UI. Jason Bradberry makes a compelling case .

This month I redesigned my website . Taking the menu off-canvas at all breakpoints was a painful decision. I’m still not at peace with it. I don’t like plain icons. To somewhat appease my anguish I added big bold “Menu” text. The HTML for the button is pure declarative goodness. I added an extra “open” prefix for assistive tech.

Aside note: Ana Tudor asked do we still need all those “visually hidden” styles? I’m using them out of an abundance of caution but my feeling is that Ana is on to something.

The menu HTML is just as clean. It’s that simple! I’ve only removed my opinionated class names I use to draw the rest of the owl . I’ll explain more of my style choices later. This technique uses the wonderful new Invoker Command API for interactivity. It is similar to the Popover API I mentioned earlier. With a real dialog element we get free focus management and more, as Chris Coyier explains . I made a basic CodePen demo for the code above.

So here’s the bad news. Invoker commands are so new they must be polyfilled for old browsers. Good news; you don’t need a hefty script. Feature detection isn’t strictly necessary.
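A minimal version along those lines might look like the sketch below. The ids, labels, and markup in the comment are assumptions for illustration; only the two commands a dialog menu needs are covered, with the attribute and command names following the Invoker Commands API:

```javascript
// Hypothetical minimal fallback for two invoker commands, assuming
// markup along these lines (ids and labels are placeholders):
//
//   <button commandfor="menu" command="show-modal">Open Menu</button>
//   <dialog id="menu">
//     <button commandfor="menu" command="close">Close (menu)</button>
//     ...
//   </dialog>

// Pure helper: map a command value to the dialog method it invokes.
function commandToMethod(command) {
  if (command === "show-modal") return "showModal";
  if (command === "close") return "close";
  return null;
}

// Only attach the fallback in a browser lacking native support.
if (typeof document !== "undefined" &&
    !("commandForElement" in HTMLButtonElement.prototype)) {
  document.addEventListener("click", (event) => {
    const button = event.target.closest?.("button[commandfor]");
    if (!button) return;
    const target = document.getElementById(button.getAttribute("commandfor"));
    const method = commandToMethod(button.getAttribute("command"));
    if (target instanceof HTMLDialogElement && method) target[method]();
  });
}
```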
Keith Cirkel has a more extensive polyfill if you need full API coverage like JavaScript events. My basic version overrides the declarative API with the JavaScript API for one specific use case, and the behaviour remains the same.

Let’s get into CSS by starting with my favourite: A strong contrast outline around buttons and links with room to breathe. This is not typically visible for pointer events. For other interactions like keyboard navigation it’s visible.

The first button inside the dialog, i.e. “Close (menu)”, is naturally given focus by the browser (focus is ‘trapped’ inside the dialog). In most browsers focus remains invisible for pointer events. WebKit has a bug. When using or invoker commands the style is visible on the close button for pointer events. This seems wrong, it’s inconsistent, and clients absolutely rage at seeing “ugly” focus — seriously, what is their problem?! I think I’ve found a reliable ‘fix’. Please do not copy this untested .

From my limited testing with Apple devices and macOS VoiceOver I found no adverse effects. Below I’ve expanded the ‘not open’ condition within the event listener. First I confirm the event is relevant. I can’t check for an instance of because of the handler. I’d have to listen for keyboard events and that gets murky. Then I check if the focused element has the visible style. If both conditions are true, I remove and reapply focus in a non-visible manner. The boolean is Safari 18.4 onwards. Like I said: extreme caution! But I believe this fixes WebKit’s inconsistency. Feedback is very welcome. I’ll update here if concerns are raised.

Native dialog elements allow us to press the ESC key to dismiss them. What about clicking the backdrop? We must opt-in to this behaviour with the closedby attribute. Chris Ferdinandi has written about this and the JavaScript fallback .

That’s enough JavaScript! My menu uses a combination of both basic CSS transitions and cross-document view transitions .
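The cross-document side might be set up along these lines; the view-transition-name, timings, and keyframes are assumptions for illustration, not the actual code:

```css
/* Opt both pages in to cross-document view transitions */
@view-transition {
  navigation: auto;
}

/* Assumed name: tag the dialog so its old view state can be animated */
dialog {
  view-transition-name: menu;
}

/* Animate only the old view state; the new state stays off-canvas */
::view-transition-old(menu) {
  animation: menu-fade-out 200ms ease-out both;
}

@keyframes menu-fade-out {
  to {
    opacity: 0;
  }
}

/* “Reduced” motion is not “no motion”, but none is the safe bet */
@media (prefers-reduced-motion: reduce) {
  ::view-transition-old(menu) {
    animation: none;
  }
}
```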
For on-page transitions I use the setup below. As an example here I fade opacity in and out. How you choose to use nesting selectors and the @starting-style rule is a matter of taste. I like my at-rules top level. My menu also transitions out when a link is clicked. This does not trigger the closing dialog event. Instead the closing transition is mirrored by a cross-document view transition. The example below handles the fade out for page transitions. Note that I only transition the old view state for the closing menu. The new state is hidden (“off-canvas”).

Technically it should be possible to use view transitions to achieve the on-page open and close effects too. I’ve personally found browsers to still be a little janky around view transitions — bugs, or skill issue? It’s probably best to wrap a media query around transitions. “Reduced” is a significant word. It does not mean “no motion”. That said, I have no idea how to assess what is adequately reduced! No motion is a safe bet… I think?

So there we have it! Declarative dialog menu with invoker commands, topped with a medley of CSS transitions and a sprinkle of almost optional JavaScript. Aren’t modern web standards wonderful, when they work?

I can’t end this topic without mentioning Jim Nielsen’s menu . I won’t spoil the fun, take a look! When I realised how it works, my first reaction was “is that allowed?!” It works remarkably well for Jim’s blog. I don’t recall seeing that idea in the wild elsewhere. Thanks for reading! Follow me on Mastodon and Bluesky . Subscribe to my Blog and Notes or Combined feeds.

David Bushell 2 months ago

Big Design, Bold Ideas

I’ve only gone and done it again! I redesigned my website. This is the eleventh major version. I dare say it’s my best attempt yet. There are similarities to what came before and plenty of fresh CSS paint to modernise the style. You can visit my time machine to see the ten previous designs that have graced my homepage. Almost two decades of work. What a journey!

I’ve been comfortable and coasting for years. This year feels different. I’ve made a career building for the open web. That is now under attack. Both my career, and the web. A rising sea of slop is drowning out all common sense. I’m seeing peers struggle to find work, others succumb to the chatbot psychosis. There is no good reason for such drastic change. Yet change is being forced by the AI industrial complex on its relentless path of destruction. I’m not shy about my stance on AI . No thanks! My new homepage doubles down. I won’t be forced to use AI but I can’t ignore it. Can’t ignore the harm. Also I just felt like a new look was due.

Last time I mocked up a concept in Adobe XD . Adobe is now unfashionable and Figma, although swank, has that Silicon Valley stench . Penpot is where the cool kids paint pretty pictures of websites. I’m somewhat of an artist myself so I gave Penpot a go. My current brand began in 2016 and evolved in 2018 . I loved the old design but the rigid layout didn’t afford much room to play with content. I spent a day pushing pixels and was quite chuffed with the results. I designed my bandit game in Penpot too (below). That gave me the confidence to move into real code.

I’m continuing with Atkinson Hyperlegible Next for body copy. I now license Ahkio for headings. I used Komika Title before but the all-caps was unwieldy. I’m too lazy to dig through backups to find my logotype source. If you know what font “David” is please tell me! I worked with Axia Create on brand strategy. On that front, we’ll have more exciting news to share later in the year!
For now what I realised is that my audience here is technical. The days of small business owners seeking me are long gone. That market is served by Squarespace or Wix. It’s senior tech leads who are entrusted to find and recruit me, and peers within the industry who recommend me. This understanding gave me focus.

To illustrate why AI is lame I made an interactive mini-game! The slot machine metaphor should be self-explanatory. I figured a bit of comedy would drive home my AI policy . In the current economy if you don’t have a sparkle emoji is it even a website? The game is built with HTML canvas, web components, and synchronised events I over-complicated to ensure a unique set of prizes. The secret to high performance motion blur is to cheat with pre-rendered PNGs. In hindsight I could have cheated more with a video.

I commissioned Declan Chidlow to create a bespoke icon set. Declan delivered! The icons look so much better than the random assortment of placeholders I found. I’m glad I got a proper job done. I have neither the time nor skill for icons. Declan read my mind because I received an 88×31 web badge bonus gift. I had mocked up a few badges myself in Penpot. Scroll down to see them in the footer. Declan’s badge is first and my attempts follow. I haven’t quite nailed the pixel look yet.

My new menu is built using a dialog element with invoker commands and view transitions for a JavaScript-free experience. Modern web standards are so cool when they work together! I do have a tiny JS event listener to polyfill old browsers. The pixellated footer gradient is done with a WebGL shader. I had big plans but after several hours and too many Stack Overflow tabs, I moved on to more important things. This may turn into something later but I doubt I’ll progress trying to learn WebGL. Past features like my Wasm static search and speech synthesis remain on the relevant blog pages. I suspect I’ll be finding random one-off features I forgot to restyle.
My homepage ends with another strong message. The internet is dominated by US-based big tech. Before backing powers across the Atlantic, consider UK and EU alternatives. The web begins at home. I remain open to working with clients and collaborators worldwide. I use some ‘big tech’ but I’m making an effort to push for European alternatives. US-based tech does not automatically mean “bad” but the absolute worst is certainly thriving there! Yeah I’m English, far from the smartest kind of European, but I try my best. I’ve been fortunate to find work despite the AI threat. I’m optimistic and I refuse to back down from calling out slop for what it is! I strongly believe others still care about a job well done. I very much doubt the touted “10x productivity” is resulting in 10x profits. The way I see it, I’m cheaper, better, and more ethical than subsidised slop. Let me know on the socials if you love or hate my new design :) P.S. I published this Sunday because Heisenbugs only appear in production. Thanks for reading! Follow me on Mastodon and Bluesky . Subscribe to my Blog and Notes or Combined feeds.

David Bushell 2 months ago

Mozilla Slopaganda

Mozilla published a new State of Mozilla . It’s absolute slopaganda . A mess of trippy visuals and corpo-speak that’s been through the slop wringer too many times. I read it so you don’t have to.

⚠️ Warning: the State of Mozilla website has flashing graphics and janky animations.

The website opens with a faux terminal console that logs some nonsense that’s too fast to read. It includes a weird line about “synth-mushroom pop”. This is not the only bizarre reference to magic mushrooms as we’ll see later. What other connection am I missing here? If you’re lucky, next you’ll be treated to a fake CAPTCHA including words “HU MAN”, “INT ERNET”, and “FU TUR E”. The leading headline is:

Doing for AI what we did for the web

Which I take to mean, lighting a fire under Microsoft and then fading into obscurity. They directly call out Microsoft later. Back then Mozilla had a competitive web product: Firefox. Mozilla has no AI product today to do anything. Mozilla’s opening sales pitch — which is certified slop — includes this bullet point:

We Choose a Different Economic Model: A ‘double bottom line’ — advancing our mission and shaping markets — continues to guide everything we do.

If I may direct you to Mozilla’s financial report PDF:

Approximately 86% and 85% of Mozilla’s revenues from customers with contracts were derived from one customer for the years ended December 31, 2024 and 2023, respectively.

2024 Audited Financial Statement (page 19)

That one customer is of course: Google. I wonder why Mozilla are painting Microsoft as the enemy? Mozilla’s idea of a double bottom line is explained further on the “LEDGER” page † . Although extracting meaning from the words is beyond me.

† I can’t link to anything because of the stupid splash screen and CAPTCHAs.

The stakes section is wild. We’re asked to choose one of two futures. Future A is choosing Microsoft. This is represented by hot people making out and a creepy robot driver.
Twenty five years ago, when Microsoft controlled 95% of browsers, they defined how people accessed information, who could build what, and on what terms.

Mozilla did topple Microsoft’s Internet Explorer reign. Then both lost the browser war to Google’s Chrome. But no mention of Google; can’t bite the hand that feeds you. Future B is choosing Mozilla. And more shrooms, apparently.

Imagine it’s 2030. You wake up and your digital world feels different — familiar, but freer.

This future is AI fan fiction because Mozilla doesn’t actually have any AI products. So which future do you choose, dogging — don’t google it, or shrooms? Seriously, what is with the mushroom references? Is this part of the edgy “fellow kids” rebranding? The page is also adorned with a marquee repeating: “DO NOT ACCEPT DEFAULT SETTINGS”. This must be a subtle reference to Firefox’s on-by-default telemetry, “privacy-preserving” tracking, Google search, and new AI integration? This section is slop.

If the future of AI and the web are still up for grabs, the tools we build - our products, programs and investments - are our most powerful levers in shaping how things work out.

So surely Mozilla has an AI product to showcase now, right? Wrong. They have Firefox and Thunderbird. It’s nice they remembered about Thunderbird though that wasn’t a given. The rest of the page is vague vacuous promises about AI investment. Who is this written for, the ever expanding board and c-suite?

Mozilla was born to challenge a tech monopoly […]

— just not papa Google’s —

[…] and we succeeded not by becoming a significant player in the browser market […]

— again leaning on history. Firefox has not been “significant” in years. Mozilla leadership has watched Firefox market share plummet and been helpless. Even if I was an AI lover, Mozilla has nothing to compete in the AI space. They just say things. With mushrooms.

We’re focusing our ~$1.4B in reserves

Mozilla claim $1.4 billion in reserves (and no debt).
They’re funded by over half a billy annually from Google.

👏 Stop 👏 donating 👏 to 👏 Mozilla 👏

Mozilla is the same Big Tech they pretend to rebel against. Donate your money to a worthy open source independent project before it’s drowned by slop. State of Mozilla ends by covering the same empty AI-infused promises. Mozilla talk about the past because that’s all they have. Mozilla fantasise about the future because they have nothing in the present. My favourite part:

Also, launch “AI controls” into Firefox, giving people a clear way to turn AI off entirely - current and future AI features.

One of the few clear deliverable goals is an AI “kill switch”. Mozilla aren’t exactly sure what their AI future will be but at least you can say no . That’s something! As fun as it is to rib on Mozilla, the web needs Firefox. I feel for the Firefox developers who actually care. State of Mozilla will inspire no one. The sloppy prose is borderline unreadable. The presentation is designed to stop you reading.

THE FUTURE IS EXPERIMENTAL SYNTH-MUSHROOM POP

The future is Microsoft. Thanks for reading! Follow me on Mastodon and Bluesky . Subscribe to my Blog and Notes or Combined feeds.
