Posts in CSS (20 found)
Den Odell 3 days ago

Escape Velocity: Break Free from Framework Gravity

Frameworks were supposed to free us from the messy parts of the web. For a while they did, until their gravity started drawing everything else into orbit. Every framework brought with it real progress. React, Vue, Angular, Svelte, and others all gave structure, composability, and predictability to frontend work. But now, after a decade of React dominance, something else has happened. We haven’t just built apps with React, we’ve built an entire ecosystem around it—hiring pipelines, design systems, even companies—all bound to its way of thinking. The problem isn’t React itself, nor any other framework for that matter. The problem is the inertia that sets in once any framework becomes infrastructure. By that point, it’s “too important to fail,” and everything nearby turns out to be just fragile enough to prove it.

React is no longer just a library. It’s a full ecosystem that defines how frontend developers are allowed to think. Its success has created its own kind of gravity, and the more we’ve built within it, the harder it’s become to break free. Teams standardize on it because it’s safe: it’s been proven to work at massive scale, the talent pool is large, and the tooling is mature. That’s a rational choice, but it also means React exerts institutional gravity. Moving off it stops being an engineering decision and becomes an organizational risk instead. Solutions to problems tend to be found within its orbit, because stepping outside it feels like drifting into deep space.

We saw this cycle with jQuery in the past, and we’re seeing it again now with React. We’ll see it with whatever comes next. Success breeds standardization, standardization breeds inertia, and inertia convinces us that progress can wait. It’s the pattern itself that’s the problem, not any single framework. But right now, React sits at the center of this dynamic, and the stakes are far higher than they ever were with jQuery. Entire product lines, architectural decisions, and career paths now depend on React-shaped assumptions. We’ve even started defining developers by their framework: many job listings ask for “React developers” instead of frontend engineers. Even AI coding agents default to React when asked to start a new frontend project, unless deliberately steered elsewhere. Perhaps the only thing harder than building on a framework is admitting you might need to build without one.

React’s evolution captures this tension perfectly. Recent milestones include the creation of the React Foundation, the React Compiler reaching v1.0, and new additions in React 19.2 such as the `<Activity />` component and Fragment Refs. These updates represent tangible improvements, especially the compiler, which brings automatic memoization at build time, eliminating the need for manual `useMemo` and `useCallback` optimization. Production deployments show real performance wins using it: apps in the Meta Quest Store saw up to 2.5x faster interactions as a direct result. This kind of automatic optimization is genuinely valuable work that pushes the entire ecosystem forward.

But here’s the thing: the web platform has been quietly heading in the same direction for years, building many of the same capabilities frameworks have been racing to add. Browsers now ship View Transitions, Container Queries, and smarter scheduling primitives. The platform keeps evolving at a fair pace, but most teams won’t touch these capabilities until React officially wraps them in a hook or they show up in Next.js docs.
Innovation keeps happening right across the ecosystem, but for many it only becomes “real” once React validates the approach. Which is fine, assuming you enjoy waiting for permission to use the platform you’re already building on.

The React Foundation represents an important milestone for governance and sustainability. This new foundation is a part of the Linux Foundation, and founding members include Meta, Vercel, Microsoft, Amazon, Expo, Callstack, and Software Mansion. This is genuinely good for React’s long-term health, providing better governance and removing the risk of being owned by a single company. It ensures React can outlive any one organization’s priorities. But it doesn’t fundamentally change the development dynamic of the framework. Yet. The engineers who actually build React still work at companies like Meta and Vercel. The research still happens at that scale, driven by those performance needs. The roadmap still reflects the priorities of the companies that fund full-time development.

And to be fair, React operates at a scale most frameworks will never encounter. Meta serves billions of users through frontends that run on constrained mobile devices around the world, so it needs performance at a level that justifies dedicated research teams. The innovations they produce, including compiler-driven optimization, concurrent rendering, and increasingly fine-grained performance tooling, solve real problems that exist only at that kind of massive scale. But those priorities aren’t necessarily your priorities, and that’s the tension. React’s innovations are shaped by the problems faced by companies running apps at billions-of-users scale, not necessarily the problems faced by teams building for thousands or millions.

React’s internal research reveals the team’s awareness of current architectural limitations. Experimental projects like Forest explore signal-like lazy computation graphs; essentially fine-grained reactivity instead of React’s coarse re-render model. Another project, Fir, investigates incremental rendering techniques. These aren’t roadmap items; they’re just research prototypes happening inside Meta. They may never ship publicly. But they do reveal something important: React’s team knows the virtual DOM model has performance ceilings and they’re actively exploring what comes after it. This is good research, but it also illustrates the same dynamic at play again: that these explorations happen behind the walls of Big Tech, on timelines set by corporate priorities and resource availability.

Meanwhile, frameworks like Solid and Qwik have been shipping production-ready fine-grained reactivity for years. Svelte 5 shipped runes in 2024, bringing signals to mainstream adoption. The gap isn’t technical capability, but rather when the industry feels permission to adopt it. For many teams, that permission only comes once React validates the approach. This is true regardless of who governs the project or what else exists in the ecosystem.

I don’t want this critique to take away from what React has achieved over the past twelve years. React popularized declarative UIs and made component-based architecture mainstream, which was a huge deal in itself. It proved that developer experience matters as much as runtime performance and introduced the idea that UI could be a pure function of input props and state. That shift made complex interfaces far easier to reason about.
Later additions like hooks solved the earlier class component mess elegantly, and concurrent rendering opened new possibilities for truly responsive UIs. The React team’s research into compiler optimization, server components, and fine-grained rendering pushes the entire ecosystem forward. This is true even when other frameworks ship similar ideas first. There’s value in seeing how these patterns work at Meta’s scale.

The critique isn’t that React is bad, but that treating any single framework as infrastructure creates blind spots in how we think and build. When React becomes the lens through which we see the web, we stop noticing what the platform itself can already do, and we stop reaching for native solutions because we’re waiting for the framework-approved version to show up first. And crucially, switching to Solid, Svelte, or Vue wouldn’t eliminate this dynamic; it would only shift its center of gravity. Every framework creates its own orbit of tools, patterns, and dependencies. The goal isn’t to find the “right” framework, but to build applications resilient enough to survive migration to any framework, including those that haven’t been invented yet.

This inertia isn’t about laziness; it’s about logistics. Switching stacks is expensive and disruptive. Retraining developers, rebuilding component libraries, and retooling CI pipelines all take time and money, and the payoff is rarely immediate. It’s high risk, high cost, and hard to justify, so most companies stay put, and honestly, who can blame them? But while we stay put, the platform keeps moving. The browser can stream and hydrate progressively, animate transitions natively, and coordinate rendering work without a framework. Yet most development teams won’t touch those capabilities until they’re built in or officially blessed by the ecosystem. That isn’t an engineering limitation; it’s a cultural one. We’ve somehow made “works in all browsers” feel riskier than “works in our framework.” Better governance doesn’t solve this. The problem isn’t React’s organizational structure; it’s our relationship to it. Too many teams wait for React to package and approve platform capabilities before adopting them, even when those same features already exist in browsers today.

React 19.2’s `<Activity />` component captures this pattern perfectly. It serves as a boundary that hides UI while preserving component state and unmounting effects. When its mode is set to `hidden`, it pauses subscriptions, timers, and network requests while keeping form inputs and scroll positions intact. When revealed again by setting the mode back to `visible`, those effects remount cleanly. It’s a genuinely useful feature. Tabbed interfaces, modals, and progressive rendering all benefit from it, and the same idea extends to cases where you want to pre-render content in the background or preserve state as users navigate between views. It integrates smoothly with React’s lifecycle and `<Suspense>` boundaries, enabling selective hydration and smarter rendering strategies.

But it also draws an important line between formalization and innovation. The core concept isn’t new; it’s simply about pausing side effects while maintaining state. Similar behavior can already be built with visibility observers, effect cleanup, and careful state management patterns. The web platform even provides the primitives for it, from DOM state preservation to manual effect control. What `<Activity />` adds is a convenient, framework-sanctioned packaging of those primitives. Yet it also exposes how dependent our thinking has become on frameworks.
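For a concrete picture, here is a minimal sketch of the API under discussion; the `mode` prop follows the React 19.2 docs, while the tab components are hypothetical:

    import { Activity, useState } from 'react';

    function Tabs() {
      const [tab, setTab] = useState('home');
      return (
        <>
          <button onClick={() => setTab('search')}>Search</button>
          {/* Hidden tabs keep their state (inputs, scroll position),
              but their effects are unmounted until shown again */}
          <Activity mode={tab === 'home' ? 'visible' : 'hidden'}>
            <HomeTab />
          </Activity>
          <Activity mode={tab === 'search' ? 'visible' : 'hidden'}>
            <SearchTab />
          </Activity>
        </>
      );
    }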
We wait for React to formalize platform behaviors instead of reaching for them directly. This isn’t a criticism of `<Activity />` itself; it’s a well-designed API that solves a real problem. But it serves as a reminder that we’ve grown comfortable waiting for framework solutions to problems the platform already lets us solve. After orbiting React for so long, we’ve forgotten what it feels like to build without its pull.

The answer isn’t necessarily to abandon your framework, but to remember that it runs inside the web, not the other way around. I’ve written before about building the web in islands as one way to rediscover platform capabilities we already have. Even within React’s constraints, you can still think platform first; the list at the end of this post collects concrete practices. These aren’t anti-React practices, they’re portable practices that make your web app more resilient. They let you adopt new browser capabilities as soon as they ship, not months later when they’re wrapped in a hook. They make framework migration feasible rather than catastrophic. When you build this way, React becomes a rendering library that happens to be excellent at its job, not the foundation everything else has to depend on. A React app that respects the platform can outlast React itself. When you treat React as an implementation detail instead of an identity, your architecture becomes portable. When you embrace progressive enhancement and web semantics, your ideas survive the next framework wave.

The recent wave of changes, including the React Foundation, React Compiler v1.0, the `<Activity />` component, and internal research into alternative architectures, all represent genuine progress. The React team is doing thoughtful work, but these updates also serve as reminders of how tightly the industry has become coupled to a single ecosystem’s timeline. That timeline is still dictated by the engineering priorities of large corporations, and that remains true regardless of who governs the project. If your team’s evolution depends on a single framework’s roadmap, you are not steering your product; you are waiting for permission to move. That is true whether you are using React, Vue, Angular, or Svelte. The framework does not matter; the dependency does.

It is ironic that we spent years escaping jQuery’s gravity, only to end up caught in another orbit. React was once the radical idea that changed how we build for the web. Every successful framework reaches this point eventually, when it shifts from innovation to institution, from tool to assumption. jQuery did it, React did it, and something else will do it next. The React Foundation is a positive step for the project’s long-term sustainability, but the next real leap forward will not come from better governance. It will not come from React finally adopting signals either, and it will not come from any single framework “getting it right.” Progress will come from developers who remember that frameworks are implementation details, not identities.

Build for the platform first. Choose frameworks second. The web isn’t React’s, it isn’t Vue’s, and it isn’t Svelte’s. It belongs to no one. If we remember that, it will stay free to evolve at its own pace, drawing the best ideas from everywhere rather than from whichever framework happens to hold the cultural high ground. Frameworks are scaffolding, not the building. Escaping their gravity does not mean abandoning progress; it means finding enough momentum to keep moving. Reaching escape velocity, one project at a time.
- Use native forms and form submissions to a server, then enhance with client-side logic
- Prefer semantic HTML and ARIA before reaching for component libraries
- Try View Transitions directly with minimal React wrappers instead of waiting for an official API (see the sketch after this list)
- Use Web Components for self-contained widgets that could survive a framework migration
- Keep business logic framework-agnostic, in plain TypeScript modules rather than hooks, and aim to keep your hooks short by pulling logic from outside React
- Profile performance using browser DevTools first and React DevTools second
- Try native CSS features like Container Queries and scroll snap before adding JavaScript solutions
- Use built-in browser APIs instead of framework-specific alternatives wherever possible
- Experiment with the History API (`pushState()`, `replaceState()`) directly before reaching for React Router
- Structure code so routing, data fetching, and state management can be swapped out independently of React
- Test against real browser APIs and behaviors, not just framework abstractions
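As a sketch of the View Transitions item above, calling the browser API directly with only a thin wrapper (the function names and `nextPageHTML` are illustrative, not from the original post):

    // Progressively enhanced: falls back to an instant update where unsupported
    function navigateWithTransition(updateDOM) {
      if (!document.startViewTransition) {
        updateDOM();
        return;
      }
      document.startViewTransition(updateDOM);
    }

    // Usage: swap content and let the browser animate between old and new states
    navigateWithTransition(() => {
      document.querySelector('main').innerHTML = nextPageHTML;
    });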

Jim Nielsen 5 days ago

Browser APIs: The Web’s Free SaaS

Authentication on the web is a complicated problem. If you’re going to do it yourself, there’s a lot you have to take into consideration. But odds are, you’re building an app whose core offering has nothing to do with auth. You don’t care about auth. It’s an implementation detail. So rather than spend your precious time solving the problem of auth, you pay someone else to solve it. That’s the value of SaaS. What would be the point of paying for an authentication service, like WorkOS, then re-implementing auth on your own? They have dedicated teams working on that problem. It’s unlikely you’re going to do it better than them and still deliver on the product you’re building.

There’s a parallel here, I think, to building stuff in the browser. Browsers provide lots of features to help you deliver good websites fast to an incredibly broad and diverse audience. Browser makers have teams of people who, day-in and day-out, are spending lots of time developing and optimizing their offerings. So if you leverage what they offer you, that gives you an advantage because you don’t have to build it yourself. You could build it yourself. You could say “No thanks, I don’t want what you have. I’ll make my own.” But you don’t have to. And odds are, whatever you do build yourself is not likely to be as fast as the highly-optimized subsystems you can tie together in the browser.

And the best part? Unlike SaaS, you don’t have to pay for what the browser offers you. And because you’re not paying, it can’t be turned off if you stop paying. Each of these built-in APIs is, in effect, a free service that’ll work forever. That’s a great deal. Are you taking advantage?

Reply via: Email · Mastodon · Bluesky
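One illustrative example of the kind of free, built-in capability the post is gesturing at (the post doesn't name a specific API; `Intl.RelativeTimeFormat` here is just an example): formatting relative times, something teams often hand-roll or pull in a library for, ships free in every modern browser:

    // Built into the browser: no library, no subscription, no API key
    const rtf = new Intl.RelativeTimeFormat('en', { numeric: 'auto' });

    console.log(rtf.format(-1, 'day'));  // "yesterday"
    console.log(rtf.format(3, 'hour'));  // "in 3 hours"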

David Bushell 1 week ago

Better Alt Text

It’s been a rare week where I was able to (mostly) ignore client comms and do whatever I wanted! That means perusing my “todo” list, scoffing at past me for believing I’d ever do half of it, and plucking out a gem. One of those gems was a link to “Developing an alt text button for images on [James’ Coffee Blog]”. I like this feature. I want it on my blog!

My blog wraps images and videos in a `<figure>` element with an optional caption. Reduced markup example below. How to add visible alt text? I decided to use declarative popover. I used popover for my glossary web component but that implementation required JavaScript. This new feature can be done script-free! Below is an example of the end result. Click the “ALT” button to reveal the text popover (unless you’re in RSS land, in which case visit the example, and if you’re not in Chrome, see below).

To implement this I appended an extra `<button>` and `<div>` element with the declarative popover attributes after the image. I generate unique popover and anchor names in my build script. I can’t define them as inline custom properties because of my locked down content security policy. Instead I use the `attr()` function in CSS. Anchor positioning allows me to place these elements over the image. I could have used absolute positioning inside the `<figure>` if not for the caption extending the parent block. Sadly using anchor positioning means only one thing… My visible alt text feature is Chrome-only! I’ll pray for Interop 2026 salvation and call it progressive enhancement for now.

To position the popover I first tried `position-area` but that sits the popover around/outside the image. Instead I need to sit inside/above the image. The `anchor()` function allows that. The button is positioned in a similar way. Aside from being Chrome-only I think this is a cool feature. Last time I tried to use anchor positioning I almost cried in frustration… so this was a success! It will force me to write better alt text. How do I write alt text good? Advice is welcome.

Thanks for reading! Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.
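A rough sketch of this technique, with hypothetical names (the actual post generates unique anchor and popover names per image in its build script):

    /* The image is the anchor */
    figure img {
      anchor-name: --fig;
    }

    /* The ALT button sits over the image's bottom-left corner */
    figure button {
      position: absolute;
      position-anchor: --fig;
      bottom: anchor(bottom);
      left: anchor(left);
    }

    /* The popover is laid over the image itself rather than around it */
    figure [popover] {
      position: fixed;   /* popovers render in the top layer */
      inset: auto;       /* clear the UA's default centering styles */
      margin: 0;
      position-anchor: --fig;
      top: anchor(top);
      left: anchor(left);
      width: anchor-size(width);
    }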

The Jolly Teapot 1 week ago

October 2025 blend of links

Some links don’t call for a full blog post, but sometimes I still want to share some of the good stuff I encounter on the web. Why it is so hard to tax the super-rich ・Very interesting and informative video, to the point that I wish it were a full series. Who knew I would one day be so fascinated by the topic of… checks notes … economics? jsfree.org ・Yes, a thousand yes to this collection of sites that work without needing any JavaScript. I don’t know if it’s the season or what, but these days I’m blocking JS every chance that I get. I even use DuckDuckGo again as a search engine because other search engines often require JavaScript to work. Elon Musk’s Grokipedia contains copied Wikipedia pages ・Just to be safe, I’ve immediately added a redirection on StopTheMadness so that the grokipedia domain is replaced by wikipedia.com (even if Wikipedia has its problems, especially in French). Also, what’s up with this shitty name? Why not Grokpedia? I would still not care, but at least it wouldn’t sound as silly. POP Phone ・I don’t know for whom yet, but I will definitely put one of these under the Christmas tree this winter. (Via Kottke) PolyCapture ・The app nerd in me is looking at these screenshots like a kid looks at a miniature train. (Via Daring Fireball) Bari Weiss And The Tyranny Of False Balance ・“ You don’t need to close newspapers when you can convince editors that ‘balance’ means giving equal weight to demonstrable lies and documented facts. ” light-dark() ・Neat and elegant new CSS function that made me bring back the dark mode on this site, just to have an excuse to use it in the CSS. Eunoia: Words that Don't Translate ・Another link to add to your bookmark folder named “conversation starters.” (Via Dense Discovery) Why Taste Matters More ・“ Taste gives you vision. It’s the lens through which you decide what matters, and just as importantly, what doesn’t. Without taste, design drifts into decoration or efficiency for efficiency’s sake. Devoid of feeling.” Tiga – Bugatti ・I recently realised that this truly fantastic song is already more than 10 years old, and I still can’t wrap my head around this fact. The video, just like the song, hasn’t aged one bit; I had forgotten how creative and fun it is. More “Blend of links” posts here Blend of links archive
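For context, a minimal sketch of what `light-dark()` enables (illustrative, not code from the site):

    :root {
      color-scheme: light dark; /* required for light-dark() to follow the user's preference */
    }

    body {
      color: light-dark(#222, #eee);             /* first value in light mode, second in dark */
      background-color: light-dark(#fff, #111);
    }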

Josh Comeau 1 week ago

Springs and Bounces in Native CSS

The “linear()” timing function is a game-changer; it allows us to model physics-based motion right in vanilla CSS! That said, there are some limitations and quirks to be aware of. I’ve been experimenting with this API for a while now, and in this post, I’ll share all of the tips and tricks I’ve learned for using it effectively. ✨
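A small illustrative example of the idea (not from the post itself): approximating a bounce by feeding `linear()` a series of sampled easing points:

    .ball {
      transition: transform 600ms;
      /* Each entry is "progress [percentage-of-duration]"; the browser
         interpolates linearly between points, approximating a bounce */
      transition-timing-function: linear(
        0, 0.3 10%, 0.6 20%, 1 40%, 0.8 55%, 1 70%, 0.95 85%, 1
      );
    }

    .ball.dropped {
      transform: translateY(200px);
    }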

Thomasorus 2 weeks ago

List of JavaScript frameworks

The way the web was initially envisioned was through separation of concerns: HTML is for structure, CSS for styles and JavaScript for interactivity. For a long time the server was sending HTML to the browser/client through templates populated with data from the server. Then the page downloaded CSS and JavaScript. Those two then "attached" themselves to the structure and acted on it through HTML attributes, and could then change its looks, call for more data, create interactivity. Each time a visitor clicked a link, this whole process would start again, downloading the new page and its dependencies and rendering it in the browser. The approaches below build on or depart from that model:

- Enhanced multi-page applications: using the history API and Ajax requests to fetch the HTML of the next page and replace the current body with it. Basically emulating the look and feel of single-page applications in multi-page applications (see the sketch after this list).
- HTML-attribute-driven libraries: event handling/reactivity/DOM manipulation via HTML attributes. Development happens client side, without writing JavaScript.
- Server-rendered partials: static HTML gets updated via web sockets or Ajax calls on the fly with small snippets rendered on the server. Development happens server side, without writing JavaScript. Most of the time a plugin or feature of an existing server-side framework.
- Single-page applications (SPAs): a client-side, JavaScript component-based (mixing HTML, CSS and JavaScript in a single file) framework or library gets data through API calls (REST or GraphQL) and generates HTML blocks on the fly directly in the browser. Long initial load time, then fast page transitions, but a lot of features normally managed by the browser or the server need to be re-implemented. Either the framework or library is loaded alongside the SPA code, or it compiles to the SPA and disappears.
- Server-side rendering and static generation: a single-page application library gets extended to render or generate static "dry" pages as HTML on the server to avoid the initial long loading time, detrimental to SEO. Often comes with opinionated choices like routing, file structure, compilation improvements. After the initial page load, the single-page application code is loaded and either attaches itself to the whole page to make it interactive, effectively downloading and rendering the website twice ("hydration"), or attaches itself only on certain elements that need interactivity, partially avoiding the double download and rendering ("partial hydration", "islands architecture").
- Resumability: a server-side component-based framework or library gets data through API calls (REST or GraphQL) and serves HTML that gets its interactivity without hydration, for example by loading the interactive code needed as an interaction happens.
- Fullstack JavaScript: using existing frontend and backend stacks in an opinionated way to offer a fullstack solution in full JavaScript.
- JSON-hydrated clients: a client-side, component-based application (Vue, React, etc.) gets its state from pre-rendered JSON.

Frameworks and libraries mentioned: Stimulus JS; Livewire (PHP); Stimulus Reflex (Ruby); Phoenix LiveView (Elixir); Blazor (C#); Unicorn (Python); Angular with AOT; Next (React); Nuxt (Vue); SvelteKit (Svelte); Astro (React, Vue, Svelte, Solid, etc.); Solid Start (Solid)
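A minimal sketch of the first approach above, emulating SPA navigation in a multi-page application (illustrative code, not from the post):

    // Intercept link clicks, fetch the next page, and swap the body in place
    document.addEventListener('click', async (event) => {
      const link = event.target.closest('a[href^="/"]');
      if (!link) return;
      event.preventDefault();

      const response = await fetch(link.href);
      const html = await response.text();
      const next = new DOMParser().parseFromString(html, 'text/html');

      document.body.replaceWith(next.body);
      history.pushState({}, '', link.href); // keep the URL bar and back button in sync
    });

    // Handle back/forward by reloading (a real implementation would cache pages)
    window.addEventListener('popstate', () => location.reload());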

Thomasorus 2 weeks ago

Cross platform web app solutions

A discovery list of technical solutions to produce a desktop and/or mobile app using web technologies. Apparently (all this is quite new to me) most solutions embed NodeJS inside them, making executing JavaScript the easiest part of the problem. Real trouble comes when talking about the UI, since each OS has different ways of rendering UI. Several solutions exist to make the programs multi-platform.

Those solutions package webkit (chromium) and nodejs inside the app and make it work as a fake app on the desktop. Works well but comes with a lot of bloat and heavy RAM consumption. Overall both are in the same family but there are differences between NW.js and Electron. Since bringing chromium makes the program very big, there are solutions to bridge between web apps and existing, lighter UI frameworks. Most of the time then, the framework is used to create a bridge between HTML/CSS and the existing framework's components, modules or UI API. Since all OSes can render a webview, it's possible to ask for one at the OS level by providing a bridge. The problem with this solution might be that if the OS has an outdated webview engine, all modern HTML/CSS/JS solutions might not work? Except for Neutralino, most projects of this type tend to use webview, a C/C++/Go library. Several binding libraries for other languages already exist.

Electron is made by GitHub. NW.js is made by Intel. NodeGui provides a bridge between HTML/CSS and Qt. Deno webview. Sciter is a binary that can be used to create web apps, but under the hood it's using a superset of JavaScript called TIScript. Sciter JS seems to be Sciter but with common JavaScript, using the QuickJS engine.

Jim Nielsen 2 weeks ago

Write Code That Runs in the Browser, or Write Code the Browser Runs

I’ve been thinking about a note from Alex Russell where he says: any time you're running JS on the main thread, you're at risk of being left behind by progress. The zen of web development is to spend a little time in your own code, and instead to glue the big C++/Rust subsystems together, then get out of the bloody way.

In his thread on Bluesky, Alex continues: How do we do this? Using the declarative systems that connect to those big piles of C++/Rust: CSS for the compositor (including scrolling & animations), HTML parser to build DOM, and native elements for various media, dishing off to the high-level systems in ways that don't call back into your JS.

I keep thinking about this difference:

- I need to write code that does X.
- I need to write code that calls a browser API to do X.

There’s a big difference between A) making suggestions for the browser, and B) being its micromanager. Hence the title: you can write code that will run in the browser, or you can write code that calls the browser to run.

So what are the browser ‘subsystems’ I can glue together? What are some examples of things I can ask the browser to do rather than doing them myself? A few examples come to mind:

- View Transitions API (instead of JS DOM diffing and manual animation).
- CSS transitions or animations (GPU-accelerated) vs. manual JS with style updates.
- Smooth scrolling (`scroll-behavior`) in CSS vs. JS scroll logic.
- CSS grid or flexbox vs. JS layout engines (e.g., Masonry clones).
- `<video>` and `<audio>` elements with native decoding and hardware acceleration vs. JS media players.
- `<picture>` or `<img>` with `srcset` for responsive images vs. manual image swapping logic in JS.
- Built-in form state (`FormData`) and validation (`required`, `pattern`, etc.) vs. JS-based state, tracking, and validation logic.
- Native elements like `<dialog>`, `<details>`, `<select>`, etc., which provide built-in keyboard and accessibility behavior vs. custom ARIA-heavy components.

The trick is to let go of your need for control. Say to yourself, “If I don’t micromanage the browser on this task and am willing to let go of control, in return it will choose how to do this itself with lower-level APIs that are more performant than anything I can write.” For example, here are some approaches to animating transitions on the web where each step moves more responsibility from your JavaScript code on the main thread to the browser’s rendering engine (see the comparison at the end of this post). It’s a scale from “I want the most control, and in exchange I’ll worry about performance” to “I don’t need control, and in exchange you’ll worry about performance.”

I don’t know about you, but I’d much rather hand over performance, accessibility, localization, and a whole host of issues to the experts who build browsers. Building on the web is a set of choices: Anytime you choose to do something yourself, you’re choosing to make a trade-off. Often that increase in control comes at the cost of a degradation in performance. Why do it yourself? Often it’s because you want a specific amount of control over the experience you’re creating. That may be perfectly ok! But it should be a deliberate choice, not because you didn’t consider (or know) the browser offers you an alternative. Maybe it does!

So instead of asking yourself, “How can I write code that does what I want?” Consider asking yourself, “Can I write code that ties together things the browser already does to accomplish what I want (or close enough to it)?” Building this way will likely improve your performance dramatically — not to mention decrease your maintenance burden dramatically!

Reply via: Email · Mastodon · Bluesky
The approaches, from most manual to most declarative:

- Manual JS animation: JS timers, DOM manipulation, browser repaints when it can. Dropped frames.
- requestAnimationFrame: Syncs to browser repaint cycle. Smooth, but you gotta handle a lot yourself (diffing, cleanup, etc.)
- View Transitions in JS: JS triggers, browser snapshots and animates. Native performance, but requires custom choreography on your part.
- View Transitions in CSS: Declare what you expect broadly, then let the browser take over.

The scale: Do it yourself → Somewhere in between → Let the browser do it.
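To make the last row concrete, a small sketch (illustrative, not from the post) of declaring cross-document view transitions in CSS alone:

    /* Opt in to cross-document view transitions for same-origin navigations */
    @view-transition {
      navigation: auto;
    }

    /* Optionally name an element so the browser morphs it between pages */
    .site-header {
      view-transition-name: site-header;
    }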


When it comes to MCPs, everything we know about API design is wrong

TL;DR: I built a lightweight Chrome MCP. Scroll to the end to learn how to install it. Read the whole post to learn a little bit about the Zen of MCP design.

Claude Code has built in tools to fetch web pages and to search the web – they actually run through Anthropic's servers, if I recall correctly. They do clever things to carefully manage context and to return information in a format that's easy for Claude to digest. These tools work really well. Right up to the point where they completely fall apart.

An uncoached testimonial from the only customer who matters.

Last week, I somehow got it into my head that I should update my custom blogging client to use Apple's new Liquid Glass look and feel. The first issue I ran into was that Claude was absolutely sure that macOS 26 wasn't out yet. (Amusingly, when asked to review a draft of this post, one of the things it flagged was: 'Inconsistent model naming - You refer to "macOS 26" but I believe you mean "macOS 15" (Sequoia). macOS 26 would be way in the future.') Claude was, however, happy to speculate about what a "Liquid Glass" UI might look like. Once I reminded the model that it had memory issues and Apple had indeed released the new version of their operating system, it was ready to get to work. I told it to go read Apple's Human Interface Guidelines and make a plan. It turns out that Apple no longer offers a downloadable version of the HIG. And the online version requires JavaScript.

After a bit of flailing, Claude reached for the industry-standard Playwright MCP from Microsoft. The Playwright MCP is a collection of 21 tools covering all aspects of driving a browser and debugging webapps, from `browser_navigate` to `browser_click` to `browser_take_screenshot`. Just having the Playwright MCP available costs 13,678 tokens (7% of the whole context window) in every single session, even if you never use it. (Yes, the Google Chrome team has their own Chrome MCP. Its API surface is even bigger.) And once you do start using it, things get worse. Some of its tools return the entire DOM of the webpage you're working with. This means that simple requests fail because they return more tokens than Claude can handle in a response. It's frustrating to see a coding agent trying over and over to use a tool the way it's supposed to and having that tool just fail to return useful data.

After hearing me complain about this a few times, Dan Grigsby commented that he'd had success just asking Claude to teach itself a skill: using the raw Dev Tools remote control protocol to drive Chrome. This seemed like a neat trick, so I asked my Claude to take a swing at it. Claude was only too happy to try to speak raw JSON-RPC to Chrome on port 9292. It...just worked. But it was also very clunky and wasteful feeling. Claude was writing raw JSON-RPC command lines for each and every interaction. It was very verbose and required the LLM to get a whole bunch of details right on every single command invocation. It was time to make a proper Skill.

After thinking about it for a moment, I asked Claude to write a little zero-dependency command-line tool that it could run with the Bash tool to control Chrome, as well as a new SKILL.md file explaining how to use that script. The script encapsulated the complexity and made Chrome easily scriptable from the command line. The skill sets up the basics of web browsing with its tools and uses progressive disclosure to tell Claude how to get more information, but only when it has a need to know. Detailed examples of how to use the tool, for instance, are only pulled in on demand.
Claude didn't always reach for the skill, so it wasn't aware of its new command-line tool, but once I pointed it in the right direction, it worked surprisingly well. This setup was incredibly token efficient – nothing in the context window at startup other than the skill's entry in the system prompt. What was a little frustrating for me was that any time Claude wanted to do anything with the browser, it had to run a custom Bash command that I had to approve. Every click. Every navigation. Every JavaScript expression. It got old really, really fast.

There's no real way to fix that without creating a custom MCP. But that would put us right back where we were with the official Playwright MCP, right? Nearly two dozen tools and 13k tokens spilled on the floor every time we started a session. Even trimming things down to only the dozen most important commands is still a bunch of tools, most of which Claude won't use in a given session.

If you've ever done API design, you probably know how important it is to name your methods well. You know that every method should do one thing and only one thing. You know that you really need to type (and validate) all your parameters to make sure your callers can tell what they're supposed to be passing in and to make bad method calls fail as soon as possible. It would be absolutely unhinged to have a single method that took a parameter that was itself a method dispatcher, plus a couple of loosely-typed grab-bag parameters. You'd have to be crazy to think that it's acceptable API design to leave an optional, untyped field with only a vague free-text description. And yet. That is exactly how I designed it. And it's just great. (A sketch of this shape appears at the end of this post.)

The high-level tool description is only a few sentences long. At session startup, the whole MCP config weighs in at just 947 tokens. I'm pretty sure I can shave at least 30-40 more. It's optimized to make Claude's life as easy as possible. Rather than having a method to start the browser, the MCP...just does it when it needs to. Same with opening a new tab if there wasn't one waiting. The tool description tells Claude what to do and where to read up when it needs more help. At least so far, it works just great for me.

One of the mistakes I made while developing the MCP was to instruct Claude to cut down the API surface by only accepting CSS selectors, rather than accepting CSS or XPath. It seemed natural to me that a smaller, simpler API would be easier for Claude to work with and reason about. Right up until I saw the MCP tool description containing multiple admonitions warning the model that only CSS selectors were accepted. The whole thing just...worked better when I let the selector fields accept either CSS or XPath.

Another thing that Claude got not-quite-right when it first implemented the MCP was that it included detailed human-readable text for all the method parameters. Because LLMs that are using MCPs can see both the description and the actual JSON schema, you don't need to repeat things like lists of values for an enum or type validations. One trick you can use is to ask your agent to tell you exactly what it can see about how to use an API.

One of the weirdest realizations I had while building it is this: I have no doubt that there are a dozen similar tools out there, but it was literally faster and easier to build the tool that I thought should exist than to test out a dozen tools to see if any of them work the way I think they should.

Over the last couple of decades, the common wisdom has become that Postel's Law (aka the robustness principle) is dated and wrong and that APIs should be rigid and rigorous.
That's the wrong choice when you're designing for use by LLMs. This might be a hard lesson to hear, but tools you build for LLMs are going to work much, much better if you think of your end-user as a "person" rather than a computer. Build your tools like they're a set of scripts you're handing to that undertrained kid who just got hired in the NOC. They are going to page you at 2AM when they can't figure out what's going on or when they misuse the tools in a way they can't unwind.

Names and method descriptions matter far more than they ever have before. Automatic recovery is hugely important. Designing for error recovery rather than failing fast will make the whole system more reliable and less expensive to operate. When errors are unavoidable, your error messages should tell the user how to fix or work around the problem in plain English. If you can't give the user exactly what they asked for, but you can give them a partial answer or related information, do that. Claude absolutely does not care about the architectural purity of your API. It just wants to help you get work done with the limited resources at its disposal.

This new MCP and skill for Claude Code is called superpowers-chrome. If you're already using Superpowers, you can just type /plugin, navigate to 'Install plugins', pick 'superpowers-marketplace' and then you should see superpowers-chrome. I'd love to hear from you if you find it helpful. I'd also love patches and pull requests.
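For illustration, here is a hypothetical sketch of what that kind of deliberately loose, single-dispatcher tool definition might look like (names and fields invented; this is not the actual superpowers-chrome schema):

    // One tool, one dispatcher parameter, deliberately loose typing
    const chromeTool = {
      name: 'chrome',
      description:
        'Drive a Chrome browser. Pass a command plus whatever arguments it needs. ' +
        'If a command fails, the error message explains how to recover.',
      inputSchema: {
        type: 'object',
        properties: {
          command: {
            type: 'string',
            description: 'What to do, e.g. "navigate", "click", "eval", "screenshot"',
          },
          selector: {
            type: 'string',
            description: 'CSS selector or XPath for the target element, if any',
          },
          value: {
            // Intentionally untyped: a URL, a JS expression, text to type, etc.
            description: 'Extra input the command needs, if any',
          },
        },
        required: ['command'],
      },
    };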

Manuel Moreale 3 weeks ago

Alice

This week on the People and Blogs series we have an interview with Alice, whose blog can be found at thewallflowerdigest.co.uk . Tired of RSS? Read this in your browser or sign up for the newsletter . The People and Blogs series is supported by Winnie Lim and the other 122 members of my "One a Month" club. If you enjoy P&B, consider becoming one for as little as 1 dollar a month.

I'm Alice, I'm currently 37, I'm from the East Midlands in the UK, and have lived in the region all my life. I live with my husband (whom I married in June) and our two cats. They are the best cats. At university, I studied English Literature because I never had any idea what I wanted to do for a career! I really enjoyed my time at university. Looking back, it was such a luxury to have all the time dedicated to reading books and thinking deeply about them (even if I was always too shy to contribute much in seminars!). I can't say an English Lit. degree has ever been beneficial in a practical sense, but I'm happy that I've started to dust off some of the cobwebs on it with my book blog!

My work and my blog are separate, but I think the fact that it exists at all is a result of the way my career went, or rather didn't go! I got a Master's degree in Information and Library Management, but failed to ever get a proper professional job. Plan A was University Librarian, but I didn't get the graduate trainee placement I needed, and, with that, I was forever locked out of university libraries. I never saw a job posting that didn't require "at least 5 years of experience in an equivalent role", and social anxiety hampered the development of networking skills. I was a library assistant at a university for a while, where a good portion of my colleagues were in the same boat as me! Plan B was a School Librarian, purely because it was the only job I got offered. I was ill-suited to it, never really enjoyed it, and the school I was in had little interest in supporting the Library or developing a reading culture. I did that for about 4 years, the whole time trying to come up with an alternative plan. Eventually, Plan C presented itself, and I ended up in a little niche of library management systems, where I worked on data migrations for special libraries, and eventually moved into archives and museums. This is a job that really suits me. It turns out my true love all along was actually databases, information retrieval and the challenge of solving all the puzzles that involved! If I could go back in time to 18-21 years old, knowing myself as I do now, I would make different decisions! But, for now, I am happy where I ended up, and I'm still making a little contribution to the cultural sector!

The Wallflower Digest was born in 2022 because my previous job had stopped offering me stimulating challenges, and I was feeling overlooked, bored and trapped by a lack of opportunities! My self-esteem was taking a real hit, and I just needed something to give me a goal and focus. I have had blogs in the past when bored at work! My library assistant job in my twenties was in a tiny, quiet campus library that involved some lone working evening shifts where there would be nothing to do but sit on the enquiries desk for hours! That was how my first blog started; it was mostly a TV blog called Between Screens. That was hosted by WordPress.com (I did have custom domains, though!) and is now long deleted. I used to write recaps and reviews of my favourite TV shows, movies and video games.
This was mostly Made in Chelsea, Game of Thrones, Veronica Mars and Mass Effect! I also played around briefly with a fiction blog and a sewing blog, but those were short-lived. This time around, I wanted my blog to be somewhere to exercise my writing skills and have a chance to play around with CSS and maybe other website bits if I wanted. When I picked the name, I wasn't sure what the blog was going to be, but I think I managed to nail it. I wanted something that felt like me. I've always been very shy, but I was painfully so as a child, and someone (probably a teacher) referred to me as a 'wallflower', and that term got stuck in my young brain. I don't know if the meaning will translate for those who aren't native English speakers, so as a definition, a "wallflower" can mean someone with an introverted personality type (or social anxiety) who will usually distance themselves from the crowd and actively avoid being in the limelight. Plus, I like flowers! I recently planted some wallflowers (Erysimum) in my garden! And then a ‘digest’ is a compilation or summary of information, and my blog is a mess of different topics. I share as I digest the things I read, learn and experience in my life.

When I got started, I spun my wheels for a bit in the mud of terrible advice for new bloggers. You know, this strange idea that a blog has to make money, and therefore has to solve problems for an audience! This is why some of my oldest posts have a recognisable Content formatting of SEO friendly headings and keywords! But I eventually realised the fun and mental stimulation I needed came from just doing whatever I wanted, and that an Audience wasn't important to me (actually, I fear that!)! And, more importantly, the blogs I was finding that I enjoyed the most were messy little personal blogs where people shared snippets of their lives. These days, I remind myself that I can do what I want. I see it now as a loosely defined project to help me distil the things that resonate, and help me to understand myself a little better. I share whatever I want to, which currently is book reviews, updates on my life, occasionally progress with my garden (though I've been too busy this year!), and my embroidery or other craft projects. Lately - trying to be less of a wallflower - I've been taking part in more blogging community linkups and tag memes, which have been a lot of fun to answer prompts, but also for "blog hopping" and seeing who else is out there! I'm hoping to branch out from the book-based ones to other topics and blog hop beyond the borders of the book community, or the more tech-focused folks I found on Mastodon. I am toying with the idea of creating my own if I can't find an existing one that feels right!

Life has been very busy recently, so the blog has really been ticking over on book reviews and joining in with the book blogger community's Top Ten Tuesday weekly link-up (currently hosted by ArtsyReaderGirl). It's hard to find the time for more "creative" posts at the moment, but I do try to put real effort into my TTT and try to find something to say about the books I choose to list. Sometimes I get struck by inspiration - usually a topic that keeps recurring in my life somehow - and I'll start a draft, or just jot some thoughts into a note and eventually find the time to work it into something that makes sense! That is the biggest challenge when I work 40 hours a week and have to do all the other responsibilities of life, relationships and health things that come with being an adult.
I mean, I've been trying to find the time and mental bandwidth to write a full review with my analysis of the book Rouge by Mona Awad since January (I loved it, and I'm still thinking about it)! But it's still in drafts, and I think I need to read it a third time now. It's like a running joke that I'll forever talk about it and never get it posted! I also post life updates semi-regularly. Those posts are just a catch-up on whatever is going on - how my walking/move more challenges are going, TV or movies, anything else I feel like! I love to read that kind of 'slice of life' content from other people. Now and again, I'll share something about my social anxiety struggles. I'm always battling this, and I find writing out my experiences and feelings helps to work it out of my system.

As for the process, my drafts usually get entered straight into the Jetpack app on my phone. I use Obsidian as my digital notes app for general thoughts and inspiration, and all my book reviews and ebook highlights get synced into there, too. What I've got going on with Obsidian is its own little project (essentially as my own personal database!). Most of the time, I post whenever I've finished writing because time is too short to proofread, and that's why my blog is full of typos and errors! I do re-read things later on and correct mistakes I spot, but that's as far as it goes! I also love to use Canva to create graphics. Every book review gets a little graphic with a summary; those originated in my short-lived attempt to get involved with Bookstagram, and I enjoyed making them so much that I've kept them for the blog. I am also a visual person, so it is important to me that I like the look of my website!

I think my creativity relies more on my mental state than my physical space! Definitely, my menstrual cycle comes with days where I'm buzzing with ideas and writing is easier, and I wake up with ideas first thing in the morning before the responsibilities of the day have taken over. I do need quiet, though. I can't think with background chatter, I have no idea how people manage to work in noisy cafes! They make me instantly tired, and my brain shuts down. Writing is easiest when I am on my PC with a full keyboard and dual monitors, but because I work from home full-time at the same desk, I don't like to be pinned in the same spot in my evenings, shut away from my husband, so PC time only really happens on the weekend. More often, I write on my phone; I also have an iPad, but if I'm typing on mobile, I'm faster on my phone.

I am hosted by Hostinger, which has been fine and easy to use for a non-techie like me. My CMS is WordPress; it came installed and I find it familiar and easy to use with a big community. I find there is usually a plugin to solve most problems! I have no problem with the block editor, and I love that I can hook my blog up to the wider WordPress.com world to more easily connect with other bloggers. I use the Jetpack app for quick editing and posting, as well as my RSS feed, and to explore and discover new blogs through tags. I honestly think Jetpack gets underrated as a discovery tool! I don't think I would change anything about my blog. With hindsight, I do wish I'd wasted less time down the SEO rabbit hole and removed the pre-installed AISEO plugin earlier! I could also have figured out how I connect my blog to WordPress/Jetpack sooner to find other bloggers. I would not have made my thoughts on Atomic Habits so SEO friendly...
it got caught in Google's net and now I regret how well it does in search results. There is a crowd of James Clear fans who get upset when you don't praise it as the life-changing work of a genius they hold it up to be. Every few months, I get something that makes me consider turning off the comments.

I got a New Year deal with Hostinger for 4 years of hosting at a ridiculous discount, so I paid that all upfront, and I think it worked out about £3 a month. I'm going to have to work out what to do when that's up for renewal! I think my domain is £8.99 a year. That is all the cost; I don't make any money from my blog, nor do I plan to. This is just a hobby, and hobbies (just like my embroidery and gardening) often cost money! Monetising would immediately make it stressful for me and take the fun out of it. I don't mind if other people want to monetise as long as it's not obnoxious. I don't like newsletters where they put some things behind a paywall but not everything, or they put half of it behind the paywall. Those are annoying when they come through my RSS feed, and usually I end up unsubscribing. I've occasionally done a "buy me a coffee" kind of one-off donation to bloggers, or the pay-what-you-like subscription model, where I can just do a couple of quid a month to show support. Or if they're an artist and they have a shop, I buy something small if the postage to the UK is reasonable.

My favourite blogs are the ones where I can feel the person writing it, and their personality and passions come through. I want to read human thoughts, not Content! I like details about people's lives with the things they love (books, TV shows, comics, flowers, whatever!), or might share that they're having a hard time with something and how they're coping. Michael at My Comic Relief writes wonderful, passionate and compassionate posts about his favourite TV shows, movies and comic books. When Doctor Who and The Acolyte were on, I was watching my RSS feed for these thoughts every week! I always find his perspective interesting and his enthusiasm infectious. Dragon Rambles is a mix of personal posts and book reviews written by Nic in New Zealand. I think she's been blogging for many years. I really love it when she shares new books she finds for her collection of retro science fiction and fantasy! I have no interest in ever reading any of them myself, but I love to read about them and her collection! I also enjoy reading Elizabeth Tai. She is based in Malaysia and was one of the first bloggers I found on Mastodon in my super early days, and it was through her that I learned popular Indie Web concepts like digital gardens and POSSE. I enjoy the fact that she writes about all kinds of things! I am actually surprised she's not been featured yet! I think Michael, Nic or Liz would be great to interview. Michael and Nic, I found in the land of WordPress, and may not even be aware of this project!

My other 3 favourites you've already featured, but I'll mention them because I think they're great! Veronique has been a favourite for a long time! Her writing always feels intimate, and I love the little snippets she shares from her life, her artwork and her passion for zines. She also mentioned my blog in her interview, and I can't tell you how thrilled I was! I had to try to explain the whole thing to my husband, who does not read blogs! Winnie Lim is another long-time favourite of mine. Her blog is also very intimate and thoughtful, and I am always eager to read about her life and little adventures.
And also Tracy Durnell's Mind Garden is like what I think I'd like my blog to be, if I had the time and inclination to properly organise myself! I know she's also had a P&B feature because that's how I found her. I love her weekly notes. I don't know why I enjoy reading what music she listened to and what meals she had that week, but I do! This one is a silly one, and maybe a bit of a blast from the past because I used to follow Cake Wrecks way back in the day (like 15 years ago!), and when I was collecting RSS feeds of blogs again a couple of years ago, I was so happy it was still around! Unlike Regretsy, RIP (and RIP to what Etsy used to be!). Anyway, there is something about badly decorated cakes that I find deeply hilarious (and bad art in general), and these collections of wonky cakes made by so-called professional bakers are a regular source of joy.

I don't have anything in particular to share. I am just so excited to have been asked to take part! I hope everyone keeps on doing what they love and blogging about it in the way that they want! I am thankful to have found that the 'blogosphere' is still alive and well, and for me, it's such a peaceful refuge away from the overwhelming noise of social media. I am also hugely appreciative of projects like this that make it easier for bloggers to find each other, so thank you, Manu!

Now that you're done reading the interview, go check the blog and subscribe to the RSS feed . If you're looking for more content, go read one of the previous 111 interviews . Make sure to also say thank you to Annie Mueller and the other 122 supporters for making this series possible.


What Dynamic Typing Is For

Unplanned Obsolescence is a blog about writing maintainable, long-lasting software. It also frequently touts—or is, at the very least, not inherently hostile to—writing software in dynamically-typed programming languages. These two positions are somewhat at odds. Dynamically-typed languages encode less information. That’s a problem for the person reading the code and trying to figure out what it does.

This is a simplified version of an authentication middleware that I include in most of my web services: it checks an HTTP request to see if it corresponds to a logged-in user’s session. Pretty straightforward stuff. The function gets a cookie from the HTTP request, checks the database to see if that token corresponds to a user, and then returns the user if it does. Line 2 fetches the cookie from the request, line 3 gets the user from the database, and the rest either returns the user or throws an error.

There are, however, some problems with this. What happens if there’s no cookie included in the HTTP request? Will it return `undefined` or an empty string? Will the cookies object even exist if there are no cookies at all? There’s no way to know without looking at the implementation (or, less reliably, the documentation). That doesn’t mean there isn’t an answer! A request with no cookie will return `undefined`. That results in a database call with an `undefined` token, which returns `null` (the function checks for that). `null` is a falsy value in JavaScript, so the conditional evaluates to false and throws an error. The code works and it’s very readable, but you have to do a fair amount of digging to ensure that it works reliably. That’s a cost that gets paid in the future, anytime the “missing token” code path needs to be understood or modified. That cost reduces the maintainability of the service.

Unsurprisingly, the equivalent Rust code is much more explicit. In Rust, the tooling can answer a lot more questions for me. What type is the token? A simple hover in any code editor with an LSP tells me, definitively. Because it’s Rust, you have to explicitly check if the token exists; ditto for whether the user exists. That’s better for the reader too: they don’t have to wonder whether certain edge cases are handled. Rust is not the only language with strict, static typing. At every place I’ve ever worked, the longest-running web services have all been written in Java. Java is not as good as Rust at forcing you to show your work and handle edge cases, but it’s much better than JavaScript. Putting aside the question of which one I prefer to write, if I find myself in charge of a production web service that someone else wrote, I would much prefer it to be in Java or Rust than JavaScript or Python.

Conceding that, ceteris paribus, static typing is good for software maintainability, one of the reasons that I like dynamically-typed languages is that they encourage a style I find important for web services in particular: writing to the DSL. A DSL (domain-specific language) is a programming language that’s designed for a specific problem area. This is in contrast to what we typically call “general-purpose programming languages” (e.g. Java, JavaScript, Python, Rust), which can be reasonably applied to most programming tasks. Most web services have to contend with at least three DSLs: HTML, CSS, and SQL. A web service with a JavaScript backend has to interface with, at a minimum, four programming languages: one general-purpose and three DSLs.
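For reference, a minimal sketch of the kind of middleware described above, with hypothetical helper names (the original snippet is not reproduced here):

    // Hypothetical sketch: cookie lookup, then a DB check, then return-or-throw
    async function authenticate(req) {
      const token = req.cookies.session_token;      // line 2: may be undefined
      const user = await db.getUserByToken(token);  // line 3: returns null if not found
      if (!user) {
        throw new Error('Unauthorized');
      }
      return user;
    }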
Pretty straightforward stuff. The function gets a cookie from the HTTP request, checks the database to see if that token corresponds to a user, and then returns the user if it does. Line 2 fetches the cookie from the request, line 3 gets the user from the database, and the rest either returns the user or throws an error. There are, however, some problems with this. What happens if there's no cookie included in the HTTP request? Will it return undefined or an empty string? Will req.cookies even exist if there are no cookies at all? There's no way to know without looking at the implementation (or, less reliably, the documentation). That doesn't mean there isn't an answer! A request with no cookie will return undefined. That results in a getUserByToken(undefined) call, which returns null (the function checks for that). null is a falsy value in JavaScript, so the conditional evaluates to false and the function throws an error. The code works and it's very readable, but you have to do a fair amount of digging to ensure that it works reliably. That's a cost that gets paid in the future, anytime the "missing token" code path needs to be understood or modified. That cost reduces the maintainability of the service. Unsurprisingly, the equivalent Rust code is much more explicit. In Rust, the tooling can answer a lot more questions for me. What type is the token? A simple hover in any code editor with an LSP tells me, definitively: it's an Option, because the cookie may not be there. Because it's Rust, you have to explicitly check if the token exists; ditto for whether the user exists. That's better for the reader too: they don't have to wonder whether certain edge cases are handled. Rust is not the only language with strict, static typing. At every place I've ever worked, the longest-running web services have all been written in Java. Java is not as good as Rust at forcing you to show your work and handle edge cases, but it's much better than JavaScript. Putting aside the question of which one I prefer to write, if I find myself in charge of a production web service that someone else wrote, I would much prefer it to be in Java or Rust than JavaScript or Python. Conceding that, ceteris paribus, static typing is good for software maintainability, one of the reasons that I like dynamically-typed languages is that they encourage a style I find important for web services in particular: writing to the DSL. A DSL (domain-specific language) is a programming language that's designed for a specific problem area. This is in contrast to what we typically call "general-purpose programming languages" (e.g. Java, JavaScript, Python, Rust), which can reasonably be applied to most programming tasks. Most web services have to contend with at least three DSLs: HTML, CSS, and SQL. A web service with a JavaScript backend has to interface with, at a minimum, four programming languages: one general-purpose and three DSLs. If you have the audacity to use something other than JavaScript on the server, then that number goes up to five, because you still need JavaScript to augment HTML. That's a lot of languages! How are we supposed to find developers who can do all this stuff? The answer that a big chunk of the industry settled on is to build APIs so that the domains of the DSLs can be described in the general-purpose programming language. Instead of writing HTML, you can write JSX, a JavaScript syntax extension that supports tags. This has the important advantage of allowing you to include dynamic JavaScript expressions in your markup. And now we don't have to kick out to another DSL to write web pages. Can we start abstracting away CSS too? Sure can! This example uses styled-components.
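Roughly, both patterns look like this (an illustrative component and style, not code from any particular app):

```jsx
import styled from "styled-components"

// JSX: markup as JavaScript, with dynamic expressions in braces
function EventList({ events }) {
  return (
    <ul>
      {events.map((event) => (
        <li key={event.id}>{event.name}</li>
      ))}
    </ul>
  )
}

// styled-components: CSS as JavaScript, via a tagged template literal
const Title = styled.h1`
  font-size: 1.5em;
  color: palevioletred;
`
```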
This is a tactic I call "expanding the bounds" of the programming language. In an effort to reduce complexity, you try to make one language express everything about the project. In theory, this reduces the number of languages that one needs to learn to work on it. The problem is that it usually doesn't work. Expressing DSLs in general-purpose programming syntax does not free you from having to understand the DSL—you can't actually use styled-components without understanding CSS. So now a prospective developer has to both understand CSS and a new CSS syntax that only applies to the styled-components library. Not to mention, it is almost always a worse syntax. CSS is designed to make expressing declarative styles very easy, because that's the only thing CSS has to do. Expressing this in JavaScript is naturally way clunkier. Plus, you've also tossed the web's backwards compatibility guarantees. I picked styled-components because it's very popular. If you built a website with styled-components in 2019, didn't think about the styles for a couple years, and then tried to upgrade it in 2023, you would be two major versions behind. Good luck with the migration guide. CSS files, on the other hand, are evergreen. Of course, one of the reasons for introducing JSX or CSS-in-JS is that they add functionality, like dynamic population of values. That's an important problem, but I prefer a different solution. Instead of expanding the bounds of the general-purpose language so that it can express everything, another strategy is to build strong and simple API boundaries between the DSLs. Some benefits of this approach: DSLs are better at expressing their domain, resulting in simpler code; it aids debugging by segmenting bugs into natural categories; and the skills gained by writing DSLs are more transferable. The following example uses a JavaScript backend. A lot of enthusiasm for htmx (the software library I co-maintain) is driven by communities like Django and Spring Boot developers, who are thrilled to no longer be bolting on a JavaScript frontend to their website; that's a core value proposition for hypermedia-driven development. I happen to like JavaScript though, and sometimes write services in NodeJS, so, at least in theory, I could still use JSX if I wanted to. What I prefer, and what I encourage hypermedia-curious NodeJS developers to do, is use a template engine. This bit of production code I wrote for an events company uses Nunjucks, a template engine I once (fondly!) called "abandonware" on stage. Other libraries that support Jinja-like syntax are available in pretty much any programming language. The template is just HTML with basic loops ({% for %}) and data access ({{ }}). I get very frustrated when something that is easy in HTML is hard to do because I'm using some wrapper with inferior semantics; with templates, I can dynamically build content for HTML without abstracting it away. Populating this template in JavaScript is so easy. You just give it a JavaScript object with an events field. That's not particularly special on its own—many languages support serialized key-value pairs. This strategy really shines when you start stringing it together with SQL. Let's replace that database function call with an actual query, using an interface similar to a synchronous SQLite driver.
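Put together, the pattern looks something like this (a sketch: the schema, field names, and the better-sqlite3 driver choice are assumptions for illustration):

```js
const nunjucks = require("nunjucks")
const Database = require("better-sqlite3") // a synchronous SQLite driver

const db = new Database("app.db")

// The template: plain HTML, with Nunjucks loops and data access
const eventList = `
<ul>
  {% for event in events %}
    <li><a href="/events/{{ event.id }}">{{ event.name }}</a></li>
  {% endfor %}
</ul>`

// The query: plain SQL
const events = db
  .prepare("SELECT id, name FROM events ORDER BY date")
  .all()

// The glue: move the query result into the template
const html = nunjucks.renderString(eventList, { events })
```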
I know the above code is not everybody's taste, but I think it's marvelous. You get to write all parts of the application in the language best suited to each: HTML for the frontend and SQL for the queries. And if you need to do any additional logic between the database and the template, JavaScript is still right there. One result of this style is that it increases the percentage of your service that is specified declaratively. The database schema and query are declarative, as is the HTML template. The only imperative code in the function is the glue that moves that query result into the template: two statements in total. Debugging is also dramatically easier. I typically do two quick things to narrow down the location of the bug: CMD+U to view source (if the missing data is in the HTML, it's a frontend problem), and run the query in the database (if the missing data is in the SQL, it's a problem with the GET route). Those two steps are easy, can be done in production with no deployments, and provide excellent signal on the location of the error. Fundamentally, what's happening here is a quick check at the two hard boundaries of the system: the one between the server and the client, and the one between the client and the database. Similar tools are available to you if you abstract over those layers, but they are lessened in usefulness. Every web service has network requests that can be inspected, but putting most frontend logic in the template means that the HTTP response's data ("does the date ever get sent to the frontend?") and functionality ("does the date get displayed in the right HTML element?") can be inspected in one place, with one keystroke. Every database can be queried, but using the database's native query language in your server means you can validate both the stored data ("did the value get saved?") and the query ("does the code ask for the right value?") independent of the application. By pushing so much of the business logic outside the general-purpose programming language, you reduce the likelihood that a bug will exist in the place where it is hardest to track down—runtime server logic. You'd rather the bug be a malformatted SQL query or HTML template, because those are easy to find and easy to fix. When combined with the router-driven style described in Building The Hundred-Year Web Service, you get simple and debuggable web systems. Each HTTP request is a relatively isolated function call: it takes some parameters, runs an SQL query, and returns some HTML. In essence, dynamically-typed languages help you write the least amount of server code possible, leaning heavily on the DSLs that define web programming while validating small amounts of server code via means other than static type checking. To finish, let's take a look at the equivalent code in Rust, using rusqlite, minijinja, and a quasi-hypothetical server implementation.
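A sketch, with the row struct and schema filled in plausibly:

```rust
use minijinja::{context, Environment};
use rusqlite::Connection;
use serde::Serialize;

// Rust has to be told the exact shape of each row up front.
#[derive(Serialize)]
struct Event {
    id: i64,
    name: String,
    date: String, // the compiler trusts this; the database may disagree
}

fn events_page(conn: &Connection) -> Result<String, Box<dyn std::error::Error>> {
    // Unpack the SQL result into the typed struct, column by column.
    let mut stmt = conn.prepare("SELECT id, name, date FROM events ORDER BY date")?;
    let events = stmt
        .query_map([], |row| {
            Ok(Event {
                id: row.get(0)?,
                name: row.get(1)?,
                date: row.get(2)?,
            })
        })?
        .collect::<Result<Vec<_>, _>>()?;

    // Hand the typed rows to the template engine.
    let mut env = Environment::new();
    env.add_template(
        "events",
        "<ul>{% for event in events %}<li>{{ event.name }} ({{ event.date }})</li>{% endfor %}</ul>",
    )?;
    Ok(env.get_template("events")?.render(context! { events })?)
}
```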
I am again obfuscating some implementation details (Are we storing human-readable dates in the database? What's that universal result type?). The important part is that this blows. Most of the complexity comes from the need to tell Rust exactly how to unpack that SQL result into a typed data structure, and then into an HTML template. The struct is declared so that Rust knows to expect a String for the date column. The derive macros create a representation that minijinja knows how to serialize. It's tedious. Worse, after all that work, the compiler still doesn't do the most useful thing: check whether String is the correct type for that column. If it turns out that the date can't be represented as a String (maybe it's a blob), the query will compile correctly and then fail at runtime. From a safety standpoint, we're not really in a much better spot than we were with JavaScript: we don't know if it works until we run the code. Speaking of JavaScript, remember that code? That was great! Now we have no idea what any of these types are, but if we run the code and we see some output, it's probably fine. By writing the JavaScript version, you are banking that you've made the code so highly auditable by hand that the compile-time checks become less necessary. In the long run, this is always a bad bet, but at least I'm not writing 150% more code for 10% more compile-time safety. The "expand the bounds" solution to this is to pull everything into the language's type system: the database schema, the template engine, everything. Many have trod that path; I believe it leads to madness (and toolchain lock-in). Is there a better one? I believe there is. The compiler should understand the DSLs I'm writing and automatically map them to types it understands. If it needs more information—like a database schema—to figure that out, that information can be provided. Queries correspond to columns with known types—the programming language can infer the type of each selected column. HTML has context-dependent escaping rules—the programming language can validate that a value is being used in a valid element and escape it correctly. With this functionality in the compiler, if I make a database migration that would render my usage of a dependent variable in my HTML template invalid, the compiler will show an error. All without losing the advantages of writing the expressive, interoperable, and backwards-compatible DSLs that comprise web development. Dynamically-typed languages show us how easy web development can be when we ditch the unnecessary abstractions. Now we need tooling to make it just as easy in statically-typed languages too. Thanks to Meghan Denny for her feedback on a draft of this blog. A few closing notes: Language extensions that just translate the syntax are alright by me, like generating HTML with s-expressions, ocaml functions, or zig comptime functions. I tend to end up just using templates, but language-native HTML syntax can be done tastefully, and it is probably helpful on the road to achieving the DX I'm describing; I've never seen it done well for SQL. Sqlx and sqlc seem to have the right idea, but I haven't used either because I prefer to stick to SQLite-specific libraries to avoid async database calls. I don't know as much about compilers as I'd like to, so I have no idea what kind of infrastructure would be required to make this work with existing languages in an extensible way. I assume it would be hard.
0 views
David Bushell 4 weeks ago

What is a Linux?

Do you build websites like me? Maybe you're an Apple user, whether by fandom or by the fact that macOS is not Windows. You've probably heard about this Linux thing. But what is it? In this post I try to explain what is a Linux and how Linux does what it be. ⚠️ This will be a blog post where the minutest of details is well actually-ied by orange site dwelling vultures. I'll do my best to remain factual. At a high level Linux is best described as an OS (operating system) like Windows or macOS. Where Linux differs is that its components are all open source. Open source refers to the source code. Linux code is freely available. "Free" can mean gratis; without payment. But open source licenses like GPL and MIT explicitly allow the sale of software. "Free" can also mean libre; unrestricted, allowing users to modify and redistribute the code. Linux software is typically both kinds of free. You may see acronyms like OSS (open source software), and FOSS/FLOSS (free/libre and open source software), emphasising a more liberal ideology. Some believe that non-free JavaScript is nothing short of malware forced upon users. Think about the sins you've committed with JavaScript and ask yourself: are they wrong? Linux and OSS is a wonderful can of worms with polarising opinions. We can break down Linux usage into three categories. Linux can be "headless" meaning there is no desktop GUI. Headless systems are operated via the command line and keyboard (except for the occasional web control panel). This is the backbone of the Internet. The vast majority of web servers are headless Linux. "Desktop Linux" refers to the less nerdy experience of using a GUI with a mouse. Linux has never done well in this category. Depending on whom you ask, Windows (with a capital W) dominates. The Steam survey puts Windows at 95% for gaming. Other sources are more favourable towards macOS, reporting upwards of 15%. Linux is niche for desktop. Some will claim success for Linux in the guise of Android OS. Although technically based on Linux, much of Android and Google's success is antithetical to FOSS principles. SteamOS from Valve is a gaming Linux distro making moves in this category. Embedded systems are things like factory robots, orbital satellites, smart fridges, fast food kiosks, etc. There's a good chance these devices run Linux. If it's Windows you'll know by the blue screen and horrendous input latency. That was four categories, sorry. Linux is not one operating system but many serving different requirements. If Bill Gates created Windows and Steve Jobs oversaw macOS, who's the Linux mastermind? Linux is named after Linus Torvalds, who is still the lead developer of the Linux kernel. But there is no Microsoft or Apple of Linux. Due to its open source nature, Linux is more like a collection of interchangeable pieces working together. There is no default Linux install. You must choose a distribution like a starter Pokémon. Linux distros differ in their choice of core pieces: the Linux kernel itself, a package manager, a boot and init system, network utilities, and the desktop experience. The Linux kernel includes the low-level services common to all Linux systems. The kernel also has drivers for hardware and file systems. Each distro typically compiles its own kernel, which means hardware support can vary out of the box. It's possible to recompile the kernel to include modules specific to your needs. Linux distros can exist for niche and specialised use cases. OpenWrt is a distro for network devices like wireless routers. DietPi is a lightweight choice for single board computers (a favourite of mine). Distros exist for seasoned nerds.
Gentoo Linux is compiled from source with highly specific optimisation flags. NixOS provides an immutable base with declarative builds. If no distro meets your requirements, why not build Linux from scratch? You can find all sorts of weird and wonderful distros on Distro Watch. If you consult the distro timeline on Wikipedia you can see an extensive hierarchy. It's overwhelming! Know that most are hobbyist projects not maintained for long. They're nothing more than pre-installed software, opinionated settings, and a wallpaper. Distros like Debian and Arch Linux offer a more generalised OS. They provide the base for most commonly used distros. RHEL (Red Hat Enterprise Linux) also exists for the corporate world. From Debian comes Ubuntu and Raspberry Pi OS. Ubuntu desktop is by far the most popular distro for day-to-day use. Ubuntu makes significant changes to Debian and provides its own downstream package repository. Where should you start? You'll get some crazy bad answers. Just try Ubuntu. It has the "network effect" and you're more likely to find support online. This advice is likely to elicit the most comments! Desktop Linux can look wildly different across distros. There is no universal desktop GUI like you'd find on Windows or macOS. KDE offers the classic Windows-like experience. Gnome is more akin to macOS. XFCE is a lightweight option. Hyprland strips back the GUI using a tiled window presentation. There is a shortage of design and accessibility expertise within FOSS. Linux can be ugly and inaccessible at times. If you like design perfection Linux can make your eye twitch. On the plus side, you're not stuck with a vendor-locked experience. Desktop environments provide hundreds of dials to customise their appearance. Want a start menu? You can add one! Hate the dock? Remove it! Some parts of Linux are even styled and scripted with CSS and JavaScript. Distros come with a package manager (think NPM). This is the main source of system updates and software. On Debian-based systems you'll find apt commands. Arch-based systems use pacman. Distros may include a custom GUI and auto-update feature for those scared of the command line.
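For example, a routine full-system update on each family looks something like this (assuming default repositories):

```sh
# Debian/Ubuntu: refresh the package lists, then upgrade
sudo apt update && sudo apt upgrade

# Arch: synchronise repositories and upgrade in one go
sudo pacman -Syu
```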
Linux has multiple upstream package repositories. Install the same package on Debian and you'll get an old version (politely referred to as: "stable"). In comparison, Arch gives you the cutting edge, likely compiled from GitHub last night. Remember that almost everything around Linux is open source. You're free to compile and install software from anywhere. Software maintainers often provide an install script you pipe straight from the web into your shell; see Node.js for example. To download and immediately execute a script from the Internet is insanely insecure! You're supposed to vet the code first, but nobody does. Every Linux system is different so software support can be tricky. Containerised software has become a popular distribution method to solve compatibility issues. Flatpak is the leading choice and Flathub is a bountiful app store. AppImage is a similar project. Ubuntu is trying to make Snaps happen in this space. Hopefully I've explained what Linux is! But is it for you? Linux can be a great OS if you're a web developer writing code. All the familiar tools should be available. If you like to tinker, Linux will be a never-ending source of weekend projects. Linux has unrivalled backwards compatibility and avoids the comparable bloat of Windows and macOS. Older hardware can feel surprisingly fresh under Linux. If you require access to proprietary design software like the Adobe suite, you're out of luck. This is why I'm stuck on macOS for my day job. Clients love to deliver vendor lock-in with their designs. There are often 3rd-party workarounds for apps like Figma, but unofficial apps are always buggy and prone to breakage. The best and worst part about Linux is too much choice. Everything can be modified, replaced, improved, and broken. I'll end before this turns into a book. Let me know if you found this informative! Thanks for reading! Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.

0 views
Manuel Moreale 4 weeks ago

Linda Ma

This week on the People and Blogs series we have an interview with Linda Ma, whose blog can be found at midnightpond.com. Tired of RSS? Read this in your browser or sign up for the newsletter. The People and Blogs series is supported by Aleem Ali and the other 120 members of my "One a Month" club. If you enjoy P&B, consider becoming one for as little as 1 dollar a month. Hey, I'm Linda. I grew up in Budapest in a Chinese family of four, heavily influenced by the 2000s internet. I was very interested in leaving home and ended up in the United Kingdom—all over, but with the most time spent in Edinburgh, Scotland. I got into design, sociology, and working in tech and startups. Then, I had enough of being a designer, working in startups, and living in the UK, so I left. I moved to Berlin and started building a life that fits me more authentically. My interests change a lot, but the persistent ones have been: journaling with a fountain pen, being horizontal in nature, breathwork, and ambient music. I was struck by a sudden need to write in public last year. I'd been writing in private but never felt the need to put anything online because I have this thing about wanting to remain mysterious. At least, that's the story I was telling myself. In hindsight, the 'sudden need' was more of a 'wanting to feel safe to be seen.' I also wanted to find more people who were like-minded. Not necessarily interested in the same things as me, but thinking in similar ways. Through writing, I discovered that articulating your internal world with clarity takes time and that I was contributing to my own problems because I wasn't good at expressing myself. I write about these kinds of realizations in my blog. It's like turning blurriness and stories into clarity and facts. I also do the opposite sometimes, where I reframe experiences and feelings into semi-fictional stories as a way to release them. I enjoy playing in this space between self-understanding through reality and self-soothing through fantasy. I also just enjoy the process of writing and the feeling of hammering on the keyboard. I wanted the blog to be formless and open-ended, so it didn't have a name to begin with, and it was hanging out on my personal website. The name just kinda happened. I like the sound of the word "pond" and the feeling I get when I think of a pond. Then I thought: if I were a pond, what kind of pond would I be? A midnight pond. It reflects me, my writing, and the kind of impression I'd like to leave. It's taken on a life of its own now, and I'm curious to see how it evolves. Nowadays, it seems I'm interested in writing shorter pieces and poems. I get a lot of inspiration from introspection, often catalyzed by conversations with people, paragraphs from books, music, or moments from everyday life. In terms of the writing process, the longer blog posts grow into being like this: I'll have fleeting thoughts and ideas that come to me pretty randomly. I try to put them all in one place (a folder in Obsidian or a board in Muse). I organically return to certain thoughts and notes over time, and I observe which ones make me feel excited. Typically, I'll switch to iA Writer to do the actual writing — something about switching into another environment helps me get into the right mindset. Sometimes the posts are finished easily and quickly, sometimes I get stuck. When I get stuck, I take the entire piece and make it into a pile of mess in Muse. Sometimes the mess transforms into a coherent piece, sometimes it gets abandoned.
When I finish something and feel really good about it, I let it sit for a couple days and look at it again once the post-completion high has faded. This is advice from the editors of the Modern Love column, and it's very good advice. I occasionally ask a friend to read something to gauge clarity and meaning. I like the idea of having more thinking buddies. Please feel free to reach out if you think we could be good thinking buddies. Yes, I do believe the physical space influences my creativity. And it's not just the immediate environment (the room or desk I'm writing at) but also the thing or tool I'm writing with (apps and notebook) as well as the broader environment (where I am geographically). There's a brilliant book by Vivian Gornick called The Situation and the Story: The Art of Personal Narrative and a quote in it: "If you don't leave home you suffocate, if you go too far you lose oxygen." It's her comment on one of the example pieces she discusses. This writer was talking about how he couldn't write when he was too close or too far from home. It's an interesting perspective to consider, and I find it very relatable. Though I wouldn't have arrived at this conclusion had I not experienced both extremes. My ideal creative environment is a relatively quiet space where I can see some trees or a body of water when I look up. The tools I mentioned before and my physical journal are also essential to me. My site is built with Astro, the code is on GitHub, and it all deploys through Netlify. The site/blog is really just a bunch of .md and .mdx files with some HTML and CSS. I code in VS Code. I wouldn't change anything about the content or the name. Maybe I would give the tech stack or platform more thought if I started it now? In moments of frustration with Astro or code, I've often wondered if I should just accept that I'm not a techie and use something simpler. It's been an interesting journey figuring things out though. Too deep into it, can't back out now. The only running cost I have at the moment is the domain, which is around $10 a year. iA Writer was a one-time purchase of $49.99. My blog doesn't generate revenue. I don't like the idea of turning personal blogs into moneymaking machines because it will most likely influence what and how you write. But — I am supportive of creatives wanting to be valued for what they create and share from an authentic place. I like voluntary support based systems like buymeacoffee.com or ko-fi.com. I also like the spirit behind platforms like Kickstarter or Metalabel. I started a Substack earlier this year where I share the longer posts from my blog. I'm not sure how I feel about this subscription thing, but I now use the paywall to protect posts that are more personal than others. I've come across a lot of writing I enjoy and connected with others through writing. Here are a few blogs I've been introduced to or stumbled upon, including some interesting, no-longer-active ones:
Lu's Wikiblogardenite — Very real and entertaining blog of a "slightly-surreal videos maker and coder".
Romina's Journal — Journal of a graphic designer and visual artist.
Maggie Appleton's Garden — Big fan of Maggie's visual essays on programming, design, and anthropology.
Elliott's memory site — This memory site gives me a cozy feeling.
Where are Kay and Phil? — Friends documenting their bike tours and recipes.
brr — Blog of an IT professional who was deployed to Antarctica during 2022-2023.
The late Ursula K. Le Guin's blog — She started it at the age of 81 in 2010.

I like coming across sites that surprise me. Here's one that boggles my mind, and here's writer Catherine Lacey's website. There's also this online documentary and experience of The Garden of Earthly Delights by Jheronimus Bosch that I share all the time, and Spencer Chang's website is pretty cool. Now that you're done reading the interview, go check the blog and subscribe to the RSS feed. If you're looking for more content, go read one of the previous 110 interviews. Make sure to also say thank you to Ben Werdmuller and the other 120 supporters for making this series possible.

2 views
Preah's Website 1 month ago

(Guide) Intro To Social Blogging

Social networks have rapidly become vital to many people's lives on the internet. People want to see what their friends are doing, where they are, and photos of it all. They also want to share these same things with their friends, all without having to go through the manual and sometimes awkward process of messaging them directly and saying "Hey, how're you doing?" Developers and companies have complied with this desire for instant connection. We saw the launch of Friendster in 2002, then MySpace and a job-centered one we all know, LinkedIn, in 2003. Famously, Facebook in 2004, YouTube in 2005, Twitter (now X) in 2006. Followed by Instagram, Snapchat, Google+ (RIP), TikTok, and Discord. People were extremely excited about this. We are more connected than ever. But we are losing in several ways. The companies that own these platforms want to make maximum profit, leading them to offer subscription-based services in some cases or, more distressing, sell their users' data to advertisers. They use algorithms to serve cherry-picked content that creates dangerous echo chambers, and instill the need for users to remain on their device for sometimes hours just to see what's new, exacerbating feelings of FOMO and wasting precious time. Facebook has been found to conduct experiments on its users to fuel rage and misinformation for the purpose of engagement. 1 2 When did socializing online with friends and family become arguing with strangers, spreading misinformation, and experiencing panic attacks because of the constant feed of political and social unrest? I don't expect anyone to drop their social media. Plenty of people use it in healthy ways. We even have decentralized social media, such as the fediverse (think Mastodon) and the AT Protocol (think Bluesky), to reduce the problem of one person or company owning everything. I think this helps, and seeing a feed of your friends' short thoughts or posts occasionally is nice if you're not endlessly scrolling. I also think it's vital for many people to be able to explore recommendations frequently to get out of their bubble and experience variety. There is another option, one I am personally more fond of. It can sit nicely alongside your existing social media or replace it. It serves a different purpose than something like Twitter (X) or Instagram. It's meant to be a slower, more nuanced form of socializing and communicating, inspired by the pre-social media era, or at least the early one. For the purposes of this guide, I will refer to this as "Blog Feeds." A little intro in one page can be found at blogfeeds.net, 3 which includes an aggregation of blogs to follow, essentially creating a network of people similar to a webring. 4 This will help you explore new blogs you find interesting and create a tighter group. Another gem is ooh.directory, which sorts blogs by category and interest, allows you to flip through random blogs, and visit the most recently-updated blogs for ideas of who to follow. Basically, a blog feed involves making a personal blog, which can have literally whatever you want on it, and following other people. The "following" aspect can be done through RSS (most common), or email newsletter if their site supports it. If the blog is part of the AT Protocol, you may be able to follow it using a Bluesky account. More about that later. Making a blog sounds scary and technical, but it doesn't have to be. If you know web development or want to learn, you can customize a site to be whatever your heart desires.
If you're not into that, there are many services that make it incredibly easy to get going. You can post about your day, about traveling, about gaming, theme it a specific way, or post short thoughts on nothing much at all if you want. All I ask is that you do this because you want to, not solely because you might make a profit off of your audience. Also, please reconsider using AI to write posts if you are thinking of doing that! It's fully up to you, but in my opinion, why should I read something no one bothered to write?

Hosted Services:

Bear Blog: In the creator's own words, "A privacy-first, no-nonsense, super-fast blogging platform." Sign up, select a pre-made theme if you want and modify it to your liking, make post templates, and connect a custom domain if desired. Comes with ready-to-go RSS, and pretty popular among bloggers currently. This site runs on it.

Pika: "An editor that makes you want to write, designed to get out of your way and perfectly match what readers will see." With Pika you can sign up, choose a theme, customize without code, write posts in a clean editor, export your content, and connect your own domain, with a focus on privacy and design. You can start for free (up to ~50 posts) and upgrade later if you want unlimited posts, newsletter subscribers, analytics, etc.

Substack: You might have seen this around before; it's quite popular. It's a platform built for people to publish posts and sometimes make money doing it. You can start a newsletter or blog, choose what's free and what's paid, send posts (and even podcasts or video) to subscribers' inboxes, build a community, and access basic analytics. It's simple and user-friendly, with a 10% fee if you monetize. This may not be the most loved option by other small bloggers due to its association with newsletter-signup popups and making a profit. It is also the most similar to other social media among blogging options.

Ghost: An open-source platform focused on publishing and monetization. Ghost provides an editor (with live previews, Markdown + embeds, and an admin UI), built-in SEO, newsletter tools, membership & subscription support, custom themes, and control over your domain and data. You can self-host (free, for full flexibility) or use their managed Ghost(Pro) hosting, and benefit from faster performance, email delivery, and extensible APIs.

Wordpress: The world's most popular website and blogging platform, powering over 40% of the web. WordPress lets you create a simple blog or a business site using free and premium themes and plugins. You can host it yourself with full control, or use their hosted service (WordPress.com) for convenience. It supports custom domains, rich media, SEO tools, and extensibility through code or plugins.

Squarespace: You might have heard of this on your favorite YouTuber's channel during a sponsorship (you don't sit through those, do you?). It is a platform for building websites, blogs, and online stores with no coding required. Squarespace offers templates, a drag-and-drop editor, built-in SEO, analytics, and e-commerce tools under a subscription. You can connect a custom domain, publish blog posts, and manage newsletters.

Self-hosted, if you're more technical:

Astro: A modern web framework built for speed, content, and flexibility. Astro lets you build blogs, portfolios, and full sites using any UI framework, or none at all, with zero JavaScript by default. 5 It supports Markdown, MDX, and server-side rendering, plus integrations for CMSs, themes, and deployment platforms.
Hugo: An open-source static site generator built for efficiency and flexibility. It lets you create blogs and websites using Markdown, shortcodes, and templates. It supports themes, taxonomies, custom content types, and control over site structure without needing a database.

Zola: Another open-source static site generator. Zola uses Markdown for content, Tera templates for layouts, and comes with built-in features like taxonomies, RSS feeds, and syntax highlighting. It requires no database, and is easy to configure.

11ty: Pronounced Eleventy. A flexible static site generator that lets you build content-focused websites using plain HTML, Markdown, or templating languages like Nunjucks, Liquid, and others. 11ty requires no database, supports custom data structures, and gives full control over your site's output.

Jekyll: A popular static site generator that transforms plain text into self-hosted websites and blogs. Jekyll uses Markdown, Liquid templates, and simple configuration to generate content without a database. It supports themes, plugins, and custom layouts, and integrates seamlessly with GitHub Pages for free hosting.

Honorable mention:

Neocities: This is a modern continuation of Geocities, mainly focused on hand-coding HTML and CSS to create a custom site from scratch. Not ideal for blogging, but cool for showcasing a site and learning web development. It's free and open-source, and you can choose to pay for custom domains and more bandwidth, with no ads or data selling. You can see my silly site I made using Neocities for a D&D campaign I'm a part of at thepub.neocities.org.

Wow, that's a lot of options! Don't get overwhelmed. Here are the basics for a simple site like Bear Blog or a static site generator. You write a post. This post tends to be in Markdown, which is a markup language (like HTML) for creating formatted text. It's actually not too far from something like Microsoft Word. In this case, if you want a header, you can put a pound symbol in front of your header text to tell your site that it should be formatted as one. Same with quotation blocks, bolding, italics, and all that. Here is a simple Markdown cheatsheet provided by Bear Blog.
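A few of those basics, in plain Markdown (standard syntax, not specific to any one platform):

```markdown
# This line becomes a header

Regular paragraph text, with **bold** and *italic* words.

> This line becomes a quotation block.
```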
Some other blogging platforms have even more options for formatting, like informational or warning boxes. After you've written it, you can usually preview it before posting. While you're writing, you might want to use a live preview to make sure you're formatting it how you intend. After posting, people can go read your post and possibly interact with it in some ways if you want that. I'm not going to attempt to describe the AT Protocol when there is another post that does an excellent job. But what I am going to mention, briefly, is using this protocol to follow blogs via Bluesky or another AT Protocol handle. Using something like leaflet.pub, you can create a blog on there, and follow other similar blogs. Here is an example of a blog on leaflet, and if you have Bluesky, go ahead and test subscribing using it. They also support comments and RSS. You don't have to memorize what RSS stands for (Really Simple Syndication, if you're curious). This is basically how you create a feed, like a Twitter (X) timeline or a Facebook homepage. When you subscribe to someone's blog, 6 you can get a simple, consolidated aggregation of new posts. At this point, RSS is pretty old but still works exactly as intended, and most sites have RSS feeds. What you need to start is a newsreader app. There are a lot of options, so it depends on what you value most. When you subscribe to a website, you put that into your newsreader app, and it fetches the content and displays it for you, among other neat features. Usually they include nice themes, no ads to bother you, and folder or tag organization. You may have to find a site's feed and copy its link (often something like a /feed or /rss.xml URL), or your reader app may be able to find it automatically from a browser shortcut or from pasting in the normal link for the website. To learn more about adding a new subscription, see my feeds page. Here are some suggestions. Feel free to explore multiple and see what sticks:

Feedly: A cloud-based, freemium RSS aggregator with apps and a browser interface. You can use a free account that supports a limited number of sources (about 100 feeds) and basic folders, but many advanced features—such as hiding ads, notes/highlights, power search, integration with Evernote/Pocket, and "Leo" AI filtering—require paid tiers. It supports iOS, Android, and web (desktop browsers). Feedly is not open source; it is a commercial service.

Inoreader: Also a freemium service, available via web and on iOS and Android, with synchronization of your reading across devices. The free plan includes many of the core features (RSS subscription, folders, basic filtering), while more powerful features (such as advanced rules, full-text search, premium support, more feed limits) are gated behind paid tiers. Inoreader is not open source; it is a proprietary service with a freemium model.

NetNewsWire: A native, free, and open-source RSS reader for Apple platforms (macOS, iPhone, iPad). It offers a clean, native experience and tracks your read/unread status locally or via syncing. Because it's open source (MIT-licensed), you can inspect or contribute to its code. Its main limitation is platform, since it's focused on Apple devices. It's also not very visually flashy, if you care about that.

feeeed (with four Es): An iOS/iPadOS (and recent macOS) app that emphasizes a private, on-device reading experience without requiring servers or accounts. It is free (no ads or in-app purchases) and supports RSS subscriptions, YouTube, Reddit, and other sources, plus some AI summarization. Because it is designed to run entirely on device, there is no paid subscription for backend features, and it is private by design. It is not open source. One personal note from me: I use this as my daily driver, and it has some minor bugs you may notice. It's developed by one person, so it happens.

Reeder: A client app (primarily for Apple platforms: iOS, iPadOS, macOS) that fetches feed data from external services, such as Feedly, Inoreader, or local RSS sources. The new Reeder version supports a unified timeline, filters, and media integration. It is not itself a feed-hosting service but a front end; thus, many of its features (such as sync or advanced filtering) depend on which backend you use. It uses iCloud to sync subscription and timeline state between devices. Reeder is proprietary (closed source) and uses a paid model or in-app purchases for more advanced versions.

Unread: Another client app for Apple platforms with a focus on elegant reading. It relies on external feed services for syncing (you provide your own RSS or use a service like Feedly). Because Unread is a reader app, its features are more about presentation and gesture support; many of the syncing, feed limits, or premium capabilities depend on your chosen backend service. I would say Unread is my favorite so far, as it offers a lot for being free, has great syncing, tag organization, and a pleasing interface. It also fetches entire website content to get around certain limitations with some websites' feeds, allowing you to read everything in the app without visiting the website directly.

FreshRSS: A self-hostable, open-source RSS/Atom aggregator that you run on your own server (like via Docker) and which supports reading through its own web interface or via third-party client apps.
It allows full control over feed limits, filtering, theming, and extensions, and it can generate feeds by scraping or filtering content. Because it is open source, there is no paid tier in the software itself (though you may incur hosting costs). Many client apps can connect to a FreshRSS instance for mobile or desktop reading. If you're interested in something interesting you can do with its API, check out Steve's post about automating feeds with FreshRSS. Click this for a more detailed breakdown of available RSS newsreaders. Additional resource on RSS and Feeds. Okay, soooo... I have a blog, I have RSS stuff, now what do I subscribe to, and how do I make this social? I'll let blogfeeds.net describe this: This takes us to our final point: Feeds. You can probably get away with just the first two items and then sharing it with people you already know, but what about meeting or talking to people you don't know? That's where Feeds come in. The idea is to create another page on your blog that has all the RSS feeds you're subscribed to. By keeping this public and always up to date, someone can visit your page, find someone new, and follow them. Perhaps that person also has a feeds page, and the cycle continues until there is a natural and organic network of people all sharing with each other. So if you have a blog, consider making a feeds page and sharing it! If your RSS reader supports OPML file exports and imports, perhaps you can share that file as well to make it easier to share your feeds. Steve has an example of a feeds page, and blogfeeds.net has an aggregation of known blogs using feeds pages, to create a centralized place to follow blogs who have this same mindset. Once you make a feeds page, you can submit it to the site to get added to it. Then people can find your blog! There is debate on the best method for interaction with others via blogs. You have a few options. First, email: share an email address people can contact you at, and when someone has something to say, they can email you about it. This allows for intentional, nuanced discussion. Here is a template I use at the end of every post to facilitate this (totally stolen from Steve, again), along with the accompanying CSS, 7 which Bear Blog lets you edit:
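It's just a mailto link styled as a button; a sketch of the idea (the address, class name, and colors are placeholders to adapt):

```html
<a class="email-button"
   href="mailto:you@example.com?subject=Re: {{post_title}}">
  Reply via email
</a>
```

```css
/* the accompanying CSS */
.email-button {
  display: inline-block;
  padding: 0.5em 1em;
  border-radius: 8px;
  background: #324b54; /* change to match your site */
  color: #ffffff;      /* change to match your site */
  text-decoration: none;
}
```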
For each post, I change the subject line (Re: {{post_title}}) manually to whatever the post title is. That way, someone can click the button and open their mail client already ready to go with a subject line pertaining to the post they want to talk about. Change the background and color values to whatever colors you want to match your site! See the bottom of this post to see what it looks like.

Next: Comments. Comments are a tricky one. They're looked down on by some because of their lack of nuance and moderation stress, which is why Bear Blog doesn't natively have them. There are various ways to do comments, and it heavily depends on what blogging platform you choose, so here is Bear Blog's stance on it and some recommendations for setting up comments if you want.

Guestbooks: This is an old form of website interaction that dates back to at least Geocities. The concept is that visitors to your site can leave a quick thought, their name, and optionally their own website to let you know they visited. You can see an example on my website, and my recommended service for a free guestbook is Guestbooks. You can choose a default theme and edit it if you want to match the rest of your site, implement spam protection, and access a dashboard for managing and deleting comments if needed.

Here are some ideas to get you started and inspired:

Add new pages, like a link to your other social media or music listening platforms, or a page dedicated to your pet.

Email a random person on a blog to give your thoughts on a post of theirs or simply tell them their site is cool. Create an email just for this and for your website for privacy and separation, if desired.

Add a Now page. It's a section specifically to tell others what you are focused on at this point of your life. Read more about it at nownownow.com. See an example on Clint McMahon's blog. A /now page shares what you'd tell a friend you hadn't seen in a year.

Write a post about a cool rock you found in your yard, or something similarly asinine. Revel in the lack of effort. Or, make a post containing 1-3 sentences only.

Join a webring.

Make a page called Reviews, to review movies, books, TV shows, games, kitchen appliances, etc.

That's all from me for now. Subscribe to my RSS feed, email me using the button at the bottom to tell me this post sucks, or that it's great, or if you have something to suggest to edit it, and bring back the old web. Subscribe via email or RSS

Washington Post – Five points for anger, one for a 'like': How Facebook's formula fostered rage and misinformation. Link. • Unpaywalled. ↩
The Guardian – Facebook reveals news feed experiment to control emotions. Link. ↩
This website was created by Steve, who has their own Bear Blog. Read Resurrect the Old Web, which inspired this post. ↩
A webring is a collection of websites linked together in a circular structure, organized around a specific theme. Each site has navigation links to the next and previous members, forming a ring. A central site usually lists all members to prevent breaking the ring if someone's site goes offline. ↩
Take a look at this Reddit discussion on why less JavaScript can be better. ↩
Or news site, podcast, or supported social media platform like Bluesky, and even subreddits. ↩
If you don't know what HTML and CSS are: basically, the first snippet of code I shared is HTML, used for the basic text and formatting of a website; CSS is used to apply fancy styles and color, among other things. ↩

0 views
Ahmad Alfy 1 month ago

How Functional Programming Shaped (and Twisted) Frontend Development

A friend called me last week. Someone who’d built web applications for a long time before moving exclusively to backend and infra work. He’d just opened a modern React codebase for the first time in over a decade. “What the hell is this?” he asked. “What are all these generated class names? Did we just… cancel the cascade? Who made the web work this way?” I laughed, but his confusion cut deeper than he realized. He remembered a web where CSS cascaded naturally, where the DOM was something you worked with, where the browser handled routing, forms, and events without twenty abstractions in between. To him, our modern frontend stack looked like we’d declared war on the platform itself. He asked me to explain how we got here. That conversation became this essay. A disclaimer before we begin: This is one perspective, shaped by having lived through the first browser war. I applied AlphaImageLoader filter hacks to make 24-bit PNGs work in IE6. I debugged hasLayout bugs at 2 AM. I wrote JavaScript when you couldn’t trust addEventListener to work the same way across browsers. I watched jQuery become necessary, then indispensable, then legacy. I might be wrong about some of this. My perspective is biased for sure, but it also comes with the memory that the web didn’t need constant reinvention to be useful. There’s a strange irony at the heart of modern web development. The web was born from documents, hyperlinks, and a cascading stylesheet language. It was always messy, mutable, and gloriously side-effectful. Yet over the past decade, our most influential frontend tools have been shaped by engineers chasing functional programming purity: immutability, determinism, and the elimination of side effects. This pursuit gave us powerful abstractions. React taught us to think in components. Redux made state changes traceable. TypeScript brought compile-time safety to a dynamic language. But it also led us down a strange path. One where we fought against the platform instead of embracing it. We rebuilt the browser’s native capabilities in JavaScript, added layers of indirection to “protect” ourselves from the DOM, and convinced ourselves that the web’s inherent messiness was a problem to solve rather than a feature to understand. The question isn’t whether functional programming principles have value. They do. The question is whether applying them dogmatically to the web (a platform designed around mutability, global scope, and user-driven chaos) made our work better, or just more complex. To understand why functional programming ideals clash with web development, we need to acknowledge what the web actually is. The web is fundamentally side-effectful. CSS cascades globally by design. Styles defined in one place affect elements everywhere, creating emergent patterns through specificity and inheritance. The DOM is a giant mutable tree that browsers optimize obsessively; changing it directly is fast and predictable. User interactions arrive asynchronously and unpredictably: clicks, scrolls, form submissions, network requests, resize events. There’s no pure function that captures “user intent.” This messiness is not accidental. It’s how the web scales across billions of devices, remains backwards-compatible across decades, and allows disparate systems to interoperate. The browser is an open platform with escape hatches everywhere. You can style anything, hook into any event, manipulate any node. That flexibility, and that refusal to enforce rigid abstractions, is the web’s superpower.
When we approach the web with functional programming instincts, we see this flexibility as chaos. We see globals as dangerous. We see mutation as unpredictable. We see side effects as bugs waiting to happen. And so we build walls. Functional programming revolves around a few core principles: functions should be pure (same inputs → same outputs, no side effects), data should be immutable, and state changes should be explicit and traceable. These ideas produce code that’s easier to reason about, test, and parallelize (in the right context, of course). These principles had been creeping into JavaScript long before React. Underscore.js (2009) brought map, reduce, and filter to the masses. Lodash and Ramda followed with deeper FP toolkits including currying, composition, and immutability helpers. The ideas were in the air: avoid mutation, compose small functions, treat data transformations as pipelines. React itself started with class components and setState, hardly pure FP. But the conceptual foundation was there: treat UI as a function of state, make rendering deterministic, isolate side effects. Then came Elm, a purely functional language created by Evan Czaplicki that codified the “Model-View-Update” architecture. When Dan Abramov created Redux, he explicitly cited Elm as inspiration. Redux’s reducers are directly modeled on Elm’s update functions: (state, action) => newState. Redux formalized what had been emerging patterns. Combined with React Hooks (which replaced stateful classes with functional composition), the ecosystem shifted decisively toward FP. Immutability became non-negotiable. Pure components became the ideal. Side effects were corralled into useEffect. Through this convergence (library patterns, Elm’s rigor, and React’s evolution) Haskell-derived ideas about purity became mainstream JavaScript practice. In the early 2010s, as JavaScript applications grew more complex, developers looked to FP for salvation. jQuery spaghetti had become unmaintainable. Backbone’s two-way binding caused cascading updates (ironically, Backbone’s documentation explicitly advised against two-way binding, saying “it doesn’t tend to be terribly useful in your real-world app,” yet many developers implemented it through plugins). The community wanted discipline, and FP offered it: treat your UI as a pure function of state. Make data flow in one direction. Eliminate shared mutable state. React’s arrival in 2013 crystallized these ideals. It promised a world where UI = f(state): give it data, get back a component tree, re-render when data changes. No manual DOM manipulation. No implicit side effects. Just pure, predictable transformations. This was seductive. And in many ways, it worked. But it also set us on a path toward rebuilding the web in JavaScript’s image, rather than JavaScript in the web’s image. CSS was designed to be global. Styles cascade, inherit, and compose across boundaries. This enables tiny stylesheets to control huge documents, and lets teams share design systems across applications. But to functional programmers, global scope is dangerous. It creates implicit dependencies and unpredictable outcomes. Enter CSS-in-JS: styled-components, Emotion, JSS. The promise was component isolation. Styles scoped to components, no cascading surprises, no naming collisions. Styles become data, passed through JavaScript, predictably bound to elements. But this came at a cost. CSS-in-JS libraries generate styles at runtime, injecting them into <style> tags as components mount. This adds JavaScript execution to the critical rendering path.
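To make that cost concrete, here is a minimal sketch of the general runtime CSS-in-JS pattern. It is not any specific library’s internals; the css and hash helpers are illustrative names:

```js
// Roughly what runtime CSS-in-JS does on every component mount:
// serialize the style object, derive a class name, inject a rule.
const sheet = document.createElement('style');
document.head.appendChild(sheet);

function hash(str) {
  // Tiny illustrative hash, not what real libraries use
  let h = 0;
  for (const ch of str) h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  return h.toString(36);
}

function css(styles) {
  const body = Object.entries(styles)
    .map(([prop, value]) => `${prop}: ${value}`)
    .join('; ');
  const className = 'css-' + hash(body);
  // Main-thread work; a plain stylesheet would be parsed off-thread
  sheet.sheet.insertRule(`.${className} { ${body} }`);
  return className;
}

// Usage on mount (assumes the DOM is ready):
document.body.className = css({ color: 'tomato', padding: '1rem' });
```

Every step above runs in JavaScript, per component, at runtime, which is exactly the work a static stylesheet avoids.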
Server-side rendering becomes complicated. You need to extract styles during the render, serialize them, and rehydrate them on the client. Debugging involves runtime-generated class names like css-1kj9x2a. And you lose the cascade; the very feature that made CSS powerful in the first place. Worse, you’ve moved a browser-optimized declarative language into JavaScript, a single-threaded runtime. The browser can parse and apply CSS in parallel, off the main thread. Your styled-components bundle? That’s main-thread work, blocking interactivity. The web had a solution. It’s called a stylesheet. But it wasn’t pure enough. The industry eventually recognized these problems and pivoted to Tailwind CSS. Instead of runtime CSS generation, use utility classes. Instead of styled-components, compose classes in JSX. This was better: at least it’s compile-time, not runtime. No more blocking the main thread to inject styles. No more hydration complexity. But Tailwind still fights the cascade. Instead of writing .button { … } once and letting it cascade to all buttons, you write px-4 py-2 rounded bg-blue-500 on every single button element. You’ve traded runtime overhead for a different set of problems: class soup in your markup, massive HTML payloads, and losing the cascade’s ability to make sweeping design changes in one place. And here’s where it gets truly revealing: when Tailwind added support for nested selectors using the & syntax in arbitrary variants (a feature that would let developers write more cascade-like styles), parts of the community revolted. David Khourshid (creator of XState) shared examples of using nested selectors in Tailwind, and the backlash was immediate. Developers argued this defeated the purpose of Tailwind, that it brought back the “problems” of traditional CSS, that it violated the utility-first philosophy. Think about what this means. The platform has cascade. CSS-in-JS tried to eliminate it and failed. Tailwind tried to work around it with utilities. And when Tailwind cautiously reintroduced a cascade-like feature, developers who were trained by years of anti-cascade ideology rejected it. We’ve spent so long teaching people that the cascade is dangerous that even when their own tools try to reintroduce platform capabilities, they don’t want them. We’re not just ignorant of the platform anymore. We’re ideologically opposed to it. React introduced synthetic events to normalize browser inconsistencies and integrate events into its rendering lifecycle. Instead of attaching listeners directly to DOM nodes, React uses event delegation. It listens at the root, then routes events to handlers through its own system. This feels elegant from a functional perspective. Events become data flowing through your component tree. You don’t touch the DOM directly. Everything stays inside React’s controlled universe. But native browser events already work. They bubble, they capture, they’re well-specified. The browser has spent decades optimizing event dispatch. By wrapping them in a synthetic layer, React adds indirection: memory overhead for event objects, translation logic for every interaction, and debugging friction when something behaves differently than the native API. Worse, it trains developers to avoid the platform. Developers learn React’s event system, not the web’s. When they need to work with third-party libraries or custom elements, they hit impedance mismatches. addEventListener becomes a foreign API in their own codebase. Again: the web had this. The browser’s event system is fast, flexible, and well-understood. But it wasn’t controlled enough for the FP ideal of a closed system.
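For contrast, the delegation pattern React reimplements takes a few lines of platform code. A minimal sketch; the data-action attribute and save() handler are made-up names for illustration:

```js
// One native listener at the root. The browser's own bubbling
// routes every click here; no synthetic wrapper in between.
function save() {
  console.log('saved'); // hypothetical application logic
}

document.body.addEventListener('click', (event) => {
  const button = event.target.closest('[data-action="save"]');
  if (button) save();
});
```

Any element with data-action="save", added now or later, is covered, with no per-element listeners to attach or clean up.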
The logical extreme of “UI as a pure function of state” is client-side rendering: the server sends an empty HTML shell, JavaScript boots up, and the app renders entirely in the browser. From a functional perspective, this is clean. Your app is a deterministic function that takes initial state and produces a DOM tree. From a web perspective, it’s a disaster. The browser sits idle while JavaScript parses, executes, and manually constructs the DOM. Users see blank screens. Screen readers get empty documents. Search engines see nothing. Progressive rendering, one of the browser’s most powerful features, goes unused. The industry noticed. Server-side rendering came back. But because the mental model was still “JavaScript owns the DOM,” we got hydration: the server renders HTML, the client renders the same tree in JavaScript, then React walks both and attaches event handlers. During hydration, the page is visible but inert. Clicks do nothing, forms don’t submit. This is architecturally absurd. The browser already rendered the page. It already knows how to handle clicks. But because the framework wants to own all interactions through its synthetic event system, it must re-create the entire component tree in JavaScript before anything works. The absurdity extends beyond the client. Infrastructure teams watch in confusion as every user makes double the number of requests: the server renders the page and fetches data, then the client boots up and fetches the exact same data again to reconstruct the component tree for hydration. Why? Because the framework can’t trust the HTML it just generated. It needs to rebuild its internal representation of the UI in JavaScript to attach event handlers and manage state. This isn’t just wasteful, it’s expensive. Database queries run twice. API calls run twice. Cache layers get hit twice. CDN costs double. And for what? So the framework can maintain its pure functional model where all state lives in JavaScript. The browser had the data. The HTML had the data. But that data wasn’t in the right shape. It wasn’t a JavaScript object tree, so we throw it away and fetch it again. Hydration is what happens when you treat the web like a blank canvas instead of a platform with capabilities. The web gave us streaming HTML, progressive enhancement, and instant interactivity. We replaced it with JSON, JavaScript bundles, duplicate network requests, and “please wait while we reconstruct reality.” Consider the humble modal dialog. The web has <dialog>, a native element with built-in functionality: it manages focus trapping, handles Escape key dismissal, provides a backdrop, controls scroll-locking on the body, and integrates with the accessibility tree. It exists in the DOM but remains hidden until opened. No JavaScript mounting required. It’s fast, accessible, and battle-tested by browser vendors. Now observe what gets taught in tutorials, bootcamps, and popular React courses: build a modal with <div> elements. Conditionally render it when isOpen is true. Manually attach a click-outside handler. Write an effect to listen for the Escape key. Add another effect for focus trapping. Implement your own scroll-lock logic. Remember to add ARIA attributes. Oh, and make sure to clean up those event listeners, or you’ll have memory leaks. You’ve just written 100+ lines of JavaScript to poorly recreate what the browser gives you for free. Worse, you’ve trained developers to not even look for native solutions. The platform becomes invisible.
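For comparison, here is roughly what the native version looks like in its entirety (the IDs and text are placeholders):

```html
<button id="open-btn">Delete account</button>

<dialog id="confirm-dialog">
  <p>Are you sure?</p>
  <!-- method="dialog" makes both buttons close the dialog, no handlers needed -->
  <form method="dialog">
    <button value="cancel">Cancel</button>
    <button value="confirm">Confirm</button>
  </form>
</dialog>

<script>
  // showModal() gives focus containment, Esc-to-dismiss, a ::backdrop
  // pseudo-element, and accessibility-tree integration for free.
  document.getElementById('open-btn').onclick = () =>
    document.getElementById('confirm-dialog').showModal();
</script>
```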
When someone asks “how do I build a modal?”, the answer is “install a library” or “here’s my custom hook,” never “use <dialog>.” The teaching is the problem. When influential tutorial authors and bootcamp curricula skip native APIs in favor of React patterns, they’re not just showing an alternative approach. They’re actively teaching malpractice. A generation of developers learns to build inaccessible div soup because that’s what fits the framework’s reactivity model, never knowing the platform already solved these problems. And it’s not just bootcamps. Even the most popular component libraries make the same choice: shadcn/ui builds its Dialog component on Radix UI primitives, which use divs instead of the native <dialog> element. There are open GitHub issues requesting native <dialog> support, but the implicit message is clear: it’s easier to reimplement the browser than to work with it. The problem runs deeper than ignorance or inertia. The frameworks themselves increasingly struggle to work with the platform’s evolution. Not because the platform features are bad, but because the framework’s architectural assumptions can’t accommodate them. Consider why component libraries like Radix UI choose divs over <dialog>. The native element manages its own state: it knows when it’s open, it handles its own visibility, it controls focus internally. But React’s reactivity model expects all state to live in JavaScript, flowing unidirectionally into the DOM. When a native element manages its own state, React’s mental model breaks down. Keeping isOpen in your React state synchronized with the element’s actual open/closed state becomes a nightmare of refs, effects, and imperative showModal() calls. Precisely what React was supposed to eliminate. Rather than adapt their patterns to work with stateful native elements, library authors reimplement the entire behavior in a way that fits the framework. It’s architecturally easier to build a fake dialog in JavaScript than to integrate with the platform’s real one. But the conflict extends beyond architectural preferences. Even when the platform adds features that developers desperately want, frameworks can’t always use them. Accordions? The web has <details> and <summary>. Tooltips? There’s the title attribute and the emerging Popover API. Date pickers? <input type="date">. Custom dropdowns? The web now supports styling <select> elements with appearance: base-select and its associated pseudo-elements. You can even put <option> elements with images inside <select> elements now. It eliminates the need for the countless JavaScript select libraries that exist solely because designers wanted custom styling. Frameworks encourage conditional rendering and component state, so these elements don’t get rendered until JavaScript decides they should exist. The mental model is “UI appears when state changes,” not “UI exists, state controls visibility.” Even when the platform adds the exact features developers have been rebuilding in JavaScript for years, the ecosystem momentum means most developers never learn these features exist. And here’s the truly absurd part: even when developers do know about these new platform features, the frameworks themselves can’t handle them. MDN’s documentation for customizable <select> elements includes this warning: “Some JavaScript frameworks block these features; in others, they cause hydration failures when Server-Side Rendering (SSR) is enabled.” The platform evolved. The HTML parser now allows richer content inside <option> elements. But React’s JSX parser and hydration system weren’t designed for this. They expect <option> to contain only text.
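To ground two of the primitives just mentioned, here is a native accordion and a native popover; no framework, no conditional rendering, no mounting logic (the text content is placeholder):

```html
<!-- Native accordion: semantics and keyboard support built in -->
<details>
  <summary>Shipping information</summary>
  <p>Orders ship within two business days.</p>
</details>

<!-- Popover API: the browser handles toggling and light dismiss -->
<button popovertarget="hint">What's this?</button>
<div id="hint" popover>We only use your email to send receipts.</div>
```

Both exist in the DOM up front and stay hidden until needed: the “UI exists, state controls visibility” model described above.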
Updating the framework to accommodate the platform’s evolution takes time, coordination, and breaking changes that teams are reluctant to make. The web platform added features that eliminate entire categories of JavaScript libraries, but the dominant frameworks can’t use those features without causing hydration errors. The stack that was supposed to make development easier now lags behind the platform it’s built on. The browser has native routing: <a> tags, the History API, forward/back buttons. It has native forms: <form> elements, validation attributes, submit events. These work without JavaScript. They’re accessible by default. They’re fast. Modern frameworks threw them out. React Router, Next.js’s router, Vue Router; they intercept link clicks, prevent browser navigation, and handle routing in JavaScript. Why? Because client-side routing feels like a pure state transition: URL changes, state updates, component re-renders. No page reload. No “lost” JavaScript state. But you’ve now made navigation depend on JavaScript. Ctrl+click to open in a new tab? Broken, unless you carefully re-implement it. Right-click to copy link? The URL might not match what’s rendered. Accessibility tools that rely on standard navigation patterns? Confused. Forms got the same treatment. Instead of letting the browser handle submission, validation, and accessibility, frameworks encourage JavaScript-controlled forms. Formik, React Hook Form, uncontrolled vs. controlled inputs; entire libraries exist to manage what <form> already does. The browser can validate an <input type="email" required> instantly, with no JavaScript. But that’s not reactive enough, so we rebuild validation in JavaScript, ship it to the client, and hope we got the logic right. The web had these primitives. We rejected them because they didn’t fit our FP-inspired mental model of “state flows through JavaScript.” Progressive enhancement used to be a best practice: start with working HTML, layer on CSS for style, add JavaScript for interactivity. The page works at every level. Now, we start with JavaScript and work backwards, trying to squeeze HTML out of our component trees and hoping hydration doesn’t break. We lost built-in accessibility. Native HTML elements have roles, labels, and keyboard support by default. Custom JavaScript widgets require ARIA attributes, focus management, and keyboard handlers. All easy to forget or misconfigure. We lost performance. The browser’s streaming parser can render HTML as it arrives. Modern frameworks send JavaScript, parse JavaScript, execute JavaScript, then finally render. That’s slower. The browser can cache CSS and HTML aggressively. JavaScript bundles invalidate on every deploy. We lost simplicity. <a href> is eight characters. A client-side router is a dependency, a config file, and a mental model. <input type="email" required> is self-documenting. A controlled form with validation is dozens of lines of state management. And we lost alignment with the platform. The browser vendors spend millions optimizing HTML parsing, CSS rendering, and event dispatch. We spend thousands of developer-hours rebuilding those features in JavaScript, slower. This isn’t a story of incompetence. Smart people built these tools for real reasons. By the early 2010s, JavaScript applications had become unmaintainable. jQuery spaghetti sprawled across codebases. Two-way data binding caused cascading updates that were impossible to debug. Teams needed discipline, and functional programming offered it: pure components, immutable state, unidirectional data flow.
For complex, stateful applications (like dashboards with hundreds of interactive components, real-time collaboration tools, data visualization platforms) React’s model was genuinely better than manually wiring up event handlers and tracking mutations. The FP purists weren’t wrong that unpredictable mutation causes bugs. They were wrong that the solution was avoiding the platform’s mutation-friendly APIs instead of learning to use them well. But in the chaos of 2013, that distinction didn’t matter. React worked. It scaled. And Facebook was using it in production. Then came the hype cycle. React dominated the conversation. Every conference had React talks. Every tutorial assumed React as the starting point. CSS-in-JS became “modern.” Client-side rendering became the default. When big companies like Facebook, Airbnb, Netflix, and others adopted these patterns, they became industry standards. Bootcamps taught React exclusively. Job postings required React experience. The narrative solidified: this is how you build for the web now. The ecosystem became self-reinforcing through its own momentum. Once React dominated hiring pipelines and Stack Overflow answers, alternatives faced an uphill battle. Teams that had already invested in React (training developers, building component libraries, establishing patterns) faced enormous switching costs. New developers learned React because that’s what jobs required. Jobs required React because that’s what developers knew. The cycle fed itself, independent of whether React was the best tool for any particular job. This is where we lost the plot. Somewhere in the transition from “React solves complex application problems” to “React is how you build websites,” we stopped asking whether the problems we were solving actually needed these solutions. I’ve watched developers build personal blogs with Next.js: sites that are 95% static content with maybe a contact form, because that’s what they learned in bootcamp. I’ve seen companies choose React for marketing sites with zero interactivity, not because it’s appropriate, but because they can’t hire developers who know anything else. The tool designed for complex, stateful applications became the default for everything, including problems the web solved in 1995 with HTML and CSS. A generation of developers never learned that most websites don’t need a framework at all. The question stopped being “does this problem need React?” and became “which React pattern should I use?” The platform’s native capabilities (progressive rendering, semantic HTML, the cascade, instant navigation) are now considered “old-fashioned.” Reinventing them in JavaScript became “best practices.” We chased functional purity on a platform that was never designed for it. And we built complexity to paper over the mismatch. The good news: we’re learning. The industry is rediscovering the platform. HTMX embraces HTML as the medium of exchange. Server sends HTML, browser renders it, no hydration needed. Qwik’s resumable architecture avoids hydration entirely, serializing only what’s needed. Astro defaults to server-rendered HTML with minimal JavaScript. Remix and SvelteKit lean into web standards: forms that work without JS, progressive enhancement, leveraging the browser’s cache. These tools acknowledge what the web is: a document-based platform with powerful native capabilities. Instead of fighting it, they work with it. This doesn’t mean abandoning components or reactivity.
It means recognizing that UI = f(state) is a useful model inside your framework, not a justification to rebuild the entire browser stack. It means using CSS for styling, native events for interactions, and HTML for structure, and then reaching for JavaScript when you need interactivity beyond what the platform provides. The best frameworks of the next decade will be the ones that succeed because they feel like the web, not in spite of it. In chasing functional purity, we built a frontend stack that is more complex, more fragile, and less aligned with the platform it runs on. We recreated CSS in JavaScript, events in synthetic wrappers, rendering in hydration layers, and routing in client-side state machines. We did this because we wanted predictability, control, and clean abstractions. But the web was never meant to be pure. It’s a sprawling, messy, miraculous platform built on decades of emergent behavior, pragmatic compromises, and radical openness. Its mutability isn’t a bug. It’s the reason a document written in 1995 still renders in 2025. Its global scope isn’t dangerous. It’s what lets billions of pages share a design language. Maybe the web didn’t need to be purified. Maybe it just needed to be understood. I want to thank my friend Ihab Khattab for reviewing this piece and providing invaluable feedback.

Manuel Moreale 1 month ago

Blake Watson

This week on the People and Blogs series we have an interview with Blake Watson, whose blog can be found at blakewatson.com. Tired of RSS? Read this in your browser or sign up for the newsletter. The People and Blogs series is supported by Jaga Santagostino and the other 121 members of my "One a Month" club. If you enjoy P&B, consider becoming one for as little as 1 dollar a month. Sure! I’m Blake. I live in a small city near Jackson, Mississippi, USA. I work for MRI Technologies as a frontend engineer, building bespoke web apps for NASA. Previously I worked at an ad agency as an interactive designer. I have a neuromuscular condition called spinal muscular atrophy (SMA). It's a progressive condition that causes my muscles to become weaker over time. Because of that, I use a power wheelchair and a whole host of assistive technologies big and small. I rely on caregivers for most daily activities like taking a shower, getting dressed, and eating—just to name a few. I am able to use a computer on my own. I knew from almost the first time I used one that it was going to be important in my life. I studied Business Information Systems in college as a way to take computer-related courses without all the math of computer science (which scared me at the time). When I graduated, I had a tough time finding a job making websites. I did a bit of freelance work and volunteer work to build up a portfolio, but was otherwise unemployed for several years. I finally got my foot in the door and I recently celebrated a milestone of being employed for a decade. When I'm not working, I'm probably tinkering on side projects. I'm somewhat of a side project and home-cooked app enthusiast. I just really enjoy making and using my own tools. Over the last 10 years, I've gotten into playing Dungeons and Dragons and a lot of my side projects have been related to D&D. I enjoy design, typography, strategy games, storytelling, writing, programming, gamedev, and music. I got hooked on making websites in high school and college in the early 2000s. A friend of mine in high school had a sports news website. I want to say it was made with the Homestead site builder or something similar. I started writing for it and helping with it. I couldn’t get enough so I started making my own websites using WYSIWYG page builders. But I became increasingly frustrated with the limitations of page builders. Designing sites felt clunky and I couldn’t get elements to do exactly what I wanted them to do. I had a few blogs on other services over the years. Xanga was maybe the first one. Then I had one on Blogger for a while. In 2005, I took a course called Advanced Languages 1. It turned out to be JavaScript. Learning JavaScript necessitated learning HTML. Throughout the course I became obsessed with learning HTML, CSS, and JavaScript. Eventually, in August of 2005—twenty years ago—I purchased the domain blakewatson.com. I iterated on it multiple times a year at first. It morphed from quirky design to quirkier design as I learned more CSS. It was a personal homepage, but I blogged at other services. Thanks to RSS, I could list my recent blog posts on my website. When I graduated from college, my personal website became more of a web designer's portfolio, a professional site that I would use to attract clients and describe my services. But around that time I was learning how to use WordPress and I started a self-hosted WordPress blog called I hate stairs.
It was an extremely personal disability-related and life journaling type of blog that I ran for several years. When I got my first full-time position and didn't need to freelance any longer, I converted blakewatson.com back into a personal website. But this time, primarily a blog. I discontinued I hate stairs (though I maintain an archive and all the original URLs work). I had always looked up to various web designers in the 2000s who had web development related blogs. People like Jeffrey Zeldman, Andy Clarke, Jason Santa Maria, and Tina Roth Eisenberg. For the past decade, I've blogged about web design, disability, and assistive tech—with the odd random topic here or there. I used to blog only when inspiration struck me hard enough to jolt my lazy ass out of whatever else I was doing. That strategy left me writing three or four articles a year (I don’t know why, but I think of my blog posts as articles in a minor publication, and this hasn’t helped me do anything but self-edit—I need to snap out of it and just post). In March 2023, however, I noticed that I had written an article every month so far that year. I decided to keep up the streak. And ever since then, I've posted at least one article a month on my blog. I realize that isn't very frequent for some people, but I enjoy that pacing, although I wouldn't mind producing a handful more per year. Since I'm purposefully posting more, I've started keeping a list of ideas in my notes just so I have something to look through when it's time to write. I use Obsidian mostly for that kind of thing. The writing itself almost always happens in iA Writer. This app is critical to my process because I am someone who likes to tinker with settings and fonts and pretty much anything I can configure. If I want to get actual writing done, I need constraints. iA Writer is perfect because it looks and works great by default and has very few formatting options. I think I paid $10 for this app one time ten or more years ago. That has to be the best deal I've ever gotten on anything. I usually draft in Writer and then preview it on my site locally to proofread. I have to proofread on the website, in the design where it will live. If I proofread in the editor I will miss all kinds of typos. So I pop back and forth between the browser and the editor fixing things as I go. I can no longer type on a physical keyboard. I use a mix of onscreen keyboard and dictation when writing prose. Typing is a chore and part of the reason I don’t blog more often. It usually takes me several hours to draft, proofread, and publish a post. I mostly need to be at my desk because I have my necessary assistive tech equipment set up there. I romanticize the idea of writing in a comfy nook or at a cozy coffee shop. I've tried packing up my setup and taking it to a coffee shop, but in practice I get precious little writing done that way. What I usually do to get into a good flow state is put on my AirPods Pro, turn on noise cancellation, maybe have some ambient background noise or music, and just write. Preferably while sipping coffee or soda. But if I could have any environment I wanted, I would be sitting in a small room by a window a few stories up in a quaint little building from the game Townscaper, clacking away on an old typewriter or scribbling in a journal with a Parker Jotter. I've bounced around a bit in terms of tech stack, but in 2024, I migrated from a self-hosted WordPress site to a generated static site with Eleventy.
My site is hosted on NearlyFreeSpeech.NET (NFSN)—a shared hosting service I love for its simplistic homemade admin system and powerful VPS-like capabilities. My domain is registered with them as well, although I’m letting Cloudflare handle my DNS for now. I used Eleventy for the first time in 2020 and became a huge fan. I was stoked to migrate blakewatson.com. The source code is in a private repo on GitHub. Whenever I push to the main branch, DeployHQ picks it up and deploys it to my server. I also have a somewhat convoluted setup that checks for social media posts and displays them on my website by rebuilding and deploying automatically whenever I post. It's more just a way for me to have an archive of my posts on Mastodon than anything. Because my website is so old, I have some files not in my repo that live on my server. It is somewhat of a sprawling living organism at this point, with various small apps and tools (and even games!) deployed to sub-directories. I have a weekly scheduled task that runs and saves the entire site to Backblaze B2. You know, I'm happy to say that I'd mostly do the same thing. I think everyone should have their own website. I would still choose to blog at my own domain name. Probably still a static website. I might structure things a bit differently. If I were designing it now, I might make more allowances for title-less, short posts (technically I can do this now, but they get lumped into my social feed, which I'm calling my Microblog, and kind of get lost). I might design it to be a little weirder rather than buttoned up as it is now. And hey, it's my website. I still might do that. Tinkering with your personal website is one of life's great joys. If you can't think of anything to do with your website, here are a hundred ideas for you. I don't make money from my website directly, but having a website was critical in getting my first job and getting clients before that. So, in a way, all the money I've made working could be attributed to having a personal website. I have a lot of websites and a lot of domains, so it's a little hard to figure out exactly what blakewatson.com itself costs. NFSN is a pay-as-you-go service. I'm currently hosting 13 websites of varying sizes and complexity, and my monthly cost aside from domains is about $23.49. $5 of that is an optional support membership. I could probably get the cost down further by putting the smaller sites together on a single shared server. I pay about $14 per year for the domain these days. I pay $10.50 per month for DeployHQ, but I use it for multiple sites including a for-profit side project, so it doesn’t really cost anything to use it for my blog (this is the type of mental gymnastics I like to do). I pay $15 per month for Fathom Analytics. In my mind, this is also subsidized by my for-profit side project. I mentioned that I back up my website to Backblaze B2. It's extremely affordable, and I think I'm paying below 50 cents per month currently for the amount of storage I'm using (and that also includes several websites). If you also throw in the cost of tools like Tower and Sketch, then there's another $200 worth of costs per year. But I use those programs for many things other than my blog. When you get down to it, blogs are fairly inexpensive to run when they are small and personal like mine. I could probably get the price down to free, save for the domain name, if I wanted to use something like Cloudflare Pages to host it—or maybe a free blogging service.
I don't mind people monetizing their blogs at all. I mean if it's obnoxious then I'm probably not going to stay on your website very long. But if it's done tastefully with respect to the readers then good for you. I also don't mind paying to support bloggers in some cases. I have a number of subscriptions for various people to support their writing or other creative output.

Here are some blogs I'm impressed with in no particular order. Many of these people have been featured in this series before.

Chris Coyier. "Mediocre ideas, showing up, and persistence." <3
Jim Nielsen. Continually produces smart content. Don't know how he does it.
Nicole Kinzel. She has posted nearly daily for over two years capturing human struggles and life with SMA through free verse poetry.
Dave Rupert. I enjoy the balance of tech and personal stuff and the honesty of the writing.
Tyler Sticka. His blog is so clean you could eat off of it. A good mix of tech and personal topics. Delightful animations.
Maciej Cegłowski. Infrequent and longform. Usually interesting regardless of whether I agree or disagree.
Brianna Albers. I’m cheating because this is a column and not a blog per se. But her writing reads like a blog—it's personal, contemplative, and compelling. There are so very few representations of life with SMA online that I'd be remiss not to mention her.
Daring Fireball. A classic blog I’ve read for years. Good for Apple news but also interesting finds in typography and design.
Robb Knight. To me, Robb’s website is the epitome of the modern indieweb homepage. It’s quirky, fun, and full of content of all kinds. And that font. :chefskiss:
Katherine Yang. A relatively new blog. Beautiful site design. Katherine's site feels fresh and experimental and exudes humanity.

I'd like to take this opportunity to mention Anne Sturdivant. She was interviewed here on People & Blogs. When I first discovered this series, I put her blog in the suggestion box. I was impressed with her personal website empire and the amount of content she produced. Sadly, Anne passed away earlier this year. We were internet buddies and I miss her. 💜

I'd like to share a handful of my side projects for anyone who might be interested.

HTML for People. I wrote this web book for anyone who is interested in learning HTML to make websites. I wrote this to be radically beginner-friendly. The focus is on what you can accomplish with HTML rather than dwelling on a lot of technical information.
A Fine Start. This is the for-profit side project I mentioned. It is a new tab page replacement for your web browser. I originally made it for myself because I wanted all of my favorite links to be easily clickable from every new tab. I decided to turn it into a product. The vast majority of the features are free. You only pay if you want to automatically sync your links with other browsers and devices.
Minimal Character Sheet. I mentioned enjoying Dungeons and Dragons. This is a web app for managing a D&D 5th edition character. I made it to be a freeform digital character sheet. It's similar to using a form fillable PDF, except that you have a lot more room to write. It doesn't force many particular limitations on your character since you can write whatever you want. Totally free.

Now that you're done reading the interview, go check the blog and subscribe to the RSS feed. If you're looking for more content, go read one of the previous 109 interviews. Make sure to also say thank you to Bomburache and the other 121 supporters for making this series possible.

fLaMEd fury 1 month ago

She Likes Listening To Punk Rock

What’s going on, Internet? September’s been a month of noise, nostalgia, and ferry rides. It kicked off with Minuit at Double Whammy, their reunion tour finale and easily one of the best (or only?) nights out I’ve had in ages. Great company, great tunes, and a dance floor that felt straight out of 2005, just with a crowd twenty years older, lol. The next morning I went full fangirl and stacked my Bandcamp cart with Minuit’s entire back catalogue, along with some Fur Patrol for balance. Then it was time for the London Hardhouse Reunion 2025. My friends came up for the weekend and we had an amazing time, all bass and hoovers, with a bunch of my favourite DJs, acts I’d normally catch spread across a whole year, all playing the same gig on the same night. The kind of night that leaves your ears ringing (yes, I wore my ear plugs) and the tunes stuck in your head for days. We rounded the month off a little slower with a family weekend on Waiheke, swapping the inside of clubs for beaches, markets, and fish and chips. Between the gigs and the music, I somehow found time to catch up on TV. One new show that crossed my radar was The Runarounds; a perfect binge watch. Fun, short, and chaotic in all the right ways. I wonder if I should share more on the shows I watch and find enjoyable? I also went digging into lost media for Aotearoa’s lost emo banger, a little dive into what happens when labels dropped the ball in the transition to digital and how local libraries can quietly save the music. Plenty of good tunes, good people, and good times this month 🎶

Read four books this month, all enjoyable and worth reading:

A Beautiful Family by Jennifer Trevelyan
A Different Kind of Power by Jacinda Ardern
Toxic by Sarah Ditum
Glass Barbie by Michael Botur

New records added to the collection:

Christina Perri - A Lighter Shade Of Blue
Zara Larsson - Honor The Light
Mimi Webb - Amelia
Anne-Marie - Unhealthy
Minuit - 88

We’ve wrapped up raiding for the season, and the expansion. We spent the last couple of weeks dipping into the first couple of mythic bosses in Mana Forge Omega and we easily got two of them down. The Legion Remix is going live this week. I will play enough to get the mounts and armour sets I need and then give it a break. Rumours on the street are that we’ll be seeing the next expansion, Midnight, as early as February. With Warcraft out of the way, I’m looking forward to getting back into Cyberpunk and finishing up some more story lines.

After last month’s ball-dropping I’m back with more exciting links from across the web. There’ll definitely be something of interest in here for you:

Why I love to read your blog - Sylvia’s Noodling Nook. Sylvia shares all the reasons why they like reading your website 🫶
blog comments - Jayeless.net. Jessica Smith goes deep into blog comments.
Breaking Free from Social Media Silos. Luke discovers the indie web and discusses the struggle of being out of touch when it seems like most of society exists on Facebook rather than the web.
Bringing Back the Blogroll. Luke wonders about the blogroll and ends up slapping it on the homepage sidebar after some inspiration.
You can now attach 10,000 character blogs to your Threads posts. Sounds like a wonderful idea. Inb4 people invest their life’s work into this platform and lose everything, lol.
Do blogs need to be so lonely? - The History of the Web. We used to do this back in the day; I want to reflect on this deeper in its own post soon.
Curate your own newspaper with RSS. Molly White wants us to escape newsletter inbox chaos and algorithmic surveillance by building our own enshittification-proof newspaper from the writers we already read.
Why you should get (back) into RSS curation. Another take on rediscovering RSS as a way to take control of what you read online, curating a personal, intentional feed instead of relying on algorithm-driven platforms.
Just Put It On Your Blog. Shellsharks reminds us to stop overthinking where content belongs and just publish it on our own blogs, embracing the spirit of the independent web.
The internet’s hidden creative renaissance (and how to find it). Shame it’s on Substack, but it explores the growing revival of the handmade web, where personal websites push back against the corporate internet.
The Internet Feels Broken | Stephanie Vee. Stephanie reflects on how today’s internet feels broken under the weight of corporate platforms, and argues for reclaiming the web through personal websites and blogging.
Personal blogs are the best, I love yours and I’ll try and tell you why - Nothing Original Here. Peter shares a post appreciating personal blogs for their honesty and connection, and why they matter more than social media.
bstn - RSS manifesto. An RSS manifesto arguing for a return to open web standards and personal curation instead of algorithm-driven feeds.
The HTML Hobbyist. A personal website by HTML hobbyist, Nathan, sharing simple HTML, CSS, and RSS tutorials based on courses they taught at Berkley Computer Training between 1997 and 2003.
Sanding off friction from indie web connection – Tracy Durnell’s Mind Garden. Tracy Durnell looks at how indie web tools can be made easier to use, lowering the barriers for people to connect through their own websites.
Why I gave the world wide web away for free | Tim Berners-Lee. Tim Berners-Lee explains why he released the Web into the public domain, and why we must reclaim it from exploitative platforms and re-centre individual control.
Understanding, not slop, is what’s interesting about LLMs - blakewatson.com. Blake Watson argues that the real value of LLMs isn’t in the flood of AI-generated content, but in understanding how they work to simplify human-computer interaction.

Check back next month, and if you want more in the meantime, dive into the archive.

On the site side of things, September was a good month for tidying and tinkering. I started by revamping the Bookmarks page, which is now fully tagged and easier to browse, and I split the Blogroll off from the Links page so each has its own proper home. There’s been a bit of chatter in my small web circles recently, and I have a post drafted that I want to share soon. I built out a new Blog Stats page using Robb’s PostGraph to visualise my posting frequency. Then, to round things off, I gave my RSS and Atom feeds a glow-up. They’re now styled with an XSL transformation and integrated into the fLaMEd fury design system. The Feeds page itself got an update to clearly show all the feeds available. All this work was inspired by Robb’s setup, which I pretty much jacked. Thanks Robb.

Weird Web October is happening for the second year. I won’t be taking part (I know if I committed, I’d quickly fail). If you are more inclined and creative than I am and decide to take part, come hang out in the Weird Web October thread over on the forums.

This post was brought to you by Verona by Elemeno P.

Sweeet, catch you, laterz 👋

Hey, thanks for reading this post in your feed reader! Want to chat? Reply by email or add me on XMPP, or send a webmention. Check out the posts archive on the website.

Lea Verou 1 month ago

In the economy of user effort, be a bargain, not a scam

One of my favorite product design principles is Alan Kay’s “Simple things should be simple, complex things should be possible”. [1] I had been saying it almost verbatim long before I encountered Kay’s quote. Kay’s maxim is deceptively simple, but its implications run deep. It isn’t just a design ideal — it’s a call to continually balance friction, scope, and tradeoffs in service of the people using our products. This philosophy played a big part in Prism’s success back in 2012, helping it become the web’s de facto syntax highlighter for years, with over 2 billion npm downloads. Highlighting code on a page required including just two files. No markup changes. Styling used readable CSS class names. Even adding new languages — the most common “complex” use case — required far less knowledge and effort than alternatives. At the same time, Prism exposed a deep extensibility model so plugin authors could patch internals and dramatically alter behavior. These choices are rarely free. The friendly styling API increased clash risk, and deep extensibility reduced encapsulation. These were conscious tradeoffs, and they weren’t easy. Simple refers to use cases that are simple from the user’s perspective, i.e. the most common use cases. They may be hard to implement, and interface simplicity is often inversely correlated with implementation simplicity. And which things are complex depends on product scope. Instagram’s complex cases are vastly different from Photoshop’s complex cases, but as long as there is a range, Kay’s principle still applies. Since Alan Kay was a computer scientist, his quote is typically framed as a PL or API design principle, but that sells it short. It applies to a much, much broader class of interfaces. This class hinges on the distribution of use cases. Products often cut scope by identifying the ~20% of use cases that drive ~80% of usage — aka the Pareto Principle. Some products, however, have such diverse use cases that Pareto doesn’t meaningfully apply to the product as a whole. There are common use cases and niche use cases, but no clean 20-80 split. The long tail of niche use cases is so numerous that it becomes significant in aggregate. For lack of a better term, I’ll call these long‑tail UIs. Nearly all creative tools are long-tail UIs. That’s why Kay’s maxim works so well for programming languages and APIs — both are types of creative interfaces. But so are graphics editors, word processors, spreadsheets, and countless other interfaces that help humans create artifacts — even some you would never describe as creative. Yes, programming languages and APIs are user interfaces. If this surprises you, watch my DotJS 2024 talk titled “API Design is UI Design”. It’s only 20 minutes, but covers a lot of ground, including some of the ideas in this post. I include both code and GUI examples to underscore this point; if the API examples aren’t your thing, skip them and the post will still make sense. You wouldn’t describe Google Calendar as a creative tool, but it is a tool that helps humans create artifacts (calendar events). It is also a long-tail product: there is a set of common, conceptually simple cases (one-off events at a specific time and date), and a long tail of complex use cases (recurring events, guests, multiple calendars, timezones, etc.). Indeed, Kay’s maxim has clearly been used in its design. The simple case has been so optimized that you can literally add a one-hour calendar event with a single click (using a placeholder title).
A different duration can be set after that first click through dragging [2]. But almost every edge case is also catered to — with additional user effort. Google Calendar is also an example of an interface that digitally encodes real life, demonstrating that complex use cases are not always power user use cases. Often, the complexity is driven by life events. E.g. your taxes may be complex without you being a power user of tax software, and your family situation may be unusual without you being a power user of every form that asks about it. The Pareto Principle is still useful for individual features, as they tend to be more narrowly defined. E.g. there is a set of spreadsheet formulas (actually much smaller than 20%) that drives >80% of formula usage. While creative tools are the poster child of long-tail UIs, there are long-tail components in many transactional interfaces such as e-commerce or meal delivery (e.g. result filtering & sorting, product personalization interfaces, etc.). Filtering UIs are another big category of long-tail UIs, and they involve so many tradeoffs and tough design decisions you could literally write a book about just them. Airbnb’s filtering UI, for instance, is definitely making an effort to make simple things easy with (personalized! 😍) shortcuts and complex things possible via more granular controls. Picture a plane with two axes: the horizontal axis being the complexity of the desired task (again from the user’s perspective, nothing to do with implementation complexity), and the vertical axis the cognitive and/or physical effort users need to expend to accomplish their task using a given interface. Following Kay’s maxim guarantees these two points: simple tasks sit at the low-effort end, and complex tasks remain reachable at the other. But even if we get these two points — what about all the points in between? There are a ton of different ways to connect them, and they produce vastly different overall user experiences. How does your interface fare when a use case is only slightly more complex? Are users yeeted into the deep end of interface complexity (bad), or do they only need to invest a proportional, incremental amount of effort to achieve their goal (good)? Meet the complexity-to-effort curve, the most important usability metric you’ve never heard of. For delightful user experiences, making simple things easy and complex things possible is not enough — the transition between the two should also be smooth. You see, simple use cases are the spherical cows of product design. They work great for prototypes to convince stakeholders, or in marketing demos, but the real world is messy. Most artifacts that users need to create to achieve their real-life goals rarely fit into your “simple” flows completely, no matter how well you’ve done your homework. They are mostly simple — with a liiiiitle wart here and there. For a long-tail interface to serve user needs well in practice, we also need to design the curve, not just its endpoints. A model with surprising predictive power is to treat user effort as a currency that users are spending to buy solutions to their problems. Nobody likes paying it; in an ideal world software would read our mind and execute perfectly with zero user effort. Since we don’t live in such a world, users are typically willing to pay more in effort when they feel their use case warrants it. Just like regular pricing, actual user experience often depends more on the relationship between cost and expectation (budget) than on the absolute cost itself. If you pay more than you expected, you feel ripped off.
You may still pay it because you need the product in the moment, but you’ll be looking for a better deal in the future. And if you pay less than you expected, you feel like you got a bargain, with all the delight and loyalty that entails. Incremental user effort cost should be proportional to incremental value gained. Suppose you’re ordering pizza. You want a simple cheese pizza with ham and mushrooms. You use the online ordering system, and you notice that adding ham to your pizza triples its price. We’re not talking some kind of fancy ham where the pigs were fed on caviar and bathed in champagne, just a regular run-of-the-mill pizza topping. You may still order it if you’re starving and no other options are available, but how does it make you feel? It’s not that different when the currency is user effort. The all too familiar “But I just wanted to _________, why is it so hard?”. When a slight increase in complexity results in a significant increase in user effort cost, we have a usability cliff. Usability cliffs make users feel resentful, just like the customers of our fictitious pizza shop. Usability cliffs are very common in products that make simple things easy and complex things possible through entirely separate flows with no integration between them: a super high level one that caters to the most common use case with little or no flexibility, and a very low-level one that is an escape hatch: it lets users do whatever, but they have to recreate the solution to the simple use case from scratch before they can tweak it. Take the native HTML video player. Simple things are certainly easy: all we need to get a video with a nice sleek set of controls that work well on every device is a single attribute: controls. We just slap it on our <video> element and we’re done with a single line of HTML: <video src="movie.mp4" controls></video>. Now suppose use case complexity increases just a little. Maybe I want to add buttons to jump 10 seconds back or forwards. Or a language picker for subtitles. Or just to hide the volume control on a video that has no audio track. None of these are particularly niche, but the default controls are all-or-nothing: the only way to change them is to reimplement the whole toolbar from scratch, which takes hundreds of lines of code to do well. Simple things are easy and complex things are possible. But once use case complexity crosses a certain (low) threshold, user effort abruptly shoots up. That’s a usability cliff. For Instagram’s photo editor, the simple use case is canned filters, whereas the complex ones are those requiring tweaking through individual low-level controls. However, they are implemented as separate flows: you can tweak the filter’s intensity, but you can’t see or adjust the primitives it’s built from. You can layer both types of edits on the same image, but they are additive, which doesn’t work well. Ideally, the two panels would be integrated, so that selecting a filter would adjust the low-level controls accordingly, which would facilitate incremental tweaking AND would serve as a teaching aid for how filters work. My favorite end-user facing product that gets this right is Coda, a cross between a document editor, a spreadsheet, and a database. All over its UI, it supports entering formulas instead of raw values, which makes complex things possible. To make simple things easy, it also provides the GUI you’d expect even without a formula language.
User actions that meaningfully communicate intent to the interface are signal. Any other step users need to take to accomplish their goal is noise. This includes communicating the same input more than once, providing input separately that could be derived from other input with complete or high certainty, transforming input from their mental model to the interface’s mental model, and any other demand for user effort that does not serve to communicate new information about the user’s goal.

Some noise is unavoidable. The only way to have a 100% signal-to-noise ratio would be if the interface could read minds. But too much noise increases friction and obfuscates signal.

A short yet demonstrative example is the web platform’s methods for programmatically removing an element from the page. To signal intent in this case, the user needs to communicate two things: (a) what they want to do (remove an element), and (b) which element to remove. Anything beyond that is noise.

The modern DOM method, `element.remove()`, has an extremely high signal-to-noise ratio. It’s hard to imagine a more concise way to signal intent. However, the older `removeChild()` method that it replaced had much worse ergonomics. It required two parameters: the element to remove, and its parent. But the parent is not a separate source of truth — it would always be the child node’s parent! As a result, its actual usage involved boilerplate, where developers had to write the much noisier `element.parentNode.removeChild(element)` [3] . Boilerplate is repetitive code that users need to include without thought, because it does not actually communicate intent. It’s the software version of red tape: hoops you need to jump through to accomplish your goal, that serve no obvious purpose in furthering said goal except for the fact that they are required. In this case, the amount of boilerplate may seem small, but when viewed as a percentage of the total amount of code, the difference is staggering. The exact ratio (81% vs 20% here) varies based on specifics such as variable names, but when the difference is meaningful, it transcends these types of low-level details.

Of course, the boilerplate was usually encapsulated in utility functions, which provided a similar signal-to-noise ratio as the modern method. However, user-defined abstractions don’t come for free; there is an effort (and learnability) tax there, too.
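Side by side, with the kind of wrapper developers actually wrote (the helper’s name is illustrative):

```js
// Old: the parent parameter carries no new intent;
// it is always the element's own parent.
element.parentNode.removeChild(element);

// New: pure signal. What to do, and to what.
element.remove();

// The old API's noise was typically hidden behind user-defined
// helpers like this one, which carry their own learnability tax:
function removeElement(element) {
  if (element.parentNode) element.parentNode.removeChild(element);
}
```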
Improving signal-to-noise ratio is also why the front-end web industry gravitated towards component architectures: they increase signal-to-noise ratio by encapsulating boilerplate. As an exercise for the reader, try to calculate the signal-to-noise ratio of a Bootstrap accordion (or any other complex Bootstrap component).

Users are much more vocal about things not being possible than about things being hard.

When pointing out friction issues in design reviews, I have sometimes heard “users have not complained about this”. This reveals a fundamental misunderstanding about the psychology of user feedback. Users are much more vocal about things not being possible than about things being hard. The reason becomes clear if we look at the neuroscience of each.

Friction is transient in working memory (prefrontal cortex). After completing a task, details fade. The negative emotion persists and accumulates, but filing a complaint requires prefrontal engagement that is brief or absent. Users often can’t articulate why the software feels unpleasant: the specifics vanish; the feeling remains. Hard limitations, on the other hand, persist as conscious appraisals. The trigger doesn’t go away, since there is no workaround, so it’s far more likely to surface in explicit user feedback. Both types of pain points cause negative emotions, but friction is primarily processed by the limbic system (emotion), whereas hard limitations remain in the prefrontal cortex (reasoning). This also means that when users finally do reach the breaking point and complain about friction, you’d better listen.

Second, user complaints are filed when there is a mismatch in expectations. Things are not possible but the user feels they should be, or interactions cost more user effort than the user had budgeted, e.g. because they know that a competing product offers the same feature for less (work). Often, users have been conditioned to expect poor user experiences, either because all options in the category are high friction, or because the user is too novice to know better [4] . So they begrudgingly pay the price, and don’t think they have the right to complain, because it’s just how things are.
You might ask, “If all competitors are equally high-friction, how does this hurt us?” An unmet need is a standing invitation to disruption that a competitor can exploit at any time. Because you’re not only competing within a category; you’re competing with all alternatives — including nonconsumption (see Jobs‑to‑be‑Done). Even for retention, users can defect to a different category altogether (e.g., building native apps instead of web apps).

Historical examples abound. When it comes to actual currency, a familiar example is Airbnb: until it came along, nobody would complain that a hotel of average price was expensive — it was just the price of hotels. If you couldn’t afford it, you just couldn’t afford to travel, period. But once Airbnb showed there was a cheaper alternative to hotels as a whole, tons of people jumped ship. It’s no different when the currency is user effort. Stripe took the payment API market by storm when it demonstrated that payment APIs did not have to be so high friction. The iPhone disrupted the smartphone market when it demonstrated that no, you did not have to be highly technical to use a smartphone. The list goes on.

Unfortunately, friction is hard to instrument. With good telemetry you can detect specific issues (e.g., dead clicks), but there is no KPI to measure friction as a whole. And no, NPS isn’t it — and you’re probably using it wrong anyway. Instead, the emotional residue from friction quietly drags many metrics down (churn, conversion, task completion), sending teams in circles like blind men touching an elephant. That’s why dashboards must be paired with product vision and proactive, first‑principles product leadership. Steve Jobs exemplified this posture: proactively, aggressively eliminating friction presented as “inevitable”. He challenged unnecessary choices, delays, and jargon, without waiting for KPIs to grant permission. Do mice really need multiple buttons? Does installing software really need multiple steps? Do smartphones really need a stylus? Of course, this worked because he had the authority to protect the vision; most orgs need explicit trust to avoid diluting it.

So, if there is no metric for friction, how do you identify it? And once identified, reducing it rarely comes for free just because someone had a good idea. Such cases do exist, and they are great, but it usually takes sacrifices. And without it being an organizational priority, it’s very hard to steer these tradeoffs in that direction.

The most common tradeoff is implementation complexity. Simplifying user experience is usually a process of driving complexity inwards and encapsulating it in the implementation. Explicit, low-level interfaces are far easier to implement, which is why there are so many of them. Especially as deadlines loom, engineers will often push towards externalizing complexity into the user interface, so that they can ship faster. And if Product leans more data-driven than data-informed, it’s easy to look at customer feedback and conclude that what users need is more features (it’s not).

Consider the classic example of two faucet designs: one with separate hot and cold taps, and a mixer that controls temperature and pressure. The first faucet is a thin abstraction: it exposes the underlying implementation directly, passing the complexity on to users, who now need to do their own translation of temperature and pressure into amounts of hot and cold water. It prioritizes implementation simplicity at the expense of wasting user effort. The second design prioritizes user needs and abstracts the underlying implementation to support the user’s mental model. It provides controls to adjust the water temperature and pressure independently, and internally translates them to the amounts of hot and cold water. This interface sacrifices some implementation simplicity to minimize user effort.
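A toy sketch of that inward-driven complexity, under a deliberately simplified linear mixing model (the supply temperatures are illustrative):

```js
// Thin abstraction: users do the mixing in their heads.
function thinFaucet(hotAmount, coldAmount) {
  return { hot: hotAmount, cold: coldAmount };
}

// User-centered abstraction: speak in temperature and pressure,
// translate to hot/cold amounts internally.
const HOT = 60, COLD = 10; // supply temperatures in °C (illustrative)

function mixerFaucet(temperature, pressure) {
  // Fraction of hot water needed to hit the target temperature.
  const hotFraction = (temperature - COLD) / (HOT - COLD);
  return { hot: pressure * hotFraction, cold: pressure * (1 - hotFraction) };
}

mixerFaucet(38, 1.0); // → { hot: 0.56, cold: 0.44 }
```

The second function is more code and more thinking for the implementor, which is exactly the point: the complexity moved inwards.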
This is why I’m skeptical of blanket calls for “simplicity”: they are platitudes. Everyone agrees that, all else being equal, simpler is better. It’s the tradeoffs between different types of simplicity that are tough.

In some cases, reducing friction even carries tangible financial risks, which makes leadership buy-in crucial. This kind of tradeoff cannot be made by individual designers — it requires usability as a priority to trickle down from the top of the org chart. The Oslo airport train ticket machine is the epitome of a high signal-to-noise interface. You simply swipe your credit card to enter, and you swipe your card again as you leave the station at your destination. That’s it. No choices to make. No buttons to press. No ticket. You just swipe your card and you get on the train. Today this may not seem radical, but back in 2003, it was groundbreaking. To be able to provide such a frictionless user experience, they had to make a financial tradeoff: the machine does not ask for a PIN code, which means the company would need to simply absorb the financial losses from fraudulent charges (stolen credit cards, etc.).

When user needs are prioritized at the top, it helps to cement that priority as an organizational design principle to point to when these tradeoffs come along in the day-to-day. Having a design principle in place will not instantly resolve all conflict, but it helps turn conflict about priorities into conflict about whether an exception is warranted, or whether the principle is applied correctly, both of which are generally easier to resolve. Of course, for that to work, everyone needs to be on board with the principle. But here’s the thing with design principles (and most principles in general): they often seem obvious in the abstract, so it’s easy to get alignment in the abstract. It’s when the abstract becomes concrete that it gets tough.

The Web Platform has its own version of this principle, called the Priority of Constituencies: “User needs come before the needs of web page authors, which come before the needs of user agent implementors, which come before the needs of specification writers, which come before theoretical purity.”

This highlights another key distinction. It’s more nuanced than users over developers; a better framing is consumers over producers. Developers are just one type of producer. The web platform has multiple tiers of producers: web developers produce sites for end users, browser implementors produce browsers for web developers, and specification writers produce specs for implementors. Even within the same tier there are producer vs consumer dynamics. When it comes to web development libraries, the web developers who write them are producers and the web developers who use them are consumers. This distinction also comes up in extensible software, where plugin authors are still consumers when it comes to the software itself, but producers when it comes to their own plugins. It also comes up in dual-sided marketplace products (e.g. Airbnb, Uber, etc.), where buyer needs are generally higher priority than seller needs.

In the economy of user effort, the antithesis of overpriced interfaces that make users feel ripped off is interfaces where every bit of user effort required feels meaningful and produces tangible value for them. The interface is on the user’s side, gently helping them along with every step, instead of treating their time and energy as disposable.
The user feels like they’re getting a bargain: they get to spend less than they had budgeted for! And we all know how motivating a good bargain is.

User effort bargains don’t have to be radical innovations; don’t underestimate the power of small touches. A zip code input that auto-fills city and state, a web component that automatically adapts to its context without additional configuration, a pasted link that automatically defaults to the website title (or the selected text, if any), a freeform date that is correctly parsed into structured data, a login UI that remembers whether you have an account and which service you’ve used to log in before, an authentication flow that takes you back to the page you were on before. Sometimes many small things can collectively make a big difference. In some ways, it’s the polar opposite of death by a thousand paper cuts: life by a thousand sprinkles of delight! 😀

In the end, “simple things simple, complex things possible” is table stakes. The key differentiator is the shape of the curve between those points. Products win when user effort scales smoothly with use case complexity, cliffs are engineered out, and every interaction declares a meaningful piece of user intent. That doesn’t just happen by itself. It involves hard tradeoffs, saying no a lot, and prioritizing user needs at the organizational level. Treating user effort like real money forces you to design with restraint. A rule of thumb: place the pain where it’s best absorbed, by prioritizing consumers over producers. Do this consistently, and the interface feels delightful in a way that sticks. Delight turns into trust. Trust into loyalty. Loyalty into product-market fit.

[1] Kay himself replied on Quora and provided background on this quote. Don’t you just love the internet?
[2] Yes, typing can be faster than dragging, but minimizing homing between input devices improves efficiency more; see KLM.
[3] Yes, today it would have been `element.parentNode?.removeChild(element)`, which is a little less noisy, but this was before the optional chaining operator.
[4] When I was running user studies at MIT, I often had users exclaim: “I can’t believe it! I tried to do the obvious simple thing and it actually worked!”

Manuel Moreale 1 month ago

New site, kinda

If you’re reading this blog using RSS or via email (when I remember to send the content via email), you likely didn’t notice it. And if you’re reading my blog in the browser but are not a sharp observer, chances are you also didn’t notice it. A new version of my site is live.

At first glance, not much has changed. The typeface is still the same—love you, Iowan—the layout is still the same, the colours are still the same. For the most part, the site should still feel pretty much the same. So what has changed? A lot, especially under the hood. For example: I have rewritten the entire CSS, and I’m no longer using SASS since it’s no longer needed; interviews are now separate from regular content at the backend level and have their own dedicated URL structure (old URLs should still work, though); and the site is now better structured to be expanded into something more akin to a digital garden than “just” a blog.

Since I had to rewrite all the frontend code, I took this opportunity to tweak a few things here and there: quotes have a new style, the guestbook has been redesigned (go sign it if you haven’t already), typography has been slightly tweaked in a couple of places, and the site should now scale much better on very big screens. More importantly, though, P&B interviews now have a more unique design—and a new colour scheme—something that makes me very happy. There are so many things I want to do for this series, but I just don’t have the time to dedicate to it, so I’m happy to have at least managed to give them a more unique identity here on the site.

This space is still a work in progress. It will always be a work in progress, so expect things to change over time as I fine-tune minor details here and there. Thank you for keeping RSS alive. You’re awesome.

Email me :: Sign my guestbook :: Support for 1$/month :: See my generous supporters :: Subscribe to People and Blogs

David Bushell 1 month ago

How Much Does Freedom Cost?

Trump’s National Design Studio has an executive order to “modernize the interfaces that serve everyday citizens”. That means rich/white people (but not the ‘disabled’ kind). The US government had digital service agencies that cared about a performant and accessible web until they got the DOGE treatment. The NDS’ latest website trumpcard.gov is a Next.js disasterclass. Vercel’s CEO Guillermo Rauch thinks an endorsement by a friend of Epstein is… a good thing? Anyway, Trump invites you to “Submit Your Appl 🦅 tion”.

This side-eyeing American Bald Eagle is a 579-frame animation. Each frame is a 1671×1300 pixel PNG weighing 30 KB on average. Frames 261 through 320, where the eagle is looking straight forward, are replaced by frame 320 to save bandwidth. Despite this valiant effort, the total size of these PNG files is 16.7 megabytes. PNG frames are requested by a Web Worker and saved using the CacheStorage API. The worker returns URLs for each frame. A React hook is used (very carefully) to trigger updates to an element’s source. And that is how you get 16.7 MB of freedom. Alternate text is seemingly used as a comment for developers.

Eagle-eyed readers will have noticed the eagle’s body is a static image. The PNG frames only contain the head, which is no larger than 400×400 pixels, in the centre. A quick crop and squoosh suggests a 20% saving with no quality loss. Using a lossy codec like AVIF would allow for anywhere between 50–80% smaller images with little perceptual quality loss. I’m guessing the animation trickery is done to superimpose the eagle over the text “Submit Your Application”. Is it worth the cost? No. Just use a video!

You could just make the entire thing a video including the text (like my screen recording above). This would limit the responsive design and the initial text transition but would be much smaller than 16.7 MB. To retain separation of elements, a video codec with alpha transparency can be used (see CSS-Tricks, Jake Archibald). WebM/VP9 works in Chrome and Firefox and HEVC works for Safari/iOS. A quick test returns 500–800 KB depending on codec and quality. Offering both codecs via HTML `<source>` elements allows the video to work in most browsers.
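A sketch of that dual-codec markup — file names are illustrative, and the codec strings are the usual ones for HEVC-with-alpha and VP9:

```html
<!-- Safari picks the HEVC file (alpha channel supported via hvc1);
     Chrome and Firefox fall through to WebM/VP9. -->
<video autoplay muted loop playsinline>
  <source src="eagle-hevc.mp4" type='video/mp4; codecs="hvc1"'>
  <source src="eagle-vp9.webm" type='video/webm; codecs="vp9"'>
</video>
```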

These are rough numbers, but suffice it to say a PNG-based animation is expensive. Then again, if you’re in the market for a $1 million card you can probably afford this too.

Decapitating America’s “national bird” is not the only sin committed by the National Design Studio. Trump’s gold card website is a treasure trove of bad development. View source and see what fun you can find. And remember, for fascist-friendly hosting™, think Vercel.

Thanks for reading! Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.