Posts in Frontend (20 found)
Josh Comeau 5 days ago

Brand New Layouts with CSS Subgrid

Subgrid allows us to extend a grid template down through the DOM tree, so that deeply-nested elements can participate in the same grid layout. At first glance, I thought this would be a helpful convenience, but it turns out that it’s so much more. Subgrid unlocks exciting new layout possibilities, stuff we couldn’t do until now. ✨

0 views
Jim Nielsen 3 weeks ago

Leveraging a Web Component For Comparing iOS and macOS Icons

Whenever Apple does a visual refresh in their OS updates, a new wave of icon archiving starts for me. Now that “Liquid Glass” is out, I’ve begun nabbing the latest icons from Apple and other apps and adding them to my gallery. Since I’ve been collecting these icons for so long, one of the more interesting and emerging attributes of my collection is the visual differences in individual app icons over time. For example: what are the differences between the icons I have in my collection for Duolingo? Well, I have a page for that today. That’ll let you see all the different versions I’ve collected for Duolingo — not exhaustive, I’m sure, but still interesting — as well as their different sizes. But what if you want to analyze their differences pixel-by-pixel? Turns out, There’s A Web Component For That™️.

Image Compare is exactly what I was envisioning: “A tiny, zero-dependency web component for comparing two images using a slider” from the very fine folks at Cloud Four. It’s super easy to use: some HTML and a link to a script (hosted if you like, or you can vendor it). And just like that, boom, I’ve got a widget for comparing two icons. For Duolingo specifically, I have a long history of icons archived in my gallery and they’re all available under the route for your viewing and comparison pleasure. Wanna see some more examples besides Duolingo? Check out the ones for GarageBand, Instagram, and Highlights for starters. Or just look at the list of iOS apps and find the ones that are interesting to you (or, if you’re a fan of macOS icons, check those out).

I kinda love how easy it was for my thought process to go from idea to reality:

“It would be cool to compare differences in icons by overlaying them…”
“Image diff tools do this, I bet I could find a good one…”
“Hey, Cloud Four makes a web component for this? Surely it’s good…”
“Hey look, it’s just HTML: a tag linking to compiled JS along with a custom element? Easy, no build process required…”
“Done. Well, that was easy. I guess the hardest part here will be writing the blog post about it.”

And I’ve written the post, so this chunk of work is now done.
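The usage amounts to something like the following sketch (placeholder image paths and alt text; the slot names follow Cloud Four’s image-compare component as documented, but are worth double-checking against its README):

```html
<!-- Load the component once (hosted or vendored), then use the custom element. -->
<script type="module" src="image-compare.js"></script>

<image-compare>
  <img slot="image-1" alt="Duolingo icon, older version" src="duolingo-old.png">
  <img slot="image-2" alt="Duolingo icon, newer version" src="duolingo-new.png">
</image-compare>
```

No build step: the browser upgrades the `image-compare` tag when the script loads, and the two slotted images get the slider UI.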

1 view
David Bushell 4 weeks ago

Better Alt Text

It’s been a rare week where I was able to (mostly) ignore client comms and do whatever I wanted! That means perusing my “todo” list, scoffing at past me for believing I’d ever do half of it, and plucking out a gem. One of those gems was a link to “Developing an alt text button for images on [James’ Coffee Blog]”. I like this feature. I want it on my blog!

My blog wraps images and videos in a figure element with an optional caption. Reduced markup example below. How to add visible alt text? I decided to use declarative popover. I used popover for my glossary web component, but that implementation required JavaScript. This new feature can be done script-free! Below is an example of the end result. Click the “ALT” button to reveal the text popover (unless you’re in RSS land, in which case visit the example, and if you’re not in Chrome, see below).

To implement this I appended an extra button and popover element with the declarative popover attributes after the image. I generate unique popover and anchor names in my build script. I can’t define them as inline custom properties because of my locked-down content security policy. Instead I use the attr() function in CSS. Anchor positioning allows me to place these elements over the image. I could have used absolute positioning inside the figure if not for the caption extending the parent block. Sadly, using anchor positioning means only one thing… my visible alt text feature is Chrome-only! I’ll pray for Interop 2026 salvation and call it progressive enhancement for now.

To position the popover, my first attempt sat the popover around/outside the image. Instead I need it to sit inside/above the image. The position-area property allows that. The button is positioned in a similar way. Aside from being Chrome-only, I think this is a cool feature. Last time I tried to use anchor positioning I almost cried in frustration… so this was a success! It will force me to write better alt text. How do I write alt text good? Advice is welcome. Thanks for reading! Follow me on Mastodon and Bluesky.
Subscribe to my Blog and Notes or Combined feeds.
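A sketch of the pattern described, using the declarative Popover API plus CSS anchor positioning (Chrome-only at the time of writing). The markup, attribute names, and the typed attr() syntax here are illustrative assumptions, not the actual build output:

```html
<figure>
  <img src="bike.jpg" alt="A red bicycle leaning on a wall" data-anchor="--img-1">
  <!-- Declarative popover: button + popover element, no JavaScript required -->
  <button popovertarget="alt-1" data-anchor="--img-1">ALT</button>
  <div id="alt-1" data-anchor="--img-1" popover>A red bicycle leaning on a wall</div>
  <figcaption>Optional caption</figcaption>
</figure>

<style>
  /* CSP-friendly: read the anchor name from an attribute instead of inline styles */
  img[data-anchor] {
    anchor-name: attr(data-anchor type(<custom-ident>));
  }
  button[data-anchor],
  [popover][data-anchor] {
    position: absolute;
    position-anchor: attr(data-anchor type(<custom-ident>));
    position-area: center; /* sit inside/over the image rather than around it */
  }
</style>
```

position-area: center overlays the popover on the image itself; a real implementation would position the button separately (e.g. in a corner) with a different position-area value.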

1 view
Ahmad Alfy 1 month ago

Your URL Is Your State

A couple of weeks ago, when I was publishing The Hidden Cost of URL Design, I needed to add SQL syntax highlighting. I headed to the PrismJS website, trying to remember if it should be added as a plugin or what. I was overwhelmed by the number of options on the download page, so I headed back to my code. I checked the file for PrismJS, and at the top of the file I found a comment containing a URL. I had completely forgotten about this. I clicked the URL, and it was the PrismJS download page with every checkbox, dropdown, and option pre-selected to match my exact configuration. Themes chosen. Languages selected. Plugins enabled. Everything, perfectly reconstructed from that single URL.

It was one of those moments where something you once knew suddenly clicks again with fresh significance. Here was a URL doing far more than just pointing to a page. It was storing state, encoding intent, and making my entire setup shareable and recoverable. No database. No cookies. No localStorage. Just a URL.

This got me thinking: how often do we, as frontend engineers, overlook the URL as a state management tool? We reach for all sorts of abstractions to manage state, such as global stores, contexts, and caches, while ignoring one of the web’s most elegant and oldest features: the humble URL. In my previous article, I wrote about the hidden costs of bad URL design. Today, I want to flip that perspective and talk about the immense value of good URL design. Specifically, how URLs can be treated as first-class state containers in modern web applications.

Scott Hanselman famously said “URLs are UI” and he’s absolutely right. URLs aren’t just technical addresses that browsers use to fetch resources. They’re interfaces. They’re part of the user experience. But URLs are more than UI. They’re state containers. Every time you craft a URL, you’re making decisions about what information to preserve, what to make shareable, and what to make bookmarkable.
Think about what URLs give us for free. They make web applications resilient and predictable. They’re the web’s original state management solution, and they’ve been working reliably since 1991. The question isn’t whether URLs can store state. It’s whether we’re using them to their full potential.

Before we dive into examples, let’s break down how a typical stateful URL encodes state. For many years, the path, the query string, and the fragment were considered the only components of a URL. That changed with the introduction of Text Fragments, a feature that allows linking directly to a specific piece of text within a page. You can read more about it in my article Smarter than ‘Ctrl+F’: Linking Directly to Web Page Content. Different parts of the URL encode different types of state.

Sometimes you’ll see multiple values packed into a single key using delimiters like commas or plus signs. It’s compact and human-readable, though it requires manual parsing on the server side. Developers often encode complex filters or configuration objects into a single query string: a simple convention uses key–value pairs separated by commas, while others serialize JSON or even Base64-encode it for safety. For flags or toggles, it’s common to pass booleans explicitly or to rely on the key’s presence as truthy. This keeps URLs shorter and makes toggling features easy.

Another old pattern is bracket notation, which represents arrays in query parameters. It originated in early web frameworks like PHP, where appending [] to a parameter name signals that multiple values should be grouped together. Many modern frameworks and parsers (like Node libraries or Express middleware) still recognize this pattern automatically. However, it’s not officially standardized in the URL specification, so behavior can vary depending on the server or client implementation. Notice how it even breaks the syntax highlighting on my website.

The key is consistency. Pick patterns that make sense for your application and stick with them.
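These packing conventions can be sketched with the standard URLSearchParams API; the parameter names below are made up for illustration:

```javascript
// Pack several values into query params using the conventions described.
const params = new URLSearchParams();
params.set('colors', ['red', 'green', 'blue'].join(',')); // delimiter packing
params.set('debug', '1');                                 // explicit boolean flag
params.append('tag[]', 'css');                            // PHP-style bracket notation
params.append('tag[]', 'js');

const query = params.toString(); // brackets and commas get percent-encoded here

// Reading it back requires agreeing on the same conventions:
const parsed = new URLSearchParams(query);
const colors = parsed.get('colors').split(',');
const tags = parsed.getAll('tag[]');
const debug = parsed.get('debug') === '1';
```

Note that URLSearchParams treats `tag[]` as an opaque key; the grouping behavior lives in your own convention (or your server framework), not in the URL standard.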
Let’s look at real-world examples of URLs as state containers.

PrismJS Configuration: the entire syntax highlighter configuration encoded in the URL. Change anything in the UI, and the URL updates. Share the URL, and someone else gets your exact setup. This one uses the anchor rather than query parameters, but the concept is the same.

GitHub Line Highlighting: a link to a specific file that highlights lines 108 through 136. Click this link anywhere, and you’ll land on the exact code section being discussed.

Google Maps: coordinates, zoom level, and map type all in the URL. Share this link, and anyone can see the exact same view of the map.

Figma and Design Tools: before shareable design links, finding an updated screen or component in a large file was a chore. Someone had to literally show you where it lived, scrolling and zooming across layers. Today, a Figma link carries all that context, like canvas position, zoom level, and selected element. Literally everything needed to drop you right into the workspace.

E-commerce Filters: this is one of the most common real-world patterns you’ll encounter. Every filter, sort option, and price range preserved. Users can bookmark their exact search criteria and return to it anytime. Most importantly, they can come back to it after navigating away or refreshing the page.

Before we discuss implementation details, we need to establish a clear guideline for what should go into the URL. Not all state belongs in URLs. Here’s a simple heuristic: if someone else clicked this URL, should they see the same state? If so, it belongs in the URL. If not, use a different state management approach.

The modern browser APIs make URL state management straightforward. The popstate event fires when the user navigates with the browser’s Back or Forward buttons.
It lets you restore the UI to match the URL, which is essential for keeping your app’s state and history in sync. Usually your framework’s router handles this for you, but it’s good to know how it works under the hood. React Router and Next.js provide hooks that make this even cleaner.

Now that we’ve seen how URLs can hold application state, let’s look at a few best practices that keep them clean, predictable, and user-friendly. Don’t pollute URLs with default values; use defaults in your code when reading parameters. For high-frequency updates (like search-as-you-type), debounce URL changes.

When deciding between pushState and replaceState, think about how you want the browser history to behave. pushState creates a new history entry, which makes sense for distinct navigation actions like changing filters, pagination, or navigating to a new view — users can then use the Back button to return to the previous state. replaceState, on the other hand, updates the current entry without adding a new one, making it ideal for refinements such as search-as-you-type or minor UI adjustments where you don’t want to flood the history with every keystroke.

When designed thoughtfully, URLs become more than just state containers. They become contracts between your application and its consumers. A good URL defines expectations for humans, developers, and machines alike. A well-structured URL draws the line between what’s public and what’s private, client and server, shareable and session-specific. It clarifies where state lives and how it should behave. Developers know what’s safe to persist, users know what they can bookmark, and machines know what’s worth indexing. URLs, in that sense, act as interfaces: visible, predictable, and stable.

Readable URLs explain themselves. Consider the difference between the two URLs below. The first one hides intent. The second tells a story. A human can read it and understand what they’re looking at. A machine can parse it and extract meaningful structure.
Jim Nielsen calls these “examples of great URLs”. URLs that explain themselves.

URLs are cache keys. Well-designed URLs enable better caching strategies. You can even visualize a user’s journey without any extra tracking code: your analytics tools can track this flow without additional instrumentation. Every URL parameter becomes a dimension you can analyze. URLs can also communicate API versions, feature flags, and experiments. This makes gradual rollouts and backwards compatibility much more manageable.

Even with the best intentions, it’s easy to misuse URL state. Here are common pitfalls.

The classic single-page app mistake: if your app forgets its state on refresh, you’re breaking one of the web’s fundamental features. Users expect URLs to preserve context. I remember a viral video from years ago where a Reddit user vented about an e-commerce site: every time she hit “Back,” all her filters disappeared. Her frustration summed it up perfectly. If users lose context, they lose patience.

This one seems obvious, but it’s worth repeating: URLs are logged everywhere (browser history, server logs, analytics, referrer headers). Treat them as public.

Choose parameter names that make sense. Future you (and your team) will thank you.

If you need to Base64-encode a massive JSON object, the URL probably isn’t the right place for that state. Browsers and servers impose practical limits on URL length (usually between 2,000 and 8,000 characters), but the reality is more nuanced. As this detailed Stack Overflow answer explains, limits come from a mix of browser behavior, server configurations, CDNs, and even search engine constraints. If you’re bumping against them, it’s a sign you need to rethink your approach.

Respect browser history. If a user action should be “undoable” via the back button, use pushState. If it’s a refinement, use replaceState.

That PrismJS URL reminded me of something important: good URLs don’t just point to content.
They describe a conversation between the user and the application. They capture intent, preserve context, and enable sharing in ways that no other state management solution can match. We’ve built increasingly sophisticated state management libraries like Redux, MobX, Zustand, Recoil, and others. They all have their place, but sometimes the best solution is the one that’s been there all along. In my previous article, I wrote about the hidden costs of bad URL design. Today, we’ve explored the flip side: the immense value of good URL design. URLs aren’t just addresses. They’re state containers, user interfaces, and contracts all rolled into one. If your app forgets its state when you hit refresh, you’re missing one of the web’s oldest and most elegant features.

What URLs give us for free:
- Shareability: send someone a link, and they see exactly what you see
- Bookmarkability: save a URL, and you’ve saved a moment in time
- Browser history: the back button just works
- Deep linking: jump directly into a specific application state

How the parts of a URL encode state:
- Path segments: best used for hierarchical resource navigation — User 123’s posts, documentation structure, application sections
- Query parameters: perfect for filters, options, and configuration — UI preferences, pagination, data filtering, date ranges
- Anchor: ideal for client-side navigation and page sections — GitHub line highlighting, scroll to section, single-page app routing (though it’s rarely used these days)

Good candidates for URL state: search queries and filters; pagination and sorting; view modes (list/grid, dark/light); date ranges and time periods; selected items or active tabs; UI configuration that affects content; feature flags and A/B test variants.

Poor candidates for URL state: sensitive information (passwords, tokens, PII); temporary UI states (modal open/closed, dropdown expanded); form input in progress (unsaved changes); extremely large or complex nested data; high-frequency transient states (mouse position, scroll position).

URLs as cache keys: same URL = same resource = cache hit; query params define cache variations; CDNs can cache intelligently based on URL patterns.
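Circling back to the best-practices advice above (keep defaults out of the URL, read them back in code), here is a minimal sketch; the parameter names and defaults are illustrative, and the browser-only history call is left as a comment so the helpers stay pure:

```javascript
// Illustrative defaults for a hypothetical listing page.
const DEFAULTS = { sort: 'relevance', page: 1, view: 'grid' };

// Read URL state, falling back to defaults for anything missing.
function readFilters(search) {
  const params = new URLSearchParams(search);
  return {
    sort: params.get('sort') ?? DEFAULTS.sort,
    page: Number(params.get('page') ?? DEFAULTS.page),
    view: params.get('view') ?? DEFAULTS.view,
  };
}

// Serialize state back to a query string, omitting default values.
function filtersToQuery(filters) {
  const params = new URLSearchParams();
  for (const [key, value] of Object.entries(filters)) {
    if (String(value) !== String(DEFAULTS[key])) params.set(key, value);
  }
  const query = params.toString();
  return query ? `?${query}` : '';
  // In a browser: history.replaceState(null, '', location.pathname + query)
}
```

With this shape, `/products` and `/products?sort=relevance&page=1&view=grid` mean the same thing, and only the deviations from the defaults ever reach the address bar.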

0 views
Josh Comeau 1 month ago

Springs and Bounces in Native CSS

The “linear()” timing function is a game-changer; it allows us to model physics-based motion right in vanilla CSS! That said, there are some limitations and quirks to be aware of. I’ve been experimenting with this API for a while now, and in this post, I’ll share all of the tips and tricks I’ve learned for using it effectively. ✨

0 views
Jim Nielsen 1 month ago

Don’t Forget These Tags to Make HTML Work Like You Expect

I was watching Alex Petros’ talk and he has a slide in there titled “Incantations that make HTML work correctly”. This got me thinking about the basic snippets of HTML I’ve learned to always include in order for my website to work as I expect in the browser — like “Hey, I just made a file on disk and am going to open it in the browser. What should be in there?” This is what comes to mind:

Without a doctype, browsers may switch to quirks mode, emulating legacy, pre-standards behavior. This will change how calculations work around layout, sizing, and alignment. <!doctype html> is what you want for consistent rendering. Or <!DOCTYPE html> if you prefer writing markup like it’s 1998. Or even <!dOcTyPe HtMl> if you eschew all societal norms. It’s case-insensitive so they’ll all work.

Declare the document’s language with the lang attribute. Browsers, search engines, assistive technologies, etc. can leverage it to:
- Get pronunciation and voice right for screen readers
- Improve indexing and translation accuracy
- Apply locale-specific tools (e.g. spell-checking)

Omit it and things will look ok, but lots of basic web-adjacent tools might get things wrong. Specifying it makes everything around the HTML work better and more accurately, so I always try to remember to include it.

The character encoding can come back from the server as a header, e.g. Content-Type: text/html; charset=utf-8. But I like to set it in my HTML, especially when I’m making files on disk I open manually in the browser. This tells the browser how to interpret text, ensuring characters like é, ü, and others display correctly. So many times I’ve opened a document without this tag and things just don’t look right — like my smart quotes. For example: copy this snippet, stick it in an HTML file, and open it on your computer: things might look a bit wonky. But stick a meta charset tag in there and you’ll find some relief.

Sometimes I’ll quickly prototype a little HTML and think, “Great, it’s working as I expect!” Then I go open it on mobile and everything looks tiny — “[Facepalm] you forgot the meta viewport tag!” Take a look at this screenshot, where I forgot the meta viewport tag on the left but included it on the right: That ever happen to you? No, just me? Well anyway, it’s a good ‘un to include to make HTML work the way you expect.

I know what you’re thinking, I forgot the most important snippet of them all for writing HTML:
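Put together, the incantations above amount to a minimal file like this (title and body content are placeholders):

```html
<!doctype html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <title>Hello</title>
  </head>
  <body>
    <p>“Smart quotes” and characters like é render correctly.</p>
  </body>
</html>
```

Doctype for standards mode, lang for language-aware tooling, meta charset for text interpretation, and meta viewport so mobile rendering isn’t tiny.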

1 view
Thomasorus 1 month ago

List of JavaScript frameworks

The way the web was initially envisioned was through separation of concerns: HTML is for structure, CSS for styles, and JavaScript for interactivity. For a long time the server sent HTML to the browser/client through templates populated with data from the server. Then the page downloaded CSS and JavaScript. Those two then "attached" themselves to the structure and acted on it through HTML attributes, and could then change its looks, call for more data, and create interactivity. Each time a visitor clicked a link, this whole process would start again, downloading the new page and its dependencies and rendering it in the browser.

- Using the history API and Ajax requests to fetch the HTML of the next page and replace the current body with it. Basically emulating the look and feel of single-page applications in multi-page applications.

- Event handling/reactivity/DOM manipulation via HTML attributes. Development happens client side, without writing JavaScript.

- Static HTML gets updated via web sockets or Ajax calls on the fly with small snippets rendered on the server. Development happens server side, without writing JavaScript. Most of the time a plugin or feature of an existing server-side framework.

- A client-side, JavaScript component-based (mixing HTML, CSS and JavaScript in a single file) framework or library gets data through API calls (REST or GraphQL) and generates HTML blocks on the fly directly in the browser. Long initial load time, then fast page transitions, but a lot of features normally managed by the browser or the server need to be re-implemented. Two flavors exist: the framework or library is loaded alongside the SPA code, or it compiles to the SPA and disappears.

- A single-page application library gets extended to render or generate static "dry" pages as HTML on the server to avoid the long initial loading time, which is detrimental to SEO. Often comes with opinionated choices like routing, file structure, and compilation improvements. After the initial page load, the single-page application code is loaded and attaches itself to the whole page to make it interactive, effectively downloading and rendering the website twice ("hydration") — or it attaches itself only to certain elements that need interactivity, partially avoiding the double download and rendering ("partial hydration", "islands architecture").

- A server-side component-based framework or library gets data through API calls (REST or GraphQL) and serves HTML that gets its interactivity without hydration, for example by loading the interactive code needed as an interaction happens.

- Using existing frontend and backend stacks in an opinionated way to offer a fullstack solution in full JavaScript.

- A client-side, component-based application (Vue, React, etc.) gets its state from pre-rendered JSON.

Examples mentioned:
- Stimulus JS
- Livewire (PHP)
- Stimulus Reflex (Ruby)
- Phoenix LiveView (Elixir)
- Blazor (C#)
- Unicorn (Python)
- Angular with AOT
- Next (React)
- Nuxt (Vue)
- SvelteKit (Svelte)
- Astro (React, Vue, Svelte, Solid, etc.)
- Solid Start (Solid)
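The first pattern above (history API plus Ajax body swapping) can be sketched like this; the link-hijacking wiring is illustrative, and the body extraction is simplified to a regex so the pure part can stand alone:

```javascript
// Pull the inner <body> markup out of a fetched HTML document (simplified).
function extractBody(html) {
  const match = html.match(/<body[^>]*>([\s\S]*)<\/body>/i);
  return match ? match[1] : html;
}

// Browser wiring (illustrative): intercept same-site links, fetch the next
// page, swap the body, and push a history entry instead of a full reload.
// document.addEventListener('click', async (event) => {
//   const link = event.target.closest('a[href^="/"]');
//   if (!link) return;
//   event.preventDefault();
//   const html = await (await fetch(link.href)).text();
//   document.body.innerHTML = extractBody(html);
//   history.pushState(null, '', link.href);
// });
```

A production version would also listen for popstate to handle the Back button, and use DOMParser rather than a regex.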

0 views
Raph Koster 1 month ago

Site updates

It’s been quite a while since the site was refreshed. I was forced into it by a PHP upgrade that rendered the old customizable theme I was using obsolete. We’re now running a new theme that has been styled to match the old one pretty closely, but I did go ahead and do some streamlining: way less plugins (especially ancient ones), simpler layout in several places, much better handling of responsive layouts for mobile, down to a single sidebar, and so on. All of this seems to have made the site quite a bit more performant, too. One of the big things that got fixed along the way is that images in galleries had a habit of displaying oddly stretched on Chrome and Edge, but not in Firefox. No idea what it was, but it seems to be fixed now. There are plenty of bits and bobs that still are not quite right. Keep an eye out and let me know if you see anything that looks egregiously wrong. Known issues: some of the lists of things, like presentations, essays, etc, are still funky. Breadcrumb styling seems to be inconsistent. The footer is a bit of a mess. If you do need to log in to comment, the Meta links are all at the footer for now. Virtually no one uses those links anymore, so having them up top didn’t seem to make sense… How things have changed! People tell me to move to Substack instead, but though I get the monetization factor, it rubs me wrong. I’d rather own my own site. Plus, it’s not like I am posting often enough to justify a ton of effort!

0 views
Ahmad Alfy 1 month ago

How Functional Programming Shaped (and Twisted) Frontend Development

A friend called me last week. Someone who’d built web applications for a long time before moving exclusively to backend and infra work. He’d just opened a modern React codebase for the first time in over a decade. “What the hell is this?” he asked. “What are all these generated class names? Did we just… cancel the cascade? Who made the web work this way?” I laughed, but his confusion cut deeper than he realized. He remembered a web where CSS cascaded naturally, where the DOM was something you worked with, where the browser handled routing, forms, and events without twenty abstractions in between. To him, our modern frontend stack looked like we’d declared war on the platform itself. He asked me to explain how we got here. That conversation became this essay.

A disclaimer before we begin: this is one perspective, shaped by having lived through the first browser war. I applied hacks to make 24-bit PNGs work in IE6. I debugged hasLayout bugs at 2 AM. I wrote JavaScript when you couldn’t trust it to work the same way across browsers. I watched jQuery become necessary, then indispensable, then legacy. I might be wrong about some of this. My perspective is biased for sure, but it also comes with the memory that the web didn’t need constant reinvention to be useful.

There’s a strange irony at the heart of modern web development. The web was born from documents, hyperlinks, and a cascading stylesheet language. It was always messy, mutable, and gloriously side-effectful. Yet over the past decade, our most influential frontend tools have been shaped by engineers chasing functional programming purity: immutability, determinism, and the elimination of side effects. This pursuit gave us powerful abstractions. React taught us to think in components. Redux made state changes traceable. TypeScript brought compile-time safety to a dynamic language. But it also led us down a strange path. One where we fought against the platform instead of embracing it.
We rebuilt the browser’s native capabilities in JavaScript, added layers of indirection to “protect” ourselves from the DOM, and convinced ourselves that the web’s inherent messiness was a problem to solve rather than a feature to understand. The question isn’t whether functional programming principles have value. They do. The question is whether applying them dogmatically to the web (a platform designed around mutability, global scope, and user-driven chaos) made our work better, or just more complex. To understand why functional programming ideals clash with web development, we need to acknowledge what the web actually is. The web is fundamentally side-effectful. CSS cascades globally by design. Styles defined in one place affect elements everywhere, creating emergent patterns through specificity and inheritance. The DOM is a giant mutable tree that browsers optimize obsessively; changing it directly is fast and predictable. User interactions arrive asynchronously and unpredictably: clicks, scrolls, form submissions, network requests, resize events. There’s no pure function that captures “user intent.” This messiness is not accidental. It’s how the web scales across billions of devices, remains backwards-compatible across decades, and allows disparate systems to interoperate. The browser is an open platform with escape hatches everywhere. You can style anything, hook into any event, manipulate any node. That flexibility and that refusal to enforce rigid abstractions is the web’s superpower. When we approach the web with functional programming instincts, we see this flexibility as chaos. We see globals as dangerous. We see mutation as unpredictable. We see side effects as bugs waiting to happen. And so we build walls. Functional programming revolves around a few core principles: functions should be pure (same inputs → same outputs, no side effects), data should be immutable, and state changes should be explicit and traceable. 
These ideas produce code that’s easier to reason about, test, and parallelize, in the right context of course. These principles had been creeping into JavaScript long before React. Underscore.js (2009) brought map, reduce, and filter to the masses. Lodash and Ramda followed with deeper FP toolkits including currying, composition, and immutability helpers. The ideas were in the air: avoid mutation, compose small functions, treat data transformations as pipelines.

React itself started with class components, hardly pure FP. But the conceptual foundation was there: treat UI as a function of state, make rendering deterministic, isolate side effects. Then came Elm, a purely functional language created by Evan Czaplicki that codified the “Model-View-Update” architecture. When Dan Abramov created Redux, he explicitly cited Elm as inspiration. Redux’s reducers are directly modeled on Elm’s update functions: (state, action) => newState. Redux formalized what had been emerging patterns. Combined with React Hooks (which replaced stateful classes with functional composition), the ecosystem shifted decisively toward FP. Immutability became non-negotiable. Pure components became the ideal. Side effects were corralled into useEffect. Through this convergence (library patterns, Elm’s rigor, and React’s evolution), Haskell-derived ideas about purity became mainstream JavaScript practice.

In the early 2010s, as JavaScript applications grew more complex, developers looked to FP for salvation. jQuery spaghetti had become unmaintainable. Backbone’s two-way binding caused cascading updates (ironically, Backbone’s documentation explicitly advised against two-way binding, saying “it doesn’t tend to be terribly useful in your real-world app,” yet many developers implemented it through plugins). The community wanted discipline, and FP offered it: treat your UI as a pure function of state. Make data flow in one direction. Eliminate shared mutable state. React’s arrival in 2013 crystallized these ideals.
It promised a world where UI is a function of state: give it data, get back a component tree, re-render when data changes. No manual DOM manipulation. No implicit side effects. Just pure, predictable transformations. This was seductive. And in many ways, it worked. But it also set us on a path toward rebuilding the web in JavaScript’s image, rather than JavaScript in the web’s image.

CSS was designed to be global. Styles cascade, inherit, and compose across boundaries. This enables tiny stylesheets to control huge documents, and lets teams share design systems across applications. But to functional programmers, global scope is dangerous. It creates implicit dependencies and unpredictable outcomes. Enter CSS-in-JS: styled-components, Emotion, JSS. The promise was component isolation. Styles scoped to components, no cascading surprises, no naming collisions. Styles become data, passed through JavaScript, predictably bound to elements.

But this came at a cost. CSS-in-JS libraries generate styles at runtime, injecting them into style tags as components mount. This adds JavaScript execution to the critical rendering path. Server-side rendering becomes complicated: you need to extract styles during the render, serialize them, and rehydrate them on the client. Debugging involves cryptic runtime-generated class names. And you lose the cascade, the very feature that made CSS powerful in the first place.

Worse, you’ve moved a browser-optimized declarative language into JavaScript, a single-threaded runtime. The browser can parse and apply CSS in parallel, off the main thread. Your styled-components bundle? That’s main-thread work, blocking interactivity. The web had a solution. It’s called a stylesheet. But it wasn’t pure enough.

The industry eventually recognized these problems and pivoted to Tailwind CSS. Instead of runtime CSS generation, use utility classes. Instead of styled-components, compose classes in JSX. This was better, at least it’s compile-time, not runtime.
No more blocking the main thread to inject styles. No more hydration complexity. But Tailwind still fights the cascade. Instead of writing a rule once and letting it cascade to all buttons, you repeat the same utility classes on every single button element. You’ve traded runtime overhead for a different set of problems: class soup in your markup, massive HTML payloads, and losing the cascade’s ability to make sweeping design changes in one place.

And here’s where it gets truly revealing: when Tailwind added support for nested selectors (a feature that would let developers write more cascade-like styles), parts of the community revolted. David Khourshid (creator of XState) shared examples of using nested selectors in Tailwind, and the backlash was immediate. Developers argued this defeated the purpose of Tailwind, that it brought back the “problems” of traditional CSS, that it violated the utility-first philosophy. Think about what this means. The platform has the cascade. CSS-in-JS tried to eliminate it and failed. Tailwind tried to work around it with utilities. And when Tailwind cautiously reintroduced a cascade-like feature, developers who had been trained by years of anti-cascade ideology rejected it. We’ve spent so long teaching people that the cascade is dangerous that even when their own tools try to reintroduce platform capabilities, they don’t want them. We’re not just ignorant of the platform anymore. We’re ideologically opposed to it.

React introduced synthetic events to normalize browser inconsistencies and integrate events into its rendering lifecycle. Instead of attaching listeners directly to DOM nodes, React uses event delegation: it listens at the root, then routes events to handlers through its own system. This feels elegant from a functional perspective. Events become data flowing through your component tree. You don’t touch the DOM directly. Everything stays inside React’s controlled universe. But native browser events already work.
They bubble, they capture, they’re well-specified. The browser has spent decades optimizing event dispatch. By wrapping them in a synthetic layer, React adds indirection: memory overhead for event objects, translation logic for every interaction, and debugging friction when something behaves differently than the native API. Worse, it trains developers to avoid the platform. Developers learn React’s event system, not the web’s. When they need to work with third-party libraries or custom elements, they hit impedance mismatches. addEventListener becomes a foreign API in their own codebase. Again: the web had this. The browser’s event system is fast, flexible, and well-understood. But it wasn’t controlled enough for the FP ideal of a closed system.

The logical extreme of “UI as a pure function of state” is client-side rendering: the server sends an empty HTML shell, JavaScript boots up, and the app renders entirely in the browser. From a functional perspective, this is clean. Your app is a deterministic function that takes initial state and produces a DOM tree. From a web perspective, it’s a disaster. The browser sits idle while JavaScript parses, executes, and manually constructs the DOM. Users see blank screens. Screen readers get empty documents. Search engines see nothing. Progressive rendering, one of the browser’s most powerful features, goes unused.

The industry noticed. Server-side rendering came back. But because the mental model was still “JavaScript owns the DOM,” we got hydration: the server renders HTML, the client renders the same tree in JavaScript, then React walks both and attaches event handlers. During hydration, the page is visible but inert. Clicks do nothing, forms don’t submit. This is architecturally absurd. The browser already rendered the page. It already knows how to handle clicks.
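The native API the synthetic layer wraps is the standard EventTarget interface. A minimal sketch (EventTarget and Event are globals in browsers and in Node 15+; the event name is made up):

```javascript
// No framework, no synthetic layer: the platform's own event plumbing.
const target = new EventTarget();
const received = [];

target.addEventListener("ping", (event) => {
  received.push(event.type);
});

target.dispatchEvent(new Event("ping"));
// received is now ["ping"]
```

In the browser, the same three calls work on any DOM node, with bubbling and capturing handled by the engine.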
But because the framework wants to own all interactions through its synthetic event system, it must re-create the entire component tree in JavaScript before anything works.

The absurdity extends beyond the client. Infrastructure teams watch in confusion as every user makes double the number of requests: the server renders the page and fetches data, then the client boots up and fetches the exact same data again to reconstruct the component tree for hydration. Why? Because the framework can’t trust the HTML it just generated. It needs to rebuild its internal representation of the UI in JavaScript to attach event handlers and manage state. This isn’t just wasteful, it’s expensive. Database queries run twice. API calls run twice. Cache layers get hit twice. CDN costs double. And for what? So the framework can maintain its pure functional model where all state lives in JavaScript. The browser had the data. The HTML had the data. But that data wasn’t in the right shape. It wasn’t a JavaScript object tree, so we throw it away and fetch it again. Hydration is what happens when you treat the web like a blank canvas instead of a platform with capabilities. The web gave us streaming HTML, progressive enhancement, and instant interactivity. We replaced it with JSON, JavaScript bundles, duplicate network requests, and “please wait while we reconstruct reality.”

Consider the humble modal dialog. The web has the native dialog element, with built-in functionality: it manages focus trapping, handles Escape key dismissal, provides a backdrop, controls scroll-locking on the body, and integrates with the accessibility tree. It exists in the DOM but remains hidden until opened. No JavaScript mounting required. It’s fast, accessible, and battle-tested by browser vendors. Now observe what gets taught in tutorials, bootcamps, and popular React courses: build a modal out of divs. Conditionally render it when some isOpen flag is true. Manually attach a click-outside handler.
Write an effect to listen for the Escape key. Add another effect for focus trapping. Implement your own scroll-lock logic. Remember to add ARIA attributes. Oh, and make sure to clean up those event listeners, or you’ll have memory leaks. You’ve just written 100+ lines of JavaScript to poorly recreate what the browser gives you for free. Worse, you’ve trained developers to not even look for native solutions. The platform becomes invisible. When someone asks “how do I build a modal?”, the answer is “install a library” or “here’s my custom hook,” never “use the dialog element.”

The teaching is the problem. When influential tutorial authors and bootcamp curricula skip native APIs in favor of React patterns, they’re not just showing an alternative approach. They’re actively teaching malpractice. A generation of developers learns to build inaccessible div soup because that’s what fits the framework’s reactivity model, never knowing the platform already solved these problems. And it’s not just bootcamps. Even the most popular component libraries make the same choice: shadcn/ui builds its Dialog component on Radix UI primitives, which use div-based markup instead of the native dialog element. There are open GitHub issues requesting native dialog support, but the implicit message is clear: it’s easier to reimplement the browser than to work with it.

The problem runs deeper than ignorance or inertia. The frameworks themselves increasingly struggle to work with the platform’s evolution. Not because the platform features are bad, but because the framework’s architectural assumptions can’t accommodate them. Consider why component libraries like Radix UI choose a reimplementation over the native dialog. The native element manages its own state: it knows when it’s open, it handles its own visibility, it controls focus internally. But React’s reactivity model expects all state to live in JavaScript, flowing unidirectionally into the DOM. When a native element manages its own state, React’s mental model breaks down.
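For reference, the native element being reimplemented needs roughly this much code. A sketch with made-up ids and copy:

```html
<!-- Focus trapping, Escape dismissal, backdrop, and accessibility
     come built in; no mounting or cleanup logic required. -->
<dialog id="confirm-dialog">
  <p>Delete this item?</p>
  <form method="dialog">
    <button value="cancel">Cancel</button>
    <button value="confirm">Delete</button>
  </form>
</dialog>

<button onclick="document.getElementById('confirm-dialog').showModal()">
  Delete…
</button>
```

A form with method="dialog" closes the dialog on submit and sets its returnValue, so even the confirm/cancel plumbing is declarative.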
Keeping an isOpen flag in your React state synchronized with the element’s actual open/closed state becomes a nightmare of refs, effects, and imperative calls: precisely what React was supposed to eliminate. Rather than adapt their patterns to work with stateful native elements, library authors reimplement the entire behavior in a way that fits the framework. It’s architecturally easier to build a fake dialog in JavaScript than to integrate with the platform’s real one.

But the conflict extends beyond architectural preferences. Even when the platform adds features that developers desperately want, frameworks can’t always use them. Accordions? The web has details and summary. Tooltips? There’s the title attribute and the emerging Popover API. Date pickers? input type="date". Custom dropdowns? The web now supports styling select elements with dedicated pseudo-elements, and you can even put options with images inside a select now. This eliminates the need for the countless JavaScript select libraries that exist solely because designers wanted custom styling.

Frameworks encourage conditional rendering and component state, so these elements don’t get rendered until JavaScript decides they should exist. The mental model is “UI appears when state changes,” not “UI exists, state controls visibility.” Even when the platform adds the exact features developers have been rebuilding in JavaScript for years, the ecosystem momentum means most developers never learn these features exist. And here’s the truly absurd part: even when developers do know about these new platform features, the frameworks themselves can’t handle them. MDN’s documentation for customizable select elements includes this warning: “Some JavaScript frameworks block these features; in others, they cause hydration failures when Server-Side Rendering (SSR) is enabled.” The platform evolved. The HTML parser now allows richer content inside select elements. But React’s JSX parser and hydration system weren’t designed for this. They expect option elements to contain only text.
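A few of the primitives above, in markup form (an illustrative snippet, not from any particular app):

```html
<!-- Accordion: no JavaScript, keyboard accessible by default. -->
<details>
  <summary>Shipping details</summary>
  <p>Orders ship within two business days.</p>
</details>

<!-- Popover: the Popover API handles showing, hiding,
     and light-dismiss (Escape, click outside). -->
<button popovertarget="hint">What is this?</button>
<div id="hint" popover>A short explanation.</div>

<!-- Date picker: one attribute. -->
<input type="date" name="shipped-on">
```

Every widget here exists in the HTML from the first byte; state (open/closed, shown/hidden) is managed by the browser, not by a JavaScript store.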
Updating the framework to accommodate the platform’s evolution takes time, coordination, and breaking changes that teams are reluctant to make. The web platform added features that eliminate entire categories of JavaScript libraries, but the dominant frameworks can’t use those features without causing hydration errors. The stack that was supposed to make development easier now lags behind the platform it’s built on.

The browser has native routing: anchor tags, the History API, forward/back buttons. It has native forms: form elements, validation attributes, submit events. These work without JavaScript. They’re accessible by default. They’re fast. Modern frameworks threw them out. React Router, Next.js’s router, Vue Router: they intercept link clicks, prevent browser navigation, and handle routing in JavaScript. Why? Because client-side routing feels like a pure state transition: URL changes, state updates, component re-renders. No page reload. No “lost” JavaScript state. But you’ve now made navigation depend on JavaScript. Ctrl+click to open in a new tab? Broken, unless you carefully re-implement it. Right-click to copy a link? The URL might not match what’s rendered. Accessibility tools that rely on standard navigation patterns? Confused.

Forms got the same treatment. Instead of letting the browser handle submission, validation, and accessibility, frameworks encourage JavaScript-controlled forms. Formik, React Hook Form, controlled vs. uncontrolled inputs: entire libraries exist to manage what a plain form already does. The browser can validate input instantly, with no JavaScript. But that’s not reactive enough, so we rebuild validation in JavaScript, ship it to the client, and hope we got the logic right. The web had these primitives. We rejected them because they didn’t fit our FP-inspired mental model of “state flows through JavaScript.”

Progressive enhancement used to be a best practice: start with working HTML, layer on CSS for style, add JavaScript for interactivity.
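A progressively enhanced form shows the idea: this works, validates, and submits before any JavaScript loads. The endpoint and field names are illustrative:

```html
<form action="/subscribe" method="post">
  <label>
    Email
    <!-- The browser enforces presence and email format natively. -->
    <input type="email" name="email" required>
  </label>
  <label>
    Password
    <input type="password" name="password" minlength="8" required>
  </label>
  <button>Sign up</button>
</form>
```

JavaScript can later intercept the submit event to add richer behavior, but nothing breaks if it never arrives.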
With progressive enhancement, the page works at every level. Now, we start with JavaScript and work backwards, trying to squeeze HTML out of our component trees and hoping hydration doesn’t break.

We lost built-in accessibility. Native HTML elements have roles, labels, and keyboard support by default. Custom JavaScript widgets require ARIA attributes, focus management, and keyboard handlers, all easy to forget or misconfigure. We lost performance. The browser’s streaming parser can render HTML as it arrives. Modern frameworks send JavaScript, parse JavaScript, execute JavaScript, then finally render. That’s slower. The browser can cache CSS and HTML aggressively. JavaScript bundles invalidate on every deploy. We lost simplicity. An <a href> is eight characters; a client-side router is a dependency, a config file, and a mental model. A native form with validation attributes is self-documenting; a controlled form with validation is dozens of lines of state management. And we lost alignment with the platform. The browser vendors spend millions optimizing HTML parsing, CSS rendering, and event dispatch. We spend thousands of developer-hours rebuilding those features in JavaScript, slower.

This isn’t a story of incompetence. Smart people built these tools for real reasons. By the early 2010s, JavaScript applications had become unmaintainable. jQuery spaghetti sprawled across codebases. Two-way data binding caused cascading updates that were impossible to debug. Teams needed discipline, and functional programming offered it: pure components, immutable state, unidirectional data flow. For complex, stateful applications (dashboards with hundreds of interactive components, real-time collaboration tools, data visualization platforms), React’s model was genuinely better than manually wiring up event handlers and tracking mutations. The FP purists weren’t wrong that unpredictable mutation causes bugs. They were wrong that the solution was avoiding the platform’s mutation-friendly APIs instead of learning to use them well.
But in the chaos of 2013, that distinction didn’t matter. React worked. It scaled. And Facebook was using it in production. Then came the hype cycle. React dominated the conversation. Every conference had React talks. Every tutorial assumed React as the starting point. CSS-in-JS became “modern.” Client-side rendering became the default. When big companies like Facebook, Airbnb, and Netflix adopted these patterns, they became industry standards. Bootcamps taught React exclusively. Job postings required React experience. The narrative solidified: this is how you build for the web now.

The ecosystem became self-reinforcing through its own momentum. Once React dominated hiring pipelines and Stack Overflow answers, alternatives faced an uphill battle. Teams that had already invested in React (training developers, building component libraries, establishing patterns) faced enormous switching costs. New developers learned React because that’s what jobs required. Jobs required React because that’s what developers knew. The cycle fed itself, independent of whether React was the best tool for any particular job.

This is where we lost the plot. Somewhere in the transition from “React solves complex application problems” to “React is how you build websites,” we stopped asking whether the problems we were solving actually needed these solutions. I’ve watched developers build personal blogs with Next.js, sites that are 95% static content with maybe a contact form, because that’s what they learned in bootcamp. I’ve seen companies choose React for marketing sites with zero interactivity, not because it’s appropriate, but because they can’t hire developers who know anything else. The tool designed for complex, stateful applications became the default for everything, including problems the web solved in 1995 with HTML and CSS. A generation of developers never learned that most websites don’t need a framework at all.
The question stopped being “does this problem need React?” and became “which React pattern should I use?” The platform’s native capabilities (progressive rendering, semantic HTML, the cascade, instant navigation) are now considered “old-fashioned.” Reinventing them in JavaScript became “best practice.” We chased functional purity on a platform that was never designed for it. And we built complexity to paper over the mismatch.

The good news: we’re learning. The industry is rediscovering the platform. HTMX embraces HTML as the medium of exchange: the server sends HTML, the browser renders it, no hydration needed. Qwik’s resumable architecture avoids hydration entirely, serializing only what’s needed. Astro defaults to server-rendered HTML with minimal JavaScript. Remix and SvelteKit lean into web standards: forms that work without JS, progressive enhancement, leveraging the browser’s cache. These tools acknowledge what the web is: a document-based platform with powerful native capabilities. Instead of fighting it, they work with it.

This doesn’t mean abandoning components or reactivity. It means recognizing that “UI as a function of state” is a useful model inside your framework, not a justification to rebuild the entire browser stack. It means using CSS for styling, native events for interactions, and HTML for structure, then reaching for JavaScript when you need interactivity beyond what the platform provides. The best frameworks of the next decade will be the ones that feel like the web, not the ones that work in spite of it.

In chasing functional purity, we built a frontend stack that is more complex, more fragile, and less aligned with the platform it runs on. We recreated CSS in JavaScript, events in synthetic wrappers, rendering in hydration layers, and routing in client-side state machines. We did this because we wanted predictability, control, and clean abstractions. But the web was never meant to be pure.
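The HTML-as-medium style these tools share can be very small. An htmx-flavored sketch, where the endpoint is hypothetical and the server responds with an HTML fragment rather than JSON:

```html
<!-- On click, fetch /fragments/latest-news and swap the returned
     HTML into #news. No client-side state tree to reconstruct. -->
<button hx-get="/fragments/latest-news" hx-target="#news" hx-swap="innerHTML">
  Load news
</button>
<div id="news"></div>
```

The browser simply renders whatever markup comes back, which is the same contract the web has had since 1995.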
The web is a sprawling, messy, miraculous platform built on decades of emergent behavior, pragmatic compromises, and radical openness. Its mutability isn’t a bug. It’s the reason a document written in 1995 still renders in 2025. Its global scope isn’t dangerous. It’s what lets billions of pages share a design language. Maybe the web didn’t need to be purified. Maybe it just needed to be understood. I want to thank my friend Ihab Khattab for reviewing this piece and providing invaluable feedback.

Dangling Pointers 1 month ago

Skia: Exposing Shadow Branches

Chrysanthos Pepi, Bhargav Reddy Godala, Krishnam Tibrewala, Gino A. Chacon, Paul V. Gratz, Daniel A. Jiménez, Gilles A. Pokam, and David I. August. ASPLOS'25

This paper starts with your yearly reminder of the high cost of the Turing Tax:

Recent works demonstrate that the front-end is a considerable source of performance loss [16], with upwards of 53% of performance [23] bounded by the front-end.

Everyone knows that the front-end runs ahead of the back-end of a processor. If you want to think of it in AI terms, imagine a model that is told the current value and recent history of the program counter, and asked to predict future values of the program counter. The accuracy of these predictions determines how utilized the processor pipeline is. What I did not know is that in a modern processor, the front-end itself is divided into two decoupled components, one of which runs ahead of the other. Fig. 4 illustrates this Fetch Directed Instruction Prefetching (FDIP) microarchitecture:

Source: https://dl.acm.org/doi/10.1145/3676641.3716273

The Instruction Address Generator (IAG) runs the furthest ahead and uses tables (e.g., the Branch Target Buffer (BTB)) in the Branch Prediction Unit (BPU) to predict the sequence of basic blocks which will be executed. Information about each predicted basic block is stored in the Fetch Target Queue (FTQ). The Instruction Fetch Unit (IFU) uses the control flow predictions from the FTQ to actually read instructions from the instruction cache. Some mispredictions can be detected after an instruction has been read and decoded. These result in an early re-steer (i.e., informing the IAG about the misprediction immediately after decode). When a basic block is placed into the FTQ, the associated instructions are prefetched into the IFU (to reduce the impact of instruction cache misses).

Shadow Branches

This paper introduces the term “shadow branch”.
A shadow branch is a (static) branch instruction which is currently stored in the instruction cache but is not present in any BPU tables. The top of fig. 5 illustrates a head shadow branch. A branch instruction caused execution to jump to byte 24 and execute the non-shaded instructions. This pulled an entire cache line into the instruction cache, including the branch instruction starting at byte 19. The bottom of fig. 5 shows a tail shadow branch. In this case, the instruction at byte 12 jumped away from the cache line, causing the red branch instruction at byte 16 to not be executed (even though it is present in the instruction cache).

Source: https://dl.acm.org/doi/10.1145/3676641.3716273

Skia

The proposed design (Skia) allows the IAG to make accurate predictions for a subset of shadow branches, thus improving pipeline utilization and reducing instruction cache misses. The types of shadow branches which Skia supports are:

- Direct unconditional branches (target PC can be determined without looking at backend state)
- Function calls
- Returns

As shown in Fig. 6, these three categories of branches (purple, red, orange) account for a significant fraction of all BTB misses:

Source: https://dl.acm.org/doi/10.1145/3676641.3716273

When a cache line enters the instruction cache, the Shadow Branch Decoder (SBD) decodes just enough information to locate shadow branches in the cache line and determine the target PC (for direct unconditional branches and function calls). Metadata from the SBD is placed into two new branch prediction tables in the BPU:

- The U-SBB holds information about direct unconditional branches and function calls
- The R-SBB holds information about returns

When the BPU encounters a BTB miss, it can fall back to the U-SBB or R-SBB for a prediction. Fig. 11 illustrates the microarchitectural changes proposed by Skia:

Source: https://dl.acm.org/doi/10.1145/3676641.3716273

Section 4 goes into more details about these structures, including:

- Replacement policy
- How a shadow branch is upgraded into a first-class branch in the BTB
- Handling variable-length instructions

Fig. 14 has (simulated) IPC improvements across a variety of benchmarks:

Source: https://dl.acm.org/doi/10.1145/3676641.3716273

Dangling Pointers

A common problem that HW and SW architects solve is getting teams out of a local minimum caused by fixed interfaces. The failure mode is when two groups of engineers agree on a static interface, and then each optimize their component as best they can without changing the interface. In this paper, the interface is the ISA, and Skia is a clever optimization inside of the CPU front-end. Skia shows that there is fruit to be picked here. It would be interesting to examine potential performance gains from architectural (i.e., ISA) changes to pick the same fruit.

annie's blog 1 month ago

Two small UI things that might not bother me if I were a completely different person

But I am who I am and these two small very small really inconsequential things enrage me so here we are STOP IT. If I am logging in do you think I want the pre-login home page to stay open in a separate tab NO. I do NOT. I am here for one purpose and one purpose only and that is to login. Tabs are precious. I do not have any to waste on the prior page, the pointless page, the unused and unneeded pre-logged-in home page that you insist on keeping open in its own tab. Do you think I’m going to tab back to it and read your latest homepage copy or peruse the social proof or NO. I am NOT. I am ALREADY using the product that is why I am here to LOG IN. Quit target blanking the login button. Want to do action? Click this button here on the right side! Want to see things related to the action you just took or will most likely take next? No problem! Click this button. Where is it? On the right side near the last button you clicked? NO IT IS WAY OVER HERE ON THE LEFT SIDE! SURPRISE! Click it. Go ahead. Want to do the final action in this sequence of clicks which have to be clicked sequentially to do the thing? Okay! Click the third button. Where is it? Here? On the left side where we’re now putting buttons? NO! On the right side where the first button was? ALSO NO! It’s at the BOTTOM. You fool. You absolute idiot. Why didn’t you know that.

Manuel Moreale 2 months ago

New site, kinda

If you’re reading this blog using RSS or via email (when I remember to send the content via email), you likely didn’t notice it. And if you’re reading my blog in the browser but are not a sharp observer, chances are, you also didn’t notice it. A new version of my site is live. At first glance, not much has changed. The typeface is still the same—love you, Iowan—the layout is still the same, the colours are still the same. For the most part, the site should still feel pretty much the same. So what has changed? A lot, especially under the hood. For example: I have rewritten the entire CSS, and I’m no longer using SASS since it’s no longer needed; interviews are now separate from regular content at the backend level and have their own dedicated URL structure (old URLs should still work, though); the site is now better structured to be expanded into something more akin to a digital garden than “just” a blog. Since I had to rewrite all the frontend code, I took this opportunity to tweak a few things here and there: quotes have a new style, the guestbook has been redesigned (go sign it if you haven’t already), typography has been slightly tweaked in a couple of places, and the site should now scale much better on very big screens. More importantly, though, P&B interviews now have a more unique design—and a new colour scheme—something that makes me very happy. There are so many things I want to do for this series, but I just don’t have the time to dedicate to this, so I’m happy to have at least managed to give them a more unique identity here on the site. This space is still a work in progress. It will always be a work in progress, so expect things to change over time as I fine-tune minor details here and there. Thank you for keeping RSS alive. You're awesome. Email me :: Sign my guestbook :: Support for 1$/month :: See my generous supporters :: Subscribe to People and Blogs

Dominik Weber 2 months ago

Rules for creating good-looking user interfaces, from a developer

Creating good-looking user interfaces has always been a struggle for me. If you're in the same camp, this might help. I recently redesigned [Lighthouse](https://lighthouseapp.io/), and during that process built a system that helped me create much better designs than I ever did before. This system is about achieving the best possible design with the least amount of effort.

The Jolly Teapot 2 months ago

Reading my blog

This weekend I spent hours tweaking little cosmetic and technical aspects of this blog. I finally managed to change the HTML of the footnotes, which I am happy to say is now an aside element and not a section element anymore. Semantically, I think it makes more sense like this. 1 I wonder why the footnotes module I use on Eleventy relies on a section element, especially one that lacks a heading component, which creates a little issue in the W3C validator. This is what MDN says about the aside element: The HTML element represents a portion of a document whose content is only indirectly related to the document's main content. Asides are frequently presented as sidebars or call-out boxes. This is the explanation for section: The HTML element represents a generic standalone section of a document, which doesn't have a more specific semantic element to represent it. Sections should always have a heading, with very few exceptions. The one that is best-fitted for footnotes is, to me, without question, the aside. While I had the bonnet open and was getting my hands dirty, I also removed all the attributes generated by the footnotes module, so hopefully nothing will appear broken for you in your browser, RSS reader, or read-later app. 2 I also removed some unnecessary lines of markup, keeping this website as light and simple as possible, as always. 3 Removing nearly meaningless lines of code to improve the site is great, but actually, what I should have done is add more entries (at least one). This is something of a tradition here: tweaking instead of writing, fine-tuning existing HTML instead of kneading new ideas. As I was working in the “back-office”, I had a few looks at pages and posts to see if everything was working as intended; Eleventy’s local mode is a great way to do this but I tend to prefer experimenting with my HTML and CSS in vivo. While I opened old posts to look at how they rendered, I ended up reading them. I noticed two things. First, that I seemingly forget about 85% of what I write.
Reading these articles didn’t feel like reading something completely new — I remember the topic and the gist of what I said — but I was surprised at how much I forgot about these topics, rants, and comments. Writing makes me refine my thoughts, but I think publishing makes my brain click the “Move to bin” command for these now-safely-backed-up thoughts. Second, and this may sound a bit strange, but I enjoyed reading these old posts. Not in a “oh, this is a good post” way — some of them felt rather uninspired actually — but more as a “oh, this is easy to read” way. Of course my sentences flow in a familiar way: they are my sentences after all. Of course they are easy for me to read. Yet, it was somehow surprising to realise that I write a lot like I think, or even like I talk. It felt like travelling back in time and observing myself, and being curious about what I’ll say next. A little like rewatching a Columbo episode for the third or fourth time: you know the story, you know what’s going to happen, but you notice a new piece of furniture, hear a new line of dialogue, or realise that a famous actor had a small part in it all along. It’s like I get to know myself a little better every time I read my older posts. Is this weird? Anyway, before Christopher Nolan jumps in and buys the rights for this blog post, I realise it also means that if you’re reading this blog regularly and have never met me in person, well, surprise, you won’t be surprised. I talk like I write and write like I talk. If you’re reading this, you kind of already know me somehow. This is yet another reason why I love having a blog. Writing makes me think better, and thinking improves my writing. The motivation to write something new encourages me to read more, to learn more. And now I just realised that reading my own words is a pleasant way of reconnecting with part of myself.
On top of these self-appreciation and self-improvement aspects of having a blog, managing a blog like this one, again, despite being a very simple and minimal blog, is also a fantastic way to learn how to operate a website, how to do CSS, HTML, etc. Very rarely does working on this site feel like actual work. It’s a hobby; it’s only joy. Sure, spending three hours at a time inside a JavaScript file, waiting for the site to recompile, refreshing the browser tabs, and starting over, again and again, just to change the way footnotes are displayed can indeed feel like a huge pain (and it was). But guess what? This whole chain of events apparently inspired me to write something new. I can’t wait to read this again in three years: “This device isn’t a spaceship, it’s a time machine.”

1. I am a little fascinated by HTML elements and what they are intended for. It pains me a little when I see “frameworks” like Tailwind CSS defaulting to using <div> everywhere. I like that my articles are within an <article> element, my navigation parts in a <nav>, etc. ↩︎
2. This is another advantage of using specific HTML: the CSS can just point to the element itself, no need for classes. ↩︎
3. Well, turns out I had to work on this harder than I expected, as my changes didn’t survive a new build of the site (apparently): this should be good now; noticed this two weeks after publishing this post. ↩︎
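For what it's worth, a minimal sketch of footnote markup built on <aside>, including the heading that keeps validators happy, might look like this (an illustration, not this site's actual markup):

```html
<!-- Footnotes as an aside: only indirectly related to the main content -->
<aside>
  <h2>Footnotes</h2>
  <ol>
    <li id="fn-1">
      A note that supplements the main text.
      <a href="#fnref-1" aria-label="Back to reference 1">&#8617;</a>
    </li>
  </ol>
</aside>
```

Because the element is specific, a stylesheet can target it directly (e.g. `aside ol`) with no generated classes required.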

0 views
Cassidy Williams 2 months ago

I made a tree visualizer

I was going through some old folders on my laptop and found some old code from my React Training teaching days for visualizing how component trees in React work, and turned it into its own standalone web app! Here’s the app, if you’re not in the mood to read more.

Back when I was teaching React full time, one of my fellow teachers Brad made a tool that let you show a tree (like a data structure tree), and we would use it in our lectures to explain what prop drilling and context were, and how components could work together in React apps. I loved using the tool, and thought about some use cases for it when I found the old code, so I spruced it up a bit. It has some keyboard commands for common actions. Most useful (to me) is the ability to quickly make images for sharing!

The original code had each node maintain its own internal state, and it was kind of recursive in how the nodes rendered (see exact lines here). In retrospect, I probably should have changed it to be rendered with a big state tree at the top and serialized in the URL or local storage (kind of like what I did with PocketCal), but eh, it works!

See ya next time!
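That alternative design, a single state tree at the top serialized into the URL, can be sketched in a few lines. Everything here is hypothetical (the function names and tree shape are invented for illustration, not the app's actual code):

```javascript
// A tree is plain data: each node has a label and a list of children.
// Serializing the whole tree to a URL-safe string lets the layout live
// in the URL hash (or local storage) instead of per-node component state.
function serializeTree(tree) {
  // encodeURIComponent keeps the JSON safe to embed in a URL fragment
  return encodeURIComponent(JSON.stringify(tree));
}

function deserializeTree(encoded) {
  return JSON.parse(decodeURIComponent(encoded));
}

// Example round trip:
const tree = {
  label: "App",
  children: [
    { label: "Header", children: [] },
    { label: "Main", children: [{ label: "Card", children: [] }] },
  ],
};
const encoded = serializeTree(tree);
// In a browser you would then write: location.hash = encoded;
// and decode it again on page load to restore the tree.
```

The round trip shows the whole tree survives intact, which is what makes shareable links possible.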

0 views
Martin Fowler 3 months ago

Research, Review, Rebuild: Intelligent Modernisation with MCP and Strategic Prompting

The Bahmni open-source hospital management system began over nine years ago with a front end using AngularJS and an OpenMRS REST API. Rahul Ramesh wished to convert this to use a React + TypeScript front end with an HL7 FHIR API. In exploring how to do this modernization, he used a structured prompting workflow of Research, Review, and Rebuild - together with Cline, Claude 3.5 Sonnet, an Atlassian MCP server, and a filesystem MCP server. Changing a single control would normally take 3–6 days of manual effort, but with these tools it was completed in under an hour at a cost of under $2.

0 views
Den Odell 3 months ago

Code Reviews That Actually Improve Frontend Quality

Most frontend reviews pass quickly. Linting's clean, TypeScript's happy, nothing looks broken. And yet: a modal won't close, a button's unreachable, an API call fails silently. The code was fine. The product wasn't.

We say we care about frontend quality. But most reviews never look at the thing users actually touch. A good frontend review isn't about nitpicking syntax or spotting clever abstractions. It's about seeing what this code becomes in production. How it behaves. What it breaks. What it forgets. If you want to catch those bugs, you need to look beyond the diff. Here's what matters most, and how to catch these issues before they ship.

When reviewing, start with the obvious question: what happens if something goes wrong? If the API fails, the user is offline, or a third-party script hangs, if the response is empty, slow, or malformed, will the UI recover? Will the user even know? If there's no loading state, no error fallback, no retry logic, the answer is probably no. And by the time it shows up in a bug report, the damage is already done.

Once you've handled system failures, think about how real people interact with this code. Does Tab reach every element it should? Does Escape close the modal? Does keyboard focus land somewhere useful after a dialog opens? A lot of code passes review because it works for the developer who wrote it. The real test is what happens on someone else's device, with someone else's habits, expectations, and constraints.

Performance bugs hide in plain sight. Watch out for nested loops that create quadratic time complexity: fine on 10 items, disastrous on 10,000. Recalculating values on every render is also a performance hit waiting to happen. And a one-line import that drags in 100KB of unused helpers? If you miss it now, Lighthouse will flag it later. The worst performance bugs rarely look ugly. They just feel slow. And by then, they've shipped.

State problems don't always raise alarms.
But when side effects run more often than they should, when event listeners stick around too long, when flags toggle in the wrong order, things go wrong. Quietly. Indirectly. Sometimes only after the next deploy. If you don't trace through what actually happens when the component (or view) initializes, updates, or gets torn down, you won't catch it.

Same goes for accessibility. Watch out for missing labels, skipped headings, broken focus traps, and no live announcements when something changes, like a toast message appearing without a screen reader ever announcing it. No one's writing maliciously; they're just not thinking about how it works without a pointer. You don't need to be an accessibility expert to catch these basics. The fixes aren't hard. The hard part is noticing.

And sometimes, the problem isn't what's broken. It's what's missing. Watch out for missing empty states, no message when a list is still loading, and no indication that an action succeeded or failed. The developer knows what's going on. The user just sees a blank screen.

Other times, the issue is complexity. The component fetches data, transforms it, renders markup, triggers side effects, handles errors, and logs analytics, all in one file. It's not technically wrong. But it's brittle. And no one will refactor it once it's merged. Call it out before it calcifies.

Same with naming. A function called handleClick might sound harmless, until you realize it toggles login state, starts a network request, and navigates the user to a new route. That's not a click handler. It's a full user flow in disguise. Reviews are the last chance to notice that sort of thing before it disappears behind good formatting and familiar patterns.

A good review finds problems. A great review gets them fixed without putting anyone on the defensive. Keep the focus on the code, not the coder. "This component re-renders on every keystroke" lands better than "You didn't memoize this." Explain why it matters.
"This will slow down typing in large forms" is clearer than "This is inefficient." And when you point something out, give the next step. "Consider using useMemo here" is a path forward. "This is wrong" is a dead end. Call out what's done well. A quick "Nice job handling the loading state" makes the rest easier to hear. If the author feels attacked, they'll tune out. And the bug will still be there.

What journey is this code part of? What's the user trying to do here? Does this change make that experience faster, clearer, or more resilient? If you can't answer that, open the app. Click through it. Break it. Slow it down.

Better yet, make it effortless. Spin up a temporary, production-like copy of the app for every pull request. Now anyone, not just the reviewer, can click around, break things, and see the change in context before it merges. Tools like Vercel Preview Deployments, Netlify Deploy Previews, GitHub Codespaces, or Heroku Review Apps make this almost effortless. Catch bugs here, and they never make it to production. Miss them, and your users will find them for you.

The real bugs aren't in the code; they're in the product, waiting in your next pull request.
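To make the quadratic-loop warning from earlier concrete, here is a small sketch. The data shapes are invented for illustration; the point is the pattern: Array.find inside a loop rescans the whole list on every iteration (O(n·m)), while building a Map first brings each lookup down to O(1):

```javascript
// Quadratic: for every order, scan the whole users array with find().
function attachUserNamesSlow(orders, users) {
  return orders.map((order) => ({
    ...order,
    userName: users.find((u) => u.id === order.userId)?.name ?? null,
  }));
}

// Linear: index users by id once, then each lookup is O(1).
function attachUserNamesFast(orders, users) {
  const nameById = new Map(users.map((u) => [u.id, u.name]));
  return orders.map((order) => ({
    ...order,
    userName: nameById.get(order.userId) ?? null,
  }));
}

const users = [{ id: 1, name: "Ada" }, { id: 2, name: "Grace" }];
const orders = [{ id: "a", userId: 2 }, { id: "b", userId: 1 }];
// Both functions produce identical results; only the asymptotics differ.
```

Same output either way; on 10 items nobody notices the difference, on 10,000 everybody does.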

0 views
Josh Comeau 3 months ago

An Interactive Guide to SVG Paths

SVG gives us many different primitives to work with, but by far the most powerful is the <path> element. Unfortunately, it’s also the most inscrutable, with its compact Regex-style syntax. In this tutorial, we’ll demystify this infamous element and see some of the cool things we can do with it!

0 views