Latest Posts
Ahmad Alfy, 1 month ago

Your URL Is Your State

A couple of weeks ago, while publishing The Hidden Cost of URL Design, I needed to add SQL syntax highlighting. I headed to the PrismJS website, trying to remember whether it should be added as a plugin or something else. Overwhelmed by the number of options on the download page, I went back to my code. At the top of my PrismJS file, I found a comment containing a URL. I had completely forgotten about this. I clicked the URL, and it opened the PrismJS download page with every checkbox, dropdown, and option pre-selected to match my exact configuration. Themes chosen. Languages selected. Plugins enabled. Everything, perfectly reconstructed from that single URL.

It was one of those moments where something you once knew suddenly clicks again with fresh significance. Here was a URL doing far more than just pointing to a page. It was storing state, encoding intent, and making my entire setup shareable and recoverable. No database. No cookies. No localStorage. Just a URL.

This got me thinking: how often do we, as frontend engineers, overlook the URL as a state management tool? We reach for all sorts of abstractions to manage state, such as global stores, contexts, and caches, while ignoring one of the web’s most elegant and oldest features: the humble URL. In my previous article, I wrote about the hidden costs of bad URL design. Today, I want to flip that perspective and talk about the immense value of good URL design. Specifically, how URLs can be treated as first-class state containers in modern web applications.

Scott Hanselman famously said “URLs are UI” and he’s absolutely right. URLs aren’t just technical addresses that browsers use to fetch resources. They’re interfaces. They’re part of the user experience. But URLs are more than UI. They’re state containers. Every time you craft a URL, you’re making decisions about what information to preserve, what to make shareable, and what to make bookmarkable.
Think about what URLs give us for free: shareability, bookmarkability, a working Back button, and deep linking. URLs make web applications resilient and predictable. They’re the web’s original state management solution, and they’ve been working reliably since 1991. The question isn’t whether URLs can store state. It’s whether we’re using them to their full potential.

Before we dive into examples, let’s break down how URLs encode state. A typical stateful URL combines a path, a query string, and a fragment. For many years, these were considered the only components of a URL. That changed with the introduction of Text Fragments, a feature that allows linking directly to a specific piece of text within a page. You can read more about it in my article Smarter than ‘Ctrl+F’: Linking Directly to Web Page Content. Different parts of the URL encode different types of state.

Sometimes you’ll see multiple values packed into a single key using delimiters like commas or plus signs. It’s compact and human-readable, though it requires manual parsing on the server side. Developers often encode complex filters or configuration objects into a single query string. A simple convention uses key–value pairs separated by commas, while others serialize JSON or even Base64-encode it for safety. For flags or toggles, it’s common to pass booleans explicitly or to rely on the key’s presence as truthy. This keeps URLs shorter and makes toggling features easy.

Another old pattern is bracket notation, which represents arrays in query parameters. It originated in early web frameworks like PHP, where appending brackets to a parameter name signals that multiple values should be grouped together. Many modern frameworks and parsers (like the qs library or Express middleware) still recognize this pattern automatically. However, it’s not officially standardized in the URL specification, so behavior can vary depending on the server or client implementation. Notice how it even breaks the syntax highlighting on my website.

The key is consistency. Pick patterns that make sense for your application and stick with them.
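The multi-value, flag, and bracket-notation patterns above can be sketched with the built-in URLSearchParams API available in browsers and Node. The parameter names here are illustrative assumptions, not taken from the article:

```javascript
// Parsing the common query-string encoding patterns with URLSearchParams.
const params = new URLSearchParams('tags=css,js&dark&debug=true&color[]=red&color[]=blue');

// Comma-delimited multi-values need manual splitting:
const tags = (params.get('tags') ?? '').split(',').filter(Boolean); // ['css', 'js']

// A key's presence as a truthy flag, or an explicit boolean value:
const darkMode = params.has('dark');          // true
const debug = params.get('debug') === 'true'; // true

// Bracket notation: URLSearchParams treats "color[]" as a plain key,
// so getAll() collects the repeated values into an array:
const colors = params.getAll('color[]');      // ['red', 'blue']
```

Note that URLSearchParams itself has no special handling for commas or brackets; the splitting convention is something your application has to apply consistently on both ends.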
Let’s look at real-world examples of URLs as state containers.

PrismJS Configuration: the entire syntax highlighter configuration encoded in the URL. Change anything in the UI, and the URL updates. Share the URL, and someone else gets your exact setup. This one uses the hash fragment rather than query parameters, but the concept is the same.

GitHub Line Highlighting: a link to a specific file that highlights lines 108 through 136. Click this link anywhere, and you’ll land on the exact code section being discussed.

Google Maps: coordinates, zoom level, and map type, all in the URL. Share this link, and anyone can see the exact same view of the map.

Figma and Design Tools: before shareable design links, finding an updated screen or component in a large file was a chore. Someone had to literally show you where it lived, scrolling and zooming across layers. Today, a Figma link carries all that context: canvas position, zoom level, selected element. Literally everything needed to drop you right into the workspace.

E-commerce Filters: this is one of the most common real-world patterns you’ll encounter. Every filter, sort option, and price range preserved. Users can bookmark their exact search criteria and return to it anytime. Most importantly, they can come back to it after navigating away or refreshing the page.

Before we discuss implementation details, we need to establish a clear guideline for what should go into the URL. Not all state belongs in URLs. If you are not sure whether a piece of state belongs in the URL, ask yourself: if someone else clicked this URL, should they see the same state? If so, it belongs in the URL. If not, use a different state management approach.

The modern History API makes URL state management straightforward. The popstate event fires when the user navigates with the browser’s Back or Forward buttons.
It lets you restore the UI to match the URL, which is essential for keeping your app’s state and history in sync. Usually your framework’s router handles this for you, but it’s good to know how it works under the hood. React Router and Next.js provide hooks that make this even cleaner.

Now that we’ve seen how URLs can hold application state, let’s look at a few best practices that keep them clean, predictable, and user-friendly. Don’t pollute URLs with default values; instead, apply defaults in your code when reading parameters. For high-frequency updates (like search-as-you-type), debounce URL changes.

When deciding between pushState and replaceState, think about how you want the browser history to behave. pushState creates a new history entry, which makes sense for distinct navigation actions like changing filters, pagination, or navigating to a new view; users can then use the Back button to return to the previous state. replaceState, on the other hand, updates the current entry without adding a new one, making it ideal for refinements such as search-as-you-type or minor UI adjustments where you don’t want to flood the history with every keystroke.

When designed thoughtfully, URLs become more than just state containers. They become contracts between your application and its consumers. A good URL defines expectations for humans, developers, and machines alike. A well-structured URL draws the line between what’s public and what’s private, client and server, shareable and session-specific. It clarifies where state lives and how it should behave. Developers know what’s safe to persist, users know what they can bookmark, and machines know what’s worth indexing. URLs, in that sense, act as interfaces: visible, predictable, and stable.

Readable URLs explain themselves. Consider the difference between two URLs for the same page: one hides intent behind opaque identifiers, while the other tells a story. A human can read the second and understand what they’re looking at. A machine can parse it and extract meaningful structure.
Jim Nielsen calls these “examples of great URLs”: URLs that explain themselves.

URLs are cache keys. Well-designed URLs enable better caching strategies. You can even visualize a user’s journey without any extra tracking code: your analytics tools can track this flow without additional instrumentation, and every URL parameter becomes a dimension you can analyze. URLs can also communicate API versions, feature flags, and experiments, which makes gradual rollouts and backwards compatibility much more manageable.

Even with the best intentions, it’s easy to misuse URL state. Here are common pitfalls.

The classic single-page app mistake: if your app forgets its state on refresh, you’re breaking one of the web’s fundamental features. Users expect URLs to preserve context. I remember a viral video from years ago where a Reddit user vented about an e-commerce site: every time she hit “Back,” all her filters disappeared. Her frustration summed it up perfectly. If users lose context, they lose patience.

This one seems obvious, but it’s worth repeating: URLs are logged everywhere (browser history, server logs, analytics, referrer headers). Treat them as public.

Choose parameter names that make sense. Future you (and your team) will thank you.

If you need to Base64-encode a massive JSON object, the URL probably isn’t the right place for that state. Browsers and servers impose practical limits on URL length (usually between 2,000 and 8,000 characters), but the reality is more nuanced. As a detailed Stack Overflow answer explains, limits come from a mix of browser behavior, server configurations, CDNs, and even search engine constraints. If you’re bumping against them, it’s a sign you need to rethink your approach.

Respect browser history. If a user action should be “undoable” via the Back button, use pushState. If it’s a refinement, use replaceState.

That PrismJS URL reminded me of something important: good URLs don’t just point to content.
They describe a conversation between the user and the application. They capture intent, preserve context, and enable sharing in ways that no other state management solution can match. We’ve built increasingly sophisticated state management libraries like Redux, MobX, Zustand, Recoil and others. They all have their place, but sometimes the best solution is the one that’s been there all along.

In my previous article, I wrote about the hidden costs of bad URL design. Today, we’ve explored the flip side: the immense value of good URL design. URLs aren’t just addresses. They’re state containers, user interfaces, and contracts all rolled into one. If your app forgets its state when you hit refresh, you’re missing one of the web’s oldest and most elegant features.

What URLs give us for free:
- Shareability: send someone a link, and they see exactly what you see
- Bookmarkability: save a URL, and you’ve saved a moment in time
- Browser history: the Back button just works
- Deep linking: jump directly into a specific application state

Where different kinds of state belong:
- Path segments: best used for hierarchical resource navigation (user 123’s posts, documentation structure, application sections)
- Query parameters: perfect for filters, options, and configuration (UI preferences, pagination, data filtering, date ranges)
- Anchor (hash): ideal for client-side navigation and page sections (GitHub line highlighting, scroll to section, single-page app routing, though that last use is rare these days)

Good candidates for URL state:
- Search queries and filters
- Pagination and sorting
- View modes (list/grid, dark/light)
- Date ranges and time periods
- Selected items or active tabs
- UI configuration that affects content
- Feature flags and A/B test variants

Poor candidates for URL state:
- Sensitive information (passwords, tokens, PII)
- Temporary UI states (modal open/closed, dropdown expanded)
- Form input in progress (unsaved changes)
- Extremely large or complex nested data
- High-frequency transient states (mouse position, scroll position)

Why URLs make good cache keys:
- Same URL = same resource = cache hit
- Query params define cache variations
- CDNs can cache intelligently based on URL patterns
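Putting the post’s recommendations together, here is a small vanilla-JavaScript sketch. The sort/page parameter names, the DEFAULTS object, and the debounce delay are illustrative assumptions, not from the article; it shows the URL as the source of truth, defaults kept out of the URL, and a debounce helper for high-frequency updates:

```javascript
// Defaults live in code, not in the URL.
const DEFAULTS = { sort: 'newest', page: 1 };

// Read state from a query string, falling back to defaults.
function readFilters(search) {
  const params = new URLSearchParams(search);
  return {
    sort: params.get('sort') ?? DEFAULTS.sort,
    page: Number(params.get('page') ?? DEFAULTS.page),
  };
}

// Serialize only non-default values, keeping URLs clean.
function serializeFilters(filters) {
  const params = new URLSearchParams();
  if (filters.sort !== DEFAULTS.sort) params.set('sort', filters.sort);
  if (filters.page !== DEFAULTS.page) params.set('page', String(filters.page));
  return params.toString();
}

// Debounce high-frequency writes (search-as-you-type) before touching history.
function debounce(fn, wait) {
  let timer;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), wait);
  };
}

// In a browser this would be wired up roughly like so:
//   history.pushState(null, '', '?' + serializeFilters(next));    // distinct action
//   history.replaceState(null, '', '?' + serializeFilters(next)); // refinement
//   window.addEventListener('popstate', () => render(readFilters(location.search)));
```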

Ahmad Alfy, 1 month ago

The Hidden Cost of URL Design

When we architected an e-commerce platform for one of our clients, we made what seemed like a simple, user-friendly decision: use clean, flat URLs. Products, categories, and pages would all live directly at the root level. No prefixes, no /products/ or /category/ clutter. Minimalist paths that felt simple. This decision, made hastily and without proper discussion, would later cost us hours spent on optimization.

The problem wasn’t the URLs themselves. It was that we treated URL design as a UX decision when it’s fundamentally an architectural decision with cascading technical implications. Every request to the application triggered two backend API calls. Every bot crawling a malformed URL hit the database twice. Every 404 was expensive.

This article isn’t about URL best practices you’ve read a hundred times (keeping URLs short, avoiding special characters, or using hyphens instead of underscores). This is about something rarely discussed: how your URL structure shapes your entire application architecture, performance characteristics, and operational costs.

Flat URLs feel right. They’re intuitive, readable, and align with how users think about content. No technical jargon, no hierarchy to remember. This design philosophy emerged from the SEO community’s consensus that simpler URLs perform better in search rankings. But here’s what the SEO guides don’t tell you: flat URLs trade determinism for aesthetics. From a bare slug alone, your application cannot know what kind of entity you’re requesting. This ambiguity means your application must ask rather than know. And asking is expensive.

Many traditional CMSs solved this problem decades ago. Systems like Joomla, WordPress (to some extent), and Drupal maintain dedicated SEF (Search Engine Friendly) URL tables, which are essentially lookup dictionaries that map clean URLs to their corresponding entity types and IDs. When a clean URL is requested, these systems do a single, fast database lookup: one query, instant resolution, minimal overhead.
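The SEF-table idea can be sketched in a few lines. Here a Map stands in for the indexed database table, and the slugs and IDs are made up for illustration:

```javascript
// SEF-style URL resolution: one indexed lookup maps a clean slug to its
// entity type and ID, with no guessing across separate entity endpoints.
const sefTable = new Map([
  ['leather-jacket', { type: 'product',  id: 42 }],
  ['jackets',        { type: 'category', id: 7 }],
]);

function resolveSlug(slug) {
  return sefTable.get(slug) ?? { type: 'not_found' };
}
```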
The URL ambiguity is resolved at the database layer with an indexed lookup rather than through sequential API calls. But not every system works this way. In our case, Magento’s API architecture didn’t expose a unified URL resolution endpoint. The frontend had to query separate endpoints for products and categories, which brings us to our problem.

In a structured URL system, the routing decision is instant. In a flat URL system, you need a resolver that asks each endpoint in turn. This might seem like a minor difference, a few extra database queries. But let’s see what this actually costs at scale.

Our stack for this particular client consisted of a Nuxt.js frontend (running in SSR mode) and a Magento backend. Every URL that hit the application went through the same flow: the user requests a slug, then the Nuxt SSR server queries the Magento API twice to confirm what the slug represents. If neither endpoint returns a match, the app renders a 404. Trace that flow and the inefficiency is obvious: every page view, valid or not, costs two backend lookups.

Now scale this during a traffic spike, say Black Friday or a major product launch. Our backend autoscaling would kick in, spinning up new instances to handle what was essentially artificial load created by our URL design. The kicker? This wasn’t a bug. This was the architecture working exactly as designed.

There’s another subtle issue: in systems without slug uniqueness constraints, a product and a category could both use the same slug. Now your resolver doesn’t just need to check what exists. It needs to decide which one to serve. Do you prioritize products? Categories? First-created wins? This ambiguity isn’t just a performance problem; it’s a business logic problem. If a user comes from a marketing email expecting a product but lands on a category page instead, that’s a conversion lost.

If your monitoring shows duplicate backend queries per page request, your 404 pages respond as slowly as valid pages, or bot traffic drives disproportionate backend load, keep reading. Your URLs might be costing you more than you think.
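The flat-URL resolver described above can be sketched as follows. fetchCategory and fetchProduct are hypothetical stand-ins for the separate backend endpoints, each returning the entity or null:

```javascript
// Without a unified resolution endpoint, the server must ask each backend
// endpoint in turn to learn what a bare slug represents.
async function resolveFlatUrl(slug, { fetchCategory, fetchProduct }) {
  const category = await fetchCategory(slug); // backend call #1
  if (category) return { type: 'category', entity: category };

  const product = await fetchProduct(slug);   // backend call #2
  if (product) return { type: 'product', entity: product };

  return { type: 'not_found' };               // two wasted calls per 404
}
```

Note the cost profile: a category resolves after one call, a product always costs two, and every 404 (including every malformed URL a crawler invents) also costs two.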
Faced with this problem on a platform with 100k+ SKUs, we had to choose a solution that balanced performance gains with implementation reality. URL restructuring with 301 redirects would be a massive undertaking. Maintaining redirect maps for that many products, ensuring no SEO disruption, and coordinating the migration was simply too risky and resource-intensive. Instead, we implemented a two-part solution that leveraged what we already had and made smart optimizations where it mattered most.

We realized the Nuxt server already cached the category tree for building navigation menus. Categories don’t change frequently, so this cache was stable and reliable. We modified the URL resolver to check the cached categories first and fall back to a single product lookup. Result: we went from two backend calls per request to just one for product pages, and zero additional calls for category pages (they already hit the cache). Categories resolved instantly; products required only one API call instead of two.

Here’s the key insight: when users navigate within the application, we already know what they clicked on. A product card knows it’s linking to a product. A category menu knows it’s linking to a category. So we updated all internal links to include a simple query parameter hinting at the entity type, and read that hint in the route middleware. Result: internal navigation happens purely on the client side with direct API calls. The server-side resolver is only needed for direct URL access, external links, and the first page load.

Since most traffic after the initial landing is internal navigation, this reduced our server-side resolution load by approximately 70-80%.

This solution had several advantages for our specific context. The query parameter approach might seem inelegant, but remember: these parameters only appear during internal navigation within the SPA. When users share links or search engines crawl, they see clean URLs. The type hint exists only in the client-side routing context.
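A sketch of the two-part optimization described above. categoryCache represents the in-memory category tree the server already held, and typeHint the query-parameter hint carried by internal links; these names, and fetchProduct, are illustrative, not the actual implementation:

```javascript
async function resolveOptimized(slug, { categoryCache, fetchProduct, typeHint }) {
  // Internal navigation: the link already told us what it points to,
  // so no resolution is needed at all.
  if (typeHint === 'product') {
    return { type: 'product', entity: await fetchProduct(slug) };
  }
  // Categories resolve from the existing cache: in-memory, zero API calls.
  if (categoryCache.has(slug)) {
    return { type: 'category', entity: categoryCache.get(slug) };
  }
  // Otherwise a single product lookup instead of two sequential calls.
  const product = await fetchProduct(slug);
  return product ? { type: 'product', entity: product } : { type: 'not_found' };
}
```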
Here’s how requests are handled in our optimized system: cached categories first, a single product lookup second, and a 404 only when both miss.

If you’re starting fresh, you have the luxury of making informed decisions before the first line of code. Here’s what we wish we’d known and what we now recommend to clients. Default to deterministic URLs unless you have a compelling reason not to; a good starting point is giving each entity type its own path prefix. It’s far easier to remove structure later (with redirects) than to add it. Collapsing a prefixed URL down to a flat one is a simple 301. Going the other direction means updating every existing URL, redirecting old URLs, potential SEO turbulence, and user confusion with bookmarks.

Before finalizing your URL scheme, ask whether your backend can actually resolve your URLs efficiently. And when choosing flat URLs, explicitly budget for the infrastructure and development time they demand. These aren’t failures. They’re the actual cost of flat URLs. If the business value (SEO, UX, brand consistency) exceeds these costs, great. But make the trade-off explicit rather than discovering it six months in.

If you do use flat URLs, make slug uniqueness a hard constraint at the database or application level. This prevents the slug collision problem entirely. Yes, it means occasionally appending numbers, using prefixes, or rejecting slugs. It’s far better than ambiguous runtime behavior.

Whatever you choose, document why, using an Architecture Decision Record (ADR). Future developers (including future you) will thank you.

We should treat URL structure as a public API contract. Like any API, URLs create expectations that are expensive to break. URL structure decisions are cheapest and most flexible at the start of a project (ideally in week one), when a quick discussion can define the architecture at little cost. Once development begins, changes require some rework but are still manageable. After launch, however, even minor adjustments trigger a chain reaction involving redirects, SEO checks, and cache updates. By the time scaling issues appear, restructuring URLs becomes a complex, time-consuming, and costly endeavor.
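The slug-uniqueness constraint can be enforced at the application level with something like this sketch, where `existing` stands in for an indexed database uniqueness check and the suffix convention is an illustrative choice:

```javascript
// Guarantee a unique slug by appending a numeric suffix until it is free.
function uniqueSlug(base, existing) {
  if (!existing.has(base)) return base;
  let n = 2;
  while (existing.has(`${base}-${n}`)) n += 1;
  return `${base}-${n}`;
}
```

Pairing this with a unique index on the slug column makes collisions impossible rather than merely unlikely.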
The takeaway: invest early in thoughtful URL design to avoid expensive fixes later. The lists below collect the details referenced throughout this article.

What a flat slug might point to:
- A product named “Leather Jacket”
- A category called “Leather Jacket”
- A blog post titled “Leather Jacket”
- A landing page for a “Leather Jacket” campaign
- Nothing at all (a 404)

What the ambiguity cost:
- Every valid page required 2 backend lookups (one fails, one succeeds)
- Every invalid URL triggered 2 backend lookups (both fail)
- Every crawler generated 2 database queries per attempt

The numbers, assuming:
- 100,000 page views per day
- 30% of traffic is bots/crawlers hitting invalid URLs
- Average API latency: 50ms per call

This translates to:
- Daily backend calls: 100,000 × 2 = 200,000 requests
- Bot overhead: 30,000 × 2 = 60,000 wasted requests
- Added latency per request: 100ms (2 × 50ms)

What we observed:
- Latency: P95 response times during peak traffic reached 800ms-1.2s
- Compute costs: backend infrastructure costs were 40% higher than projected
- Vulnerability: during bot attacks or crawler storms, the 2x request multiplier meant we were essentially DDoSing ourselves

You might be experiencing this issue if:
- Your application serves multiple entity types (products, categories, pages, posts) from flat URLs
- You’re using a headless/API-first architecture without unified URL resolution
- Your APM/monitoring shows 2+ similar backend queries per page request
- 404 pages have similar latency to valid pages (they shouldn’t)
- Bot traffic causes disproportionate backend load
- Backend autoscaling triggers don’t correlate with actual user traffic patterns

The optimized resolver:
- Check cached categories first: if the slug matches a cached category, route directly to the category handler (in-memory lookup, ~1ms)
- Query products if not a category: only make one API call to check if it’s a product
- Return 404 if neither: no entity found, render the 404 page

When the server-side resolver still runs:
- Direct URL access (user types URL or bookmarks)
- External links (social media, search engines, emails)
- First page load

Before optimization:
- Every request: 2 backend API calls
- Average response time: 800ms-1.2s (P95)
- Backend costs: 40% over projection

After optimization:
- Category pages (initial load): 0 additional backend calls (cached)
- Product pages (initial load): 1 backend call (50% reduction)
- Internal navigation: 0 server-side resolution (pure client-side)
- Average response time: 200-400ms (P95)
- Backend costs: reduced by ~35%

Why this approach worked for us:
- No URL changes: no redirects, no SEO impact, no user confusion
- Leveraged existing infrastructure: the category cache was already there
- Progressive enhancement: external/shared URLs still work perfectly (clean, no query params visible)
- Low implementation effort: mostly frontend changes, minimal backend work
- Immediate impact: deployed in one day, with no need to wait for backend API changes

Questions to answer before finalizing your URL scheme:
- Does our backend provide unified URL resolution? (SEF tables, single endpoint)
- Can we query by slug across all entity types efficiently?
- What’s the database query cost for “does this slug exist?”

Decision matrix:
- Backend has SEF tables → flat URLs viable
- Backend has separate endpoints → structured URLs recommended
- Backend has no resolution → build it or use structure

What flat URLs cost in infrastructure:
- Cache layer to store resolved slugs
- Additional backend capacity for resolution queries

And in development time:
- Building resolution logic
- Maintaining slug uniqueness
- Implementing cache invalidation
- Debugging cache consistency issues

Like any API, URLs:
- Define how clients (users, bots, search engines) interact with your system
- Create expectations that are expensive to break
- Have performance characteristics that affect the entire stack
- Must be versioned carefully (via redirects) if changed
- Should be designed with both current and future capabilities in mind

Before finalizing URLs, bring together:
- Frontend team: what’s cleanest for users?
- Backend team: what can we resolve efficiently?
- DevOps: what are the caching implications?
- SEO/Marketing: what’s the measurable impact?

Ahmad Alfy, 1 month ago

How Functional Programming Shaped (and Twisted) Frontend Development

A friend called me last week. Someone who’d built web applications for a long time before moving exclusively to backend and infra work. He’d just opened a modern React codebase for the first time in over a decade. “What the hell is this?” he asked. “What are all these generated class names? Did we just… cancel the cascade? Who made the web work this way?”

I laughed, but his confusion cut deeper than he realized. He remembered a web where CSS cascaded naturally, where the DOM was something you worked with, where the browser handled routing, forms, and events without twenty abstractions in between. To him, our modern frontend stack looked like we’d declared war on the platform itself. He asked me to explain how we got here. That conversation became this essay.

A disclaimer before we begin: this is one perspective, shaped by having lived through the first browser war. I applied hacks to make 24-bit PNGs work in IE6. I debugged hasLayout bugs at 2 AM. I wrote JavaScript when you couldn’t trust even basic APIs to work the same way across browsers. I watched jQuery become necessary, then indispensable, then legacy. I might be wrong about some of this. My perspective is biased for sure, but it also comes with the memory that the web didn’t need constant reinvention to be useful.

There’s a strange irony at the heart of modern web development. The web was born from documents, hyperlinks, and a cascading stylesheet language. It was always messy, mutable, and gloriously side-effectful. Yet over the past decade, our most influential frontend tools have been shaped by engineers chasing functional programming purity: immutability, determinism, and the elimination of side effects.

This pursuit gave us powerful abstractions. React taught us to think in components. Redux made state changes traceable. TypeScript brought compile-time safety to a dynamic language. But it also led us down a strange path. One where we fought against the platform instead of embracing it.
We rebuilt the browser’s native capabilities in JavaScript, added layers of indirection to “protect” ourselves from the DOM, and convinced ourselves that the web’s inherent messiness was a problem to solve rather than a feature to understand. The question isn’t whether functional programming principles have value. They do. The question is whether applying them dogmatically to the web (a platform designed around mutability, global scope, and user-driven chaos) made our work better, or just more complex.

To understand why functional programming ideals clash with web development, we need to acknowledge what the web actually is. The web is fundamentally side-effectful. CSS cascades globally by design. Styles defined in one place affect elements everywhere, creating emergent patterns through specificity and inheritance. The DOM is a giant mutable tree that browsers optimize obsessively; changing it directly is fast and predictable. User interactions arrive asynchronously and unpredictably: clicks, scrolls, form submissions, network requests, resize events. There’s no pure function that captures “user intent.”

This messiness is not accidental. It’s how the web scales across billions of devices, remains backwards-compatible across decades, and allows disparate systems to interoperate. The browser is an open platform with escape hatches everywhere. You can style anything, hook into any event, manipulate any node. That flexibility, and that refusal to enforce rigid abstractions, is the web’s superpower.

When we approach the web with functional programming instincts, we see this flexibility as chaos. We see globals as dangerous. We see mutation as unpredictable. We see side effects as bugs waiting to happen. And so we build walls.

Functional programming revolves around a few core principles: functions should be pure (same inputs → same outputs, no side effects), data should be immutable, and state changes should be explicit and traceable.
These ideas produce code that’s easier to reason about, test, and parallelize, in the right context of course. These principles had been creeping into JavaScript long before React. Underscore.js (2009) brought map, reduce, and filter to the masses. Lodash and Ramda followed with deeper FP toolkits including currying, composition, and immutability helpers. The ideas were in the air: avoid mutation, compose small functions, treat data transformations as pipelines.

React itself started with class components and setState, hardly pure FP. But the conceptual foundation was there: treat UI as a function of state, make rendering deterministic, isolate side effects. Then came Elm, a purely functional language created by Evan Czaplicki that codified the “Model-View-Update” architecture. When Dan Abramov created Redux, he explicitly cited Elm as inspiration. Redux’s reducers are directly modeled on Elm’s update functions: pure functions that take the current state and an action and return the next state.

Redux formalized what had been emerging patterns. Combined with React Hooks (which replaced stateful classes with functional composition), the ecosystem shifted decisively toward FP. Immutability became non-negotiable. Pure components became the ideal. Side effects were corralled into useEffect. Through this convergence (library patterns, Elm’s rigor, and React’s evolution), Haskell-derived ideas about purity became mainstream JavaScript practice.

In the early 2010s, as JavaScript applications grew more complex, developers looked to FP for salvation. jQuery spaghetti had become unmaintainable. Backbone’s two-way binding caused cascading updates (ironically, Backbone’s documentation explicitly advised against two-way binding, saying “it doesn’t tend to be terribly useful in your real-world app,” yet many developers implemented it through plugins). The community wanted discipline, and FP offered it: treat your UI as a pure function of state. Make data flow in one direction. Eliminate shared mutable state. React’s arrival in 2013 crystallized these ideals.
It promised a world where UI is a pure function of state: give it data, get back a component tree, re-render when data changes. No manual DOM manipulation. No implicit side effects. Just pure, predictable transformations. This was seductive. And in many ways, it worked. But it also set us on a path toward rebuilding the web in JavaScript’s image, rather than JavaScript in the web’s image.

CSS was designed to be global. Styles cascade, inherit, and compose across boundaries. This enables tiny stylesheets to control huge documents, and lets teams share design systems across applications. But to functional programmers, global scope is dangerous. It creates implicit dependencies and unpredictable outcomes. Enter CSS-in-JS: styled-components, Emotion, JSS. The promise was component isolation. Styles scoped to components, no cascading surprises, no naming collisions. Styles become data, passed through JavaScript, predictably bound to elements.

But this came at a cost. CSS-in-JS libraries generate styles at runtime, injecting them into style tags as components mount. This adds JavaScript execution to the critical rendering path. Server-side rendering becomes complicated: you need to extract styles during the render, serialize them, and rehydrate them on the client. Debugging involves opaque, runtime-generated class names. And you lose the cascade, the very feature that made CSS powerful in the first place.

Worse, you’ve moved a browser-optimized declarative language into JavaScript, a single-threaded runtime. The browser can parse and apply CSS in parallel, off the main thread. Your styled-components bundle? That’s main-thread work, blocking interactivity. The web had a solution. It’s called a stylesheet. But it wasn’t pure enough.

The industry eventually recognized these problems and pivoted to Tailwind CSS. Instead of runtime CSS generation, use utility classes. Instead of styled-components, compose classes in JSX. This was better; at least it’s compile-time, not runtime.
No more blocking the main thread to inject styles. No more hydration complexity. But Tailwind still fights the cascade. Instead of writing a button rule once and letting it cascade to all buttons, you repeat the same stack of utility classes on every single button element. You’ve traded runtime overhead for a different set of problems: class soup in your markup, massive HTML payloads, and losing the cascade’s ability to make sweeping design changes in one place.

And here’s where it gets truly revealing: when Tailwind added support for nested selectors (a feature that would let developers write more cascade-like styles), parts of the community revolted. David Khourshid (creator of XState) shared examples of using nested selectors in Tailwind, and the backlash was immediate. Developers argued this defeated the purpose of Tailwind, that it brought back the “problems” of traditional CSS, that it violated the utility-first philosophy.

Think about what this means. The platform has the cascade. CSS-in-JS tried to eliminate it and failed. Tailwind tried to work around it with utilities. And when Tailwind cautiously reintroduced a cascade-like feature, developers who had been trained by years of anti-cascade ideology rejected it. We’ve spent so long teaching people that the cascade is dangerous that even when their own tools try to reintroduce platform capabilities, they don’t want them. We’re not just ignorant of the platform anymore. We’re ideologically opposed to it.

React introduced synthetic events to normalize browser inconsistencies and integrate events into its rendering lifecycle. Instead of attaching listeners directly to DOM nodes, React uses event delegation: it listens at the root, then routes events to handlers through its own system. This feels elegant from a functional perspective. Events become data flowing through your component tree. You don’t touch the DOM directly. Everything stays inside React’s controlled universe. But native browser events already work.
They bubble, they capture, they’re well-specified. The browser has spent decades optimizing event dispatch. By wrapping them in a synthetic layer, React adds indirection: memory overhead for event objects, translation logic for every interaction, and debugging friction when something behaves differently than the native API. Worse, it trains developers to avoid the platform. Developers learn React’s event system, not the web’s. When they need to work with third-party libraries or custom elements, they hit impedance mismatches. addEventListener becomes a foreign API in their own codebase. Again: the web had this. The browser’s event system is fast, flexible, and well-understood. But it wasn’t controlled enough for the FP ideal of a closed system.

The logical extreme of “UI as a pure function of state” is client-side rendering: the server sends an empty HTML shell, JavaScript boots up, and the app renders entirely in the browser. From a functional perspective, this is clean. Your app is a deterministic function that takes initial state and produces a DOM tree. From a web perspective, it’s a disaster. The browser sits idle while JavaScript parses, executes, and manually constructs the DOM. Users see blank screens. Screen readers get empty documents. Search engines see nothing. Progressive rendering, one of the browser’s most powerful features, goes unused.

The industry noticed. Server-side rendering came back. But because the mental model was still “JavaScript owns the DOM,” we got hydration: the server renders HTML, the client renders the same tree in JavaScript, then React walks both and attaches event handlers. During hydration, the page is visible but inert. Clicks do nothing, forms don’t submit. This is architecturally absurd. The browser already rendered the page. It already knows how to handle clicks.
But because the framework wants to own all interactions through its synthetic event system, it must re-create the entire component tree in JavaScript before anything works.

The absurdity extends beyond the client. Infrastructure teams watch in confusion as every user makes double the number of requests: the server renders the page and fetches data, then the client boots up and fetches the exact same data again to reconstruct the component tree for hydration. Why? Because the framework can’t trust the HTML it just generated. It needs to rebuild its internal representation of the UI in JavaScript to attach event handlers and manage state. This isn’t just wasteful, it’s expensive. Database queries run twice. API calls run twice. Cache layers get hit twice. CDN costs double. And for what? So the framework can maintain its pure functional model where all state lives in JavaScript. The browser had the data. The HTML had the data. But that data wasn’t in the right shape. It wasn’t a JavaScript object tree, so we throw it away and fetch it again.

Hydration is what happens when you treat the web like a blank canvas instead of a platform with capabilities. The web gave us streaming HTML, progressive enhancement, and instant interactivity. We replaced it with JSON, JavaScript bundles, duplicate network requests, and “please wait while we reconstruct reality.”

Consider the humble modal dialog. The web has <dialog>, a native element with built-in functionality: it manages focus trapping, handles Escape key dismissal, provides a backdrop, controls scroll-locking on the body, and integrates with the accessibility tree. It exists in the DOM but remains hidden until opened. No JavaScript mounting required. It’s fast, accessible, and battle-tested by browser vendors.

Now observe what gets taught in tutorials, bootcamps, and popular React courses: build a modal with <div> elements. Conditionally render it when isOpen is true. Manually attach a click-outside handler.
Write an effect to listen for the Escape key. Add another effect for focus trapping. Implement your own scroll-lock logic. Remember to add ARIA attributes. Oh, and make sure to clean up those event listeners, or you’ll have memory leaks. You’ve just written 100+ lines of JavaScript to poorly recreate what the browser gives you for free.

Worse, you’ve trained developers to not even look for native solutions. The platform becomes invisible. When someone asks “how do I build a modal?”, the answer is “install a library” or “here’s my custom hook,” never “use <dialog>.” The teaching is the problem. When influential tutorial authors and bootcamp curricula skip native APIs in favor of React patterns, they’re not just showing an alternative approach. They’re actively teaching malpractice. A generation of developers learns to build inaccessible <div> soup because that’s what fits the framework’s reactivity model, never knowing the platform already solved these problems.

And it’s not just bootcamps. Even the most popular component libraries make the same choice: shadcn/ui builds its Dialog component on Radix UI primitives, which use ARIA-annotated <div>s instead of the native <dialog> element. There are open GitHub issues requesting native <dialog> support, but the implicit message is clear: it’s easier to reimplement the browser than to work with it.

The problem runs deeper than ignorance or inertia. The frameworks themselves increasingly struggle to work with the platform’s evolution. Not because the platform features are bad, but because the framework’s architectural assumptions can’t accommodate them. Consider why component libraries like Radix UI choose <div role="dialog"> over <dialog>. The native element manages its own state: it knows when it’s open, it handles its own visibility, it controls focus internally. But React’s reactivity model expects all state to live in JavaScript, flowing unidirectionally into the DOM. When a native element manages its own state, React’s mental model breaks down.
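For contrast, here is a minimal sketch of the native element (ids and copy are illustrative). Focus trapping, Escape dismissal, the backdrop, and accessibility-tree integration all come from the browser:

```html
<dialog id="confirm">
  <p>Delete this post?</p>
  <!-- method="dialog" closes the dialog on submit, no JavaScript needed -->
  <form method="dialog">
    <button value="cancel">Cancel</button>
    <button value="delete">Delete</button>
  </form>
</dialog>

<!-- One imperative call opens it as a modal, with a backdrop -->
<button onclick="document.getElementById('confirm').showModal()">Delete…</button>
```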
Keeping an isOpen flag in your React state synchronized with the <dialog> element’s actual open/closed state becomes a nightmare of refs, effects, and imperative showModal() and close() calls. Precisely what React was supposed to eliminate. Rather than adapt their patterns to work with stateful native elements, library authors reimplement the entire behavior in a way that fits the framework. It’s architecturally easier to build a fake dialog in JavaScript than to integrate with the platform’s real one.

But the conflict extends beyond architectural preferences. Even when the platform adds features that developers desperately want, frameworks can’t always use them. Accordions? The web has <details> and <summary>. Tooltips? There’s the title attribute and the emerging Popover API. Date pickers? <input type="date">. Custom dropdowns? The web now supports styling <select> elements with the ::picker(select) and ::picker-icon pseudo-elements. You can even put <img> elements inside <option> elements now. This eliminates the need for the countless JavaScript select libraries that exist solely because designers wanted custom styling.

Frameworks encourage conditional rendering and component state, so these elements don’t get rendered until JavaScript decides they should exist. The mental model is “UI appears when state changes,” not “UI exists, state controls visibility.” Even when the platform adds the exact features developers have been rebuilding in JavaScript for years, the ecosystem momentum means most developers never learn these features exist.

And here’s the truly absurd part: even when developers do know about these new platform features, the frameworks themselves can’t handle them. MDN’s documentation for customizable <select> elements includes this warning: “Some JavaScript frameworks block these features; in others, they cause hydration failures when Server-Side Rendering (SSR) is enabled.” The platform evolved. The HTML parser now allows richer content inside <select> elements. But React’s JSX parser and hydration system weren’t designed for this. They expect <option> to only contain text.
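A sketch of the elements listed above, working with no JavaScript at all (the content is illustrative; the Popover API requires a recent browser):

```html
<!-- Accordion: built-in disclosure, keyboard and screen-reader support -->
<details>
  <summary>Shipping details</summary>
  <p>Orders ship within two business days.</p>
</details>

<!-- Tooltip/popover: the button toggles the popover, light-dismiss included -->
<button popovertarget="hint">Help</button>
<div id="hint" popover>No positioning library required.</div>

<!-- Date picker: localized UI, keyboard input, validation for free -->
<input type="date" name="published" />
```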
Updating the framework to accommodate the platform’s evolution takes time, coordination, and breaking changes that teams are reluctant to make. The web platform added features that eliminate entire categories of JavaScript libraries, but the dominant frameworks can’t use those features without causing hydration errors. The stack that was supposed to make development easier now lags behind the platform it’s built on.

The browser has native routing: <a> tags, the History API, forward/back buttons. It has native forms: <form> elements, validation attributes, submit events. These work without JavaScript. They’re accessible by default. They’re fast. Modern frameworks threw them out. React Router, Next.js’s router, Vue Router; they intercept link clicks, prevent browser navigation, and handle routing in JavaScript. Why? Because client-side routing feels like a pure state transition: URL changes, state updates, component re-renders. No page reload. No “lost” JavaScript state. But you’ve now made navigation depend on JavaScript. Ctrl+click to open in a new tab? Broken, unless you carefully re-implement it. Right-click to copy link? The URL might not match what’s rendered. Accessibility tools that rely on standard navigation patterns? Confused.

Forms got the same treatment. Instead of letting the browser handle submission, validation, and accessibility, frameworks encourage JavaScript-controlled forms. Formik, React Hook Form, uncontrolled vs. controlled inputs; entire libraries exist to manage what <form> already does. The browser can validate an <input type="email" required> instantly, with no JavaScript. But that’s not reactive enough, so we rebuild validation in JavaScript, ship it to the client, and hope we got the logic right. The web had these primitives. We rejected them because they didn’t fit our FP-inspired mental model of “state flows through JavaScript.”

Progressive enhancement used to be a best practice: start with working HTML, layer on CSS for style, add JavaScript for interactivity.
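The native form primitives described above can be sketched in a few lines (the action URL and field names are illustrative). The browser blocks submission and shows a localized error message before any JavaScript runs:

```html
<form action="/subscribe" method="post">
  <label for="email">Email</label>
  <!-- type="email" + required: instant validation, no library -->
  <input id="email" name="email" type="email" required />
  <button>Subscribe</button>
</form>
```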
The page works at every level. Now, we start with JavaScript and work backwards, trying to squeeze HTML out of our component trees and hoping hydration doesn’t break.

We lost built-in accessibility. Native HTML elements have roles, labels, and keyboard support by default. Custom JavaScript widgets require ARIA attributes, focus management, and keyboard handlers. All easy to forget or misconfigure.

We lost performance. The browser’s streaming parser can render HTML as it arrives. Modern frameworks send JavaScript, parse JavaScript, execute JavaScript, then finally render. That’s slower. The browser can cache CSS and HTML aggressively. JavaScript bundles invalidate on every deploy.

We lost simplicity. <a href> is eight characters. A client-side router is a dependency, a config file, and a mental model. <input type="email" required> is self-documenting. A controlled form with validation is dozens of lines of state management.

And we lost alignment with the platform. The browser vendors spend millions optimizing HTML parsing, CSS rendering, and event dispatch. We spend thousands of developer-hours rebuilding those features in JavaScript, slower.

This isn’t a story of incompetence. Smart people built these tools for real reasons. By the early 2010s, JavaScript applications had become unmaintainable. jQuery spaghetti sprawled across codebases. Two-way data binding caused cascading updates that were impossible to debug. Teams needed discipline, and functional programming offered it: pure components, immutable state, unidirectional data flow. For complex, stateful applications (dashboards with hundreds of interactive components, real-time collaboration tools, data visualization platforms), React’s model was genuinely better than manually wiring up event handlers and tracking mutations. The FP purists weren’t wrong that unpredictable mutation causes bugs. They were wrong that the solution was avoiding the platform’s mutation-friendly APIs instead of learning to use them well.
But in the chaos of 2013, that distinction didn’t matter. React worked. It scaled. And Facebook was using it in production.

Then came the hype cycle. React dominated the conversation. Every conference had React talks. Every tutorial assumed React as the starting point. CSS-in-JS became “modern.” Client-side rendering became the default. When big companies like Facebook, Airbnb, and Netflix adopted these patterns, they became industry standards. Bootcamps taught React exclusively. Job postings required React experience. The narrative solidified: this is how you build for the web now.

The ecosystem became self-reinforcing through its own momentum. Once React dominated hiring pipelines and Stack Overflow answers, alternatives faced an uphill battle. Teams that had already invested in React (training developers, building component libraries, establishing patterns) faced enormous switching costs. New developers learned React because that’s what jobs required. Jobs required React because that’s what developers knew. The cycle fed itself, independent of whether React was the best tool for any particular job.

This is where we lost the plot. Somewhere in the transition from “React solves complex application problems” to “React is how you build websites,” we stopped asking whether the problems we were solving actually needed these solutions. I’ve watched developers build personal blogs with Next.js, sites that are 95% static content with maybe a contact form, because that’s what they learned in bootcamp. I’ve seen companies choose React for marketing sites with zero interactivity, not because it’s appropriate, but because they can’t hire developers who know anything else. The tool designed for complex, stateful applications became the default for everything, including problems the web solved in 1995 with HTML and CSS. A generation of developers never learned that most websites don’t need a framework at all.
The question stopped being “does this problem need React?” and became “which React pattern should I use?” The platform’s native capabilities (progressive rendering, semantic HTML, the cascade, instant navigation) are now considered “old-fashioned.” Reinventing them in JavaScript became “best practice.” We chased functional purity on a platform that was never designed for it. And we built complexity to paper over the mismatch.

The good news: we’re learning. The industry is rediscovering the platform. HTMX embraces HTML as the medium of exchange: the server sends HTML, the browser renders it, no hydration needed. Qwik’s resumable architecture avoids hydration entirely, serializing only what’s needed. Astro defaults to server-rendered HTML with minimal JavaScript. Remix and SvelteKit lean into web standards: forms that work without JS, progressive enhancement, leveraging the browser’s cache. These tools acknowledge what the web is: a document-based platform with powerful native capabilities. Instead of fighting it, they work with it.

This doesn’t mean abandoning components or reactivity. It means recognizing that UI = f(state) is a useful model inside your framework, not a justification to rebuild the entire browser stack. It means using CSS for styling, native events for interactions, and HTML for structure, then reaching for JavaScript when you need interactivity beyond what the platform provides. The best frameworks of the next decade will be the ones that feel like the web, not the ones that succeed in spite of it.

In chasing functional purity, we built a frontend stack that is more complex, more fragile, and less aligned with the platform it runs on. We recreated CSS in JavaScript, events in synthetic wrappers, rendering in hydration layers, and routing in client-side state machines. We did this because we wanted predictability, control, and clean abstractions. But the web was never meant to be pure.
It’s a sprawling, messy, miraculous platform built on decades of emergent behavior, pragmatic compromises, and radical openness. Its mutability isn’t a bug. It’s the reason a document written in 1995 still renders in 2025. Its global scope isn’t dangerous. It’s what lets billions of pages share a design language. Maybe the web didn’t need to be purified. Maybe it just needed to be understood. I want to thank my friend Ihab Khattab for reviewing this piece and providing invaluable feedback.

Ahmad Alfy 3 months ago

Avoiding the Shiny Object Syndrome: When “Good Enough” Is Actually Perfect

As developers, we’re constantly bombarded with the latest and greatest tools. New frameworks drop every month, each promising to solve all our problems with cleaner syntax, better performance, and that magical developer experience we’ve all been craving. There’s an almost magnetic pull toward these shiny new objects, a whisper that says: “Your current stack is outdated. You’re falling behind. Time to modernize.” But sometimes (more often than we’d like to admit) chasing the shiny new thing leads us down a rabbit hole that costs far more than it delivers. Let me tell you about how I almost fell into this trap, and how stepping back taught me an important lesson about knowing when not to fix what isn’t broken. My blog has been running on Jekyll since 2013. It’s built on Ruby, it’s a static site generator, and honestly? It just works. The build time for my nearly 20 pages is under a second (literally). I write in Markdown, push to GitHub, and my content goes live. Simple, reliable, boring in all the best ways. For years, I used Disqus for comments. Sure, I’d heard the privacy concerns, but the integration was dead simple: drop in a script tag and you have a full commenting system. It worked perfectly… until it didn’t. Over time, the quality of Disqus ads became increasingly awful. We’re talking cheap scam-level bad. Usually featuring some questionable imagery that looked completely out of place on a technical blog. Imagine trying to discuss clean code architecture while the bottom of your blog displays what looks like a dating site gone wrong. That’s when the thought crept in: Maybe it’s time to modernize everything. Why stick with this old Jekyll stack when there are sexier static site generators out there? At work, we’ve been using Astro , and it’s fantastic. There are beautiful themes (free and commercial) with features I could only dream of back in 2013: dark mode, light mode, extended Markdown support, and those buttery-smooth ViewTransitions between pages. 
I dove in. Downloaded Astro, found a gorgeous theme, and at first glance, it seemed like everything I needed. This was it. Time to join the modern web development world.

But then reality hit. This wasn’t going to be a simple swap. It was an investment. A significant one. For a blog where I publish once or twice a year (though I’m trying to change that habit), this was starting to look like a multi-week project. I took a step back and had what I can only describe as a Walter White moment: “We had a good thing, you stupid son of a b*tch!”

Wait. What problem was I actually trying to solve? Bad Disqus ads. That’s it. I didn’t need a complete platform overhaul. I needed a better commenting system.

As I researched the Astro theme more carefully, I noticed their primary commenting integration was something called Giscus. Curious, I investigated. Giscus is brilliant in its simplicity: it’s a GitHub app that turns your repository’s Discussions feature into a commenting system. Zero infrastructure on my end, no privacy concerns, and the setup is just configuring a script from their website. I tried it on my existing Jekyll blog. It worked flawlessly.

This experience crystallized something important about our relationship with technology. The allure of modern tools often blinds us to what we’re actually trying to accomplish.

- Jekyll vs. Astro build times: Jekyll builds my entire site in under a second. The last time I used Astro on a project of similar size, it took over a minute. “Modern” doesn’t always mean better.
- Maintenance overhead: My Jekyll setup has been rock-solid for over a decade. Every migration carries the risk of introducing new complexities, dependencies, and potential failure points.
- Time investment: The hours I would have spent on migration could be better used creating content, which is the actual purpose of having a blog.

Before you embark on your next “modernization” project, ask yourself these questions: What specific problem am I solving?
Write it down. Be precise. “The tech is old” isn’t a problem; it’s an observation. What’s the smallest change that solves this problem? Often, the solution is much simpler than a complete rewrite. What are the hidden costs? Migration time, learning curve, new dependencies, potential bugs, ongoing maintenance, etc. Is my current solution actually causing problems? Performance issues? Developer friction? Or is it just not the trendy choice? Where should my effort actually go? In my case, writing more content would benefit my blog far more than switching frameworks.

There’s something deeply satisfying about tools that fade into the background and let you focus on what matters. My Jekyll blog doesn’t win any architecture awards, but it lets me write without friction and publishes reliably. Sometimes the most professional choice is sticking with what works. Your users don’t care if you’re using the latest framework; they care if your site loads fast and provides value. As long as your tools are working (and I mean truly working, not just limping along) there’s often no compelling reason to change them. The energy you save by not chasing every shiny object can be redirected toward what actually moves the needle: solving real problems, creating better content, or building features that matter to your users.

The next time you feel that familiar pull toward the latest and greatest, pause. Ask yourself: am I solving a real problem, or am I just distracted by something shiny?

P.S. Since I just swapped out Disqus for Giscus, help me put this new commenting system to the test! Drop a comment below and let me know your experience with shiny object syndrome!

The investment I mentioned earlier broke down like this:

- Markdown Migration: All my custom markdown hacks would need to be rewritten to use the theme’s extended features
- URL Structure: My existing URLs were different. I’d either need to dig deep into the theme’s internals or set up a complex redirect system
- Front Matter: The metadata structure had changed, requiring me to update every single post
- Content Audit: I’d need to test every page to ensure nothing broke in translation

Ahmad Alfy 3 months ago

From Code That Works to Code That Matters: A PDF Security Feature Story

Or: Why being a valuable engineer means thinking beyond the technical requirements

It started with a vulnerability report. Our Strapi-based platform had a classic security issue: the file uploader was allowing PDFs with embedded scripts, a pentester’s dream and our security nightmare. The fix seemed straightforward enough: block non-compliant PDFs by validating them against the PDF/A standard, which prohibits scripting and embedded content.

I dove into the technical challenge. After some research, I found veraPDF, a binary tool that could validate PDF/A compliance. Using Strapi’s webhook system, I hooked into the file upload process, ran the scanner, and threw an error for non-compliant files. I was ready to celebrate. Running a binary from Node.js was new territory for me, and the JavaScript ecosystem had no good alternatives for PDF validation. The core functionality worked. Mission accomplished, right?

As I prepared my merge request, I took another look at the implementation. When a PDF failed validation, the system returned a generic “500 Internal Server Error.” That nagging feeling hit, the one every engineer knows but doesn’t always listen to. This isn’t good enough. A 500 error suggests the server broke, but this was actually a client input issue, a 4xx error. More importantly, users would have no idea why their upload failed. After digging into Strapi’s error handling, I found the right error helper and crafted a proper error response with a descriptive message.

Better, but still not done. Then I realized something else: when validation failed, the uploaded file was still sitting on the server. The error stopped the process, but left digital debris behind. A quick cleanup function solved that. But wait, what if veraPDF isn’t installed on the server? I shared my branch with a colleague for testing, and sure enough, both compliant and non-compliant PDFs were failing validation. The binary wasn’t in his PATH.
Now I needed to distinguish between “PDF validation failed” (user error, 4xx) and “veraPDF unavailable” (server configuration issue, 5xx), with appropriate error messages for each scenario.

Here’s what struck me: the original requirement was simply “disallow PDFs with scripts.” My first implementation technically satisfied that requirement. But it would have been terrible in practice. The engineering solution worked, but it took multiple iterations to make it right. None of these additional considerations came from the product team or the security testers. They emerged from thinking like a user, considering edge cases, and caring about the overall experience.

This experience reinforced something crucial: companies don’t just want engineers who can write code that works. They want engineers who think holistically about problems. The difference between a junior and a senior engineer isn’t just technical complexity; it’s a broader set of abilities. This mindset—combining technical skills with product thinking and user empathy—is what makes engineers truly valuable. It’s what turns a feature request into a robust solution that actually solves the problem.

What made this manageable was breaking down the problem into small, testable parts. Each iteration built on the last, gradually transforming working code into production-ready code.

Technical skills will get you hired, but product thinking and user empathy will make you indispensable. The ability to see beyond the immediate technical requirement—to think about edge cases, user experience, and maintainability—is what separates good engineers from great ones. Next time you’re tempted to ship that first working version, pause and ask: “What would make this not just work, but work well?” Your users (and your future self) will thank you.

What’s a time when you went beyond the basic requirements to create a better user experience? I’d love to hear your stories in the comments.
- First iteration: Core functionality ✓
- Second iteration: Proper error codes and messages ✓
- Third iteration: File cleanup ✓
- Fourth iteration: Graceful handling of missing dependencies ✓

- Think beyond the happy path: What happens when things go wrong?
- Consider the user experience: Even for internal tools and error states
- Anticipate deployment issues: What assumptions am I making about the environment?
- Write maintainable code: The next person (including future you) will thank you

- Can I run veraPDF from Node.js?
- Can I integrate with Strapi’s file upload lifecycle?
- Am I returning appropriate error codes and messages?
- Am I cleaning up properly on failures?
- Am I handling environment dependencies gracefully?

Ahmad Alfy 1 year ago

Smarter than ‘Ctrl+F’: Linking Directly to Web Page Content

Historically, we could link to a certain part of a page only if that part had an ID. All we needed to do was link to the URL and add the document fragment (ID). If we wanted to link to a certain part of the page, we needed to anchor that part first. This was until we were blessed with text fragments!

Text fragments are a powerful feature of the modern web platform that allows precise linking to specific text within a web page, without the need to add an anchor! The feature is complemented by the ::target-text CSS pseudo-element, which provides a way to style the highlighted text.

Text fragments work by appending a special syntax to the end of a URL, just like we used to append the ID after the hash symbol (#). The browser interprets this part of the URL, searches for the specified text on the page, and then scrolls to and highlights that text if it supports text fragments. If the user attempts to navigate the document by pressing Tab, focus moves to the next focusable element after the text fragment.

Here’s the basic syntax for a text fragment URL: #:~:text=[prefix-,]textStart[,textEnd][,-suffix]

Following the hash symbol, we add the special syntax :~: (known as the fragment directive), then text= followed by the parameters described below. For example, consider a link whose text fragment is “without relying on the presence of IDs”, percent-encoded in the URL. If you follow such a link, the browser scrolls to that phrase and highlights it.

We can also highlight a range of text by setting the textStart and the textEnd. Consider an example from the same URL: the text fragment is “using particular” followed by a comma, then “don’t control”; following that link highlights everything between those two phrases. We can also highlight multiple texts by chaining several text= directives with ampersands.

One of the interesting behaviors of text fragments is that if you’re linking to hidden content that’s discoverable through the find-in-page feature (e.g.
children of an element with the hidden attribute set to until-found, or the content of a closed details element), the hidden content will become visible. Let’s look at this behavior by linking to an article from Scott O’Hara’s blog. The post contains a details element that is closed by default. If we link to a text fragment inside that details element, it opens automatically. Note that this behavior is only available in Google Chrome, as it’s the only browser to support discoverable content.

If the browser supports text fragments, we can change the style of the highlighted text using the ::target-text pseudo-element. Note that we are only allowed to change a limited set of properties, listed below.

Text fragments are currently supported in all major browsers. The ::target-text pseudo-element is not yet supported in Safari, but it’s now available in the Technology Preview version. If the feature is not supported, it degrades gracefully and the page loads without highlighting or scrolling to the text. The default style for the highlight also differs between browsers: the highlight color varies, and while the highlighted area in Safari is bigger, spanning the whole line-height, in Firefox and Chrome only the text is highlighted and the spaces between the lines are empty.

We can detect whether the feature is supported using document.fragmentDirective. It will return an empty FragmentDirective object if supported, or undefined if it’s not.

My first encounter with text fragments was through links generated by Google Search results. Initially, I assumed it was a Chrome-specific feature and not part of a broader web standard. However, I soon realized that this functionality was actually built upon the open web, available to any browser that chooses to implement it. I’d love to see this feature used more broadly, particularly by responsible generative AI systems.
Imagine AI that can provide direct, context-sensitive links to the exact content you’re interested in, using text fragments for precise references. This would not only increase transparency but also improve the user experience when navigating AI-generated content.

Looking ahead, it would be fantastic if text fragments were more accessible to all users, not just those with technical knowledge. What if browsers offered built-in features that allowed non-technical users to highlight text and generate links to specific paragraphs with ease? This could be through a native browser feature or even a simple browser extension; either way, it would make deep linking a breeze for everyone.

Finally, I’d like to express my sincere thanks to Hannah Olukoye and Jens Oliver Meiert for the time they’ve taken to share their invaluable feedback and corrections. It turns out that the ability to generate a link to a specific piece of text is already built into Chromium-based browsers, as Hosam Sultan clarified on X (formerly Twitter). If you’re using Chrome, simply highlight some text, right-click, and you’ll find the “Copy link to highlight” option in the context menu.

- prefix-: A text string followed by a hyphen, specifying the text that should immediately precede the linked text. This helps the browser link to the correct text in case of multiple matches. This part is not highlighted.
- textStart: The beginning of the text you’re highlighting.
- textEnd: The ending of the text you’re highlighting.
- -suffix: A hyphen followed by a text string that behaves similarly to the prefix but comes after the text. Also helpful when multiple matches exist; it doesn’t get highlighted with the linked text.
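Putting these parts together programmatically, a sketch (the page URL is illustrative; the highlighted phrases are the ones from the range example earlier):

```javascript
// Build a text-fragment URL: the fragment directive goes after #:~:text=
// and each part must be percent-encoded.
function textFragmentURL(page, textStart, textEnd) {
  let directive = encodeURIComponent(textStart);
  if (textEnd) directive += "," + encodeURIComponent(textEnd);
  return `${page}#:~:text=${directive}`;
}

console.log(textFragmentURL("https://example.com/post", "using particular", "don't control"));
// → "https://example.com/post#:~:text=using%20particular,don't%20control"
```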
background-color
text-decoration and its associated properties (including text-underline-position and text-underline-offset)
text-shadow
stroke-color, fill-color, and stroke-width
custom properties

URL Fragment Text Directives - W3C Draft Community Group Report
Text Fragments - MDN
Style Highlights - CSSWG Draft
Support for Text Fragments - CanIUse

Ahmad Alfy 1 years ago

Search friendly dropdown menu

Dropdown menus have been around for a long time. They are a common way to build navigation menus with a lot of items. When these kinds of menus were first introduced, we relied on JavaScript to make them work (Suckerfish menus, anyone?). This is because the :hover pseudo-class was not supported on non-interactive elements (like li) in older browsers. That’s not the case anymore, and we can now build dropdown menus that work without JavaScript. After the introduction of the :focus-within pseudo-class, we can also build dropdown menus that work better with keyboard navigation. This is because :focus-within is triggered when an element or any of its child elements are focused, which means that when a user tabs to a child menu, the dropdown menu will be shown. This was a great improvement for accessibility and usability.

I recently had a thought about how we can make dropdown menus even more user friendly. It came to me after an encounter with a WordPress administration panel that had a lot of dropdown menus. I heavily rely on the search-in-page feature in my browser. WordPress dropdown menus are not hidden from the search-in-page feature because they are implemented with a positioning technique that puts them outside of the viewport. This means that search-in-page will find the dropdown menu items, but the user will not see them. This caused me a lot of frustration as I was trying to juggle between the different results I was getting. I have posted about the hidden attribute’s until-found value before on the HTMHell Advent Calendar for 2023, and I thought it could be a great solution for this problem. The hidden attribute with the value until-found hides the element from the user while still letting the search-in-page feature find it; once found, the element becomes visible. Note: At the time of writing this post, the hidden attribute with the until-found value is an experimental feature that’s currently supported in Chrome and Edge.
Let’s take a look at this basic dropdown menu. We will create a two-level dropdown menu using unordered lists. The second level will be hidden using the CSS property display: none. We will then use the :hover and :focus-within pseudo-classes to show the second level when the first level is hovered or focused. Moving your mouse over the “Shop” or “Services” menu items will show the second level of the dropdown menu. Now let’s make a few changes to make that menu work using the new attribute value until-found. We will have to modify the CSS as well. The way we’re hiding the second level of the dropdown menu is by setting display to none; this is how the hidden attribute works by default. With the new until-found value, the content is hidden using the content-visibility property instead. To understand the difference between the two, I recommend checking this awesome article on web.dev. Now that the hidden attribute is set, the content will be hidden by default. We will need to modify the CSS to show the content when the user moves their cursor over the first level of the dropdown menu. Here is the modified version. Now try to search for “Electronics” using the search-in-page feature in your browser. You will see that the dropdown menu opens spontaneously! This is all working without JavaScript, just by using the hidden attribute with the value until-found.

We will run into a little problem here, though. Elements displayed via the until-found value stay visible. This isn’t like what we had earlier, where the visibility state is toggled: once an element is found, the hidden attribute is removed and the element remains visible from then on. Watch the video below to see what happens when we search for items in two different menus. Luckily, JavaScript can help us with this. The new until-found behavior comes with a JavaScript event called beforematch. This event is triggered on an element when it is found using the search-in-page feature. We can use this event to toggle the visibility of the other elements. Let’s see how we can do this.
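The CSS change described above might look like the following sketch. The markup shape (nested unordered lists) is an assumption; the key idea is overriding the content-visibility that the browser applies to hidden="until-found" content:

```css
/* The sub-list carries hidden="until-found", which the browser hides
   with content-visibility: hidden (not display: none), keeping its
   text reachable by find-in-page. Reveal it on hover or focus by
   overriding that property: */
li:hover > ul[hidden="until-found"],
li:focus-within > ul[hidden="until-found"] {
  content-visibility: visible;
}
```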
Notice that we added listeners for mouse and focus events to simulate the hover and focus effects. This is because the beforematch event is only triggered when the element is found using the search-in-page feature; these extra listeners help us hide the other submenus when the user moves their cursor over, or focuses on, other menu items. Here is a video demonstrating the full implementation: We now have a dropdown menu that works with a little help from JavaScript and is search-in-page friendly. This is a great improvement for usability. In future iterations of this demo, we can use feature detection to make it degrade gracefully in older browsers. Additionally, evaluating the accessibility of this solution against established standards like WCAG and ARIA will be a crucial step to ensure inclusivity for all users. The hidden attribute with the until-found value is a great addition to the web platform, and I think we will be relying on it more and more in the future. If I may ask for more, it would be great to have a way to toggle the visibility of the element when a new match is found, and a way to find the parent element of the matched text. I wanted to move focus directly to the matched anchor tag, but the beforematch event doesn’t carry this kind of information. That way, we could build even more user-friendly interfaces for our users. Special thanks go to Hatem Hosny and Konnor Rogers for their valuable feedback on this post. If you have any questions or suggestions, feel free to reach out to me on Twitter. Thanks for reading!

The hidden attribute in HTML - HTMHell 2023 Advent Calendar
Content visibility - web.dev
HTML attribute hidden: until-found value - caniuse.com
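A minimal sketch of the beforematch wiring described above. The function name and the flat submenu list are my assumptions; a real menu would select the nested ul elements:

```javascript
// When find-in-page is about to reveal one submenu, put the
// hidden="until-found" attribute back on all the other submenus
// so only the matched one stays open.
function wireSearchFriendlyMenu(submenus) {
  for (const submenu of submenus) {
    // `beforematch` fires on a hidden="until-found" element right
    // before the browser reveals it for a find-in-page match.
    submenu.addEventListener('beforematch', () => {
      for (const other of submenus) {
        if (other !== submenu) other.setAttribute('hidden', 'until-found');
      }
    });
  }
}

// In a browser: wireSearchFriendlyMenu(document.querySelectorAll('nav li > ul'))
```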

Ahmad Alfy 4 years ago

CSS Style Guide At Robusta

I’ve seen a lot of CSS style guides online, but I always found them talking more about how to choose a selector name and how to structure your components than about CSS itself. I do a lot of code review at Robusta, and reviewing CSS is something I enjoy doing. I tried to collect the notes I found myself leaving for my colleagues and decided to start this opinionated style guide.

File names must be all lowercase and may include dashes (-), but no additional punctuation; follow the convention your project uses. File extensions must be .css or another preprocessor extension (.scss, .less, etc.). Source files are encoded in UTF-8. The encoding should be specified in the file header using the @charset directive. This should be added to the root file that includes all the styles, or to every other file that isn’t included in the root file.

We follow the ITCSS methodology for writing CSS. ITCSS requires a specific directory structure, one directory per layer (the layers are described below). We might also need to include vendor styles (CSS specific to a UI library we are using); these get their own place between two of the layers. Each group of declarations should be written in a separate file. For example, to define the project’s box model, we would write a dedicated file in the generic layer. Note: one of the common mistakes developers make is putting rules in the wrong layer. For example, we might want to define the font that will be used on the website. Font is an inherited value, so we usually set it on the body. Developers would create a file for the body element and put the font declaration there. This is wrong, because that file is specific to the styles we need to define on the body itself. Setting the font should be done in its own dedicated file.

We follow the BEM naming convention. Braces follow the Kernighan and Ritchie style, as follows: each time a new block is opened, the indent increases by one tab character; when the block ends, the indent returns to the previous indent level.
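A small illustration of the brace and naming conventions above (the class names are made up):

```css
/* K&R braces: opening brace on the selector line, one indent level
   per block, closing brace on its own line. BEM: block__element--modifier. */
.menu {
  display: flex;
}
.menu__item {
  padding: 0 10px;
}
.menu__item--active {
  font-weight: 700;
}
```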
The indent level applies to both code and comments throughout the block. Using indentation is also encouraged in some cases where a value is a list of tokens. Each declaration is followed by a line break, and every declaration must be terminated with a semicolon, even if it’s the last declaration within a selector. A single blank line appears to separate logical sections. Horizontal whitespace is used to separate the different parts of a declaration to facilitate reading; the rules to follow are listed below. Note: in some cases horizontal whitespace is required, otherwise the whole declaration is invalid, like the spaces between the operands of a function.

Comments in CSS can only be written in the multi-line format (/* */). Some languages like Sass also allow single-line comments (//). We usually don’t need to comment anything in CSS because it’s self-descriptive; however, I find it valuable to document any magic numbers we may have. If you’re using a preprocessor, note that the // comment doesn’t get compiled into the final output, while the /* */ comment is preserved. The quotes we use in CSS are double quotes. Use the unit that’s suitable for what you’re doing.

Generally, we prefer shorthand values over the expanded ones, as long as all of those values are intended to be set. Do not override a value with a shorthand value, and do not write redundant shorthand values. Pseudo-classes (:hover, :focus, etc.) should use the single-colon prefix, while pseudo-elements (::before, ::after, etc.) should use the double-colon prefix. Try to order your blocks according to the specificity of the selectors, from the least to the most specific. Do not combine vendor-specific selectors with standard ones, because that makes the whole declaration invalid. The following rules are important as well: do not write duplicated values for the same property; no empty blocks; and if you’re using an autoprefixer, don’t add vendor prefixes to properties yourself.
Autoprefixer will determine whether a property needs prefixing for the browsers you target, using your Browserslist configuration and Can I Use data, and add the vendor prefix if needed. In case you have to write a vendor prefix manually, write the prefixed version of the property before the unprefixed one. Do not use subpixel values; they are not supported by all browsers and can lead to inconsistent dimensions.

Inheritance is one of the most powerful features in CSS. It allows you to reuse styles from a parent selector, and it’s preferred to make use of it whenever possible. For example, font-family is inherited from the parent element. If we use a universal selector, we explicitly apply it to each element; applying it to the parent element is a better practice. The !important keyword shouldn’t be used, or at best limited to very narrow cases. The only place where you will see !important used frequently is within the utilities layer.

In almost all your work you will need to set the box-sizing property to border-box. This facilitates the calculation of the dimensions of elements. Some external libraries still use, or assume, that box-sizing is set to content-box. To overcome this problem, we set it to border-box on the root element, then inherit it into all elements. This allows us to override it at any parent, with all its descendants following, whenever we need.

When defining a custom font using the @font-face at-rule, take care of the rules listed below. Always remember that some form elements like input and textarea don’t inherit the font family from their parent selectors, so you should always specify the font family for them (using the inherit keyword or by directly defining the desired font).

For color values that permit it, 3-character hexadecimal notation is shorter and more succinct. Do not use keyword color values; replace them with hexadecimal notation, and use all-lowercase characters in it. In most cases where you use float, you should clear it using the popular old clearfix hack.
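The inherited box-sizing setup described above is commonly written like this (a sketch):

```css
/* Set the preferred box model once on the root... */
html {
  box-sizing: border-box;
}
/* ...and let everything inherit it, so a third-party widget can be
   switched back to content-box at its wrapper and all of its
   descendants will follow. */
*,
*::before,
*::after {
  box-sizing: inherit;
}
```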
Do not use overflow: hidden to hide scrollbars if that’s not the desired behavior. Fix the overflow problem by properly making sure the content doesn’t overflow.

CSS custom properties are a way to define variables that can be used in CSS. The rules that apply to picking a good variable name apply to naming custom properties as well (being representative of the value it holds, not being too generic, etc.). When picking names for our color variables, we follow the same methodology followed by Material Design and TailwindCSS. For more information about this, read this article.

The ITCSS layers:
Settings: values used across the project like color variables, the fonts that will be used, etc.
Tools: globally used mixins and functions. It’s important not to output any CSS in the first two layers.
Generic: reset and/or normalize styles, box-sizing definition, etc. This is the first layer which generates actual CSS.
Elements: styling for bare HTML elements (like h1, a, etc.). These come with default styling from the browser, so we can redefine them here.
Objects: class-based selectors which define undecorated design patterns, for example the media object known from OOCSS, the grid, etc.
Components: specific UI components. This is where the majority of our work takes place, and our UI components are often composed of objects and components.
Utilities: utilities and helper classes with the ability to override anything which goes before in the triangle, e.g. a hide helper class.

Line breaks:
No line break before the opening brace.
Line break after the opening brace.
Line break before the closing brace.
After the comma that separates selectors.
After the opening braces of a declaration block or other block structures (like at-rules).
After the semicolon that terminates a declaration.
After the closing braces of a declaration block or other block structures.
Between a declaration and the next one.
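The ITCSS layers above map to one directory per layer. A typical layout might look like the following sketch (the directory names are conventional, not mandated):

```
styles/
├── settings/     variables: colors, fonts, etc. (no CSS output)
├── tools/        mixins and functions (no CSS output)
├── generic/      reset/normalize, box-sizing
├── elements/     bare HTML elements (h1, a, ...)
├── objects/      undecorated patterns: media object, grid
├── components/   specific UI components
└── utilities/    helpers that may override anything above
```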
After the comma that separates different values for the same property (see the example mentioned earlier in the indentation section).
Before the opening brace of a declaration block.
After the colon that separates the property from the value.
Between the value and the !important keyword.
After the commas that separate parts of some values, like the components of a color function.
Between the selector combinators.

The percentage unit is suitable when you define something relative to its container. Pixels can be suitable when you really need a small value (1px, 2px, etc.) and want to avoid the bugs that happen with subpixel rendering. In most cases line-height is unitless, to let the value be calculated according to the element’s font-size; other units could lead to undesirable side effects, or require modification of that value if we change the font-size. The usual exception is when we need to vertically align text within a container with a fixed height. Do not use any unit when the value is zero, except when you define a time value. It’s usually a bad idea to use BEM for text generated from a WYSIWYG editor. Try not to nest more than 3 levels deep. Avoid duplicating selectors; it makes the code harder to read and maintain. Media queries should be defined close to the elements they affect. Be careful when you’re using the :not() pseudo-class, because it affects the specificity of the selector. Read more about this here.

Make sure you generate the fonts in modern formats (such as woff2 and woff) and load them in that order. If you’re using different weights for the same font, make sure that: the @font-face at-rules use the same font-family name; the font-weight property is set correctly; and the at-rules are ordered ascendingly according to weight. Use the font-display property and set its value to swap to ensure users can see the content soon enough, with no flash of invisible text. If the font is used for custom icons, set font-display to block instead, to avoid displaying unreadable text (like squares or odd glyphs). Provide a generic font family name.
Enclose custom font names within double quotes.
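Putting the font rules together might look like this sketch (“Example Sans” and the file names are hypothetical):

```css
@font-face {
  font-family: "Example Sans";           /* custom name in double quotes */
  src: url("example-sans.woff2") format("woff2"),
       url("example-sans.woff") format("woff"); /* modern formats first */
  font-weight: 400;                      /* one at-rule per weight */
  font-display: swap;                    /* no flash of invisible text */
}

body {
  font-family: "Example Sans", sans-serif; /* generic family fallback */
}

input,
textarea,
button {
  font-family: inherit; /* form controls don't inherit by default */
}
```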

Ahmad Alfy 4 years ago

Early detection of potential problems by checking frequently updated files using Git

One of the popular metrics used to assess an engineering team’s output is Code Churn. It has several definitions, and each company and tool measures it differently. I like how Pluralsight defines it: Code Churn is when a developer rewrites their own code shortly after it has been checked in (typically within three weeks). Measuring Code Churn is difficult and requires tools that can analyze Git history. It tracks the change of lines of code over time per contributor, and the output is more than just additions vs. deletions. I am not going to talk about Code Churn too much because the article by Pluralsight does that very well. I want to share a similar concept we’re starting to experiment with. During my journey to find how Code Churn is measured, I found a way to count how many commits were made to each file. Let’s take a look at the snippet below: when this command is executed inside a Git repository, it will list the top 10 files that have been committed to, showing the number of commits and then the path of each file. Let’s take an example by running that command on the repository of next.js; we get the following stats: that’s pretty much expected, the top 10 files are package configuration files. It might be because the maintainers are keeping the dependencies up to date all the time. If we are more interested in the code the developers write, we can modify the script to exclude these files using grep and regular expressions. Update (5th of March 2021): as pointed out by my friend Ahmed El Gabri, it’s better to exclude the merge commits as well by using the --no-merges flag. We’ve excluded package configuration files, markdown files, and a few other paths, and we finally start to see the output we’re interested in: we can now see that most of the work is happening in the core package. It’s receiving most of the commits from the contributors, and that’s expected; after all, that’s the core of the project.
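The pipeline the post describes can be sketched as follows. This is a reconstruction from the explanation in this article, not the author’s exact script; the demo builds a throwaway repository so the numbers are predictable, and the exclusion RegExp is illustrative:

```shell
# Build a throwaway repo with a file that is committed to repeatedly.
repo="$(mktemp -d)"
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
for i in 1 2 3; do
  echo "$i" > core.js
  git add core.js
  git commit -qm "touch core.js ($i)"
done
echo notes > README.md
git add README.md
git commit -qm "add readme"

# The commit-frequency pipeline: list changed file names per commit,
# drop merge commits, markdown files, and blank lines, then count,
# sort by frequency, and keep the top 10.
git log --no-merges --pretty=format: --name-only \
  | grep -vE '(\.md$|^$)' \
  | sort \
  | uniq -c \
  | sort -k1 -n -r \
  | head -n 10
```

Adding --after="2 weeks ago" to the git log invocation restricts the count to a sprint-sized window, as described below.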
In agile teams that work in sprints, the team can pull this data at the end of each sprint for analysis and discussion. A high number of commits to certain paths can happen for many reasons (listed below). Having an open discussion during sprint reviews could be the key to early detection of anything that can be fixed. The earlier command lists the frequency of commits in a specific branch since the beginning of the repository; for a sprint review, it might be useful to check that frequency during the sprint only. Thankfully, Git allows us to log the changes after a specific date by using the --after flag. I ran this script on a couple of active projects we have and shared the numbers with the teams. On some projects, we were able to spot some problems quickly and took corrective actions. Others showed inconclusive data where the number of commits was aligned with the output of the sprints. I am pretty confident that this procedure will help us improve the quality of our work, and I expect I will write more about the results after we adopt it. If you’re interested in how the script works, I’ve included a section below to explain it.

Each line of the script produces an output that is then manipulated by the next line. You can think of it like an assembly line where each workstation receives an input and updates it. git log shows the commit log; the flags we provide modify the output and its format. grep is a utility used to search in text. We use the -v flag to instruct grep to return all the lines that don’t match the supplied RegExp. Note that we’re using a Perl-compatible RegExp (PCRE), defined by the -P flag, and that PCRE doesn’t work by default on Mac; you will have to install GNU grep using brew and use ggrep instead. sort will sort the output, bringing similar lines together to prepare for the next utility to count their frequency. uniq will filter out all the repeated lines; its count flag (-c) will prefix each line with the number of times a path has been repeated.
Again we sort the output, but this time by the number written at the beginning of each line (defined by -k1); we set the type of ordering to numeric (-n) and finally reverse the order to display the higher numbers on top (defined by -r). Finally, the head command is used to display only the top 10 lines (the -n flag defines how many lines we’re interested in). Special thanks to Emad Elsaid for taking the time to review this article.

Reasons for a high number of commits to certain paths:
There are unresolved bugs that require intervention in the same file over and over, maybe because of poor quality, unhandled cases, or not enough tests.
It might be an indicator that the file should be refactored into smaller modules.
Changes are happening because of continuously changing requirements.
The file is a build artifact that should be taken out of version control and handled by CI/CD.
Unnecessary updates caused by a misconfigured linting tool in a contributor’s development environment.

Flags used:
--after shows only the commits after a certain date.
--name-only shows only the names of the changed files.
--no-merges excludes merge commits.
--pretty=format: defines the formatting of the output; in our case, nothing is shown.

Introduction to Code Churn - An article by Pluralsight.
True Git Code Churn - A Python script that can be used to measure Code Churn.

Ahmad Alfy 4 years ago

Architecture Decision Records

A couple of years ago I learned about Architecture Decision Records (ADRs) from Technology Radar, and how they help software development teams document the architecture decisions they take during software design. In the beginning, I had the impression that these kinds of documents are only suitable for projects of a certain size, where a software architect is needed, but I was wrong. I was excited to try it because it didn’t require too much effort: I would have to write a markdown file for every architecturally significant decision I take. That file should answer a couple of questions that I already think about most of the time; this time I would have to document them. The document is simple; it consists of the parts listed below. The following is an example of the document I wrote when I decided to use PostCSS over Sass in an Angular-based project. This was one of the earliest records I wrote, and I felt it was valuable for the reasons listed below. The first instruction on this project’s readme file was to go read the ADRs. Over two years, we had people joining and leaving that project, and no one was asking why we picked PostCSS, or why the selectors they wrote didn’t work; it was all documented. I started to encourage my colleagues to write ADRs as well, and the results came back very positive. It helped everyone reason about their decisions. The records were shared across different projects that use a similar stack and technology, saving us all time and effort and helping with knowledge transfer. It’s worth mentioning that even rejected decisions should be documented; it’s important to keep in mind that you’re not only documenting your accepted decisions. This keeps others from proposing the same things over and over without a valid reason. ADRs have proven to be very beneficial to us and I would definitely recommend them. If you want to try it, I would suggest you follow the simplest form, which is to write markdown files lying side by side with your source code.
I’ve included several links in the resources section below.

The document consists of:
The decision we need to take.
The status of that decision: whether it’s a proposal, an accepted decision, rejected, or superseded.
The context: an explanation of the problem and all the circumstances around it.
The considered options: a listing of all the considered options with the pros, cons, and impact of each.
The decision that will be taken, and all the reasons favoring it.
The consequences of taking that decision, whether positive or negative.

Why the record was valuable:
It shows my analysis to my colleagues and the points I consider when making this decision.
It documents the results in detail, explaining how they impact the code we write.
It highlights the negative results, with links to existing issues for further follow-up and discussion.

Documenting Architecture Decisions - By Michael Nygard, November 15, 2011
Homepage of the ADR GitHub organization
Why write ADRs - The GitHub blog
Architecture decision record (ADR) examples for software planning, IT leadership, and template documentation
When Should I Write an Architecture Decision Record - Spotify engineering blog
Command-line tools for working with Architecture Decision Records
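In its simplest form, each record is a markdown file following the structure above. A minimal template might look like this (a sketch, not the author’s exact format; the title reuses the PostCSS example):

```markdown
# ADR 0001: Use PostCSS instead of Sass

## Status
Accepted <!-- or: Proposed / Rejected / Superseded -->

## Context
What is the problem, and what circumstances surround it?

## Considered options
1. Option A: pros, cons, impact
2. Option B: pros, cons, impact

## Decision
The option chosen, and the reasons favoring it.

## Consequences
What becomes easier or harder as a result, positive or negative.
```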
