Posts in Web-development (20 found)

How I discover new (and old) blogs and websites

One of the great things about having a blog is that you get a space that is entirely yours, where you share whatever you want and make it look exactly how you want it to look. It's a labor of creativity and self-expression. Another encouraging aspect of having a blog is being read by others. I love receiving emails from people who liked a post. It's just nice to know I'm not shouting into the void! But what about posts I wrote last year, or many years ago? How do those get discovered? Perhaps you wrote an awesome essay on your favorite topic back in 2022. How can I or anyone else stumble upon your work?

Making it easy to discover hidden gems from the indie web was my motivation for making powRSS. powRSS is a public RSS feed aggregator to help you find the side of the internet that seldom appears on corporate search engines. It surfaces posts and blogs going all the way back to 1995. You never know what you're going to find, and I think it's really fun. Today I made a video showing how it works.

1 view

Robb Knight

This week on the People and Blogs series we have an interview with Robb Knight, whose blog can be found at rknight.me. Tired of RSS? Read this in your browser or sign up for the newsletter. The People and Blogs series is supported by Hans and the other 124 members of my "One a Month" club. If you enjoy P&B, consider becoming one for as little as 1 dollar a month.

I'm a developer and dad to two girls living in Portsmouth on the south coast of the UK. By day I work for a SaaS company and in my own time I work on my many side projects. In a previous life I worked at a certain clown's restaurant, which is where I met my wife some 15 years ago. Although developer is what I get paid to do, I'm trying to move towards more making: websites, stickers, shirts, art, whatever. I have no idea what that looks like yet or how it's going to pay my bills. I have a whole host of side projects I've worked on over the years; they're not all winners but they all serve, or served, a purpose. If I get lucky, they resonate with other people, which is always nice.

I've had a lot of blogs over the years, most of which would get a handful of posts before being abandoned. There was a version that ran on Tumblr which I did do for at least a year or two — any interesting posts from that have been saved. The current iteration is by far the longest serving and will be the final version. There's no chance of me wiping it all and starting again. This current version is part of my main website, which is where I put everything. My toots on Mastodon start life as a note post, I post interesting links I find, and I log all the media I watch/play/whatever (I don't want to say consume, that's gross) in Almanac, which itself is on the third or fourth iteration.

As I said above, I had done a few posts on the Tumblr-powered blog, but if I look at my stats for posts, it was around 2022, when Twitter started to fall apart, that I started to blog more. I was moving away from posting things directly onto social media sites and getting it onto my own site. I started writing more posts that just had a short idea or helpful tip, because I realised not every post has to be some incredible think piece. My analytics show that these posts also tend to be the most popular, which probably says more about the state of large, ad-riddled websites than it does about my writing. For example, this post about disconnecting Facebook from Spotify is consistently in the top five posts on my site, but you're never going to read that post unless you specifically need it. It's not a "good" post, it just exists.

To call what I have a process would be a very liberal use of the word "process". If I have nothing to write about I just won't write anything; I have no desire to keep to a schedule and write just for the sake of it. Usually I'll get prompted by something someone asks, like "How did you do X on your website?", or I feel like I have something to say that would be interesting to other people. I write my posts in Obsidian, then when they're ready to go I'll add them to my site. If I'm on my proper computer (my laptop) I use my CLI tool to add a new post. If I'm on mobile, I use the very haphazard CMS I built. I'll proofread most things myself before posting and I rarely ask for anyone else's input, but if I do want a second opinion it's going to be previous P&B interviewee Keenan. Usually I'm able to get out what I want to say fairly succinctly without too much editing.
A proper keyboard and ideally a desk to sit at is what I prefer when I'm writing (or coding), but I can live with just the keyboard. My desk setup makes some people's skin crawl because there's so much going on, but I like having all the trinkets and knick-knacks around me. I deeply dislike using my phone for most things outside of scrolling lists, like social media, so I rarely write long posts on it. The small form factor just doesn't work for me at all, but I also kind of need it to exist in the world.

All my domains are registered with Porkbun and I manage the DNS with DNSControl - my main domain, rknight.me, has nearly 50 records for subdomains, so managing those without DNSControl would not be a fun activity. Speaking of DNS, I use Bunny for my DNS and also use their CDN for images and other files I need to host. The website itself is, as are many of my side projects, built with Eleventy. Eleventy gives me the flexibility to do some interesting things with the posts and other content on my site which would be much harder with some other systems. The site gets built via Forge onto a Hetzner server whenever I push an update to GitHub, either via the command line or through the aforementioned CMS, and the build is also triggered at various points in the day to pull in my Mastodon posts.

Assuming I actually had the time to do it, I think I would start with the CMS first, before building any of the actual site. It is a pain to update things when I'm not at my laptop, but jamming features into my CMS is equally frustrating. If I wanted something off the shelf and easier to maintain, I suspect I would choose Ghost or Pika.

Many of these costs are part of my freelancing so are bundled with other sites I run and somewhat hidden, but I'll do my best to outline what I do use. I have a single server on Hetzner that serves my main site as well as another 30 or so side projects, so the cost is negligible per site, but it comes to about $5 a month. Forge costs $12 a month to deploy my site along with other sites. The domain is $20 a year I think, but that's it. I have a One a Month Club here and I have a handful of people supporting that way. I also use affiliate links for services I use and like, which occasionally pays me a little bit. I think monetising blogs is fine, if it's done in a tasteful way. Dumping Google ads all over your site is terrible for everyone, but hand-picked sponsors or referrals are a good way to find new services. Just keep it classy.

I want to read sites that are about the person writing them. Photos of things people have done, blog posts about notebooks, wallpaper, food, everything. Things people enjoy. This is the second time I'm going to mention Keenan here because they write so wonderfully. They also have a podcast with Halsted called Friendship Material which is all kinds of lovely and joyful and everyone should listen. Alex writes some really interesting computing-related posts, like this one about using static websites as tiny archives. Annie is so smart and honest in her writing it brings me joy every time I see a new post from her. This post is a masterpiece.

I'd be a terrible business boy if I didn't at least mention EchoFeed, an RSS cross-posting service I run. I also have a podcast that used to be about tech but is now about snacks. Now that you're done reading the interview, go check the blog and subscribe to the RSS feed. If you're looking for more content, go read one of the previous 114 interviews.
Make sure to also say thank you to James Reeves and the other 124 supporters for making this series possible.

0 views
neilzone 3 days ago

Upgrading our time recording system from Kimai v1 to Kimai v2

I have used Kimai as a FOSS time recording system for probably the best part of 10 years. It is a great piece of software, allowing multiple users to record the time that they spend on different tasks, linked to different customers and projects. I use it for time tracking for decoded.legal, recording all my working time. I run it on a server which is not accessible from the Internet, so the fact that we were running the now long-outdated v1 of the software did not bother me too much. But, as part of ongoing hygiene / system security stuff, I've had my eye on upgrading it to Kimai v2 for a while now, and I've finally got round to it.

Fortunately, there is a clear upgrade path from v1 to v2 and It Just Worked. The installation of v2 was itself pretty straightforward, with clear installation instructions. I then imported the data from v1, and the migration/importer tool flagged a couple of issues which needed fixing (e.g. no email address associated with system users, which is now a requirement). The documentation was good in terms of how to deal with those. All in all, it took about 20 minutes to install the new software, sort out DNS, the web server configuration, TLS, and so on, and then import the data from the old installation. I used the export functionality to compare the data in v2 with what I had in v1, to check that there were no (obvious, anyway) disparities. There were not, which was good!

One of the changes in Kimai v2 is the ability to create customised exportable timesheets easily, using the GUI tool. This means that, within a couple of minutes, I had created the kind of timesheet that I provide to clients along with each monthly invoice, so that they can see exactly what I did on which of their matters, and how long I spent on it. For clients who prefer to pay on the basis of time spent, this is important. This is nothing fancy; just a clear summary on the front page, and then a detailed breakdown. I have yet to work out how to group the breakdown on a per-project basis, rather than a single chronological list, but I doubt that this will be much of a problem.

I have yet to investigate the possibility of some automation, particularly around the generation of timesheets at the end of each month, one per customer. I'll still check each of them by hand, of course, but automating their production would be nice. Or, even if not automated, just one click to produce them all. As with v1, Kimai v2 stores its data in a MariaDB database, so automating backups is straightforward. Again, there are clear instructions, which is a good sign.

0 views
Den Odell 3 days ago

Escape Velocity: Break Free from Framework Gravity

Frameworks were supposed to free us from the messy parts of the web. For a while they did, until their gravity started drawing everything else into orbit. Every framework brought with it real progress. React, Vue, Angular, Svelte, and others all gave structure, composability, and predictability to frontend work. But now, after a decade of React dominance, something else has happened. We haven't just built apps with React, we've built an entire ecosystem around it—hiring pipelines, design systems, even companies—all bound to its way of thinking.

The problem isn't React itself, nor any other framework for that matter. The problem is the inertia that sets in once any framework becomes infrastructure. By that point, it's "too important to fail," and everything nearby turns out to be just fragile enough to prove it.

React is no longer just a library. It's a full ecosystem that defines how frontend developers are allowed to think. Its success has created its own kind of gravity, and the more we've built within it, the harder it's become to break free. Teams standardize on it because it's safe: it's been proven to work at massive scale, the talent pool is large, and the tooling is mature. That's a rational choice, but it also means React exerts institutional gravity. Moving off it stops being an engineering decision and becomes an organizational risk instead. Solutions to problems tend to be found within its orbit, because stepping outside it feels like drifting into deep space.

We saw this cycle with jQuery in the past, and we're seeing it again now with React. We'll see it with whatever comes next. Success breeds standardization, standardization breeds inertia, and inertia convinces us that progress can wait. It's the pattern itself that's the problem, not any single framework. But right now, React sits at the center of this dynamic, and the stakes are far higher than they ever were with jQuery. Entire product lines, architectural decisions, and career paths now depend on React-shaped assumptions. We've even started defining developers by their framework: many job listings ask for "React developers" instead of frontend engineers. Even AI coding agents default to React when asked to start a new frontend project, unless deliberately steered elsewhere. Perhaps the only thing harder than building on a framework is admitting you might need to build without one.

React's evolution captures this tension perfectly. Recent milestones include the creation of the React Foundation, the React Compiler reaching v1.0, and new additions in React 19.2 such as the Activity component and Fragment Refs. These updates represent tangible improvements, especially the compiler, which brings automatic memoization at build time, eliminating the need for manual useMemo and useCallback optimization. Production deployments show real performance wins using it: apps in the Meta Quest Store saw up to 2.5x faster interactions as a direct result. This kind of automatic optimization is genuinely valuable work that pushes the entire ecosystem forward.

But here's the thing: the web platform has been quietly heading in the same direction for years, building many of the same capabilities frameworks have been racing to add. Browsers now ship View Transitions, Container Queries, and smarter scheduling primitives. The platform keeps evolving at a fair pace, but most teams won't touch these capabilities until React officially wraps them in a hook or they show up in Next.js docs.
Innovation keeps happening right across the ecosystem, but for many it only becomes "real" once React validates the approach. Which is fine, assuming you enjoy waiting for permission to use the platform you're already building on.

The React Foundation represents an important milestone for governance and sustainability. This new foundation is a part of the Linux Foundation, and founding members include Meta, Vercel, Microsoft, Amazon, Expo, Callstack, and Software Mansion. This is genuinely good for React's long-term health, providing better governance and removing the risk of being owned by a single company. It ensures React can outlive any one organization's priorities. But it doesn't fundamentally change the development dynamic of the framework. Yet. The engineers who actually build React still work at companies like Meta and Vercel. The research still happens at that scale, driven by those performance needs. The roadmap still reflects the priorities of the companies that fund full-time development.

And to be fair, React operates at a scale most frameworks will never encounter. Meta serves billions of users through frontends that run on constrained mobile devices around the world, so it needs performance at a level that justifies dedicated research teams. The innovations they produce, including compiler-driven optimization, concurrent rendering, and increasingly fine-grained performance tooling, solve real problems that exist only at that kind of massive scale. But those priorities aren't necessarily your priorities, and that's the tension. React's innovations are shaped by the problems faced by companies running apps at billions-of-users scale, not necessarily the problems faced by teams building for thousands or millions.

React's internal research reveals the team's awareness of current architectural limitations. Experimental projects like Forest explore signal-like lazy computation graphs; essentially fine-grained reactivity instead of React's coarse re-render model. Another project, Fir, investigates incremental rendering techniques. These aren't roadmap items; they're just research prototypes happening inside Meta. They may never ship publicly. But they do reveal something important: React's team knows the virtual DOM model has performance ceilings, and they're actively exploring what comes after it. This is good research, but it also illustrates the same dynamic at play again: these explorations happen behind the walls of Big Tech, on timelines set by corporate priorities and resource availability.

Meanwhile, frameworks like Solid and Qwik have been shipping production-ready fine-grained reactivity for years. Svelte 5 shipped runes in 2024, bringing signals to mainstream adoption. The gap isn't technical capability, but rather when the industry feels permission to adopt it. For many teams, that permission only comes once React validates the approach. This is true regardless of who governs the project or what else exists in the ecosystem.

I don't want this critique to take away from what React has achieved over the past twelve years. React popularized declarative UIs and made component-based architecture mainstream, which was a huge deal in itself. It proved that developer experience matters as much as runtime performance and introduced the idea that UI could be a pure function of input props and state. That shift made complex interfaces far easier to reason about.
Later additions like hooks solved the earlier class component mess elegantly, and concurrent rendering opened new possibilities for truly responsive UIs. The React team's research into compiler optimization, server components, and fine-grained rendering pushes the entire ecosystem forward. This is true even when other frameworks ship similar ideas first. There's value in seeing how these patterns work at Meta's scale.

The critique isn't that React is bad, but that treating any single framework as infrastructure creates blind spots in how we think and build. When React becomes the lens through which we see the web, we stop noticing what the platform itself can already do, and we stop reaching for native solutions because we're waiting for the framework-approved version to show up first. And crucially, switching to Solid, Svelte, or Vue wouldn't eliminate this dynamic; it would only shift its center of gravity. Every framework creates its own orbit of tools, patterns, and dependencies. The goal isn't to find the "right" framework, but to build applications resilient enough to survive migration to any framework, including those that haven't been invented yet.

This inertia isn't about laziness; it's about logistics. Switching stacks is expensive and disruptive. Retraining developers, rebuilding component libraries, and retooling CI pipelines all take time and money, and the payoff is rarely immediate. It's high risk, high cost, and hard to justify, so most companies stay put, and honestly, who can blame them? But while we stay put, the platform keeps moving. The browser can stream and hydrate progressively, animate transitions natively, and coordinate rendering work without a framework. Yet most development teams won't touch those capabilities until they're built in or officially blessed by the ecosystem. That isn't an engineering limitation; it's a cultural one. We've somehow made "works in all browsers" feel riskier than "works in our framework."

Better governance doesn't solve this. The problem isn't React's organizational structure; it's our relationship to it. Too many teams wait for React to package and approve platform capabilities before adopting them, even when those same features already exist in browsers today.

React 19.2's Activity component captures this pattern perfectly. It serves as a boundary that hides UI while preserving component state and unmounting effects. When set to hidden, it pauses subscriptions, timers, and network requests while keeping form inputs and scroll positions intact. When revealed again by setting it back to visible, those effects remount cleanly. It's a genuinely useful feature. Tabbed interfaces, modals, and progressive rendering all benefit from it, and the same idea extends to cases where you want to pre-render content in the background or preserve state as users navigate between views. It integrates smoothly with React's lifecycle and Suspense boundaries, enabling selective hydration and smarter rendering strategies.

But it also draws an important line between formalization and innovation. The core concept isn't new; it's simply about pausing side effects while maintaining state. Similar behavior can already be built with visibility observers, effect cleanup, and careful state management patterns. The web platform even provides the primitives for it: DOM state preservation and manual effect control. What Activity offers is the formalization. Yet it also exposes how dependent our thinking has become on frameworks.
We wait for React to formalize platform behaviors instead of reaching for them directly. This isn't a criticism of Activity itself; it's a well-designed API that solves a real problem. But it serves as a reminder that we've grown comfortable waiting for framework solutions to problems the platform already lets us solve. After orbiting React for so long, we've forgotten what it feels like to build without its pull.

The answer isn't necessarily to abandon your framework, but to remember that it runs inside the web, not the other way around. I've written before about building the web in islands as one way to rediscover platform capabilities we already have. Even within React's constraints, you can still think platform first (the practices listed at the end of this post are a start). These aren't anti-React practices, they're portable practices that make your web app more resilient. They let you adopt new browser capabilities as soon as they ship, not months later when they're wrapped in a hook. They make framework migration feasible rather than catastrophic. When you build this way, React becomes a rendering library that happens to be excellent at its job, not the foundation everything else has to depend on.

A React app that respects the platform can outlast React itself. When you treat React as an implementation detail instead of an identity, your architecture becomes portable. When you embrace progressive enhancement and web semantics, your ideas survive the next framework wave.

The recent wave of changes, including the React Foundation, React Compiler v1.0, the Activity component, and internal research into alternative architectures, all represent genuine progress. The React team is doing thoughtful work, but these updates also serve as reminders of how tightly the industry has become coupled to a single ecosystem's timeline. That timeline is still dictated by the engineering priorities of large corporations, and that remains true regardless of who governs the project.

If your team's evolution depends on a single framework's roadmap, you are not steering your product; you are waiting for permission to move. That is true whether you are using React, Vue, Angular, or Svelte. The framework does not matter; the dependency does.

It is ironic that we spent years escaping jQuery's gravity, only to end up caught in another orbit. React was once the radical idea that changed how we build for the web. Every successful framework reaches this point eventually, when it shifts from innovation to institution, from tool to assumption. jQuery did it, React did it, and something else will do it next. The React Foundation is a positive step for the project's long-term sustainability, but the next real leap forward will not come from better governance. It will not come from React finally adopting signals either, and it will not come from any single framework "getting it right."

Progress will come from developers who remember that frameworks are implementation details, not identities. Build for the platform first. Choose frameworks second. The web isn't React's, it isn't Vue's, and it isn't Svelte's. It belongs to no one. If we remember that, it will stay free to evolve at its own pace, drawing the best ideas from everywhere rather than from whichever framework happens to hold the cultural high ground. Frameworks are scaffolding, not the building. Escaping their gravity does not mean abandoning progress; it means finding enough momentum to keep moving. Reaching escape velocity, one project at a time.
- Use native forms and form submissions to a server, then enhance with client-side logic
- Prefer semantic HTML and ARIA before reaching for component libraries
- Try View Transitions directly with minimal React wrappers instead of waiting for an official API
- Use Web Components for self-contained widgets that could survive a framework migration
- Keep business logic framework-agnostic, in plain TypeScript modules rather than hooks, and aim to keep your hooks short by pulling logic from outside React (a sketch follows this list)
- Profile performance using browser DevTools first and React DevTools second
- Try native CSS features like scroll snap before adding JavaScript solutions
- Prefer built-in browser APIs to framework-specific alternatives wherever possible
- Experiment with the History API directly before reaching for React Router
- Structure code so routing, data fetching, and state management can be swapped out independently of React
- Test against real browser APIs and behaviors, not just framework abstractions
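As a rough illustration of a couple of these practices (business logic in a plain framework-agnostic module, the History API used directly, and the native View Transitions API with a graceful fallback), here is a minimal sketch. It is not from the article; the module names, route handling, and render function are hypothetical stand-ins for whatever your app does.

```js
// cart.js — plain module, no framework imports, trivially portable.
export function addToCart(cart, item) {
  return { ...cart, items: [...cart.items, item] };
}

// navigation.js — platform first; a framework (or nothing) does the rendering.
export function navigate(url, render) {
  const update = () => {
    history.pushState(null, "", url); // the URL carries the navigation state
    render(url);                      // hypothetical view-rendering callback
  };
  // Use the native View Transitions API where it exists; otherwise just update.
  if (document.startViewTransition) {
    document.startViewTransition(update);
  } else {
    update();
  }
}
```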

0 views
Jim Nielsen 5 days ago

Browser APIs: The Web’s Free SaaS

Authentication on the web is a complicated problem. If you're going to do it yourself, there's a lot you have to take into consideration. But odds are, you're building an app whose core offering has nothing to do with auth. You don't care about auth. It's an implementation detail. So rather than spend your precious time solving the problem of auth, you pay someone else to solve it. That's the value of SaaS. What would be the point of paying for an authentication service, like WorkOS, then re-implementing auth on your own? They have dedicated teams working on that problem. It's unlikely you're going to do it better than them and still deliver on the product you're building.

There's a parallel here, I think, to building stuff in the browser. Browsers provide lots of features to help you deliver good websites fast to an incredibly broad and diverse audience. Browser makers have teams of people who, day-in and day-out, are spending lots of time developing and optimizing their offerings. So if you leverage what they offer you, that gives you an advantage because you don't have to build it yourself. You could build it yourself. You could say "No thanks, I don't want what you have. I'll make my own." But you don't have to. And odds are, whatever you do build yourself is not likely to be as fast as the highly-optimized subsystems you can tie together in the browser.

And the best part? Unlike SaaS, you don't have to pay for what the browser offers you. And because you're not paying, it can't be turned off if you stop paying. Any one of those browser APIs, for example, is free and will work forever. That's a great deal. Are you taking advantage? Reply via: Email · Mastodon · Bluesky
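To make that concrete, here is a small sampler of the kind of "free SaaS" the browser ships. These are standard APIs, though not necessarily the specific example the original post had in mind:

```js
// A few built-in, highly-optimized subsystems you would otherwise hand-roll or pay for.
const id = crypto.randomUUID();                                   // unique IDs, no library
const ago = new Intl.RelativeTimeFormat("en").format(-3, "day");  // "3 days ago"
const price = new Intl.NumberFormat("en-US", {                    // locale-aware currency formatting
  style: "currency",
  currency: "USD",
}).format(1234.5);
const copy = structuredClone({ nested: { state: [1, 2, 3] } });   // deep clone without a utility lib
console.log(id, ago, price, copy);
```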

0 views
David Bushell 1 weeks ago

Better Alt Text

It's been a rare week where I was able to (mostly) ignore client comms and do whatever I wanted! That means perusing my "todo" list, scoffing at past me for believing I'd ever do half of it, and plucking out a gem. One of those gems was a link to "Developing an alt text button for images on [James' Coffee Blog]". I like this feature. I want it on my blog!

My blog wraps images and videos in a figure element with an optional caption. How to add visible alt text? I decided to use a declarative popover. I used popover for my glossary web component, but that implementation required JavaScript. This new feature can be done script-free! Below is an example of the end result. Click the "ALT" button to reveal the text popover (unless you're in RSS land, in which case visit the example, and if you're not in Chrome, see below).

To implement this I appended an extra button and popover element, with the declarative popover attributes, after the image. I generate unique popover and anchor names in my build script. I can't define them as inline custom properties because of my locked-down content security policy. Instead I use the attr() function in CSS. Anchor positioning allows me to place these elements over the image. I could have used absolute positioning inside the figure if not for the caption extending the parent block.

Sadly, relying on anchor positioning means only one thing… my visible alt text feature is Chrome-only! I'll pray for Interop 2026 salvation and call it progressive enhancement for now. To position the popover I first tried a positioning value that sits the popover around/outside the image; instead I need it to sit inside/above the image, which anchor positioning also allows. The button is positioned in a similar way.

Aside from being Chrome-only I think this is a cool feature. Last time I tried to use anchor positioning I almost cried in frustration… so this was a success! It will force me to write better alt text. How do I write alt text good? Advice is welcome. Thanks for reading! Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.
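One optional idea on the Chrome-only point. This is not part of the post's script-free approach, just a hedged progressive-enhancement sketch: a few lines of script could hide the ALT buttons in browsers without CSS anchor positioning. The .alt-toggle class name here is hypothetical.

```js
// Hide the ALT buttons where the popover could not be anchored over the image.
if (!CSS.supports("anchor-name: --x")) {
  for (const button of document.querySelectorAll("button.alt-toggle")) {
    button.hidden = true; // without anchor positioning the popover would be misplaced
  }
}
```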

0 views
iDiallo 1 weeks ago

Why should I accept all cookies?

Around 2013, my team and I finally embarked on upgrading our company's internal software to version 2.0. We had a large backlog of user complaints that we were finally addressing, with security at the top of the list. At the very top was moving away from plain text passwords. From the outside, the system looked secure. We never emailed passwords, we never displayed them, we had strict protocols for password rotation and management. But this was a carefully staged performance. The truth was, an attacker with access to our codebase could have downloaded the entire user table in minutes. All our security measures were pure theater, designed to look robust while a fundamental vulnerability sat in plain sight.

After seeing the plain text password table, I remember thinking about a story that was unfolding around the same time: a 9-year-old boy flew from Minneapolis to Las Vegas without a boarding pass. This was in an era where we removed our shoes and belts for TSA agents to humiliate us. Yet this child was able, without even trying, to bypass all the theater that was built around the security measures. How did he get past TSA? How did he get through the gate without a boarding pass? How was he assigned a seat on the plane? How did he... there are just so many questions. Just like the security measures on our website, it was all a performance, an illusion.

I can't help but see the same script playing out today, not in airports or codebases, but in the cookie consent banners that pop up on nearly every website I visit. It's always a variation of "This website uses cookies to enhance your experience. [Accept All] or [Customize]." Rarely is there a bold, equally prominent "Reject All" button. And when there is, the reject-all button will open a popup where you have to tweak some settings. This is not an accident; it's a dark pattern. It's the digital equivalent of a TSA agent asking, "Would you like to take the express lane or would you like to go through a more complicated screening process?" Your third option is to turn back and go home, which isn't really an option if you made it all the way to the airport.

A few weeks back, I was exploring not just dark patterns but hostile software. Because you don't own the device you paid for, the OS can enforce decisions by never giving you any options. You don't have a choice. Any option you choose will lead you down the same funnel that benefits the company, and give you the illusion of agency.

So, let's return to the cookie banner. As a user, what is my tangible incentive to click "Accept All"? The answer is: there is none. "Required" cookies are, by definition, non-negotiable for basic site function. Accepting the additional "performance," "analytics," or "marketing" cookies does not unlock a premium feature for me. It doesn't load the website faster or give me a cleaner layout. It does not improve my experience. My only "reward" for accepting all is that the banner disappears quickly. The incentive is the cessation of annoyance, a small dopamine hit for compliance. In exchange, I grant the website permission to track my behavior, build an advertising profile, and share my data with a shadowy network of third parties. The entire interaction is a rigged game. Whenever I click on the "Customize" option, I'm overwhelmed by a labyrinth of toggles and sub-menus designed to make rejection so tedious that "Accept All" becomes the path of least resistance. My default reaction is to reject everything.
It doesn't matter if you use dark patterns; my eyes are trained to read the fine print in a split second. But when that option is hidden, I've resorted to opening my browser's developer tools and deleting the banner element from the page altogether. It's a desperate workaround for a system that refuses to offer a legitimate "no." Lately, I don't even bother clicking on reject all. I just delete the elements altogether. Like I said, there are no incentives for me to interact with the menu.

We eventually plugged that security vulnerability in our old application. We hashed the passwords and closed the backdoor, moving from security theater to actual security. The fix wasn't glamorous, but it was a real improvement. The current implementation of "choice" is largely privacy theater. It's a performance designed to comply with the letter of regulations like GDPR while violating their spirit. It makes users feel in control while systematically herding them toward the option that serves corporate surveillance. There is never an incentive for cookie tracking on the user end. So this theater has to be created to justify selling our data and turning us into products of each website we visit. But if you are like me, don't forget you can always use the developer tools to make the banner disappear. Or use uBlock.

The same non-choice shows up well beyond cookie banners:
- On Windows or Google Drive: "Get started" or "Remind me later." Where is "Never show this again"?
- On Twitter: "See less often" is the only option for an unwanted notification, never "Stop these entirely."
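For the developer-tools route mentioned above, a console snippet is usually enough. This is a hedged sketch: the selectors are guesses that differ on every site, so inspect the banner first and adjust.

```js
// Remove anything that looks like a consent banner, then restore scrolling.
document
  .querySelectorAll('[id*="cookie" i], [class*="cookie" i], [class*="consent" i]')
  .forEach((el) => el.remove());
document.documentElement.style.overflow = "auto"; // many banners lock scroll on <html>/<body>
document.body.style.overflow = "auto";
```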

0 views
Ahmad Alfy 1 weeks ago

Your URL Is Your State

A couple of weeks ago, when I was publishing The Hidden Cost of URL Design, I needed to add SQL syntax highlighting. I headed to the PrismJS website, trying to remember if it should be added as a plugin or what. I was overwhelmed by the number of options on the download page, so I headed back to my code. I checked the PrismJS file and at the top of it I found a comment containing a URL. I had completely forgotten about this. I clicked the URL, and it was the PrismJS download page with every checkbox, dropdown, and option pre-selected to match my exact configuration. Themes chosen. Languages selected. Plugins enabled. Everything, perfectly reconstructed from that single URL.

It was one of those moments where something you once knew suddenly clicks again with fresh significance. Here was a URL doing far more than just pointing to a page. It was storing state, encoding intent, and making my entire setup shareable and recoverable. No database. No cookies. No localStorage. Just a URL.

This got me thinking: how often do we, as frontend engineers, overlook the URL as a state management tool? We reach for all sorts of abstractions to manage state, such as global stores, contexts, and caches, while ignoring one of the web's most elegant and oldest features: the humble URL. In my previous article, I wrote about the hidden costs of bad URL design. Today, I want to flip that perspective and talk about the immense value of good URL design. Specifically, how URLs can be treated as first-class state containers in modern web applications.

Scott Hanselman famously said "URLs are UI" and he's absolutely right. URLs aren't just technical addresses that browsers use to fetch resources. They're interfaces. They're part of the user experience. But URLs are more than UI. They're state containers. Every time you craft a URL, you're making decisions about what information to preserve, what to make shareable, and what to make bookmarkable. Think about what URLs give us for free: shareability, bookmarkability, a working back button, and deep linking (these are spelled out at the end of this post). URLs make web applications resilient and predictable. They're the web's original state management solution, and they've been working reliably since 1991. The question isn't whether URLs can store state. It's whether we're using them to their full potential.

Before we dive into examples, let's break down how URLs encode state. For many years, the scheme, host, path, query string, and fragment were considered the only components of a URL. That changed with the introduction of Text Fragments, a feature that allows linking directly to a specific piece of text within a page. You can read more about it in my article Smarter than 'Ctrl+F': Linking Directly to Web Page Content. Different parts of the URL encode different types of state: path segments for hierarchy, query parameters for filters and options, and the fragment for client-side navigation (there's a breakdown at the end of this post).

Sometimes you'll see multiple values packed into a single key using delimiters like commas or plus signs. It's compact and human-readable, though it requires manual parsing on the server side. Developers often encode complex filters or configuration objects into a single query string. A simple convention uses key-value pairs separated by commas, while others serialize JSON or even Base64-encode it for safety. For flags or toggles, it's common to pass booleans explicitly or to rely on the key's presence as truthy. This keeps URLs shorter and makes toggling features easy. Another old pattern is bracket notation, which represents arrays in query parameters. It originated with early web frameworks like PHP, where appending [] to a parameter name signals that multiple values should be grouped together.
Many modern frameworks and parsers (like Node's qs library or Express middleware) still recognize this pattern automatically. However, it's not officially standardized in the URL specification, so behavior can vary depending on the server or client implementation. Notice how it even breaks the syntax highlighting on my website. The key is consistency. Pick patterns that make sense for your application and stick with them.

Let's look at real-world examples of URLs as state containers.

PrismJS configuration: the entire syntax highlighter configuration encoded in the URL. Change anything in the UI, and the URL updates. Share the URL, and someone else gets your exact setup. This one uses the anchor and not query parameters, but the concept is the same.

GitHub line highlighting: a URL links to a specific file while highlighting lines 108 through 136. Click this link anywhere, and you'll land on the exact code section being discussed.

Google Maps: coordinates, zoom level, and map type all in the URL. Share this link, and anyone can see the exact same view of the map.

Figma and design tools: before shareable design links, finding an updated screen or component in a large file was a chore. Someone had to literally show you where it lived, scrolling and zooming across layers. Today, a Figma link carries all that context: canvas position, zoom level, selected element. Literally everything needed to drop you right into the workspace.

E-commerce filters: this is one of the most common real-world patterns you'll encounter. Every filter, sort option, and price range preserved. Users can bookmark their exact search criteria and return to it anytime. Most importantly, they can come back to it after navigating away or refreshing the page.

Before we discuss implementation details, we need to establish a clear guideline for what should go into the URL. Not all state belongs in URLs (lists of good and poor candidates are at the end of this post). Here's a simple heuristic: if you are not sure whether a piece of state belongs in the URL, ask yourself, "If someone else clicks this URL, should they see the same state?" If so, it belongs in the URL. If not, use a different state management approach.

The modern URLSearchParams API makes URL state management straightforward, and the popstate event fires when the user navigates with the browser's Back or Forward buttons. It lets you restore the UI to match the URL, which is essential for keeping your app's state and history in sync. Usually your framework's router handles this for you, but it's good to know how it works under the hood. React Router and Next.js provide hooks that make this even cleaner.

Now that we've seen how URLs can hold application state, let's look at a few best practices that keep them clean, predictable, and user-friendly. Don't pollute URLs with default values; use defaults in your code when reading parameters. For high-frequency updates (like search-as-you-type), debounce URL changes. When deciding between pushState and replaceState, think about how you want the browser history to behave. pushState creates a new history entry, which makes sense for distinct navigation actions like changing filters, pagination, or navigating to a new view — users can then use the Back button to return to the previous state. On the other hand, replaceState updates the current entry without adding a new one, making it ideal for refinements such as search-as-you-type or minor UI adjustments where you don't want to flood the history with every keystroke.

When designed thoughtfully, URLs become more than just state containers.
They become contracts between your application and its consumers. A good URL defines expectations for humans, developers, and machines alike. A well-structured URL draws the line between what's public and what's private, client and server, shareable and session-specific. It clarifies where state lives and how it should behave. Developers know what's safe to persist, users know what they can bookmark, and machines know what's worth indexing. URLs, in that sense, act as interfaces: visible, predictable, and stable.

Readable URLs explain themselves. Consider the difference between a cryptic URL of opaque IDs and flags and one that reads like a sentence: the first hides intent, the second tells a story. A human can read it and understand what they're looking at. A machine can parse it and extract meaningful structure. Jim Nielsen calls these "examples of great URLs". URLs that explain themselves.

URLs are also cache keys, and well-designed URLs enable better caching strategies (summarized at the end of this post). You can even visualize a user's journey without any extra tracking code: your analytics tools can track this flow without additional instrumentation, and every URL parameter becomes a dimension you can analyze. URLs can communicate API versions, feature flags, and experiments, which makes gradual rollouts and backwards compatibility much more manageable.

Even with the best intentions, it's easy to misuse URL state. Here are common pitfalls. The classic single-page app mistake: if your app forgets its state on refresh, you're breaking one of the web's fundamental features. Users expect URLs to preserve context. I remember a viral video from years ago where a Reddit user vented about an e-commerce site: every time she hit "Back," all her filters disappeared. Her frustration summed it up perfectly. If users lose context, they lose patience.

This one seems obvious, but it's worth repeating: URLs are logged everywhere: browser history, server logs, analytics, referrer headers. Treat them as public. Choose parameter names that make sense; future you (and your team) will thank you. If you need to base64-encode a massive JSON object, the URL probably isn't the right place for that state. Browsers and servers impose practical limits on URL length (usually between 2,000 and 8,000 characters), but the reality is more nuanced. As this detailed Stack Overflow answer explains, limits come from a mix of browser behavior, server configurations, CDNs, and even search engine constraints. If you're bumping against them, it's a sign you need to rethink your approach. And respect browser history: if a user action should be "undoable" via the back button, use pushState. If it's a refinement, use replaceState.

That PrismJS URL reminded me of something important: good URLs don't just point to content. They describe a conversation between the user and the application. They capture intent, preserve context, and enable sharing in ways that no other state management solution can match. We've built increasingly sophisticated state management libraries like Redux, MobX, Zustand, Recoil, and others. They all have their place, but sometimes the best solution is the one that's been there all along.

In my previous article, I wrote about the hidden costs of bad URL design. Today, we've explored the flip side: the immense value of good URL design. URLs aren't just addresses. They're state containers, user interfaces, and contracts all rolled into one. If your app forgets its state when you hit refresh, you're missing one of the web's oldest and most elegant features.
What URLs give us for free:
- Shareability: send someone a link, and they see exactly what you see
- Bookmarkability: save a URL, and you've saved a moment in time
- Browser history: the back button just works
- Deep linking: jump directly into a specific application state

How the parts of a URL encode state:
- Path segments: best used for hierarchical resource navigation (user 123's posts, documentation structure, application sections)
- Query parameters: perfect for filters, options, and configuration (UI preferences, pagination, data filtering, date ranges)
- Anchor (fragment): ideal for client-side navigation and page sections (GitHub line highlighting, scroll to section, single-page app routing, though it's rarely used these days)

Good candidates for URL state:
- Search queries and filters
- Pagination and sorting
- View modes (list/grid, dark/light)
- Date ranges and time periods
- Selected items or active tabs
- UI configuration that affects content
- Feature flags and A/B test variants

Poor candidates for URL state:
- Sensitive information (passwords, tokens, PII)
- Temporary UI states (modal open/closed, dropdown expanded)
- Form input in progress (unsaved changes)
- Extremely large or complex nested data
- High-frequency transient states (mouse position, scroll position)

URLs as cache keys:
- Same URL = same resource = cache hit
- Query params define cache variations
- CDNs can cache intelligently based on URL patterns
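As a rough illustration of the pattern described above (read state from the URL on load, write it back with pushState or replaceState, skip defaults, and listen for popstate), here is a minimal framework-free sketch. The element IDs and parameter names are hypothetical:

```js
// Minimal URL-as-state sketch. Assumes a hypothetical search box (#q) and sort select (#sort).
const defaults = { q: "", sort: "newest" };

function readStateFromUrl() {
  const params = new URLSearchParams(location.search);
  return {
    q: params.get("q") ?? defaults.q,
    sort: params.get("sort") ?? defaults.sort,
  };
}

function writeStateToUrl(state, { push = false } = {}) {
  const params = new URLSearchParams();
  // Only serialize non-default values, so URLs stay clean.
  for (const [key, value] of Object.entries(state)) {
    if (value !== defaults[key]) params.set(key, value);
  }
  const query = params.toString();
  const url = query ? `${location.pathname}?${query}` : location.pathname;
  // pushState for distinct navigations, replaceState for refinements.
  history[push ? "pushState" : "replaceState"](null, "", url);
}

function render(state) {
  document.querySelector("#q").value = state.q;
  document.querySelector("#sort").value = state.sort;
  // ...re-run the search/filter with `state` here...
}

// Keep the UI in sync when the user presses Back or Forward.
window.addEventListener("popstate", () => render(readStateFromUrl()));

// Initial load: the URL is the source of truth.
render(readStateFromUrl());
```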

0 views
Herman's blog 1 weeks ago

Aggressive bots ruined my weekend

On the 25th of October Bear had its first major outage. Specifically, the reverse proxy which handles custom domains went down, meaning all custom domains started timing out. Unfortunately my monitoring tool failed to notify me, and it being a Saturday, I didn't notice the outage for longer than is reasonable. I apologise to everyone who was affected by it. First, I want to dissect the root cause, exactly what went wrong, and then provide the steps I've taken to mitigate this in the future.

I wrote about The Great Scrape at the beginning of this year. The vast majority of web traffic is now bots, and it is becoming increasingly more hostile to have publicly available resources on the internet. There are three major kinds of bots currently flooding the internet: AI scrapers, malicious scrapers, and unchecked automations/scrapers.

The first has been discussed at length. Data is worth something now that it is used as fodder to train LLMs, and there is a financial incentive to scrape, so scrape they will. They've depleted all human-created writing on the internet, and are becoming increasingly ravenous for new wells of content. I've seen this compared to the search for low-background-radiation steel, which is, itself, very interesting. These scrapers, however, are the easiest to deal with since they tend to identify themselves as ChatGPT, Anthropic, XAI, et cetera. They also tend to specify whether they are from user-initiated searches (think all the sites that get scraped when you make a request with ChatGPT), or data mining (data used to train models). On Bear Blog I allow the first kind, but block the second, since bloggers want discoverability, but usually don't want their writing used to train the next big model.

The next two kinds of scraper are more insidious. The malicious scrapers are bots that systematically scrape and re-scrape websites, sometimes every few minutes, looking for vulnerabilities such as misconfigured WordPress instances or sensitive files accidentally left lying around, among other things. It's more dangerous than ever to self-host, since simple mistakes in configurations will likely be found and exploited. In the last 24 hours I've blocked close to 2 million malicious requests across several hundred blogs. What's wild is that these scrapers rotate through thousands of IP addresses during their scrapes, which leads me to suspect that the requests are being tunnelled through apps on mobile devices, since the ASNs tend to be cellular networks. I'm still speculating here, but I think app developers have found another way to monetise their apps by offering them for free, and selling tunnel access to scrapers.

Now, on to the unchecked automations. Vibe coding has made web-scraping easier than ever. Any script-kiddie can easily build a functional scraper in a single prompt and have it run all day from their home computer, and if the dramatic rise in scraping is anything to go by, many do. Tens of thousands of new scrapers have cropped up over the past few months, accidentally DDoSing website after website in their wake. The average consumer-grade computer is significantly more powerful than a VPS, so these machines can easily cause a lot of damage without noticing.

I've managed to keep all these scrapers at bay using a combination of web application firewall (WAF) rules and rate limiting provided by Cloudflare, as well as some custom code which finds and quarantines bad bots based on their activity.
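The allow/block split for self-identifying AI crawlers described above could look something like the following. This is a hedged, Express-style sketch rather than Bear's actual code; the bot names are a small illustrative sample and would need tuning.

```js
// Block crawlers that scrape for model training, allow user-initiated fetches.
const TRAINING_BOTS = /GPTBot|ClaudeBot|CCBot|Bytespider/i;
const USER_INITIATED = /ChatGPT-User|OAI-SearchBot|Perplexity-User/i;

function aiCrawlerPolicy(req, res, next) {
  const ua = req.headers["user-agent"] || "";
  if (TRAINING_BOTS.test(ua) && !USER_INITIATED.test(ua)) {
    return res.status(403).send("Training crawlers are not welcome here.");
  }
  next(); // humans, feed readers, and user-initiated fetches pass through
}

// Usage, assuming an Express app: app.use(aiCrawlerPolicy);
```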
I've played around with serving Zip Bombs, which was quite satisfying, but I stopped for fear of accidentally bombing a legitimate user. Another thing I've played around with is Proof of Work validation, making it expensive for bots to scrape, as well as serving endless junk data to keep the bots busy. Both of these are interesting, but ultimately are just as effective as simply blocking those requests, without the increased complexity.

With that context, here's exactly what went wrong on Saturday. Previously, the bottleneck for page requests was the web server itself, since it does the heavy lifting. It automatically scales horizontally by up to a factor of 10, if necessary, but bot requests can scale by significantly more than that, so having strong bot detection and mitigation, as well as serving highly-requested endpoints via a CDN, is necessary. This is a solved problem, as outlined in my Great Scrape post, but worth restating.

On Saturday morning a few hundred blogs were DDoSed, with tens of thousands of pages requested per minute (from the logs it's hard to say whether they were malicious, or just very aggressive scrapers). The above-mentioned mitigations worked as expected, however the reverse proxy—which sits up-stream of most of these mitigations—became saturated with requests and decided it needed to take a little nap. The big blue spike is what toppled the server. It's so big it makes the rest of the graph look flat. This server had been running with zero downtime for 5 years up until this point.

Unfortunately my uptime monitor failed to alert me via the push notifications I'd set up, even though it's the only app I have that not only has notifications enabled (see my post on notifications), but even has critical alerts enabled, so it'll wake me up in the middle of the night if necessary. I still have no idea why this alert didn't come through, and I have ruled out misconfiguration through various tests.

This brings me to how I will prevent this from happening in the future:
- Redundancy in monitoring. I now have a second monitoring service running alongside my uptime monitor which will give me a phone call, email, and text message in the event of any downtime.
- More aggressive rate-limiting and bot mitigation on the reverse proxy. This already reduces the server load by about half.
- I've bumped up the size of the reverse proxy, which can now handle about 5 times the load. This is overkill, but compute is cheap, and certainly worth the stress-mitigation. I'm already bald. I don't need to go balder.
- Auto-restart the reverse proxy if bandwidth usage drops to zero for more than 2 minutes.
- Added a status page, available at https://status.bearblog.dev for better visibility and transparency. Hopefully those bars stay solid green forever.

This should be enough to keep everything healthy. If you have any suggestions, or need help with your own bot issues, send me an email. The public internet is mostly bots, many of whom are bad netizens. It's the most hostile it's ever been, and it is because of this that I feel it's more important than ever to take good care of the spaces that make the internet worth visiting. The arms race continues...
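For the rate-limiting and quarantine side, here is a hedged sketch of the general idea as a tiny in-memory sliding window. It is an illustration only (the post describes doing this at the CDN/WAF layer), and the limits and ban duration are made up.

```js
const WINDOW_MS = 60_000;   // look at the last minute of traffic
const MAX_REQUESTS = 120;   // allow this many requests per IP per window
const BAN_MS = 10 * 60_000; // quarantine offenders for ten minutes

const hits = new Map();     // ip -> timestamps of recent requests
const banned = new Map();   // ip -> time the ban expires

function allowRequest(ip, now = Date.now()) {
  const banUntil = banned.get(ip);
  if (banUntil && now < banUntil) return false;

  const recent = (hits.get(ip) ?? []).filter((t) => now - t < WINDOW_MS);
  recent.push(now);
  hits.set(ip, recent);

  if (recent.length > MAX_REQUESTS) {
    banned.set(ip, now + BAN_MS); // quarantine the noisy client for a while
    return false;
  }
  return true;
}

// Usage in any Node HTTP handler:
// if (!allowRequest(req.socket.remoteAddress)) { res.statusCode = 429; return res.end(); }
```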

0 views
Christian Jauvin 1 weeks ago

Render a markdown table to docx with pandoc

Suppose you're like me, in the middle of editing a Word document, and you realize that you're fed up with Word. So you mention it to a colleague, who happily tells you: use pandoc, it can render docx files! So here you go: you first write up a simple markdown table, and then you run the pandoc conversion command, with a lot of hope.

0 views
The Jolly Teapot 1 weeks ago

October 2025 blend of links

Some links don't call for a full blog post, but sometimes I still want to share some of the good stuff I encounter on the web.

Why it is so hard to tax the super-rich ・Very interesting and informative video, to the point that I wish it were a full series. Who knew I would one day be so fascinated by the topic of… checks notes … economics?

jsfree.org ・Yes, a thousand times yes to this collection of sites that work without needing any JavaScript. I don't know if it's the season or what, but these days I'm blocking JS every chance that I get. I even use DuckDuckGo again as a search engine because other search engines often require JavaScript to work.

Elon Musk's Grokipedia contains copied Wikipedia pages ・Just to be safe, I've immediately added a redirection on StopTheMadness so that the grokipedia domain is replaced by wikipedia.com (even if Wikipedia has its problems, especially in French). Also, what's up with this shitty name? Why not Grokpedia? I would still not care, but at least it wouldn't sound as silly.

POP Phone ・I don't know for whom yet, but I will definitely put one of these under the Christmas tree this winter. (Via Kottke)

PolyCapture ・The app nerd in me is looking at these screenshots like a kid looks at a miniature train. (Via Daring Fireball)

Bari Weiss And The Tyranny Of False Balance ・"You don't need to close newspapers when you can convince editors that 'balance' means giving equal weight to demonstrable lies and documented facts."

light-dark() ・Neat and elegant new CSS function that made me bring back the dark mode on this site, just to have an excuse to use it in the CSS.

Eunoia: Words that Don't Translate ・Another link to add to your bookmark folder named "conversation starters." (Via Dense Discovery)

Why Taste Matters More ・"Taste gives you vision. It's the lens through which you decide what matters, and just as importantly, what doesn't. Without taste, design drifts into decoration or efficiency for efficiency's sake. Devoid of feeling."

Tiga – Bugatti ・I recently realised that this truly fantastic song is already more than 10 years old, and I still can't wrap my head around this fact. The video, just like the song, hasn't aged one bit; I had forgotten how creative and fun it is.

More "Blend of links" posts here: Blend of links archive

0 views
Loren Stewart 1 weeks ago

I Built the Same App 10 Times: Evaluating Frameworks for Mobile Performance

Started evaluating 3 frameworks for work, ended up building 10. Next-gen frameworks (Marko, SolidStart, SvelteKit, Qwik) all deliver instant 35-39ms performance. The real differentiator? Bundle sizes range from 28.8 kB to 176.3 kB compressed. Choose based on your priorities, not microscopic FCP differences.

0 views
Jim Nielsen 1 weeks ago

Don’t Forget These Tags to Make HTML Work Like You Expect

I was watching Alex Petros' talk and he has a slide in there titled "Incantations that make HTML work correctly". This got me thinking about the basic snippets of HTML I've learned to always include in order for my website to work as I expect in the browser — like "Hey, I just made a file on disk and am going to open it in the browser. What should be in there?" This is what comes to mind.

The doctype. Without a doctype, browsers may switch to quirks mode, emulating legacy, pre-standards behavior. This will change how calculations work around layout, sizing, and alignment. The standard HTML5 doctype is what you want for consistent rendering. It's case-insensitive, so whether you write it like it's 1998 or eschew all societal norms with the casing, they'll all work.

The lang attribute. Declare the document's language. Browsers, search engines, assistive technologies, etc. can leverage it to: get pronunciation and voice right for screen readers; improve indexing and translation accuracy; and apply locale-specific tools (e.g. spell-checking). Omit it and things will look ok, but lots of basic web-adjacent tools might get things wrong. Specifying it makes everything around the HTML work better and more accurately, so I always try to remember to include it.

The character encoding. This piece of info can come back from the server as a header (the charset on the Content-Type header, for example), but I like to set it in my HTML, especially when I'm making files on disk I open manually in the browser. This tells the browser how to interpret text, ensuring characters like é, ü, and others display correctly. So many times I've opened a document without this tag and things just don't look right — like my smart quotes. Stick some smart quotes in an HTML file without it, open it on your computer, and things might look a bit wonky. But stick a charset meta tag in there and you'll find some relief.

The viewport meta tag. Sometimes I'll quickly prototype a little HTML and think, "Great, it's working as I expect!" Then I go open it on mobile and everything looks tiny — "[Facepalm] you forgot the meta viewport tag!" Take a look at this screenshot, where I forgot the meta viewport tag on the left but included it on the right. That ever happen to you? No, just me? Well anyway, it's a good 'un to include to make HTML work the way you expect.

I know what you're thinking, I forgot the most important snippet of them all for writing HTML… Reply via: Email · Mastodon · Bluesky
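If you want to check whether a page got these basics right, here is a small console sketch using standard DOM APIs (not from the original post):

```js
// Paste into DevTools on any page to audit the "make HTML behave" basics.
const checks = {
  // "CSS1Compat" means standards mode; "BackCompat" means quirks mode (missing doctype).
  standardsMode: document.compatMode === "CSS1Compat",
  // The declared language, used by screen readers, translators, and spell checkers.
  lang: document.documentElement.lang || "(missing)",
  // The encoding the browser actually used to decode the page.
  charset: document.characterSet,
  // Whether a meta viewport tag is present for sensible mobile rendering.
  hasViewport: !!document.querySelector('meta[name="viewport"]'),
};
console.table(checks);
```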

0 views
iDiallo 1 week ago

Is RSS Still Relevant?

I'd like to believe that RSS is still relevant and remains one of the most important technologies we've created. The moment I built this blog, I made sure my feed was working properly. Back in 2013, the web was already starting to move away from RSS. Every few months, an article would go viral declaring that RSS was dying or dead. Fast forward to 2025, and those articles are nonexistent, and most people don't even know what RSS is. One of the main advantages of an RSS feed is that it allows me to read news and articles without worrying about an algorithm controlling how I discover them. I have a list of blogs I'm subscribed to, and I consume their content chronologically. When someone writes an article I'm not interested in, I can simply skip it. I don't need to train an AI to detect and understand the type of content I don't like. Who knows, the author might write something similar in the future that I do enjoy. I reserve that agency to judge for myself. The fact that RSS links aren't prominently featured on blogs anymore isn't really a problem for me. I have the necessary tools to find them and subscribe on my own. In general, people who care about RSS are already aware of how to subscribe. Since I have this blog and have been posting regularly this year, I can actually look at my server logs and see who's checking my feed. From January 1st to September 1st, 2025, there were a total of 537,541 requests to my RSS feed. RSS readers often check websites at timed intervals to detect when a new article is published. Some are very aggressive and check every 10 minutes throughout the day, while others have somehow figured out my publishing schedule and only check a couple of times daily. RSS readers, or feed parsers, don't always identify themselves. The most annoying name I've seen is just a bare HTTP-library default, probably a Node.js script running on someone's local machine. However, I do see other prominent readers like Feedly, NewsBlur, and Inoreader, and they are easy to spot in my logs. There are two types of readers: those from cloud services like Feedly that have consistent IP addresses you can track over time, and those running on user devices. I can identify the latter as user devices because users often click on links and visit my blog with the same IP address. So far throughout the year, I've seen 1,225 unique reader names. It's hard to confirm if they're truly unique since some are the same application with different versions. For example, Tiny Tiny RSS has accessed the website with 14 different versions, from version 22.08 to 25.10. I've written a script to extract as many identifiable readers as possible while ignoring the generic ones that just use common browser user agents, and compiled a list of the RSS readers and feed parsers that have accessed my blog (the raw list of RSS user agents is here). RSS might be irrelevant on social media, but that doesn't really matter. The technology is simple enough that anyone who cares can implement it on their platform. It's just a fancy XML file. It comes installed and enabled by default on several blogging platforms. It doesn't have to be the de facto standard on the web, just a good way for people who are aware of it to share articles without being at the mercy of dominant platforms.
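The post mentions a script that extracts identifiable reader names while ignoring generic browser user agents. As a rough sketch of how such a census might look against a combined-format access log (the log path, feed path, skip-list, and the crude grouping by product token are assumptions, not the author's actual script):

```python
# Rough sketch of a feed-reader census from a combined-format access log.
# LOG_PATH, FEED_PATH, and the GENERIC skip-list are assumptions for the
# example; this is not the author's actual script.
import re
from collections import Counter

LOG_PATH = "access.log"       # hypothetical log file
FEED_PATH = "/feed.xml"       # hypothetical feed URL on the site
GENERIC = ("Mozilla/5.0", "curl/", "Wget/")  # agents we can't attribute to a reader

# combined log format: ... "GET /path HTTP/1.1" status size "referer" "user-agent"
LINE = re.compile(r'"(?:GET|HEAD) (?P<path>\S+) HTTP/[^"]*" \d+ \S+ "[^"]*" "(?P<ua>[^"]*)"')

def reader_counts(log_path: str) -> Counter:
    counts: Counter = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = LINE.search(line)
            if not m or not m.group("path").startswith(FEED_PATH):
                continue
            ua = m.group("ua")
            if any(ua.startswith(g) for g in GENERIC):
                continue
            # crude grouping: keep only the product token, e.g. "Feedly/1.0 (...)" -> "Feedly"
            counts[ua.split("/")[0].strip()] += 1
    return counts

if __name__ == "__main__":
    for name, n in reader_counts(LOG_PATH).most_common(20):
        print(f"{n:8}  {name}")
```

Counting per reader-and-IP pair instead of per user agent string would get closer to separating cloud services with stable IPs from readers running on user devices.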

1 view
Blargh 1 week ago

The strange webserver hot potato — sending file descriptors

I’ve previously mentioned my io-uring webserver tarweb. I’ve now added another interesting aspect to it. As you may or may not be aware, on Linux it’s possible to send a file descriptor from one process to another over a unix domain socket. That’s actually pretty magic if you think about it. You can also send unix credentials and SELinux security contexts, but that’s a story for another day. I want to run some domains using my webserver “tarweb”. But not all. And I want to host them on a single IP address, on the normal HTTPS port 443. Simple, right? Just put nginx in front as a reverse proxy? Ah, but I don’t want nginx to stay in the path. After SNI (read: “browser saying which domain it wants”) has been identified I want the TCP connection to go directly from the browser to the correct backend. I’m sure somewhere on the internet there’s already an SNI router that does this, but all the ones I found stay in line with the request path, adding a hop. A few reasons:
- I want the backend to know the real client IP address, on the socket itself.
- I don’t want restarting nginx to cut existing connections to backends.
- I’d like to use TLS keys that the nginx user doesn’t have access to.
- Livecount has an open websocket to every open browser tab in the world reading a given page, so they add up (no, it doesn’t log, it just keeps count). Last time I got blog posts on HackerNews nginx ran out of file descriptors, and started serving 500s even for plain old static files on disk. For now I’ve moved livecount to a different port, but in the long run I want it back on port 443, and yet isolated from nginx so that the latter keeps working even if livecount is overloaded.
I built a proof of concept SNI router. It is a frontline server receiving TCP connections, on which it then snoops the SNI from the TLS ClientHello, and routes the connection according to its given rules. Anything it reads from the socket is sent along to the real backend along with the file descriptor. So the backend (in my use that’s tarweb) needs to have code cooperating to receive the new connection. It’s not the cleanest code, but it works. I got ChatGPT to write the boring “parse the TLS record / ClientHello” parts. Rust is a memory safe language, so “how bad could it be?”. :-) It seems to work for all the currently used TLS versions. As I said, it requires the backend to be ready to receive “hey, here’s a file descriptor, and here’s the first few hundred bytes you should treat as if you’ve read them from the client”. File descriptors don’t have an operation to “unread”. If they did then this would be easier. Then it would “just” be a matter of giving a backend webserver a file descriptor. For some use cases that could mean starting a new webserver process that reads and writes from stdin/stdout. Not super efficient to go back to the fork-exec-per-connection model from the previous century, but it would achieve the direct connection. But the details are academic. We do need to pass along the snooped bytes somehow, or the TLS handshake won’t succeed. Which means it does need cooperation from the backend. Because the SNI router never writes to the client, and therefore doesn’t perform a TLS handshake, it doesn’t need any private keys or certificates. The SNI router has no secrets, and sees no secrets. I also added a mode that proxies the TCP connection, if some SNI should be routed to a different server. But of course then it’s not possible to pass the file descriptor. So encrypted bytes will bounce on the SNI router for that kind of flow. But still the SNI router is not able to decrypt anything. A downside is of course that bouncing the connection around the world will slow it down, add latency, and waste resources. So pass the file descriptor where possible. So now my setup has the SNI router accept the connection, and then throw the very file descriptor over to tarweb, saying “you deal with this TCP connection”.
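The hand-off relies on the standard Linux mechanism for passing descriptors over a unix domain socket (SCM_RIGHTS ancillary data). The router and tarweb do this in Rust; the following is only a minimal Python sketch of the same idea, using the stdlib wrappers socket.send_fds / socket.recv_fds and a made-up socket path:

```python
# Minimal sketch: hand an accepted TCP connection (plus the bytes already
# snooped from it) to another process over a unix domain socket.
# Requires Python 3.9+ for socket.send_fds / socket.recv_fds.
# BACKEND_SOCK is a made-up path.
import socket

BACKEND_SOCK = "/run/sni-router/backend.sock"

def hand_off(client: socket.socket, snooped: bytes) -> None:
    """Router side: send the client fd together with the bytes we already read."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as chan:
        chan.connect(BACKEND_SOCK)
        # One message carries both the payload and the descriptor (SCM_RIGHTS).
        socket.send_fds(chan, [snooped], [client.fileno()])
    client.close()  # our copy is no longer needed; the backend now owns the connection

def receive_hand_off(chan: socket.socket) -> tuple:
    """Backend side: recover the descriptor and the bytes to treat as already read."""
    data, fds, _flags, _addr = socket.recv_fds(chan, 4096, 1)
    client = socket.socket(fileno=fds[0])  # adopt the passed-in descriptor
    return client, data
```

The important detail matches the post: the bytes already read from the client travel in the same message as the descriptor, so the backend can treat them as if it had read them itself.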
Tarweb does the TLS handshake, and then throws the TLS session keys over to the kernel, saying “I can’t be bothered doing encryption, you do it”, and then actually handles the HTTP requests. Well actually, there’s another strange indirection. When tarweb receives a file descriptor, it uses io-uring “registered files” to turn it into a “fixed file handle”, and closes the original file descriptor. On the kernel side there’s still a file descriptor of course, but there’s nothing left in the process’s /proc/<pid>/fd listing. This improves performance a bit on the linux kernel side. The SNI router does not use io-uring. At least not yet. The SNI router’s job is much smaller (it doesn’t even do a TLS handshake), much briefer (it almost immediately passes the file descriptor to tarweb), and involves much less concurrency (because the connections are so short lived as far as it’s concerned), so it may not be worth it. In normal use the SNI router only needs these syscalls per connection: an accept() for the new connection, a read() of a few hundred bytes of ClientHello, a sendmsg() of the same size to pass it on, and a close() to forget the file descriptor.
At the risk of going off on an unrelated tangent, HTTP/3 (QUIC-based) has an interesting way of telling a client to “go over there”. A built-in load balancer inside the protocol, you could say, sparing the load balancer from needing to proxy everything. This opens up opportunities to steer not just on SNI, and is much more flexible than DNS, all without needing the “proxy” to be inline. E.g. say a browser is in Sweden, and you have servers in Norway and Italy. And say you have measured, and find that it would be best if the browser connected to your Norway server. But due to peering agreements and other fun stuff, Italy will be preferred on any BGP anycasted address. You then have a few possible options, and I do mean they’re all possible:
- Have the browser connect to a Norway-specific hostname, with Norway-specific IP addresses. Not great. People will start bookmarking these URLs, and what happens when you move your Norway servers to Denmark? The “Norway” hostname now goes to servers in Denmark?
- Use DNS based load balancing, giving Swedish browsers the Norway unicast IPs. Yes… but this is WAY more work than you probably think. And WAY less reliable at giving the best experience for the long tail. And sometimes your most important customer is in that long tail.
- Try to traffic engineer the whole Internet with BGP announcement tweaks. Good luck with that, for the medium to long tail.
- Install servers in Sweden, and any other place you may have users. Then you can anycast your addresses from there, and have full control of how you proxy (or packet by packet traffic engineer over tunnels) them. Expensive if you have many locations you need to do this in. Some traffic will still go to the wrong anycast entry point, but pretty feasible though expensive.
The two DNS-based ones also have the valid concern that screwing up DNS can have bad consequences. If you can leave DNS alone that’s better. Back to HTTP/3. If you’ve set up HTTP/3 it may be because you care about latency. It’s then easier to act on information you have about every single connection. On an individual connection basis you can tell the browser in Sweden that it should now talk to the servers in Norway. All without DNS or anycast. Which is nice, because running a webserver is hard enough. Also running a dynamic DNS service or anycast has even more opportunities to blow up fantastically. I should add that HTTP/3 doesn’t have the “running out of file descriptors” problem. Being based on UDP you can run your entire service with just a single file descriptor. Connections are identified by IDs, not 5-tuples. So why didn’t I just use HTTP/3?
- HTTP/3 is complex. You can build a weird io-uring kTLS based webserver on a weekend, and control everything (except TLS handshakes). Implementing HTTP/3 from scratch, and controlling everything, is a different beast.
- HTTP/1 needs to still work. Not all clients support HTTP/3, and HTTP/1 or 2 is even used to bootstrap HTTP/3 via the Alt-Svc header.
- Preferred address in HTTP/3 is just a suggestion. Browsers don’t have to actually move.
What about ESNI and ECH? No support for that (yet). From some skimming ESNI should “just work”, with just a minor decryption operation in the SNI router. ECH seems harder. It should still be doable, but the SNI router will need to do the full handshake, or close to it. And after taking its routing decision it needs to transfer the encryption state to the backend, along with the file descriptor. This is not impossible, of course. It’s similar to how tarweb passes the TLS session keys to the kernel. But it likely does mean that the SNI router needs to have access to both the TLS session keys and maybe even the domain TLS private keys. But that’s a problem for another day. Having all bytes bounce on the SNI router triples the number of total file descriptors for the connection (one on the backend, then one each on the router for upstream and downstream). There are limits per process and system wide, and the more you have the more you need to juggle them in code. It also wastes CPU and RAM.
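For a sense of what snooping the SNI out of the first few hundred bytes involves, here is a rough Python sketch of the parsing, assuming the whole ClientHello arrives in the first read (real code, like the Rust implementation described above, has to handle records split across reads):

```python
# Rough sketch of pulling the SNI hostname out of a raw TLS ClientHello.
# Assumes the whole ClientHello arrived in one read; real code has to cope
# with short reads and records split across them.
import struct

def sni_from_client_hello(data: bytes):
    try:
        if data[0] != 0x16:                       # TLS record type 0x16 = handshake
            return None
        record_len = struct.unpack(">H", data[3:5])[0]
        hello = data[5:5 + record_len]
        if hello[0] != 0x01:                      # handshake type 0x01 = ClientHello
            return None
        pos = 4                                   # skip handshake type + 3-byte length
        pos += 2 + 32                             # client_version + random
        pos += 1 + hello[pos]                                   # session_id
        pos += 2 + struct.unpack(">H", hello[pos:pos + 2])[0]   # cipher_suites
        pos += 1 + hello[pos]                                   # compression_methods
        ext_end = pos + 2 + struct.unpack(">H", hello[pos:pos + 2])[0]
        pos += 2
        while pos + 4 <= ext_end:
            ext_type, ext_len = struct.unpack(">HH", hello[pos:pos + 4])
            pos += 4
            if ext_type == 0:                     # extension 0 = server_name (SNI)
                # 2-byte list length, 1-byte name type, 2-byte name length, then the name
                name_len = struct.unpack(">H", hello[pos + 3:pos + 5])[0]
                return hello[pos + 5:pos + 5 + name_len].decode("ascii", "replace")
            pos += ext_len
    except (IndexError, struct.error):
        pass
    return None
```

Everything parsed here is plaintext, which is why the router can take its routing decision without holding any keys or certificates (ECH, discussed above, is the exception).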
A few closing notes: tarweb was first written in C++ and later rewritten in Rust, using io-uring. Livecount keeps a long-lived connection per open browser tab. You can redirect with Javascript, but that still has the same problem. I passed file descriptors between processes in injcode, but it was only ever a proof of concept that only worked on 32bit x86, and the code doesn’t look like it actually does it? Anyway I can’t expect to remember code from 17 years ago.

0 views
Hugo 2 weeks ago

SEO failure

Well, this is embarrassing. I just realized the SEO for hakanai.io has been completely broken for months. SEO is theoretically my sole and unique acquisition channel for the SaaS. But:
- 10 months after purchasing the domain name, I still have a mediocre domain rating: 14, compared to 36 for Bloggrify and 29 for my own blog.
- If I search for the name hakanai, I don't even find hakanai.io in the search results.
- 400 unique visitors per month when I was getting double that at the beginning of the year. That's a 50% drop starting in June - not exactly the growth curve you want to see.
Not great... I just discovered that a large part of my pages have been deindexed. I don't know why. I wonder if Google at some point considered the site fraudulent, perhaps because of the pSEO (programmatic SEO) that I had implemented with the Blog Starter Kit (blog-starter-kit.hakanai.io). It also seems to correspond to the moment I moved the documentation, the blog, and the blog starter kit to subdomains. I made this choice because it was technically simpler and many sources online seemed to say that subdomains and subdirectories were equivalent for SEO. There are quite a few conflicting opinions on this. What is certain is that the current result leans more in favor of subdirectories... Anyway, I'm trying to fix it urgently. I've moved the blog and documentation back to subdirectories. And I set the entire blog starter kit to noindex to see if that changes anything. That was a week ago. For now, no result. It's a bit worrying. Let's see... If you've dealt with Google deindexing issues before, I'm all ears. And if not, well, you'll get to watch me figure it out in real-time. Maybe :)

0 views
Manuel Moreale 2 weeks ago

Romina Malta

This week on the People and Blogs series we have an interview with Romina Malta, whose blog can be found at romi.link . Tired of RSS? Read this in your browser or sign up for the newsletter . The People and Blogs series is supported by Piet Terheyden and the other 122 members of my "One a Month" club. If you enjoy P&B, consider becoming one for as little as 1 dollar a month. I’m Romina Malta, a graphic artist and designer from Buenos Aires. Design found me out of necessity: I started with small commissions and learned everything by doing. What began as a practical skill became a way of thinking and a way to connect the things I enjoy: image, sound, and structure. Over time, I developed a practice with a very specific and recognizable imprint, working across music, art, and technology. I take on creative direction and design projects for artists, record labels, and cultural spaces, often focusing on visual identity, books, and printed matter. I also run door.link , a personal platform where I publish mixtapes. It grew naturally from my habit of spending time digging for music… searching, buying, and finding sounds that stay with me. The site became a way to archive that process and to share what I discover. Outside of my profession, I like traveling, writing, and spending long stretches of time alone at home. That’s usually when I can think clearly and start new ideas. The journal began as a way to write freely, to give shape to thoughts that didn’t belong to my design work or to social media. I wanted a slower space where things could stay in progress, where I could think through writing. I learned to read and write unusually early, with a strange speed, in a family that was almost illiterate, which still makes it more striking to me. I didn’t like going to school, but I loved going to the library. I used to borrow poetry books, the Bible, short novels, anything I could find. Every reading was a reason to write, because reading meant getting to know the world through words. That was me then, always somewhere between reading and writing. Over the years that habit never left. A long time ago I wrote on Blogger, then on Tumblr, and later through my previous websites. Each version reflected a different moment in my life, different interests, tones, and ways of sharing. The format kept changing, but the reason stayed the same: I’ve always needed to write things down, to keep a trace of what’s happening inside and around me. For me, every design process involves a writing process. Designing leads me to write, and writing often leads me back to design. The journal became the space where those two practices overlap, where I can translate visual ideas into words and words into form. Sometimes the texts carry emotion; other times they lean toward a kind of necessary dramatism. I like words, alone, together, read backwards. I like letters too; I think of them as visual units. The world inside my mind is a constant conversation, and the journal is where a part of that dialogue finds form. There’s no plan behind it. It grows slowly, almost unnoticed, changing with whatever I’m living or thinking about. Some months I write often, other times I don’t open it for weeks. But it’s always there, a reminder that part of my work happens quietly, and that sometimes the most meaningful things appear when nothing seems to be happening. Writing usually begins with something small, a sentence I hear, a word that stays, or an image I can’t stop thinking about. I write when something insists on being written. 
There is no plan or schedule; it happens when I have enough silence to listen. I don’t do research, but I read constantly. Reading moves the language inside me. It changes how I think, how I describe, how I look at things. Sometimes reading becomes a direct path to writing, as if one text opened the door to another. I love writing on the computer. The rhythm of typing helps me find the right tempo for my thoughts. I like watching the words appear on the screen, one after another, almost mechanically. It makes me feel that something is taking shape outside of me. When I travel, I often write at night in hotels. The neutral space, the different air, the sound of another city outside the window, all create a certain kind of attention that I can’t find at home. The distance, in some way, sharpens how I think. Sometimes I stop in the middle of a sentence and return to it days later. Other times I finish in one sitting and never touch it again. It depends on how it feels. Writing is less about the result and more about the moment when the thought becomes clear. You know, writing and design are part of the same process. Both are ways of organizing what’s invisible, of trying to give form to something I can barely define. Designing teaches me how to see, and writing teaches me how to listen. Yes, space definitely influences how I work. I notice it every time I travel. Writing in hotels, for example, changes how I think. There’s something about being in a neutral room, surrounded by objects that aren’t mine, that makes me more observant. I pay attention differently. At home I’m more methodical. I like having a desk, a comfortable chair, and a bit of quiet. I usually work at night or very early in the morning, when everything feels suspended. I don’t need much: my laptop, a notebook, paper, pencils around. Light is important to me. I prefer dim light, sometimes just a lamp, enough to see but not enough to distract. Music helps too, especially repetitive sounds that make time stretch. I think physical space shapes how attention flows. Sometimes I need stillness, sometimes I need movement. A familiar room can hold me steady, while an unfamiliar one can open something unexpected. Both are necessary. The site is built on Cargo, which I’ve been using for a few years. I like how direct it feels… It allows me to design by instinct, adjusting elements visually instead of through code. For the first time, I’m writing directly on a page, one text over another, almost like layering words in a notebook. It’s a quiet process. Eventually I might return to using a service that helps readers follow and archive new posts more easily, but for now I enjoy this way. I don’t think I would change much. The formats have changed, the platforms too, but the impulse behind it is the same. Writing online has always been a way to think in public. Maybe I’d make it even simpler. I like when a website feels close to a personal notebook… imperfect, direct, and a bit confusing at times. The older I get, the more I value that kind of simplicity. If anything, I’d try to document more consistently. Over the years I’ve lost entire archives of texts and images because of platform changes or broken links. Now I pay more attention to preserving what I make, both online and offline. Other than that, I’d still keep it small and independent. It costs very little. Just the domain, hosting, and the time it takes to keep it alive. I don’t see it as a cost but as part of the work, like having a studio, or paper, or ink. 
It’s where things begin before they become something else. I’ve never tried to monetise the blog. It doesn’t feel like the right space for that. romi.link/journal exists outside of that logic; it’s not meant to sell or promote anything. It’s more like an open notebook, a record of thought. That said, I understand why people monetise their blogs. Writing takes time and energy, and it’s fair to want to sustain it. I’ve supported other writers through subscriptions or by buying their publications, and I think that’s the best way to do it, directly, without the noise of algorithms or ads. I’ve been reading Fair Companies for a while now. Not necessarily because I agree with everything, of course, but because it’s refreshing to find other points of view. I like when a site feels personal, when you can sense that someone is genuinely curious. Probably Nicolas Boullosa. Hm… not much. Lately I’ve been thinking about how fragile the internet feels. Everything moves too quickly, and yet most of what we publish disappears almost instantly. Keeping a personal site today feels like keeping a diary in public: it’s small, quiet, and mostly unseen, but it resists the speed of everything else. I find comfort in that slowness. Now that you're done reading the interview, go check the blog. If you're looking for more content, go read one of the previous 112 interviews. Make sure to also say thank you to Jim Mitchell and the other 122 supporters for making this series possible.

1 view
iDiallo 2 weeks ago

The TikTok Model is the Future of the Web

I hate to say it, but when I wake up in the morning, the very first thing I do is check my phone. First I turn off my alarm, I've made it a habit to wake up before it goes off. Then I scroll through a handful of websites. Yahoo Finance first, because the market is crazy. Hacker News, where I skim titles to see if AWS suffered an outage while I was sleeping. And then I put my phone down before I'm tempted to check my Twitter feed. I've managed to stay away from TikTok, but the TikTok model is finding its way to every user's phone whether we like it or not. On TikTok, you don't surf the web. You don't think of an idea and then research it. Instead, based entirely on your activity in the app, their proprietary algorithm decides what content will best suit you. For their users, this is the best thing since sliced bread. For the tech world, this is the best way to influence your users. Now, the TikTok model is no longer reserved for TikTok, but has spread to all social media. What worries me is that it's also going to infect the entire World Wide Web. Imagine this for a second: You open your web browser. Instead of a search bar or a list of bookmarks, you're greeted by an endless, vertically scrolling stream of content. Short videos, news snippets, product listings, and interactive demos. You don't type anything, you just swipe what you don't like and tap what you do. The algorithm learns, and soon it feels like the web is reading your mind. You're served exactly what you didn't know you wanted. Everything is effortless, because the content you see feels like something you would have searched for yourself. With AI integrations like Google's Gemini being baked directly into the browser, this TikTok-ification of the entire web is the logical next step. We're shifting from a model of surfing the web to one where the web is served to us. This looks like peak convenience. If these algorithms can figure out what you want to consume without you having to search for it, what's the big deal? The web is full of noise, and any tool that can cut through the clutter and help surface the gems should be a powerful discovery tool. But the reality doesn't entirely work this way. There's something that always gets in the way: incentives. More accurately, company incentives. When I log into my Yahoo Mail (yes, I still have one), the first bolded email on top isn't actually an email. It's an ad disguised as an email. When I open the Chrome browser, I'm presented with "Sponsored content" I might be interested in. Note that Google Discover is supposed to be the ultimate tool for discovering content, but their incentives are clear: they're showing you sponsored content first. The model for content that's directly served to you is designed to get you addicted. It isn't designed for education or fulfillment; it's optimized for engagement. The goal is to provide small, constant dopamine hits, keeping you in a state of perpetual consumption without ever feeling finished. It's browsing as a slot machine, not a library. What happens when we all consume a unique, algorithmically-generated web? We lose our shared cultural space. After the last episode of Breaking Bad aired, I texted my coworkers: "Speechless." The reply was, "Best TV show in history." We didn't need more context to understand what we were all talking about. With personalized content, this shared culture is vanishing. The core problem isn't algorithmic curation itself, but who it serves. 
The algorithms are designed to benefit the company that made them, not the user. And as the laws of "enshittification" dictate, any platform that locks in its users will eventually turn the screws, making the algorithm worse for you to better serve its advertisers or bottom line . Algorithmic solutions often fix problems that shouldn't exist in the first place. Think about your email. The idea of "algorithmically sorted email" only makes sense if your inbox is flooded with spam, newsletters you never wanted, and automated notifications. You need a powerful AI to find the real human messages buried in the noise. But here's the trick: your email shouldn't be flooded with that junk to begin with. If we had better norms, stricter regulations, and more respectful systems, your inbox would contain only meaningful correspondence. In that world, you wouldn't want an algorithm deciding what's important. You'd just read your emails. The same is true for the web. The "noise" the TikTok model promises to solve, the SEO spam, the clickbait, the low-value content, is largely a product of an ad-driven attention economy. Instead of fixing that root problem, the algorithmic model just builds a new, even more captivating layer on top of it. It doesn't clean up the web; it just gives you a more personalized and addictive filter bubble to live inside. The TikTok model of the web is convenient, addictive, and increasingly inevitable. But it's not the only future. It's the path of least resistance for platforms seeking growth and engagement at all costs . There is an alternative, though. No, you don't have to demand more from these platforms. You don't have to vote for a politician. You don't even have to do much. The very first thing to do is remember your own agency. You are in control of the web you see and use. Change the default settings on your device. Delete the apps that are taking advantage of you. Use an ad blocker. If you find creators making things you like, look for ways to support them directly. Be the primary curator of your digital life. It requires some effort, of course. But it's worth it, because the alternative is letting someone else decide what you see, what you think about, and how you spend your time. The web can still be a tool for discovery and connection rather than a slot machine optimized for your attention. You just have to choose to make it that way.

1 view
Maurycy 2 weeks ago

You already have a git server:

If you have a git repository on a server with ssh access, you can just clone it with git clone over ssh. You can then work on it locally and push your changes back to the origin server. By default, git won’t let you push to the branch that is currently checked out, but this is easy to change by setting the receive.denyCurrentBranch option to updateInstead. This is a great way to sync code between multiple computers or to work on server-side files without laggy typing or manual copying. If you want to publish your code, just point your web server at the git repo… although you will have to run git update-server-info server-side to make it cloneable over plain HTTP. That’s a lot of work, so let’s set up a hook to do that automatically. Git hooks are just shell scripts (or really any executable), so they can do things like running a static site generator (see the sketch below). This is how I’ve been doing this blog for a while now: it’s very nice to be able to type up posts locally (no network lag), and then push them to the server and have the rest handled automatically. It’s also backed up by default: if the server breaks, I’ve still got the copy on my laptop, and if my laptop breaks, I can download everything from the server. Git’s version tracking also prevents accidental deletions, and if something breaks, it’s easy to figure out what caused it.
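The hook can be any executable dropped into the repository's hooks directory, not only a shell script. Purely as an illustration, here is a minimal post-receive hook written as a Python script; the repository path, work tree, and the make command standing in for a static site generator are hypothetical:

```python
#!/usr/bin/env python3
# Sketch of a post-receive hook written in Python. Git runs any executable at
# hooks/post-receive after a push; it does not have to be a shell script.
# The paths and the final build command are hypothetical placeholders.
import subprocess

BARE_REPO = "/srv/git/blog.git"   # hypothetical bare repository being pushed to
WORK_TREE = "/srv/blog-src"       # hypothetical checkout used as the generator's input

def run(*cmd, cwd=None):
    # A failing step aborts the hook with a non-zero exit and a clear error.
    subprocess.run(cmd, cwd=cwd, check=True)

# 1. Refresh info/refs etc. so the repo stays cloneable over plain HTTP.
run("git", "update-server-info", cwd=BARE_REPO)

# 2. Check the pushed content out into a working directory.
run("git", "--git-dir", BARE_REPO, "--work-tree", WORK_TREE, "checkout", "-f")

# 3. Rebuild the site; "make" stands in for whatever static site generator you use.
run("make", "-C", WORK_TREE)
```

Make the file executable and git will run it after every push, keeping the repo cloneable over plain HTTP and rebuilding the site with no manual steps.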

0 views