Posts in Web-development (20 found)

Domains as "Internet Handles"

A little while ago I came across a post by Dan Abramov, a name that until then didn’t ring a bell, but who appears to be a former Meta employee and member of the React core team. The post links to a website made by Abramov that addresses the issues of how, quote, every time you sign up for a new social app, you have to rush to claim your username, how, quote, if someone else got there first, too bad and how, quote, that username only works on that one app anyway. The website goes on:

This is silly. The internet has already solved this problem. There already exists a kind of handle that works anywhere on the internet—it’s called a domain. A domain is a name you can own on the internet, like or . Most creators on the internet today don’t own a domain. Why not? Until recently, you could only use a domain for a website or custom email. But personal websites have mostly fallen out of fashion, and each social app sports its own kind of handles. However, open social apps are starting to change that. These apps let you use any internet domain you own as a handle.

Abramov highlights a familiar pain point: On every new platform, users must scramble to secure their preferred username, often discovering it was taken years ago. Domains, he suggests, solve this by offering a globally unique namespace. However, this solution introduces an even greater scarcity problem, amongst other more important issues.

Short, meaningful domain names have been scarce for decades. Most desirable combinations of common words, short names, or initials were claimed long before modern social platforms even existed. For example, just like our author, I, too, would have loved to use or as my handle on e.g. Bluesky. Sadly, however, I’m more than two decades late for that, as the former seemingly belongs to a Russian company, and the latter to a namesake somewhere in Bavaria, Germany. Domain marketplaces and registries still list alternatives, but these often come with premium or recurring fees far exceeding what the average user is willing to pay. When platforms require domains as identity tokens, a user whose preferred domain is unavailable loses access to that identity everywhere, not just on a single platform. Unlike usernames, which can often be adapted with simple variations (e.g. adding punctuation), domains offer no such flexibility. TLD constraints mean that once a desirable domain is taken, there may be no practical semantic alternative. Domain scarcity does not solve the “handle availability” problem; it instead exacerbates it by moving contention from individual platforms to the internet’s global naming infrastructure.

Usernames exist within individual platforms and their loss, while inconvenient, usually has contained consequences. Losing a username typically means losing access to a single isolated data silo (platform). Domains, by contrast, are subject to a multilayered hierarchy of control involving domain registrars, TLD operators, ICANN-affiliated registries and the DNS root zone. By using a domain as a cross-platform handle, users tie their entire online identity to this centralized, multi-stakeholder governance structure. Misconduct, even just alleged, on one platform could result in escalations to a registrar or registry, potentially leading to domain suspension. A suspended domain invalidates not just a handle on one platform, but an entire online identity across all services using that identifier. The risks extend beyond platform moderation.
A compromised mailbox, a malware incident on a web server, or an automated threat-intelligence flag from entities such as the internet’s favorite bully Spamhaus can lead to domain suspension. In such scenarios, users may face lengthy appeals processes involving opaque third-party entities that wield far more power than a typical platform operator. Domains were designed for hosting services, not for acting as the cornerstone of individual identity. Using them as universal handles places disproportionate power in the hands of infrastructure operators who were never intended to serve as arbiters of personal identity.

If you’re a long-time reader of this website, you probably already knew that privacy must come up at some point. Well, here it is: Traditional username-based systems allow users to separate their personal identity from their public persona. After all, not everyone might want others to know about their activity in the Taylor Swift forum of FanForum.com, and that’s fine. Domains, however, increasingly erode this layer of privacy. While privacy-respecting domain registrars still exist, the mainstream domain ecosystem overwhelmingly encourages or requires KYC, traceable payment methods and paid WHOIS privacy services to maintain the illusion of privacy. Most users will register domains using a credit card or similar traceable payment method through large commercial registrars. Even if WHOIS privacy is enabled, metadata leakage and billing records remain. In the context of social identities, this creates an environment where domain-based handles can be correlated with real-world identities far more easily than pseudonymous usernames. A user posting under a domain such as time-to-get-swifty.com could find their identity exposed not through any platform breach, but simply through the structural nature of domain registration.

Usernames are free. Domains are not. Even the cheapest domains incur recurring costs. More desirable, short, memorable, or branded names often command high premiums or elevated renewal fees. While this financial burden may appear negligible to, let’s say, former well-paid Meta employees who consider their online presence a professional asset, the majority of internet users do not attach the same value to domain ownership. For many, especially outside tech-centric circles, the ROI of maintaining a personal domain is negligible or non-existent. A farmer participating in an agricultural forum is unlikely to find value in purchasing and renewing a domain like solely to participate in an online community. Any identity system that introduces ongoing financial requirements creates unfair barriers to participation and risks entrenching socioeconomic inequality in digital spaces.

Abramov’s argument positions domains as a universal, user-controlled solution to fragmented identity systems. While his vision aligns with broader goals of data portability and user autonomy, domains introduce significant drawbacks that usernames do not suffer from: Greater scarcity and reduced availability, centralized infrastructure vulnerabilities and governance risks, reduced privacy and increased traceability, and recurring financial burdens for users. With statements like “You don’t have to squat handles anymore.
Own a domain, and you can log into any open social app”, the author makes it sound like domain names are less exclusive than simple usernames, when it’s clearly the other way around, and they fail to recognize that squatting is a far worse issue for domains than it is for simple usernames. Moreover, the reliance on conventional DNS infrastructure undermines the self-sovereignty that decentralized identifier systems aspire to. Without a complementary decentralized naming layer (e.g. Handshake) domain-based identities merely exchange one set of constraints and issues for another (vastly more dangerous and impactful) one.

For these reasons, users and platform developers should think carefully before adopting domains as universal “internet handles”. Usernames, for all their imperfections, remain simpler, safer, more private, and more equitable for everyday identity on the web, at least until the truly decentralized future is here. While one might say that the handle is merely a representation of the underlying decentralized ID, a loss of the domain will nevertheless come with functional implications across every service that uses it. Luckily, platforms that implement domain handles continue to offer accounts under their own domains for the time being, so that at least for uninformed users nothing really changes (on the surface).

Note: I have an account on a platform that supports domain handles and I am using the feature in order to be able to make informed statements. The account is, however, nothing that is crucial to my existence on the internet. If my domain should spontaneously combust, that account would be the least of my worries. Instead, I’d be more troubled about this site and its related services, which is why I have a fallback domain.

While I’m sure the author of internethandle.org didn’t intend to, some statements on the website “sound” somewhat out of touch, or at the very least tone-deaf, e.g.:

Most creators on the internet today don’t own a domain. Why not? Until recently, you could only use a domain for a website or custom email. But personal websites have mostly fallen out of fashion […]

Dan, personal websites haven’t fallen out of fashion, but have suffered under a World Wide Web altered (dare I say destroyed?) by the very companies you supported building as part of your previous roles and, to some extent, as part of the technologies you’re working with. Just because you, and the people you surround yourself with, seemingly don’t care about the small web, it doesn’t mean it has fallen out of fashion; if anything, personal websites are gaining popularity and are the weapon of choice against the enshittification of the web by companies like Meta and others.

1 view
iDiallo 3 days ago

Let users zoom in on mobile devices

This is a bit of a rant. Maybe my eyes are not as good as they used to be. When I read an article that has pictures in it, I like to zoom in to see the details. You might think this makes no sense: I just have to pinch the screen to zoom in. You would be right, but some websites intentionally prevent you from zooming in.

Here is an example, the straw that broke the camel's back, so to speak. I was reading an interesting article on Substack about kids who ran away in the 60s, and it has these pictures of letters from those kids. Handwritten letters that complement the story and that I really wanted to read. But have you tried reading text from a picture in an article on a phone? Again, it could just be what happens when you spend 35 years in front of screens.

CSS alone is not enough to properly turn a page responsive on a mobile device. The browser needs to know how we want to size the viewport properly. For that we have a viewport meta tag that gives the browser a hint on how to size the page. Since we started making pages responsive yesteryear, I've relied on a single configuration and have rarely ever found a reason to change it: The width is set to the current device's width, mobile or desktop, it doesn't matter. The initial-scale is set to 1. The documentation is a bit confusing; I consider the scale to just be the initial zoom level. That's really all you need to know about the viewport if you are building a webpage and want to make it display properly on a mobile device.

But of course, the article I'm complaining about has different settings. Here is what they have: The properties I'm complaining about are user-scalable and maximum-scale. The first one says users can't zoom in, period. Why would you prevent users from zooming in? This is such a terrible setting that you can configure your browser to ignore it. But for good measure, they added maximum-scale=1, which means even if you are allowed to zoom, the maximum zoom level is one... which means you can't zoom.

Yes, I disabled zoom to make a point

It's a terrible experience all the way around. When I read articles that have pictures, I can't zoom in! I can't properly look at the pictures. There are a few platforms that I've noticed have these settings. Substack and Medium are the most annoying. Now, when I know an article is from those platforms, I just ignore them. The only time you ever need to prevent users from zooming is if it's a web game. Other than that, it's just plain annoying.
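In markup, the sane default described above and the zoom-blocking variant being complained about look roughly like this (the exact attribute order and values on any given site may differ):

```html
<!-- The sensible default: fit the device width, start at 1x zoom,
     and let readers pinch-zoom as much as they like. -->
<meta name="viewport" content="width=device-width, initial-scale=1">

<!-- The zoom-blocking variant: user-scalable=no forbids zooming,
     and maximum-scale=1 caps it at 1x even where that is ignored. -->
<meta name="viewport"
      content="width=device-width, initial-scale=1, maximum-scale=1, user-scalable=no">
```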

0 views

Discovering the indieweb with calm tech

When social media first entered my life, it came with a promise of connection. Facebook connected college-aged adults in a way that was previously impossible, helping to shape our digital generation. Social media was our super-power and we wielded it to great effect. Yet social media today is a noisy, needy, mental health hazard. They push distracting notifications, constantly begging us to “like and subscribe”, and trying to trap us in endless scrolling. They have become sirens that lure us into their ad-infested shores with their saccharine promise of dopamine. How can we defeat these monsters that have invaded deep into our world, while still staying connected? A couple weeks ago I stumbled into a great browser extension, StreetPass for Mastodon . The creator, tvler , built it to help people find each other on Mastodon. StreetPass autodiscovers Mastodon verification links as you browse the web, building a collection of Mastodon accounts from the blogs and personal websites you’ve encountered. StreetPass is a beautiful example of calm technology . When StreetPass finds Mastodon profiles it doesn’t draw your attention with a notification, it quietly adds the profile to a list, knowing you’ll check in when you’re ready. StreetPass recognizes that there’s no need for an immediate call to action. Instead it allows the user to focus on their browsing, enriching their experience in the background. The user engages with StreetPass when they are ready, and on their own terms. StreetPass is open source and available for Firefox , Chrome , and Safari . Inspired by StreetPass, I applied this technique to RSS feed discovery. Blog Quest is a web browser extension that helps you discover and subscribe to blogs. Blog Quest checks each page for auto-discoverable RSS and Atom feeds (using links) and quietly collects them in the background. When you’re ready to explore the collected feeds, open the extension’s drop-down window. The extension integrates with several feed readers, making subscription management nearly effortless. Blog Quest is available for both Firefox and Chrome . The project is open source and I encourage you to build your own variants. I reject the dead Internet theory: I see a vibrant Internet full of humans sharing their experiences and seeking connection. Degradation of the engagement-driven web is well underway, accelerated by AI slop. But the independent web works on a different incentive structure and is resistant to this effect. Humans inherently create, connect, and share: we always have and we always will. If you choose software that works in your interest you’ll find that it’s possible to make meaningful online connections without mental hazard. Check out StreetPass and Blog Quest to discover a decentralized, independent Internet that puts you in control. Edward Armitage: The Siren (1888)
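Feed autodiscovery relies on standard link elements in a page's head; a typical pair that an extension like Blog Quest would pick up looks something like this (paths and titles are placeholders):

```html
<head>
  <!-- Feed readers and extensions look for these rel="alternate"
       links to discover a site's RSS and Atom feeds. -->
  <link rel="alternate" type="application/rss+xml"
        title="Example Blog (RSS)" href="/feed.xml">
  <link rel="alternate" type="application/atom+xml"
        title="Example Blog (Atom)" href="/atom.xml">
</head>
```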

0 views
Manuel Moreale 5 days ago

Come on John

For all I know, John O'Nolan is a cool dude. He’s the founder of Ghost , a project that is also really cool. You know what’s also cool? RSS. And guess what, John just announced he’s working on a new RSS app (Reader? Tool? Service?) called Alcove and he blogged about it . All this is nice. All this is cool. The more people build tools and services for the open web, the better. Having said all that though, John: If you want to follow along with this questionable side project of undefined scope, I'm sharing live updates of progress on Twitter, here. You are on your own blog, your own corner of the web, powered by the platform you’re the CEO of, a blog that also serves content via RSS, the thing you’re building a tool for, and you’re telling people to follow the progress on fucking Twitter? Come on John. Thank you for keeping RSS alive. You're awesome. Email me :: Sign my guestbook :: Support for 1$/month :: See my generous supporters :: Subscribe to People and Blogs

66 views

The what, how, and why of CSS clamp()

It’s Blogvent, day 4, where I blog daily in December! CSS clamp() is cool and you should use it. In a sentence, clamp() lets you assign a value to a CSS property between a minimum and a maximum range, and uses a preferred value in that range. It’s really helpful for responsive layouts and typography! clamp() has three parameters, in order: a minimum value, a preferred value, and a maximum value.

You can assign it to a property, like so: The column width here is always between 200px and 400px wide, and defaults to 40% of its container width. If that 40% is less than 200px, the width will be 200px. Similarly, if that 40% is more than 400px, the width will be 400px. Or, another example: The font size here is always between 16px and 24px, and defaults to 4% of the screen’s width. If a screen is 1000px wide, that means the font size would be 40px if it were that exact 4%, but with this function, it is capped at 24px.

It’s shorter! Honestly, that’s why. You can accomplish a lot with a single line of clamp() (that is arguably easier to maintain) than with a set of media queries. It reduces reliance on multiple rules and functions. A typical media query approach for a column width might be: But with clamp(), you could do: This is way shorter and, I would argue, easier to read and maintain! CSS clamp() is widely supported, so you can safely use it across your apps and websites.

If you’d like to learn more, here are some handy links for ya: the Clamp Calculator, the CSS clamp() documentation, and the CSS Tricks Almanac. Until next time!
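The declarations behind those numbers would look roughly like this; the selectors and breakpoints are made up, but the clamp() values match the ranges described above:

```css
/* Column width: between 200px and 400px, preferring 40% of the container. */
.column {
  width: clamp(200px, 40%, 400px);
}

/* Font size: between 16px and 24px, preferring 4% of the viewport width. */
p {
  font-size: clamp(16px, 4vw, 24px);
}

/* The media-query version of the column rule, for comparison. */
.column-mq {
  width: 40%;
}
@media (max-width: 500px) {
  .column-mq { width: 200px; }
}
@media (min-width: 1000px) {
  .column-mq { width: 400px; }
}
```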

0 views
Rob Zolkos 1 week ago

Vanilla CSS is all you need

Back in April 2024, Jason Zimdars from 37signals published a post about modern CSS patterns in Campfire . He explained how their team builds sophisticated web applications using nothing but vanilla CSS. No Sass. No PostCSS. No build tools. The post stuck with me. Over the past year and a half, 37signals has released two more products (Writebook and Fizzy) built on the same nobuild philosophy. I wanted to know if these patterns held up. Had they evolved? I cracked open the source code for Campfire, Writebook, and Fizzy and traced the evolution of their CSS architecture. What started as curiosity became genuine surprise. These are not just consistent patterns. They are improving patterns. Each release builds on the last, adopting progressively more modern CSS features while maintaining the same nobuild philosophy. These are not hobby projects. Campfire is a real-time chat application. Writebook is a publishing platform. Fizzy is a full-featured project management tool with kanban boards, drag-and-drop, and complex state management. Combined, they represent nearly 14,000 lines of CSS across 105 files. Not a single line touches a build tool. Let me be clear: there is nothing wrong with Tailwind . It is a fantastic tool that helps developers ship products faster. The utility-first approach is pragmatic, especially for teams that struggle with CSS architecture decisions. But somewhere along the way, utility-first became the only answer. CSS has evolved dramatically. The language that once required preprocessors for variables and nesting now has: 37signals looked at this landscape and made a bet: modern CSS is powerful enough. No build step required. Three products later, that bet is paying off. Open any of these three codebases and you find the same flat structure: That is it. No subdirectories. No partials. No complex import trees. One file per concept, named exactly what it does. Zero configuration. Zero build time. Zero waiting. I would love to see something like this ship with new Rails applications. A simple starting structure with , , , and already in place. I suspect many developers reach for Tailwind not because they prefer utility classes, but because vanilla CSS offers no starting point. No buckets. No conventions. Maybe CSS needs its own omakase. Jason’s original post explained OKLCH well. It is the perceptually uniform color space all three apps use. The short version: unlike RGB or HSL, OKLCH’s lightness value actually corresponds to perceived brightness. A 50% lightness blue looks as bright as a 50% lightness yellow. What is worth noting is how this foundation remains identical across all three apps: Dark mode becomes trivial: Every color that references these primitives automatically updates. No duplication. No separate dark theme file. One media query, and the entire application transforms. Fizzy takes this further with : One color in, four harmonious colors out. Change the card color via JavaScript ( ), and the entire card theme updates automatically. No class swapping. No style recalculation. Just CSS doing what CSS does best. Here is a pattern I did not expect: all three applications use units for horizontal spacing. Why characters? Because spacing should relate to content. A gap between words feels natural because it is literally the width of a character. As font size scales, spacing scales proportionally. 
This also makes their responsive breakpoints unexpectedly elegant: Instead of asking “is this a tablet?”, they are asking “is there room for 100 characters of content?” It is semantic. It is content-driven. It works. Let me address the elephant in the room. These applications absolutely use utility classes: The difference? These utilities are additive , not foundational. The core styling lives in semantic component classes. Utilities handle the exceptions: the one-off layout adjustment, the conditional visibility toggle. Compare to a typical Tailwind component: And the 37signals equivalent: Yes, it is more CSS. But consider what you gain: If there is one CSS feature that changes everything, it is . For decades, you needed JavaScript to style parents based on children. No more. Writebook uses it for a sidebar toggle with no JavaScript: Fizzy uses it for kanban column layouts: Campfire uses it for intelligent button styling: This is CSS doing what you used to need JavaScript for. State management. Conditional rendering. Parent selection. All declarative. All in stylesheets. What fascinated me most was watching the architecture evolve across releases. Campfire (first release) established the foundation: Writebook (second release) added modern capabilities: Fizzy (third release) went all-in on modern CSS: You can see a team learning, experimenting, and shipping progressively more sophisticated CSS with each product. By Fizzy, they are using features many developers do not even know exist. CSS Layers solve the specificity wars that have plagued CSS since the beginning. It does not matter what order your files load. It does not matter how many classes you chain. Layers determine the winner, period. One technique appears in all three applications that deserves special attention. Their loading spinners use no images, no SVGs, no JavaScript. Just CSS masks. Here is the actual implementation from Fizzy’s : The keyframes live in a separate file: Three dots, bouncing in sequence: The means it automatically inherits the text color. Works in any context, any theme, any color scheme. Zero additional assets. Pure CSS creativity. The default browser element renders as a yellow highlighter. It works, but it is not particularly elegant. Fizzy takes a different approach for search result highlighting: drawing a hand-drawn circle around matched terms. Here is the implementation from : The HTML structure is . The empty exists solely to provide two pseudo-elements ( and ) that draw the left and right halves of the circle. The technique uses asymmetric border-radius values to create an organic, hand-drawn appearance. The makes the circle semi-transparent against the background, switching to in dark mode for proper blending. Search results for: webhook No images. No SVGs. Just borders and border-radius creating the illusion of a hand-drawn circle. Fizzy and Writebook both animate HTML elements. This was notoriously difficult before. The secret is . Here is the actual implementation from Fizzy’s : The variable is defined globally as . Open Dialog This dialog animates in and out using pure CSS. The rule defines where the animation starts from when an element appears. Combined with , you can now transition between and . The modal smoothly scales and fades in. The backdrop fades independently. No JavaScript animation libraries. No manually toggling classes. The browser handles it. I am not suggesting you abandon your build tools tomorrow. But I am suggesting you reconsider your assumptions. 
You might not need Sass or PostCSS. Native CSS has variables, nesting, and . The features that needed polyfills are now baseline across browsers. You might not need Tailwind for every project. Especially if your team understands CSS well enough to build a small design system. While the industry sprints toward increasingly complex toolchains, 37signals is walking calmly in the other direction. Is this approach right for everyone? No. Large teams with varying CSS skill levels might benefit from Tailwind’s guardrails. But for many projects, their approach is a reminder that simpler can be better. Thanks to Jason Zimdars and the 37signals team for sharing their approach openly. All code examples in this post are taken from the Campfire, Writebook, and Fizzy source code. For Jason’s original deep-dive into Campfire’s CSS patterns, see Modern CSS Patterns and Techniques in Campfire . If you want to learn modern CSS, these three codebases are an exceptional classroom. Native custom properties (variables) Native nesting Container queries The selector (finally, a parent selector) CSS Layers for managing specificity for dynamic color manipulation , , for responsive sizing without media queries HTML stays readable. tells you what something is, not how it looks. Changes cascade. Update once, every button updates. Variants compose. Add without redefining every property. Media queries live with components. Dark mode, hover states, and responsive behavior are co-located with the component they affect. OKLCH colors Custom properties for everything Character-based spacing Flat file organization View Transitions API for smooth page changes Container queries for component-level responsiveness for entrance animations CSS Layers ( ) for managing specificity for dynamic color derivation Complex chains replacing JavaScript state
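The post's actual stylesheets aren't reproduced above, but two of the patterns it describes are easy to sketch. First, the OKLCH-primitives-plus-one-media-query approach to dark mode might look roughly like this (variable names and values here are made up, not the real 37signals ones):

```css
:root {
  /* Raw OKLCH components as primitives (hypothetical values). */
  --lch-ink: 25% 0.02 260;
  --lch-paper: 98% 0.005 95;

  /* Semantic colors reference the primitives. */
  --color-ink: oklch(var(--lch-ink));
  --color-paper: oklch(var(--lch-paper));
}

/* One media query: flip the primitives and everything downstream updates. */
@media (prefers-color-scheme: dark) {
  :root {
    --lch-ink: 92% 0.01 260;
    --lch-paper: 20% 0.02 260;
  }
}

body {
  color: var(--color-ink);
  background-color: var(--color-paper);
}
```

Second, the pure-CSS dialog entrance and exit described above generally relies on @starting-style together with discrete transitions; a generic sketch (timings and values illustrative, not Fizzy's actual code) looks like this:

```css
dialog {
  opacity: 0;
  transform: scale(0.95);
  transition:
    opacity 0.2s,
    transform 0.2s,
    overlay 0.2s allow-discrete,  /* keep the dialog in the top layer while it fades */
    display 0.2s allow-discrete;  /* let display: none participate in the transition */
}

dialog[open] {
  opacity: 1;
  transform: scale(1);
}

/* Where the entrance transition starts on the frame the dialog appears. */
@starting-style {
  dialog[open] {
    opacity: 0;
    transform: scale(0.95);
  }
}

/* The backdrop fades independently. */
dialog::backdrop {
  background-color: rgb(0 0 0 / 0%);
  transition: background-color 0.2s, overlay 0.2s allow-discrete, display 0.2s allow-discrete;
}

dialog[open]::backdrop {
  background-color: rgb(0 0 0 / 40%);
}

@starting-style {
  dialog[open]::backdrop {
    background-color: rgb(0 0 0 / 0%);
  }
}
```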

0 views
iDiallo 1 week ago

Why my Redirect rules from 2013 still work and yours don't

Here is something that makes me proud of my blog. The redirect rule I wrote for my very first article 12 years ago still works!

This blog was an experiment. When I designed it, my intention was to try everything possible and not care if it broke. In fact, I often said that if anything broke, it would be an opportunity for me to face a new challenge and learn. I designed the website as best as I could, hoping that it would break so I could fix it. What I didn't take into account was that some things are much harder to fix than others. More specifically: URLs.

Originally, this was the format of the URL: You can blame Derek Sivers for that format. But then I thought, what if I wanted to add pages that weren't articles? It would be hard to differentiate a blog entry from anything else. So I switched to the more common blog format: Perfect. But should the month have a leading zero? I went with the leading zero. But then I introduced a bug: Yes, I squashed the leading zero from the months. This meant that there were now two distinct URLs that pointed to the same content, and Google doesn't like duplicate content in its search results.

Of course, that same year, I wrote an article that went super viral. Yes, my server crashed. But more importantly, people bookmarked and shared several articles from my blog everywhere. Once your links are shared, they become permanent. They may get an entry in the Wayback Machine, they will be shared in forums, someone will make a point and cite you as a source. I could no longer afford to change the URLs or break them in any way. If I fixed the leading zero bug now, one of the URLs would lead to a 404. I had to implement a more complex solution.

So in my .htaccess file, I added a new redirect rule that kept the leading zero intact and redirected all URLs with a missing zero back to the version with a leading zero. Problem solved. Note that my .htaccess was growing out of control, and there was always the temptation to edit it live.

When I write articles, sometimes I come up with a title, then later change my mind. For example, my most popular article was titled "Fired by a machine" (fired-by-a-machine). But a couple of days after writing it, I renamed it to "When the machine fired me" (when-the-machine-fired-me). Should the old URL remain intact despite the new title? Should the URL match the new title? What about the old URL? Should it lead to a 404 or redirect to the new one?

In 2014, after reading some Patrick McKenzie, I had this great idea of removing the month and year from the URL. This is what the URL would look like: Okay, no problem. All I needed was one more redirect rule. I don't like losing links, especially after Google indexes them. So my rule has always been to redirect old URLs to new ones and never lose anything.

But my .htaccess file was growing and becoming more complex. I'd also edited it multiple times on my server, and it was becoming hard to sync it with the different versions I had on different machines. So I ditched it. I created a new .conf file with all the redirect rules in place. This version is always committed to my repo and has been consistently updated since. When I deploy new code to my server, the conf file is included in my apache.conf and my rules remain persistent. And the redirectrules.conf file looks something like this:

I've rewritten my framework from scratch and gone through multiple designs. Whenever I look through my logs, I'm happy to see that links from 12 years ago are properly redirecting to their correct destinations.
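The actual contents of that file aren't shown, but an Apache config built around the rules described (zero-padding the month, later dropping the date segment, and carrying renamed slugs forward) might contain rules along these lines; the URL patterns and slugs are illustrative:

```apache
# Single-digit months -> zero-padded months (keeps old bookmarks working)
RedirectMatch 301 ^/blog/(\d{4})/(\d)/(.+)$ /blog/$1/0$2/$3

# 2014 format change: drop the year/month segment entirely
RedirectMatch 301 ^/blog/\d{4}/\d{2}/(.+)$ /blog/$1

# Renamed articles keep their old slugs working
Redirect 301 /blog/fired-by-a-machine /blog/when-the-machine-fired-me
```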
URLs are forever, but your infrastructure doesn't have to be fragile. The reason my redirect rules still work after more than a decade isn't because I got everything right the first time. I still don't get it right! But it's because I treated URL management as a first-class problem that deserved its own solution. Having a file living only on your server? It's a ticking time bomb. The moment I moved my redirect rules into a .conf file and committed it to my repo, I gained the ability to deploy with confidence. My redirects became code, not configuration magic that might vanish during a server migration. Every URL you publish is a promise. Someone bookmarked it, shared it, or linked to it. Breaking that promise because you changed your mind about a title or URL structure is not an option. Redirect rules are cheap and easy. But you can never recover lost traffic. I've changed URL formats three times and renamed countless articles. Each time, I added redirects rather than replacing them. Maybe it's just my paranoia, but the web has a long memory, and you never know which old link will suddenly matter. Your redirect rules from last year might not work because they're scattered across multiple .htaccess files, edited directly on production servers, and never version controlled. Mine still work because they travel with my code, surviving framework rewrites, server migrations, and a decade of second thoughts about URL design. The Internet never forgets... as long as the redirect rules are in place.

0 views
DHH 1 week ago

Fizzy is our fun, modern take on Kanban (and we made it open source!)

Kanban is a simple, practical approach to visually managing processes and backlogs by moving work cards from one progress column to another. Toyota came up with it to track their production lines back in the middle of the 20th century, but it's since been applied to all sorts of industries with great effect. And Fizzy is our new fun, modern take on it in digital form. We're certainly not the first to take a swing at this, not even for software development. Since the early 2000s, there's been a movement to use the Kanban concept to track bugs, issues, and ideas in our industry. And countless attempts to digitize the concept over the years.  But as with so much other software, good ideas can grow cumbersome and unwieldy surprisingly quickly. Fizzy is a fresh reset of an old idea. We need more of that.  Very little software is ever the final word on solving interesting problems. Even products that start out with great promise and simplicity tend to accumulate cruft and complexity over time. A healthy ecosystem needs a recurring cycle of renewal. We've taken this mission to heart not just with Fizzy's fun, colorful, and modern implementation of the Kanban concept, but also in its distribution.  Fizzy is available as a service we run where you get 1,000 cards for free, and then it's $20/month for unlimited usage. But we're also giving you access to the entire code base, and invite enterprising individuals and companies to run their own instance totally free of charge. This is done under the O'Saasy License, which is basically the do-whatever-you-want-just-don't-sue MIT License, but with a carve-out that reserves the commercialization rights to run Fizzy as SaaS for us as the creators. That means it's not technically Open Source™, but the source sure is open, and you can find it on our public GitHub repository. That open source is what we run too. So new features or bugs fixes accepted on GitHub will make it into both our Fizzy SaaS offering and what anyone can run on their own hardware. We've already had a handful of contributions go live like this! Ultimately, it's our plan to let data flow freely between the SaaS and the local installations. You'll be able to start an account on your own instance, and then, if you'd rather we just run it for you, take that data with you into the managed setup. Or the other way around! In an age where SaaS companies come and go, pivot one way or the other, I think it's a great reassurance that the source code is freely available, and that any work put into a SaaS account is portable to your own installation later. I'm also just a huge fan of being able to View Source. Traditionally, that's been reserved to the front end (and even that has been disappearing due to the scourge of minimization, transpiling, and bundling), but I'm usually even more interested in seeing how things are built on the backend. Fizzy allows you full introspection into that. Including the entire history of how the product was built, pull request by pull request. It's a great way to learn how modern Rails applications are put together! So please give Fizzy a spin. Whether you're working on software, with a need to track those bugs and feature requests, or you're in an entirely different business and need a place for your particular issues and ideas. Fizzy is a fresh, fun way to manage it all, Kanban style. Enjoy!

0 views
Rob Zolkos 1 week ago

Fizzy Webhooks: What You Need to Know

Fizzy is a new issue tracker ( source available ) from 37signals with a refreshingly clean UI. Beyond looking good, it ships with a solid webhook system for integrating with external services. For most teams, webhooks are the bridge between the issues you track and the tools you already rely on. They let you push events into chat, incident tools, reporting pipelines, and anything else that speaks HTTP. If you are evaluating Fizzy or planning an integration, understanding what these webhooks can do will save you time. I also put together a short PDF with the full payload structure and example code, which I link at the end of this post if you want to go deeper. Here are a few ideas for things you could build on top of Fizzy’s events: If you want to go deeper, you can also build more opinionated tools that surface insights and notify people who never log in to Fizzy: Here is how to set it up. Step 1. Visit a board and click the Webhook icon in the top right. Step 2. Give the webhook a name and the payload URL and select the events you want to be alerted to. Step 3. Once the webhook saves you will see a summary of how it is setup and most importantly the webhook secret which you will need for your handler for securing the webhook. There is also a handy event log showing you when an event was delivered. Since I like to tinker with these sorts of things, I built a small webhook receiver to capture and document the payload structures. Fizzy sends HTTP POST requests to your configured webhook URL when events occur. Each request includes an header containing an HMAC-SHA256 signature of the request body. The verification process is straightforward: Fizzy covers the essential card lifecycle events: The approach was straightforward: I wrote a small Ruby script using WEBrick to act as a webhook receiver. The script listens for incoming POST requests, verifies the HMAC-SHA256 signature (using the webhook secret Fizzy provides when you configure webhooks), and saves each event as a separate JSON file with a timestamp and action name. This made it easy to review and compare the different event types later. To expose my local server to the internet, I used ngrok to create a temporary public URL pointing to port 4002. I then configured Fizzy’s webhook settings with this ngrok URL and selected the event types I wanted to capture. With everything set up, I went through Fizzy’s UI and manually triggered each available event: creating cards, adding comments, assigning and unassigning users, moving cards between columns and boards, marking cards as done, reopening them, postponing cards to “Not Now”, and sending cards back to triage. Each action fired a webhook that my script captured and logged. In total, I captured 13 webhook deliveries covering 10 different action types. The only event I could not capture was “Card moved to Not Now due to inactivity” — Fizzy triggers this automatically after a period of card inactivity, so it was not practical to reproduce during this test. Card body content is not included. The card object in webhook payloads only contains the , not the full description or body content. Comments include both and versions, but cards do not. Since Fizzy doesn’t have a public API ( DHH is working on it ), you can’t fetch the full card content programmatically - you’ll need to use the field to view the card in the browser. Column data is only present when relevant. The object only appears on , , and events - the events where a card actually moves to a specific column. 
IDs are strings, not integers. All identifiers in the payload are strings like , not numeric IDs. I created a short webhook documentation based on this research: FIZZY_WEBHOOKS.pdf It includes the full payload structure, all event types with examples, and code samples for signature verification in both Ruby and JavaScript. Hopefully this helps you get up and running with Fizzy’s webhooks. Let me know if you discover additional events or edge cases. Since the source code is available, you can also submit PRs to fix or enhance aspects of the webhook system if you find something missing or want to contribute improvements. A team metrics dashboard that tracks how long cards take to move from to and which assignees or boards close issues the fastest. Personal Slack or Teams digests that send each person a daily summary of cards they created, were assigned, or closed based on , , , and events. A churn detector that flags cards that bounce between columns or get sent back to triage repeatedly using , , and . A cross-board incident view that watches to keep a separate dashboard of cards moving into your incident or escalation boards. A comment activity stream that ships events into a search index or knowledge base so you can search discussions across boards. Stakeholder status reports that email non-technical stakeholders a weekly summary of key cards: what was created, closed, postponed, or sent back to triage on their projects. You can group by label, board, or assignee and generate charts or narrative summaries from , , , and events. Capacity and load alerts that watch for people who are getting overloaded. For example, you could send a notification to a manager when someone is assigned more than N open cards, or when cards assigned to them sit in the same column for too long without a or event. SLA and escalation notifications that integrate with PagerDuty or similar tools. When certain cards (for example, labeled “Incident” or on a specific board) are not closed within an agreed time window, you can trigger an alert or automatically move the card to an escalation board using , , and . Customer-facing status updates that keep clients in the loop without giving them direct access to Fizzy. You could generate per-customer email updates or a small status page based on events for cards tagged with that customer’s name, combining , , and to show progress and recent discussion. Meeting prep packs that assemble the last week’s events for a given board into a concise agenda for standups or planning meetings. You can collate newly created cards, reopened work, and high-churn items from , , , and , then email the summary to attendees before the meeting. - new card created - card moved to a column / - assignment changes - card moved to Done - card reopened from Done - card moved to Not Now - card moved back to Maybe? - card moved to different board - comment added to a card
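The signature check described above (but not shown) boils down to recomputing the HMAC over the raw request body and comparing it to the header value. A minimal Ruby sketch, assuming a hex-encoded signature and using placeholder names for the header and environment variable (not Fizzy's documented ones):

```ruby
require "openssl"

# Placeholder: the secret shown on the webhook settings screen.
WEBHOOK_SECRET = ENV.fetch("FIZZY_WEBHOOK_SECRET")

# raw_body: the unparsed POST body; signature: the value of the signature header.
def valid_signature?(raw_body, signature)
  expected = OpenSSL::HMAC.hexdigest("SHA256", WEBHOOK_SECRET, raw_body)
  # Hash both sides before comparing so the comparison is constant-length
  # and doesn't leak timing information about the expected digest.
  OpenSSL::Digest::SHA256.digest(expected) == OpenSSL::Digest::SHA256.digest(signature.to_s)
end

# In a WEBrick handler you would read the raw body, call valid_signature?,
# and only then parse the JSON payload and act on the event.
```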

0 views
マリウス 1 week ago

disable-javascript.org

With several posts on this website attracting significant views in the last few months, I have come across plenty of feedback on the tab gimmick implemented last quarter. While the replies that I came across on platforms like the Fediverse and Bluesky were lighthearted and oftentimes humorous, the visitors coming from traditional link aggregators sadly weren’t as amused about it. Obviously, a large majority of people disagreeing with the core message behind this prank appear to be web developers, whose very existence quite literally depends on JavaScript, and who didn’t hold back in expressing their anger in the comment sections as well as through direct emails. Unfortunately, most commenters are missing the point.

This email exchange is just one example of feedback that completely misses the point:

I just found it a bit hilarious that your site makes notes about ditching and disable Javascript, and yet Google explicitly requires it for the YouTube embeds. Feels weird.

The email contained the following attachment: Given the lack of context, I assume that the author was referring to the YouTube embeds on this website (e.g. on the keyboard page). Here is my reply:

Simply click the link on the video box that says “Try watching this video on www.youtube.com” and you should be directed to YouTube (or a frontend of your choosing with LibRedirect [1]) where you can watch it. Sadly, I don’t have the influence to convince YouTube to make their video embeds work without JavaScript enabled. ;-) However, if more people would disable JavaScript by default, maybe there would be a higher incentive for server-side-rendering and video embeds would at the very least show a thumbnail of the video (which YouTube could easily do, from a technical point of view). Kind regards! [1]: https://libredirect.github.io

It also appears that many of the people disliking the feature didn’t care to properly read the highlighted part of the popover that says “Turn JavaScript off, now, and only allow it on websites you trust!”: Indeed - and the author goes on to show a screenshot of Google Trends which, I’m sure, won’t work without JavaScript turned on. This comment perfectly encapsulates the flawed rhetoric. Google Trends (like YouTube in the previous example) is a website that is unlikely to exploit 0-days in your JavaScript engine, or at least that’s the general consensus. However, when you clicked on a link that looks like someone typed it in by putting their head on the keyboard, that led you to a website you obviously didn’t know beforehand, it’s a different story.

What I’m advocating for is to have JavaScript disabled by default for everything unknown to you, and only enable it for websites that you know and trust. Not only is this approach going to protect you from jump-scares, regardless of whether that’s a changing tab title, a popup, or an actual exploit, but it will hopefully pivot the thinking of particularly web developers back from “Let’s render the whole page using JavaScript and display nothing if it’s disabled” towards “Let’s make the page as functional as possible without the use of JavaScript and only sprinkle it on top as a way to make the experience better for anyone who chooses to enable it”. It is mind-boggling how this simple take is perceived as militant techno-minimalism and can provoke such salty feedback. I keep wondering whether these are the same people who consider to be a generally okay way to install software…?
One of the many commenters who, however, did agree with the approach I’m taking on this site put it fairly nicely:

About as annoying as your friend who bumped key’ed his way into your flat in 5 seconds waiting for you in the living room. Or the protest blocking the highway making you late for work. Many people don’t realize that JavaScript means running arbitrary untrusted code on your machine. […] Maybe the hacker ethos has changed, but I for one miss the days of small pranks and nudges to illustrate security flaws, instead of ransomware and exploits for cash. A gentle reminder that we can all do better, and the world isn’t always all that friendly.

As the author of this comment correctly hints, the hacker ethos has in fact changed. My guess is that only a tiny fraction of the people that are actively commenting on platforms like Hacker News or Reddit these days know about, let’s say, cDc’s Back Orifice, the BOFH stories, bash.org, and all the kickme.to/* links that would trigger a disconnect in AOL’s dialup desktop software. Hence, the understanding about how far pranks in the 90s and early 2000s really went simply isn’t there. And with most things these days required to be politically correct, having the tab change to what looks like a Google image search for “sam bankman-fried nudes” is therefore frowned upon by many, even when the reason behind it is to inform.

Frankly, it seems that conformism has eaten not only the internet, but to an extent the whole world, when an opinion that goes ever so slightly against the status quo is labelled as some sort of extreme view. To feel even just a “tiny bit violated by” something as mundane as a changing text and icon in the browser’s tab bar seems absurd, especially when it is you who allowed my website to run arbitrary code on your computer!

Because I’m convinced that a principled stance against the insanity that is the modern web is necessary, I am doubling down on this effort by making it an actual initiative: disable-javascript.org.

disable-javascript.org is a website that informs the average user about some of the most severe issues affecting the JavaScript ecosystem and browsers/users all over the world, and explains in simple terms how to disable JavaScript in various browsers and only enable it for specific, trusted websites. The site is linked on the JavaScript popover that appears on this website, so that visitors aren’t only pranked into hopefully disabling JavaScript, but can also easily find out how to do so.

disable-javascript.org offers a JavaScript snippet that is almost identical to the one in use by this website, in case you would like to participate in the cause. Of course, you can also simply link to disable-javascript.org from anywhere on your website to show your support. If you’d like to contribute to the initiative by extending the website with valuable info, you can do so through its Git repository. Feel free to open pull requests with the updates that you would like to see on disable-javascript.org. :-)

0 views
fLaMEd fury 1 week ago

Ain't Enough To Go 'Round In This World

What’s going on, Internet? December crept up fast and suddenly it’s twenty-three days until Christmas. I’ve been enjoying getting out more and seeing live music. There’s so much more happening up here in Auckland and it has been good getting back into gigs. I started the month with Tom Scott’s Anitya show at the Civic. A week later I questioned my own sanity by going out to another gig with some wonderful friends on a Tuesday night right before flying to Sydney for the first of two work trips. Sydney was great. It was good catching up with and seeing workmates in person, but also mentally exhausting. Flying back to Auckland for the weekend added to the fatigue, but I liked the change of pace. I even managed to catch up with some of my cousins and aunt for dinner. Having the chance to do that on work trips is a nice bonus.

Meanwhile, the house hunting and weekends of endless open homes finally came to an end. My wife viewed a place while I was in Sydney and pushed it through the offer stage. The offer was accepted conditionally before I’d even seen the house. We went unconditional a week later and only then did I walk through it for the first time. After more than sixty open homes this year, buying a place that needs work makes more sense for us than blowing our budget on something “liveable” but missing basics like linen cupboards, wardrobes, or a proper laundry. This way we get to shape it how we want. I’m excited for the new year.

While catching up and surfing the web, I saw one particular link making the rounds claiming that personal websites are dead, which I obviously disagree with and replied to. Finally, I finished up my Firefox Container configuration and shared it for anyone to try out. Let me know if you found the container setup useful. With all that going on, I still found time to watch a bunch of shows, listen to a lot of music, pick up a tonne of new records, and make a few updates around the site. Here’s November in full.

I watched a bunch of episodes on the flights back and forth from Sydney. No movies this month. What happened there? I carried on with The Chair Company, which wrapped up its first season yesterday. Such a bizarre show. No idea when the next season is coming but I’ll be sticking with it. I finished Andor season 3. What a damn good show. I’ve got Rogue One queued up to wrap up the story, even though I’ve already seen it three times. I’m still watching South Park. It’s fun, but I’m tired of the White House plot line (I’m sure Matt & Trey are too). I miss the boys just being kids. I’ll probably go back to season 1 soon to remind myself how the show has changed and evolved over the years. Some absolute classic episodes around seasons 6-7. Pluribus caught my attention and I’m working through it as episodes release. Interesting premise, and I’m enjoying watching the story unfold. On the flight I spotted the UK show Dope Girls and gave it a go. I forgot about it once I landed, but I’ll finish the remaining four episodes soon now that writing this post has reminded me. I started and finished season 2 of The Vince Staples Show. It leans into the same bizarre energy as later seasons of Atlanta. Low stakes, easy to watch, and fun. I also started Educators. Silly, very New Zealand, and perfect fifteen-minute episodes when I don’t want to think and have an awkward laugh.

I got through three books this month. Gabriel’s Bay by Catherine Robertson was a solid read with plenty of local flavour and a warm story.
7th Circle kept me hooked as it pushed further into the Shadow Grove universe I got into last year reading through the Maddison Kate books. I’m fully here for the messy plotlines and the drama threaded between the raunchy sex scenes. I’m here for it. I also read Atmosphere by Taylor Jenkins Reid. Her books are always epically tragic and beautiful at the same time, and this one absolutely delivered on both fronts. Trying to decide if I want to read the rest of the 7th Circle books this month or dive into something heavier like Project Hail Mary.

This month saw my usual mix of pop, hip hop, and early-2000s. Mokomokai ended up as my top artist of the month, with Olivia Rodrigo, D12, Tadpole, Eminem, and Westside Gunn all getting steady playtime. Top albums were a mix - SOUR by Olivia Rodrigo at the top, followed by Tadpole’s The Buddhafinger, Mokomokai’s latest release PONO!, and both Heels Have Eyes records from Westside Gunn. MGK’s Tickets to My Downfall also crept back into rotation with the (All Access) release of five new tracks to the original album. MGK has a gig here next year - do I want to go see him in concert? I mean I like Tickets To My Downfall but think he’s a ballbag. Dilemmas. Track of the month was “Verona” by Elemeno P, with “Kitty” by The Presidents of the United States of America (thanks to riding in the car with my son) and a few Olivia Rodrigo singles scattered through the top ten. Mokomokai showed up again with “Roof Racks”, because sometimes I’m just in the mood for something aggressive.

November 2025 saw my largest vinyl haul ever. I took advantage of the 20 percent off vinyl sale at JB Hi-Fi, burned through a stack of saved vouchers, and grabbed a few special pieces elsewhere. The links are a bit of a mix this month and there’s a lot of them. Enjoy.

Not a huge month for website work. I fixed up some CSS, finished rolling out categories and tags across all my posts, and cleaned up a few lingering bits of front-matter. I still need to build the individual category pages and rethink how this data is displayed on the posts index and on each post. The posts page itself needs a refresh too. I’m not loving the masonry card layout anymore.

This update was brought to you by Alright by Tadpole. Hey, thanks for reading this post in your feed reader! Want to chat? Reply by email or add me on XMPP, or send a webmention. Check out the posts archive on the website. Tom Scott – Anitya from the gig MOKOMOKAI – PONO, WHAKAREHU, and Mokomokai all direct from their website in a special bundle which included the last remaining copies of the Mokomokai Vinyl 1st pressing in Red & Black Marble Fleetwood Mac – Rumours — JB Hi-Fi Eminem – The Slim Shady LP (Expanded Edition) — JB Hi-Fi Stellar* – Mix — JB Hi-Fi Tadpole – The Buddhafinger, and The Medusa — JB Hi-Fi D12 – Devil’s Night (IVC Edition) — Interscope Vinyl Collective, orange variant with posters and D12 sticker in a beautiful, heavy gatefold sleeve The psychological cost of having an RSS feed Filip explores the anxiety that comes with writing a blog knowing it has an RSS feed. My first months in cyberspace Phil Gyford remembers the excitement and optimism of being online in 1995. Steps Towards a Web without The Internet AJ Roach imagines a web that could exist without the internet, built from small, local networks instead of centralised infrastructure. Should Your Indieweb Site Be Mobile Friendly? MKUltra.Monster experiments with making old-web design mobile-friendly without losing its classic feel.
I ❤ shortcuts #3: read a random blog post Hyde shares a neat script to help randomly surf the independent web. In Praise of RSS and Controlled Feeds of Information rkert writes about why syndication still matters and how sharing content across the open web helps sites stay connected. Who’s a blog for? Cobb thinks through who a blog is really for and why writing for yourself remains the most sustainable approach. Maintaining a Music Library, Ten Years On Brian Schrader reflects on maintaining his personal music library over a decade and why owning your collection still matters. ChatGPT’s Atlas: The Browser That’s Anti-Web - Anil Dash Anil Dash argues that Atlas isn’t just an unusual browser but an anti-web tool that strips context from sites and traps users in a closed, distorted version of the internet. I know you don’t want them to want AI, but… - Anil Dash Anil Dash questions how we should react to Firefox adding AI features. He suggests die-hard fans need to look past the knee-jerk outrage and ask whether Firefox is actually trying to offer a safer, more privacy-minded version of tools their non-technical friends are already using. Early web memories - roundup post Winther rounds up early web memories from the recent Bear Blog Carnival - gutted I missed this as it was happening! Blogs used to be very different. Jetgirl looks back at how blogs used to work, from tight-knit communities to slower, more personal writing, and how different that feels compared to today. PicoSSG Pico is a tiny static site generator focused on simplicity, giving you a lightweight way to build plain HTML sites without a full framework. Personal blogs are back, should niche blogs be next? Disassociated writes about the return of personal blogs and why niche blogs might be the next wave as people move away from algorithmic platforms. Feeds and algorithms have freed us from personal websites Disassociated pushes back on the idea that platform feeds are “good enough,” arguing that treating Medium profiles as websites misses the point, and that personal sites still matter because they give you control rather than renting space inside someone else’s algorithm. Small Web, Big Voice Afranca writes about how the small web still carries real weight, showing that personal sites and hand-built spaces can have a bigger impact than their size suggests. How to Protect Your Privacy from ChatGPT and Other Chatbots Mozilla explains how to protect your privacy when using ChatGPT and other AI tools, focusing on data control, account settings, and reducing what these systems can collect about you.

0 views

Helping agents debug webapps

I've spent a fair bit of time over the past year having agents build webapps for me. Typically, they're built out of some nodejs backend and then some client-side js framework. One debugging pattern that comes up again and again is that there's a bug in the client-side JavaScript. Often a current-gen model running in a coding agent is able to solve a client-side bug just by inspecting the code. When it works, it's amazing. But "often" is not the same thing as "every time". If the agent can't solve the problem by inspection, it will often fire up a browser MCP and attempt to debug the problem interactively. What it's really trying to do is get a peek at the browser's console log. This works, but it burns a ton of tokens and takes forever. There's a better way.

One of the first things I ask my agents to build when we're doing web dev is a frontend-to-backend bridge for console logs. There are two parts to this:

- A tiny development-mode JavaScript shim in the frontend code that sends almost any console.log (or console.error) message to a backend API endpoint. You want to be careful to make sure that the shim doesn't try to send its own "I can't talk to the backend endpoint" errors to the backend.
- A backend endpoint that receives frontend log messages from clients and logs them to the server log.

With those two things, your agent (or whomever you've got coding for you) can see frontend and backend log messages in one place, just by tailing a log. It's amazingly useful, really straightforward, and so quick to build. It turns out that it's helpful for any humans who are working on your software, too.
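Here's roughly what that looks like in practice: a minimal sketch rather than the author's actual code, assuming an Express backend; the /api/client-log endpoint name and the DEV flag are illustrative.

```js
// Frontend (development builds only): mirror console output to the backend.
// The .catch(() => {}) is the critical detail: the shim must never try to
// report its own "can't reach the backend" failures, or it loops forever.
const DEV = true; // gate this on your build tool's dev flag in real code

if (DEV) {
  for (const level of ['log', 'error']) {
    const original = console[level].bind(console);
    console[level] = (...args) => {
      original(...args); // keep normal browser console behaviour
      fetch('/api/client-log', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ level, args: args.map(String) }),
      }).catch(() => {}); // swallow bridge errors, never re-log them
    };
  }
}
```

```js
// Backend (Express): fold client log messages into the server log, so
// tailing one file shows both sides of the app.
import express from 'express';

const app = express();

app.post('/api/client-log', express.json(), (req, res) => {
  const { level = 'log', args = [] } = req.body ?? {};
  console.log(`[client:${level}]`, ...args);
  res.sendStatus(204);
});

app.listen(3000);
```

With this in place, tailing the server's stdout shows every browser-side console.log inline with the backend's own logging.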

0 views
iDiallo 1 week ago

How I Became a Spam Vector

There are several reasons for Google to downrank a website in their search results. My first experience with downranking was on my very first day at a job in 2011. The day I walked into the building, Google released their first Panda update. My new employer, being a "content creator," disappeared from search results. This was a multi-million dollar company that had teams of writers and a portfolio of websites. They depended on Google, and not appearing in search meant we went on code red that first day.

But it's not just large companies. Just this year, as AI Overview has dominated the search page, I've seen traffic to this blog falter. At one point, the number of impressions was increasing, yet the number of clicks declined. I mostly blamed it on AI Overview, but it didn't take long before impressions also dropped. It wasn't such a big deal to me since the majority of my readers now come through RSS.

Looking through my server logs, I noticed that web crawlers had been accessing my search page at an alarming rate. And the search terms were text promoting spammy websites: crypto, gambling, and even some phishing sites. That seemed odd to me. What's the point of searching for those terms on my website if it's not going to return anything?

In fact, there was a bug on my search page. If you entered Unicode characters, the page returned a 500 error. I don't like errors, so I decided to fix it. You can now search for Unicode on my search page. Yay!

But it didn't take long for traffic to my website to drop even further. I didn't immediately make the connection; I continued to blame AI Overview. That was until I saw the burst of bot traffic to the search page. What I didn't take into account was that now that my search page was working, when you entered a spammy search term, it was prominently displayed on the page and in the page title. What I failed to see was that this was a vector for spammers to post links to my website. Even if those weren't actual anchor tags on the page, they were still URLs to spam websites. Looking through my logs, I can trace the sharp decline of traffic to this blog back to when I fixed the search page by adding support for Unicode.

I didn't want to delete my search page, even though it primarily serves me for finding old posts. Instead, I added a single meta tag to fix the issue:

<meta name="robots" content="noindex">

What this means is that crawlers, like Google's indexing crawler, will not index the search page. Since the page is not indexed, the spammy content will not be used as part of the website's ranking. The result is that traffic has started to pick up once more.

Now, I cannot say with complete certainty that this was the problem and solution to the traffic change. I don't have data from Google. However, I can see the direct effect, and I can see through Google Search Console that the spammy search pages are being added to the "no index" issues section. If you are experiencing something similar with your blog, it's worth taking a look through your logs, specifically search pages, to see if spammy content is being indirectly added.

I started my career watching a content empire crumble under Google's algorithm changes, and here I am years later, accidentally turning my own blog into a spam vector while trying to improve it. The tools and tactics may have evolved, but some things never change: Google's search rankings are a delicate ecosystem, and even well-intentioned changes can have serious consequences.

I often read about bloggers who never look past the content they write, meaning they don't care whether you read it or not. But the problem comes when someone else takes advantage of your website's flaws. If you want to maintain control over your website, you have to monitor your traffic patterns and investigate anomalies. AI Overview is most likely responsible for the original traffic drop, and I don't have much control over that. But it was also a convenient scapegoat to blame everything on and an excuse not to look deeper. I'm glad at least that my fix was something simple that anyone can implement.

1 view
pabloecortez 1 week ago

I made an 88x31 button for powRSS

I think 88x31 buttons are cute so I made one for powRSS. You can grab it from https://powrss.com/powrss-88-31.png. Here it is in my blog's footer!
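If you want to put the button on your own site, the usual pattern is a linked image. A small sketch, using the image URL above; the link target is an assumption:

```html
<!-- Classic 88x31 button: a small linked image, typically in a footer -->
<a href="https://powrss.com">
  <img src="https://powrss.com/powrss-88-31.png" alt="powRSS" width="88" height="31">
</a>
```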

0 views
Jim Nielsen 1 week ago

Malicious Traffic and Static Sites

I wrote about the 404s I serve for robots.txt. Now it’s time to look at some of the other common 404s I serve across my static sites (as reported by Netlify’s analytics). I don’t run WordPress, but as you can see I still get a lot of requests for WordPress resources. All of my websites are basically just static files on disk, meaning only GET requests are handled (no POST, PUT, PATCH, etc.). And there’s no authentication anywhere. So when I see these requests, I think: “Sure is nice to have a static site where I don’t have to worry about server maintenance and security patches for all those resources.”

Of course, that doesn’t mean running a static site protects me from being exploited by malicious, vulnerability-seeking traffic. There are a few more common requests I’m serving a 404 to: probes for the kinds of sensitive files that should never be public. With all the magic building and bundling we do as an industry, I can see how easy it would be to have sensitive data from your source repo (like those files) end up in your build output. No wonder there are bots scanning the web for these common files! So be careful out there. Just because you’ve got a static site doesn’t mean you’ve got no security concerns. Fewer, perhaps, but not none.

0 views
Karboosx 1 week ago

Homemade tracking system without the use of third-party libs like Google Analytics

Tired of sending your analytics data to Google? Build with me a simple, self-hosted tracking system from scratch that respects user privacy, detects bots, and keeps everything on your own servers.
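The post itself walks through the full build; the core shape of such a tracker is small enough to sketch here. This is an illustrative outline, not Karboosx's actual code, and the endpoint and field names are assumptions:

```js
// Browser side: send a tiny beacon on each page view.
navigator.sendBeacon('/track', JSON.stringify({
  path: location.pathname,
  referrer: document.referrer,
}));
```

```js
// Server side (Express, illustrative): record the hit, skipping obvious bots.
import express from 'express';

const app = express();

app.post('/track', express.text({ type: '*/*' }), (req, res) => {
  const ua = req.get('User-Agent') ?? '';
  if (/bot|crawl|spider/i.test(ua)) return res.sendStatus(204); // crude bot filter
  const { path, referrer } = JSON.parse(req.body || '{}');
  console.log(new Date().toISOString(), path, referrer); // or append to your own DB
  res.sendStatus(204); // no cookies, no third parties; data stays on your server
});

app.listen(3000);
```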

0 views
pabloecortez 1 week ago

Black Friday for You and Me

Yesterday was Thanksgiving and I had the privilege of spending the holiday with my family. We have a tradition of doing a toast going around the table and sharing at least one thing for which we are grateful. I want to share with you a story that started last year, in January of 2024, when a family friend named Germán reached out to me for help with a website for his business.

Germán is in his 50s; he went to school for mechanical engineering in Mexico, and about twenty years ago he moved to the United States. Today he owns a restaurant in Las Vegas with his wife and also runs a logistics company for distributing produce. We met the last week of January, and he told me that he was looking to build a website for his restaurant and eventually build up his infrastructure so most of his business could be automated. His current workflow required his two sons to run the business along with him. They managed everything manually on expensive proprietary software. There were lots of things that could be optimized, so I agreed to jump on board and we have been collaborating ever since.

What I assumed would be a developer type of position instead became more of a peer-mentorship relationship. Germán is curious, intelligent, and hardworking. It didn't take long for me to notice that he didn't just want to have software or services running "in the background" while he occupied himself with other tasks. He wanted to have a thorough understanding of all the software he adopted. "I want to learn but I simply don't have the patience," he told me during one of our first meetings. At first I admit I thought this was a bit of a red flag (sorry Germán haha) but it all began to make sense when he showed me his books. He had paid thousands of dollars for a WordPress website that only listed his services and contact information. The company he had hired offered an expensive SEO package for a monthly fee. My time in open source and the indieweb had blinded me to how abusive the "web development" industry had become. I'm referring to those local agencies that take advantage of unsuspecting clients and charge them for every little thing.

I began making Germán's website, and we went back and forth on assets, copy, and menus; the project came together and everything went smoothly. He was happy that he got to see how I built things. During this time I would journal through my work on his project and e-mail my notes to him. He loved it.

Next came a new proposition. While the static site was nice to have an online presence, what he was after was getting into e-commerce. His wife, Sarah, makes artisanal beauty products and custom clothes. Her friends would message her on Facebook to ask what new stuff she was working on and she would send pictures to them from her phone. She would have benefitted from having a website, but after the bad experience they had had with the agency, they weren't too enthused about the prospect of hiring them for another project.

I met with both of them again for this new project and we talked for hours, more like coworkers this time around. We eventually came to the conclusion that it would be more rewarding for them to really learn how to put their own shop together. I acted more as a coach or mentor than a developer. We'd sit together and activate accounts, fill out pages, choose themes. I was providing a safe space for them to be curious about technology, make mistakes, learn from them, and immediately get feedback on technical details so they could stay on a safe path.
I'm so grateful for that opportunity afforded to me by Germán and his family. I've thought about how that approach would look if applied to the indieweb. It's always so exciting for me to see what the friends I've made here are working on. I know the open web becomes stronger when more independent projects are released, as we have more options to free ourselves from the corporate web that has stifled so much of the creativity and passion that I love and miss from the internet. I want to keep doing this. If you are building something on your own, have been out of the programming world for a while but want to start again, or maybe you are almost done and need a little boost in confidence (or accountability!) to reach the finish line and ship, I'm here to help. Check out my coaching page to find out more. I'm excited about the prospect of a community of builders who care about self-reliance and releasing software that puts people first. Perhaps this Black Friday you could choose to invest in yourself :-)

0 views
Kix Panganiban 1 week ago

Utteranc.es is really neat

It's hard to find privacy-respecting (read: not Disqus) commenting systems out there. A couple of good ones recommended by Bear are Cusdis and Komments, but I'm not a huge fan of either of them:

- Cusdis styling is very limited. You can only set it to dark or light mode, with no control over the specific HTML elements and styling. It's fine, but I prefer something that looks a little neater.
- Komments requires manually creating a new page for every new post that you make. The idea is that wherever you want comments, you create a page in Komments and embed that page into your webpage. So you can have one Komments page per blog post, or even one Komments page for your entire blog.

Then I realized that there's a great alternative that I've used in the past: utteranc.es. Its execution is elegant: you embed a tiny JS file on your blog posts, and it will map every page to GitHub Issues in a GitHub repo. In my case, I created this repo specifically for that purpose. Neat! I'm including utteranc.es in all my blog posts moving forward. You can check out how it looks below.
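For reference, the standard utteranc.es embed is a single script tag dropped into your post template; the repo and theme values below are placeholders you'd swap for your own:

```html
<script src="https://utteranc.es/client.js"
        repo="your-user/your-comments-repo"
        issue-term="pathname"
        theme="github-light"
        crossorigin="anonymous"
        async>
</script>
```

The issue-term="pathname" attribute is what maps each page to its own GitHub issue.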

0 views
The Tymscar Blog 1 week ago

Imgur Geo-Blocked the UK, So I Geo-Unblocked My Entire Network

Imgur decided to block UK users. Honestly? I don’t really care that much. I haven’t actively browsed the site in years. But it used to be everywhere. Back when Reddit embedded everything on Imgur, maybe fifteen years ago, it was genuinely useful. Then Reddit built their own image hosting, Discord did the same, and Imgur slowly faded into the background. Except it never fully disappeared. And since the block, I keep stumbling across Imgur links that just show “unavailable.” It’s mildly infuriating.

0 views
Langur Monkey 1 week ago

Google *unkills* JPEG XL?

I’ve written about JPEG XL in the past. First, I noted Google’s move to kill the format in Chromium in favor of the homegrown and inferior AVIF. 1 2 Then, I had a deeper look at the format, and visually compared JPEG XL with AVIF on a handful of images. The latter post started with a quick support test: “If you are browsing this page around 2023, chances are that your browser supports AVIF but does not support JPEG XL.” Well, here we are at the end of 2025, and this very sentence still holds true. Unless you are one of the 17% of users using Safari 3 , or are adventurous enough to use a niche browser like Thorium or LibreWolf, chances are you see the AVIF banner in green and the JPEG XL image in black/red.

The good news is, this will change soon. In a dramatic turn of events, the Chromium team has reversed its stance, and has decided to support the format in Blink (the engine behind Chrome/Chromium/Edge). Given Chrome’s position in the browser market share, I predict the format will become a de facto standard for images in the near future.

I’ve been following JPEG XL since its experimental support in Blink. What started as a promising feature was quickly axed by the team in a bizarre and ridiculous manner. First, they asked the community for feedback on the format. Then, the community responded very positively. And I don’t only mean a couple of guys in their basement: Meta, Intel, Cloudinary, Adobe, Krita, and many more. After that came the infamous comment (#85, Oct 31, 2022):

Thank you everyone for your comments and feedback regarding JPEG XL. We will be removing the JPEG XL code and flag from Chromium for the following reasons:

- Experimental flags and code should not remain indefinitely
- There is not enough interest from the entire ecosystem to continue experimenting with JPEG XL
- The new image format does not bring sufficient incremental benefits over existing formats to warrant enabling it by default
- By removing the flag and the code in M110, it reduces the maintenance burden and allows us to focus on improving existing formats in Chrome

Yes, right, “not enough interest from the entire ecosystem”. Sure. Anyway, following this comment, a steady stream of messages pointed out how wrong that was, from all the organizations mentioned above and many more. People pointed it out in blog posts, videos, and social media interactions.

Strangely, the following few years were pretty calm for JPEG XL. However, a few notable events did take place. First, the Firefox team showed interest in a JPEG XL Rust decoder, after describing their stance on the matter as “neutral”. They were concerned about the increased attack surface resulting from including the current 100K+ line C++ reference decoder, even though most of those lines are testing code. In any case, they kind of requested a “memory-safe” decoder. This seems to have kick-started the Rust implementation, jxl-rs, from Google Research. To top it off, a couple of weeks ago, the PDF Association announced their intent to adopt JPEG XL as a preferred image format in their PDF specification. The CTO of the PDF Association, Peter Wyatt, expressed their desire to include JPEG XL as the preferred format for HDR content in PDF files. 4

All of this pressure, exerted steadily over time, made the Chromium team reconsider the format. They tried to kill it in favor of AVIF, but that hasn’t worked out. Rick Byers, on behalf of Chromium, made a comment in the Blink developers Google group about the team welcoming a performant and memory-safe JPEG XL decoder in Chromium. He stated that the change of stance was in light of the positive signs from the community covered above (Safari support, Firefox updating their position, PDF, etc.). Quickly after that, the Chromium issue’s status was flipped accordingly. This is great news for the format, and I believe it will give it the final push for mass adoption.
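Sites needn’t even wait for universal support to start shipping JPEG XL: the picture element already gives graceful fallback. A minimal sketch (file names are illustrative):

```html
<!-- The browser picks the first source it supports; anything older
     falls back to plain JPEG. image/jxl is the registered MIME type. -->
<picture>
  <source srcset="photo.jxl" type="image/jxl">
  <source srcset="photo.avif" type="image/avif">
  <img src="photo.jpg" alt="Example photo" width="1200" height="800">
</picture>
```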
The format is excellent for all kinds of purposes, and I’ll be adopting it pretty much instantly for this and the Gaia Sky website when support is shipped. Some of the features that make it superior to the competition are:

- Lossless re-compression of JPEG images. This means you can re-compress your current JPEG library without losing information and benefit from a ~30% reduction in file size for free. This is a killer feature that no other format has.
- Support for wide gamut and HDR.
- Support for image sizes of up to 1,073,741,823x1,073,741,824. You won’t run out of image space anytime soon. AVIF is ridiculous in this aspect, capping at 8,193x4,320. WebP goes up to 16K², while the original 1992 JPEG supports 64K².
- Maximum of 32 bits per channel. No other format (except for the defunct JPEG 2000) offers this.
- Maximum of 4,099 channels. Most other formats support 4 or 5, with the exception of JPEG 2000, which supports 16,384.
- Super resilient to generation loss. 5
- Progressive decoding, which is essential for web delivery, IMO. WebP and HEIC have no such feature; progressive decoding in AVIF was added a few years back.
- Support for animation.
- Support for alpha transparency.
- Depth map support.

For a full codec feature breakdown, see Battle of the Codecs.

JPEG XL is the future of image formats. It checks all the right boxes, and it checks them well. Support in the overwhelmingly most popular browser engine is probably going to be a crucial stepping stone in the format’s path to stardom. I’m happy that the Chromium team reconsidered the format’s inclusion, but I am sad that it took so long and so much pressure from the community to achieve it.

1. https://aomediacodec.github.io/av1-avif/
2. https://jpegxl.info/resources/battle-of-codecs.html
3. https://radar.cloudflare.com/reports/browser-market-share-2025-q1
4. https://www.youtube.com/watch?v=DjUPSfirHek&t=2284s
5. https://youtu.be/qc2DvJpXh-A

0 views