Posts in Web-development (20 found)

Warning: containment breach in cascade layer!

CSS cascade layers are the ultimate tool to win the specificity wars. Used alongside the :where() selector, specificity problems are a thing of the past. Or so I thought. Turns out cascade layers are leakier than a xenonite sieve. Cross-layer shenanigans can make bad CSS even badder. I discovered a whole new level of specificity hell. Scroll down if you dare!

There are advantages too, so I’ll start with a neat trick. To set up this trick I’ll quickly cover my favoured CSS methodology for a small website. I find defining three cascade layers is plenty. In the lowest layer I add my reset styles, custom properties, anything that touches a global element, etc. In the middle layer I add the core of the website. In the highest layer I add utility classes that look suspiciously like Tailwind, for pragmatic use. Visually-hidden is one such utility class in my system.

I recently built a design where many headings and UI elements used an alternate font with a unique style. It made practical sense to handle it with a utility class (this is but a tribute; the real class had more properties). The class is DRY and easily integrated into templates and content editors. Adding it to the highest cascade layer makes sense: I don’t have to worry about juggling source order or overriding properties on the class itself. I especially do not have to care about specificity or slap !important everywhere like a fool.

This worked well. Then I zoomed further into the Figma file and was betrayed! The design had an edge case where letter-spacing varied for one specific component. It made sense for the design. It did not make sense for my system. If you remember, my utilities layer takes priority over the layer where components live, so I can’t simply apply a unique style to the component. Say the component is a wrapper element with the utility class applied inside it, and I want to change the letter-spacing back to normal. Oops, I’ve lost the specificity war regardless of what selector I use. The utility class wins because I set it up to win.

My “escape hatch” uses custom property fallback values.
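A minimal sketch of the pattern (selectors, property names, layer names, and values are all illustrative, not the real project’s):

```css
/* Utility class in the highest layer: the hard-coded value is only a fallback. */
@layer utilities {
  .fancy-text {
    font-family: var(--fancy-font, "Alt Font", sans-serif);
    letter-spacing: var(--fancy-tracking, 0.05em);
  }
}

/* The edge-case component, in a lower layer, "configures" the utility
   instead of fighting it. */
@layer components {
  .special-card {
    --fancy-tracking: normal;
  }
}
```

The component rule sets only a custom property, so it never competes with the utility’s letter-spacing declaration; layer priority never enters into it.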
In most cases the custom property is not defined and the default is applied. For my edge case component I can ‘configure’ the utility class. I’ve found this to be an effective solution that feels logical and intuitive. I’m working with the cascade. It’s a good thing that custom properties are not locked within cascade layers! I don’t think anyone would expect that to happen.

In drafting this post I was going to use an example to show the power of cascade layers. I was going to say that not even !important wins. Then I tested my example and found that !important does actually override higher cascade layers. It breaches containment too! What colour are the paragraphs? Suffice it to say that things get very weird. See my CodePen. Spoiler: blue wins.

I’m sure there is a perfectly cromulent reason for this behaviour but on face value I don’t like it! Bleh! I feel like !important should be locked within a cascade layer. I don’t even want to talk about the !important inversion… I’m sure there are GitHub issues, IRC logs, and cave wall paintings that discuss how cascade layers should handle !important. They got it wrong! The fools! We could have had something good here!

Okay, maybe I’m being dramatic. I’m missing the big picture; is there a real reason it has to work this way? It just feels… wrong? I’ve never seen a use case for !important that wasn’t tear-inducing technical debt. Permeating layers with !important feels wrong even though custom properties behaving similarly feels right. It’s hard to explain. I reckon if you’ve built enough websites you’ll get that sense too? Or am I just talking nonsense?

I subscribe to the dogma that says !important should never be used, but it’s not always my choice. I build a lot of bespoke themes. The WordPress + plugin ecosystem is the ultimate specificity war. WordPress core laughs in the face of “CSS methodology” and loves to put styles where they don’t belong. Plugin authors are forced to write even gnarlier selectors. When I finally get to play, styles are an unmitigated disaster.
Cascade layers can curtail unruly WordPress plugins, but if they use !important it’s game over; I’m back to writing even worse code. Thanks for reading! Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.

iDiallo Yesterday

Back button hijacking is going away

When websites are blatantly hostile, users close them and never come back. Have you ever downloaded an app, realized it was deceptive, and deleted it immediately? It's a common occurrence for me. But there is truly hostile software that we still end up using daily. We don't just delete those apps, because the hostility is far more subtle. It's like the boiling frog: the heat turns up so slowly that the frog enjoys a nice warm bath before it's fully cooked. Clever hostile software introduces one frustrating feature at a time.

Every time I find myself on LinkedIn, it's not out of pleasure. Maybe it's an email about an enticing job. Maybe it's an article someone shared with me. Either way, before I click the link, I have no intention of scrolling through the feed. Yet I end up on it anyway, not because I want to, but because I've been tricked.

You see, LinkedIn employs a trick called back button hijacking. You click a LinkedIn URL that a friend shared, read the article, and when you're done, you click the back button expecting to return to whatever app you were on before. But instead of going back, you're still on LinkedIn. Except now you are on the homepage, where your feed loads with enticing posts that lure you into scrolling. How did that happen? How did you end up on the homepage when you only clicked on a single link? That's back button hijacking.

Here's how it works. When you click the original LinkedIn link, you land on a page and read the article. In the background, LinkedIn secretly gets to work. Using JavaScript's history.replaceState() method, it swaps the page's URL for the homepage. replaceState() doesn't add an entry to the browser's history. Then LinkedIn manually pushes the original URL you landed on back onto the history stack with history.pushState(). This all happens so fast that the user never notices any change in the URL or the page. As far as the browser is concerned, you opened the LinkedIn homepage and then clicked on a post to read it.
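The sequence described above uses the browser History API (history.replaceState and history.pushState). Here is a runnable sketch, with a tiny stub standing in for window.history so the order of operations is clear; the URLs are made up:

```javascript
// Stub for window.history: replaceState swaps the current entry,
// pushState adds a new entry on top. Same shape as the real API.
const history = {
  entries: ['/posts/shared-article'], // the link the reader actually clicked
  index: 0,
  replaceState(_state, _title, url) { this.entries[this.index] = url; },
  pushState(_state, _title, url) { this.entries.push(url); this.index += 1; },
};

// What the site's script does silently after the article loads:
history.replaceState(null, '', '/feed');              // current entry becomes the homepage
history.pushState(null, '', '/posts/shared-article'); // article re-added on top

// The stack is now ['/feed', '/posts/shared-article']:
// pressing Back lands on the homepage feed, not where the reader came from.
```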
So when you click the back button, you're taken back to the homepage, the feed loads, and you're presented with the most engaging post to keep you on the platform. If you spent a few minutes reading the article, you probably won't even remember how you got to the site. So when you click back and see the feed, you won't question it. You'll assume nothing deceptive happened.

While LinkedIn only pushes you one level down in the history stack, more aggressive websites can break the back button entirely. They push a new history state every time you try to go back, effectively trapping you on their site. In those cases, your only option is to close the tab.

I've also seen developers unintentionally break the back button, often when implementing a search feature. On a search box where each keystroke returns a result, an inexperienced developer might push a new history state on every keystroke, intending to let users navigate back to previous search terms. Unfortunately, this creates an excessive number of history entries. If you typed a long search query, you'd have to click the back button for every character (including spaces) just to get back to the previous page. The correct approach is to push a history state only when the user submits or leaves the search box.

As of yesterday, Google announced a new spam policy to address this issue. Their reasoning: "People report feeling manipulated and eventually less willing to visit unfamiliar sites. As we've stated before, inserting deceptive or manipulative pages into a user's browser history has always been against our Google Search Essentials." Any website using these tactics will be demoted in search results: "Pages that are engaging in back button hijacking may be subject to manual spam actions or automated demotions, which can impact the site's performance in Google Search results. To give site owners time to make any needed changes, we're publishing this policy two months in advance of enforcement on June 15, 2026."
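The search-box mistake can be made concrete with a toy model of the history stack (the class below is a stand-in for the real history object, which exposes pushState/replaceState with this behaviour):

```javascript
// Toy model of the browser history stack (stand-in for window.history).
class FakeHistory {
  constructor(startUrl) { this.entries = [startUrl]; this.index = 0; }
  pushState(url) {        // adds a new entry: a new Back-button stop
    this.entries = this.entries.slice(0, this.index + 1);
    this.entries.push(url);
    this.index += 1;
  }
  replaceState(url) {     // swaps the current entry: no new stop
    this.entries[this.index] = url;
  }
  back() { if (this.index > 0) this.index -= 1; return this.entries[this.index]; }
}

// Anti-pattern: pushState on every keystroke while typing "cats".
const naive = new FakeHistory('/search');
for (const q of ['c', 'ca', 'cat', 'cats']) naive.pushState(`/search?q=${q}`);
// naive.entries.length === 5: four Back clicks just to escape the search.

// Better: replaceState while typing, pushState only when the user commits.
const better = new FakeHistory('/search');
for (const q of ['c', 'ca', 'cat', 'cats']) better.replaceState(`/search?q=${q}`);
better.pushState('/search?q=cats'); // user pressed Enter
// better.entries.length === 2: one Back click leaves the search results.
```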
I'm not sure how much search rankings affect LinkedIn specifically, but in the grand scheme of things, this is a welcome change. I hope this practice is abolished entirely.


Self-hosting and surviving the front page of Hacker News

Last October, my post “Why I Ditched Disqus for My Blog” unexpectedly reached the #1 spot on Hacker News - and stayed on the front page for more than 11 hours. Overnight, a blog I usually run quietly from an old laptop-turned-server at home was under the kind of scrutiny I never prepared for. For context, a couple of years ago I migrated from Ghost to Hugo and started self-hosting in a truly DIY way - a setup that could reliably handle my modest regular traffic.

Josh Comeau 2 days ago

Squash and Stretch

Have you ever heard of Disney’s 12 Basic Principles of Animation? In this tutorial, we’ll explore how we can use the very first principle to create SVG micro-interactions that feel way more natural and believable. It’s one of those small things that has a big impact.

Kev Quirk 3 days ago

Adding a Book Editor to My Pure Blog Site

Regular readers will know that I've been on quite the CMS journey over the years. WordPress, Grav, Jekyll, Kirby, my own little Hyde thing, and now Pure Blog. I won't bore you with the full history again, but the short version is: I kept chasing just the right amount of power and simplicity, and I think Pure Blog might actually be it.

But there was one nagging thing. I have a books page that's powered by a YAML data file, which creates a running list of everything I've read with ratings, summaries, and the occasional opinion. It worked great, but editing it meant cracking open a YAML file in my editor and being very careful not to mess up the indentation. Not ideal.

So I decided to build a proper admin UI for it. And in doing so, I've confirmed that Pure Blog is exactly what I wanted it to be - flexible and hackable. I added a new Books tab to the admin content page, and a dedicated editor page. It's got all the fields I need - title, author, genre, dates, a star rating dropdown, and a Goodreads URL. I also added CodeMirror editors for the summary and opinion fields, so I have all the markdown goodness they offer in the post and page editors. The key thing is that none of this touched the Pure Blog core. Not a single line.

[Image: My new book list in Pure Blog]
[Image: A book being edited]

Pure Blog has a few mechanisms that make this kind of thing surprisingly clean. A user functions file is auto-loaded after core, so any custom functions I define there are available everywhere — including in admin pages. I put my save function here, which takes the books data and writes it back to the data file, then clears the cache — exactly like saving a normal post does. Again, zero core changes.

An ignore list is the escape hatch for when I do need to override a core file. I added both the admin content page (where I added the Books tab) and the new editor to the ignore list, so future Pure Blog updates won't mess with them. It's a simple text file, one path per line. Patch what you need, ignore it, and move on. The third mechanism is where it gets a bit SSG-ish.
The books page is powered by a PHP file that loads the YAML, sorts it by read date, and renders the whole page. It's essentially a template, not unlike a Liquid or Nunjucks layout in Jekyll or Eleventy. Same idea for the books RSS feed. Using a YAML data file for books made more sense to me than markdown files like a post or a page, as it's all metadata really. There's no real "content" for these entries.

Put those three things together and you've got something pretty nifty. A customisable admin UI, safe core patching, and template-driven data pages — all without a plugin system or any framework magic. Bloody. Brilliant.

I spent years chasing the perfect CMS, and a big part of what I was looking for was this. The ability to build exactly what I need without having to fight the platform, or fork it, or bolt on a load of plugins. With Kirby, I could do this kind of thing, but the learning curve was steep and the blueprint system took me ages to get my head around. With Jekyll/Hyde, I had the SSG flexibility, but no web-based CMS I could log in to and create content - I needed my laptop. Pure Blog sits in a really nice middle ground — it's got a proper admin interface out of the box, but it gets out of the way when you want to extend it.

I'm chuffed with how the book editor turned out. It's a small thing, but it's exactly what I wanted, and the fact that it all lives outside of core means I can update Pure Blog without worrying about losing any of it. Now, if you'll excuse me, I have some books to log. 📚 Thanks for reading this post via RSS. RSS is ace, and so are you. ❤️ You can reply to this post by email, or leave a comment.

Kev Quirk 3 days ago

How I Discover New Blogs

Finding a new blog to read is one of my favourite things to do online. It genuinely brings me joy. Right now I have 230 sites that I follow in my RSS reader, Miniflux. If I ever want to spend some time reading, I'll usually open Miniflux over my Mastodon client, Moshidon. There are no likes, boosts, hashtags etc. Just interesting people sharing interesting opinions. It's lovely.

So how do I discover these blogs? There are many ways to do it, but here are some that I've found most successful, ranked from most useful to least.

When someone I already enjoy reading links to a post from another blogger, either just to share their posts, or to add their own commentary to the conversation. This (to me at least) is the most useful way to discover new blogs to read. It's the entire premise of the Indieweb, so if you own a blog, please make sure you're linking to other blogs in your posts. 🙃

There are a number of great small/indie web aggregators out there, and there seem to be new ones popping up all the time. Here's a list of some of my favourites:

- Bear Blog Discover
- Blogosphere
- Kagi Small Web

I tend to use these as a kind of extended RSS reader. So if I'm up to date on my RSS feeds, I'll use these as a way to continue hunting for new people to follow. Truth is, I actually spend more time on these sites than I do on the fediverse. Speaking of which...

There are lots of cool people on the fediverse, and many of them have blogs. Even those who don't blog will regularly share links to posts they've enjoyed. I also nose at hashtags for the topics that interest me, rather than just the timeline of people I follow. So remember to add hashtags to your posts - they're a great way to aid discovery. 👍🏻

This last bucket is just everything else: where I naturally find my way to a blog while surfing the net. I've discovered some great blogs this way, but it's becoming harder and harder to find indie blogs like this, as discoverability on the web has been overtaken by AI summaries and SEO. 😏 It's still possible though.

There's plenty of interesting people out there, creating great posts for us all to enjoy. The indie web is thriving, and if you're not taking advantage of it, you're missing out! Why not take a look at a couple of the sites I've listed above and see what you discover? It's a tonne of fun. Thanks for reading this post via RSS. RSS is ace, and so are you. ❤️ You can reply to this post by email, or leave a comment.

iDiallo 5 days ago

Your friends are hiding their best ideas from you

Back in college, the final project in our JavaScript class was to build a website. We were a group of four, and we built the best website in class. It was for a restaurant called the Coral Reef. We found pictures online, created a menu, and settled on a solid theme. I was taking a digital art class in parallel, so I used my Photoshop skills to place our logo inside pictures of our fake restaurant. All of a sudden, something clicked.

We were admiring our website on a CRT monitor when my classmate pulled me aside. She had an idea. A business idea. An idea so great that she couldn't share it with the rest of the team. She whispered, covering her mouth with one hand so a lip reader couldn't steal this fantastic idea: "What if we build websites for people?"

This was the 2000s; of course it was a fantastic idea. The perfect time to spin up an online business after a market crash. But what she didn't know was that, while I was in class in the mornings, my afternoons were spent scouring Craigslist and building crappy websites for a hundred to two hundred dollars apiece. I wasn't going to share my measly spoils. If anything, this was the perfect time to build that kind of service. "That's a great idea," I said.

There is something satisfying about having an idea validated. A sort of satisfaction we get from the acknowledgment. We are smart, and our ideas are good. Whenever someone learned that I was a developer, they felt this urge to share their "someday" idea. It's an app, a website, or some technology I couldn't even make sense of. I used to try to dissect these ideas, get to the nitty-gritty details, scrutinize them. But that always ended in hostility. "Yeah, you don't get it. You probably don't have enough experience" was a common response when I didn't give a resounding yes. I don't get those questions anymore, at least not framed in the same way. I have worked for decades in the field, and I even have a few failed start-ups under my belt. I'm ready to hear your ideas.
But that job has been taken, and not by another eager developer with even more experience, or a successful start-up on their résumé. No, not a person. AI took this job. Somewhere behind a chatbot interface, an AI is telling one of your friends that their idea is brilliant. Another AI is telling them to write out the full details in a prompt and it will build the app in a single stroke. That friend probably shared a localhost:3000 link with you, or a Lovable app, last year. That same friend was satisfied with the demo they saw then and has most likely moved on.

In the days when I stood as a judge, validating an idea was rarely what sparked a business. The satisfaction was in the telling. And today, a prompt is rarely a spark either. In fact, the prompt is not enough. My friends share a link to their ChatGPT conversation as proof that their idea is brilliant. I can't deny it; the robot has already spoken. I'm not the authority on good or bad ideas. I've called ideas stupid that went on to make millions of dollars. (A ChatGPT wrapper for SMS, for instance.)

A decade ago, I was in Y Combinator's Startup School. In my batch, there were two co-founders: one was the developer, and the other was the idea guy. In every meeting, the idea guy would come up with a brand new idea that had nothing to do with their start-up. The instructor tried to steer him toward being the salesman, but he wouldn't budge. "My talent is in coming up with ideas," he said.

We love having great ideas. We're just not interested in starting a business, because that's what it actually takes. A friend will joke, "here's an idea," then proceed to tell me their idea. "If you ever build it, send me my share." They are not expecting me to build it. They are happy to have shared a great idea.

As for my classmate, she never spoke of the business again. But over the years, she must have sent me at least a dozen clients. It was a great idea after all.


BlogLog April 10 2026

Subscribe via email or RSS. I added a new page to my blog, linked in the header, showing all the specifications of my homelab and self-hosted services. It will be updated as I continue to update my services or infrastructure. Also fixed misspellings in the Overview of My Homelab post.

David Bushell 5 days ago

No-stack web development

This year I’ve been asked more than ever before what web development “stack” I use. I always respond: none. We shouldn’t have a go-to stack! Let me explain why.

My understanding is that a “stack” is a choice of software used to build a website. That includes language and tooling, libraries and frameworks, and heaven forbid: subscription services. Text editors aren’t always considered part of the stack but integration is a major factor. Web dev stacks often manifest as a scaffolding command used to install hundreds of megs of JavaScript, Blazing Fast™ Rust binaries, and never-ending supply chain attacks. A stack is also technical debt, non-transferable knowledge, accelerated obsolescence, and vendor lock-in. That means fragility and overall unnecessary complication. Popular stacks inevitably turn into cargo cults that build in spite of the web, not for it. Let’s break that down.

If you have a go-to stack, you’ve prescribed a solution before you’ve diagnosed a problem. You’ve automatically opted in to technical baggage that you must carry for the entire project. Project doesn’t fit the stack? Tough; shoehorn it to fit. Stacks are opinionated by design. To facilitate their opinions, they abstract away from web fundamentals. It takes all of five minutes for a tech-savvy person to learn JSON. It takes far, far longer to learn Webpack JSON. The latter becomes useless knowledge once you’ve moved on to better things. Brain space is expensive. Other standards like CSS are never truly mastered, but learning an abstraction like Tailwind will severely limit your understanding.

Stacks are a collection of move-fast-and-break churnware; fleeting software that updates with incompatible changes, or deprecates entirely in favour of yet another Rust refactor. A basic HTML document written 20 years ago remains compatible today. A codebase built upon a stack 20 months ago might refuse to play. The cost of re-stacking is usually unbearable.
Stack-as-a-service is the endgame where websites become hopelessly trapped. Now you’re paying for a service that can’t fix errors. You’ve sacrificed long-term stability and freedom for “developer experience”.

I’m not saying you should code artisanal organic free-range websites. I’m saying be aware of the true costs associated with a stack. Don’t prescribe a solution before you’ve diagnosed a problem. Choose the right tool for each job only once the impact is known. Satisfy specific goals of the website, not temporary development goals. Don’t ask a developer what their stack is without asking what problem they’re solving. Be wary of those who promote or mandate a default stack. Be doubtful of those selling a stack.

When you develop for a stack, you risk trading the stability of the open web platform, that is to say: decades of broad backwards compatibility, for GitHub’s flavour of the month. The web platform does not require build toolchains. Always default to, and regress to, the fundamentals of CSS, HTML, and JavaScript. Those core standards are the web stack. Yes, you’ll probably benefit from more tools. Choose them wisely. Good tools are intuitive by being based on standards, and they can be introduced and replaced with minimal pain.

My only absolute advice: do not continue with legacy frameworks like React. If that triggers an emotional reaction: you need a stack intervention! It may be difficult to accept, but Facebook never was your stack; it’s time to move on. Use the tool, don’t become the tool.

Edit: forgot to say: for personal projects, the gloves are off. Go nuts! Be the churn. Learn new tools and even code your own stack. If you’re the sole maintainer, the freedom to make your own mistakes can be a learning exercise in itself. Thanks for reading! Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.

Jim Nielsen 6 days ago

Fewer Computers, Fewer Problems: Going Local With Builds & Deployments

Me, in 2025, on Mastodon: “I love tools like Netlify and deploying my small personal sites with them. But I’m not gonna lie, 2025 might be the year I go back to just doing builds locally and pushing the deploys from my computer. I’m sick of devops’ing stupid stuff because builds work on my machine and I have to spend that extra bit of time to ensure they also work on remote linux computers. Not sure I need the infrastructure of giant teams working together for making a small personal website.”

It’s 2026 now, but I finally took my first steps towards this. One of the ideas I really love around the “local-first” movement is this notion that everything canonical is done locally, and remote “sync” is an enhancement. For my personal website, I want builds and deployments to work that way. All data, build tooling, deployment, etc., happens first and foremost on my machine. From there, having another server somewhere else do it is purely a “progressive enhancement”. If it were to fail, fine. I can resort back to doing it locally very easily because all the tooling is optimized for local build and deployment first (rather than being dependent on fixing some remote server to get builds and deployments working).

It’s amazing how many of my problems come from the struggle to get one thing to work identically across multiple computers. I want to explore a solution that removes the cause of my problem, rather than trying to stabilize it with more time and code. “The first rule of distributed computing is don’t distribute your computing unless you absolutely have to” — especially if you’re just building personal websites.

So I un-did stuff I previously did (that’s right, my current predicament is self-inflicted — imagine that). My notes site used to work like this:

- Content lives in Dropbox
- Code is on GitHub
- Netlify’s servers pull both, then run a build and deploy the site

It worked, but sporadically. Sometimes it would fail, then start working again, all without me changing anything. And when it did work, it often would take a long time — like five, six minutes to run a build/deployment. I never could figure out the issue. Some combination of Netlify’s servers (which I don’t control and don’t have full visibility into) talking to Dropbox’s servers (which I also don’t control and don’t have full visibility into). I got sick of trying to make a simple (but distributed) build process work across multiple computers when 99% of the time I really only need it to work on one computer. So I turned off builds in Netlify, and made it so my primary, local computer does all the work.

Here are the trade-offs:

- What I lose: I can no longer make edits to notes, then build/deploy the site from my phone or tablet.
- What I gain: I don’t have to troubleshoot build issues on machines I don’t own or control. Now, if it “works on my machine”, it works, period.

The change was pretty simple. First, I turned off builds in Netlify. Now when I push, Netlify does nothing. Next, I changed my build process to stop pulling markdown notes from the Dropbox API and instead pull them from a local folder on my computer. Simple, fast. And lastly, as a measure to protect myself from myself, I cloned the codebase for my notes to a second location on my computer. This way I have a “working copy” version of my site where I do local development, and a clean “production copy” of my site which is where I build/deploy from. This helps ensure I don’t accidentally build and deploy my “working copy”, which I often leave in a weird, half-finished state.

In my project I have a deploy command that I run from my “clean” copy. It pulls down any new changes, makes sure I have the latest deps, builds the site, then lets Netlify’s CLI deploy it. As extra credit, I created a macOS shortcut, so I can type “Deploy notes.jim-nielsen.com” to trigger a build, then watch the little shortcut run to completion in my Mac’s menubar.

I’ve been living with this setup for a few weeks now and it has worked beautifully. The best part is: I’ve never had to open up Netlify’s website to check the status of a build or troubleshoot a deployment. That’s an enhancement I can have later — if I want to.

Reply via: Email · Mastodon · Bluesky
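The deploy command itself isn’t shown in the post. As a sketch, assuming an npm-based build and the Netlify CLI (the script name, build commands, and output directory are guesses), it might look something like:

```sh
# deploy.sh -- run from the clean "production copy" of the site.
set -euo pipefail
git pull                            # pull down any new changes
npm ci                              # make sure deps match the lockfile
npm run build                       # build the site locally
netlify deploy --prod --dir=build   # let Netlify's CLI push the result
```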

matduggan.com 6 days ago

You can absolutely have an RSS dependent website in 2026

I write stuff here. Sometimes the stuff is good. Sometimes it reads like I wrote it at 2 AM after an argument with a YAML file, which is because I did. But one decision I made early on was that I didn't want to offer an email newsletter.

Part of this was simple economics. At one point I did have a Subscribe button up, and enough people clicked it that the cost of actually sending those emails started to resemble a real bill. Sending thousands of emails when you have no ads, no sponsors, and no monetization strategy beyond "I guess people will just... read it?" doesn't make a lot of financial sense.

But the bigger reason — the one I actually care about — is that I didn't want a database full of email addresses sitting under my control if I could possibly avoid it. There's a particular flavor of anxiety that comes with being the custodian of other people's personal data, a low-grade dread not unlike realizing you've been entrusted with someone's elderly cat for two weeks and the cat has a medical condition. I can't lose data I don't have. I never need to lie awake wondering whether some user is reusing their bank password to log into my website just to manage their subscription preferences. The best way I can safeguard user data is by never having any in the first place. It's not a security strategy you'll find in any textbook, but it is airtight.

Now, when I explained this philosophy to people who run similar websites, the reaction was — and I'm being generous here — warm laughter. The kind of laughter you get when you ask if an apartment in Copenhagen is under $1,000,000. Email newsletters are the only way to run a site like this, they said. RSS is dead, they said. You might as well be distributing your writing via carrier pigeon or community bulletin board. One person looked at me the way you'd look at someone who just announced they were going to navigate cross-country using only a paper atlas. Not angry. Just sad.
I'm lucky in that I'm not trying to get anyone to pay me to come here. If I were, the math would probably change. I'd be out there A/B testing subject lines and agonizing over open rates like everyone else, slowly losing pieces of my soul in a spreadsheet. But if your question is simply, "Can I make a hobbyist website that actual humans will find and read without an email newsletter?" — the answer is a resounding yes. And I have the logs to prove it.

All of this is from Nginx access.log. These logs get rotated daily and don't include the majority of requests that hit the Cloudflare cache before they ever reach my server, so the real numbers are higher. But I think they're reasonably representative of the overall shape of things. About half my traffic is readers hitting the RSS feed URLs — people who have, of their own free will, pointed an RSS reader at my site and said yes, tell me when this person has opinions again. The other half are arriving via a specific link they stumbled across somewhere in the wild.

If we do a deeper dive into that specific RSS traffic, we learn a few interesting things. The user-agent breakdown shows the usual suspects — the RSS readers you'd expect, the ones that have been around long enough to have their own Wikipedia articles. There are also some abusers in the metrics. I have no idea what "Daily-AI-Morning" is, but whatever it's doing, it's polling my feed with the frantic energy of someone refreshing a package tracking page on delivery day. The time distribution, though, is pretty good — spread out across the day in a way that suggests real humans checking their feeds at real human intervals, rather than a single bot hammering me every thirty seconds.

My conclusion is this: if you want to run a website that relies primarily on RSS instead of email newsletters, you absolutely can. The list of RSS readers hasn't dramatically changed in a long time, which is actually reassuring — it means the ecosystem is stable, not dead.
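A user-agent breakdown like the one described can be pulled straight from the access log with standard tools. A sketch, assuming Nginx's default combined log format; the feed paths and sample lines are placeholders, not the site's real ones (normally you'd point this at /var/log/nginx/access.log):

```shell
# Count RSS-feed requests per user agent from an Nginx combined-format log.
feed_agents() {
  awk -F'"' '$2 ~ /GET \/(index\.xml|rss)/ {print $6}' "$1" | sort | uniq -c | sort -rn
}

# A few sample lines stand in for the real log here.
cat > sample_access.log <<'EOF'
203.0.113.7 - - [10/Apr/2026:06:12:01 +0000] "GET /index.xml HTTP/1.1" 200 5120 "-" "NetNewsWire (RSS Reader)"
203.0.113.8 - - [10/Apr/2026:06:12:30 +0000] "GET /index.xml HTTP/1.1" 200 5120 "-" "NetNewsWire (RSS Reader)"
198.51.100.2 - - [10/Apr/2026:07:03:11 +0000] "GET /rss/ HTTP/1.1" 200 5120 "-" "Miniflux/2.1"
192.0.2.9 - - [10/Apr/2026:08:44:52 +0000] "GET /posts/hello/ HTTP/1.1" 200 9000 "-" "Mozilla/5.0 (Firefox)"
EOF

feed_agents sample_access.log
```

Splitting on double quotes makes field 2 the request line and field 6 the user agent, so no fragile whitespace parsing is needed.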
The people who use RSS really use RSS. They're not trend-chasers. They're the type who still have a working bookmark toolbar. They are, in the best possible sense, your people. Effectively, if you make your site RSS-friendly and you test it in NetNewsWire, you will — slowly, quietly, without a single "SUBSCRIBE FOR MORE" pop-up — build a real audience of people who actually want to read what you write. No email database required. No passwords to leak. No giant confusing subscription system.

Kev Quirk 1 week ago

Obfuscating My Contact Email

I stumbled across this great post by Spencer Mortensen yesterday, which tested different email obfuscation techniques against real spambots to see which ones actually work. It's a fascinating read, and I'd recommend checking it out if you're into that sort of thing. The short version is that spambots scrape your HTML looking for email addresses. If your address is sitting there in plain text, they'll hoover it up. But if you encode each character as an HTML entity, the browser still renders and uses it correctly, while most bots haven't got a clue what they're looking at. From Spencer's testing, this approach blocks around 95% of harvesters, which is good enough for me. On this site, my contact email shows up in two places: Both pull from the value in Pure Blog's config, so I only needed to make a couple of changes. The reply button lives in , which is obviously a PHP file. So the fix there was straightforward - I ditched the shortcode and used PHP directly to encode the address character by character into HTML entities: Each character becomes something like , which is gibberish to a bot, but perfectly readable to a human using a browser. The shortcode still gets replaced normally by Pure Blog after the PHP runs, so the subject line still works as expected. The contact page is a normal page in Pure Blog, so it's Markdown under the hood. This means I can't drop PHP into it. Instead, I used Pure Blog's hook, which runs after shortcodes have already been processed. By that point, has been replaced with the plain email address, so all I needed to do was swap it for the encoded version: This goes in , and now any page content that passes through Pure Blog's function will have the email automatically encoded. So if I decide to publish my elsewhere, it should automagically work. As well as the obfuscation, I also set up my email address as a proper alias rather than relying on a catch-all to segregate emails.
That way, if spam does somehow get through, I can nuke the alias, create a new one, and update it in Pure Blog's settings page. Is this overkill? Probably. But it was a fun little rabbit hole, and now I can feel smug about it. 🙃 Thanks for reading this post via RSS. RSS is ace, and so are you. ❤️ You can reply to this post by email, or leave a comment. The Reply by email button at the bottom of every post. My contact page.
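The character-by-character entity encoding at the heart of this trick is tiny. Here is a sketch in Python rather than the PHP the post uses — the function name is mine:

```python
def obfuscate_email(address: str) -> str:
    """Encode every character as a decimal HTML entity.
    A browser renders 'hi@example.com' as usual; a naive
    scraper sees only '&#104;&#105;...' soup."""
    return "".join(f"&#{ord(ch)};" for ch in address)
```

The returned string can be dropped straight into page content or a `mailto:` link body, since browsers decode entities before rendering.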

iDiallo 1 week ago

AI Did It in 12 Minutes. It Took Me 10 Hours to Fix It

I've been working on personal projects since the 2000s. One thing I've always been adamant about is understanding the code I write. Even when Stack Overflow came along, I was that annoying guy who told people not to copy and paste code into their repos. Instead, they should read it and adapt it to their specific case. On personal projects, I've applied this to a fault. Projects never get done because I'm reading and editing code to make it work exactly as I want. I am by no means trying to convince you that my code is high quality. Every day, I regret the design choices I made for this very blog. But at the very least, I like to understand the code that powers my projects. So you can imagine how I struggle with the reviewing part when AI writes a large chunk of our daily work. Large language models are just so verbose, and often produce large blocks of code that don't even get used. I don't want to attribute it to malice (wasting your tokens) when I know this is an emergent technology we are all still adapting to. But it doesn't help that there is just so much code to review. What I tell myself when I review an AI-generated PR is: if I don't have a mental model of how the application works, how can I be of any use when it fails? This weekend, I decided to tackle a project I've been postponing since I created this blog over a decade ago. I needed a nice interface to upload assets, such as images, to go with each blog post. According to my git history, I started work on it 11 years ago but never finished. Yet I've still been uploading images regularly to go with blog posts. How? Using and . Like a caveman. Whenever I finish writing a blog post, I create images in Photoshop or Photopea, fire up FileZilla to connect to my file server, and drag and drop my assets. Then, of course, I have to log back into the blog's admin panel and provide the correct URLs for those assets. It's as primitive as it gets. This was a perfect project to tackle with AI. 
The good news is that I was able to complete it in a single day. The bad news? Well, I read through close to 5,000 lines of code. This is the first time I've used an LLM to write PHP code. I used the GLM-5 model from z.ai, which I've often played with for generating graphs from data. I'll have to try it on a few more PHP projects before I commit to this opinion, but it consistently wrote spaghetti code. Spaghetti code, to me, is anything where logic is embedded directly in a template. HTML, CSS, and database queries all crammed into the same file. That's really hard to maintain. I did realize, though, that my instructions were in direct contradiction with the agent's system prompt. It was explicit in the thinking process: This is NOT a Next.js project - the user explicitly wants a PHP app. So I should NOT use the Next.js project that's already initialized. I should create a PHP application. Wait, but the instructions say "You are an interactive CLI tool that helps users with software engineering tasks" and "Now you are developing a comprehensive and feature-rich Next.js project from scratch." But the user is explicitly asking for a PHP app. I should follow the user's request. It did generate a bunch of Node.js code, which I was able to remove manually. Luckily, it kept the PHP project in its own folder. If you're wondering how 12 files contain ~5,000 lines of code, I wondered the same. But that's what spaghetti code does. I set it up locally, ran and , and a few more files and folders were generated. When I finally ran the application, it didn't work. I spent a few hours working through permissions, updating the install script, and modifying the SQLite setup. I thought StackOverflow was dead, but I don't think I would have gotten SQLite working without it. One error, for example, was that SQLite kept throwing a warning that it was running in read-only mode. Apparently, you have to make the parent folder writable (not just the database file) to enable write mode.
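That SQLite gotcha is worth remembering: SQLite creates journal/WAL files alongside the database file, so the parent directory itself must be writable, not just the .db file. A small Python sketch of a guard for it (the helper name is mine; the post's project is PHP):

```python
import os
import sqlite3

def open_db(path: str) -> sqlite3.Connection:
    """Open an SQLite database, failing loudly if the parent
    directory is read-only. SQLite writes journal/WAL files next
    to the .db file, so the directory needs write permission or
    every write will fail with a read-only error."""
    parent = os.path.dirname(os.path.abspath(path)) or "."
    if not os.access(parent, os.W_OK):
        raise PermissionError(
            f"{parent} is not writable; SQLite would be stuck read-only"
        )
    return sqlite3.connect(path)
```

Checking up front turns a cryptic "attempt to write a readonly database" into an obvious permissions message.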
It had been a long time since I'd manually d files in PHP. I normally use namespaces and autoload. Since this project was generated from scratch, I had to hunt down various statements that all had incorrect paths. Once I sorted those out, I had to deal with authentication. PHP sessions come with batteries included, you call and you can read and write session variables via the global. But I couldn't figure out why it kept failing. When I created a standalone test file, sessions worked fine. But when loaded through the application, values weren't being saved. I spent a good while debugging before I found that was missing from the login success flow. When I logged in, the page redirected to the dashboard, but every subsequent action that required authentication immediately kicked me out. Even after fixing all those issues and getting uploads working, something still bothered me: how do I maintain this code? How do I add new pages to manage uploaded assets? Do I add meatballs directly to the spaghetti? Or do I just trust the AI agent to know where to put new features? Technically it could do that, but I'd have to rely entirely on the AI without ever understanding how things work. So I did the only sane thing: I rewrote a large part of the code and restructured the project. Maybe I should have started there, but I didn't know what I wanted until I saw it. Which is probably why I had been dragging this project along for 11 years. Yes, now I have 22 files, almost double the original count. But the code is also much simpler at just 1,254 lines. There's far less cognitive load when it comes to fixing bugs. There's still a lot to improve, but it's a much leaner foundation. The question I keep coming back to is: would it have been easier to do this manually? Well, the timeline speaks for itself. I had been neglecting this project for years. Without AI, I probably never would have finished it. That said, it would have been easier to build on my existing framework. 
My blog's framework has been tested for years and has accumulated a lot of useful features: a template engine, a working router, an auth system, and more. All things I had to re-engineer from scratch here. If I'd taken the time to work within my own framework, it probably would have taken less time overall. But AI gave me the illusion that the work could be done much faster. Z.ai generated the whole thing in just 12 minutes. It took an additional 10 hours to clean it up and get it working the way I wanted. This reminds me of several non-technical friends who built/vibe-coded apps last year. The initial results looked impressive. Most of them don't have a working app anymore, because they realized that the cleanup is just as important as the generation if you want something that actually holds together. I can only imagine what "vibe-debugging" looks like. I'm glad I have a working app, but I'm not sure I can honestly call this vibe-coded. Most, if not all, of the files have been rewritten. When companies claim that a significant percentage of their code is AI-generated, do their developers agree? For me, it's unthinkable to deploy code I haven't vetted and understood. But I'm not the benchmark. In the meantime, I think I've earned the right to say this the next time I ship an AI-assisted app: "I apologize for so many lines of code - I didn't have time to write a shorter app."

Jim Nielsen 1 week ago

I Tried Vibing an RSS Reader and My Dreams Did Not Come True

Simon Willison wrote about how he vibe coded his dream presentation app for macOS. I also took a stab at vibe coding my dream app: an RSS reader. To clarify: Reeder is my dream RSS app and it already exists, so I guess you could say my dreams have already come true? But I’ve kind of always wanted to try an app where my RSS feed is just a list of unread articles and clicking any one opens it in the format in which it was published (e.g. the original website). So I took a stab at it. (Note: the backend portion of this was already solved, as I simply connected to my Feedbin account via the API.) First I tried a macOS app because I never would’ve tried a macOS app before. Xcode, Swift, a Developer Account? All completely outside my wheelhouse. But AI helped me get past that hurdle of going from nothing to something. It was fun to browse articles and see them in situ. A lot of folks have really great personal websites so it’s fun to see their published articles in that format. This was pretty much pure vibes. I didn’t really look at the code at all because I knew I wouldn’t understand any of it. I got it working the first night I sat down and tried it. It was pretty crappy but it worked. From there I iterated. I’d use it for a day, fix things that were off, keep using it, etc. Eventually I got to the point where I thought: I’m picky about software, so the bar for my dreams is high. But I’m also lazy, so my patience is quite low. The intersection of: the LLM failing over and over + my inability to troubleshoot any of it + not wanting to learn = a bad combination for persevering through debugging. Which made me say: “Screw it, I’ll build it as a website!” But websites don’t really work for this kind of app because of CORS. I can’t just stick an article’s URL in an and preview it because certain sites have cross-site headers that don’t allow it to display under another domain. But that didn’t stop me. I tried building the idea anyway as just a list view.
I could install this as a web app on my Mac and I'd get a simple list view: Anytime I clicked on a link, it would open in my default browser. Actually not a bad experience. It worked pretty decently on my phone too. Once I visited my preview deploy, I could "install" it to my home screen and then when I opened it, I'd have my latest unread articles. Clicking on any of them would open a webview that I could easily dismiss and get back to my list. Not too bad. But not what I wanted, especially on desktop. It seemed like the only option to 1) get exactly what I wanted, and 2) distribute it — all in a way that I could understand in case something went wrong or I had to overcome an obstacle — was to make a native app. At this point, I was thinking: “I’m too tired to learn Apple development right now, and I’ve worked for a long time on the web, so I may as well leverage the skills that I got.” So I vibed an Electron app because Electron will let me get around the cross-site request issues of a website. This was my very first Electron app and, again, the LLM helped me go from nothing to something quite quickly (but this time I could understand my something way better). The idea was the same: unread articles on the left, a preview of any selected articles on the right. Here’s a screenshot: It’s fine. Not really what I want. But it’s a starting point. Is it better than Reeder? Hell no. Is it my wildest dreams realized? Also no. But it’s a prototype of an idea I’ve wanted to explore. I’m not sure I’ll go any further on it. It’s hacky enough that I can grasp a vision for what it could be. The question is: do I actually want this? Is this experience something I want in the long run? I think it could be. But I have to figure out exactly how I want to build it as a complementary experience to my preferred way of going through my RSS feed. Which won't be your preference. Which is why I'm not sharing it. So what’s my takeaway from all this? I don’t know.
That’s why I’m typing this all out in a blog post. Vibe coding is kinda cool. It lets you go from “blank slate” to “something” way faster and easier than before. But you have to be mindful of what you make easy. You know what else is easy? Fast food. But I don’t want that all the time. In fact, vibe coding kinda left me with that feeling I get after indulging in social media, like “What just happened? Two hours have passed and what did I even spend my time doing? Just mindlessly chasing novelty?” It’s fun and easy to mindlessly chase your whims. But part of me thinks the next best step for this is to sit and think about what I actually want, rather than just yeeting the next prompt out. I’ve quipped before that our new timelines are something like: The making from nothing isn't as hard anymore. But everything after that still is. Understanding it. Making it good. Distributing it. Supporting it. Maintaining it. All that stuff. When you know absolutely nothing about those — like I did with macOS development — things are still hard. After all this time vibing, instead of feeling closer to my dream, I actually kinda feel further from it. Like the LLM helped close the gap in understanding what it would actually take for me to realize my dreams. Which made me really appreciate the folks who have poured a lot of time and thought and effort into building RSS readers I use on a day-to-day basis. Thank you makers of Feedbin & Reeder & others through the years. I’ll gladly pay you $$$ for your thought and care. In the meantime, I may or may not be over here slowly iterating on my own supplemental RSS experience. In fact, I might’ve just found the name: RxSSuplement. Ok, I could use this on my personal computer.
I don’t know that I’ll be able to iterate on this much more because it’s getting more complicated and failing more and more with each ask (I was just trying to move some stupid buttons around in the UI and the AI was like, “Nah bro, I can’t.”) I have no idea how I’d share this with someone. I don’t think I’d be comfortable sharing this with someone (even though I think I did things like security right by putting credentials in the built-in keychain, etc.) I guess this is where the road stops. Nothing -> Something? 1hr. Something -> Something Good? 1 year.
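An aside on the frame-blocking mentioned above: whether a page will load inside someone else's `<iframe>` comes down to two response headers, `X-Frame-Options` and a `frame-ancestors` directive in `Content-Security-Policy`. A rough Python sketch of the check (the function name is mine):

```python
def framing_allowed(headers: dict) -> bool:
    """True if a page with these response headers could load inside
    an <iframe> on another origin. Sites that don't want to be
    embedded send X-Frame-Options or a CSP frame-ancestors directive."""
    normalized = {k.lower(): v for k, v in headers.items()}
    if "x-frame-options" in normalized:
        return False
    csp = normalized.get("content-security-policy", "")
    return "frame-ancestors" not in csp
```

This is also why an Electron shell works: with the browser's cross-origin protections relaxed in your own app, those headers no longer veto the embed.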

fLaMEd fury 1 week ago

Link Dump: March 2026

What’s going on, Internet? Trying something different. All the pages I bookmarked this month, no life updates in between. Want more? Check out all my bookmarks at /bookmarks/ and subscribe to the bookmarks feed. Hey, thanks for reading this post in your feed reader! Want to chat? Reply by email or add me on XMPP, or send a webmention. Check out the posts archive on the website.

Scroll trīgintā ūnus by Shellsharks - Sharing the latest edition of scrolls, posting online without overthinking.
I Am Happier Writing Code by Hand by Abhinav Omprakash - Letting AI write his code kills the satisfaction that made programming worth doing.
You Are The Driver (The AI Is Just Typing) by Keith - AI coding tools are only useful once you already know what you’re doing. They automate typing, not thinking.
Oceania Web Atlas by Zachary Kai - Collects personal websites from across Oceania into one tidy, human-scaled directory.
Building the Good Web by Brennan - Building for users instead of against them is what separates the good web from everything else.
Unpolished human websites by Joel - Keep your website messy and human.
What is Digital Garage - Digital Tinker’s website is a workshop built for joy, not productivity. Creation without pressure.
How to feel at home on the Internet by Jatan Mehta - Is having your own domain really the only way to truly “own” your online space?
Endgame for the Open Web by Anil Dash - Is 2026 the last year we have a chance to put a stop on the dismantling of the open web?

Jason Scheirer 1 week ago

Golang Webview Installer for Wails 3

Top Matter: Codeberg for the library, doc for the library. I’ve forked Lea Anthony’s library that eventually made its way into core Wails for two reasons:

I want it in Wails 3 and it’s not there.
I want to shave a meg off the binary size by not providing the embedded installer exe.

So here we are.

Susam Pal 1 week ago

Wander Console 0.4.0

Wander Console 0.4.0 is the fourth release of Wander, a small, decentralised, self-hosted web console that lets visitors to your website explore interesting websites and pages recommended by a community of independent website owners. To try it, go to susam.net/wander/. This release brings a few small additions as well as a few minor fixes. You can find the previous release pages here: /code/news/wander/. The sections below discuss the current release. Wander Console now supports wildcard patterns in ignore lists. An asterisk (*) anywhere in an ignore pattern matches zero or more characters in URLs. For example, an ignore pattern like can be used to ignore URLs such as this: These ignore patterns are specified in a console's wander.js file. These are very important for providing a good wandering experience to visitors. The owner of a console decides what links they want to ignore in their ignore patterns. The ignore list typically contains commercial websites that do not fit the spirit of the small web, as well as defunct or incompatible websites that do not load in the console. A console with a well-maintained ignore list ensures that a visitor to that console has a lower likelihood of encountering commercial or broken websites. For a complete description of the ignore patterns, see Customise Ignore List. By popular demand, Wander now adds a 'via' query parameter while loading a recommended web page in the console. The value of this parameter is the console that loaded the recommended page. For example, if you encounter midnight.pub/ while using the console at susam.net/wander/, the console loads the page using the following URL: This allows the owner of the recommended website to see, via their access logs, that the visit originated from a Wander Console. While this is the default behaviour now, it can be customised in two ways.
The value can be changed from the full URL of the Wander Console to a small identifier that identifies the version of Wander Console used (e.g. ). The query parameter can be disabled as well. For more details, see Customise 'via' Parameter. In earlier versions of the console, when a visitor came to your console to explore the Wander network, it picked the first recommendation from the list of recommended pages in it (i.e. your file). But subsequent recommendations came from your neighbours' consoles and then their neighbours' consoles and so on recursively. Your console (the starting console) was not considered again unless some other console in the network linked back to your console. A common way to ensure that your console was also considered in subsequent recommendations was to add a link to your console in your own console (i.e. in your ). Yes, this created self-loops in the network but this wasn't considered a problem. In fact, this was considered desirable, so that when the console picked a console from the pool of discovered consoles to find the next recommendation, it considered itself to be part of the pool. This workaround is no longer necessary. Since version 0.4.0 of Wander, each console will always consider itself to be part of the pool from which it picks consoles. This means that the web pages recommended by the starting console have a fair chance of being picked for the next web page recommendation. The Wander Console loads the recommended web pages in an element that has sandbox restrictions enabled. The sandbox properties restrict the side effects the loaded web page can have on the parent Wander Console window. For example, with the sandbox restrictions enabled, a loaded web page cannot redirect the parent window to another website. In fact, these days most modern browsers block this and show a warning anyway, but we also block this at the sandbox level in the console implementation.
It turned out that our aggressive sandbox restrictions also blocked legitimate websites from opening a link in a new tab. We decided that opening a link in a new tab is harmless behaviour and we have relaxed the sandbox restrictions a little bit to allow it. Of course, when you click such a link within Wander Console, the link will open in a new tab of your web browser (not within Wander Console, as the console does not have any notion of tabs). Although I developed this project on a whim, one early morning while taking a short break from my ongoing studies of algebraic graph theory, the subsequent warm reception on Hacker News and Lobsters has led to a growing community of Wander Console owners. There are two places where the community hangs out at the moment: If you own a personal website but you have not set up a Wander Console yet, I suggest that you consider setting one up for yourself. You can see what it looks like by visiting mine at /wander/. To set up your own, follow these instructions: Install . It just involves copying two files to your web server. It is about as simple as it gets. New consoles are announced in this thread on Codeberg: Share Your Wander Console. We also have an Internet Relay Chat (IRC) channel named #wander on the Libera IRC network. This is a channel for people who enjoy building personal websites and want to talk to each other. You are welcome to join this channel, share your console URL, link to your website or recent articles as well as share links to other non-commercial personal websites.
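The wildcard ignore matching described earlier is easy to emulate: split the pattern on asterisks, escape the literal parts, and let each asterisk match anything. A Python sketch of the rule (my illustration, not Wander's actual implementation):

```python
import re

def matches_ignore(pattern: str, url: str) -> bool:
    """An asterisk in an ignore pattern matches zero or more
    characters; everything else must match literally."""
    regex = ".*".join(re.escape(part) for part in pattern.split("*"))
    return re.fullmatch(regex, url) is not None
```

For example, a pattern like `*/tracking/*` would ignore any URL with a `/tracking/` path segment, while leaving everything else alone.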

David Bushell 1 week ago

CSS subgrid is super good

I’m all aboard the CSS subgrid train. Now I’m seeing subgrid everywhere. Seriously, what was I doing before subgrid? I feel like I was bashing rocks together. Consider the following HTML: The content could be simple headings and paragraphs. It could also be complex HTML patterns from a Content Management System (CMS) like the WordPress block editor, or ACF flexible content (a personal favourite). Typically when working with CMS output, the main content will be restricted to a maximum width for readable line lengths. We could use a CSS grid to achieve such a layout. Below is a visual example using the Chromium dev tools to highlight grid lines. This example uses five columns with no gap resulting in six grid lines. The two outermost columns are meaning they can expand to fill space or collapse to zero-width. The two inner columns are which act as a margin. The centre column is the smallest of two values; either , or the full viewport width (minus the margins). Counting grid lines correctly requires embarrassing finger math and pointing at the screen. Thankfully we can name the lines. I set a default column of for all child elements. Of course, we could have done this the old-fashioned way. Something like: But grid has so much more potential to unlock! What if a fancy CMS wraps a paragraph in a block with the class . This block is expected to magically extend a background to the full-width of the viewport like the example below. This used to be a nightmare to code but with CSS subgrid it’s a piece of cake. We break out of the column by changing the to — that’s the name I chose for the outermost grid lines. We then inherit the parent grid using the template. Finally, the nested children are moved back to the column. The selector keeps specificity low. This allows a single class to override the default column. CSS subgrid isn’t restricted to one level. We could keep nesting blocks inside each other and they would all break containment.
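To make the shape of this concrete, here is a hypothetical sketch of the kind of grid being described — the class names, line names and sizes are my own guesses, not the article's actual code:

```css
/* Hypothetical sketch: five columns, margins inside the named lines.
   All names and sizes here are illustrative guesses. */
.layout {
  display: grid;
  grid-template-columns:
    [full-start] minmax(0, 1fr)
    2rem
    [content-start] min(65ch, 100vw - 4rem) [content-end]
    2rem
    minmax(0, 1fr) [full-end];
}

/* :where() keeps specificity at zero, so a single class can override. */
:where(.layout > *) {
  grid-column: content;
}

/* Break out to the viewport edge, then realign children via subgrid. */
.layout > .alignfull {
  grid-column: full;
  display: grid;
  grid-template-columns: subgrid;
}

.layout > .alignfull > :where(*) {
  grid-column: content;
}
```

Because `grid-template-columns: subgrid` inherits the parent's named lines, the nested children can use the same `content` name to snap back into the readable column.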
If we want to create a “boxed” style we can simply change the to instead of . This is why I put the margins inside. In hindsight my grid line names are probably confusing, but I don’t have time to edit the examples so go paint your own bikeshed :) On smaller viewports below the outermost columns collapse to zero-width and the “boxed” style looks exactly like the style. This approach is not restricted to one centred column. See my CodePen example and the screenshot below. I split the main content in half to achieve a two-column block where the text edge still aligns, but the image covers the available space. CSS subgrid is perfect for WordPress and other CMS content that is spat out as a giant blob of HTML. We basically have to centre the content wrapper for top-level prose to look presentable. With the technique I’ve shown we can break out more complex block patterns and then use subgrid to align their contents back inside. It only takes a single class to start! Here’s the CodePen link again if you missed it. Look how clean that HTML is! Subgrid helps us avoid repetitive nested wrappers. Not to mention any negative margin shenanigans. Powerful stuff, right? Browser support? Yes. Good enough that I’ve not had any complaints. Your mileage may vary, I am not a lawyer. Don’t subgrid and drive. Thanks for reading! Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.

Maurycy 2 weeks ago

GopherTree

While gopher is usually seen as a proto-web, it's really closer to FTP. It has no markup format, no links and no URLs. Files are arranged hierarchically, and can be in any format. This rigid structure allows clients to get creative with how it's displayed ... which is why I'm extremely disappointed that everyone renders gopher menus like shitty websites: You see all that text mixed into the menu? Those are informational selectors: a non-standard feature that's often used to recreate hypertext. I know this "limited web" aesthetic appeals to certain circles, but it removes the things that make the protocol interesting. It would be nice to display gopher menus like what they are, a directory tree: This makes it easy to browse collections of files, and helps avoid the Wikipedia problem: Absentmindedly clicking links until you realize it's 3 AM and you have a thousand tabs open... and that you never finished what you wanted to read in the first place. I've made the decision to hide informational selectors by default. These have two main uses: creating faux hypertext and adding ASCII art banners. ASCII art banners are simply annoying: Having one in each menu looks cute in a web browser, but having 50 copies cluttering up the directory tree is... not great. Hypertext doesn't work well. In the strict sense, looking ugly is better than not working at all — but almost everyone who does this also hosts on the web, so it's not a huge loss. The client also has a built-in text viewer, with pagination and proper word-wrap. It supports both UTF-8 and Latin-1 text encodings, but this has to be selected manually: gopher has no mechanism to indicate encoding. (but most text looks the same in both) Bookmarks work by writing items to a locally stored gopher menu, which also serves as a "homepage" of sorts. Because it's just a file, I didn't bother implementing any advanced editing features: any text editor works fine for that.
The bookmark code is UNIX/Linux specific, but porting should be possible. All this fits within a thousand lines of C code, the same as my ultra-minimal web browser. While arguably a browser, it was practically unusable: lacking basic features like a back button or pagination. The gopher version of the same size is complete enough to replace Lynx as my preferred client. Usage instructions can be found at the top of the source file.

/projects/gopher/gophertree.c : Source and instructions
/projects/tinyweb/ : 1000 line web browser
https://datatracker.ietf.org/doc/html/rfc1436 : Gopher RFC
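Since each gopher menu line starts with a one-character item type, hiding informational selectors — as GopherTree does by default — amounts to a simple filter. A Python sketch of the idea (GopherTree itself is C):

```python
def visible_selectors(menu_text: str) -> list:
    """Drop informational ('i') lines from a gopher menu, keeping
    only real selectors -- roughly what hiding them by default means.
    A lone '.' terminates a menu in the protocol, so skip it too."""
    kept = []
    for line in menu_text.splitlines():
        if not line or line == "." or line.startswith("i"):
            continue
        kept.append(line)
    return kept
```

Everything that survives the filter is a real file or directory entry, which is what a directory-tree view wants to show.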

Maurycy 2 weeks ago

My ramblings are available over gopher

It has recently come to my attention that people need a thousand lines of C code to read my website. This is unacceptable. For simpler clients, my server supports gopher: The response is just a text file: it has no markup, no links and no embedded content. For navigation, gopher uses specially formatted directory-style menus: The first character on a line indicates the type of the linked resource: The type is followed by a tab-separated list containing a display name, file path, hostname and port. Lines beginning with an "i" are purely informational and do not link to anything. (This is non-standard, but widely used) Storing metadata in links is weird to modern sensibilities, but it keeps the protocol simple. Menus are the only thing that the client has to understand: there are no URLs, no headers, no mime types — the only thing sent to the server is the selector (file path), and the only thing received is the file. ... as a bonus, this one-liner can download files: That's quite clunky, but there are lots of programs that support it. If you have Lynx installed, you should be able to just point it at this URL: ... although you will want to put in because it's not 1991 anymore [Citation Needed] I could use informational lines to replicate the web's navigation by making everything a menu — but that would be against the spirit of the thing: gopher is a document retrieval protocol, not a hypertext format. Instead, I converted all my blog posts to plain text and set up some directory-style navigation. I've actually been moving away from using inline links anyways because they have two opposing design goals: While reading, links must be normal text. When you're done, links must be distinct clickable elements. I've never been able to find a good compromise: Links are always either distracting to the reader, annoying to find/click, or both. Also, to preempt all the emails: ... what about Gemini? (The protocol, not the autocomplete from google.)
Gemini is the popular option for non-web publishing... but honestly, it feels like someone took HTTP and slapped markdown on top of it. This is a Gemini request... ... and this is an HTTP request: For both protocols, the server responds with metadata followed by hypertext. It's true that HTTP is more verbose, but 16 extra bytes doesn't create a noticeable difference. Unlike gopher, which has a unique navigation model and is of historical interest, Gemini is just the web but with limited features... so what's the point? I can already write websites that don't have ads or autoplaying videos, and you can already use browsers that don't support features you don't like. After stripping away all the fluff (CSS, JS, etc) the web is quite simple: a functional browser can be put together in a weekend. ... and unlike gemini, doing so won't throw out 35 years of compatibility: Someone with Chrome can read a barebones website, and someone with Lynx can read normal sites. Gemini is a technical solution to an emotional problem. Most people have a bad taste for HTTP due to the experience of visiting a commercial website. Gemini is the obvious choice for someone looking for "the web but without VC types". It doesn't make any sense when I'm looking for an interesting (and humorously outdated) protocol.

/projects/tinyweb/ : A browser in 1000 lines of C ...
/about.html#links : ... and thoughts on links for navigation.
https://www.rfc-editor.org/rfc/rfc1436.html : Gopher RFC
https://lynx.invisible-island.net/ : Feature-complete text-based web browser
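The menu line format described earlier — a type character, then display name, selector, host and port separated by tabs — parses in a few lines. A Python sketch, with a made-up example host:

```python
def parse_menu_line(line: str) -> dict:
    """Split one gopher menu line: a one-character item type, then
    display name, selector (file path), hostname and port separated
    by tabs. Type 'i' lines are informational and link to nothing."""
    itemtype, rest = line[0], line[1:]
    display, selector, host, port = rest.split("\t")[:4]
    return {
        "type": itemtype,
        "display": display,
        "selector": selector,
        "host": host,
        "port": int(port),
    }
```

Combined with one TCP connection per request (send the selector, read until EOF), this is essentially the whole client protocol.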
