Posts in Open-source (20 found)

Under the hood of Canada Spends with Brendan Samek

I talked to Brendan Samek about Canada Spends, a project from Build Canada that makes Canadian government financial data accessible and explorable using a combination of Datasette, a neat custom frontend, Ruby ingestion scripts, sqlite-utils and pieces of LLM-powered PDF extraction. Here's the video on YouTube.

Build Canada is a volunteer-driven non-profit that launched in February 2025 - here's some background information on the organization, which has a strong pro-entrepreneurship and pro-technology angle. Canada Spends is their project to make Canadian government financial data more accessible and explorable. It includes a tax sources and sinks visualizer and a searchable database of government contracts, plus a collection of tools covering financial data from different levels of government.

The project maintains a Datasette instance at api.canadasbilding.com containing the data they have gathered and processed from multiple data sources - currently more than 2 million rows, plus a combined search index across a denormalized copy of that data.

The highest quality government financial data comes from the audited financial statements that every Canadian government department is required to publish. As is so often the case with government data, these are usually published as PDFs. Brendan has been using Gemini to help extract data from those PDFs. Since this is accounting data, the numbers can be summed and cross-checked to help validate that the LLM didn't make any obvious mistakes.

Sections within that video:
02:57 Data sources and the PDF problem
05:51 Crowdsourcing financial data across Canada
07:27 Datasette demo: Search and facets
12:33 Behind the scenes: Ingestion code
17:24 Data quality horror stories
20:46 Using Gemini to extract PDF data
25:24 Why SQLite is perfect for data distribution

datasette.io, the official website for Datasette
sqlite-utils.datasette.io
Canada Spends
BuildCanada/CanadaSpends on GitHub
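Brendan's sum-and-cross-check validation step can be sketched in a few lines of Python. This is a hypothetical illustration of the idea, not code from the Canada Spends repository; the function and field names are invented.

```python
# Sketch of the cross-checking idea: LLM-extracted accounting line items
# should sum to the statement's reported totals. Names and structure are
# hypothetical, not taken from the Canada Spends codebase.

def validate_extraction(line_items, reported_total, tolerance=0.01):
    """Return True if the extracted line items sum to the reported total."""
    extracted_sum = sum(item["amount"] for item in line_items)
    return abs(extracted_sum - reported_total) <= tolerance

# Example: a department's expense breakdown extracted from a PDF.
items = [
    {"label": "Salaries and benefits", "amount": 1_250_000.00},
    {"label": "Transfer payments", "amount": 3_400_000.00},
    {"label": "Operating expenses", "amount": 610_000.00},
]

print(validate_extraction(items, 5_260_000.00))  # True: sums agree
print(validate_extraction(items, 5_300_000.00))  # False: likely a misread digit
```

Because audited statements publish both the line items and their totals, a mismatch like the second call is a cheap, automatic signal that the extraction needs human review.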

Manuel Moreale 4 days ago

On open protocols

It’s Saturday morning, and I’m sitting here at my desk, working on client projects and sipping my coffee. While taking a break, I was clicking around the web, as one does, and found a post titled “Is Pixelfed sawing off the branch that the Fediverse is sitting on?” by Ploum (also featured on P&B). I find this topic quite interesting, so I’m gonna take a moment to share my thoughts. I don’t have skin in the game, I’m not on any of these social media platforms, and I frankly don’t even care about the outcome of this situation. I’m just an external observer in this context.

Quick summary of the situation: Pixelfed is a decentralised Instagram alternative built on the ActivityPub protocol and focused on images and videos. In order to do that, the app is designed to silently drop content that doesn’t contain images or videos, so text-only content is not displayed on people’s timelines. In his post, Ploum argues that this is wrong because, in doing that, Pixelfed is not behaving in a way that is in line with the rest of the Fediverse and can undermine the whole ActivityPub endeavor. I can’t stress enough that this is just a quick summary, and you should read the original post. There’s also a discussion happening on Mastodon, if you want to see what others are saying.

I can see where Ploum is coming from, his concerns are definitely valid, and he’s motivated by good intentions. At the same time, though, I find his position a bit perplexing. Isn’t the point of an open protocol, like ActivityPub, to provide a structure that can be used by others to build whatever they want? If someone wants to build a service, on top of AP, that only displays content of a certain type, they should be able to do so. Granted, they should make it very clear to the people who sign up for it that some filtering is happening, but if those same people are cool with that, then I don’t see the issue. If tomorrow I wake up and I want to make an AP-based service that only serves audio content and is designed to encourage people to send voice messages to each other, I should be able to do so, without being required to also implement everything else that’s available in the protocol. In his post, Ploum uses the idea of a TextFed service “that will never display posts with pictures”.

If you ask me, that would be a totally reasonable project, especially if you want to build something that is not very resource-intensive, since you’re only dealing with text, and you don’t want to mess with media content. Why shouldn’t you be able to build such a thing on top of AP? Why should you be forced to accept videos and images coming from the rest of the Fediverse if that’s not what you want?

Also, it’s hard for me to square this whole line of argument with the concept of moderation. If you can’t trust a user to figure out by themselves that by signing up to something like Pixelfed they only get a subset of the content available on the fediverse, then I don’t see how you can trust them to understand that, depending on which server they join, some other servers might be blocked. Does that mean the Fediverse should not have moderation?

A protocol is either open or it is not. And if it’s open, we should accept that some people might use it in ways we do not agree with. And that’s ok. But again, I'm not a fediverse user, so maybe my intuition here is entirely wrong. So feel free to reach out to let me know why I'm wrong.

Thank you for keeping RSS alive. You're awesome. Email me :: Sign my guestbook :: Support for 1$/month :: See my generous supporters :: Subscribe to People and Blogs
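To make the filtering question concrete, here is a toy Python sketch of how a service might decide what to display on top of ActivityPub. The object shape loosely follows the ActivityStreams vocabulary (`attachment`, `mediaType`); the service rules and function names are hypothetical illustrations, not Pixelfed's actual code.

```python
# Toy illustration of per-service filtering on top of ActivityPub objects.
# The display rules below are hypothetical sketches of the behavior
# discussed in the post, not real Pixelfed (or "TextFed") logic.

def has_media(ap_object):
    """True if an ActivityPub object carries image or video attachments."""
    attachments = ap_object.get("attachment", [])
    return any(a.get("mediaType", "").startswith(("image/", "video/"))
               for a in attachments)

def pixelfed_displays(ap_object):
    # Image-first service: silently drop text-only posts.
    return has_media(ap_object)

def textfed_displays(ap_object):
    # Ploum's hypothetical text-only service: drop posts with pictures.
    return not has_media(ap_object)

text_note = {"type": "Note", "content": "Hello, fediverse!"}
photo_note = {"type": "Note", "content": "Sunset",
              "attachment": [{"mediaType": "image/jpeg", "url": "..."}]}

print(pixelfed_displays(text_note))   # False: dropped by the image service
print(textfed_displays(text_note))    # True: shown by the text-only service
```

The same federated object flows to both services; each applies its own display rule. That is the crux of the argument: the protocol carries everything, and each service chooses what to surface.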


Discovering the indieweb with calm tech

When social media first entered my life, it came with a promise of connection. Facebook connected college-aged adults in a way that was previously impossible, helping to shape our digital generation. Social media was our superpower, and we wielded it to great effect.

Yet social media today is a noisy, needy, mental health hazard. These platforms push distracting notifications, constantly beg us to “like and subscribe”, and try to trap us in endless scrolling. They have become sirens that lure us onto their ad-infested shores with their saccharine promise of dopamine. How can we defeat these monsters that have invaded deep into our world, while still staying connected?

A couple of weeks ago I stumbled onto a great browser extension, StreetPass for Mastodon. The creator, tvler, built it to help people find each other on Mastodon. StreetPass autodiscovers Mastodon verification links as you browse the web, building a collection of Mastodon accounts from the blogs and personal websites you’ve encountered.

StreetPass is a beautiful example of calm technology. When StreetPass finds Mastodon profiles it doesn’t draw your attention with a notification; it quietly adds the profile to a list, knowing you’ll check in when you’re ready. StreetPass recognizes that there’s no need for an immediate call to action. Instead it allows the user to focus on their browsing, enriching their experience in the background. The user engages with StreetPass when they are ready, and on their own terms. StreetPass is open source and available for Firefox, Chrome, and Safari.

Inspired by StreetPass, I applied this technique to RSS feed discovery. Blog Quest is a web browser extension that helps you discover and subscribe to blogs. Blog Quest checks each page for auto-discoverable RSS and Atom feeds (using links) and quietly collects them in the background. When you’re ready to explore the collected feeds, open the extension’s drop-down window.
The extension integrates with several feed readers, making subscription management nearly effortless. Blog Quest is available for both Firefox and Chrome. The project is open source and I encourage you to build your own variants.

I reject the dead Internet theory: I see a vibrant Internet full of humans sharing their experiences and seeking connection. Degradation of the engagement-driven web is well underway, accelerated by AI slop. But the independent web works on a different incentive structure and is resistant to this effect. Humans inherently create, connect, and share: we always have and we always will. If you choose software that works in your interest you’ll find that it’s possible to make meaningful online connections without mental hazard. Check out StreetPass and Blog Quest to discover a decentralized, independent Internet that puts you in control.

Image: Edward Armitage, The Siren (1888)
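The feed autodiscovery that Blog Quest relies on is a simple, standardized mechanism: pages advertise their feeds with `<link rel="alternate">` elements in the document head. A minimal stdlib-only Python sketch of the scan (not Blog Quest's actual implementation, which lives in a browser extension) might look like this:

```python
# Minimal sketch of RSS/Atom feed autodiscovery: scan a page for
# <link rel="alternate"> elements with a feed MIME type. Real extensions
# also resolve relative URLs and tolerate malformed markup.
from html.parser import HTMLParser

FEED_TYPES = {"application/rss+xml", "application/atom+xml"}

class FeedFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.feeds = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if (tag == "link" and "alternate" in a.get("rel", "")
                and a.get("type") in FEED_TYPES and a.get("href")):
            self.feeds.append(a["href"])

finder = FeedFinder()
finder.feed("""
<html><head>
  <link rel="alternate" type="application/atom+xml" href="/atom.xml">
  <link rel="stylesheet" href="/style.css">
</head><body>...</body></html>
""")
print(finder.feeds)  # ['/atom.xml']
```

Because discovery only needs the page's own markup, it can run quietly in the background on every page load, which is exactly what makes the calm, notification-free approach possible.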

Manuel Moreale 5 days ago

Come on John

For all I know, John O'Nolan is a cool dude. He’s the founder of Ghost, a project that is also really cool. You know what’s also cool? RSS. And guess what, John just announced he’s working on a new RSS app (Reader? Tool? Service?) called Alcove and he blogged about it. All this is nice. All this is cool. The more people build tools and services for the open web, the better. Having said all that though, John:

If you want to follow along with this questionable side project of undefined scope, I'm sharing live updates of progress on Twitter, here.

You are on your own blog, your own corner of the web, powered by the platform you’re the CEO of, a blog that also serves content via RSS, the thing you’re building a tool for, and you’re telling people to follow the progress on fucking Twitter? Come on John.

Carlos Becker 5 days ago

OpenSource Fridays Brasil

I was in a live stream with Pachi Parra , talking a bit about my background, and about GoReleaser.

Martin Fowler 6 days ago

Fragments Dec 4

Rob Bowley summarizes a study from Carnegie Mellon looking at the impact of AI on a bunch of open-source software projects. Like any such study, we shouldn’t take its results as definitive, but there seems to be enough there to make it a handy data point. The key point is that the AI code probably reduced the quality of the code base - at least if static code analysis can be trusted to determine quality. And there are perhaps some worrying second-order effects:

This study shows more than 800 popular GitHub projects with code quality degrading after adopting AI tools. It’s hard not to see a form of context collapse playing out in real time. If the public code that future models learn from is becoming more complex and less maintainable, there’s a real risk that newer models will reinforce and amplify those trends, producing even worse code over time.

❄                ❄                ❄                ❄                ❄

Rob’s post is typical of much of the thoughtful writing on AI. We can see its short-term benefits, but worry about its long-term impact. But on a much deeper note is this lovely story from Jim Highsmith. Jim has turned 0x50, and has spent the last decade fighting Parkinson’s disease. To help him battle it he has two AI-assisted allies:

Between my neural implants and Byron’s digital guidance, I now collaborate with two adaptive systems: one for motion, one for thought. Neither replaces me. Both extend me.

If you read anything on AI this week, make it be this. It offers a positive harbinger for our future and opens my mind to a whole different perspective on the role of AI in it.

❄                ❄                ❄                ❄                ❄

Anthropic recently announced that it disrupted a Chinese state-sponsored operation abusing Claude Code. Jim Gumbley looks at the core lesson to learn from this: that we have to understand the serious risk of AI jailbreaking.

New AI tools are able to analyze your attack surface at the next level of granularity. As a business leader, that means you now have two options: wait for someone else to run AI-assisted vulnerability detection against your attack surface, or run it yourself first.

❄                ❄                ❄                ❄                ❄

There are plenty of claims that AI Vibe Coding can replace software developers, something that folks like me (perhaps with a bias) think unlikely. Gergely Orosz shared this tidbit:

Talked with an exec at a tech company who is obsessed with AI and has been for 3 years. Not a developer but company makes software. Uses AI for everything, vibe codes ideas. Here’s the kicker: Has a team of several devs to implement his vibe coded prototypes to sg workable

I’d love to hear more about this (and similar stories).

❄                ❄                ❄                ❄                ❄

Nick Radcliffe writes about a month of using AI:

I spent a solid month “pair programming” with Claude Code, trying to suspend disbelief and adopt a this-will-be-productive mindset. More specifically, I got Claude to write well over 99% of the code produced during the month. I found the experience infuriating, unpleasant, and stressful before even worrying about its energy impact. Ideally, I would prefer not to do it again for at least a year or two. The only problem with that is that it “worked”.

He stresses that his approach is the “polar opposite” of Vibe Coding. The post is long, and rambles a bit, but is worthwhile because he talks in detail about his workflow and how he uses the tool. Such posts are important so we can learn the nitty-gritty of how our programming habits are changing.

❄                ❄                ❄                ❄                ❄

Along similar lines is a post by Brian Chambers on his workflow, which he calls Issue-Driven Development (and yes, I’m also sick of the “something-driven” phraseology).
As with much of the better stuff I’ve heard about AI assisted work, it’s all about carefully managing the context window, ensuring the AI is focused on the right things and not distracted by textual squirrels.

Jason Fried 1 week ago

Introducing Fizzy, our newest product

Have you noticed that every issue and idea tracking tool you loved slowly morphed into boring, sluggish, corporate bloatware? Trello put on 40 pounds of cruft. Jira started charging by the migraine. Asana tried to become everything to everyone. GitHub Issues slipped into a steady state of decline. The whole category is a 20-car pileup of complexity. Time to route around that mess.

Today we’re introducing Fizzy. Kanban as it should be, not as it has been. Fizzy is a fresh take on cards and columns, with a few twists, human-nature-inspired defaults, and a vibrant interface that’s the opposite of the bland and boring software the industry has been flinging at you for years.

Kanban has been around since the 1940s, and Trello brought it into the mainstream in 2011. Since then, some version of column-based kanban-style organization has found its way into any collaboration tool worth its salt. But most have over-salted the dish. What was simple is now complicated. What was clear is now cluttered. What just worked now takes work. Fizzy presses reset, reconsiders what really matters, and presents a refreshing way to kanban that just feels right. It’s friendly, colorful, straightforward, and fast as hell.

We still use Basecamp for our big, intensive projects, but lately we’ve been reaching for Fizzy to run the smaller ones. It’s perfect for tracking bugs, issues, and ideas, and it shines for lighter, self-contained workflows like podcasts or video production. We didn’t expect it, but Fizzy’s so good it might even cannibalize Basecamp on the lighter side of project management. We’d be thrilled.

How much is it? It’s not much for so much. Everyone gets 1,000 cards for free. Beyond that, we’ll host your account for just $20/month for unlimited cards and unlimited users. One price for all and everything. No tiers, no “contact us.” No pricing chart at all — just a price tag, like on a pair of jeans.

And here’s a surprise... Fizzy is open source!
If you’d prefer not to pay us, or you want to customize Fizzy for your own use, you can run it yourself for free forever. Have a great idea? Submit a PR to contribute to the code base and improve the product for everyone. It’s the best of all worlds. No excuses. Every idea comes back around. It’s time for take two on kanban. Fizzy’s our hat in the ring. Let’s make this platform insanely great, together. Come on in! Visit fizzy.do to check it out and sign up for free! -Jason

DHH 1 week ago

Fizzy is our fun, modern take on Kanban (and we made it open source!)

Kanban is a simple, practical approach to visually managing processes and backlogs by moving work cards from one progress column to another. Toyota came up with it to track their production lines back in the middle of the 20th century, but it's since been applied to all sorts of industries with great effect. And Fizzy is our new fun, modern take on it in digital form. We're certainly not the first to take a swing at this, not even for software development. Since the early 2000s, there's been a movement to use the Kanban concept to track bugs, issues, and ideas in our industry. And countless attempts to digitize the concept over the years.  But as with so much other software, good ideas can grow cumbersome and unwieldy surprisingly quickly. Fizzy is a fresh reset of an old idea. We need more of that.  Very little software is ever the final word on solving interesting problems. Even products that start out with great promise and simplicity tend to accumulate cruft and complexity over time. A healthy ecosystem needs a recurring cycle of renewal. We've taken this mission to heart not just with Fizzy's fun, colorful, and modern implementation of the Kanban concept, but also in its distribution.  Fizzy is available as a service we run where you get 1,000 cards for free, and then it's $20/month for unlimited usage. But we're also giving you access to the entire code base, and invite enterprising individuals and companies to run their own instance totally free of charge. This is done under the O'Saasy License, which is basically the do-whatever-you-want-just-don't-sue MIT License, but with a carve-out that reserves the commercialization rights to run Fizzy as SaaS for us as the creators. That means it's not technically Open Source™, but the source sure is open, and you can find it on our public GitHub repository. That open source is what we run too. 
So new features or bug fixes accepted on GitHub will make it into both our Fizzy SaaS offering and what anyone can run on their own hardware. We've already had a handful of contributions go live like this! Ultimately, it's our plan to let data flow freely between the SaaS and the local installations. You'll be able to start an account on your own instance, and then, if you'd rather we just run it for you, take that data with you into the managed setup. Or the other way around! In an age where SaaS companies come and go, or pivot one way or the other, I think it's a great reassurance that the source code is freely available, and that any work put into a SaaS account is portable to your own installation later.

I'm also just a huge fan of being able to View Source. Traditionally, that's been reserved for the front end (and even that has been disappearing due to the scourge of minification, transpiling, and bundling), but I'm usually even more interested in seeing how things are built on the backend. Fizzy allows you full introspection into that, including the entire history of how the product was built, pull request by pull request. It's a great way to learn how modern Rails applications are put together!

So please give Fizzy a spin, whether you're working on software with a need to track those bugs and feature requests, or you're in an entirely different business and need a place for your particular issues and ideas. Fizzy is a fresh, fun way to manage it all, Kanban style. Enjoy!

Rob Zolkos 1 week ago

The Making of Fizzy, Told by Git

Today Fizzy was released and the entire source code of its development history is open for anyone to see . DHH announced on X that the full git history is available - a rare opportunity to peek behind the curtain of how a 37signals product comes together. I cloned down the repository and prompted Claude Code: “Can you go through the entire git history and write a documentary about the development of this application. What date the first commit was. Any major tweaks, changes and decisions and experiments. You can take multiple passes and use sub-agents to build up a picture. Make sure to cite commits for any interesting things. If there is anything dramatic then make sure to see if you can figure out decision making. Summarize at the end but the story should go into STORY.md” It responded with: “This is a fascinating task! Let me create a comprehensive investigation plan and use multiple agents to build up a complete picture of this project’s history.” Here is the story of Fizzy - as interpreted by Claude - from the trail of git commits. Enjoy! A chronicle of 18 months of development at Basecamp, told through 8,152 commits. At 1:19 PM on a summer Friday, Kevin McConnell typed the words that would begin an 18-month journey: Within hours, the foundation was laid. The team moved with practiced efficiency: By end of day, the skeleton of a Rails application stood ready. But what would it become? One month after inception, Jason Zimdars introduced the application’s first real identity: A “Splat” — the name evokes something chaotic, impactful, unexpected. Like a bug hitting your windshield on a summer drive. The original data model was simple: The next day brought the visual metaphor that would define the early application: The windshield was the canvas. Splats appeared on it like bugs on glass — colorful, slightly chaotic, each one a piece of information demanding attention. The commits reveal urgency. Something important was coming: The all-hands demo. 
Approximately one month after project inception, Fizzy (then still called “Splat”) was shown to the entire company. The pressure to polish was evident in the commit messages. Seven days after the windshield metaphor was established, Jason Zimdars typed four words that would reshape the application’s identity: The chaotic “splat” gave way to something gentler — bubbles floating on a windshield , like soap suds catching light. The animation changed from aggressive splattering to gentle floating: Perfect circles gave way to hand-drawn blob shapes. The team was discovering what their product was through the act of building it. A new interaction pattern emerged: When users “boosted” a bubble, it would puff up and float away — like champagne fizz rising. The animation: The metaphor was crystallizing. Bubbles. Fizzing. Effervescence. The name would come soon. In a single day, the application found its final name through two commits: 42 files changed. The model, controllers, views, tests — everything touched. Hours later: Fizzy. The name captured everything: the bubbles, the effervescence, the playful energy of the interface. Visual design had driven product naming — the team discovered what they were building through the act of building it. The flat list of bubbles needed structure: But “Projects” didn’t feel right. Eight days later: Then “Bucket” became “Collection.” Eventually, “Collection” would become “Board.” The terminology dance — Projects → Buckets → Collections → Boards — reveals a team searching for the right mental model. They ultimately landed on the familiar “Board” metaphor, aligning with tools like Trello and Linear. David Heinemeier Hansson, creator of Ruby on Rails and co-founder of Basecamp, made his first contribution with characteristic pragmatism: He deleted an unused image file. It was a statement of intent. 
Within two days, DHH’s fingerprints were everywhere: He upgraded the entire application to Rails 8 release candidate and systematically added HTTP caching throughout. DHH’s most distinctive contribution was his crusade against what he called “anemic” code — thin wrappers that explain nothing and add needless indirection. He used this term 15 times in commit messages: Philosophy: Code should either add explanatory value OR hide implementation complexity. Thin wrappers that do neither are “anemic” and should be eliminated. Then came April 2025. DHH made 323 commits in a single month — 55% of his total contributions compressed into 30 days. This was a surgical strike. He: His commit messages tell the story: In DHH’s philosophy: deletion is a feature, not a bug. After 10 months as “Bubbles,” another transformation: 333 files changed. “Pop” (completing a bubble) became “Closure” (closing a card). The playful metaphor gave way to task management vocabulary. The final architectural piece: Fizzy had become a kanban board . Cards lived in columns. Columns could be customized, colored, reordered. The application had evolved from “bugs on a windshield” to a sophisticated project management tool. Collections became Boards. The transformation was complete: Original (July 2024): Final (November 2025): A Claude-powered AI assistant that could answer questions about project content. Born, restricted to staff, then removed entirely. Perhaps replaced by the more ambitious MCP (Model Context Protocol) integration — making Fizzy AI-native at the protocol level rather than bolting on a chatbot. Emoji reactions for cards and comments. Added. Removed. Then added again. The git history shows healthy debate — not everything that ships stays shipped, and not everything removed stays gone. Saved custom views were replaced by ephemeral quick filters. Complexity gave way to simplicity. Predefined workflows with stages were removed in favor of ad-hoc column organization. 
Users would create their own structure. The MCP (Model Context Protocol) branch represents cutting-edge AI integration — allowing Claude and other AI assistants to interact with Fizzy programmatically. A manifest advertises Fizzy’s capabilities to AI clients. Status: removed from main, but the infrastructure remains fascinating. This is one of the earliest explorations of making traditional web applications AI-native.

Multiple parallel branches explored different approaches to mobile column navigation. Scroll snapping. Contained scrolling. Swipeable columns. The problem remains unsolved — there’s no “one true way” for mobile kanban navigation.

Making Fizzy work with SQLite in addition to MySQL meant simpler local development and better portability. The search index was even sharded into 16 tables ( through ) for scale. The proprietary SaaS features were extracted into a separate gem. What remained was a clean, open-source Rails application. After 18 months of development, 8,152 commits, and countless pivots, Fizzy became open source.

Jason Zimdars (2,217 commits) — The visual architect. From “Let’s try bubbles” to pixel-perfect polish.
Jorge Manrubia (2,053 commits) — The engineering backbone. Consistent, prolific, essential.
Andy Smith (1,007 commits) — Front-end craftsmanship and UI refinement.
Mike Dalessio (875 commits) — Infrastructure, performance, the recent dashboard work.
David Heinemeier Hansson (586 commits) — The architectural enforcer. Rails modernization and the war on anemic code.
Kevin McConnell (351 commits) — Started it all with “New Rails app.”
Jose Farias (341 commits) — Feature development and testing.
Stanko K.R. (239 + 54 commits) — Security hardening and webhook restrictions.
Jeffrey Hardy (100 commits) — Early infrastructure and modernization.
Jason Fried (7 commits) — The occasional “Small copy adjustment” from the CEO.

July 2024 (v0.1):
September 2024 (v0.2):
November 2025 (v1.0):

The story of Fizzy is the story of discovery through building.
The team didn’t know they were building a kanban board when they started with “splats on a windshield.” They found out through iteration. Key lessons:

Names matter, but they can change. Splat → Bubble → Card. Project → Bucket → Collection → Board. The right name emerges through use.
Deletion is a feature. Boosts, Fizzy Ask, custom views, workflows — removing the wrong features is as important as adding the right ones.
Architecture evolves. The final column-based kanban system looks nothing like the original flat list of splats.
DHH’s philosophy: Remove anemic code. Keep transactions short. Use the latest Rails. Delete more than you add.
Design drives naming. “Fizzy” emerged from the visual metaphor of bubbles puffing up and floating away — the design informed the brand.
Open source takes extraction. 18 months of SaaS development needed careful separation before the core could be shared.

The git history of Fizzy is a masterclass in iterative product development. 8,152 commits. 25+ contributors. 18 months. One application that discovered its identity through the act of creation.
“Let’s try bubbles.” — Jason Zimdars, July 31, 2024

Documentary compiled December 2, 2025, based on analysis of the Fizzy git repository.

First Commit: June 21, 2024
Total Commits: 8,152
Contributors: 25+
Lines of Code Changed: Hundreds of thousands
Name Changes: 4 (Splat → Bubble → Card; Project → Bucket → Collection → Board)
Features Removed: At least 4 major ones
DHH Commits in April 2025 Alone: 323

1:23 PM — Gemfile updated ( )
3:47 PM — Rubocop configured ( )
4:07 PM — Minimal authentication flow ( )
4:29 PM — CSS reset and base styles ( )
4:46 PM — Brakeman security scanning added ( )

Removed the entire Boosts feature ( ) — 299 lines across 27 files, gone
Eliminated activity scoring ( , , )
Extracted RESTful controllers from overloaded ones ( , )
Enforced transaction discipline ( — “No long transactions!”)

Splats on a Windshield
Cards → Columns → Boards → Accounts
July 24, 2024: “Handful of tweaks before all-hands” — Demo day pressure
July 31, 2024: “Let’s try bubbles” — The visual pivot
September 4, 2024: “Splat -> Fizzy” — Finding the name
April 2025: DHH’s 323-commit refactoring blitz
October 2025: “Remove Fizzy Ask” — The AI feature that didn’t survive
November 28, 2025: “Initial README and LICENSE” — Going public

Rails 8.x — Always on the latest, sometimes ahead of stable
Hotwire (Turbo + Stimulus) — No heavy JavaScript framework
Solid Queue & Solid Cache — Rails-native background jobs and caching
SQLite + MySQL support — Database flexibility
Kamal deployment — Modern container orchestration
UUID primary keys — Using UUIDv7 for time-ordering
Multi-tenancy — Account-based data isolation
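For what it's worth, per-author commit counts like the ones in this story can be reproduced without an LLM: `git log --format=%an` prints one author name per commit, and tallying those lines gives a shortlog-style table. Here is a small Python sketch of that tally, using stand-in data rather than real `git log` output:

```python
# Tally commits per author from `git log --format=%an` output,
# reproducing the kind of contributor table shown in the post.
# The sample string below is stand-in data, not the Fizzy repo's history.
from collections import Counter

def commit_counts(git_log_output):
    """Return (author, commit_count) pairs, most prolific first."""
    authors = [line for line in git_log_output.splitlines() if line.strip()]
    return Counter(authors).most_common()

sample = "\n".join(
    ["Jason Zimdars"] * 3 + ["Jorge Manrubia"] * 2 + ["DHH"]
)
print(commit_counts(sample))
# [('Jason Zimdars', 3), ('Jorge Manrubia', 2), ('DHH', 1)]
```

In a real checkout you would feed this function the output of `git log --format=%an --no-merges`; `git shortlog -sn` produces essentially the same table directly.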

Michael Lynch 1 week ago

My First Impressions of MeshCore Off-Grid Messaging

When my wife saw me playing with my new encrypted radio, she asked what it was for. “Imagine,” I said, “if I could type a message on my phone and send it to you, and the message would appear on your phone. Instantly!” She wasn’t impressed. “It also works if phone lines are down due to a power outage… or societal collapse.” Still nothing. “If we’re not within radio range of each other, we can route our messages through a mesh network of our neighbors’ radios. But don’t worry! The radios encrypt our messages end-to-end, so nobody else can read what we’re saying.” By this point, she’d left the room. My wife has many wonderful qualities, but, if I’m being honest, “enthusiasm for encrypted off-grid messaging” has never been one of them. The technology I was pitching to my wife was, of course, MeshCore. If you’d like to skip to the end, check out the summary. MeshCore is software that runs on inexpensive long-range (LoRa) radios. LoRa radios transmit up to several miles, depending on how clear the path is. Unlike ham radio, you don’t need a license to broadcast over LoRa frequencies in the US, so anyone can pick up a LoRa radio and start chatting. MeshCore is more than just sending messages over radio. The “mesh” in the name is because MeshCore users form a mesh network. If Alice wants to send a message to her friend Charlie, but Charlie’s out of range of her radio, she can route her message through Bob, another MeshCore user within her range, and Bob will forward the message to Charlie. I’m not exactly a doomsday prepper, but I plan for realistic disaster scenarios like extended power outages, food shortages, and droughts. When I heard about MeshCore, I thought it would be neat to give some devices to friends nearby so we could communicate in an emergency.
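My mental model of that Alice-to-Charlie relaying looks roughly like the toy simulation below. To be clear, this is a generic sketch of flood routing with a hop limit and duplicate suppression, based on my own reading of how these meshes work; it is not MeshCore's actual protocol or packet format.

```python
class Node:
    """Toy mesh node. A sketch of generic flood routing, NOT MeshCore's protocol."""

    def __init__(self, name):
        self.name = name
        self.neighbors = []  # nodes within simulated radio range
        self.inbox = []      # messages addressed to this node
        self.seen = set()    # message ids already handled (dedup)

    def link(self, other):
        # Put two nodes in radio range of each other.
        self.neighbors.append(other)
        other.neighbors.append(self)

    def receive(self, msg_id, dest, text, ttl):
        if msg_id in self.seen:
            return  # already saw this message; don't re-flood it
        self.seen.add(msg_id)
        if self.name == dest:
            self.inbox.append(text)
            return
        if ttl > 0:  # forward to everyone in range, consuming one hop
            for n in self.neighbors:
                n.receive(msg_id, dest, text, ttl - 1)

alice, bob, charlie = Node("Alice"), Node("Bob"), Node("Charlie")
alice.link(bob)    # Alice can reach Bob...
bob.link(charlie)  # ...but only Bob can reach Charlie.

# Injecting a message at Alice simulates her sending it.
alice.receive("m1", "Charlie", "hi from Alice", ttl=2)  # relayed via Bob
alice.receive("m2", "Charlie", "zero hop", ttl=0)       # no relaying allowed
print(charlie.inbox)  # → ['hi from Alice']
```

With ttl=0 the message never leaves Alice's radio, which is roughly my loose reading of the "zero hop" option that comes up later; the hop budget is what keeps a flood from echoing around the mesh forever.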
And if it turned out that we’re out of radio range of each other, maybe I could convince a few neighbors to get involved as well. We could form a messaging network that’s robust against power failures and phone outages. MeshCore is a newer implementation of an idea that was popularized by a technology called Meshtastic. I first heard about Meshtastic from Tyler Cipriani’s 2022 blog post. I thought the idea sounded neat, but Tyler’s conclusion was that Meshtastic was too buggy and difficult for mainstream adoption at the time. I have no particular allegiance to MeshCore or Meshtastic, as I’d never tried either. Some people I follow on Mastodon have been excited about MeshCore, so I thought I’d check it out. Most MeshCore-compatible devices are also compatible with Meshtastic, so I can easily experiment with one and later try the other. I have only a limited understanding of the differences between Meshtastic and MeshCore, but what I gather is that MeshCore’s key differentiator is preserving bandwidth. Apparently, Meshtastic hits scaling issues when many users are located close to each other. The Meshtastic protocol is chattier than MeshCore, and I’ve seen complaints that Meshtastic chatter floods the airwaves and interferes with message delivery. MeshCore attempts to solve that problem by minimizing network chatter. I should say at this point that I’m not a radio guy. It seems like many people in the LoRa community are radio enthusiasts who have experience with ham radio or other types of radio broadcasting. I’m a tech-savvy software developer, but I know nothing about radio communication. If I have an incorrect mental model of radio transmission, that’s why. The MeshCore firmware runs on a couple dozen devices, but the official website recommends three devices in particular. The cheapest is the Heltec v3: at $27 each, it was the cheapest MeshCore-compatible device I could find, so I bought two.
I connected the Heltec v3 to my computer via the USB-C port and used the MeshCore web flasher to flash the latest firmware. I selected “Heltec v3” as my device, “Companion Bluetooth” as the mode, and “v1.9.0” as the version. I clicked “Erase device” since this was a fresh install. Then, I used the MeshCore web app to pair the Heltec with my phone over Bluetooth. Okay, I’ve paired my phone with my MeshCore device, but… now what? The app doesn’t help me out much in terms of onboarding. I try clicking “Map” to see if there are any other MeshCore users nearby. Okay, that’s a map of New Zealand. I live in the US, so that’s a bit surprising. Even if I explore the map, I don’t see any MeshCore activity anywhere, so I don’t know what the map is supposed to do. The map of New Zealand reminded me that different countries use different radio frequencies for LoRa, and if the app defaults to New Zealand’s location, it’s probably defaulting to New Zealand broadcast frequencies as well. I went to settings and saw fields for “Radio Settings,” and I clicked them expecting a dropdown, but it expects me to enter a number. And then I noticed a subtle “Choose Preset” button, which listed presets for different countries that were “suggested by the community.” I had no idea what any of them meant, but who am I to argue with the community? I chose “USA/Canada (Recommended).” I also noticed that the settings let me change my device name, so that seemed useful: It seemed like there were no other MeshCore users within range of me, which I expected. That’s why I bought the second Heltec. I repeated the process with an old phone and my second Heltec v3, but they couldn’t see each other. I eventually realized that I’d forgotten to configure my second device for the US frequency. This is another reason I wish the MeshCore app took initial onboarding more seriously. Okay, they finally see each other! They can both publish messages to the public channel. 
My devices could finally talk to each other over a public channel. If I communicate with friends over MeshCore, I don’t want to broadcast our whole conversation over the public channel, so it was time to test out direct messaging. I expected some way to view a contact in the public channel and send them a direct message, but I couldn’t. Clicking their name did nothing. There’s a “Participants” view, but the only option is to block, not send a direct message. This seems like an odd design choice. If a MeshCore user posts to the public channel, why can’t I talk to them? I eventually figured out that I have to “Advert.” There are three options: “Zero Hop,” “Flood Routed,” and “To Clipboard.” I don’t know what any of these mean, but I figure “flood” sounds kind of rude, whereas “Zero Hop” sounds elegant, so I do a “Zero Hop.” Great! Device 2 now sees device 1. Let’s say hi to Device 1 from Device 2. Whoops, what’s wrong? Maybe I need to “Advert” from Device 2 as well? Okay, I do, and voila! Messages now work. This is a frustrating user experience. If I have to advert from both ends, why did MeshCore let me send a message on a half-completed handshake? I’m assuming “Advert” is me announcing my device’s public key, but I don’t understand why that’s an explicit step I have to do ahead of time. Why can’t MeshCore do that implicitly when I post to a public channel or attempt to send someone a direct message? Anyway, I can talk to myself in both public channels and DMs. Onward! The Heltec v3 boards were a good way to experiment with MeshCore, but they’re impractical for real-world scenarios. They require their own power source, and a phone to pair. I wanted to power it from my phone with a USB-C to USB-C cable, but the Heltec board wouldn’t power up from my phone. In a real emergency, that’s too many points of failure. The MeshCore website recommends two other MeshCore-compatible devices, so I ordered those: the Seeed SenseCAP T-1000e ($40) and the Lilygo T-Deck+ ($100). 
I bought the Seeed SenseCAP T-1000e (left) and the Lilygo T-Deck+ (right) to continue experimenting with MeshCore. The T-1000e was a clear improvement over the Heltec v3. It’s self-contained and has its own battery and antenna, which feels simpler and more robust. It’s also nice and light. You could toss it into a backpack and not notice it’s there. The T-1000e feels like a more user-friendly product compared to the bare circuit board of the Heltec v3. Annoyingly, the T-1000e uses a custom USB cable, so I can’t charge it or flash it from my computer with one of my standard USB cables: The Seeed T-1000e uses a custom USB cable for charging and flashing. I used the web flasher for the Heltec, but I decided to try flashing the T-1000e directly from source: I use Nix, and the repo conveniently has a , so the dependencies installed automatically with . I then flashed the firmware for the T-1000e like this: From there, I paired the T-1000e with my phone, and it was basically the same as using the Heltec. The only difference was that the T-1000e has no screen, so it defaults to the Bluetooth pairing password of . Does that mean anyone within Bluetooth range can trivially take over my T-1000e and read all my messages? It also seems impossible to turn off the T-1000e, which is undesirable for a broadcasting device. The manufacturer advises users to just leave it unplugged for several days until the battery runs out. Update: MeshCore contributor Frieder Schrempf just fixed this in commit 07e7e2d, which is included in the v1.11.0 MeshCore firmware. You can now power off the device by holding down the button at the top of the T-1000e. Now it was time to test the Lilygo T-Deck. This was the part of MeshCore I’d been most excited about since the very beginning. If I handed my non-techy friends a device like the T-1000e, there were too many things that could go wrong in an actual emergency. “Oh, you don’t have the MeshCore app?
Oh, you’re having trouble pairing it with your phone? Oh, your phone battery is dead?” The T-Deck looked like a 2000s-era Blackberry. It seemed dead-simple to use because it was an all-in-one device: no phone pairing step or app to download. I wanted to buy a bunch and hand them out to my friends. If society collapsed and our city fell into chaos, we’d still be able to chat on our doomsday hacker Blackberries like it was 2005. As soon as I turned on my T-Deck, my berry was burst. This was not a Blackberry at all. As a reminder, this is what a Blackberry looked like in 2003: A Blackberry smartphone in 2003 Before I even get to the T-Deck software experience, the hardware itself is so big and clunky. We can’t match the quality of a hardware product that was produced 22 years ago? Right off the bat, the T-Deck was a pain to use. You navigate the UI by clicking a flimsy little thumbwheel in the center of the device, but it’s temperamental and ignores half of my scrolls. Good news: there’s a touchscreen. But the touchscreen misses half my taps: There are three ways to “click” a UI element. You can click the trackball, push the “Enter” key, or tap the screen. Which one does a particular UI element expect? You just have to try all three to find out! I had a hard time even finding instructions for how to reflash the T-Deck+. I found this long Jeff Geerling video where he expresses frustration with how long it took him to find reflashing instructions… and then he never explains how he did it! This is what worked for me: Confusingly, there’s no indication that the device is in DFU mode. I guess the fact that the screen doesn’t load is sort of an indication. On my system, I also see logs indicating a connection. Once I figured out how to navigate the T-Deck, I tried messaging, and the experience remained baffling. For example, guess what screen I’m on here: What does this screen do?
If you guessed “chat on Public channel,” you’re a better guesser than I am, because the screen looks like nothing to me. Even when it displays chat messages, it only vaguely looks like a chat interface: Oh, it’s a chat UI. I encountered lots of other instances of confusing UX, but it’s too tedious to recount them all here. The tragic upshot for me is that this is not a device I’d rely on in an emergency. There are so many gotchas and dead-ends in the UX that would trip people up and prevent them from communicating with me. Even though the T-Deck broke my heart, I still hoped to use MeshCore with a different device. I needed to see how these devices worked in the real world rather than a few inches away from each other on my desk. First, I took my T-1000e to a friend’s house about a mile away and tried messaging the Heltec back in my home office. The transmission failed, as it seemed the two devices couldn’t see each other at all from that distance. Okay, fair enough. I’m in a suburban neighborhood, and there are lots of houses, trees, and cars between my house and my friend’s place. The next time I was riding in a car away from my house, I took along my T-1000e and tried messaging the Heltec v3 in my office. One block away: messages succeeded. Three blocks away: still working. Five blocks away: failure. And then I was never able to reach my home device until returning home later that day. Maybe the issue was the Heltec? It was the device I kept leaving at home, and I’ve read that the Heltec v3 has a particularly weak antenna. I tried again by leaving my T-1000e at home and taking the T-Deck out with me. I could successfully message my T-1000e from about five blocks away, but everything beyond that failed. The other part of the MeshCore ecosystem I haven’t mentioned yet is repeaters. The SenseCAP Solar P1-Pro, a solar-powered MeshCore repeater. MeshCore repeaters are like WiFi extenders. They receive MeshCore messages and re-broadcast them to extend their reach.
Repeaters are what create the “mesh” in MeshCore. The repeaters send messages to other repeaters and carry your MeshCore messages over longer distances. There are some technologically cool repeaters available. They’re solar powered with an internal battery, so they run independently and can survive a few days without sun. The problem was that I didn’t know how much difference a repeater makes. A repeater with a strong antenna would broadcast messages well, but does that solve my problem? If my T-Deck can’t send messages to my T-1000e from six blocks away, how is it going to reach the repeater? By this point, my enthusiasm for MeshCore had waned, and I didn’t want to spend another $100 and mount a broadcasting device to my house when I didn’t know how much it would improve my experience. MeshCore’s firmware is open-source , so I took a look to see if there was anything I could do to improve the user experience on the T-Deck. The first surprise with the source code was that there were no automated tests. I wrote simple unit tests , but nobody from the MeshCore team has responded to my proposal, and it’s been about two months. From casually browsing, the codebase feels messy but not outrageously so. It’s written in C++, and most of the classes have a large surface area with 20+ non-private functions and fields, but that’s what I see in a lot of embedded software projects. Another code smell was that my unit test calls the function, which encodes raw bytes to a hex string . MeshCore’s implementation depends on headers for two crypto libraries , even though the function has nothing to do with cryptography. It’s the kind of needless coupling MeshCore would avoid if they wrote unit tests for each component. My other petty gripe was that the code doesn’t have consistent style conventions. 
Someone proposed using the file that’s already in the repo, but a maintainer closed the issue with the guidance, “Just make sure your own IDE isn’t making unnecessary changes when you do a commit.” Why? Why in 2025 do I have to think about where to place my curly braces to match the local style? Just set up a formatter so I don’t have to think about mundane style issues anymore. I originally started digging into the MeshCore source to understand the T-Deck UI, but I couldn’t find any code for it. I couldn’t find the source to the MeshCore Android or web apps either. And then I realized: it’s all closed-source. All of the official MeshCore client implementations are closed-source and proprietary. Reading the MeshCore FAQ confirmed that critical components are closed-source. What!?! They’d advertised this as open-source! How could they trick me? And then I went back to the MeshCore website and realized they never say “open-source” anywhere. I must have dreamed the part where they advertised MeshCore as open-source. It just seems like such an open-source thing that I assumed it was. But I was severely disappointed to discover that critical parts of MeshCore are proprietary. Without open-source clients, MeshCore doesn’t work for me. I’m not an open-source zealot, and I think it’s fine for software to be proprietary, but the whole point of off-grid communication is decentralization and technology freedom, so I can’t get on board with a closed-source solution. Some parts of the MeshCore ecosystem are indeed open-source and liberally licensed, but critically the T-Deck firmware, the web app, and the mobile apps are all closed-source and proprietary. The firmware I flashed to my Heltec v3 and T-1000e is open-source, but the web and mobile apps (clients) I used to operate the radios are closed-source and proprietary. As far as I can see, there are no open-source MeshCore clients aside from the development CLI.
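To illustrate the needless-coupling gripe from a moment ago: a byte-to-hex helper has no business depending on crypto headers, and testing it needs nothing but the function itself. Here's a Python stand-in for the idea (the real code is C++, so this is the principle, not MeshCore's implementation):

```python
def bytes_to_hex(data: bytes) -> str:
    """Encode raw bytes as a lowercase hex string. No crypto dependencies needed."""
    return "".join(f"{b:02x}" for b in data)

# The kind of tiny, dependency-free unit test the firmware currently lacks:
assert bytes_to_hex(b"") == ""
assert bytes_to_hex(b"\x00\xff\x10") == "00ff10"
```

Writing even trivial tests like these forces each component to stand alone, which is exactly what surfaces accidental coupling to unrelated headers.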
I still love the idea of MeshCore, but it doesn’t yet feel practical for communicating in an emergency. The software is too difficult to use, and I’ve been unable to send messages farther than five blocks (about 0.3 miles). I’m open to revisiting MeshCore, but I’m waiting on open-source clients and improvements in usability.

The T-Deck reflash steps that worked for me:
Disconnect the T-Deck from USB-C.
Power off the T-Deck.
Connect the T-Deck to your computer via the USB-C port.
Hold down the thumbwheel in the center.
Power on the device.

What I liked: It is incredibly cool to send text messages without relying on a big company’s infrastructure. The concept delights the part of my brain that enjoys disaster prep. MeshCore runs on a wide variety of low-cost devices, many of which also work for Meshtastic. There’s an active, enthusiastic community around it.

What held me back: All of the official MeshCore clients are closed-source and proprietary. The user experience is too brittle for me to rely on in an emergency, especially if I’m trying to communicate with MeshCore beginners. Most of the hardware assumes you’ll pair it with your mobile phone over Bluetooth, which introduces many more points of failure and complexity. The only official standalone device is the T-Deck+, but I found it confusing and frustrating to use. There’s no written getting-started guide. There’s a FAQ, but it’s a hodgepodge of details without much organization. There’s a good unofficial intro video, but I prefer text documentation.

xenodium 1 week ago

At one with your code

While in the mood to goof around with Emacs, CLI, and image rendering, I've revisited an idea to generate some sort of art from your codebase (or any text, really). That is, given an image, generate a textual representation, potentially using source code as input. With that, here's one: a utility to transform images into character art using text from your codebase. Rather than tell you more about it, best to see it in action. Just a bit of fun. That's all there is to it. While I've only run it on macOS, it's written in Go, so it should be fairly portable. I'd love to know if you get it running on Linux. The code's on GitHub. If you're on macOS, I've added a Homebrew tap on GitHub, so you should just be able to install with: Having fun with it? Enjoying this blog or my projects ? I am an 👉 indie dev 👈. Help make it sustainable by ✨ sponsoring ✨ Need a blog? I can help with that . Maybe buy my iOS apps too ;)

xenodium 1 week ago

Bending Emacs - Episode 7: Eshell built-in commands

With my recent rinku post and Bending Emacs episode 6 both fresh in mind, I figured I may as well make another Bending Emacs episode, so here we are: Bending Emacs Episode 7: Eshell built-in commands Check out the rinku post for a rundown of things covered in the video. Liked the video? Please let me know. Got feedback? Leave me some comments . Please go like my video , share with others, and subscribe to my channel . If there's enough interest, I'll continue making more videos! Enjoying this content or my projects ? I am an indie dev. Help make it sustainable by ✨ sponsoring ✨ Need a blog? I can help with that . Maybe buy my iOS apps too ;)


Self-hosting my photos with Immich

For every cloud service I use, I want to have a local copy of my data for backup purposes and independence. Unfortunately, the tool stopped working in March 2025 when Google restricted the OAuth scopes, so I needed an alternative for my existing Google Photos setup. In this post, I describe how I have set up Immich , a self-hostable photo manager. Here is the end result: a few (live) photos from NixCon 2025 : I am running Immich on my Ryzen 7 Mini PC (ASRock DeskMini X600) , which consumes less than 10 W of power in idle and has plenty of resources for VMs (64 GB RAM, 1 TB disk). You can read more about it in my blog post from July 2024: When I saw the first reviews of the ASRock DeskMini X600 barebone, I was immediately interested in building a home-lab hypervisor (VM host) with it. Apparently, the DeskMini X600 uses less than 10W of power but supports latest-generation AMD CPUs like the Ryzen 7 8700G! Read more → I installed Proxmox , an Open Source virtualization platform, to divide this mini server into VMs, but you could of course also install Immich directly on any server. I created a VM (named “photos”) with 500 GB of disk space, 4 CPU cores and 4 GB of RAM. For the initial import, you could assign more CPU and RAM, but for normal usage, that’s enough. I (declaratively) installed NixOS on that VM as described in this blog post: For one of my network storage PC builds, I was looking for an alternative to Flatcar Container Linux and tried out NixOS again (after an almost 10 year break). There are many ways to install NixOS, and in this article I will outline how I like to install NixOS on physical hardware or virtual machines: over the network and fully declaratively. Read more → Afterwards, I enabled Immich, with this exact configuration: At this point, Immich is available on , but not over the network, because NixOS enables a firewall by default. 
I could enable the option, but I actually want Immich to only be available via my Tailscale VPN, for which I don’t need to open firewall access — instead, I use to forward traffic to : Because I have Tailscale’s MagicDNS and TLS certificate provisioning enabled, that means I can now open https://photos.example.ts.net in my browser on my PC, laptop or phone. At first, I tried importing my photos using the official Immich CLI: Unfortunately, the upload did not run reliably and had to be restarted manually a few times after running into a timeout. Later I realized that this was because the Immich server runs background jobs like thumbnail creation, metadata extraction or face detection, and these background jobs slow down the upload to the extent that the upload can fail with a timeout. The other issue was that even after the upload was done, I realized that Google Takeout archives for Google Photos contain metadata in separate JSON files next to the original image files: Unfortunately, these files are not considered by . Luckily, there is a great third-party tool called immich-go, which solves both of these issues! It pauses background tasks before uploading and restarts them afterwards, which works much better, and it does its best to understand Google Takeout archives. I ran as follows and it worked beautifully: My main source of new photos is my phone, so I installed the Immich app on my iPhone, logged into my Immich server via its Tailscale URL and enabled automatic backup of new photos via the icon at the top right. I am not 100% sure whether these settings are correct, but it seems like camera photos generally go into Live Photos, and Recent should cover other files…?! If anyone knows, please send an explanation (or a link!) and I will update the article. I also strongly recommend disabling notifications for Immich, because otherwise you get notifications whenever it uploads images in the background.
These notifications are not required for background upload to work, as an Immich developer confirmed on Reddit. Open Settings → Apps → Immich → Notifications and un-tick the permission checkbox: Immich’s documentation on backups contains some good recommendations. The Immich developers recommend backing up the entire contents of , which is on NixOS. The subdirectory contains SQL dumps, whereas the 3 directories , and contain all user-uploaded data. Hence, I have set up a systemd timer that runs to copy onto my PC, which is enrolled in a 3-2-1 backup scheme. Immich (currently?) does not contain photo editing features, so to rotate or crop an image, I download the image and use GIMP. To share images, I still upload them to Google Photos (depending on who I share them with). The two most promising options in the space of self-hosted image management tools seem to be Immich and Ente. I got the impression that Immich is more popular in my bubble, and Ente gave me the impression that its scope is far larger than what I am looking for: Ente is a service that provides a fully open source, end-to-end encrypted platform for you to store your data in the cloud without needing to trust the service provider. On top of this platform, we have built two apps so far: Ente Photos (an alternative to Apple and Google Photos) and Ente Auth (a 2FA alternative to the deprecated Authy). I don’t need an end-to-end encrypted platform. I already have encryption on the transit layer (Tailscale) and disk layer (LUKS), no need for more complexity. Immich is a delightful app! It’s very fast and generally seems to work well. The initial import is smooth, but only if you use the right tool. Ideally, the official could be improved. Or maybe could be made the official one. I think the auto backup is too hard to configure on an iPhone, so that could also be improved. But aside from these initial stumbling blocks, I have no complaints.
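To make the Takeout sidecar problem concrete, here is a rough sketch of what reading one of those JSON files involves. The `photoTakenTime` field is what Takeout exports typically contain, but treat the exact schema here as an assumption; immich-go handles far more variations (albums, truncated filenames, edited versions) than this toy does.

```python
import datetime
import json
import pathlib

def takeout_timestamp(photo_path):
    """Return the capture time recorded in the Google Takeout sidecar JSON
    that sits next to an image (e.g. IMG_001.jpg + IMG_001.jpg.json)."""
    sidecar = pathlib.Path(str(photo_path) + ".json")
    meta = json.loads(sidecar.read_text())
    # Takeout records the capture time as Unix seconds in a string;
    # the exact field name is an assumption based on typical exports.
    ts = int(meta["photoTakenTime"]["timestamp"])
    return datetime.datetime.fromtimestamp(ts, tz=datetime.timezone.utc)
```

This also shows why a plain upload of just the image files can lose metadata: the image itself may carry no usable capture date, and the truth lives in the sidecar next to it.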

xenodium 1 week ago

Rinku: CLI link previews

In my last Bending Emacs episode, I talked about overlays and used them to render link previews in an Emacs buffer. While the overlays merely render an image, the actual link preview image is generated by rinku, a tiny command line utility I built recently. leverages macOS APIs to do the actual heavy lifting, rendering/capturing a view off screen, and saving to disk. Similarly, it can fetch preview metadata, also saving the related thumbnail to disk. In both cases, outputs to JSON. By default, fetches metadata for you. In this instance, the image looks a little something like this: On the other hand, the flag generates a preview, very much like the ones you see in native macOS and iOS apps. Similarly, the preview renders as follows: While overlays are one way to integrate anywhere in Emacs, I had been meaning to look into what I can do for eshell in particular. Eshell is just another buffer, and while overlays could do the job, I wanted a shell-like experience. After all, I already knew we can echo images into an eshell buffer. Before getting to on , there's a related hack I'd been meaning to get to for some time… While we're all likely familiar with the cat command, I remember being a little surprised to find that offers an alternative elisp implementation. Surprised too? Go check it! Where am I going with this? Well, if eshell's command is an elisp implementation, we know its internals are up for grabs, so we can technically extend it to display images too. is just another function, so we can advise it to add image superpowers. I was pleasantly surprised at how little code was needed. It basically scans for image arguments to handle within advice and otherwise delegates to 's original implementation. And with that, we can see our freshly powered-up command in action: By now, you may wonder why the detour when the post was really about ? You see, this is Emacs, and everything compounds!
We can now leverage our revamped command to give similar superpowers to , by merely adding an function. As we now know, outputs things to JSON, so we can use to parse the process output and subsequently feed the image path to . can also output link titles, so we can show that too whenever possible. With that, we can see the lot in action: While non-Emacs users are often puzzled by how frequently we bring user flows and integrations onto our beloved editor, once you learn a little elisp, you start realising how relatively easily things can integrate with one another and pretty much everything is up for grabs. Reckon and these tips will be useful to you?

pabloecortez 1 week ago

Black Friday for You and Me

Yesterday was Thanksgiving, and I had the privilege of spending the holiday with my family. We have a tradition of doing a toast going around the table and sharing at least one thing for which we are grateful. I want to share with you a story that started last year, in January of 2024, when a family friend named Germán reached out to me for help with a website for his business. Germán is in his 50s; he went to school for mechanical engineering in Mexico, and about twenty years ago he moved to the United States. Today he owns a restaurant in Las Vegas with his wife and also runs a logistics company for distributing produce. We met the last week of January, and he told me that he was looking to build a website for his restaurant and eventually build up his infrastructure so most of his business could be automated. His current workflow required his two sons to run the business along with him. They managed everything manually on expensive proprietary software. There were lots of things that could be optimized, so I agreed to jump on board, and we have been collaborating ever since. What I assumed would be a developer-type position instead became more of a peer-mentorship relationship. Germán is curious, intelligent, and hard-working. It didn't take long for me to notice that he didn't just want to have software or services running "in the background" while he occupied himself with other tasks. He wanted to have a thorough understanding of all the software he adopted. "I want to learn but I simply don't have the patience," he told me during one of our first meetings. At first I admit I thought this was a bit of a red flag (sorry Germán haha) but it all began to make sense when he showed me his books. He had paid thousands of dollars for a WordPress website that only listed his services and contact information. The company he had hired offered an expensive SEO package for a monthly fee.
My time in open source and the indieweb had blinded me to how abusive the "web development" industry had become. I'm referring to those local agencies that take advantage of unsuspecting clients and charge them for every little thing. I began making Germán's website; we went back and forth on assets, copy, and menus, we began putting together a project, and everything went smoothly. He was happy that he got to see how I built things. During this time I would journal through my work on his project and e-mail my notes to him. He loved it. Next came a new proposition. While the static site was nice for having an online presence, what he was after was getting into e-commerce. His wife, Sarah, makes artisanal beauty products and custom clothes. Her friends would message her on Facebook to ask what new stuff she was working on, and she would send pictures to them from her phone. She would have benefitted from having a website, but after the bad experience they had had with the agency, they weren't too enthused about the prospect of hiring an agency for another project. I met with both of them again for this new project and we talked for hours, more like coworkers this time around. We eventually came to the conclusion that it would be more rewarding for them to really learn how to put their own shop together. I acted more as a coach or mentor than a developer. We'd sit together and activate accounts, fill out pages, choose themes. I was providing a safe space for them to be curious about technology, make mistakes, learn from them, and immediately get feedback on technical details so they could stay on a safe path. I'm so grateful for that opportunity afforded to me by Germán and his family. I've thought about how that approach would look if applied to the indieweb. It's always so exciting for me to see what the friends I've made here are working on.
I know the open web becomes stronger when more independent projects are released, as we have more options to free ourselves from the corporate web that has stifled so much of the creativity and passion that I love and miss from the internet. I want to keep doing this. If you are building something on your own, have been out of the programming world for a while but want to start again, or maybe you are almost done and need a little boost in confidence (or accountability!) to reach the finish line and ship, I'm here to help. Check out my coaching page to find out more. I'm excited about the prospect of a community of builders who care about self-reliance and releasing software that puts people first. Perhaps this Black Friday you could choose to invest in yourself :-)

fLaMEd fury 1 week ago

Contain The Web With Firefox Containers

What’s going on, Internet? While tech circles are grumbling about Mozilla stuffing AI features into Firefox that nobody asked for (lol), I figured I’d write about a feature people might actually like if they’re not already using it. This is how I’m containing the messy sprawl of the modern web using Firefox Containers. After the ability to run uBlock Origin, containers are easily one of Firefox’s best features. I’m happy to share my setup, which helps contain the big, bad, evil, and annoying across the web. Not because I visit these sites often or on purpose. I usually avoid them. But for the moments where I click something without paying attention, need to open a site just to get a piece of information and fail (lol, login walls), or end up somewhere I don’t want to be, containers stop that one slip from bleeding into the rest of my tabs. Firefox holds each site in its own space so nothing spills into the rest of my browsing.

Here’s how I’ve split things up. Nothing fancy. Just tidy and logical. Nothing here is about avoiding these sites forever. It’s about containing them so they can’t follow me around.

I use two extensions together: MAC handles the visuals. Containerise handles the rules. You can skip MAC and let Containerise auto-create containers, but you lose control over colours and icons, so everything ends up looking the same. I leave MAC’s site lists empty so it doesn’t clash with Containerise. Containerise becomes the single source of truth. If I need to open something in a specific container, I just right click and choose Open in Container.

Containers don’t fix the surveillance web, but they do reduce the blast radius. One random visit to Google, Meta, Reddit or Amazon won’t bleed into my other tabs. Cookies stay contained. Identity stays isolated. Tracking systems get far less to work with. Well, that’s my understanding of it anyway.
It feels like one of the last features in modern browsers that still puts control back in the user’s hands, without having to give up the open web. Just letting you know that I used ChatGPT (in a container) to help me create the regex here - there was no way I was going to be able to figure that out myself. So while Firefox keeps pandering to the industry with AI features nobody asked for (lol), there’s still a lot to like about the browser. Containers, uBlock Origin, and the general flexibility of Firefox still give you real control over your internet experience. Hey, thanks for reading this post in your feed reader! Want to chat? Reply by email or add me on XMPP , or send a webmention . Check out the posts archive on the website.

- Firefox Multi-Account Containers (MAC) for creating and customising the containers (names, colours, icons).
- Containerise for all the routing logic using regex rules.
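The routing idea boils down to "first regex rule that matches the URL decides the container". As a rough illustration of that logic (the container names and patterns below are made up for this sketch and are not Containerise's actual rule format):

```python
import re

# Hypothetical rule set in the spirit of Containerise's regex routing:
# each rule maps a URL pattern to a named container.
RULES = [
    (re.compile(r"https?://([a-z0-9-]+\.)*google\.com/"), "Google"),
    (re.compile(r"https?://([a-z0-9-]+\.)*(facebook|instagram)\.com/"), "Meta"),
    (re.compile(r"https?://([a-z0-9-]+\.)*amazon\.(com|co\.uk)/"), "Shopping"),
]

def container_for(url: str) -> str:
    """Return the container a URL would open in; fall back to the default."""
    for pattern, container in RULES:
        if pattern.search(url):
            return container
    return "Default"

print(container_for("https://mail.google.com/inbox"))  # Google
print(container_for("https://flamedfury.com/posts/"))  # Default
```

The subdomain wildcard `([a-z0-9-]+\.)*` is the part that keeps `mail.google.com` and `www.google.com` in the same container without a rule per subdomain.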

Kix Panganiban 1 week ago

Utteranc.es is really neat

It's hard to find privacy-respecting (read: not Disqus) commenting systems out there. A couple of good ones recommended by Bear are Cusdis and Komments -- but I'm not a huge fan of either of them:

- Cusdis styling is very limited. You can only set it to dark or light mode, with no control over the specific HTML elements and styling. It's fine, but I prefer something that looks a little neater.
- Komments requires manually creating a new page for every new post that you make. The idea is that wherever you want comments, you create a page in Komments and embed that page into your webpage. So you can have one Komments page per blog post, or even one Komments page for your entire blog.

Then I realized that there's a great alternative that I've used in the past: utteranc.es . Its execution is elegant: you embed a tiny JS file on your blog posts, and it will map every page to GitHub Issues in a GitHub repo. In my case, I created this repo specifically for that purpose. Neat! I'm including utteranc.es in all my blog posts moving forward. You can check out how it looks below:
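The embed is a single script tag. This follows the snippet the utteranc.es site generates; the repo and theme values here are placeholders you would swap for your own:

```html
<script src="https://utteranc.es/client.js"
        repo="your-username/your-comments-repo"
        issue-term="pathname"
        theme="github-light"
        crossorigin="anonymous"
        async>
</script>
```

`issue-term` controls how pages are mapped to issues (by pathname here), and the repo must be public with the utterances GitHub app installed on it.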

Langur Monkey 1 week ago

Google *unkills* JPEG XL?

I’ve written about JPEG XL in the past. First, I noted Google’s move to kill the format in Chromium in favor of the homegrown and inferior AVIF. 1 2 Then, I had a deeper look at the format and visually compared JPEG XL with AVIF on a handful of images. The latter post started with a quick support test: “If you are browsing this page around 2023, chances are that your browser supports AVIF but does not support JPEG XL.” Well, here we are at the end of 2025, and this very sentence still holds true. Unless you are one of the 17% of users using Safari 3 , or are adventurous enough to use a niche browser like Thorium or LibreWolf , chances are you see the AVIF banner in green and the JPEG XL image in black/red. The good news is, this will change soon. In a dramatic turn of events, the Chromium team has reversed its decision and will support the format in Blink (the engine behind Chrome/Chromium/Edge). Given Chrome’s position in the browser market, I predict the format will become a de facto standard for images in the near future.

I’ve been following JPEG XL since its experimental support in Blink. What started as a promising feature was quickly axed by the team in a bizarre and ridiculous manner. First, they asked the community for feedback on the format. Then, the community responded very positively. And I don’t only mean a couple of guys in their basement: Meta , Intel , Cloudinary , Adobe , Krita , and many more. After that came the infamous comment (#85, Oct 31, 2022):

Thank you everyone for your comments and feedback regarding JPEG XL. We will be removing the JPEG XL code and flag from Chromium for the following reasons:
- Experimental flags and code should not remain indefinitely
- There is not enough interest from the entire ecosystem to continue experimenting with JPEG XL
- The new image format does not bring sufficient incremental benefits over existing formats to warrant enabling it by default
- By removing the flag and the code in M110, it reduces the maintenance burden and allows us to focus on improving existing formats in Chrome

Yes, right, “ not enough interest from the entire ecosystem ”. Sure. Anyway, following this comment, a steady stream of messages pointed out how wrong that was, from all the organizations mentioned above and many more. People voiced their discontent in blog posts, videos, and social media interactions.

Strangely, the following few years were pretty calm for JPEG XL. However, a few notable events did take place. First, the Firefox team showed interest in a JPEG XL Rust decoder , after describing their stance on the matter as “neutral”. They were concerned about the increased attack surface resulting from including the current 100K+ line C++ reference decoder, even though most of those lines are testing code. In any case, they effectively requested a memory-safe decoder, which seems to have kick-started the Rust implementation, jxl-rs , from Google Research. To top it off, a couple of weeks ago, the PDF Association announced their intent to adopt JPEG XL as a preferred image format in their PDF specification. The CTO of the PDF Association, Peter Wyatt, expressed their desire to include JPEG XL as the preferred format for HDR content in PDF files. 4

All of this pressure, exerted steadily over time, made the Chromium team reconsider the format. They tried to kill it in favor of AVIF, but that hasn’t worked out. Rick Byers, on behalf of Chromium, made a comment in the Blink developers Google group saying the team would welcome a performant and memory-safe JPEG XL decoder in Chromium. He stated that the change of stance was in light of the positive signals from the community described above (Safari support, Firefox updating its position, PDF, etc.). Quickly after that, the status of the Chromium issue was updated to match. This is great news for the format, and I believe it will give it the final push toward mass adoption. The format is excellent for all kinds of purposes, and I’ll be adopting it pretty much instantly for this and the Gaia Sky website when support ships. Some of the features that make it superior to the competition:
- Lossless re-compression of JPEG images. This means you can re-compress your current JPEG library without losing information and benefit from a ~30% reduction in file size for free. This is a killer feature that no other format has.
- Support for wide gamut and HDR.
- Support for image sizes of up to 1,073,741,823x1,073,741,824. You won’t run out of image space anytime soon. AVIF is ridiculous in this aspect, capping at 8,193x4,320. WebP goes up to 16K², while the original 1992 JPEG supports 64K².
- Maximum of 32 bits per channel. No other format (except for the defunct JPEG 2000) offers this.
- Maximum of 4,099 channels. Most other formats support 4 or 5, with the exception of JPEG 2000, which supports 16,384.
- JXL is super resilient to generation loss. 5
- JXL supports progressive decoding, which is essential for web delivery, IMO. WebP and HEIC have no such feature; progressive decoding in AVIF was added a few years back.
- Support for animation.
- Support for alpha transparency.
- Depth map support.

For a full codec feature breakdown, see Battle of the Codecs . JPEG XL is the future of image formats. It checks all the right boxes, and it checks them well. Support in the overwhelmingly most popular browser engine is probably going to be a crucial stepping stone on the format’s path to stardom. I’m happy that the Chromium team reconsidered their decision, but I am sad that it took so long and so much pressure from the community to achieve it.

https://aomediacodec.github.io/av1-avif/   ↩︎
https://jpegxl.info/resources/battle-of-codecs.html   ↩︎
https://radar.cloudflare.com/reports/browser-market-share-2025-q1   ↩︎
https://www.youtube.com/watch?v=DjUPSfirHek&t=2284s   ↩︎
https://youtu.be/qc2DvJpXh-A   ↩︎

Uros Popovic 1 week ago

How to use Linux vsock for fast VM communication

Discover how to bypass the network stack for Host-to-VM communication using Linux Virtual Sockets (AF_VSOCK). This article details how to use these sockets to build a high-performance gRPC service in C++ that communicates directly over the hypervisor bus, avoiding TCP/IP overhead entirely.
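The article builds a C++ gRPC service; the underlying socket mechanics can be sketched more compactly in Python, which exposes AF_VSOCK directly on Linux. The port number and function names below are my own, and actually running this requires a hypervisor that provides a vsock device (e.g. QEMU's vhost-vsock):

```python
import socket

VSOCK_PORT = 5000  # arbitrary example port, not from the article

def serve_once(port: int = VSOCK_PORT) -> None:
    """Run inside the guest: accept one connection and echo one message.
    VMADDR_CID_ANY binds to the guest's own context ID (CID)."""
    with socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM) as srv:
        srv.bind((socket.VMADDR_CID_ANY, port))
        srv.listen(1)
        conn, (peer_cid, peer_port) = srv.accept()
        with conn:
            conn.sendall(conn.recv(4096))

def ping_guest(guest_cid: int, port: int = VSOCK_PORT) -> bytes:
    """Run on the host: address the guest by CID instead of an IP address.
    The (cid, port) pair replaces (host, port); no TCP/IP stack is involved."""
    with socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM) as s:
        s.connect((guest_cid, port))
        s.sendall(b"ping")
        return s.recv(4096)

# Well-known CIDs: the host is always reachable as CID 2
# (socket.VMADDR_CID_HOST); each guest gets its own CID from the hypervisor.
```

Everything above the connection setup (gRPC, framing, threading) works the same as over TCP, which is what makes vsock a drop-in transport for host-to-VM services.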

Corrode 1 week ago

Canonical

What does it take to rewrite the foundational components of one of the world’s most popular Linux distributions? Ubuntu serves over 12 million daily desktop users alone, and the systems that power it, from sudo to core utilities, have been running for decades with what Jon Seager, VP of Engineering for Ubuntu at Canonical, calls “shaky underpinnings.” In this episode, we talk to Jon about the bold decision to “oxidize” Ubuntu’s foundation. We explore why they’re rewriting critical components like sudo in Rust, how they’re managing the immense risk of changing software that millions depend on daily, and what it means to modernize a 20-year-old operating system without breaking the internet. CodeCrafters helps you become proficient in Rust by building real-world, production-grade projects. Learn hands-on by creating your own shell, HTTP server, Redis, Kafka, Git, SQLite, or DNS service from scratch. Start for free today and enjoy 40% off any paid plan by using this link . Canonical is the company behind Ubuntu, one of the most widely-used Linux distributions in the world. From personal desktops to cloud infrastructure, Ubuntu powers millions of systems globally. Canonical’s mission is to make open source software available to people everywhere, and they’re now pioneering the adoption of Rust in foundational system components to improve security and reliability for the next generation of computing. Jon Seager is VP Engineering for Ubuntu at Canonical, where he oversees the Ubuntu Desktop, Server, and Foundations teams. Appointed to this role in January 2025, Jon is driving Ubuntu’s modernization strategy with a focus on Communication, Automation, Process, and Modernisation. His vision includes adopting memory-safe languages like Rust for critical infrastructure components. Before this role, Jon spent three years as VP Engineering building Juju and Canonical’s catalog of charms. He’s passionate about making Ubuntu ready for the next 20 years of computing. 
- Juju - Jon’s previous focus, a cloud orchestration tool
- GNU coreutils - The most widely used implementation of commands like ls, rm, cp, and more
- uutils coreutils - coreutils implementation in Rust
- sudo-rs - For your Rust-based sandwich needs
- LTS - Long Term Support, a release model popularized by Ubuntu
- coreutils-from-uutils - List of symbolic links used for coreutils on Ubuntu, some still pointing to the GNU implementation
- man: sudo -E - Example of a feature that sudo-rs does not support
- SIMD - Single instruction, multiple data
- rust-coreutils - The Ubuntu package with all its supported CPU platforms listed
- fastcat - Matthias’ blog post about his faster version of cat
- systemd-run0 - Alternative approach to sudo from the systemd project
- AppArmor - The Linux Security Module used in Ubuntu
- PAM - The Pluggable Authentication Modules, which handle all system authentication in Linux
- SSSD - Enables LDAP user profiles on Linux machines
- ntpd-rs - Time synchronization daemon written in Rust which may land in Ubuntu 26.04
- Trifecta Tech Foundation - Foundation supporting sudo-rs development
- Sequoia PGP - OpenPGP tools written in Rust
- Mir - Canonical’s Wayland compositor library, uses some Rust
- Anbox Cloud - Canonical’s Android streaming platform, includes Rust components
- Simon Fels - Original creator of Anbox and Anbox Cloud team lead at Canonical
- LXD - Container and VM hypervisor
- dqlite - SQLite with a replication layer for distributed use cases, potentially being rewritten in Rust
- Rust for Linux - Project to add Rust support to the Linux kernel
- Nova GPU Driver - New Linux OSS driver for NVIDIA GPUs written in Rust
- Ubuntu Asahi - Community project for Ubuntu on Apple Silicon
- debian-devel: Hard Rust requirements from May onward - Parts of apt are being rewritten in Rust (announced a month after the recording of this episode)
- Go Standard Library - Providing things like network protocols, cryptographic algorithms, and even tools to handle image formats
- Python Standard Library - The origin of “batteries included”
- The Rust Standard Library - Basic types, collections, filesystem access, threads, processes, synchronisation, and not much more
- clap - Superstar library for CLI option parsing
- serde - Famous high-level serialization and deserialization interface crate
- Jon Seager’s Website
- Jon’s Blog: Engineering Ubuntu For The Next 20 Years
- Canonical Blog
- Ubuntu Blog
- Canonical Careers: Engineering - Apply your Rust skills in the Linux ecosystem
