Posts in Open-source (20 found)

Bending Emacs - Episode 7: Eshell built-in commands

With my recent rinku post and Bending Emacs episode 6 both fresh in mind, I figured I may as well make another Bending Emacs episode, so here we are: Bending Emacs Episode 7: Eshell built-in commands. Check out the rinku post for a rundown of things covered in the video. Liked the video? Please let me know. Got feedback? Leave me some comments. Please go like my video, share with others, and subscribe to my channel. If there's enough interest, I'll continue making more videos! Enjoying this content or my projects? I am an indie dev. Help make it sustainable by ✨ sponsoring ✨ Need a blog? I can help with that. Maybe buy my iOS apps too ;)

0 views

Self-hosting my photos with Immich

For every cloud service I use, I want to have a local copy of my data for backup purposes and independence. Unfortunately, the tool I was using stopped working in March 2025 when Google restricted the OAuth scopes, so I needed an alternative for my existing Google Photos setup. In this post, I describe how I have set up Immich , a self-hostable photo manager. Here is the end result, a few (live) photos from NixCon 2025: I am running Immich on my Ryzen 7 Mini PC (ASRock DeskMini X600) , which consumes less than 10 W of power in idle and has plenty of resources for VMs (64 GB RAM, 1 TB disk). You can read more about it in my blog post from July 2024: When I saw the first reviews of the ASRock DeskMini X600 barebone, I was immediately interested in building a home-lab hypervisor (VM host) with it. Apparently, the DeskMini X600 uses less than 10W of power but supports latest-generation AMD CPUs like the Ryzen 7 8700G! Read more → I installed Proxmox , an Open Source virtualization platform, to divide this mini server into VMs, but you could of course also install Immich directly on any server. I created a VM (named “photos”) with 500 GB of disk space, 4 CPU cores and 4 GB of RAM. For the initial import, you could assign more CPU and RAM, but for normal usage, that’s enough. I (declaratively) installed NixOS on that VM as described in this blog post: For one of my network storage PC builds, I was looking for an alternative to Flatcar Container Linux and tried out NixOS again (after an almost 10 year break). There are many ways to install NixOS, and in this article I will outline how I like to install NixOS on physical hardware or virtual machines: over the network and fully declaratively. Read more → Afterwards, I enabled Immich, with this exact configuration: At this point, Immich is available on , but not over the network, because NixOS enables a firewall by default.
I could enable the option, but I actually want Immich to only be available via my Tailscale VPN, for which I don’t need to open firewall access — instead, I use to forward traffic to : Because I have Tailscale’s MagicDNS and TLS certificate provisioning enabled, that means I can now open https://photos.example.ts.net in my browser on my PC, laptop or phone. At first, I tried importing my photos using the official Immich CLI: Unfortunately, the upload was not running reliably and had to be restarted manually a few times after running into a timeout. Later I realized that this was because the Immich server runs background jobs like thumbnail creation, metadata extraction or face detection, and these background jobs slow down the upload to the extent that the upload can fail with a timeout. The other issue was that even after the upload was done, I realized that Google Takeout archives for Google Photos contain metadata in separate JSON files next to the original image files: Unfortunately, these files are not considered by . Luckily, there is a great third-party tool called immich-go , which solves both of these issues! It pauses background tasks before uploading and restarts them afterwards, which works much better, and it does its best to understand Google Takeout archives. I ran as follows and it worked beautifully: My main source of new photos is my phone, so I installed the Immich app on my iPhone, logged into my Immich server via its Tailscale URL and enabled automatic backup of new photos via the icon at the top right. I am not 100% sure whether these settings are correct, but it seems like camera photos generally go into Live Photos, and Recent should cover other files…?! If anyone knows, please send an explanation (or a link!) and I will update the article. I also strongly recommend disabling notifications for Immich, because otherwise you get notifications whenever it uploads images in the background.
These notifications are not required for background upload to work, as an Immich developer confirmed on Reddit . Open Settings → Apps → Immich → Notifications and un-tick the permission checkbox: Immich’s documentation on backups contains some good recommendations. The Immich developers recommend backing up the entire contents of , which is on NixOS. The subdirectory contains SQL dumps, whereas the 3 directories , and contain all user-uploaded data. Hence, I have set up a systemd timer that runs to copy onto my PC, which is enrolled in a 3-2-1 backup scheme . Immich (currently?) does not contain photo editing features, so to rotate or crop an image, I download the image and use GIMP . To share images, I still upload them to Google Photos (depending on who I share them with). The two most promising options in the space of self-hosted image management tools seem to be Immich and Ente . I got the impression that Immich is more popular in my bubble, and Ente gave me the impression that its scope is far larger than what I am looking for: Ente is a service that provides a fully open source, end-to-end encrypted platform for you to store your data in the cloud without needing to trust the service provider. On top of this platform, we have built two apps so far: Ente Photos (an alternative to Apple and Google Photos) and Ente Auth (a 2FA alternative to the deprecated Authy). I don’t need an end-to-end encrypted platform. I already have encryption on the transit layer (Tailscale) and disk layer (LUKS), no need for more complexity. Immich is a delightful app! It’s very fast and generally seems to work well. The initial import is smooth, but only if you use the right tool. Ideally, the official could be improved. Or maybe could be made the official one. I think the auto backup is too hard to configure on an iPhone, so that could also be improved. But aside from these initial stumbling blocks, I have no complaints.

xenodium Yesterday

Rinku: CLI link previews

In my last Bending Emacs episode, I talked about overlays and used them to render link previews in an Emacs buffer. While overlays are one way to merely render an image, the actual link preview image is generated by rinku , a tiny command line utility I built recently. It leverages macOS APIs to do the actual heavy lifting, rendering/capturing a view off screen, and saving to disk. Similarly, it can fetch preview metadata, also saving the related thumbnail to disk. In both cases, it outputs to JSON. By default, it fetches metadata for you. In this instance, the image looks a little something like this: On the other hand, the flag generates a preview, very much like the ones you see in native macOS and iOS apps. Similarly, the preview renders as follows: While overlays are one way to integrate anywhere in Emacs, I had been meaning to look into what I can do for eshell in particular. Eshell is just another buffer , and while overlays could do the job, I wanted a shell-like experience. After all, I already knew we can echo images into an eshell buffer . Before getting to on , there's a related hack I'd been meaning to get to for some time… While we're all likely familiar with the cat command, I remember being a little surprised to find that eshell offers an alternative elisp implementation. Surprised too? Go check it! Where am I going with this? Well, if eshell's command is an elisp implementation, we know its internals are up for grabs , so we can technically extend it to display images too. is just another function, so we can advise it to add image superpowers. I was pleasantly surprised at how little code was needed. It basically scans for image arguments to handle within advice and otherwise delegates to 's original implementation. And with that, we can see our freshly powered-up command in action:
We can now leverage our revamped command to give similar superpowers to , by merely adding an function. As we now know, outputs things to JSON, so we can use to parse the process output and subsequently feed the image path to . It can also output link titles, so we can show that too whenever possible. With that, we can see the lot in action: While non-Emacs users are often puzzled by how frequently we bring user flows and integrations onto our beloved editor, once you learn a little elisp, you start realising how relatively easily things can integrate with one another and pretty much everything is up for grabs. Reckon and these tips will be useful to you? Enjoying this blog or my projects? I am an 👉 indie dev 👈. Help make it sustainable by ✨ sponsoring ✨ Need a blog? I can help with that. Maybe buy my iOS apps too ;)

pabloecortez 2 days ago

Black Friday for You and Me

Yesterday it was Thanksgiving and I had the privilege of spending the holiday with my family. We have a tradition of doing a toast going around the table and sharing at least one thing for which we are grateful. I want to share with you a story that started last year, in January of 2024, when a family friend named Germán reached out to me for help with a website for his business. Germán is in his 50s, he went to school for mechanical engineering in Mexico and about twenty years ago he moved to the United States. Today he owns a restaurant in Las Vegas with his wife and also runs a logistics company for distributing produce. We met the last week of January, he told me that he was looking to build a website for his restaurant and eventually build up his infrastructure so most of his business could be automated. His current workflow required his two sons to run the business along with him. They managed everything manually on expensive proprietary software. There were lots of things that could be optimized, so I agreed to jump on board and we have been collaborating ever since. What I assumed would be a developer type of position instead became more of a peer-mentorship relationship. Germán is curious, intelligent, and hard working. It didn't take long for me to notice that he didn't just want to have software or services running "in the background" while he occupied himself with other tasks. He wanted to have a thorough understanding of all the software he adopted. "I want to learn but I simply don't have the patience," he told me during one of our first meetings. At first I admit I thought this was a bit of a red flag (sorry Germán haha) but it all began to make sense when he showed me his books. He had paid thousands of dollars for a Wordpress website that only listed his services and contact information. The company he had hired offered an expensive SEO package for a monthly fee. 
My time in open source and the indieweb had blinded me to how abusive the "web development" industry had become. I'm referring to those local agencies that take advantage of unsuspecting clients and charge them for every little thing. I began making Germán's website; we went back and forth on assets, copy, and menus, we put together a project, and everything went smoothly. He was happy that he got to see how I built things. During this time I would journal through my work on his project and e-mail my notes to him. He loved it. Next came a new proposition. While the static site was nice for having an online presence, what he was after was getting into e-commerce. His wife, Sarah, makes artisanal beauty products and custom clothes. Her friends would message her on Facebook to ask what new stuff she was working on and she would send pictures to them from her phone. She would have benefitted from having a website, but after the bad experience they had had with the agency, they weren't too enthused about the prospect of hiring one for another project. I met with both of them again for this new project and we talked for hours, more like coworkers this time around. We eventually came to the conclusion that it would be more rewarding for them to really learn how to put their own shop together. I acted more as a coach or mentor than a developer. We'd sit together and activate accounts, fill out pages, choose themes. I was providing a safe space for them to be curious about technology, make mistakes, learn from them, and immediately get feedback on technical details so they could stay on a safe path. I'm so grateful for that opportunity afforded to me by Germán and his family. I've thought about how that approach would look if applied to the indieweb. It's always so exciting for me to see what the friends I've made here are working on.
I know the open web becomes stronger when more independent projects are released, as we have more options to free ourselves from the corporate web that has stifled so much of the creativity and passion that I love and miss from the internet. I want to keep doing this. If you are building something on your own, have been out of the programming world for a while but want to start again, or maybe you are almost done and need a little boost in confidence (or accountability!) to reach the finish line and ship, I'm here to help. Check out my coaching page to find out more. I'm excited about the prospect of a community of builders who care about self-reliance and releasing software that puts people first. Perhaps this Black Friday you could choose to invest in yourself :-)

fLaMEd fury 2 days ago

Contain The Web With Firefox Containers

What’s going on, Internet? While tech circles are grumbling about Mozilla stuffing AI features into Firefox that nobody asked for (lol), I figured I’d write about a feature people might actually like if they’re not already using it. This is how I’m containing the messy sprawl of the modern web using Firefox Containers. After the ability to run uBlock Origin, containers are easily one of Firefox’s best features. I’m happy to share my setup that helps contain the big, bad, evil and annoying across the web. Not because I visit these sites often or on purpose. I usually avoid them. But for the moments where I click something without paying attention, or I need to open a site just to get a piece of information and fail (lol, login walls), or I end up somewhere I don’t want to be. Containers stop that one slip from bleeding into the rest of my tabs. Firefox holds each site in its own space so nothing spills into the rest of my browsing. Here’s how I’ve split things up. Nothing fancy. Just tidy and logical. Nothing here is about avoiding these sites forever. It’s about containing them so they can’t follow me around. I use two extensions together: MAC handles the visuals. Containerise handles the rules. You can skip MAC and let Containerise auto-create containers, but you lose control over colours and icons, so everything ends up looking the same. I leave MAC’s site lists empty so it doesn’t clash with Containerise. Containerise becomes the single source of truth. If I need to open something in a specific container, I just right-click and choose Open in Container. Containers don’t fix the surveillance web, but they do reduce the blast radius. One random visit to Google, Meta, Reddit or Amazon won’t bleed into my other tabs. Cookies stay contained. Identity stays isolated. Tracking systems get far less to work with. Well, that’s my understanding of it anyway.
It feels like one of the last features in modern browsers that still puts control back in the user’s hands, without having to give up the open web. Just letting you know that I used ChatGPT (in a container) to help me create the regex here - there was no way I was going to be able to figure that out myself. So while Firefox keeps pandering to the industry with AI features nobody asked for (lol), there’s still a lot to like about the browser. Containers, uBlock Origin, and the general flexibility of Firefox still give you real control over your internet experience. Hey, thanks for reading this post in your feed reader! Want to chat? Reply by email or add me on XMPP , or send a webmention . Check out the posts archive on the website. Firefox Multi Account Containers (MAC) for creating and customising the containers (names, colours, icons). Containerise for all the routing logic using regex rules.
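To give a flavour of what a Containerise-style rule can look like, here is a hypothetical regex sketched in Python (the pattern below is an assumption for illustration, not my actual rule): it routes any Google domain, including subdomains, into one container while leaving lookalike domains alone.

```python
import re

# Hypothetical container rule: match google.com and any of its subdomains,
# but not domains that merely contain the word "google"
google_rule = re.compile(r"^https?://([a-z0-9-]+\.)*google\.com(/|$)")

assert google_rule.match("https://www.google.com/search?q=firefox")
assert google_rule.match("http://google.com/")
assert not google_rule.match("https://notgoogle.com/")
assert not google_rule.match("https://google.com.evil.example/")
```

Anchoring the pattern at the start and requiring a `/` (or end of string) right after the domain is what stops lookalikes such as `google.com.evil.example` from sneaking into the container.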

Kix Panganiban 2 days ago

Utteranc.es is really neat

It's hard to find privacy-respecting (read: not Disqus) commenting systems out there. A couple of good ones recommended by Bear are Cusdis and Komments -- but I'm not a huge fan of either of them: Cusdis styling is very limited. You can only set it to dark or light mode, with no control over the specific HTML elements and styling. It's fine but I prefer something that looks a little neater. Komments requires manually creating a new page for every new post that you make. The idea is that wherever you want comments, you create a page in Komments and embed that page into your webpage. So you can have 1 Komments page per blog post, or even 1 Komments page for your entire blog. Then I realized that there's a great alternative that I've used in the past: utteranc.es . Its execution is elegant: you embed a tiny JS file on your blog posts, and it will map every page to GitHub Issues in a GitHub repo. In my case, I created this repo specifically for that purpose. Neat! I'm including utteranc.es in all my blog posts moving forward. You can check out how it looks below:
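For the curious, the page-to-issue mapping works roughly like this: the widget takes an identifier for the page (the pathname by default) and looks up the GitHub issue in the configured repo whose title matches it; that issue's comments become the page's comments. A rough Python sketch of the lookup idea, with hypothetical names, not utteranc.es's actual code:

```python
def issue_search_query(repo: str, issue_term: str) -> str:
    """Build a GitHub issue-search query an utteranc.es-style widget could
    use to find the issue backing a page (illustrative sketch only)."""
    return f"repo:{repo} type:issue in:title {issue_term}"

# e.g. a post at /posts/utterances-is-neat, comments stored in owner/comments:
q = issue_search_query("owner/comments", "/posts/utterances-is-neat")
assert q == "repo:owner/comments type:issue in:title /posts/utterances-is-neat"
```

If no matching issue exists yet, the real widget creates one the first time someone comments, which is why the repo fills up with one issue per post.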

Uros Popovic 3 days ago

How to use Linux vsock for fast VM communication

Discover how to bypass the network stack for Host-to-VM communication using Linux Virtual Sockets (AF_VSOCK). This article details how to use these sockets to build a high-performance gRPC service in C++ that communicates directly over the hypervisor bus, avoiding TCP/IP overhead entirely.
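A minimal sketch of the idea, in Python rather than the article's C++ gRPC setup (the port number here is made up): from inside a guest VM, the host is reachable at the well-known CID 2, with no IP configuration or TCP/IP stack involved.

```python
import socket

# Well-known vsock address of the host as seen from a guest (see vsock(7)).
# AF_VSOCK is Linux-only, so fall back to the documented value elsewhere.
VMADDR_CID_HOST = getattr(socket, "VMADDR_CID_HOST", 2)
SERVICE_PORT = 5000  # hypothetical port the host-side service listens on

def connect_to_host_service(port: int = SERVICE_PORT) -> socket.socket:
    """Open a stream connection from a guest straight to the host over the
    hypervisor bus, bypassing the network stack entirely (Linux only)."""
    sock = socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM)
    sock.connect((VMADDR_CID_HOST, port))  # address is (CID, port), not (IP, port)
    return sock
```

The only addressing concepts are a context ID (CID) and a port, which is what makes vsock attractive for host/guest RPC: no interfaces, routes, or firewall rules to manage.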

Corrode 3 days ago

Canonical

What does it take to rewrite the foundational components of one of the world’s most popular Linux distributions? Ubuntu serves over 12 million daily desktop users alone, and the systems that power it, from sudo to core utilities, have been running for decades with what Jon Seager, VP of Engineering for Ubuntu at Canonical, calls “shaky underpinnings.” In this episode, we talk to Jon about the bold decision to “oxidize” Ubuntu’s foundation. We explore why they’re rewriting critical components like sudo in Rust, how they’re managing the immense risk of changing software that millions depend on daily, and what it means to modernize a 20-year-old operating system without breaking the internet. CodeCrafters helps you become proficient in Rust by building real-world, production-grade projects. Learn hands-on by creating your own shell, HTTP server, Redis, Kafka, Git, SQLite, or DNS service from scratch. Start for free today and enjoy 40% off any paid plan by using this link . Canonical is the company behind Ubuntu, one of the most widely-used Linux distributions in the world. From personal desktops to cloud infrastructure, Ubuntu powers millions of systems globally. Canonical’s mission is to make open source software available to people everywhere, and they’re now pioneering the adoption of Rust in foundational system components to improve security and reliability for the next generation of computing. Jon Seager is VP Engineering for Ubuntu at Canonical, where he oversees the Ubuntu Desktop, Server, and Foundations teams. Appointed to this role in January 2025, Jon is driving Ubuntu’s modernization strategy with a focus on Communication, Automation, Process, and Modernisation. His vision includes adopting memory-safe languages like Rust for critical infrastructure components. Before this role, Jon spent three years as VP Engineering building Juju and Canonical’s catalog of charms. He’s passionate about making Ubuntu ready for the next 20 years of computing. 
- Juju - Jon’s previous focus, a cloud orchestration tool
- GNU coreutils - The most widely used implementation of commands like ls, rm, cp, and more
- uutils coreutils - coreutils implementation in Rust
- sudo-rs - For your Rust-based sandwich needs
- LTS - Long Term Support, a release model popularized by Ubuntu
- coreutils-from-uutils - List of symbolic links used for coreutils on Ubuntu; some still point to the GNU implementation
- man: sudo -E - Example of a feature that sudo-rs does not support
- SIMD - Single instruction, multiple data
- rust-coreutils - The Ubuntu package, with all its supported CPU platforms listed
- fastcat - Matthias’ blog post about his faster version of
- systemd-run0 - Alternative approach to sudo from the systemd project
- AppArmor - The Linux Security Module used in Ubuntu
- PAM - The Pluggable Authentication Modules, which handle all system authentication in Linux
- SSSD - Enables LDAP user profiles on Linux machines
- ntpd-rs - Time synchronization daemon written in Rust which may land in Ubuntu 26.04
- Trifecta Tech Foundation - Foundation supporting sudo-rs development
- Sequoia PGP - OpenPGP tools written in Rust
- Mir - Canonical’s Wayland compositor library, uses some Rust
- Anbox Cloud - Canonical’s Android streaming platform, includes Rust components
- Simon Fels - Original creator of Anbox and Anbox Cloud team lead at Canonical
- LXD - Container and VM hypervisor
- dqlite - SQLite with a replication layer for distributed use cases, potentially being rewritten in Rust
- Rust for Linux - Project to add Rust support to the Linux kernel
- Nova GPU Driver - New Linux OSS driver for NVIDIA GPUs written in Rust
- Ubuntu Asahi - Community project for Ubuntu on Apple Silicon
- debian-devel: Hard Rust requirements from May onward - Parts of apt are being rewritten in Rust (announced a month after the recording of this episode)
- Go Standard Library - Provides things like network protocols, cryptographic algorithms, and even tools to handle image formats
- Python Standard Library - The origin of “batteries included”
- The Rust Standard Library - Basic types, collections, filesystem access, threads, processes, synchronisation, and not much more
- clap - Superstar library for CLI option parsing
- serde - Famous high-level serialization and deserialization interface crate
- Jon Seager’s Website
- Jon’s Blog: Engineering Ubuntu For The Next 20 Years
- Canonical Blog
- Ubuntu Blog
- Canonical Careers: Engineering - Apply your Rust skills in the Linux ecosystem

The Coder Cafe 4 days ago

Linus Torvalds vs. Ambiguous Abstractions

🎄 If you’re planning to do Advent of Code this year, join The Coder Cafe leaderboard: . I’ll find a few prizes for the winner(s). If you’re new to Advent of Code, I wrote a short introduction last year, and I also wrote a blog post called I Completed All 8 Advents of Code in One Go: Here Are the Lessons I Learned if you’re interested. I’ve also created a custom channel in the Discord server. Join the Discord ☕ Welcome to The Coder Cafe! Today, we discuss a recent comment from Linus Torvalds about the use of a helper function. Get cozy, grab a coffee, and let’s begin! In August 2025, there was (yet another) drama involving Linus Torvalds replying on a pull request: No. This is garbage and it came in too late. I asked for early pull requests because I’m traveling, and if you can’t follow that rule, at least make the pull requests good. This adds various garbage that isn’t RISC-V specific to generic header files. And by “garbage” I really mean it. This is stuff that nobody should ever send me, never mind late in a merge window. Like this crazy and pointless make_u32_from_two_u16() “helper”. That thing makes the world actively a worse place to live. It’s useless garbage that makes any user incomprehensible, and actively WORSE than not using that stupid “helper”. If you write the code out as “(a << 16) + b”, you know what it does and which is the high word. Maybe you need to add a cast to make sure that ‘b’ doesn’t have high bits that pollutes the end result, so maybe it’s not going to be exactly pretty, but it’s not going to be wrong and incomprehensible either. In contrast, if you write make_u32_from_two_u16(a,b) you have not a f^%$ing clue what the word order is . IOW, you just made things WORSE, and you added that “helper” to a generic non-RISC-V file where people are apparently supposed to use it to make other code worse too. So no. Things like this need to get bent.
It does not go into generic header files, and it damn well does not happen late in the merge window. Let’s not discuss the rudeness of this comment (it’s atrocious). Instead, let’s focus on the content itself. , a popular newsletter, wrote a post about it: the main point Linus makes here is that good code optimizes for reducing cognitive load . […] Humans have limited working memory capacity - let’s say the human brain can only store 4-7 “chunks” at a time. Each abstraction or helper function costs a chunk slot. Each abstraction costs more tokens. I share the view that good code optimizes for reducing cognitive load 1 , but I don’t understand Linus’s comment in exactly the same way. Yes, Linus is virulent about the helper function, but in my opinion, his main argument isn’t simply that an abstraction costs a “chunk slot” as mentioned; it’s rather that this isn’t the right abstraction. Here is the code added in the pull request: This macro builds a 32-bit integer by putting one 16-bit value in the high half and the other in the low half. For example: The main problem with this macro isn’t necessarily that it exists. It’s that its intent (meaning what it tries to accomplish) could have been clearer. Indeed, the helper’s name doesn’t tell which word is high and which one is low, and that’s exactly what Linus is calling out with “ you have not a f^%$ing clue what the word order is ”. Because we can’t get the intent from the name, we have to open the macro to understand the order. That’s precisely why it costs a “chunk slot”: not because the abstraction exists, but because it’s an ambiguous one. If we wanted to keep using a macro, a better approach, in my opinion 2 , would be to encode the word order in the name itself ( = most significant word, = least significant word): In this case, the word order is carried by the macro name, which makes it a clearer abstraction.
Reading the call site doesn’t require opening the macro to understand the word order: Such an abstraction doesn’t cost a “chunk slot” in terms of cognitive load. Its intent is clear from the name, so we don’t need to load an extra piece of information into our working memory to understand it. In summary, if we want to optimize for cognitive load, there’s not necessarily an issue with using helper functions. But if we do, we should make the abstraction as explicit as possible, and that starts with a clear function name that conveys what it tries to accomplish. Missing direction in your tech career? At The Coder Cafe, we serve timeless concepts with your coffee to help you master the fundamentals. Written by a Google SWE and trusted by thousands of readers, we support your growth as an engineer, one coffee at a time. Readability Cognitive Load Nested Code Re: [GIT PULL] RISC-V Patches for the 6.17 Merge Window, Part 1 - Linus Torvalds // The discussion. GitHub // The code proposed in the pull request Linus and the two youts // Interestingly, the macro was plain wrong when the second word was negative. The full explanation is here. ❤️ If you enjoyed this post, please hit the like button. 💬 Where do you draw the line between “helpful” and “harmful” abstraction? Leave a comment At least most of the time. Sometimes we must optimize for performance at the expense of cognitive load. Mr Torvalds, if you see this and you disagree, please do not insult me.
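To make the naming argument concrete, here is a hedged sketch in Python (the function name is hypothetical, not the kernel code): encoding the word order in the name removes the ambiguity, and masking both halves addresses the negative-second-word bug mentioned in the resources above.

```python
def u32_from_msw_lsw(msw: int, lsw: int) -> int:
    """Build a u32 with msw in the high 16 bits and lsw in the low 16 bits.

    Masking to 16 bits ensures stray high bits (e.g. from a negative
    value) cannot pollute the result."""
    return ((msw & 0xFFFF) << 16) | (lsw & 0xFFFF)

# The word order is readable at the call site without opening the helper:
assert u32_from_msw_lsw(0x1234, 0x5678) == 0x12345678
# A negative low word no longer corrupts the high half:
assert u32_from_msw_lsw(0x0001, -1) == 0x0001FFFF
```

Compare that call site with `make_u32_from_two_u16(a, b)`: same operation, but here the name alone tells you which argument lands in the high word.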

Simon Willison 6 days ago

sqlite-utils 4.0a1 has several (minor) backwards incompatible changes

I released a new alpha version of sqlite-utils last night - the 128th release of that package since I started building it back in 2018. sqlite-utils is two things in one package: a Python library for conveniently creating and manipulating SQLite databases, and a CLI tool for working with them in the terminal. Almost every feature provided by the package is available via both of those surfaces.

This is hopefully the last alpha before a 4.0 stable release. I use semantic versioning for this library, so the 4.0 version number indicates that there are backward incompatible changes that may affect code written against the 3.x line. These changes are mostly very minor: I don't want to break any existing code if I can avoid it. I made it all the way to version 3.38 before I had to ship a major release and I'm sad I couldn't push that even further! Here are the annotated release notes for 4.0a1.

This change is for type hint enthusiasts. The Python library used to encourage accessing both SQL tables and SQL views through the same syntactic sugar - but tables and views have different interfaces, since there's no way to handle certain operations on a SQLite view. If you want clean type hints for your code you can now use the dedicated table and view accessor methods instead.

A new feature, not a breaking change. I realized that supporting a stream of lists or tuples as an option for populating large tables would be a neat optimization over always dealing with dictionaries, each of which duplicated the column names. I had the idea for this one while walking the dog and built the first prototype by prompting Claude Code for web on my phone. Here's the prompt I used and the prototype report it created, which included a benchmark estimating how much of a performance boost could be had for different sizes of tables.

I was horrified to discover a while ago that I'd been creating SQLite columns called FLOAT when the correct type to use was REAL! This change fixes that. Previously the fix was to ask for tables to be created in strict mode.
As part of this I also figured out recipes for using it as a development environment for the package, which are now baked into the Justfile. This one is best explained in the issue.

Another change which I would have made earlier but, since it introduces a minor behavior change to an existing feature, I reserved it for the 4.0 release. Back in 2018 when I started this project I was new to working in-depth with SQLite and incorrectly concluded that the correct way to create tables and columns named after reserved words was to wrap them in square brackets. That turned out to be a non-standard SQL syntax which the SQLite documentation describes like this:

A keyword enclosed in square brackets is an identifier. This is not standard SQL. This quoting mechanism is used by MS Access and SQL Server and is included in SQLite for compatibility.

Unfortunately I baked it into the library early on and it's been polluting the world with weirdly escaped table and column names ever since! I've finally fixed that, with the help of Claude Code, which took on the mind-numbing task of updating hundreds of existing tests that asserted against the generated schemas. Generated table schemas now use standard double-quote escaping instead.

This may seem like a pretty small change but I expect it to cause a fair amount of downstream pain, purely in terms of updating tests that work against tables created by this library! I made this change first in LLM and decided to bring it over for consistency between the two tools. One last minor ugliness that I waited for a major version bump to fix.

Update: Now that the embargo has lifted I can reveal that a substantial amount of the work on this release was performed using a preview version of Anthropic's new Claude Opus 4.5 model. Here's the Claude Code transcript for the work to implement the ability to use an iterator over lists instead of dictionaries for bulk insert and upsert operations.

You are only seeing the long-form articles from my blog.
Subscribe to /atom/everything/ to get all of my posts, or take a look at my other subscription options.

Breaking change: The table accessor method now only works with tables. To access a SQL view use the view accessor instead. (#657)
The bulk insert and upsert methods can now accept an iterator of lists or tuples as an alternative to dictionaries. The first item should be a list/tuple of column names. See Inserting data from a list or tuple iterator for details. (#672)
Breaking change: The default floating point column type has been changed from FLOAT to REAL, which is the correct SQLite type for floating point values. This affects auto-detected columns when inserting data. (#645)
Packaging has been migrated to a new tool. (#675)
Tables in the Python API now do a much better job of remembering the primary key and other schema details from when they were first created. (#655)
Breaking change: The insert and upsert mechanisms no longer skip values that evaluate to False. Previously an option was needed to keep them; this has been removed. (#542)
Breaking change: Tables created by this library now wrap table and column names in double quotes in the schema. Previously they would use square brackets. (#677)
The CLI argument for supplying Python code now accepts a path to a Python file in addition to accepting a string full of Python code. It can also now be specified multiple times. (#659)
Breaking change: Type detection is now the default behavior for the insert and upsert CLI commands when importing CSV or TSV data. Previously all columns were treated as text unless a detection flag was passed. Use the new opt-out flag to restore the old behavior. The corresponding environment variable has been removed. (#679)
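The bracket-versus-double-quote change above can be demonstrated with Python's standard sqlite3 module: SQLite accepts both quoting styles, but only double quotes are standard SQL. (The table and column names below are made up for illustration; this is not sqlite-utils code.)

```python
import sqlite3

db = sqlite3.connect(":memory:")

# Non-standard MS Access / SQL Server style quoting, accepted by SQLite
# for compatibility - the style sqlite-utils 3.x generated.
db.execute("CREATE TABLE [my table] ([group] TEXT)")

# Standard SQL double-quote quoting - the style sqlite-utils 4.0 generates.
db.execute('CREATE TABLE "my other table" ("order" TEXT)')

# Both styles address the same identifiers interchangeably when querying.
db.execute('INSERT INTO "my table" ("group") VALUES (?)', ("a",))
rows = db.execute("SELECT [group] FROM [my table]").fetchall()
print(rows)  # [('a',)]
```

Note that SQLite stores the CREATE statement verbatim in sqlite_master, which is why tests asserting against generated schemas all had to change.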

マリウス 6 days ago

Be Your Own Privacy-Respecting Google, Bing & Brave

Search engines have long been a hot topic of debate, particularly among the tinfoil-hat-wearing circles on the internet. After all, these platforms are in a unique position to collect vast amounts of user data and identify individuals with unsettling precision. However, with the shift from traditional web search, driven by search queries and result lists, to an LLM-powered question-and-answer flow across major platforms, concerns have grown, and it’s no longer just about privacy: today, there’s increasing skepticism about the accuracy of the results. In fact, it’s not only harder to discover new information online, but verifying the accuracy of these AI-generated answers has become a growing challenge.

As with any industry upended by new technology, a flood of alternatives is hitting the market, promising to be the antidote to the established players. However, as history has shown, many of these newcomers are unlikely to live up to their initial hype in the long run. Meanwhile, traditional search services are either adopting the same LLM-driven approach or shutting down entirely. However, as long as major search engines still allow software to tap into their vast databases without depending too heavily on their internal algorithms and AI-generated answers, there’s some hope. We can take advantage of these indexes and create our own privacy-respecting search engines that prioritize the content we actually want to see. Let’s check how to do so using the popular metasearch engine SearXNG on OpenBSD!

SearXNG is a free and open-source metasearch engine, initially forked from Searx after its discontinuation, which can tap into over 70 different search engines for results. Note: SearXNG is not a search engine but a metasearch engine, which means that it does not have its own index but instead uses existing indexes from e.g. Google, Brave, Bing, Mojeek, and others.
What SearXNG does is run your search query through all of the search engines that you have enabled on your SearXNG instance, and then apply custom prioritization and removal rules in an effort to tailor the results to your taste. SearXNG is not particularly resource-intensive and doesn’t require significant storage space, as it does not maintain its own search index. However, depending on your performance requirements, you may need to choose between slightly longer wait times or higher costs, especially for cloud instances. I tested SearXNG on a Vultr instance with 1 vCPU and 1 GB of RAM, and it performed adequately. That said, for higher traffic or more demanding usage, you’ll need to allocate more CPU and RAM to ensure optimal performance.

Let’s start by setting up the base system. This guide assumes you’re using the latest version of OpenBSD (7.8, at the time of writing) and that you’ve already configured and secured SSH access. Additionally, your firewall should be set up to allow traffic on ports 22, 80, and 443. Ideally, you should also have implemented preventive measures against flooding and brute-force attacks, such as PF’s built-in rate limiting.

Note: I’m going to use a placeholder domain for this specific setup, as well as a placeholder hostname for the SearXNG instance. Make sure to replace these values with your own domain/preferred hostname in the configuration files below.

First, we install the dependencies that we need. The default configuration of redis works just fine for now, so we can enable and start the service right away. Next, we create a dedicated user for SearXNG. With the newly created user we clone the SearXNG repository from GitHub and set up a Python virtual environment. Next, we copy the default configuration from the repository into place. While the default settings will work just fine, it’s advisable to adjust them according to your requirements.
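As an illustration of what such overrides can look like, here is a minimal settings.yml sketch (the keys come from SearXNG’s documented configuration schema; the specific values, and the idea of overriding only a few keys, are assumptions, not this instance’s actual file):

```yaml
# Minimal SearXNG settings.yml override (illustrative values)
use_default_settings: true

server:
  secret_key: "replace-me-with-a-long-random-string"
  bind_address: "127.0.0.1"
  port: 8888
```

With use_default_settings enabled, everything not listed here falls back to SearXNG’s shipped defaults.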
One key element that will make or break your experience with SearXNG is the hostnames plugin and its configuration. Make sure to enable the plugin, and make sure to configure it properly. The replace configuration tells SearXNG to rewrite specific URLs. This is especially useful if you’re not running LibRedirect but would still like results from e.g. X.com to open on Xcancel.com instead. The remove configuration contains URLs that you want SearXNG to completely remove from your search results, e.g. Pinterest, Facebook or LinkedIn (unless you need those for OSINT). The low-priority configuration lists URLs that SearXNG should de-prioritize in your search results. The high-priority setting, on the other hand, does the exact opposite: it instructs SearXNG to prioritize results from the listed URLs. If you need examples for those files, feel free to check the lycos.lol repository. PS: Definitely make sure to change the secret key!

We’re going to run SearXNG using uWSGI, a popular Python web application server. To do so, we create a uWSGI configuration file for the instance, followed by an rc.d script, so that we can use rcctl to enable and run uWSGI. Info: In case the startup should fail, it is always possible to start uWSGI manually to see what the issue might be.

For serving the Python web application we use Nginx. Therefore, we create a server configuration for the SearXNG hostname and include this file in our main configuration. Note: I’m not going to dive into the repetitive SSL setup, but you can find plenty of other write-ups on this site that explain how to configure it on OpenBSD. Next, we enable Nginx and start it. You should be able to access your SearXNG instance by navigating to its hostname in a browser.

In case you encounter issues with the semaphores required for interprocess communication within uWSGI, make sure to check the kern.seminfo sysctl settings and increase the relevant parameter, e.g.
by adding the following line to the sysctl configuration.

As can be seen, setting up a SearXNG instance on OpenBSD is fairly easy and doesn’t require much work. However, configuring it to your liking so that you get the search results you’re interested in is going to require more effort and time. The hostname rules in particular are likely to evolve over time, the more you use the search engine. At this point, however, you’re ready to enjoy your self-hosted, privacy-respecting metasearch engine based upon SearXNG! :-)

I had registered a domain for this closed-access SearXNG instance. However, a day after the domain became active, NIC.LOL set the domain status to serverHold. I asked Njalla, my registrar, if they would know more and their reply was:

Right now the domain in question has the status code “serverHold”. serverHold is a status code set by the registry (the one that manages the whole TLD) and that means they have suspended the domain name because the domain violated their terms or rules.

Upon further investigation, it became clear that the domain was falsely flagged by everyone’s favorite tax-haven-based internet bully, Spamhaus. After all, when the domain was dropped globally, the only thing visible on the domain’s Nginx was an empty page. The domain also didn’t have (and still hasn’t) any MX records configured. I reached out to Spamhaus, who replied with the following message:

Thank you for contacting the Spamhaus Ticketing system, It appears that this ticket was submitted using a disposable or temporary email address; because of this, we cannot confirm its authority. To ensure that we can help you, please do not use a temporary email address (this includes freemails such as gmail.com, hotmail.com, etc) and ensure that the ticket contains the following:

Information that makes clear the requestor’s authority over the domain or IP
Details on how the issue(s) have been addressed
Reference any other Spamhaus removal ticket numbers related to this case

When these issues have been resolved, another ticket may be opened to request removal.

– Regards, Marvin Adams The Spamhaus Project

Spamhaus flagged the domain I had just purchased, which I could have used for sending email. Upon contacting them, they then closed my ticket because I was using a temporary email address instead of, let’s say, my own lycos.lol domain. And even though I had sent the request from a free or temporary email address, I thought it was my domain registrar’s responsibility to handle KYC, not Spamhaus’s. I’ve always known that Spamhaus is an incompetent and corrupt organization, but I didn’t fully realize how bad they are until now. Also, shoutout to NIC.LOL for happily taking my cash without providing any support in this matter whatsoever.

This serves as a harsh reminder that the once fun place we called the internet is dead and that everything these days is controlled by corporations which you’re always at the mercy of. It also highlights how misleading and inaccurate some popular posts on sites like Hacker News can be, e.g. “Become unbannable from your email”. They’re not just lacking in detail; they’re obviously wrong about the unbannable part. After some back-and-forth, I managed to get back online and set up the SearXNG instance. The instance will be available to members of the community channel. Additionally, I’ve taken further steps to protect this website from future hostility by Spamhaus. More on that in a future status update.

Footnote: The artwork was generated using AI and further botched by me using the greatest image manipulation program. Learn why.

The Tymscar Blog 1 week ago

OpenAI Demo'd Fixing Issue #2472 Live. It's Still Open.

During OpenAI’s GPT-5 launch event, they demoed the model’s ability to fix real bugs in production code. Live on stage. In their own repository. The kind of demo that makes CTOs reach for their credit cards and engineers nervously update their resumes. There’s just one small problem: the fix they promised to merge “right after the show” is still sitting there, unmerged, three and a half months later. At exactly 1 hour and 7 minutes into their launch video, they started working on issue #2472 in their openai-python repository.

Lambda Land 1 week ago

Typst for Your Code Blocks

I started using Typst about a month ago to write my dissertation proposal. I had seen Typst before and decided to keep an eye on it as it matured. While it still is very much in development, it is mature enough that I was able to rewrite my dissertation proposal from an org-mode → LaTeX pipeline to pure Typst in about an hour with no major hiccups. In fact, most things got simpler as a consequence of using Typst.

Typst is a typesetting system written in Rust designed to be a replacement for LaTeX. LaTeX is the de-facto standard for typesetting technical documents thanks to its unsurpassed support for rendering mathematical formulae and its attention to excellent typography. Both LaTeX and Typst operate by transforming a markup language into an output format like PDF.

I am working on a presentation to give as part of my oral defense of my dissertation proposal. (Note: I am not defending my dissertation yet—first I have to justify my plan of research to my PhD committee.) I found a way to use Typst to get gorgeous source code blocks at minimal cost. I like having good syntax highlighting in my technical presentations, but getting properly highlighted code used to be either shoddy or labor-intensive. The tradeoff: I will still be using the highlight-each-word technique when I need to show some code and simulate editing it; the “Magic Move” transition in Keynote makes these kinds of code-editing demos easy to build and easy for the audience to follow. However, the majority of the time I’m just displaying code on the screen.

I built a Typst template and associated theme file for code blocks. Now, if I have some code I want to put on a slide, I write a short Typst file, compile it, and get a PDF file with a transparent background that looks like this: (That’s obviously a PNG file so that it displays nicely here on the web. The real output of that command is a PDF file.)
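A minimal sketch of the kind of Typst file described (this is illustrative, not the author’s actual template; the page and text options are from Typst’s documented API, and the font choice is an assumption):

````typst
// Auto-size the page to the content and make the background transparent.
#set page(width: auto, height: auto, fill: none, margin: 0.5em)
#set text(size: 14pt)

// A fenced raw block with a language tag gets syntax highlighting for free.
```rust
fn main() {
    println!("Hello, Typst!");
}
```
````

Compiling a file like this (e.g. with `typst compile snippet.typ`; the filename is assumed) yields a tightly cropped PDF whose background stays transparent thanks to `fill: none`.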
I can take that PDF file with a transparent background and drop it straight into my Keynote presentation. Typst takes care of all the syntax highlighting and it’s been good enough for my needs.

Typst is still pretty new software. It has some rough edges, and I will not be asking conferences to support Typst for their submissions until all those corners have been smoothed out. However, I am hopeful for Typst’s future, and anywhere I can get away with just submitting a PDF without the source, I will be using Typst. The things that Typst does better than LaTeX right now:

Good typography and bibliography support. It can work with BibLaTeX files, so you can start using Typst without having to rewrite your whole bibliography. Citation syntax is simple and easy to figure out. Typst still has a bit of a way to go before it does everything that the venerable LaTeX Microtype package does, but it’s making progress in this area.

Free and open-source; you can contribute on their GitHub repository. It is written in Rust and the code seems to be well-organized. They have a hosted collaboration platform that is proprietary; you can subscribe to this, and the funds spent here go towards paying a few full-time developers to work on both the closed-source collaboration platform and improving the open-source compiler. I think this is a neat model and I hope it lets Typst get off the ground and get the adoption it will need to survive and (hopefully!) supplant LaTeX as the typesetting system of choice for technical audiences.

Incredibly friendly syntax and rendering model. I went from not knowing anything about Typst to reproducing my résumé perfectly in an hour. I even made use of fancy things like functions.

Excellent documentation. Did I mention how quickly I learned how to use Typst? It is easy to find the thing you want to customize.

Instantaneous build times. Anyone who works with LaTeX will be familiar with 20+ second build times. Typst is so fast that it can live-rerender documents multiple times a second.


Gemini 2.5 Pro system prompt

After a disillusioning exchange with Gemini yesterday and user "spijdar" on Hacker News providing some insight into the system prompt, I was curious about it and dumped it. Not sure if this is well-known info or not? Anyways, here it goes: I think it explains how something like my aforementioned conversation can happen very easily. (The block isn't even labeled explicitly, the poor model needs to figure out on its own what it refers to? Did an engineer just do ? Or maybe those are some invisible tokens that the model knows but just can't regurgitate back?)

Bill Mill 2 weeks ago

Licensing will not save us

I enjoy this piece by Erlend Sogge Heggen, which argues that we, the open source developers, ought not to freely give away our work, because it advantages the capitalists and fascists that are leveraging the fruits of our labor to make untold millions for themselves without giving back to the community they're building off or the world at large.

I've been using computers long enough to remember how hard it was to get your hands on software that was interesting and useful. Without open source software, I'm certain that computing would not have progressed as far as it has, and that I and many others would not have the careers we enjoy because we wouldn't have found a way in. The class of business leaders who have built on open source software (and who often started as developers themselves) has taken a heavy toll on the world without returning the value they owe, but I also fear returning to a world where a privileged class of people has access to the source code for every important application, and interested people have to choose whether to break the law to satisfy their curiosity.

More and more developers are playing around with licensing to try and defend themselves from the predatory practices of the tech elite, and I'm here for it - but that cannot be the whole solution. Only organization and community can protect the fruits of OSS (or a future OSS-like?) labor from exploitation. It can't be one community, because there are many different aims, cultures, and viewpoints, but there can be many interlinked communities sharing tools, knowledge and practice. The sooner we start building the practices, habits and (less importantly) software to make it easy to build, maintain, and own communities of practice, the sooner we will make it possible to share the fruits of our labor in a less-destructive manner. Licensing alone won't save us; we need to build stable social organizations and learn how to empower them.

(think) 2 weeks ago

Burst-driven Development: My Approach to OSS Projects Maintenance

I’ve been working on OSS projects for almost 15 years now. Things are simple in the beginning - you’ve got a single project, no users to worry about, and all the time and focus in the world. Things changed quite a bit for me over the years, and today I’m the maintainer of a couple of dozen OSS projects, mostly in the realms of Emacs, Clojure and Ruby. People often ask me how I manage to work on so many projects besides having a day job, which obviously takes up most of my time. My recipe is quite simple and I refer to it as “burst-driven development”.

Long ago I realized that it’s totally unsustainable for me to work effectively in parallel on several quite different projects. That’s why I normally keep a closer eye on my bigger projects (e.g. RuboCop, CIDER, Projectile and nREPL), where I try to respond quickly to tickets and PRs, while I typically do (focused) development on only 1-2 projects at a time. There are often (long) periods when I barely check a project, only to suddenly decide to revisit it and hack vigorously on it for several days or weeks. I guess that’s not ideal for the end users, as some of them might feel that I “undermaintain” some (smaller) projects much of the time, but this approach has worked very well for me for quite a while.

The time I’ve spent developing OSS projects has taught me that:

few problems require immediate action
you can’t always have good ideas for how to improve a project
sometimes a project is simply mostly done and that’s OK
less is more
“hammock time” is important

To illustrate all of the above with an example, let me tell you a bit about copilot.el 0.3. I became the primary maintainer of copilot.el about 9 months ago. Initially there were many things about the project that frustrated me and that I wanted to fix and improve. After a month of relatively focused work I had mostly achieved my initial goals, and I put the project on the back burner for a while, although I kept reviewing PRs and thinking about it in the background. Today I remembered I hadn’t done a release there in quite a while, and 0.3 was born.

Tomorrow I might remember about some features in Projectile that have been in the back of my mind for ages and finally implement them. Or not. I don’t have any planned order in which I revisit my projects - I just go wherever my inspiration (or the current problems related to the projects) takes me.

And that’s a wrap. Nothing novel here, but I hope some of you will find it useful to know how I approach the topic of multi-project maintenance. The “job” of a maintainer is sometimes fun, sometimes tiresome and boring, and occasionally quite frustrating. That’s why it’s essential to have a game plan for dealing with it that doesn’t take a heavy toll on you and make you eventually hate the projects that you lovingly developed in the past. Keep hacking!
