Posts in Open-source (20 found)
Robin Moffatt Yesterday

Alternatives to MinIO for single-node local S3

In late 2025 the company behind MinIO decided to abandon it to pursue other commercial interests. As well as upsetting a bunch of folk, it also put the cat amongst the pigeons of many software demos that relied on MinIO to emulate S3 storage locally, not to mention build pipelines that used it for validating S3 compatibility. In this blog post I’m going to look at some alternatives to MinIO. Whilst MinIO is a lot more than 'just' a glorified tool for emulating S3 when building demos, my focus here is going to be on what is the simplest replacement. In practice that means the following:

- Must have a Docker image. So many demos are shipped as Docker Compose, and no-one likes brewing their own Docker images unless really necessary.
- Must provide S3 compatibility. The whole point of MinIO in these demos is to stand in for writing to actual S3.
- Must be free to use, with a strong preference for an Open Source (per OSI definition) licence, e.g. Apache 2.0.
- Should be simple to use for a single-node deployment.
- Should have a clear and active community and/or commercial backer. Any fule can vibe-code some abandon-ware slop, or fork a project in a fit of enthusiasm—but MinIO stood the test of time until now and we don’t want to be repeating this exercise in six months' time.
- Bonus points for excellent developer experience (DX), smooth configuration, good docs, etc.

What I’m not looking at is, for example, multi-node deployments, distributed storage, production support costs, GUI capabilities, and so on. That is, this blog post is not aimed at folk who were using MinIO as self-managed S3 in production. Feel free to leave a comment below though if you have useful things to add in this respect :)

My starting point for this is a very simple Docker Compose stack: DuckDB to read and write Iceberg data that’s stored on S3, provided by MinIO to start with. You can find the code here. The Docker Compose is pretty straightforward:

- DuckDB, obviously, along with
- Iceberg REST Catalog
- MinIO (S3 local storage)
- a MinIO CLI container, used to automagically create a bucket for the data.

When I insert data into DuckDB, it ends up in Iceberg format on S3, here in MinIO. In each of the samples I’ve built you can run the included check to verify it (a generic way of doing this with the AWS CLI is sketched further below).

Let’s now explore the different alternatives to MinIO, and how easy they are to switch MinIO out for. I’ve taken the above project and tried to implement it with as few changes as possible to use the replacement for MinIO. I’ve left the MinIO S3 client in place, since that’s no big deal to replace if you want to rip out MinIO completely (s3cmd, CLI, etc etc).

💾 Example Docker Compose
Version tested:
✅ Docker image (5M+ pulls)
✅ Licence: Apache 2.0
✅ S3 compatibility
Ease of config: 👍👍
Very easy to implement, and seems like a nice lightweight option.

💾 Example Docker Compose
Version tested:
Ease of config: ✅✅
✅ Docker image (100k+ pulls)
✅ Licence: Apache 2.0
✅ S3 compatibility
RustFS also includes a GUI:

💾 Example Docker Compose
Version tested:
✅ Docker image (5M+ pulls)
✅ Licence: Apache 2.0
✅ S3 compatibility
Ease of config: 👍
This quickstart is useful for getting bare-minimum S3 functionality working. (That said, I still just got Claude to do the implementation…). Overall there’s not too much to change here; a fairly straightforward switchout of Docker images, but the auth does need its own config file (which, as with Garage, I inlined in the Docker Compose).
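Whichever backend you end up with, the smoke test is the same: point any S3 client at the local endpoint and list the bucket. As a rough sketch with the AWS CLI (the endpoint port, bucket name and credentials below are placeholders; substitute whatever your Docker Compose file actually defines):

    # credentials and endpoint as defined in the docker-compose.yml (placeholders here)
    export AWS_ACCESS_KEY_ID=admin
    export AWS_SECRET_ACCESS_KEY=password
    export AWS_REGION=us-east-1

    # list the Iceberg data and metadata files that DuckDB wrote
    aws --endpoint-url http://localhost:9000 s3 ls --recursive s3://warehouse/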
SeaweedFS comes with its own basic UI which is handy: The SeaweedFS website is surprisingly sparse and at a glance you’d be forgiven for missing that it’s an OSS project, since there’s a "pricing" option and the title of the front page is "SeaweedFS Enterprise" (and no GitHub link that I could find!). But an OSS project it is, and a long-established one: SeaweedFS has been around with S3 support since its 0.91 release in 2018. You can also learn more about SeaweedFS from these slides, including a comparison chart with MinIO.

💾 Example Docker Compose
Version tested:
✅ Docker image (also outdated ones on Docker Hub with 5M+ pulls)
✅ Licence: Apache 2.0
✅ S3 compatibility
Ease of config: 👍
Formerly known as S3 Server, CloudServer is part of a toolset called Zenko, published by Scality. It drops in to replace MinIO pretty easily, but I did find it slightly tricky at first to disentangle the set of names (CloudServer/Zenko/Scality) and what the actual software I needed to run was. There’s also a slightly odd feel that the docs link to an outdated Docker image.

💾 Example Docker Compose
Ease of config: 😵
Version tested:
✅ Docker image (1M+ pulls)
✅ Licence: AGPL
✅ S3 compatibility
I had to get a friend to help me with this one. As well as the container, I needed another to do the initial configuration, as well as a TOML config file which I’ve inlined in the Docker Compose to keep things concise. Could I have sat down and RTFM’d to figure it out myself? Yes. Do I have better things to do with my time? Also, yes. So, Garage does work, but gosh…it is not just a drop-in replacement in terms of code changes. It requires different plumbing for initialisation, and it’s not simple at that either. A simple example: . Excellent for production hygiene…overkill for local demos, and in fact somewhat of a hindrance TBH.

💾 Example Docker Compose
Version tested:
✅ Docker images (1M+ pulls)
✅ Licence: Apache 2.0
✅ S3 compatibility
Ease of config: 😵
Ozone was spun out of Apache Hadoop (remember that?) in 2020, having been initially created as part of the HDFS project back in 2015. It does work as a replacement for MinIO, but it is not a lightweight alternative; neither I nor Claude could figure out how to deploy it with any fewer than four nodes. It gives heavy Hadoop vibes, and I wouldn’t be rushing to adopt it for my use case here.

I took one look at the installation instructions and noped right out of this one! Ozone (above) is heavyweight enough; I’m sure both are great at what they do, but they are not a lightweight container to slot into my Docker Compose stack for local demos.

Everyone loves a bake-off chart, right?

- gaul/s3proxy (Git repo): Single contributor (Andrew Gaul)
- RustFS (Git repo): Fancy website but not much detail about the company
- SeaweedFS (Git repo): Single contributor (Chris Lu), Enterprise option available
- Zenko CloudServer (Git repo): Scality (commercial company), 5M+ Docker pulls (outdated version)
- Garage (Git repo): NGI/NLnet grants
- Apache Ozone (Git repo): Apache Software Foundation

(1) Docker pulls is a useful signal but not an absolute one given that a small number of downstream projects using the image in a frequently-run CI/CD pipeline could easily distort this figure.

I got side-tracked into writing this blog because I wanted to update a demo in which MinIO was currently used. So, having tried them out, which of the options will I actually use?

- SeaweedFS - yes.
- S3Proxy - yes.
- RustFS - maybe, but very new project & alpha release.
- CloudServer - yes, maybe?
  Honestly, put off by it being part of a suite and worrying I’d need to understand other bits of it to use it—probably unfounded though.
- Garage - no, config too complex for what I need.
- Apache Ozone - lol no.

I mean to cast no shade on those options against which I’ve not recorded a "yes"; they’re probably excellent projects, but just not focussed on my primary use case (simple & easy to configure single-node local S3).

A few parting considerations to bear in mind when choosing a replacement for MinIO:

- Governance. Whilst all the projects are OSS, only Ozone is owned by a foundation (ASF). All the others could, in theory, change their licence at the drop of a hat (just like MinIO did).
- Community health. What’s the "bus factor"? A couple of the projects above have a very long and healthy history—but from a single contributor. If they were to abandon the project, would someone in the community fork and continue to actively develop it?

0 views
Kev Quirk Yesterday

Linux in the Air

Sal talks about how Linux is going through somewhat of a revival at the moment, as well as some of his own thoughts on the whole Mac vs Windows vs Linux debacle. Read Post → I think a lot of this Linux revival is thanks to a perfect storm going on in the OS space, namely:

- Microsoft forcing many users to buy new hardware because of arbitrary hardware requirements, as well as forcing users to have an online account.
- Apple completely screwing up macOS Tahoe with their Liquid Glass update.

I’ve been back on Linux (specifically Ubuntu) since I bought my Framework 13, and I’ve been very happy. The only issues I’ve really had are with some apps being blurry under Wayland, but I’ve been able to easily work around these issues. Sal has had some similar problems with Wayland, but has also managed to work around them. My son also runs Linux on his iMac, and I’m about to replace Windows 10 on my wife’s X1 Carbon with Ubuntu too. So we’re going to be a Linux household very soon. And you know what? It’s fine. My son doesn’t know (or care) that he’s running Linux. My wife will be in the same boat - as long as she can check her emails, browse the web, and manage our finances in a spreadsheet, she’s good. Linux-based operating systems are great, and I’m thrilled they’re going through this revival. If you’re thinking about switching, I’d implore you to do so - remember you can always try before you “buy” with a live USB. So there’s no commitment required. If you do switch, please remember to donate to your distro of choice. ❤ Thanks for reading this post via RSS. RSS is great, and you're great for using it. ❤️ You can reply to this post by email, or leave a comment.

0 views
Simon Willison 3 days ago

My answers to the questions I posed about porting open source code with LLMs

Last month I wrote about porting JustHTML from Python to JavaScript using Codex CLI and GPT-5.2 in a few hours while also buying a Christmas tree and watching Knives Out 3. I ended that post with a series of open questions about the ethics and legality of this style of work. Alexander Petros on lobste.rs just challenged me to answer them , which is fair enough! Here's my attempt at that. You can read the original post for background, but the short version is that it's now possible to point a coding agent at some other open source project and effectively tell it "port this to language X and make sure the tests still pass" and have it do exactly that. Here are the questions I posed along with my answers based on my current thinking. Extra context is that I've since tried variations on a similar theme a few more times using Claude Code and Opus 4.5 and found it to be astonishingly effective. I decided that the right thing to do here was to keep the open source license and copyright statement from the Python library author and treat what I had built as a derivative work, which is the entire point of open source. After sitting on this for a while I've come down on yes, provided full credit is given and the license is carefully considered. Open source allows and encourages further derivative works! I never got upset at some university student forking one of my projects on GitHub and hacking in a new feature that they used. I don't think this is materially different, although a port to another language entirely does feel like a slightly different shape. Now this one is complicated! It definitely hurts some projects because there are open source maintainers out there who say things like "I'm not going to release any open source code any more because I don't want it used for training" - I expect some of those would be equally angered by LLM-driven derived works as well. I don't know how serious this problem is - I've seen angry comments from anonymous usernames, but do they represent genuine open source contributions or are they just angry anonymous usernames? If we assume this is real, does the loss of those individuals get balanced out by the increase in individuals who CAN contribute to open source because they can now get work done in a few hours that might previously have taken them a few days that they didn't have to spare? I'll be brutally honest about that question: I think that if "they might train on my code / build a derived version with an LLM" is enough to drive you away from open source, your open source values are distinct enough from mine that I'm not ready to invest significantly in keeping you. I'll put that effort into welcoming the newcomers instead. The much bigger concern for me is the impact of generative AI on demand for open source. The recent Tailwind story is a visible example of this - while Tailwind blamed LLMs for reduced traffic to their documentation resulting in fewer conversions to their paid component library, I'm suspicious that the reduced demand there is because LLMs make building good-enough versions of those components for free easy enough that people do that instead. I've found myself affected by this for open source dependencies too. The other day I wanted to parse a cron expression in some Go code. Usually I'd go looking for an existing library for cron expression parsing - but this time I hardly thought about that for a second before prompting one (complete with extensive tests) into existence instead. 
I expect that this is going to quite radically impact the shape of the open source library world over the next few years. Is that "harmful to open source"? It may well be. I'm hoping that whatever new shape comes out of this has its own merits, but I don't know what those would be. I'm not a lawyer so I don't feel credible to comment on this one. My loose hunch is that I'm still putting enough creative control in through the way I direct the models for that to count as enough human intervention, at least under US law, but I have no idea. I've come down on "yes" here, again because I never thought it was irresponsible for some random university student to slap an Apache license on some bad code they just coughed up on GitHub. What's important here is making it very clear to potential users what they should expect from that software. I've started publishing my AI-generated and not 100% reviewed libraries as alphas, which I'm tentatively thinking of as "alpha slop" . I'll take the alpha label off once I've used them in production to the point that I'm willing to stake my reputation on them being decent implementations, and I'll ship a 1.0 version when I'm confident that they are a solid bet for other people to depend on. I think that's the responsible way to handle this. That one was a deliberately provocative question, because for a new HTML5 parsing library that passes 9,200 tests you would need a very good reason to hire an expert team for two months (at a cost of hundreds of thousands of dollars) to write such a thing. And honestly, thanks to the existing conformance suites this kind of library is simple enough that you may find their results weren't notably better than the one written by the coding agent. You are only seeing the long-form articles from my blog. Subscribe to /atom/everything/ to get all of my posts, or take a look at my other subscription options .

1 view
James Stanley 1 week ago

A parametric mannequin for FreeCAD

I am toying with the idea of building a car. Check out Cyclekarts: they are small and simple go-kart-like vehicles styled like ~1920s sports cars. Most importantly, check out Geoff May's "Maybug", which is road legal! How cool is that? But I don't want to build a cyclekart as such, they are a bit too small and underpowered. Geoff's cyclekart is road legal under the "heavy quadricycle" classification, which is way easier than full car spec, the main requirements being to stay under 450kg (under 250kg if you don't want seatbelts), and under 20hp, and then you get regulated more like a motorbike instead of a car. Cyclekart engines are generally under 10hp so I would find something better. And cyclekarts are normally single-seaters and ideally I would like to have room for 2. Anyway, I wanted to mess about in FreeCAD and see what sort of size and layout would work, and I found that I didn't have a good idea of how big it would need to be to fit people inside it. So I have made a mannequin for FreeCAD. Get it on github. It is based on the "average male" dimensions from this diagram that I found online. You can change the dimensions of the body using the "S" Spreadsheet in the model, and reposition the limbs by selecting one of the LinkGroups in the tree (they are named with a "Joint" suffix) and rotating using the Transform tool. You will want to use at least FreeCAD 1.1, otherwise the Transform tool rotates about the centroid instead of the origin. Here is my mannequin contorted to fit on the toy tractor: If you want to use it, I recommend saving a local copy of mannequin.FCStd, editing the dimensions to suit your body if required, and then copying and pasting him into whatever projects you want mannequins in. There are other FreeCAD mannequins available, in particular Mannequin_mp from the FreeCAD library. But I didn't manage to find one that can have the joints posed without laboriously having to relocate everything downstream of that joint so that it stays connected.

0 views
daniel.haxx.se 1 week ago

curl 8.18.0

Download curl from curl.se !

the 272nd release
5 changes
63 days (total: 10,155)
391 bugfixes (total: 13,376)
758 commits (total: 37,486)
0 new public libcurl function (total: 100)
0 new curl_easy_setopt() option (total: 308)
0 new curl command line option (total: 273)
69 contributors, 36 new (total: 3,571)
37 authors, 14 new (total: 1,430)
6 security fixes (total: 176)

This time there are no fewer than six separate vulnerabilities announced:

- CVE-2025-13034: skipping pinning check for HTTP/3 with GnuTLS
- CVE-2025-14017: broken TLS options for threaded LDAPS
- CVE-2025-14524: bearer token leak on cross-protocol redirect
- CVE-2025-14819: OpenSSL partial chain store policy bypass
- CVE-2025-15079: libssh global knownhost override
- CVE-2025-15224: libssh key passphrase bypass without agent set

There are a few changes this time, mostly around dropping support for various dependencies:

- drop support for VS2008 (Windows)
- drop Windows CE / CeGCC support
- drop support for GnuTLS < 3.6.5
- gnutls: implement CURLOPT_CAINFO_BLOB
- openssl: bump minimum OpenSSL version to 3.0.0

See the release presentation video for a walk-through of some of the most important/interesting fixes done for this release, or go check out the full list in the changelog.
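If you want to check which version and TLS backend your own curl build uses (relevant to the GnuTLS and OpenSSL changes above), curl can report that itself:

    # prints the curl/libcurl version, the TLS library it was built against, and the enabled features
    curl -V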

0 views
daniel.haxx.se 1 week ago

6,000 curl stickers

I am heading to FOSDEM again at the end of January. I go there every year and I have learned that there is a really sticker-happy audience there. The last few times I have been there, I have given away several thousands of curl stickers. As I realized I did not actually have a few thousand stickers left, I had to restock. I consider stickers a fun and somewhat easy way to market the curl project. It helps us get known and seen out there in the world. The stickers are paid for by curl donations. Thanks to all of you who have donated! This time I ordered the stickers from stickerapp.se. They have a rather fancy web UI editor and tools to make sure the stickers become exactly the way I want them. I believe the total order price was actually slightly cheaper than the previous provider I used. I ordered five classic curl sticker designs and I introduced a new one. Here is the full set: Six different curl stickers

- Die cut curl logo 7.5cm x 2.8cm – the classic “small” curl logo sticker. (bottom left in the photo)
- Die cut curl logo 10cm x 3.7cm – the slightly larger curl logo sticker. (top row in the photo)
- Rounded rectangle 7.5cm x 4.1cm – yes we curl, the curl symbol and my face. (mid left in the photo)
- Oval 7.5cm x 4cm – with the curl logo. (bottom right in the photo)
- Round 2.5cm x 2.5cm – small curl symbol. (in the middle of the photo) My favorite. Perfect for the backside of a phone. Fits perfectly in the logo on the lid of a Framework laptop.
- Round 4cm x 4cm – curl symbol in a slightly larger round version. The new sticker variant in the set. (on the right side in the middle row in the photo)

The quality and feel of the products are next to identical to previous sticker orders. They look great! I got 1,000 copies of each variant this time. The official curl logo, the curl symbol, the colors and everything related is freely available and anyone is welcome to print their own stickers at will: https://curl.se/logo/ I bring curl stickers to all events I go to. Ask me! There is no way to buy stickers from me or from the curl project. I encourage you to look me up and ask for one or a few. At FOSDEM I try to make sure the wolfSSL stand has plenty to hand out, since it is a fixed geographical point that might be easier to find than me.

0 views
Harper Reed 1 week ago

Remote Claude Code: programming like it was the early 2000s

So so many friends have asked me how I use Claude Code from my phone. I am always a bit surprised, because a lot of this type of work I have been doing for nearly 25 years (or more!) and I always forget that it is partially a lost art. This is how it used to be. We didn’t have fancy IDEs, and fancy magic to deploy stuff. We had to ssh (hah. Telnet!) into a machine and work with it through the terminal. It ruled. It was a total nightmare. It was also a lot of fun. One of my favorite parts of the early 2000s was hanging out in IRC channels and just participating in the most ridiculous community of tech workers. A very very fun time. #corporate on efnet! There is a lot of nostalgia for that time period - but for the most part the tooling sucked. The new IDEs, and magic deploy systems have made it so that you do not have to deal with a terminal to get shit done. And then… Claude Code sashays into the room and fucks up the vibe. Or creates a new vibe? Who knows. Anyway. We are all using terminals now and it is hilarious and fun. So let’s vibe.

Thinking about terminals probably, Leica M11, 12/2025

The conversations I have with people about Claude Code start normally, and almost without exception end with “I wish I could do this from my phone.” Well.. I am here to tell you that it is easy! And accessible! First things first - there are a couple really neat startups that are solving this in a very different way than I work. I think they are awesome. My favorite example of this is superconductor (great name!). They allow you to run and instantiate a bunch of agents (Claude Code, Amp, Codex, etc) and interact with them remotely. They are also a really great team! Another example is happy coder. An open source magical app that connects to your Claude Code. It is theoretically pretty good, and I know some people who love it. I couldn’t get it to work reliably.

One of my core values is: I want to just ssh into shit. That is kind of one of my general hobbies. Can I ssh into this thing? If yes, then I am happy. If no, then how can I make it so I can ssh into it. When it came to figuring out how to use Claude Code on my phone, the obvious answer was: ssh into my computer from my phone, and run claude. Turns out this is pretty straight forward. Let’s break it down. I use an iPhone, so I will be talking about iPhone apps. There are good android apps to do this too! There are maybe 4 things you need to solve for: the workstation, the network, the terminal client, and keys/identity. As a form of tldr, here are my personal answers:

- workstation: Mac with constant power and fast internet
- network: Tailscale
- client: blink
- tools: TMUX, some magic scripts, and Claude Code

Let’s break it down: You will need to access your workstation from anywhere. I use a Mac and linux boxes for this. Linux is easy: Just make sure openssh-server is installed. Test that you can ssh into it - and bam. Typically if you are using a box from a Claude provider, this is built into the program. Macs are a bit harder. You need to enable ssh, and then for extra credit you need to enable screen sharing. Once this is done you should theoretically be able to remotely connect to your computer. It is very important you try to connect to it from another computer that is on the same network. Figure out your local IP (192.168.xxx.yyy), and then ssh to your local IP from another machine (or from the same machine). As long as you can connect to it - then the next step will be super easy. If you can’t connect to it, ask chatgpt wtf is going on. (A rough sketch of this step on a Mac is below.) Once you can reliably SSH into your machine, then it is time to get Tailscale working. There are a few alternatives (zero tier, etc) and I am sure they are good.
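Before moving on to Tailscale, here is a rough sketch of the “get SSH working locally first” step on a Mac (the IP address is a placeholder, and the exact commands can vary between macOS versions, so treat this as an outline rather than gospel):

    # on the Mac you want to reach: enable Remote Login (SSH)
    sudo systemsetup -setremotelogin on

    # find the Mac's address on your local network (en0 is typically the main interface)
    ipconfig getifaddr en0

    # from another machine on the same network: test the connection
    ssh yourname@192.168.1.23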
Tailscale is friends, and they are awesome. Having used them since before they launched, I can promise that it is a life changer. Install the Tailscale client on all your machines. Tailscale will magically create a network that only you have access to (or anyone else you add to your network). You can then access any of your machines from any of your machines. I.e. your phone can instantly connect to your workstation while your workstation is in Chicago, and your phone is in Tokyo. You don’t have to poke a hole in a firewall, do magical networking, or learn how to do magical networking. It just works. It is a beautiful product. There is a deep bench of Tailscale features that you should check out eventually - but for today, just use it for networking. Since you were able to ssh into your machine before (I hope!) - now you can test it with your fancy new Tailscale ip address or magic name. And you can do that from any device that is on your Tailscale network. Like.. your phone! This means network is solved!

This is where some personal preference comes in. You now need to pick a terminal client that you like to use, and feels good to use. Lots of my friends like prompt, and termius. Both are great choices. I personally really like blink. It is a bit nerdier, and when you open it, it just drops you into a shell. Immediately. No interface, no nonsense. Just a shell. It is a wild app. You can use their blink build product to host a lil dev server for yourself! I wanted to use their build product - but the default and unchangeable user was root, and I cannot bring myself to seriously use a product that drops you into a server as the root user. lol Anyway, blink is for me! And since you set up Tailscale and ssh, you can just type the ssh command and it will magically connect. You can use the command in blink to set up keys, and hosts, etc. Highly worthwhile. Now you are inside of your workstation! Now you can really rip some tokens!

I checked the build status RIGHT AFTER THIS SHOT, Leica M11, 01/2026

Tools! You could just navigate to the directory where your Claude projects live, and run Claude. But then when your phone went to sleep or whatever - your ssh client may disconnect. And you would have to redo the connection, run claude --continue, and live this life of lots of typing. We don’t use AI tools to type more! There are three tools that are super helpful: If you are using SSH a lot you need to set up some SSH keys, and then push them around to all your servers. I am not going to tell you how to do that, since you should already have keys somewhere to integrate with source code repositories. If you want to generate new, or have questions - the terminal clients may help you. My guess is that you already have some. Couple tips:

- use a password to unlock your key!
- use an ssh agent to make that process not horrible
- on a Mac you can have your key be unlocked by your keychain (which is also where your Claude Code api key is!)

Mosh is from a forgotten time (2012!) when the internet was slow, and connections were spotty. What mosh does is allow your “fragile” ssh connection to roam around with you. You use it just like ssh. But now when you shut your laptop, or forget about your phone - the connection will pop back up when you surface it again. It allows the connection to survive a lot of the various environmental things that would normally derail a ssh connection. This RULES. I was on a train the other day and totally lost internet while we were in a tunnel. Then we emerged and internet came back. My ssh (really mosh) session just paused for a moment, and then BAM! Was back and Claude was telling me it had deleted my entire workstation, and was going to the beach!
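A mosh session over Tailscale looks like this (assuming mosh is installed on both ends; “workstation” here is a placeholder for your machine’s Tailscale name):

    # connect over Tailscale; the session survives sleep, roaming, and flaky networks
    mosh you@workstation

    # then, on the remote machine, attach to (or create) a tmux session and run claude inside it
    tmux new-session -A -s work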
There are some gotchas about ssh-agent, keys and mosh that I won’t get into. If things are weird, just google it or ask chatgpt. Tbh, I am a screen guy. But it is 2026 and TMUX is a better choice. It allows you to have a long running terminal process that you can reattach to. This is helpful even without a remote connection. It also acts as a multiplexer - allowing for multiple terminal sessions in a single terminal window. You can have 7 Claude Codes running simultaneously and just tab through them as needed. TMUX is what a lot of the “Claude Code orchestration” hacks are built upon. You should check them out. I haven’t yet found one that works how I want it - even though there are some good ones! I just want to use regular old TMUX, and a bunch of weird helpers. My TMUX config is here: harperreed/dotfiles/.TMUX.conf. Be forewarned, that the key combos are wacky! TMUX is the key that allows me to run a dozen Claude Code instances, and then walk away from my workstation, pick up my phone and continue hacking.

To make things consistent, and easier, I have a few scripts that really tie the room together. First, I have my claude code aliases: These allow me to start or pick up my last work. You are dangerously skipping permissions, right? Another helpful script is this one to help me unlock my keychain: On a Mac, Claude Code stores its api key in your keychain, and it then requires you to unlock your keychain to work. This also has the added benefit of unlocking your ssh keys if they are using the keychain for your ssh-agent. My TMUX starter script is really handy. I just type a short command and it magically starts a new named session, or attaches to the named session if it already exists (a sketch of the idea is below). This script specifically names my sessions based on the workstation I use it from. This allows me to see what computer I am in via the terminal title. My workflow is:

- ssh into my workstation
- burn tokens

Now you can tell Claude to do weird shit from your phone 24 hours a day. It rules. Don’t do it while driving. Thank you for using RSS. I appreciate you. Email me
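The actual scripts aren’t shown above, so here is a minimal sketch of what a “unlock the keychain, then start or attach to a named TMUX session” helper could look like (the session-naming scheme and keychain path are illustrative assumptions, not the real dotfiles):

    #!/usr/bin/env bash
    # Hypothetical "t" helper: attach to a tmux session named after this machine, creating it if needed.
    session="${1:-$(hostname -s)}"

    # On a Mac, unlock the login keychain so ssh-agent and Claude Code can read their keys.
    # (Prompts for your login password; skip this on Linux.)
    if [[ "$(uname)" == "Darwin" ]]; then
      security unlock-keychain ~/Library/Keychains/login.keychain-db
    fi

    # -A attaches if the named session already exists, otherwise creates it.
    exec tmux new-session -A -s "$session"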

0 views
xenodium 1 week ago

Bending Emacs - Episode 9: World times

A new year, a new Bending Emacs episode, so here it goes: Bending Emacs Episode 9: Time around the world. Emacs comes with a built-in world clock. To customize displayed timezones, use: Each entry requires a valid timezone string (as per the entries in your system's timezone database) and a display label. I wanted a slightly different experience than the built-in command (more details here), so I built the time-zones package. It is available on MELPA, so you can install it with: Toggle help or add cities with their dedicated keys. Shifting time is also possible, in addition to other features available via the help menu. Hope you enjoyed the video! Liked the video? Please let me know. Got feedback? Leave me some comments. Please go like my video, share with others, and subscribe to my channel. If there's enough interest, I'll continue making more videos! Enjoying this content or my projects? I am an indie dev. Help make it sustainable by ✨ sponsoring ✨ Need a blog? I can help with that. Maybe buy my iOS apps too ;)

0 views
Grumpy Gamer 1 week ago

This Time For Sure

I think 2026 is the year of Linux for me. I know I’ve said this before, but it feels like Apple has lost its way. Liquid Glass is the last straw, plus their draconian desire to lock everything down gives me moral pause. It is only a matter of time before we can’t run software on the Mac that wasn’t purchased from the App Store. I use Linux on my servers so I am comfortable using it, just not in a desktop environment. Some things I worry about:

- A really good C++ IDE. I get a lot of advice for C++ IDEs from people who only use them now and then or just to compile, but don’t live in them all day and need to visually step into code and even ASM. I worry about CLion but am willing to give it a good try. Please don’t suggest an IDE unless you use them for hardcore C++ debugging.
- I will still make Mac versions of my games and code signing might be a problem. I’ll have to look, but I don’t think you can do it without a Mac. I can’t do that on a CI machine because for my pipeline the CI machine only compiles the code. The .app is built locally and that is where the code signing happens. I don’t want to spin up a CI machine to make changes when the engine didn’t change. My build pipeline is a running bash script, and I don’t want to be hopping between machines just to do a build (which I can do 3 or 4 times a day).
- The only monitor I have is a Mac Studio monitor. I assume I can plug a Linux machine into it, but I worry about the webcam. It wouldn’t surprise me if Apple made it Mac only.
- The only keyboard I have is a Mac keyboard. I really like the keyboard, especially how I can unlock the computer with the touch of my finger. I assume something like this exists for Linux.
- I have an iPhone but I only connect it to the computer to charge it. So not an issue.
- I worry about drivers for sound, video, webcams, controllers, etc. I know this is all solvable but I’m not looking forward to it. I know from releasing games on Linux our number-one complaint is related to drivers.
- Choosing a distro. Why is this so hard? A lot of people have said that it doesn’t really matter so just choose one. Why don’t more people use Linux on the Desktop? This is why. To a Linux desktop newbie, this is paralyzing.
- I’m going to miss Time Machine for local backups. Maybe there is something like it for Linux.
- I really like the Apple M processors. I might be able to install Linux on Mac hardware, but then I really worry about drivers. I just watched this video from Veronica Explains on installing Linux on Mac silicon.
- The big big worry is that there is something big I forgot. I need this to work for my game dev. It’s not a weekend hobby computer.

I’ve said I was switching to Linux before, we’ll see if it sticks this time. I have a Linux laptop but when I moved I didn’t turn it on for over a year and now I get BIOS errors when I boot. Some battery probably went dead. I’ve played with it a bit and nothing seems to work. It was an old laptop and I’ll need a new faster one for game dev anyway. This will be a long, well-thought-out journey. Stay tuned for the “2027 - This Time For Sure” post.

1 view
Manuel Moreale 1 week ago

Yearly reminder to use RSS

The year is 2026, and RSS is still, by far, the best way to keep up with sites on the web. If you already know what RSS is but you’re not currently using it, consider this a reminder for you to dust off that RSS reader of yours and put it back to use. And don’t listen to the party-poopers that claim that RSS is dead. It is not. If instead you don’t know what RSS is, here’s a very brief explanation: RSS is a technology that allows you to create your own personal feed, using an RSS reader app, where content from different sources is aggregated and displayed—usually—in reverse chronological order. The same way you use a browser to access my site, you can use an RSS reader app to access the RSS feeds available on my website. Keep in mind that not all sites have RSS feeds available. It used to be the norm, but then the web got enshittified. I wrote a longer post about RSS years ago , but the web is full of resources if you want to get into RSS. And you should, because RSS is awesome. So go get an RSS reader app , stop being spoon-fed slop by algorithmic platforms, and start consuming content at your own pace. Thank you for keeping RSS alive. You're awesome. Email me :: Sign my guestbook :: Support for 1$/month :: See my generous supporters :: Subscribe to People and Blogs

0 views
Higashi 1 week ago

Go generate meets vibes: vibe code Go one interface at a time using govibeimpl

Vibe-code Golang one interface at a time. During the holidays, I was working on a personal project in Go, and I wanted to use AI to help me do a few things (e.g. implement a downloader that downloads a file from Google Drive). However, I’m not a huge fan of having AI IDEs creating new directory structures or introducing abstractions that I need to read through and understand. Instead, I thought it would be cool to:

- define an interface that I will need to use
- expect AI to write an impl to that interface
- just expect that I will receive an instance of that interface at run time
- perhaps read about the API contract to see what concrete data types I need to pass in and read out
- profit, I guess

And then I thought it would be great to combine this with go generate, where for every interface I define, I can just attach a tag so AI can fill in the rest at compile time. Therefore, I built govibeimpl (https://github.com/yuedongze/govibeimpl), a CLI tool that works with go generate and allows me to tag an interface for AI to implement, and seamlessly integrate that as part of my development flow.
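The exact directive syntax is whatever the govibeimpl README specifies, but as a rough sketch of where it sits in a standard go generate loop:

    # run all //go:generate directives in the module (including the govibeimpl ones)
    go generate ./...

    # then build and test against the freshly generated implementations
    go build ./...
    go test ./...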

2 views
neilzone 1 week ago

yt-dlp's --download-archive flag

Today, I learned about the --download-archive flag for yt-dlp. From the readme: For instance, running yt-dlp with the flag:

- downloads the files as usual
- adds the ID of each downloaded file to archive.txt (which is probably specific to archiving from YouTube)

This means that, if the download stops working for whatever reason, you have a list of the files from the playlist which have been downloaded already. When you re-run the command, yt-dlp will not attempt to download the files for which the IDs are already listed in archive.txt. Very handy! But what if you have already started downloading a playlist, and did not use the flag? You can create a suitable file from a listing of the directory of your downloads, although exactly how you do this will depend on your preferences for interacting with a computer. In the directory of the downloaded files, I used a directory listing to get a file with the list of downloaded files. I then used vim’s integrated search-and-replace function to get the format right. (Yes, I could have done it with sed or awk, without vim. I did not.)
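As a hedged sketch of both halves (the playlist URL is a placeholder, and the rebuild step assumes yt-dlp’s default output template, where the video ID sits in square brackets just before the extension):

    # download a playlist, recording each completed video as "youtube <id>" in archive.txt
    yt-dlp --download-archive archive.txt "https://www.youtube.com/playlist?list=PLACEHOLDER"

    # rebuild archive.txt from files already on disk (assumes the default
    # "%(title)s [%(id)s].%(ext)s" template and YouTube's 11-character IDs)
    ls | sed -nE 's/.*\[([A-Za-z0-9_-]{11})\]\.[^.]+$/youtube \1/p' > archive.txt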

0 views

Can I finally start using Wayland in 2026?

Wayland is the successor to the X server (X11, Xorg) to implement the graphics stack on Linux. The Wayland project was actually started in 2008, a year before I created the i3 tiling window manager for X11 in 2009 — but for the last 18 years (!), Wayland was never usable on my computers. I don’t want to be stuck on deprecated software, so I try to start using Wayland each year, and this article outlines what keeps me from migrating to Wayland in 2026. For the first few years, Wayland rarely even started on my machines. When I was lucky enough for something to show up, I could start some toy demo apps in the demo compositor Weston. Around 2014, GNOME started supporting Wayland. KDE followed a few years later. Major applications (like Firefox, Chrome or Emacs) have been slower to adopt Wayland and needed users to opt into experimental implementations via custom flags or environment variables, until very recently, or — in some cases — still as of today. Unfortunately, the driver support situation remained poor for many years. With nVidia graphics cards, which are the only cards that support my 8K monitor, Wayland would either not work at all or exhibit heavy graphics glitches and crashes. In the 2020s, more and more distributions announced looking to switch to Wayland by default or even drop their X11 sessions, and RHEL is winding down their contributions to the X server. Modern Linux distributions like Asahi Linux (for Macs, with their own GPU driver!) clearly consider Wayland their primary desktop stack, and only support X11 on a best-effort basis. So the pressure to switch to Wayland is mounting! Is it ready now? What’s missing?

I’m testing with my lab PC, which is a slightly upgraded version of my 2022 high-end Linux PC. I describe my setup in more detail in stapelberg uses this: my 2020 desk setup. Most importantly for this article, I use a Dell 8K 32" monitor (resolution: 7680x4320!), which, in my experience, is only compatible with nVidia graphics cards (I try other cards sometimes). Hence, both the lab PC and my main PC contain an nVidia GPU:

- The lab PC contains an nVidia GeForce RTX 4070 Ti.
- The main PC contains an nVidia GeForce RTX 3060 Ti.

(In case you’re wondering why I use the older card in my PC: I had a crash once where I suspected the GPU, so I switched back from the 4070 to my older 3060.)

For many years, nVidia drivers were entirely unsupported under Wayland. Apparently, nVidia refused to support the API that Wayland was using, insisting that their EGLStreams approach was superior. Luckily, with nVidia driver 495 (late 2021), they added support for GBM (Generic Buffer Manager). But, even with GBM support, while you could now start many Wayland sessions, the session wouldn’t run smoothly: You would see severe graphics glitches and artifacts, preventing you from getting any work done. The solution for the glitches was explicit sync support: because the nVidia driver does not support implicit sync (like AMD or Intel), Wayland (and wlroots, and sway) needed to get explicit sync support. Sway 1.11 (June 2025) and wlroots 0.19.0 are the first versions with explicit sync support. With the nVidia driver now working per se with Wayland, unfortunately that’s still not good enough to use Wayland in my setup: my Dell UP3218K monitor requires two DisplayPort 1.4 connections with MST (Multi Stream Transport) support. This combination worked just fine under X11 for the last 8+ years. While GNOME successfully configures the monitor with its native resolution of 7680x4320@60, the monitor incorrectly shows up as two separate monitors in sway.
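As an aside, you can see whether a monitor is being exposed as a tiled/MST display by inspecting its connector properties; under X11, for example, tiled monitors carry a TILE property (output names and the exact property layout will differ per setup):

    # show connected outputs together with any TILE properties (tiled/MST displays)
    xrandr --props | grep -E -e ' connected' -e 'TILE'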
The reason behind this behavior is that wlroots does not support the property (issue #1580 from 2019) . Luckily, in 2023, contributor sent draft merge request !4154 , which adds support for the property. But, even with the patch, my monitor would not work correctly: The right half of the monitor would just stay black. The full picture is visible when taking a screenshot with , so it seems like an output issue. I had a few exchanges about this with starting in August 2025 (thanks for taking a look!), but we couldn’t figure out the issue. A quarter later, I had made good experiences regarding debugging complex issues with the coding assistant Claude Code (Opus 4.5 at the time of writing), so I decided to give it another try. Over two days, I ran a number of tests to narrow down the issue, letting Claude analyze source code (of sway, wlroots, Xorg, mesa, …) and produce test programs that I could run manually. Ultimately, I ended up with a minimal reproducer program (independent of Wayland) that shows how the DRM property does not work on nVidia (but does work on Intel, for example!): I posted a bug report with a video in the nVidia forum and hope an nVidia engineer will take a look! Crucially, with the bug now identified, I had Claude implement a workaround: copy the right half of the screen (at ) to another buffer, and then display that buffer , but with . With that patch applied, for the first time, I can use Sway on my 8K monitor! 🥳 By the way, when I mentioned that GNOME successfully configures the native resolution, that doesn’t mean the monitor is usable with GNOME! While GNOME supports tiled displays, the updates of individual tiles are not synchronized, so you see heavy tearing in the middle of the screen, much worse than anything I have ever observed under X11. GNOME/mutter merge request !4822 should hopefully address this. During 2025, I switched all my computers to NixOS . Its declarative approach is really nice for doing such tests, because you can reliably restore your system to an earlier version. To make a Wayland/sway session available on my NixOS 25.11 installation, I added the following lines to my NixOS configuration file ( ): I also added the following Wayland-specific programs to : Note that activating this configuration kills your running X11 session, if any. Just to be sure, I rebooted the entire machine after changing the configuration. With this setup, I spent about one full work day in a Wayland session. Trying to actually get some work done uncovers issues that might not show in casual testing. Most of the day was spent trying to fix Wayland issues 😅. The following sections explain what I have learned/observed. Many years ago, when Wayland became more popular, people asked on the i3 issue tracker if i3 would be ported to Wayland. I said no: How could I port a program to an environment that doesn’t even run on any of my computers? But also, I knew that with working a full-time job, I wouldn’t have time to be an early adopter and shape Wayland development. This attitude resulted in Drew DeVault starting the Sway project around 2016, which aims to be a Wayland version of i3. I don’t see Sway as competition. Rather, I thought it was amazing that people liked the i3 project so much that they would go through the trouble of creating a similar program for other environments! What a nice compliment! 😊 Sway aims to be compatible with i3 configuration files, and it mostly is. 
If you’re curious, here is what I changed from the Sway defaults, mostly moving key bindings around for the NEO keyboard layout I use, and configuring / blocks that I formerly configured in my file :

I encountered the following issues with Sway:

- I don’t know how I can configure the same libinput settings that I had before. See for what I have on X11. Sway’s available settings do not seem to match what I used before.
- The mouse cursor / pointer seems laggy, somehow?! It seems to take longer to react when I move the trackball, and it also seems to move less smoothly across the screen. Simon Ser suspects that this might be because hardware cursor support might not work with the nVidia drivers currently.
- No Xwayland scaling: programs started via Xwayland are blurry (by default) or double-scaled (when setting ). This is a Sway-specific limitation: KDE fixed this in 2022. From Sway issue #2966, I can tell that Sway developers do not seem to like this approach for some reason, but that’s very unfortunate for my migration: The backwards compatibility option of running older programs through Xwayland is effectively unavailable to me.
- Sometimes, keyboard shortcuts seem to be executed twice! Like, when I focused the first of five Chrome windows in a stack and moved that window to another workspace, two windows would be moved instead of one. I also see messages like this one (not exactly correlated with the double-shortcut problem, though): …and that seems wrong to me. My high-end Linux PC certainly isn’t slow by any measure.

When I first started GTK programs like GIMP or Emacs, I noticed all fonts were way too large! Apparently, I still had some scaling-related settings that I needed to reset like so: Debugging tip: Display GNOME settings using (stored in ). Some programs like apparently need an explicit environment variable, otherwise they run in Xwayland. Weird. I also noticed that font rendering is different between X11 and Wayland! The difference is visible in Chrome browser tab titles and the URL bar, for example: At first I thought that maybe Wayland defaults to different font-antialiasing and font-hinting settings, but I tried experimenting with the following settings (which default to and ), but couldn’t get things to render like they did before: Update: Thanks to Hugo for pointing out that under Wayland, GTK3 ignores the configuration file and uses dconf exclusively! Setting the following dconf setting makes the font rendering match:

The obvious replacement for is . I quickly ran into a difference in architecture between the two programs:

- i3lock shows a screen locker window. When you kill i3lock, the screen is unlocked.
- When you kill swaylock, you end up in a Red Screen Of Death. To get out of this state, you need to restart swaylock and unlock. You can unlock from the command line by sending to the process.

This was very surprising to me, but is by (Wayland) design! See Sway issue #7046 for details, and this quote from the Wayland protocol: “The compositor must stop rendering and provide input to normal clients. Instead the compositor must blank all outputs with an opaque color such that their normal content is fully hidden.” OK, so when you start via SSH for testing, remember to always unlock instead of just cancelling with Ctrl+C. And hope it never crashes.
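For reference, the unlock-from-SSH dance looks roughly like this (swaylock documents SIGUSR1 as its unlock signal, but double-check swaylock(1) on your version):

    # from an SSH session: tell a running swaylock to unlock instead of killing it
    pkill -USR1 swaylock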
I used to start via a wrapper script, which turns off the monitor (input wakes it up): With Wayland, the DPMS behavior has to be implemented differently, with :

The i3 window manager can be extended via its IPC interface (interprocess communication). I use a few small tools that use this interface. I noticed the following issues when using these tools with Sway:

- Tools using the Go package need a special socket path hook currently. We should probably include transparent handling in the package to ease the transition.
- Tools started with from the Sway config unexpectedly keep running even when you exit Sway ( ) and log into a new session!
- My workspace-populate-for-i3 did not work: Sway does not implement i3’s layout saving/restoring because Drew decided in 2017 that the feature is “too complicated and hacky for too little benefit”. Too bad. I have a couple of layouts I liked that I’ll need to replicate differently. Sway does not match workspace nodes with criteria. There’s pull request #8980 (posted independently, five days ago) to fix that.
- My wsmgr-for-i3 worked partially: Restoring workspaces ( ) worked. Sway’s command implementation does not seem to pick up workspace numbers from the target name.

On X11, I use the rxvt-unicode (URxvt) terminal emulator. It has a couple of quality-of-life features that I don’t want to lose, aside from being fast and coming with a minimal look:

- Backwards search through your scrollback (= command output)
- Opening URLs in your scrollback using keyboard shortcuts
- Opening a new terminal window in the same working directory
- Updating the terminal title from your shell

In earlier experiments, I tried Alacritty or Kitty, but wasn’t happy with either. Thanks to anarcat’s blog post “Wayland: i3 to Sway migration”, I discovered the terminal emulator , which looks like a really nice option! I started a config file to match my URxvt config, but later I noticed that at least some colors don’t seem to match (some text lines with green/red background looked different). I’m not sure why and have not yet looked into it any further. I noticed the following issues using :

- Pressing Ctrl+Enter (which I seem to do by mistake quite a bit) results in escape sequences, whereas URxvt just treats Ctrl+Enter like Enter. This can be worked around in your shell (Zsh, in my case), see foot issue #628 for details.
- Double-clicking on part of a URL with the mouse selects the URL (as expected), but without the scheme prefix! Annoying when you do want to use the mouse. I can hold Ctrl to work around this, which will make select everything under the pointer up to, and until, the next space characters.
- Starting in results in not having color support for programs running inside the session. Probably a terminfo-related problem somehow…? I can also reproduce this issue with GNOME terminal. But with URxvt or xterm, it works.
- Selecting text highlights the text within the line, but not the entire line. This is different from other terminal emulators I am used to, but I don’t see an option to change it. Here’s a screenshot showing after triple-clicking on the right of “kthreadd”: But triple-clicking on an echo output line highlights only the contents, not the whole line:

I find Emacs’s Wayland support rather disappointing. The standard version of Emacs only supports X11, so on Sway, it starts in Xwayland. Because Sway does not support scaling with Xwayland, Emacs shows up blurry (top/background window): Native Wayland support (bottom/foreground window) is only available in the Emacs version ( on NixOS). used to be a separate branch, but was merged in Emacs 29 (July 2023). There seem to be issues with on X11 (you get a warning when starting Emacs-pgtk on X11), so there have to be two separate versions for now… Unfortunately, the text rendering looks different than native X11 text rendering! The line height and letter spacing seems different: I’m not sure why it’s different! Does anybody know how to make it match the old behavior? Aside from the different text rendering, the other major issue for me is input latency: Emacs-pgtk feels significantly slower (less responsive) than Emacs.
This was reported on Reddit multiple times (thread 1, thread 2) and Emacs bug #71591, but there doesn’t seem to be any solution. I’ll also need a solution for running Emacs remotely. Thus far, I use X11 forwarding over SSH (which works fine and with low latency over fiber connections). I should probably check out waypipe, but have not yet had a chance.

When starting Chrome and checking the debug page, things look good: But rather quickly, after moving and resizing browser windows, the GPU process dies with messages like the following and, for example, WebGL is no longer hardware accelerated: Of course, using a browser without hardware acceleration is very frustrating, especially at high resolutions. Starting Chrome with seems to work around the GPU process exiting, but Chrome still does not feel as smooth as on X11. Another big issue for me is that Sway does not open Chrome windows on the workspace on which I closed them. Support for tracking and restoring the EWMH atom was added to i3 in January 2016 and to Chrome in May 2016 and Firefox in March 2020. I typically have 5+ workspaces and even more Chrome windows at any given point, so having to sort through 10+ Chrome windows every day (when I boot my work computer) is very annoying. Simon Ser said that this would be addressed with a new Wayland protocol ( , merge request !18 ).

I work remotely a lot, so screen sharing is a table-stakes feature for me. I use screen sharing in my browser almost every day, in different scenarios and with different requirements. In X11, I am used to the following experience with Chrome. I click the “Window” tab and see previews of my windows. When I select the window and confirm, its contents get shared: To get screen sharing to work in Wayland/sway, you need to install and (the latter is specific to wlroots, which sway uses). With these packages set up, this is the behavior I see:

- I can share a Chrome tab.
- I can share the entire monitor.
- I cannot share a specific window (the entire monitor shows up as a single window).

This is a limitation of (and others), which should be addressed with the upcoming Sway 1.12 release. I changed my NixOS configuration to use sway and wlroots from git to try it out. When I click on the “Window” tab, I see a chooser in which I need to select a window: After selecting the window, I see only that window’s contents previewed in Chrome: After confirming, I get another chooser and need to select the window again. Notably, there is no connection between the previewed window and the chosen window in this second step — if I chose a different window, that’s what will be shared: Now that window is screenshared (so the feature now works; nice!), but unfortunately in low resolution, meaning the text is blurry for my co-workers. I reported this as xdg-desktop-portal-wlr issue #364 and it seems like the issue is that the wrong scale factor is applied. The patch provided in the issue works for me. But, on a high level, the whole flow seems wrong:

- I shouldn’t see a chooser when clicking on Chrome’s “Window” tab. I should see previews of all windows.
- I should be able to select the window in Chrome, not with a separate chooser.

I also noticed a very annoying glitch when output scaling is enabled: the contents of (some!) windows would “jump around” as I was switching between windows (in a tabbed or stacked container) or between workspaces.
I first noticed this in the foot terminal, where the behavior is as follows:

1. Switch focus to another terminal by changing workspaces, or by switching focus within a stacked or tabbed container.
2. The new terminal shows up with its text contents slightly offset.
3. Within a few milliseconds, the text jumps to the correct position.

I captured the following frame with my iPhone just as the content was moving a few pixels, shortly after switching focus to this window. Later, I also noticed that Chrome windows briefly show up blurry after switching. My guess is that because Sway sets the scale factor to 1 for invisible windows, when switching focus you see a scale-1 content buffer until the application has provided its scale-3 content buffer.

dunst supports Wayland natively. I tried dunst 1.13 and did not notice any issues.

rofi works on Wayland since v2.0.0 (2025-09-01). I use rofi with rofimoji as my Emoji picker. For text input, a different tool seems to work instead of the one I used on X11; I didn't notice any issues.

For screenshots, instead of my usual choice I tried a different tool, but unfortunately its flag to select the window to capture is rather cumbersome to use (and it captures at 1x scale). Does anyone have any suggestions for a good alternative?

Finally, I made some progress on getting a Wayland session to work in my environment! Before giving my verdict on this Wayland/Sway experiment, let me explain that my experience on X11/i3 is really good. I don't see any tearing or other artifacts or glitches in my day-to-day computer usage. I don't use a compositor, so my input latency is really good: I once measured it at approximately 763 μs in Emacs on X11 with my custom-built keyboard (plus output latency); see kinX: latency measurement (2018).

So from my perspective, switching from this existing, flawlessly working stack (for me) to Sway only brings downsides. I observe new graphical glitches that I didn't have before. The programs I spend the most time in (Chrome and Emacs) run noticeably worse. Because of the different implementations, or because I need to switch programs entirely, I encounter a ton of new bugs. For the first time, an on-par Wayland experience seems within reach, but realistically it will still require weeks or even months of work. In my experience, debugging sessions quickly take hours, as I need to switch graphics cards and rewire monitors to narrow down bugs. I don't have the time to contribute much to fixing these numerous issues, unfortunately, so I'll keep using X11/i3 for now.

For me, a Wayland/Sway session will be ready as my daily driver when:

- Sway no longer triggers some key bindings twice sometimes ("ghost key presses").
- I no longer see glitches when switching between windows or workspaces in Sway.
- Chrome is continuously hardware-accelerated.
- Chrome windows are restored to their previous workspace when starting.
- Emacs either runs via Xwayland with Sway making scaling work, or the pgtk variant fixes its input latency issues and can be made to render text the same as before somehow.

The lab PC contains an nVidia GeForce RTX 4070 Ti. The main PC contains an nVidia GeForce RTX 3060 Ti.

I don't know how I can configure the same libinput settings that I had before (see my X11 config for what I have there). Sway's available settings do not seem to match what I used before.

The mouse cursor / pointer seems laggy, somehow?! It seems to take longer to react when I move the trackball, and it also seems to move less smoothly across the screen. Simon Ser suspects that this might be because hardware cursor support might not work with the nVidia drivers currently.

No Xwayland scaling: programs started via Xwayland are blurry (by default) or double-scaled (when changing the relevant setting). This is a Sway-specific limitation: KDE fixed this in 2022. From Sway issue #2966, I can tell that Sway developers do not seem to like this approach for some reason, but that's very unfortunate for my migration: the backwards-compatibility option of running older programs through Xwayland is effectively unavailable to me.

Sometimes, keyboard shortcuts seem to be executed twice! Like, when I focused the first of five Chrome windows in a stack and moved that window to another workspace, two windows would be moved instead of one.
I also see messages like this one (not exactly correlated with the double-shortcut problem, though): …and that seems wrong to me. My high-end Linux PC certainly isn't slow by any measure.

i3lock shows a screen locker window; when you kill i3lock, the screen is unlocked. When you kill swaylock, you end up in a Red Screen Of Death. To get out of this state, you need to restart swaylock and unlock. You can unlock from the command line by sending a signal to the process.
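As an aside: the IPC interface that the i3 tools mentioned earlier are built on is a small framed protocol, as documented for i3 and also spoken by Sway — the magic string "i3-ipc", a 32-bit payload length, a 32-bit message type, then a JSON payload, in both directions. Here is a minimal sketch in C that queries the workspace list; it assumes a running Sway (or i3) session and omits error and short-read handling.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(void) {
  /* Sway exports SWAYSOCK; i3 exports I3SOCK. */
  const char *path = getenv("SWAYSOCK");
  if (path == NULL)
    path = getenv("I3SOCK");
  if (path == NULL) {
    fprintf(stderr, "no SWAYSOCK/I3SOCK in the environment\n");
    return 1;
  }

  int fd = socket(AF_UNIX, SOCK_STREAM, 0);
  struct sockaddr_un addr = { .sun_family = AF_UNIX };
  strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
  if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
    perror("connect");
    return 1;
  }

  /* Request header: magic, payload length, message type (host byte order).
     Type 1 is GET_WORKSPACES; the payload is empty. */
  uint32_t len = 0, type = 1;
  write(fd, "i3-ipc", 6);
  write(fd, &len, sizeof(len));
  write(fd, &type, sizeof(type));

  /* The reply uses the same framing; the payload is JSON. */
  char magic[6];
  read(fd, magic, sizeof(magic));
  read(fd, &len, sizeof(len));
  read(fd, &type, sizeof(type));
  char *payload = malloc(len + 1);
  read(fd, payload, len);
  payload[len] = '\0';
  printf("%s\n", payload);

  free(payload);
  close(fd);
  return 0;
}
```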

Carlos Becker 1 week ago

LIVE from GitHub Universe: Inside the GitHub Secure Open Source Fund

I had a chat with Greg Cochran (GitHub), Christian Grobmeier (log4j), Michael Geers (evcc), and Camila Maia (ScanAPI) about the GitHub Secure Open Source Fund. It was recorded on the last day of GitHub Universe 2025.

Rob Zolkos 1 week ago

A Month Exploring Fizzy

In their book Getting Real, 37signals talk about Open Doors — the idea that you should give customers access to their data through RSS feeds and APIs. Let them get their information when they want it, how they want it. Open up and good things happen. Fizzy takes that seriously. When 37signals released Fizzy with its full git history available, they didn't just open-source the code — they shipped a complete API and webhook system too. The doors were wide open baby! So I dove in — reading the source, building tools, and sharing what I found. Every time curiosity kicked in, there was a direct path from "I wonder if…" to something I could actually try and execute. This post is a catch-all for my very bubbly month of December.

- Fizzy Webhooks: What You Need to Know — I set up a local webhook receiver to capture and document every event type Fizzy sends. The post covers the payload structures, signature verification, and ideas for what you could build on top of the webhook system.
- The Making of Fizzy, Told by Git — I prompted Claude Code to analyze the entire git history and write a documentary about the development.
- Vanilla CSS is all you need — Diving into the no-build CSS architecture across Campfire, Writebook, and Fizzy.
- Fizzy Design Evolution: A Flipbook from Git — I went through each day of commits, got the application to a bootable state, seeded the database, and took a screenshot. Then I stitched those screenshots into a flipbook video with a soundtrack made from Fizzy's own audio files.
- Fizzy's Pull Requests: Who Built What and How — An analysis of who owned which domains in the Fizzy codebase. The post maps contributors to their expertise areas and curates learning paths through the PRs for topics like Turbo/Hotwire, caching, AI integration, multi-tenancy, and webhooks.

The open API invited experimentation. I spotted gaps that would make integration easier for other developers, so I filled them:

- fizzy-api-client — Ruby client for the Fizzy API.
- fizzy-client-python — Python client for the Fizzy API.
- fizzy-cli — Command-line interface for the Fizzy API, built first in Ruby and then migrated to Go for portability.
- fizzy-skill — An AI agent skill for interacting with Fizzy.
- n8n-nodes-fizzy — An n8n community node that brings Fizzy into your automation workflows. Create cards, manage assignments, and react to real-time events through webhook triggers.
- Migration tools — I built these to make it easier to try Fizzy without starting from scratch. Migrating your existing issues and boards gives you an immediate sense of how it could work for you, without having to manually create test cards. You can see your real data running in Fizzy from day one, which I think makes it easier to evaluate and decide if it's useful for you.
  - linear2fizzy — Migrate Linear issues
  - jira2fizzy — Migrate JIRA issues
  - asana2fizzy — Migrate Asana tasks
  - gh2fizzy — Migrate GitHub Issues
  - prd2fizzy — Convert PRDs to Fizzy cards

I also contributed a few small fixes back to the main repository:

- #2114 — Remove unused install.svg and its CSS class
- #2111 — Remove unpaired view-transition-name
- #2095 — Fix typo: minues → minutes
- #2094 — Fix duplicate word: use use → use
- #2093 — Add QrCodesController test
- #2088 — Fix view-transition-name typo in public card show

Fizzy is released under the O'Saasy License, which is similar in spirit to MIT but includes a restriction on offering the software as a competing hosted or SaaS product. You can modify and self-host it, but you can't repackage it and sell it as your own hosted service. I built O'Saasy Directory to make it easy to find applications released under this license. Beyond Fizzy, the directory includes other submitted projects where the source is available to read and modify. If you have built something under the O'Saasy License, visit the submission page to add yours.

Having built the Fizzy CLI and fizzy-api-client Rubygem, I saw some fun opportunities to build little lab experiments showing how Fizzy could be integrated with - both to power up some functionality that isn't there yet and to create boards in some interesting ways (e.g. Movie Quiz). I got the idea for this on a flight to Australia with no internet. Just a pad of paper and a pen. I should probably do that more often, as a bunch of ideas for all sorts of products came out.

CarbonationLabs is not a product per se. It's an open source Rails application designed to be run locally, where you can interact with the hosted or self-hosted versions of Fizzy. If anything, I hope it inspires the creation of little problem-solving workflows for Fizzy that wouldn't be built into the main product (the problem being too niche). The API and webhook system is really flexible, and most of your bespoke problems could be solved with some creative thinking. Introducing Carbonation Labs - fun ways to add experiments to and extend Fizzy (repo link and demo videos below)🧵

I built carbonation.dev to bring together all the tools, libraries, and integrations that I and others in the community have created for Fizzy. It's a directory covering API clients (Ruby, Python, JavaScript), CLI tools with packages for macOS, Arch Linux, Debian, Fedora, and Windows, integrations for Claude Code and other AI agents, n8n, Raycast, Telegram, and MCP servers, plus migration tools for GitHub, Linear, Asana, and Jira. If you've built something for Fizzy, I'd love to feature it. You can submit a pull request to add your tool to the directory.

Building the Fizzy CLI pushed me into some new territory. I created an AUR package for Arch Linux users, set up a Homebrew tap for macOS, published my first Python package to PyPI, and made an n8n plugin — all firsts for me. While I already knew Go, rewriting the CLI in it was a fun exercise, and building TUIs for the setup and skill commands introduced me to terminal UI libraries I hadn't used before. Gosh, it was fun!

If you want to get better at Rails, Fizzy is a great place to study real-world code. And in my view, if you want to work at 37signals as a Rails programmer, digging into Fizzy — along with Campfire and Writebook — is a solid way to learn how they approach Rails architecture and design decisions. Submitting PRs is also a good way to contribute back while learning — just be respectful of the contribution policy. The review discussions give you a window into how to reason about problems, spot opportunities, and make trade-offs.

This month pushed parts of my creative thinking that weren't gone, but definitely weren't being stressed. Like any muscle, use it or lose it. The direction of what to explore came from my own curiosity and a habit of poking around under the hood, and AI helped me move a lot faster once I knew where I wanted to go. Most of this information already exists somewhere — Google, Stack Overflow, documentation — but having AI right there alongside me as a partner was thrilling. All of this was made possible because a team left the doors open. No one asked me to step inside; I decided to invest the time and do the work to see what I could build, learn and share. I do this at work too—when I can—looking for opportunities I can shape, experiment with, and get genuinely excited about. Most importantly, I had fun and I hope you enjoyed following along.

Xe Iaso 1 week ago

2026 will be my year of the Linux desktop

TL;DR: 2026 is going to be The Year of The Linux Desktop for me. I haven't booted into Windows in over 3 months on my tower and I'm starting to realize that it's not worth wasting the space for. I plan to unify my three SSDs and turn them all into btrfs drives on Fedora. I've been merely tolerating Windows 11 for a while, but recently it's gotten to the point where it's just absolutely intolerable. Somehow Linux on the desktop has gotten so much better by not even doing anything differently. Microsoft has managed to actively sabotage the desktop experience through years of active disregard and spite against their users. They've managed to take some of their most revolutionary technological innovations (the NT kernel's hybrid design allowing it to restart drivers, NTFS, ReFS, WSL, Hyper-V, etc.) then just shat all over them with start menus made with React Native, control-alt-delete menus that are actually just webviews, and forcing Copilot down everyone's throats to the point that I've accidentally gotten stuck in Copilot on a handheld gaming PC and had to hard reboot the device to get out of it. It's as if the internal teams at Microsoft have had decades of lead time in shooting each other in the head with predictable results. To be honest, I've had enough. I'm going to go with Fedora on my tower and Bazzite (or SteamOS) on my handhelds. I think that Linux on the desktop is ready for the masses now, not because it's advanced in a huge leap/bound. It's ready for the masses to use because Windows has gotten so much actively worse that continuing to use it is an active detriment to user experience and stability. Not to mention with the price of RAM lately, you need every gigabyte you can get, and desktop Linux lets you waste less of it on superfluous bullshit that very few people actually want. Oh, and if I want a large language model integrated into my tower, I'm going to write the integration myself with the model running on hardware I can look at. At the very least, when something goes wrong on Linux you have log messages that can let you know what went wrong so you can search for it.

Circus Scientist 1 week ago

Happy New Year (resolutions)

I feel like I got a lot done in 2025 – but there is still more to do when it comes to the Open Source POV Poi and other related projects I am working on. First – a quick personal note: I didn't get as much circus-related work as usual in December. I wrote a blog post where I blamed it on Google and it went VIRAL on Hacker News. More than 150 000 people visited my site in 48 hours! It seems I am not alone in thinking that Google Search is going away. Read the post here: https://www.circusscientist.com/2025/12/29/google-is-dead-where-do-we-go-now/ – note: read until the end, there is a happy ending! Since some people are still having trouble with the ESP32 version of SmartPoi, I am going to first update the ESP8266 (D1 Mini) Arduino version. I haven't touched the code in more than a year, but it still doesn't have the single image display I added to the C3 version. Look out for this update before the end of January. Next I have to finish my ESP32 C3 poi – I have one fully soldered and all of the components and pieces for both poi on my desk. This will be a reference for anyone trying to make their own, and hopefully after doing a full build we can work out the best working version – without power issues or restarting or anything else. I also have everything ready for a cheap IR LED poi set. This is going to help anyone (like me at the moment) who is on a budget. I will be doing a full tutorial on that. Happy New Year and a big shout out to my Patreon Supporters. Did you know you can buy SmartPoi (and Smart Hoops!) in Brazil right now? Commercial design and build by Flavio ( https://www.instagram.com/hoop_roots_lovers/ ). I also am in contact with a supporter from the Dominican Republic who is developing his own version, which will also be for sale soon. Not to mention the Magic Poi being built over in Australia.

seated.ro 2 weeks ago

glimpses of the future

Glimpse can now build call graphs, showing you exactly how functions relate to each other in your codebase. This works by parsing your code with tree-sitter, extracting function definitions and calls, then resolving those calls to their actual definitions. Sometimes tree-sitter based resolution isn’t enough. Maybe you’re dealing with dynamic dispatch, generics, or just a language with particularly complex module resolution. For this, Glimpse can use LSPs to resolve definitions semantically. This spins up actual LSP servers and uses goto-definition / goto-implementation to resolve calls. It’s slower, but accurate. Glimpse will attempt to auto-install the LSP servers for you. Glimpse eagerly caches whatever it finds into an incremental index. But you can choose to pre-build the index ahead of time for instant queries. The index stores all the definitions, calls, and resolutions so subsequent queries are fast. Glimpse now supports: Go, Rust, C, C++, Python, TypeScript, JavaScript, Zig, Java, Scala, Nix, Lua, Ruby, C#, Kotlin, Swift, and Haskell. Each language has custom tree-sitter queries for extracting definitions, calls, and imports. The grammars are downloaded and compiled automatically on first use.
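Glimpse's own code isn't shown here, but the extraction step it describes can be sketched with the tree-sitter C API. The query strings and the use of the Python grammar (via the conventional tree_sitter_python entry point) are my own illustrative choices, not Glimpse's:

```c
// Sketch: extract function definitions and call sites with tree-sitter,
// the raw material for a call graph. Assumes the tree-sitter runtime and
// a Python grammar exposing tree_sitter_python() are linked in.
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <tree_sitter/api.h>

const TSLanguage *tree_sitter_python(void);

int main(void) {
  const char *source =
      "def helper():\n"
      "    pass\n"
      "\n"
      "def main():\n"
      "    helper()\n";

  TSParser *parser = ts_parser_new();
  ts_parser_set_language(parser, tree_sitter_python());
  TSTree *tree =
      ts_parser_parse_string(parser, NULL, source, (uint32_t)strlen(source));

  // One pattern per fact we care about: definitions and calls.
  const char *query_src =
      "(function_definition name: (identifier) @def)"
      "(call function: (identifier) @call)";
  uint32_t err_offset;
  TSQueryError err_type;
  TSQuery *query = ts_query_new(tree_sitter_python(), query_src,
                                (uint32_t)strlen(query_src),
                                &err_offset, &err_type);

  TSQueryCursor *cursor = ts_query_cursor_new();
  ts_query_cursor_exec(cursor, query, ts_tree_root_node(tree));

  TSQueryMatch match;
  while (ts_query_cursor_next_match(cursor, &match)) {
    for (uint16_t i = 0; i < match.capture_count; i++) {
      TSNode node = match.captures[i].node;
      uint32_t name_len;
      const char *capture = ts_query_capture_name_for_id(
          query, match.captures[i].index, &name_len);
      uint32_t start = ts_node_start_byte(node);
      uint32_t end = ts_node_end_byte(node);
      // A real tool would record these and resolve each @call to the
      // matching @def (falling back to an LSP for the hard cases).
      printf("%.*s: %.*s\n", (int)name_len, capture,
             (int)(end - start), source + start);
    }
  }

  ts_query_cursor_delete(cursor);
  ts_query_delete(query);
  ts_tree_delete(tree);
  ts_parser_delete(parser);
  return 0;
}
```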

マリウス 2 weeks ago

Updates 2025/Q4

This post includes personal updates and some open source project updates. As the year comes to a close, I’d like to begin this update by sharing a famous (and sadly now gone ) tweet . My goal is not only to remind those who have seen it before, but also to introduce it to those who haven’t, along with the thoughts it inevitably sparks. It’s a way to preserve this rare gem of social media for posterity. Below is the original post, with added speaker information for easier reading. Warning: This text is a bit long. If you’d rather skip ahead to the next part of the update, click/tap here . Someday aliens are going to land their saucers in a field somewhere in New Jersey and everything is going to go just fine right up until we try to explain our calendar to them. Humans: “yeah we divide our year into a number of sub units called ‘months’ made up a number of days, and they’re not all the same length” Aliens: “I guess that’s unavoidable, if your rotations-count per orbit is a prime number” Humans: “yeah, our’s isn’t prime” Aliens: “but surely you have most of these ‘months’ the same length and just make the last one shorter or longer?” Humans: “No… They’re different lengths following no logical pattern” Aliens: “what” Humans: “and we further subdivide the months into ‘weeks’, which is 7 days.” Aliens: “ahh, so each month is an integer multiple of weeks?” Humans: “that would make sense, but no. Only one is, sometimes” Aliens: “SOMETIMES?!” Humans: “yeah our orbit around the sun isn’t an integer number of days, so we have to change the number of days to in a year from time to time” Aliens: “oh yes, a similar thing happens on Epsilon Indi 7, where they have to add an extra day every 39 years to keep holidays on track” Humans: “yeah that’s how ours work! Although the ratio doesn’t work out cleanly, so we just do every 4 years, except every 100 years, except except every 400 years” Aliens: “oh, you number your years? What’s the epoch?” Humans: “uh, it’s supposed to be the birth of a religious leader, but they got the math wrong so it’s off by 4 years, if he existed at all.” Aliens: “if? You based your calendar off the birth date of someone you’re not sure exists?” Humans: “yeah. He’s written about in a famous book but historical records are spotty.” Aliens: “interesting. I didn’t realize your planet was one of the ones with a single universal religion, that usually only happens in partial or complete hive minds.” Humans: “uhh, we’re not.” Aliens: “You’re not?!” Humans: “yeah we have multiple religions.” Aliens: “oh but they all have a common ancestor, which agrees on the existence of that leader, right?” Humans: “uh, no. Two of the big ones do, but most of the others don’t believe in him” Aliens: “YOUR CALENDAR IS BASED ON A RELIGIOUS LEADER THAT NOT EVERYONE BELIEVES IN?” Humans: “well, on his birth. And yeah, we got it wrong by a couple years.” Aliens: “OK, fine. So, you have somewhat complicated rules about when you change the length of your years, and I’m scared to ask this, but… You definitely just add or subtract that extra day at the end, right?” Humans: “…. Nope.” Aliens: “At the start of the year? " Humans: “nah. The end of the second month” Aliens: “WHY WOULD IT BE THE SECOND MONTH?” Humans: “I’m not sure, really.” Aliens: “huh. So at this point I’m dreading asking this, but how do you measure time within each day?” Humans: “oh that’s much simpler. Each day is divided into hours, each hour has minutes, and each minute has seconds.” Aliens: “ok. And 10 of each?” Humans: “10 hours? No. 
There’s 24 hours, 60 minutes, 60 seconds” Aliens: “…. I thought you said you used a base-10 counting system” Humans: “we do! Mostly. But our time system came from some long gone civilization that liked base-60 like 5000 years ago” Aliens: “and you haven’t changed it since?” Humans: “No.” Aliens: “huh. Okay, so why 24? That’s not a divisor of 60” Humans: “oh because it’s actually 12!” Aliens: “what” Humans: “yeah each day is 24 hours but they are divided into two sets of 12.” Aliens: “and that’s 5 12s, right, I see the logic here, almost. So like, after hour 12, it becomes the second half, which is 1?” Humans: “No, after 11.” Aliens: “oh, you zero-index them! So it’s hours 0-11 in the first half, then 12-23 in the second half?” Humans: “No. 12 to 11 in the first half, and again in the second half” Aliens: “please explain that before my brain melts out my mouth” Humans: “the first hour is 12. Then the next one is 1, then it goes back up to 11, then 12 again” Aliens: “that is not how numbers work. And how do you tell first 12 apart from second 12?” Humans: “oh we don’t use numbers for that!” Aliens: “you don’t number the two halves of your day?” Humans: “nah, we call them AM and PM” Aliens: “WHAT DOES THAT MEAN” Humans: “I think it’s ante-meridian and post-meridian? But I’m not sure, I dont know much Latin” Aliens: “Latin?” Humans: “yeah it’s an ancient language from an old empire which controlled a lot of the world and we still use some of their terms” Aliens: “oh, and that was the civilization that liked base-60 and set up your time system?” Humans: “that would make sense, but… No, completely different one.” Aliens: “okay, and what do you do to if you want to measure very short times, shorter than a second?” Humans: “oh we use milliseconds and microseconds” Aliens: “ahh, those are a 60th of a second and then 60th of the other?” Humans: “No. Thousandths.” Aliens: “so you switch to base-10 at last, but only for subdivisions of the second?” Humans: “yeah.” Aliens: “but at thousands, ie, ten tens tens” Humans: “yeah. Technically we have deciseconds and centiseconds, which are 1/10 of a second, and 1/100 of a second, but no one really uses them. We just use milli.” Aliens: “that seems more like a base-1000 system than a base-10 system.” Humans: “it kinda is? We do a similar thing with measures of volume and distance and mass.” Aliens: “but you still call it base-10?” Humans: “yeah” Aliens: “so let me see if I get this right: Your years are divided in 10 months, each of which is some variable number of days, the SECOND of which varies based on a complex formula… and each day is divided into two halves of 12 hours, of 60 minutes, 60 seconds, 1000 milliseconds?” Humans: “12 months, actually.” Aliens: “right, because of the ancient civilization that liked base-60, and 12 is a divisor of 60.” Humans: “No, actually, that came from the civilization that used latin. Previously there were 10.” Aliens: “what” Humans: “yeah the Latin guys added two months part of the way through their rule, adding two more months. That’s why some are named after the wrong numbers” Aliens: “you just said two things I am having trouble understanding. 1. Your months are named, not numbered? 2. THE NAMES ARE WRONG?” Humans: “yep! Our 9th month is named after the number 7, and so on for 10, 11, and 12.” Aliens: “your 12th month is named… 10?” Humans: “yeah.” Aliens: “what are the other ones named after?!” Humans: “various things. 
Mainly Gods or rulers” Aliens: “oh, from that same religion that your epoch is from?” Humans: “uh… No. Different one.” Aliens: “so you have an epoch based on one religion, but name your months based on a different one?” Humans: “yeah! Just wait until you hear about days of the week.” Aliens: “WHAT” Humans: “so yeah we group days into 7-day periods-” Aliens: “which aren’t an even divisor of your months lengths or year lengths?” Humans: “right. Don’t interrupt” Aliens: “sorry” Humans: “but we name the days of the week, rather than numbering them. Funny story with that, actually: there’s disagreement about which day starts the week.” Aliens: “you have a period that repeats every 7 days and you don’t agree when it starts?” Humans: “yeah, it’s Monday or Sunday.” Aliens: “and those names come from…” Humans: “celestial bodies and gods! The sun and moon are Sunday and Monday, for example” Aliens: “but… I looked at your planet’s orbit parameters. Doesn’t the sun come up every day?” Humans: “yeah.” Aliens: “oh, do you have one of those odd orbits where your natural satellite is closer or eclipsed every 7 days, like Quagnar 4?” Humans: “no, the sun and moon are the same then as every other day, we just had to name them something.” Aliens: “and the other days, those are named after gods?” Humans: “yep!” Aliens: “from your largest religion, I imagine?” Humans: “nah. That one (and the second largest, actually) only has one god, and he doesn’t really have a name.” Aliens: “huh. So what religion are they from? The Latin one again?” Humans: “nah, they only named one of the God-days” Aliens: “only on… SO THE OTHER DAYS ARE FROM A DIFFERENT RELIGON ENTIRELY?” Humans: “Yep!” Aliens: “the third or forth biggest, I assume?” Humans: “nah, it’s one that… Kinda doesn’t exist anymore? It mostly died out like 800 years ago, though there are some modern small revivals, of course” Aliens: “so, let me get confirm I am understanding this correctly. Your days and hours and seconds and smaller are numbered, in a repeating pattern. But your years are numbered based on a religious epoch, despite it being only one religion amongst several.” Humans: “correct so far” Aliens: “and your months and days of the week are instead named, although some are named after numbers, and it’s the wrong numbers” Humans: “exactly” Aliens: “and the ones that aren’t numbers or rulers or celestial objects are named after gods, right?” Humans: “yup!” Aliens: “but the months and the days of the week are named after gods from different religons from the epoch religion, and indeed, each other?” Humans: “yeah! Except Saturday. That’s the same religion as the month religion” Aliens: “and the month/Saturday religion is also from the same culture who gave you the 12 months system, and the names for the two halves of the day, which are also named?” Humans: “right! Well, kinda.” Aliens: “please explain, slowly and carefully” Humans: “yeah so cultures before then had a 12 month system, because of the moon. But they had been using a 10 month system, before switching to 12 and giving them the modern names” Aliens: “the… Moon? Your celestial body?” Humans: “yeah, it completes an orbit about every 27 days, so which is about 12 times a year, so it is only natural to divide the year into 12 periods, which eventually got called months” Aliens: “ok, that makes sense. Wait, no. Your orbital period is approximately 365.25 days, right?” Humans: “yeah. That’s why we do 365 or 366 based on the formula” Aliens: “but that doesn’t work. 
365 divided by 27 is ~13.5, not 12” Humans: “yeah I’m not sure why 12 was so common then. Maybe it goes back to the base 60 people?” Aliens: “okay so one final check before I file this report: Years are numbered based on a religious leader. Years always have 12 months, but the lengths of those months is not consistent between each other or between years.” Humans: “don’t forget the epoch we number our years from is wrong!” Aliens: “right, yes. And your months are named, some after a different religion, and some after numbers, but not the number the month is in the year.” Humans: “right. And when we change the month lengths, it’s the second one we change” Aliens: “how could I forget? After months you have a repeating ‘week’ of 7 days, which is named after gods from two religons, one of which is the month-naming one, and a nearly extinct one. And you don’t agree when the week starts.” Humans: “nope! My money is on Monday.” Aliens: “that’s the Monday that’s named after your moon, which supposedly influenced the commonality of the 12 months in a year cycle, despite it orbiting 13 times in a year?” Humans: “correct!” Aliens: “and as for your days, they split into two halves, named after a phrase you don’t really understand in the long dead language of the same culture that named the months and Saturday.” Humans: “Yep. I took some in college but all I remember is like, ‘boy’, ‘girl’, ‘stinky’, ‘cocksucker’” Aliens: “charming. And then each half is divided into 12 hours, but you start at 12, then go to 1, and up to 11” Humans: “all I can say is that it makes more sense on analog clocks.” Aliens: “i don’t know what that is and at this point I would prefer you not elaborate. So each of those hours is divided into 60 minutes and then 60 seconds, and this comes from an ancient civilization, but not the one that gave you the month names” Humans: “yep. Different guys. Different part of the world.” Aliens: “ok. And then after seconds, you switch to a ‘base-10’ system, but you only really use multiples of a thousand? Milliseconds and microseconds?” Humans: “right. And there’s smaller ones beyond that, but they all use thousands” Aliens: “right. Got it. All written down here. Now if you’ll excuse me, I just gotta go make sure I didn’t leave my interociter on, I’ll be right back.” The tall alien walks back into their saucer without a wave. The landing ramp closes. The ship gently lifts off as gangly landing legs retract. There’s a beat, then a sudden whooshing sound as air rushes back into the space that previously held the craft, now suddenly vacuum. NORAD alarms go off briefly as an object is detected leaving the earth’s atmosphere at a significant fraction of the speed of light. In the years to come, many technological advances are made from what was left behind, a small tablet shaped object made of some kind of artifical stone/neutrino composite material. The alien message left on screen is eventually translated to read “Untitled Document 1 has not been saved, are you sure you wish to quit? (yes) (no) (cancel)” Many years have passed, and we await the day the aliens return. They have not. As I mentioned in the previous update ( here ), my beloved 9barista coffee brewer started malfunctioning at the end of Q3, likely due to the age of the O-ring sealing the water chamber and the descaling process I performed. However, I was able to fix the machine using the official 9barista repair kit and have been using it daily ever since. 
In recent months, though, I’ve almost entirely switched to decaf coffee in an effort to reduce some recurring headaches I’ve been dealing with for a while. It doesn’t seem to be the constant consumption of caffeine causing the issue; rather, the headaches mostly appeared whenever I skipped a cup, making it seem more like a caffeine withdrawal effect. Although I continued to experience headaches in Q4, those were likely linked to being sick rather than coffee, see below . That said, both the frequency and intensity of the headaches have noticeably decreased. Toward the end of Q4, I also began experimenting with additions to my coffee, specifically Lion’s Mane , a well-known component of traditional Chinese medicine that’s often advertised as an alternative to caffeine. It’s believed to enhance focus without the jitters or cold sweats that usually come with high caffeine consumption. In mid-October, I unfortunately got hit with a heavy dose of COVID-19 , which knocked me out for three weeks and has had (once again) a lasting impact on my overall health. Since I was mostly bedbound during that time, I spent some of it exchanging COVID anecdotes with the friendly folks in the community channel . I was surprised to find that many people there had similar negative experiences, particularly in relation to post-vaccine infections. My first encounter with COVID was back in 2020, and for me, it turned out to be little more than a bad flu, with two days of fever and some headaches. I didn’t lose my sense of smell or taste, nor did I experience any long-term effects. In fact, the most troubling part of the whole COVID experience for me back then wasn’t the sickness itself, but the fear of being picked up by local authorities for having an elevated body temperature. This was especially concerning because I was still traveling the world at the time, enjoying the eerie quiet of empty airports and cities. Due to increasing social pressure, especially from governments imposing heavy travel restrictions, I was eventually pushed into getting vaccinated shortly after that. Unfortunately, my body didn’t handle the two doses very well. I experienced extreme muscle pain and a general sense of being under the weather . While those side effects faded after a few days, in the months that followed, I felt more tired and inflamed than usual, with recurring flu-like symptoms and headaches. At some point, COVID hit me again, but this time it was really bad. I ended up battling a fever around 40°C/104°F for over a week, and I was completely knocked out for almost two months. On top of that, I began experiencing cardiovascular symptoms, which persisted for months and even years afterward. The adverse effects I’d never experienced before didn’t just show up with subsequent COVID infections, but also with regular flu. There was one point when a strain of Influenza B hit me so hard that I had to visit the emergency room, which is something I’d never done before, even though I’d never received the annual flu vaccine. To this day, it feels like ever since I got the Pfizer shots (for which I had to sign a liability waiver), my health has been in a constant decline, especially whenever influenza or COVID strikes. No matter how healthy my diet or activity level, it doesn’t seem to make much of a difference. In fact, the ongoing inflammation and regular flu-like symptoms have made it especially hard to push myself during a workout or a run. 
At some point, I started digging deeper into the issue, with regular bloodwork and visits to specialists, particularly cardiologists. Unfortunately, as is often the case, no medical expert has been able to diagnose the underlying issue(s) or propose meaningful solutions. Society seems quick to ridicule those who seek to improve their health through unconventional methods, yet most people fail to recognize the globally poor state of healthcare, which leaves people stranded, regardless of how much private money they’re willing to spend to solve their problems. Long story short, will I continue to get the battletested shots for Hepatitis , Tetanus , and other dangers humanity faces? Definitely. But will I be significantly more skeptical of vaccines that didn’t undergo year-long trials and were fast-tracked by every government on Earth to curb an allegedly man-made virus that escaped a biological research facility, all while creating shareholder value ? You bet! Note: This is a complex topic, and everyone has their own personal experience. For many, the COVID shots seem to have had no negative side effects. For some, however, they did. This doesn’t mean that COVID doesn’t exist, nor that lizard overlords used it as an excuse to inject us with nanobots . Medicine certainly has its flaws, and financial interests were prioritized over absolute safety, something that’s happened in other areas as well over the past few years (e.g., Boeing ). If, however, you think there’s a pLaNdEmIc or some intentional, eViL gEnEtIc ExPeRiMeNt at play, there’s no need at all to launch your XLibre Xserver to reach out to me with fUrThEr iNfO oN tHiS tOpIc . Thank you. You might have noticed that the main menu at the top of this website has grown, now including a now page , as well as a link to Codeberg, but more on that in a second . The now page is exactly what the name suggests: a now page . Given the failure of social media, I’ve pretty much given up on maintaining a public profile for posting status updates. Up until the end of 2021, I was still actively maintaining a Mastodon account alongside a TUI client , but that eventually fell apart for multiple reasons. After that, I used Nostr for a while, but eventually gave it up too. These days, I’m somewhat active on Bluesky , though my account isn’t publicly available. I don’t have high hopes for Bluesky either, and I’ll probably delete my account there one day, at the latest when Bluesky inevitably becomes enshittified . The now page , however, is here to stay. It will continue to feature short, tweet -like updates about all sorts of things. If you’re interested, feel free to check it every once in a while. I might even activate a dedicated RSS feed for it at some point. For the past few months I’ve been silently moving most private project repositories away from GitHub towards privately hosted instances of Forgejo – a terrible name, btw – as well as many of my public GitHub projects to Codeberg . One reason to do so is… well, let me just quote Andrew Kelley here, who probably put it best: […] the engineering excellence that created GitHub’s success is no longer driving it. Priorities and the engineering culture have rotted, leaving users inflicted with some kind of bloated, buggy JavaScript framework in the name of progress. Stuff that used to be snappy is now sluggish and often entirely broken. Most importantly, Actions has inexcusable bugs while being completely neglected . 
After the CEO of GitHub said to "embrace AI or get out", it seems the lackeys at Microsoft took the hint, because GitHub Actions started "vibe-scheduling"; choosing jobs to run seemingly at random. Combined with other bugs and inability to manually intervene, this causes our CI system to get so backed up that not even master branch commits get checked. However, unlike most people who decided to migrate from GitHub to Codeberg, I won't be deleting my repositories on GitHub just yet. Instead, I've updated all my local clones to point toward Codeberg, and I've enabled synchronized pushes from Codeberg to GitHub, as I plan to continue using GitHub's workflows. "But why?!" you might ask. The reason is simple: Because I'm happy to waste Microsoft's resources on automated tests and build actions. While I could use Codeberg's Woodpecker CI or even set up my own, I'm more than content to keep using GitHub's CPU cycles for free to build my silly little projects, while hosting the primary source code repositories on Codeberg. Since there doesn't seem to be a way to disable Pull Requests on GitHub for my respective projects, I've added pull request templates that warn against opening PRs there. I've also disabled the Issues tab and updated the short descriptions to link to Codeberg. Additionally, my overview page on GitHub now links to Codeberg, with the GitHub repositories listed explicitly as GitHub mirrors. At the end of October I encountered an issue with ungoogled-chromium on my Gentoo laptop that prevented it from compiling successfully. Upon further investigation I learned that, quote: "Using the system libc++ is no longer supported". This change was driven by the Chromium project and affected my Gentoo installation, along with many others', due to the use of system libraries instead of the in-tree ones provided by Chromium. As mentioned here, this is a security concern, as users will need to trust the Chromium-provided libraries over those from their distribution. In case you've ever wondered why anyone in 2025 would still compile from source when tHe PeRfOrMaNcE bEnEfItS aRe NeGlIgIbLe, this is one of the key reasons why compiling from source still makes sense and, in fact, is more important than ever. The same projects that have historically taken a controversial stance on sensible default settings are now the ones seemingly rejecting security-critical system components in favor of their own. Tl;dr: If you're using Chromium or a Chromium-based browser (other than ungoogled-chromium on Gentoo through PF4Public's repository), it's highly likely that your browser is not using your system maintainer's libraries, but rather Chromium's in-tree ones with whatever versions and features the Chromium developers deem necessary and sensible. In what to this day remains a mystery, the keyboard switch of my key has decided that it rejects its existence and seemingly removed one of its legs, presumably in an effort to escape and start a new life. I had documented the whole incident on Keebtalk for anyone who's equally as puzzled by this as I am. I invested quite some time in pursuing my open source projects in the past quarter, hence there are a few updates to share. At the beginning of November I released Zeit v1.0.0, a full rewrite of my command-line time tracking tool. In case you missed it, I summed up everything in a dedicated post and have also published a dedicated project website that will soon act as more than just a landing page.
With 📨🚕 (MSG.TAXI) continuing to grow and evolve, Overpush has received a few important updates improving its stability with long-running XMPP connections. One thing that made me very happy throughout the debugging phase was the fact that despite the stability of Overpush not being perfect, no messages were ever lost whatsoever; they were always successfully delivered the moment the service was able to reach the target platforms (specifically XMPP in this case). :-) If you haven't yet tried Overpush yourself, I encourage you to sign up on 📨🚕 and give it a go. If you find the service useful, you'll be able to easily spin up your own Overpush instance further down the line and won't have to depend on any closed-source proprietary platform.

As those of you idling in the community channel might know, I've been actively working on internet forum software for some time now. What kick-started my efforts was the desire to set up a support and discussion forum for 📨🚕, among other things, but I was dissatisfied with the existing options. I was looking for an internet forum that:

- Can use an existing database to authenticate users and/or…
- Supports simple email/username signups.
- Ideally supports notifications and replies via email.
- Is lightweight and doesn't require a ton of runtime dependencies.
- Does not require users to have JavaScript enabled.
- Does not overwhelm me with administrative features.
- Is somewhat easily themeable.

The first thing that came to mind was phpBB, which has been around for decades and appears to be one of the few options that (unlike Discourse and Lemmy) doesn't require users to have JavaScript enabled. Sadly, phpBB is a monster. It has too many features, takes a lot of time to properly install and configure, and, more importantly, when looking at its runtime dependencies and extensions, it requires some recurring effort to keep it safe and sound. Don't get me wrong, unlike Discourse, which is frankly terrible, phpBB is a solid piece of software. However, for my use cases, I wanted something more lightweight that is easy to set up and run. None of the existing solutions, with maybe one or two exceptions like DFeed, came close to what I was looking for. And those that seemed like a good fit sadly lacked some functionalities, which would have required me to extend them in ways that would significantly alter core functionality. These changes would have likely not been merged upstream, meaning I'd probably end up maintaining my own fork anyway.

The bulletin board I'm working on is built in Go, as a single executable binary (without CGO) for all major platforms (Linux, *BSD, (maybe) Plan 9, macOS, and (maybe) Windows) that doesn't require a runtime (like Erlang/Elixir, PHP, Ruby, Python, or worse, Node.js) or even assets (e.g., HTML/CSS files) anywhere in the filesystem. It renders modern HTML on the server side and doesn't require any user-side JavaScript to be enabled. The forum will support only PostgreSQL (single- and multi-node setups), require a Redis/Valkey instance or cluster, and use S3-compatible storage for user content (e.g., profile pictures, file uploads, etc.). The platform will allow sign-ups via email and XMPP addresses, supporting notifications and replies through both services. But don't worry: OAuth authentication via popular providers will also be available. Additionally, the forum will feature a dedicated REST API that, unlike Lemmy's or Discourse's APIs, will be much easier to work with. One mid-term goal is to integrate this API into Neon Modem Overdrive, which will become its official TUI client. Short story long: I've been working on this project for a little while now and expect to release a first live demo around February '26. While many basic features are already implemented, there are still details I'd like to perfect before publishing the first version.
I'll set up a live online demo for people to try out first, and only after fine-tuning the code based on feedback will I wrap up the actual source release. The forum will be open-source and available under the SEGV license. If this sounds interesting to you and you'd like to participate in development or testing, reach out to me!

With that said, I sincerely hope you're enjoying a wonderful holiday season and gearing up for a great new year! As we wrap up 2025, I'll be taking a well-deserved break from posting here on the site. The start of 2026 is shaping up to be quite hectic, and I'm looking forward to diving into some exciting projects, especially focusing on the ▓▓▓▓▓▓▓▓▓▓▓ bulletin board system I'm building. I hope this season brings you moments of joy, relaxation, and time well spent with those who matter most. May the new year be filled with new opportunities, exciting adventures, and personal growth. I look forward to reconnecting with all of you next year! Stay safe, take care of yourselves, and I'll see you in 2026!

Max Bernstein 2 weeks ago

The GDB JIT interface

GDB is great for stepping through machine code to figure out what is going on. It uses debug information under the hood to present you with a tidy backtrace and also to determine how much machine code to print when you disassemble. This debug information comes from your compiler: Clang, GCC, rustc, etc. all produce debug data in a format called DWARF and then embed that debug information inside the binary (ELF, Mach-O, …) when you compile with debugging enabled (or equivalent).

Unfortunately, this means that by default, GDB has no idea what is going on if you break in a JIT-compiled function. You can step instruction-by-instruction and whatnot, but that's about it. This is because the current instruction pointer is nowhere to be found in any of the existing debug info tables from the host runtime code, so your backtrace is filled with unknown frames. See this example from the V8 docs:

Fortunately, there is a JIT interface to GDB. If you implement a couple of functions in your JIT and run them every time you finish compiling a function, you can get the debugging niceties for your JIT code too. See again a V8 example:

Unfortunately, the GDB docs are somewhat sparse. So I went spelunking through a bunch of different projects to try and understand what is going on.

GDB expects your runtime to expose a specially-named function and a specially-named global variable. GDB automatically adds its own internal breakpoint at this function, if it exists. Then, when you compile code, you call this function from your runtime. In slightly more detail:

1. Compile a function in your JIT compiler. This gives you a function name, maybe other metadata, an executable code address, and a code size.
2. Generate an entire ELF/Mach-O/… object in-memory (!) for that one function, describing its name, code region, and maybe other DWARF metadata such as line number maps.
3. Write a linked list node that points at your object ("symfile").
4. Link it into the linked list.
5. Call the registration function, which gives GDB control of the process so it can pick up the new function's metadata.
6. Optionally, break into (or crash inside) one of your JITed functions.
7. At some point, later, when your function gets GCed, unregister your code by editing the linked list and calling the registration function again.

This is why you see compiler projects such as V8 including large swaths of code just to make object files:

- CoreCLR/.NET
- JavaScriptCore
- ART, which looks like it does something smart about grouping the JIT code entries together, but I'm not sure exactly what it does
- TomatoDotNet
- a minimal example
- It looks like Dart used to have support for this but has since removed it.

Because this is a huge hassle, GDB also has a newer interface that does not require making an ELF/Mach-O/…+DWARF object. This new interface requires writing a binary format of your choice. You make the writer and you make the reader. Then, when you are in GDB, you load your reader as a shared object. The reader must implement the interface specified by GDB: the main function pointer does the bulk of the work and is responsible for matching code ranges to function names, line numbers, and more. Here are some details from Sanjoy Das.

Only a few runtimes implement this interface. Most of them stub out two of the function pointers:

- yk (write, read)
- asmjit-utilities (write, read)
- Erlang/OTP (write, read)
- FEX (write, read)
- buxn-jit (write, read)
- box64 (write, read)

I think it also requires at least the reader to proclaim it is GPL via a macro.

Since I wrote about the perf map interface recently, I have it on my mind. Why can't we reuse it in GDB? I suppose it would be possible to try and upstream a patch to GDB to support the Linux perf map interface for JITs. After all, why shouldn't it be able to automatically pick up symbols from the perf map file? That would be great baseline debug info for "free". In the meantime, maybe it is reasonable to create a re-usable custom debug reader:

- When registering code, write the address and name to the perf map as you normally would
- Write the filename as the symfile (does this make the magic number?)
- Have the debug info reader just parse the perf map file

It would be less flexible than what both the DWARF and custom readers support: it would only be able to handle filename and code region. No embedding source code for GDB to display in your debugger. But maybe that is okay for a partial solution? Update: Here is my small attempt at such a plugin.
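As a sketch of the writer half of that idea: the perf map format is just one line per JITed region ("START SIZE name", hex-encoded, in /tmp/perf-<pid>.map), so the runtime side is tiny. perf_map_write is a hypothetical helper name; a real JIT would keep the file open instead of reopening it per function.

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

// Append one "START SIZE name" line to /tmp/perf-<pid>.map, the same
// file perf already knows how to read for JITed code.
static void perf_map_write(const void *code, uint64_t size, const char *name) {
  char path[64];
  snprintf(path, sizeof(path), "/tmp/perf-%d.map", (int)getpid());
  FILE *f = fopen(path, "a");
  if (f == NULL)
    return;
  fprintf(f, "%" PRIxPTR " %" PRIx64 " %s\n", (uintptr_t)code, size, name);
  fclose(f);
}
```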
V8 notes in their GDB JIT docs that because the JIT interface is a linked list and we only keep a pointer to the head, we get O(n²) behavior. Bummer. This becomes especially noticeable since they register additional code objects not just for functions, but also trampolines, cache stubs, etc.

Since GDB expects the code pointer in your symbol object file not to move, you have to make sure to have a stable symbol file pointer and a stable executable code pointer. To make this happen, V8 disables its moving GC. Additionally, if your compiled function gets collected, you have to make sure to unregister the function. Instead of doing this eagerly, ART treats the GDB JIT linked list as a weakref and periodically removes dead code entries from it.
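To make the linked-list plumbing concrete, here is a minimal C sketch of the registration and unregistration half, following the declarations in the GDB manual's JIT interface section (__jit_debug_descriptor and __jit_debug_register_code). The register/unregister helpers are hypothetical names, and producing the in-memory ELF+DWARF symfile (step 2 in the list above) is left out entirely.

```c
#include <stdint.h>
#include <stdlib.h>

// Declarations as given in the GDB manual ("JIT Compilation Interface").
typedef enum {
  JIT_NOACTION = 0,
  JIT_REGISTER_FN,
  JIT_UNREGISTER_FN
} jit_actions_t;

struct jit_code_entry {
  struct jit_code_entry *next_entry;
  struct jit_code_entry *prev_entry;
  const char *symfile_addr;   // in-memory object file (e.g. ELF+DWARF)
  uint64_t symfile_size;
};

struct jit_descriptor {
  uint32_t version;
  uint32_t action_flag;       // really a jit_actions_t
  struct jit_code_entry *relevant_entry;
  struct jit_code_entry *first_entry;
};

// GDB puts an internal breakpoint on this function; the empty asm keeps
// calls to it from being optimized away.
void __attribute__((noinline)) __jit_debug_register_code(void) {
  __asm__ volatile ("");
}

// Must have exactly this name, with the version set statically.
struct jit_descriptor __jit_debug_descriptor = { 1, 0, NULL, NULL };

// Hypothetical helper: call after the JIT has produced an in-memory
// object file describing one compiled function.
static struct jit_code_entry *register_jit_object(const char *symfile,
                                                  uint64_t size) {
  struct jit_code_entry *entry = calloc(1, sizeof(*entry));
  entry->symfile_addr = symfile;
  entry->symfile_size = size;

  // Link at the head of the doubly linked list.
  entry->next_entry = __jit_debug_descriptor.first_entry;
  if (entry->next_entry != NULL)
    entry->next_entry->prev_entry = entry;
  __jit_debug_descriptor.first_entry = entry;

  // Tell GDB which entry changed and hand it control.
  __jit_debug_descriptor.relevant_entry = entry;
  __jit_debug_descriptor.action_flag = JIT_REGISTER_FN;
  __jit_debug_register_code();
  return entry;
}

// Hypothetical helper: call when the code gets freed/GCed.
static void unregister_jit_object(struct jit_code_entry *entry) {
  if (entry->prev_entry != NULL)
    entry->prev_entry->next_entry = entry->next_entry;
  else
    __jit_debug_descriptor.first_entry = entry->next_entry;
  if (entry->next_entry != NULL)
    entry->next_entry->prev_entry = entry->prev_entry;

  __jit_debug_descriptor.relevant_entry = entry;
  __jit_debug_descriptor.action_flag = JIT_UNREGISTER_FN;
  __jit_debug_register_code();
  free(entry);
}
```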
