Posts in Shell (20 found)
xenodium 4 days ago

agent-shell 0.17 improvements + MELPA

While it's only been a few weeks since the last agent-shell post, there are plenty of new updates to share. What's agent-shell again? A native Emacs shell to interact with any LLM agent powered by ACP (Agent Client Protocol).

Before getting to the latest and greatest, I'd like to say thank you to new and existing sponsors backing my projects. While the work going in remains largely unsustainable, your contributions are indeed helping me get closer to sustainability. Thank you! If you benefit from my content and projects, please consider sponsoring to make the work sustainable. Is work paying for your LLM tokens and other tools? Why not get your employer to sponsor agent-shell too?

Now on to the very first update… Both agent-shell and acp.el are now available on MELPA. As such, installation now boils down to:

OpenCode and Qwen Code are two of the latest agents to join agent-shell. Both are accessible via and through the agent picker, but also directly from and .

Adding files as context has seen quite a few improvements in different shapes. Thank you Ian Davidson for contributing embedded context support. Invoke to take a screenshot and automatically send it over to . A little side note: did you notice the activity indicator in the header bar? Yep. That's new too. While file completion remains experimental, you can enable it via:

From any file you can now invoke to send the current file to . If a region is selected, region information is sent too. Fancy sending a different file other than the current one? Invoke with , or just use . , also operates on files (selection or region), DWIM style ;-)

You may have noticed paths in section titles are no longer displayed as absolute paths. We're shortening those relative to project roots.

While you can invoke with a prefix to create new shells, is now available (and more discoverable than ).

Cancelling prompt sessions (via ) is much more reliable now. If you experienced a shell getting stuck after cancelling a session, that's because we were missing part of the protocol implementation. This is now implemented.

Use the new to automatically insert shell (i.e. bash) command output.

Initial work for automatically saving markdown transcripts is now in place. We're still iterating on it, but if you're keen to try things out, you can enable it as follows:

Applied changes are now displayed inline. The new and can now be used to change the session mode. You can now find out what capabilities and session modes are supported by your agent. Expand either of the two sections.

Tired of pressing and to accept changes from the diff buffer? Now just press from the diff viewer to accept all hunks. Same goes for rejecting. No more and . Now just press from the diff buffer.

We get a new basic transient menu. Currently available via .

We got lots of awesome pull requests from wonderful folks. Thank you for your contributions!

Arthur Heymans: Add a Package-Requires header (PR).
Elle Najt: Execute commands in devcontainer (PR).
Elle Najt: Fix Write tool diff preview for new files (PR).
Elle Najt: Inline display of historical changes (PR).
Elle Najt: Live Markdown transcripts (PR).
Elle Najt: Prompt session mode cycling and modeline display (PR).
Fritz Grabo: Devcontainer fallback workspace (PR).
Guilherme Pires: Codex subscription auth (PR).
Hordur Freyr Yngvason: Make qwen authentication optional (PR).
Ian Davidson: Embedded context support (PR).
Julian Hirn: Fix quick-diff window restoration for full-screen (PR).
Ruslan Kamashev: Hide header line altogether (PR).
festive-onion: Show Planning mode more reliably (PR).

Beyond what's been showcased here, much love and effort has been poured into polishing the experience. Interested in the nitty-gritty? Have a look through the 173 commits since the last blog post.

If agent-shell or acp.el are useful to you, please consider sponsoring their development. LLM tokens aren't free, and neither is the time dedicated to building this stuff ;-)

0 views
Maurycy 2 weeks ago

You already have a git server:

If you have a git repository on a server with ssh access, you can just clone it:

You can then work on it locally and push your changes back to the origin server. By default, git won’t let you push to the branch that is currently checked out, but this is easy to change:

This is a great way to sync code between multiple computers or to work on server-side files without laggy typing or manual copying.

If you want to publish your code, just point your web server at the git repo:

… although you will have to run this command server-side to make it cloneable:

That’s a lot of work, so let’s set up a hook to do that automatically:

Git hooks are just shell scripts, so they can do things like running a static site generator:

This is how I’ve been doing this blog for a while now:

It’s very nice to be able to type up posts locally (no network lag), and then push them to the server and have the rest handled automatically. It’s also backed up by default: If the server breaks, I’ve still got the copy on my laptop, and if my laptop breaks, I can download everything from the server. Git’s version tracking also prevents accidental deletions, and if something breaks, it’s easy to figure out what caused it.
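To make the hook step concrete, here is a minimal sketch of a post-receive hook. The post's own version is a plain shell script; this one is written in Python purely for illustration, and the site-generator command and output path are placeholders, not the author's setup. (For reference, the push-to-checked-out-branch behaviour mentioned above is governed by git's receive.denyCurrentBranch setting.)

```python
#!/usr/bin/env python3
# Sketch of a post-receive hook (hooks can be any executable; the post uses plain shell).
import subprocess

# Refresh the metadata that keeps the repo cloneable through a plain web server.
subprocess.run(["git", "update-server-info"], check=True)

# Rebuild the published site from the freshly pushed content.
# "my-site-generator" and the output path are hypothetical placeholders.
subprocess.run(["my-site-generator", "--output", "/var/www/html"], check=False)
```

Dropped into the repository's hooks directory and marked executable, a script like this runs on every push, which is the "rest handled automatically" part of the workflow described above.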

0 views
xenodium 2 weeks ago

Bending Emacs - Episode 4: Batch renaming files

I'm now a few weeks into my Bending Emacs series. Today I share a new episode.

Bending Emacs Episode 4: Batch renaming files

In this video, I show a few ways of batch renaming files. The covered flows are:

Dired editable buffers.
Multiple cursors using region-bindings-mode to insert numbers via # (I had a tiny blog entry on this).
We can batch rename using Keyboard Macros too and insert numbers via macro counters.
While I typically use multiple cursors for batch renaming files, I also experimented with dwim-shell-command via M-x dwim-shell-commands-rename-all.

Liked the video? Please let me know. Got feedback? Leave me some comments. Please go like my video, share with others, and subscribe to my channel. If there's enough interest, I'll continue making more videos!

Enjoying this content or my projects? I am an indie dev. Help make it sustainable by ✨ sponsoring ✨. Need a blog? I can help with that. Maybe buy my iOS apps too ;)

0 views
Evan Hahn 2 weeks ago

Scripts I wrote that I use all the time

In my decade-plus of maintaining my dotfiles , I’ve written a lot of little shell scripts. Here’s a big list of my personal favorites. and are simple wrappers around system clipboard managers, like on macOS and on Linux. I use these all the time . prints the current state of your clipboard to stdout, and then whenever the clipboard changes, it prints the new version. I use this once a week or so. copies the current directory to the clipboard. Basically . I often use this when I’m in a directory and I want to use that directory in another terminal tab; I copy it in one tab and to it in another. I use this once a day or so. makes a directory and s inside. It’s basically . I use this all the time —almost every time I make a directory, I want to go in there. changes to a temporary directory. It’s basically . I use this all the time to hop into a sandbox directory. It saves me from having to manually clean up my work. A couple of common examples: moves and to the trash. Supports macOS and Linux. I use this every day. I definitely run it more than , and it saves me from accidentally deleting files. makes it quick to create shell scripts. creates , makes it executable with , adds some nice Bash prefixes, and opens it with my editor (Vim in my case). I use this every few days. Many of the scripts in this post were made with this helper! starts a static file server on in the current directory. It’s basically but handles cases where Python isn’t installed, falling back to other programs. I use this a few times a week. Probably less useful if you’re not a web developer. uses to download songs, often from YouTube or SoundCloud, in the highest available quality. For example, downloads that video as a song. I use this a few times a week…typically to grab video game soundtracks… similarly uses to download something for a podcast player. There are a lot of videos that I’d rather listen to like a podcast. I use this a few times a month. downloads the English subtitles for a video. (There’s some fanciness to look for “official” subtitles, falling back to auto-generated subtitles.) Sometimes I read the subtitles manually, sometimes I run , sometimes I just want it as a backup of a video I don’t want to save on my computer. I use this every few days. , , and are useful for controlling my system’s wifi. is the one I use most often, when I’m having network trouble. I use this about once a month. parses a URL into its parts. I use this about once a month to pull data out of a URL, often because I don’t want to click a nasty tracking link. prints line 10 from stdin. For example, prints line 10 of a file. This feels like one of those things that should be built in, like and . I use this about once a month. opens a temporary Vim buffer. It’s basically an alias for . I use this about once a day for quick text manipulation tasks, or to take a little throwaway note. converts “smart quotes” to “straight quotes” (sometimes called “dumb quotes”). I don’t care much about these in general, but they sometimes weasel their way into code I’m working on. It can also make the file size smaller, which is occasionally useful. I use this at least once a week. adds before every line. I use it in Vim a lot; I select a region and then run to quote the selection. I use this about once a week. returns . (I should probably just use .) takes JSON at stdin and pretty-prints it to stdout. I use this a few times a year. and convert strings to upper and lowercase. For example, returns . I use these about once a week. returns . 
I use this most often when talking to customer service and need to read out a long alphanumeric string, which has only happened a couple of times in my whole life. But it’s sometimes useful! returns . A quick way to do a lookup of a Unicode string. I don’t use this one that often…probably about once a month. cats . I use for , for a quick “not interested” response to job recruiters, to print a “Lorem ipsum” block, and a few others. I probably use one or two of these a week. Inspired by Ruby’s built-in REPL, I’ve made:

to start a Clojure REPL
to start a Deno REPL (or a Node REPL when Deno is missing)
to start a PHP REPL
to start a Python REPL
to start a SQLite shell (an alias for )

prints the current date in ISO format, like . I use this all the time because I like to prefix files with the current date. starts a timer for 10 minutes, then (1) plays an audible ring sound (2) sends an OS notification (see below). I often use to start a 5 minute timer in the background (see below). I use this almost every day as a useful way to keep on track of time. prints the current time and date using and . I probably use it once a week. It prints something like this: extracts text from an image and prints it to stdout. It only works on macOS, unfortunately, but I want to fix that. (I wrote a post about this script .) (an alias, not a shell script) makes a happy sound if the previous command succeeded and a sad sound otherwise. I do things like which will tell me, audibly, whether the tests succeed. It’s also helpful for long-running commands, because you get a little alert when they’re done. I use this all the time . basically just plays . Used in and above. uses to play audio from a file. I use this all the time , running . uses to show a picture. I use this a few times a week to look at photos. is a little wrapper around some of my favorite internet radio stations. and are two of my favorites. I use this a few times a month. reads from stdin, removes all Markdown formatting, and pipes it to a text-to-speech system ( on macOS and on Linux). I like using text-to-speech when I can’t proofread out loud. I use this a few times a month. is an wrapper that compresses a video a bit. I use this about once a month. removes EXIF data from JPEGs. I don’t use this much, in part because it doesn’t remove EXIF data from other file formats like PNGs…but I keep it around because I hope to expand this one day. is one I almost never use, but you can use it to watch videos in the terminal. It’s cursed and I love it, even if I never use it. is my answer to and , which I find hard to use. For example, runs on every file in a directory. I use this infrequently but I always mess up so this is a nice alternative. is like but much easier (for me) to read—just the PID (highlighted in purple) and the command. or is a wrapper around that sends , waits a little, then sends , waits and sends , waits before finally sending . If I want a program to stop, I want to ask it nicely before getting more aggressive. I use this a few times a month. waits for a PID to exit before continuing. It also keeps the system from going to sleep. I use this about once a month to do things like: is like but it really really runs it in the background. You’ll never hear from that program again. It’s useful when you want to start a daemon or long-running process you truly don’t care about. I use and most often. I use this about once a day. prints but with newlines separating entries, which makes it much easier to read. I use this pretty rarely—mostly just when I’m debugging a issue, which is unusual—but I’m glad I have it when I do. runs until it succeeds. runs until it fails. I don’t use this much, but it’s useful for various things. will keep trying to download something. will stop once my tests start failing. is my emoji lookup helper. For example, prints the following: prints all HTTP statuses. prints . As a web developer, I use this a few times a month, instead of looking it up online. just prints the English alphabet in upper and lowercase. I use this surprisingly often (probably about once a month). It literally just prints this: changes my whole system to dark mode. changes it to light mode. It doesn’t just change the OS theme—it also changes my Vim, Tmux, and terminal themes. I use this at least once a day. puts my system to sleep, and works on macOS and Linux. I use this a few times a week. recursively deletes all files in a directory. I hate that macOS clutters directories with these files! I don’t use this often, but I’m glad I have it when I need it. is basically . Useful for seeing the source code of a file in your path (used it for writing up this post, for example!). I use this a few times a month. sends an OS notification. It’s used in several of my other scripts (see above). I also do something like this about once a month: prints a v4 UUID. I use this about once a month.

These are just scripts I use a lot. I hope some of them are useful to you! If you liked this post, you might like “Why ‘alias’ is my last resort for aliases” and “A decade of dotfiles” . Oh, and contact me if you have any scripts you think I’d like.
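To give a flavor of how small these helpers tend to be, here is a sketch of the smart-quote-to-straight-quote filter described above, written in Python purely for illustration; Evan's actual scripts live in his dotfiles and likely differ.

```python
#!/usr/bin/env python3
# Sketch of a "smart quotes to straight quotes" filter: reads stdin, writes stdout.
import sys

REPLACEMENTS = {
    "\u2018": "'", "\u2019": "'",   # curly single quotes
    "\u201c": '"', "\u201d": '"',   # curly double quotes
}

for line in sys.stdin:
    for smart, straight in REPLACEMENTS.items():
        line = line.replace(smart, straight)
    sys.stdout.write(line)
```

Used as a pipe filter (for example on a selected region in an editor), it quietly turns typographic quotes back into the plain ASCII ones that code expects.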

0 views
Brain Baking 2 weeks ago

The Crazy Shotguns In Boomer Shooters

Emberheart’s recent Wizordum rekindled my interest in retro-inspired First Person Shooters (FPS), also known as boomer shooters. Some are offended by the term, but I quite like it: it not only denotes the DOOM clones of the early nineties as the boomer generation of FPS gaming but also perfectly defines what a boomer shooter is: things that go boom. That’s it. And boy, do things go boom in these games, thanks to the crazy amount of weaponry at the player’s disposal. Combine that with emphasis on movement and speed—remember circle strafing? That’s just the bare minimum now—and you’ve got yourself a hundred different ways to murder, shred, and rip your enemies apart. To stay true to their DOOM roots, boomer shooters are usually a bloody affair.

I’ve always been fascinated with the shotguns in these games: the rapid BOOM TSJK BOOM TSJK BOOM TSJK of Quake, the heavy KABOOM click clack KABOOM click clack of the super shotgun in DOOM II. Somehow along the way, the shotgun (and the double barrel one) became an indispensable part of any boomer shooter. That’s why I’d like to take a closer look at the craziness involved in these retro-inspired shooters. Or more specifically, what’s bound behind key number 3. Assuming 1 is the melee weapon and 2 is the pistol, of course.

It’s impossible to talk about shotguns in shooters without mentioning DOOM—which I already did three times, but hey, one more time can’t hurt. In 1993, id Software not only started the gory Binary Space Partitioning revolution, but also iterated on Wolfenstein 3D’s rather boring weaponry line-up: the pistol, the automatic rifle, and the mini gun. DOOM gave us a plethora of new stuff to play with, including some sweet sweet pump action. Yet that digitized child’s toy won at fairs can hardly be called crazy by modern standards. Enter DOOM II’s double barrel “super” shotgun: double the barrels, double the fun! Thirty-one years later, those two barrels still pack a mighty punch, up to the point that most other weapons in the game are obsolete. According to various weapon damage tables, the super shotgun has a mean damage output as high as the rocket launcher’s!

Rocking my super shotgun in a slimy sewer hallway in the Legacy of Rust expansion.

The deep sound that accompanies the shotgun is still an instant nostalgia trigger. You’ll immediately recognize it. Let’s put it to the test: for this article, I randomly compiled 11 different shotgun sounds into a single audio file. It’s up to you to identify the games and shotguns: If that’s too difficult for you, the following hint will spoil the games but not the order:

DOOM II (obviously)
DOOM Eternal
Outlaws (2x)
Project Warlock
Redneck Rampage
Serious Sam

So where do you go from there? What can possibly topple DOOM II’s super shotgun? Nothing, really, but developers have been giving it a damn good try since then anyway. There are quite a few almost as iconic double barrel shotguns. In DUSK, we see the protagonist getting attached to their favourite killing machine. When it’s temporarily taken away and then returned a few levels later, we whisper welcome back, friend, lovingly stroke its long barrels, and happily resume the rampage. In Serious Sam, the BOOM sound the weapon emits is almost as majestic as the huge open spaces between the pyramids that are infested with AAAAAAAAAHHHH screaming beheaded kamikazes. How about reskinning the shotgun into a crossbow firing three green projectiles (Heretic)? Not cool enough? Okay, I get it, we need to step up our game. How about modding our double barrels? Sawing them off, perhaps? In Outlaws, there are three (!)
shotguns mapped to your keypads: a single barrel, a double barrel, and a sawn-off one, although to this day I am puzzled by the difference in function as they even sound alike. In Project Warlock, being a more modern retro-inspired shooter, you can upgrade your weapons after collecting enough skill points. That single barrel can become an automatic and that double barrel lovingly called the Boom Stick can gain alternate firing modes.

Project Warlock's Doom Stick has a very satisfying 'boom sound' to it.

Speaking of mods, DOOM Eternal’s super shotgun Meat Hook attachment is one of the most genius ideas ever: pulling yourself closer to your enemies before unloading those two barrels ups the fun (and gore) dramatically. I believe you can also inject incendiary rounds. In DOOM 2016, you can tinker with your shotgun by swapping out pieces. Tadaa, now it’s a shotgun gatling gun! Still not crazy enough, I hear ya. What about Forgive Me Father then, where the unlocked upgrades gradually push more and more crazy (literally) into the weapon designs by merging with the Cthulhu mythos? The Abyssal Shotgun features more bullets per shot and has an increased firing speed, essentially making it an automatic double barrel.

What about dual wielding instead? In DUSK, beyond the trusty double barrel, you can dual wield two regular shotguns and pump out that lead at a demonic speed (no wait, wrong game). In Nightmare Reaper, the reflection power-up allows you to temporarily dual wield your current load-out, which can already be pretty wild as the modifiers are random. I saw someone unloading 100+ shots at once. How’s that for a boomer shooter. The idea is not new though: Blood allowed us to temporarily dual wield sawn-off shotguns as early as 1997.

If that’s not impressive enough, F.E.A.R. not (get it? 1 ): if two barrels aren’t enough, then how about three instead? The game INCISION will congratulate you with the message “Ludicrous Gibs!!” after firing off that bad boy. But we can do even better: the hand cannon in Prodeus features a whopping four barrels that can be fired individually or all at once, turning anything on screen into ketchup. KABOOM click clack. I first thought Prodeus invented that, but Shadow Warrior—yet another crazy Build Engine game from 1997 with even crazier weapons—technically already featured a four-barreled shotgun that rapidly rotates as you shoot. I don’t think you can unload everything at once though. Or how about another rotating barrel that can also eject grenades? That’s Shelly’s Disperser from Ion Fury. Guess what, Ion Fury runs on the Build Engine. No coincidence there.

Shelly's Disperser might not look sexy but the hybrid weapon can rapidly fire off 6 shots and launch as many grenades!

But perhaps the craziest of them all must be the projectile boosting mechanic in ULTRAKILL: after firing off those shotgun shells, you can hit them with your fists to increase their speed. I have no idea how that works. I skipped that game because the trailers induced motion sickness. I can tolerate a crazy amount of crazy but that’s a bit too much.

From a pump action toy to a boom stick, quad shotgun, rapid firing abyssal shotgun or disperser. From a regular buckshot shell to incendiary rounds, grenades, and meat hooks. I love these kinds of games because they have the creative freedom to bend all the rules—especially when it comes to the weaponry.
And yet, we stay true to our DOOM-like roots: you can’t release a successful retro-inspired shooter without the presence of a (super) shotgun. If you’re interested in my opinion on many of the games mentioned here, be sure to check out my reviews on these retro shooters.

The game F.E.A.R., although not a boomer shooter, is revered for its excellent VK-12 combat shotgun that chews through enemies rather quickly. ↩︎

Related topics: / games / boomer shooters / By Wouter Groeneveld on 20 October 2025. Reply via email.

0 views
Sean Goedecke 3 weeks ago

We are in the "gentleman scientist" era of AI research

Many scientific discoveries used to be made by amateurs. William Herschel, who discovered Uranus, was a composer and an organist. Antoine Lavoisier, who laid the foundation for modern chemistry, was a politician. In one sense, this is a truism. The job of “professional scientist” only really appeared in the 19th century, so all discoveries before then logically had to have come from amateurs, since only amateur scientists existed. But it also reflects that any field of knowledge gets more complicated over time.

In the early days of a scientific field, discoveries are simple: “air has weight”, “white light can be dispersed through a prism into different colors”, “the mass of a burnt object is identical to its original mass”, and so on. The way you come up with those discoveries is also simple: observing mercury in a tall glass tube, holding a prism up to a light source, weighing a sealed jar before and after incinerating it, and so on.

The 2025 Nobel prize in physics was just awarded “for the discovery of macroscopic quantum mechanical tunnelling and energy quantisation in an electric circuit”. The press release gallantly tries to make this discovery understandable to the layman, but it’s clearly much more complicated than the examples I listed above. Even understanding the terms involved would take years of serious study. If you want to win the 2026 Nobel prize in physics, you have to be a physicist: not a musician who dabbles in physics, or a politician who has a physics hobby in your spare time. You have to be fully immersed in the world of physics 1 .

AI research is not like this. We are very much in the “early days of science” category. At this point, a critical reader might have two questions. How can I say that when many AI papers look like this? 2 Alternatively, how can I say that when the field of AI research has been around for decades, and is actively pursued by many serious professional scientists?

First, because AI research discoveries are often simpler than they look. This dynamic is familiar to any software engineer who’s sat down and tried to read a paper or two: the fearsome-looking mathematics often contains an idea that would be trivial to express in five lines of code. It’s written this way because (a) researchers are more comfortable with mathematics, and so genuinely don’t find it intimidating, and (b) mathematics is the lingua franca of academic research, because researchers like to write to far-future readers for whom Python syntax may be as unfamiliar as COBOL is to us.

Take group-relative policy optimization, or GRPO, introduced in a 2024 DeepSeek paper. This has been hugely influential for reinforcement learning (which in turn has been the driver behind much LLM capability improvement in the last year). Let me try and explain the general idea. When you’re training a model with reinforcement learning, you might naively reward success and punish failure (e.g. how close the model gets to the right answer in a math problem). The problem is that this signal breaks down on hard problems. You don’t know if the model is “doing well” without knowing how hard the math problem is, which is itself a difficult qualitative assessment. The previous state-of-the-art was to train a “critic model” that makes this “is the model doing well” assessment for you. Of course, this brings a whole new set of problems: the critic model is hard to train and verify, costs much more compute to run inside the training loop, and so on. Enter GRPO.
Instead of a critic model, you gauge how well the model is doing by letting it try the problem multiple times and computing how well it does on average. Then you reinforce the model attempts that were above average and punish the ones that were below average. This gives you good signal even on very hard prompts, and is much faster than using a critic model. The mathematics in the paper looks pretty fearsome, but the idea itself is surprisingly simple. You don’t need to be a professional AI researcher to have had it.

In fact, GRPO is not necessarily that new of an idea. There is discussion of normalizing the “baseline” for RL as early as 1992 (section 8.3), and the idea of using the model’s own outputs to set that baseline was successfully demonstrated in 2016. So what was really discovered in 2024? I don’t think it was just the idea of “averaging model outputs to determine a RL baseline”. I think it was that that idea works great on LLMs as well.

As far as I can tell, this is a consistent pattern in AI research. Many of the big ideas are not brand new or even particularly complicated. They’re usually older ideas or simple tricks, applied to large language models for the first time. Why would that be the case? If deep learning wasn’t a good subject for the amateur scientist ten years ago, why would the advent of LLMs change that?

Suppose someone discovered that a rubber-band-powered car - like the ones at science fair competitions - could output as much power as a real combustion engine, so long as you soaked the rubber bands in maple syrup beforehand. This would unsurprisingly produce a revolution in automotive (and many other) engineering fields. But I think it would also “reset” scientific progress back to something like the “gentleman scientist” days, where you could productively do it as a hobby. Of course, there’d be no shortage of real scientists doing real experiments on the new phenomenon. However, there’d also be about a million easy questions to answer. Does it work with all kinds of maple syrup? What if you soak it for longer? What if you mixed in some maple-syrup-like substances? You wouldn’t have to be a real scientist in a real lab to try your hand at some of those questions. After a decade or so, I’d expect those easy questions to have been answered, and for rubber-band engine research to look more like traditional science. But that still leaves a long window for the hobbyist or dilettante scientist to ply their trade.

The success of LLMs is like the rubber-band engine. A simple idea that anyone can try 3 - train a large transformer model on a ton of human-written text - produces a surprising and transformative technology. As a consequence, many easy questions have become interesting and accessible subjects of scientific inquiry, alongside the normal hard and complex questions that professional researchers typically tackle.

I was inspired to write this by two recent pieces of research: Anthropic’s “skills” product and the Recursive Language Models paper. Both of these present new and useful ideas, but they’re also so simple as to be almost a joke. “Skills” are just markdown files and scripts on-disk that explain to the agent how to perform a task. Recursive language models are just agents with direct code access to the entire prompt via a Python REPL. There, now you can go and implement your own skills or RLM inference code. I don’t want to undersell these ideas.
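As an aside, the group-relative scoring trick described a few paragraphs up really is simple enough to sketch in a few lines. This is a toy illustration of the idea, not DeepSeek's implementation: score several attempts at the same prompt, then use each score's deviation from the group average as the training signal.

```python
# Toy sketch of group-relative advantages (the idea behind GRPO, not the paper's code).
from statistics import mean, pstdev

def group_relative_advantages(rewards):
    """Score several attempts at the same prompt relative to the group average."""
    baseline = mean(rewards)
    spread = pstdev(rewards) or 1.0  # avoid dividing by zero when all attempts tie
    return [(r - baseline) / spread for r in rewards]

# Four attempts at one hard prompt: the above-average attempts get positive
# advantages (reinforced); the below-average ones get negative (discouraged).
print(group_relative_advantages([0.0, 0.2, 0.9, 0.1]))
```

The group itself supplies the baseline, which is exactly why no separate critic model is needed.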
To return to the skills and recursive-language-model examples: it is a genuinely useful piece of research for Anthropic to say “hey, you don’t really need actual tools if the LLM has shell access, because it can just call whatever scripts you’ve defined for it on disk”. Giving the LLM direct access to its entire prompt via code is also (as far as I can tell) a novel idea, and one with a lot of potential. We need more research like this! Strong LLMs are so new, and are changing so fast, that their capabilities are genuinely unknown 4 . For instance, at the start of this year, it was unclear whether LLMs could be “real agents” (i.e. whether running with tools in a loop would be useful for more than just toy applications). Now, with Codex and Claude Code, I think it’s pretty clear that they can. Many of the things we learn about AI capabilities - like o3’s ability to geolocate photos - come from informal user experimentation. In other words, they come from the AI research equivalent of 17th century “gentleman science”.

1. Incidentally, my own field - analytic philosophy - is very much the same way. Two hundred years ago, you could publish a paper with your thoughts on “what makes a good act good”. Today, in order to publish on the same topic, you have to deeply engage with those two hundred years of scholarship, putting the conversation out of reach of all but professional philosophers. It is unclear to me whether that is a good thing or not. ↩

2. Randomly chosen from recent AI papers on arXiv. I’m sure you could find a more aggressively-technical paper with a bit more effort, but it suffices for my point. ↩

3. Okay, not anyone can train a 400B param model. But if you’re willing to spend a few hundred dollars - far less than Lavoisier spent on his research - you can train a pretty capable language model on your own. ↩

4. In particular, I’d love to see more informal research on making LLMs better at coming up with new ideas. Gwern wrote about this in LLM Daydreaming, and I tried my hand at it in Why can’t language models come up with new ideas? . ↩

0 views

We were angry

In documenting the history of our understanding of trauma, Judith Herman follows the investigations into hysteria out into the battlefield. During the First World War, psychologists began to observe symptoms of what was initially termed “shell shock” among soldiers. An early theory posited that the men suffered from some physical ailment, perhaps a consequence of repeated concussions caused by proximity to exploding shells. But it rapidly became clear that a great many of the men affected had suffered no physical harm and yet had been entirely incapacitated: they wept or howled, sat frozen and speechless, became forgetful and detached. In short, they behaved like hysterical women. The first wave of responses to this behavior was unforgiving: accused of laziness and cowardice, the soldiers were shamed and punished. But another psychologist, W. H. R. Rivers, approached the problem more humanely, and arrived at a different conclusion:

[Rivers] demonstrated, first, that men of unquestioned bravery could succumb to overwhelming fear and, second, that the most effective motivation to overcome that fear was something stronger than patriotism, abstract principles, or hatred of the enemy. It was the love of soldiers for one another.

In other words, “hysteria” and “shell shock” were the same thing, both the result of psychological trauma, including the trauma of bearing witness to horrors which you were powerless to stop. Moreover, it was love for one’s comrades that offered the greatest defense against that trauma—both during the events themselves and in the days and years that followed. Herman traces the ways that our understanding of trauma was discovered and then conveniently (in Freud’s case, intentionally) lost again, making yet future discoveries inevitable. Each time, it was survivors who drove awareness of the sources of trauma and its most effective treatments, forcing established practitioners of medicine and psychology to follow their lead. In the middle of the last century, survivors of sexual trauma formed consciousness-raising groups, while veterans of the Vietnam War created rap groups; in both cases, the efforts combined demands for better treatment alongside those for political awakening.

The purpose of the rap groups was twofold: to give solace to individual veterans who had suffered psychological trauma, and to raise awareness about the effects of war. The testimony that came out of these groups focused public attention on the lasting psychological injuries of combat. These veterans refused to be forgotten. Moreover, they refused to be stigmatized. They insisted upon the rightness, the dignity of their distress.

In the words of a marine veteran, Michael Norman: “Family and friends wondered why we were so angry. What are you crying about? they would ask. Why are you so ill-tempered and disaffected? Our fathers and grandfathers had gone off to war, done their duty, come home and got on with it. What made our generation so different? As it turns out, nothing. No difference at all. When old soldiers from ‘good’ wars are dragged out from behind the curtain of myth and sentiment and brought into the light, they too seem to smolder with choler and alienation….So we were angry. Our anger was old, atavistic. We were angry as all civilized men who have ever been sent to make murder in the name of virtue were angry.”

Calls for healing and for reparation are the same call: to heal a wound is to account for the wounding. 
And anger is the appropriate response when that accountability is withheld. Anger, like love, can be useful: it is a shield against further harm, a defense against erasure. It is a weapon that tears down the curtains of myth and sentiment. It is the refusal to be forgotten, even as each new generation tries so hard to forget. View this post on the web , subscribe to the newsletter , or reply via email .

0 views
Takuya Matsuyama 3 weeks ago

The scariest “user support” email I’ve ever received

Hi, it's Takuya . As your app grows in popularity, you occasionally start to attract attacks aimed directly at you—the developer or site owner. Just the other day, I got one that was honestly terrifying, so I'd like to share it. In short, they’re saying: Weird already — because my app’s website, https://www.inkdrop.app/ , doesn’t even show a cookie consent dialog . I don’t track or serve ads, so there’s no need for that. Still, I replied politely: A bit later, I got this reply (which Gmail had automatically placed in the spam folder): At first glance, it looked perfectly normal. But notice — they never actually told me which page was causing the issue. Instead, they sent a link claiming to contain a screenshot. It looked like a Google Drive link, but it was actually a Google Sites page. Without thinking, I clicked it. (You should never do this!) It showed a Captcha screen. I clicked it… and got this: It said something like “verification step” — telling me to open a terminal, paste a command, and run it. That’s when it hit me: “Oh no, this is phishing.” The command they had copied to my clipboard was this: Never run anything like this in your terminal. It downloads and executes a shell script from a remote server — as ChatGPT confirmed when I asked it to analyze it: Absolutely terrifying. Because Gmail had flagged the second message as spam, the URL was probably already reported as malicious. But the first message wasn’t flagged — so I thought, “Maybe it’s a false positive,” and replied. Big mistake. Even on my user forum, I’ve started seeing suspicious posts that seem to be written by AI. They look natural at first glance, but the intent is unclear — often just spam or trolling. Phishing emails disguised as support inquiries are getting more sophisticated, too. They read naturally, but something always feels just a little off — the logic doesn’t quite line up, or the tone feels odd. It’s unsettling. Stay alert, guys — the attacks are getting smarter. Hope it's helpful!

0 views
xenodium 3 weeks ago

Bending Emacs - Episode 3: Git clone (the lazy way)

Continuing on the Bending Emacs series, today I share a new episode.

Bending Emacs Episode 03: Git clone (the lazy way)

In this video, I show my latest iteration on an expedited git clone flow. If this topic sounds familiar, I covered it back in 2020 with my clone git repo from clipboard post. My git clone flow consists of copying a git repo URL to the clipboard and subsequently invoking . Everything else is taken care of for you. I've revisited this command and added a couple of improvements:

Configurability (via ). For example:
Optional prefixes to change function behavior: : Pick target location. : Pick any directory.
Automatically place point/cursor at README file.

I was going to post the snippet here, though I may as well point you over to GitHub where it is more likely to remain up-to-date. Note that is now optionally available as part of my dwim-shell-command package.

Liked the video? Please let me know. Got feedback? Leave me some comments. Please go like my video, share with others, and subscribe to my channel. If there's enough interest, I'll continue making more videos!

0 views
xenodium 3 weeks ago

agent-shell 0.5 improvements

While it's only been a few weeks since introducing Emacs agent-shell, we've landed nearly 100 commits and enough improvements to warrant a new blog post. agent-shell now includes support for two additional ACP-capable agents:

Claude Code
Codex via codex-acp (new)
Goose (new)

In addition to starting new shells via agent-specific commands, we now have a unified entry point, enabling selection from a list of supported agents. The agent-specific commands remain available as usual:

now provides basic control to toggle display of shell buffers:

: Toggles display of the most recently accessed agent (per project).
: Controls how agent shells are displayed when activated.

While provides basic display toggling, Calum MacRae offers a comprehensive sidebar package. Check out agent-shell-sidebar.

now has experimental support for running agents inside dev containers. See docs.

buffers, proposing changes, get a more polished experience. More notably, diffs get context (thanks to David J. Rosenbaum), single-key patch navigation/acceptance, and file names are now displayed in the header line.

Environment variables can now be loaded from either the Emacs environment, .env files, and/or overridden inline:

Different authentication methods are now supported. For example:

Check per provider, as available options may differ.

On the smaller side, but also contributing to overall polish:

Single-key permission bindings (y/n/!).
Improved error messages.
Improved task status rendering.
Improved TAB navigation.

While not technically part of , acp.el's traffic inspection has been getting some love to help users diagnose issues.

David J. Rosenbaum: Context support in diffs (PR).
Fritz Grabo: Dev container support (PR).
Grant Surlyn: Doom Emacs installation instructions (PR).
Mark A. Hershberger: Goose key improvement (PR).
Ruslan Kamashev: Customization group fix (PR).

Thank you for your contributions! Thank you to all sponsors. While LLMs aren't everyone's cup of tea, we're seeing editors across the board evolving to accommodate these new LLM tools. In a somewhat similar vein, LSP integration wasn't for everyone, but for those who did want it, Emacs luckily catered to them. Thank you for helping make this project sustainable while also enabling Emacs to cater to all.

If agent-shell or acp.el are useful to you, consider sponsoring their development. LLM tokens aren't free, and neither is the time dedicated to building this stuff ;-)

0 views
Justin Duke 4 weeks ago

Adding imports to the Django shell

I was excited to finally remove from my file when 5.2 dropped because they added support for automatic model import. However, I found myself missing one other little escape hatch that exposed, which was the ability to import other arbitrary modules into the namespace. Django explains how to bring in modules without a namespace , but I wanted to be able to inoculate my shell, since most of my modules follow a similar structure (exposing a single function). It took the bare minimum of sleuthing to figure out how to hack this in for myself, and now here I am to share that sleuthing with you. Behold, a code snippet that is hopefully self-explanatory:
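The author's actual snippet isn't reproduced here. For reference, Django 5.2's documented hook for this kind of customization is subclassing the built-in shell command and overriding get_auto_imports(); the sketch below follows that pattern, with placeholder module paths rather than his real imports.

```python
# myapp/management/commands/shell.py -- "myapp" and the extra dotted paths are placeholders.
from django.core.management.commands import shell


class Command(shell.Command):
    def get_auto_imports(self):
        # Keep Django's automatic model imports, then add extra modules/objects
        # so they're available in every `manage.py shell` session.
        return super().get_auto_imports() + [
            "django.urls.reverse",      # an object, imported as `reverse`
            "myapp.services.billing",   # a module (hypothetical example)
        ]
```

With a file like this on the app's management-command path, running `python manage.py shell` picks up the extra imports alongside the automatically imported models.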

0 views
blog.philz.dev 1 months ago

Containerizing Agents

Simon Willison has been writing about using parallel coding agents ( blog ), and his post encouraged me to write down my current workflow, which involves parallelism, containerization, and web browsers. I’m spoiled by (and helped build) sketch.dev’s agent containerization, so, when I need to use other agents as well, I wrote a shell script to containerize them "just so." My workflow is that I run , and I find myself in a web browser, in the same git repo I was in, but now in a randomly named branch, in a container, in . The first pane is the agent, but there are other panes doing other stuff. When I'm done, I've got a branch to work with, and I merge/rebase/cherry-pick. Let's break up the pieces:

First, my shell script is in my favorite shell scripting language, dependency-less python3. Python3 has the advantage of not requiring you to think about and is sufficiently available.

Second, I have a customized Dockerfile with the dependencies my projects need. I don't minimize the container; I add all the things I want. Browsers, playwright, subtrace, tmux, etc.

Third, I cross-mount my git repo itself into the container, and create a worktree inside the container. From the outside, this work tree is going to look "prunable", but that causes no harm, and there’s a new branch that corresponds to the agent’s worktree. I like worktrees more than remotes because they’re in the same namespace; you don’t need to "fetch" or "push" across them. It’s easy to lose changes when the container exits; I commit automatically on exit. It's also easy to lose the worktree if something calls on your behalf, but recovery is possible with and some fiddling.

Fourth, I run tmux inside the container so that opening a shell in the container is as simple as opening a new pane. (Somehow, is too rich.) I'm used to sketch.dev's terminal pane to do the little git operation, take a look at a diff, run a server... tmux helps.

Fifth, networking magic with Tailscale. I publish ports 8000-9999 (and 11111) on my tailnet, using the same randomly generated name as I've used for my container and my branch. You're inevitably working on a web app, and you inevitably need to actually look at it, and Docker networking is doable, but you have to pre-declare exposed ports, and avoid conflicts, and ... it's just not great for this use case. There are other solutions (ngrok, SSH port forwarding), but I already use Tailscale, so this works nicely. I originally started with tsnsrv, but then vibe-coded a custom thing that supports port ranges. is the userland networking library here, and the agents do a fine job one-shotting this stuff.

Sixth, I use to expose my to my browser over the tailnet network. I'm used to having a browser-tab per agent, and this gives me that. (Terminal-based agents feel weird to me. Browsers are great at scrolling, expand/collapse widgets, cut and paste, word wrap of text, etc.)

Seventh, I vibe-coded a headless browser tool called , which wraps the excellent chromedp library, which remote-controls a headless Chrome over its debugging protocol. Getting the MCPs configured for playwright was finicky, especially across multiple agents, and I'm experimenting with this command line tool to do the same.

As I’ve written about before, using agents in containers gives me two things I value:

Isolation for parallel work. The agents can start processes and run tests and so forth without conflicting on ports or files.

A bit more security.
Even the Economist has now picked up on the Lethal Trifecta (or Simon Willison's original). By explicitly choosing which environment variables I forward, and not sharing my cookies and my SSH keys, I’m exerting some control over what data and capabilities are exposed to the agent. We’re still playing with fire (can you break out of Colima? Sure! Can you edit my git repo? Sure! Break into my tailnet? Sorta.), but it’s a smaller, more controlled burn.

If you want to try my nonsense, https://github.com/philz/ctr-agent .
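To make the worktree-in-a-container idea concrete, here is a rough sketch in dependency-less Python. It is not the ctr-agent script itself: the image tag, the /work directory, and the tmux session name are assumptions for illustration.

```python
#!/usr/bin/env python3
# Rough sketch of the workflow above, not the actual ctr-agent script.
# Assumptions: a locally built image called "agent-image" with git and tmux,
# and a /work directory inside it.
import secrets
import subprocess

name = "agent-" + secrets.token_hex(3)  # one random name shared by container and branch
repo = subprocess.run(
    ["git", "rev-parse", "--show-toplevel"],
    capture_output=True, text=True, check=True,
).stdout.strip()

# Mount the host repo, add a worktree on a fresh branch inside the container,
# and drop into tmux so extra shells are just extra panes.
subprocess.run(
    [
        "docker", "run", "-it", "--name", name,
        "-v", f"{repo}:/src",
        "agent-image",
        "bash", "-lc",
        f"git -C /src worktree add -b {name} /work/{name} "
        f"&& cd /work/{name} && exec tmux new -s {name}",
    ],
    check=True,
)
```

Because the worktree lives at a path that only exists inside the container, it looks "prunable" from the host, but the branch it created is an ordinary branch in the host repo, ready to merge, rebase, or cherry-pick.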

1 views
David Bushell 1 months ago

Not My Cup of Tea

As blog topics go, last week’s Next.js clownshow was a freebie. A bit of front-end dev and a middle finger to fascism. I had it drafted before my tea went cold. I wasn’t expecting round two to hit harder. This week I momentarily lost the will to blog. Ok, I’m being very dramatic. It wasn’t writer’s block or imposter syndrome. Not the usual suspects. I was just stun-locked for a couple of days.

Like any self-respecting person with an ounce of sanity, I’ve been off Twitter X since it got all fashy. Nevertheless, it’s impossible to avoid the crazy stuff. And this week’s crazy was another level.

Enjoyed my discussion with PM Netanyahu on how AI education and literacy will keep our free societies ahead. We spoke about AI empowering everyone to build software and the importance of ensuring it serves quality and progress. Optimistic for peace, safety, and greatness for Israel and its neighbors. @rauchg Sep 29, 2025 (xcancel) - Guillermo Rauch

On seeing this I noted in anger followed by a hastily worded social post: I wonder if @svelte.dev is onboard with this? The obvious answer is: ‘no’. Everything I already know suggests the Svelte maintainers are antithetical to Rauch. My words were clumsy not malicious. I was merely wondering what on earth does Svelte do? WTF does anyone do when a major funding source does… that?

A quick catch-up for those unaware:

Guillermo Rauch is CEO of Vercel
Netanyahu is a wanted war criminal
Vercel funds Svelte and employs creator Rich Harris
Svelte remains open governance; not owned by Vercel

In the wake of this mess some people were keen to remind everyone of Svelte’s independence. Rich Harris and others were lost for words. Can you blame them? I called out Svelte specifically because it’s the one project with ties to Vercel I care about. Svelte is a shining light in a rather bleak JavaScript ecosystem. I try to avoid political discussion online. I’m a little ham-fisted in questioning the ethics of my tech stack. Some argue that taking Vercel’s money has moral baggage. That it makes the recipient complicit in brand-washing. Personally I’m not sure what to think. To cut ties with Vercel would be morally courageous. Then what? There’s no easy alternative to keep the lights on.

The day after Rauch’s infamous selfie Vercel announced $300M in series F funding. The “F” stands for “f*ck you” I’m told. Vercel’s pivot to AI banked them one third of a billy in a world where profit doesn’t matter. Until the “AI” bubble bursts Vercel are untouchable. Is it wrong to siphon off a little cheddar for better use? Let’s be honest, who else is funding open source software? Few users are willing to pay for it, developers included. The world revolves around load-bearing Nebraskans.

So what can projects like Svelte do?

Cut ties and take the moral victory for a day and then be amazed by the magical vanishing act of an entire dev community when asked for spare change tomorrow?
Continue discarding skeets until the drama blows over?
Pivot to “AI”?

Does it matter? Nothing matters anymore. You can just say things these days. Make up your own truths. Re-roll the chatbot until it agrees. And if your own chatbot continues to fact-check you just rewrite history. We live in a reality where you can spew white supremacy fan fiction for the boys on X and Cloudflare will sponsor your side hustle a week later. Moral bankruptcy is a desirable trait in this economy. Is it any wonder open source maintainers with a backbone are shell-shocked?

I’ll leave Svelte to figure out how to navigate impossible waters. Let’s hope that open governance remains intact, lest it go off the rails. For the rest of us:

Taking action and Doing The Right Thing is often difficult, always exhausting, but it is what we must do, together. We all deserve better. The world deserves better. It’ll take a little work to get there, but there is hope. We all have a choice - Salma Alam-Naylor

Amen to that. Thanks for reading! Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.

0 views
Simon Willison 1 months ago

Designing agentic loops

Coding agents like Anthropic's Claude Code and OpenAI's Codex CLI represent a genuine step change in how useful LLMs can be for producing working code. These agents can now directly exercise the code they are writing, correct errors, dig through existing implementation details, and even run experiments to find effective code solutions to problems. As is so often the case with modern AI, there is a great deal of depth involved in unlocking the full potential of these new tools. A critical new skill to develop is designing agentic loops . One way to think about coding agents is that they are brute force tools for finding solutions to coding problems. If you can reduce your problem to a clear goal and a set of tools that can iterate towards that goal a coding agent can often brute force its way to an effective solution. My preferred definition of an LLM agent is something that runs tools in a loop to achieve a goal . The art of using them well is to carefully design the tools and loop for them to use. Agents are inherently dangerous - they can make poor decisions or fall victim to malicious prompt injection attacks , either of which can result in harmful results from tool calls. Since the most powerful coding agent tool is "run this command in the shell" a rogue agent can do anything that you could do by running a command yourself. To quote Solomon Hykes : An AI agent is an LLM wrecking its environment in a loop. Coding agents like Claude Code counter this by defaulting to asking you for approval of almost every command that they run. This is kind of tedious, but more importantly, it dramatically reduces their effectiveness at solving problems through brute force. Each of these tools provides its own version of what I like to call YOLO mode, where everything gets approved by default. This is so dangerous , but it's also key to getting the most productive results! Here are three key risks to consider from unattended YOLO mode. If you want to run YOLO mode anyway, you have a few options: Most people choose option 3. Despite the existence of container escapes I think option 1 using Docker or the new Apple container tool is a reasonable risk to accept for most people. Option 2 is my favorite. I like to use GitHub Codespaces for this - it provides a full container environment on-demand that's accessible through your browser and has a generous free tier too. If anything goes wrong it's a Microsoft Azure machine somewhere that's burning CPU and the worst that can happen is code you checked out into the environment might be exfiltrated by an attacker, or bad code might be pushed to the attached GitHub repository. There are plenty of other agent-like tools that run code on other people's computers. Code Interpreter mode in both ChatGPT and Claude can go a surprisingly long way here. I've also had a lot of success (ab)using OpenAI's Codex Cloud . Coding agents themselves implement various levels of sandboxing, but so far I've not seen convincing enough documentation of these to trust them. Update : It turns out Anthropic have their own documentation on Safe YOLO mode for Claude Code which says: Letting Claude run arbitrary commands is risky and can result in data loss, system corruption, or even data exfiltration (e.g., via prompt injection attacks). To minimize these risks, use in a container without internet access. You can follow this reference implementation using Docker Dev Containers. 
Locking internet access down to a list of trusted hosts is a great way to prevent exfiltration attacks from stealing your private source code.

Picking the right tools for the loop

Now that we've found a safe (enough) way to run in YOLO mode, the next step is to decide which tools we need to make available to the coding agent. You can bring MCP into the mix at this point, but I find it's usually more productive to think in terms of shell commands instead. Coding agents are really good at running shell commands! If your environment allows them the necessary network access, they can also pull down additional packages from NPM and PyPI and similar. Ensuring your agent runs in an environment where random package installs don't break things on your main computer is an important consideration as well!

Rather than leaning on MCP, I like to create an AGENTS.md (or equivalent) file with details of packages I think they may need to use. For a project that involved taking screenshots of various websites I installed my own shot-scraper CLI tool and dropped a short usage note in AGENTS.md (see the sketch below). Just that one example is enough for the agent to guess how to swap out the URL and filename for other screenshots. Good LLMs already know how to use a bewildering array of existing tools. If you say "use playwright python" or "use ffmpeg", most models will use those effectively - and since they're running in a loop they can usually recover from mistakes they make at first and figure out the right incantations without extra guidance.
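A hedged sketch of the kind of note that might go in such an AGENTS.md file, assuming shot-scraper's standard invocation; the URL and output path are placeholders, not the author's actual snippet:

    # Note for the agent: shot-scraper is installed for taking screenshots.
    # To capture a page as a PNG, run e.g.:
    shot-scraper https://example.com/ -o screenshots/example.png
    # Swap out the URL and the output filename for other pages.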
Issuing tightly scoped credentials

In addition to exposing the right commands, we also need to consider what credentials we should expose to those commands. Ideally we wouldn't need any credentials at all - plenty of work can be done without signing into anything or providing an API key - but certain problems will require authenticated access. This is a deep topic in itself, but I have two key recommendations here:
- Try to provide credentials to test or staging environments where any damage can be well contained.
- If a credential can spend money, set a tight budget limit.
I'll use an example to illustrate. A while ago I was investigating slow cold start times for a scale-to-zero application I was running on Fly.io. I realized I could work a lot faster if I gave Claude Code the ability to directly edit Dockerfiles, deploy them to a Fly account and measure how long they took to launch. Fly allows you to create organizations, and you can set a budget limit for those organizations and issue a Fly API key that can only create or modify apps within that organization... So I created a dedicated organization for just this one investigation, set a $5 budget, issued an API key and set Claude Code loose on it! In that particular case the results weren't useful enough to describe in more detail, but this was the project where I first realized that "designing an agentic loop" was an important skill to develop.

When to design an agentic loop

Not every problem responds well to this pattern of working. The things to look out for here are problems with clear success criteria where finding a good solution is likely to involve (potentially slightly tedious) trial and error. Any time you find yourself thinking "ugh, I'm going to have to try a lot of variations here" is a strong signal that an agentic loop might be worth trying! A few examples:
- Debugging: a test is failing and you need to investigate the root cause. Coding agents that can already run your tests can likely do this without any extra setup.
- Performance optimization: this SQL query is too slow, would adding an index help? Have your agent benchmark the query and then add and drop indexes (in an isolated development environment!) to measure their impact.
- Upgrading dependencies: you've fallen behind on a bunch of dependency upgrades? If your test suite is solid, an agentic loop can upgrade them all for you and make any minor updates needed to reflect breaking changes. Make sure a copy of the relevant release notes is available, or that the agent knows where to find them itself.
- Optimizing container sizes: Docker container feeling uncomfortably large? Have your agent try different base images and iterate on the Dockerfile to try to shrink it, while keeping the tests passing.
A common theme in all of these is automated tests. The value you can get from coding agents and other LLM coding tools is massively amplified by a good, cleanly passing test suite. Thankfully LLMs are great for accelerating the process of putting one of those together, if you don't have one yet.

This is still a very fresh area

Designing agentic loops is a very new skill - Claude Code was first released in just February 2025! I'm hoping that giving it a clear name can help us have productive conversations about it. There's so much more to figure out about how to use these tools as effectively as possible.

You are only seeing the long-form articles from my blog. Subscribe to /atom/everything/ to get all of my posts, or take a look at my other subscription options.

3 views
xenodium 1 month ago

Bending Emacs - Episode 1: Applying CLI utils

While most of the content I share is typically covered in blog posts, I'm trying something new. Today, I'll share my first episode of Bending Emacs. This video focuses on how I like to apply (or batch-apply) command line utilities. While the video focuses on applying command line utilities, here's a list of all the things I used:
- Org mode for the presentation itself.
- ffmpeg does the heavy lifting converting videos to gifs.
- Asked Claude for the relevant command via chatgpt-shell's .
- Browsed the video directory via dired mode.
- Previewed video thumbnails via ready-player mode.
- Previewed gifs via image mode's .
- Validated the command via eshell.
- Applied a DWIM shell command via .
- Duplicated files from via .
Liked the video? Please let me know. Got feedback? Leave me some comments. Please go like my video, share with others, and subscribe to my channel. If there's enough interest, I'll make more videos!
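On the ffmpeg step listed above: the exact command from the video isn't reproduced here, but a typical video-to-gif conversion looks something like this, with the input/output names and sizes as placeholders:

    # Convert a screen recording to an animated gif: sample at 12 fps,
    # scale to 800px wide (keeping aspect ratio), and loop forever.
    ffmpeg -i screencast.mov -vf "fps=12,scale=800:-1:flags=lanczos" -loop 0 screencast.gif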

0 views
André Arko 1 month ago

stupid jj tricks

This post was originally given as a talk for JJ Con. The slides are also available. Welcome to "stupid jj tricks". Today, I'll be taking you on a tour through many different jj configurations that I have collected while scouring the internet. Some of what I'll show is original research or construction created by me personally, but a lot of these things are sourced from blog posts, gists, GitHub issues, Reddit posts, Discord messages, and more.

To kick things off, let me introduce myself. My name is André Arko, and I'm probably best known for spending the last 15 years maintaining the Ruby language dependency manager, Bundler. In the jj world, though, my claim to fame is completely different: Steve Klabnik once lived in my apartment for about a year, so I'm definitely an authority on everything about jj. Thanks in advance for putting into the official tutorial that whatever I say here is now authoritative and how things should be done by everyone using jj, Steve.

The first jj tricks that I'd like to quickly cover are some of the most basic, just to make sure that we're all on the same page before we move on to more complicated stuff. To start with, did you know that you can globally configure jj to change your name and email based on a path prefix? You don't have to remember to set your work email separately in each work repo anymore.

I also highly recommend trying out multiple options for formatting your diffs, so you can find the one that is most helpful to you. A very popular diff formatter is , which provides syntax aware diffs for many languages. I personally use , and the configuration to format diffs with delta looks like this:

Another very impactful configuration is which tool jj uses to handle interactive diff editing, such as in the or commands. While the default terminal UI is pretty good, make sure to also try out Meld, an open source GUI. In addition to changing the diff editor, you can also change the merge editor, which is the program that is used to resolve conflicts. Meld can again be a good option, as well as any of several other merging tools. Tools like mergiraf provide a way to attempt syntax-aware automated conflict resolution before handing off any remaining conflicts to a human to resolve. That approach can dramatically reduce the amount of time you spend manually handling conflicts. You might even want to try FileMerge, the macOS developer tools' built-in merge tool. It supports both interactive diff editing and conflict resolution.

Just two more configurations before we move on to templates. First, the default subcommand, which controls what gets run if you just type and hit return. The default is to run , but my own personal obsessive twitch is to run constantly, and so I have changed my default subcommand to , like so:

The last significant configuration is the default revset used by . Depending on your work patterns, the multi-page history of commits in your current repo might not be helpful to you. In that case, you can change the default revset shown by the log command to one that's more helpful. My own default revset shows only one change from my origin. If I want to see more than the newest change from my origin I use to get the longer log, using the original default revset. I'll show that off later.
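The talk's own config snippets aren't reproduced above, but as a rough illustration of where this kind of configuration lives, the jj CLI can edit or set user-level options directly. The keys exist in jj, while the values below are examples rather than the author's exact setup:

    # Open the user-level config file (TOML) in your editor:
    jj config edit --user
    # Or set individual options from the shell, for example the interactive
    # diff editor and the merge editor (Meld is the GUI mentioned in the talk):
    jj config set --user ui.diff-editor meld
    jj config set --user ui.merge-editor meld
    # The subcommand run by a bare `jj` is also configurable; "status" here is
    # just an illustration, any subcommand name works:
    jj config set --user ui.default-command status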
First, if you haven’t tried this yet, please do yourself a favor and go try every builtin jj template style for the command. You can list them all with , and you can try them each out with . If you find a builtin log style that you especially like, maybe you should set it as your default template style and skip the rest of this section. For the rest of you sickos, let’s see some more options. The first thing that I want to show you all is the draft commit description. When you run , this is the template that gets generated and sent to your editor for you to complete. Since I am the kind of person who always sets git commit to verbose mode, I wanted to keep being able to see the diff of what I was committing in my editor when using jj. Here’s what that looks like: If you’re not already familiar with the jj template functions, this uses to combine strings, to choose the first value that isn’t empty, to add before+after if the middle isn’t empty, and to make sure the diff status is fully aligned. With this template, you get a preview of the diff you are committing directly inside your editor, underneath the commit message you are writing. Now let’s look at the overridable subtemplates. The default templates are made of many repeated pieces, including IDs, timestamps, ascii art symbols to show the commit graph visually, and more. Each of those pieces can be overrides, giving you custom formats without having to change the default template that you use. For example, if you are a UTC sicko, you can change all timestamps to render in UTC like , with this configuration: Or alternatively, you can force all timestamps to print out in full, like (which is similar to the default, but includes the time zone) by returning just the timestamp itself: And finally you can set all timestamps to show a “relative” distance, like , rather than a direct timestamp: Another interesting example of a template fragment is supplied by on GitHub, who changes the node icon specifically to show which commits might be pushed on the next command. This override of the template returns a hollow diamond if the change meets some pushable criteria, and otherwise returns the , which is the regular icon. It’s not a fragment, but I once spent a good two hours trying to figure out how to get a template to render just a commit message body, without the “title” line at the top. Searching through all of the built-in jj templates finally revealed the secret to me, which is a template function named . With that knowledge, it becomes possible to write a template that returns only the body of a commit message: We first extract the title line, remove that from the front, and then trim any whitespace from the start of the string, leaving just the description body. Finally, I’d like to briefly look at the possibility of machine-readable templates. Attempting to produce JSON from a jj template string can be somewhat fraught, since it’s hard to tell if there are quotes or newlines inside any particular value that would need to be escaped for a JSON object to be valid when it is printed. Fortunately, about 6 months ago, jj merged an function, which makes it possible to generate valid JSON with a little bit of template trickery. For example, we could create a output of a JSON stream document including one JSON object per commit, with a template like this one: This template produces valid JSON that can then be read and processed by other tools, looks like this. 
Templates have vast possibilities that have not yet been touched on, and I encourage you to investigate and experiment yourself.

Now let's look at some revsets. The biggest source of revset aliases that I have seen online is from @thoughtpolice's jjconfig gist, but I will consolidate across several different config files here to demonstrate some options. The first group of revsets roughly corresponds to "who made it", and composes well with other revsets in the future. For example, it's common to see a type alias, and a type alias to let the current user easily identify any commits that they were either author or committer on, even if they used multiple different email addresses. Another group uses description prefixes to identify commits that have some property, like WIP or "private". It's then possible to use these in other revsets to exclude these commits, or even to configure jj to refuse to push them.

Thoughtpolice seems to have invented the idea of a , which is a group of commits on top of some parent: Building on top of the stack, it's possible to construct a set of commits that are "open", meaning any stack reachable from the current commit or other commits authored by the user. By setting the stack value to 1, nothing from trunk or other remote commits is included, so every open commit is mutable, and could be changed or pushed. Finally, building on top of the open revset, it's possible to define a "ready" revset that is every open change that isn't a child of a wip or private change: It's also possible to create a revset of "interesting" commits by using the opposite kind of logic, as in this chain of revsets composed by . You take remote commits and tags, then subtract those from our own commits, and then show anything that is either local-only, tracking the remote, or close to the current commit.

Now let's talk about jj commands. You probably think I mean creating jj commands by writing our own aliases, but I don't! That's the next section. This section is about the jj commands that it took me weeks or months to realize existed, and understand how powerful they are.

First up: absorb. When I first read about absorb, I thought it was the exact inverse of squash, allowing you to choose a diff that you would bring into the current commit rather than eject out of the current commit. That is wildly wrong, and so I want to make sure that no one else falls victim to this misconception. The absorb command iterates over every diff in the current commit, finds the previous commit that changed those lines, and squashes just that section of the diff back to that commit. So if you make changes in four places, impacting four previous commits, you can run absorb once to squash all four sections back into all four commits with no further input whatsoever.

Then, . If you're taking advantage of jj's amazing ability to not need branches, and just making commits and squashing bits around as needed until you have each diff combined into one change per thing you need to submit… you can break out the entire chain of separate changes into one commit on top of trunk for each one by just running it and letting jj do all the work for you.

Last command, and most recent one: fix. You can use fix to run a linter or formatter on every commit in your history before you push, making sure both that you won't have any failures and that you won't have any conflicts if you try to reorder any of the commits later.
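For a sense of how the two commands named above are invoked; nothing here is specific to the author's setup:

    # Fold each hunk of the current change back into whichever mutable ancestor
    # last touched those lines:
    jj absorb
    # Run the configured formatter/linter fix tools across your mutable commits:
    jj fix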
To configure the fix command, add a tool and a glob in your config file, like this: Now you can just and know that all of your commits are possible to reorder without causing linter fix conflicts. It's great.

Okay. Now we can talk about command aliases. First up, the venerable tug. In the simplest possible form, it takes the closest bookmark, and moves that bookmark to the parent of the current commit. What if you want it to be smarter, though? It could find the closest bookmark, and then move it to the closest pushable commit, whether that commit was , or , or . For that, you can create a revset for , and then tug from the closest bookmark to the closest pushable, like this: Now your bookmark jumps up to the change that you can actually push, by excluding immutable, empty, or descriptionless commits.

What if you wanted to allow tug to take arguments, for those times when two bookmarks are on the same change, or when you actually want to tug a different bookmark than the closest one? That's also pretty easy, by adding a second variant of the tug command that takes an argument: This version of tug works just like the previous one if no argument is given. But if you do pass an argument, it will move the bookmark with the name that you passed instead of the closest one.

How about if you've just pushed to GitHub, and you want to create a pull request from that pushed bookmark? The command isn't smart enough to figure that out automatically, but you can tell it which bookmark to use: Just grab the list of bookmarks attached to the closest bookmark, take the first one, pass it to , and you're all set.

What if you just want single commands that let you work against a git remote, with defaults tuned for automatic tugging, pushing, and tracking? I've also got you covered. Use to colocate jj into this git repo, and then track any branches from upstream, like you would get from a git clone. Then, you can to find the closest bookmark to , do a git fetch, rebase your current local commits on top of whatever just got pulled, and then show your new stack. When you're done, just . This push handles looking for a huggable bookmark, tugging it, doing a git push, and making sure that you're tracking the origin copy of whatever you just pushed, in case you created a new branch.

Last, but definitely most stupid, I want to show off a few combo tricks that manage to deliver some things I think are genuinely useful, but in a sort of cursed way. First, we have counting commits. In git, you can pass an option to log that simply returns a number rather than a log output. Since jj doesn't have anything like that, I was forced to build my own when I wanted my shell prompt to show how many commits beyond trunk I had committed locally. In the end, I landed on a template consisting of a single character per commit, which I then counted with . That's the best anyone on GitHub could come up with, too. See? I warned you it was stupid.

Next, via on Discord, I present: except for the closest three commits it also shows at the same time. Simply create a new template that copies the regular log template, while inserting a single conditional line that adds if the current commit is inside your new revset that covers the newest 3 commits. Easy. And now you know how to create the alias I promised to explain earlier.
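Circling back to the commit-counting trick above, a rough sketch of the idea might look like the following; the revset and the one-character template are illustrative, not necessarily the author's exact incantation:

    # Emit one character per local commit ahead of trunk, then count the characters.
    jj log --no-graph -r 'trunk()..@' -T '"x"' | wc -c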
Last, but definitely most stupid, I have ported my previous melding of and over to , as the subcommand , which I alias to because it's inspired by , the shell cd fuzzy matcher with the command . This means you can to see a list of local bookmarks, or to see a list of all bookmarks including remote branches. Then, you can to do a fuzzy match on , and execute . Jump to work on top of any named commit trivially by typing a few characters from its name.

I would love to also talk about all the stupid shell prompt tricks that I was forced to develop while setting up a zsh prompt that includes lots of useful jj information without slowing down prompt rendering, but I'm already out of time. Instead, I will refer you to my blog post about a jj prompt for powerlevel10k, and you can spend another 30 minutes going down that rabbit hole whenever you want.

Finally, I want to thank some people. Most of all, I want to thank everyone who has worked on creating jj, because it is so good. I also want to thank everyone who has posted their configurations online, inspiring this talk. All the people whose names I was able to find in my notes include @martinvonz, @thoughtpolice, @pksunkara, @scott2000, @avamsi, @simonmichael, and @sunshowers. If I missed you, I am very sorry, and I am still very grateful that you posted your configuration. Last, I need to thank @steveklabnik and @endsofthreads for being jj-pilled enough that I finally tried it out and ended up here as a result. Thank you so much, to all of you.

2 views
xenodium 1 month ago

Introducing Emacs agent-shell (powered by ACP)

Not long ago, I introduced acp.el, an Emacs lisp implementation of ACP (Agent Client Protocol), the agent protocol developed between Zed and Google folks. While I've been happily accessing LLMs from my beloved text editor via chatgpt-shell (a multi-model package I built), I've been fairly slow on the AI agents uptake. Probably a severe case of old-man-shouts-at-cloud sorta thing, but hey I want well-integrated tools in my text editor. When I heard of ACP, I knew this was the thing I was waiting for to play around with agents. With an early acp.el client library in place, I set out to build an Emacs-native agent integration…

Today, I have an initial version of agent-shell I can share. agent-shell is a native Emacs shell, powered by comint-mode (check out Mickey's comint article btw). As such, we don't have to dance between char and line modes to interact with things. agent-shell is just a regular Emacs buffer like any other you're used to.

Thanks to ACP, we can now build agent-agnostic experiences by simply configuring our clients to communicate with their respective agents using a common protocol. As users, we benefit from a single, consistent experience, powered by any agent of our choice. Configuring different agents from agent-shell boils down to which agent we want running in the comms process. Here's an example of Gemini CLI vs Claude Code configuration: I've yet to try other agents. If you get another agent running, I'd love to hear about it. Maybe submit a pull request?

While I've been relying on my acp.el client library, I'm still fairly new to the protocol. I often inspect traffic to see what's going on. After staring at json for far too long, I figured I may as well build some tooling around acp.el to make my life easier. I added a traffic buffer for that. From , you can invoke it via .

Developing against paid agents got expensive quickly. Not only expensive, but my edit-compile-run cycle also became boringly slow waiting for agents. While I knew I wanted some sort of fake agent to work against, I didn't want to craft the fake traffic myself. Remember that traffic buffer I showed ya? Well, I can now save that traffic to disk and replay it later. This enabled me to run problematic sessions once and quickly replay multiple times to fix things. While re-playing has its quirks and limitations, it's done the job for now. You can see a Claude Code session below, followed by its replayed counterpart via fake infrastructure.

Getting here took quite a bit of work. Having said that, it's only a start. I myself need to get more familiar with agent usage and evolve the package UX however it feels most natural within its new habitat. Lately, I've been experimenting with a quick diff buffer, driven by n/p keys, shown along the permission dialog. While I've implemented enough parts of the Agent Client Protocol Schema to make the package useful, it's hardly complete. I've yet to fully familiarize myself with most protocol features.

Both of my new Emacs packages, agent-shell and acp.el, are now available on GitHub. As an agent user, go straight to agent-shell. If you're a package author and would like to build an ACP experience, then give acp.el a try. Both packages are brand new and may have rough edges. Be sure to file bugs or feature requests as needed. I've been heads down, working on these packages for some time. If you're using cloud LLM services, you're likely already paying for tokens. If you find my work useful, please consider routing some of those coins to help fund it.
Maybe my tools make you more productive at work? Ask your employer to support the work. These packages not only take time and effort, but also cost me money. Help fund the work.

0 views
W. Jason Gilmore 1 month ago

Minimum Viable Expectations for Developers and AI

We're headed into the tail end of 2025 and I'm seeing a lot less FUD (fear, uncertainty, and doubt) amongst software developers when it comes to AI. As usual when it comes to adopting new software tools, I think a lot of the initial hesitancy had to do with everyone but the earliest adopters falling into three camps: don't, can't, and won't:
- Developers don't understand the advantages for the simple reason they haven't even given the new technology a fair shake.
- Developers can't understand the advantages because they are not experienced enough to grasp the bigger picture when it comes to their role (problem solvers and not typists).
- Developers won't understand the advantages because they refuse to do so on the grounds that new technology threatens their job or is in conflict with their perception that modern tools interfere with their role as a "craftsman" (you should fire these developers).
When it comes to AI adoption, I'm fortunately seeing the numbers falling into these three camps continuing to wane. This is good news because it benefits both the companies they work for and the developers themselves. Companies benefit because AI coding tools, when used properly, unquestionably write better code faster for many (but not all) use cases. Developers benefit because they are freed from the drudgery of coding CRUD (create, retrieve, update, delete) interfaces and can instead focus on more interesting tasks.

Because this technology is so new, I'm not yet seeing a lot of guidance regarding setting employee expectations when it comes to AI usage within software teams. Frankly I'm not even sure that most managers even know what to expect. So I thought it might be useful to outline a few thoughts regarding MVEs (minimum viable expectations) when it comes to AI adoption:

Even if your developers refuse to use generative AI tools for large-scale feature implementation, the productivity gains to be had from simply adopting the intelligent code completion features are undeniable. A few seconds here and a few seconds there add up to hours, days, and weeks of time saved otherwise spent repeatedly typing for loops, commonplace code blocks, and the like.

Agentic AIs like GitHub Copilot can be configured to perform automated code reviews on all or specific pull requests. At Adalo we've been using Copilot in this capacity for a few months now and while it hasn't identified any groundshaking issues it certainly has helped to improve the code by pointing out subtle edge cases and syntax issues which could ultimately be problematic if left unaddressed.

In December 2024, Anthropic announced a new open standard called Model Context Protocol (MCP) which you can think of as a USB-like interface for AI. This interface gives organizations the ability to plug both internal and third-party systems into AI, supplementing the knowledge already incorporated into the AI model. Since the announcement MCP adoption has spread like wildfire, with MCP directories like https://mcp.so/ tracking more than 16,000 public MCP servers. Companies like GitHub and Stripe have launched MCP servers which let developers talk to these systems from inside their IDEs. In doing so, developers can for instance create, review, and ask AI to implement tickets without having to leave their IDE. As with the AI-first IDE's ability to perform intelligent code completion, reducing the number of steps a developer has to take to complete everyday tasks will in the long run result in significant amounts of time saved.

In my experience test writing has ironically been one of AI's greatest strengths. SaaS products I've built such as https://securitybot.dev/ and https://6dollarcrm.com/ have far, far more test coverage than they would have ever had pre-AI.
As of the time of this writing, SecurityBot.dev has more than 1,000 assertions spread across 244 tests. 6DollarCRM fares even better (although the code base is significantly larger), with 1,149 assertions spread across 346 tests. Models such as Claude 4 Sonnet and Opus 4.1 have been remarkably good test writers, and developers can further reinforce the importance of including tests alongside generated code within specifications.

AI coding tools such as Cursor and Claude Code tend to work much better when the programmer provides additional context to guide the AI. In fact, Anthropic places such emphasis on the importance of doing so that it appears first in this list of best practices. Anything deemed worth communicating to a new developer who has joined your team is worthy of inclusion in this context, including coding styles, useful shell commands, testing instructions, dependency requirements, and so forth. You'll also find publicly available coding guidelines for specific technology stacks. For instance I've been using this set of Laravel coding guidelines for AI with great success.

The sky really is the limit when it comes to incorporating AI tools into developer workflows. Even though we're still in the very earliest stages of this technology's lifecycle, I'm both personally seeing enormous productivity gains in my own projects as well as greatly enjoying seeing the teams I work with come around to their promise. I'd love to learn more about how you and your team are building processes around their usage. E-mail me at [email protected].

0 views
xenodium 1 month ago

Introducing acp.el

I recently shared my early Emacs experiments with ACP, the Agent Client Protocol now supported by Gemini CLI and Claude Code LLM agents. While we can already run these agents from Emacs with the likes of vterm, I'm keen to offer an Emacs-native alternative to drive them. To do that, I'm working on a new package: (more on this to be shared soon). While this new Emacs agent shell has an opinionated user experience, it uses ACP under the hood. Being a protocol, it's entirely UI-agnostic.

For this, I now have an early version available of the acp.el library. acp.el implements the Agent Client Protocol for Emacs lisp as per agentclientprotocol.com. While this library is in its infancy, it's enabling me to carry on with my work. acp.el lives as a separate library, is UI-agnostic, and can be used by Emacs package authors to build their desired ACP-powered agent experience. You can instantiate an ACP client and send a request as follows:

I'm new at using ACP myself, so I've added a special logging buffer to acp.el which enables me to inspect traffic and learn about the exchanges between clients and agents. You can enable logging with: Look out for the buffer, which looks a little like this:

If you're keen to experiment with ACP in Emacs lisp and build agent-agnostic packages, take a look at acp.el (now on GitHub). As mentioned, it's early days for this library, but it's a start. Please file issues and feature requests. If you build anything on top of acp.el, lemme know. I'd love to see it in action.

I'm working on two new Emacs packages: acp.el (introduced in this post) and (I'll soon share more about that). Please help me make development of these packages sustainable. These packages take time and effort, but also cost me money as I have to pay for LLM tokens throughout testing and development. Please help fund it.

0 views
Manuel Moreale 1 month ago

Jack Baty

This week on the People and Blogs series we have an interview with Jack Baty, whose blog can be found at baty.net . Tired of RSS? Read this in your browser or sign up for the newsletter . The People and Blogs series is supported by Andrea Contino and the other 120 members of my "One a Month" club. If you enjoy P&B, consider becoming one for as little as 1 dollar a month. Hello, I'm Jack. I was born, raised, and live in west Michigan, US. I live in a quiet (aka "boring") suburb with my lovely wife, our dog, a few tropical fish, and a sea urchin named Lurch. I was a paperboy, fast food worker, and ditch digger long before I started creating software for a living. My first programming project was a Laboratory Information Management System (L.I.M.S.) for a local environmental testing lab. This was in 1992. I was learning as I went, using a Macintosh RDBMS environment called 4th Dimension. I continued as a solo software developer for a couple of years. In 1995, I cofounded the web design firm "Fusionary Media" with my two partners. Fusionary grew to a team of around 15 people. We built some very nice websites, software, and mobile apps for companies like MLB, GM, Steelcase, etc. This went on for 25 years, until we sold the company in 2020. I've been "retired" since then, but I miss working on things with people, so we'll see. These days I spend most of my time with photography, blogging, and reading. I enjoy tinkering with tech of all kinds and exploring what different software tools can do. This often means completely upending my workflow in order to shoehorn some cool new toy into it. I call this a "hobby". Which one? 😂 In the late 1990s, when the internet was still new and exciting, I wanted to tell everyone about everything. I was learning to create websites, so starting a blog was a great opportunity to do both. I created a couple of proto-blogs in 1998 and 1999, but those have been lost to time. My current blog at baty.net began in August 2000, 25 years ago this month. Everything before 2021 is archived at archive.baty.net . I don't delete old posts, although I probably should. My early posts were mostly Gruber-style link posts. It's sad that so many of those original links are dead now. Eventually I started sharing more details about what I was doing and thinking about, rather than just linking to other things. This continues today. I sporadically maintain several other sites/blogs. Other than Baty.net , there's also a "Daily Notes" blog at daily.baty.net , but lately I've just been rolling that into baty.net. I recently started a photo blog using Ghost at baty.photo . Ghost makes posting images easy, but I haven't decided if I'll continue. I keep a wiki using TiddlyWiki (since 2018) ( rudimentarylathe.org ). I don't even know what it's for, honestly, but I keep putting stuff there when I don't know where else it should go. My dream is to have only One True Blog, but that's been elusive. Honestly, I don't really have a creative process. Nothing deliberate, anyway. My posts are mostly journal entries about whatever's on my mind. What usually happens is that I'll read someone's blog post or I'll try some new tool, and share my thoughts on it. I used to write (bad) poetry and would love to compose longer, thoughtful essays, but that never happens. More often than not I publish things long before they're ready. It's as if I'd never heard of proofreading. I just fix things later. If I had to make everything perfect first, I'd never post anything. 
I write my posts in whatever text editor I'm infatuated with at the moment. 90% of the time, that means Emacs, the nerdiest possible option. I prefer a tidy, pleasant environment. Usually, though, I sit at my desktop computer (an M4 MacBook Air and Studio Display) in my messy basement office. I just start writing whenever I have something to say. My wife thinks I have some form of auditory processing disorder, so I rarely listen to music while writing. It only muddles my thoughts (even more than they already are). I do find that things come easier for me when I'm surrounded by books. They inspire me. Once in a while, I'll draft posts longhand with a nice fountain pen or on a manual typewriter, but I'm lazy, so that's pretty rare. If I had my way, there'd be a giant window in my home office, maybe overlooking water. Currently I stare at a bare wall, which is probably not ideal for creative inspiration.

I change platforms so often that it'll probably be different by the time anyone reads this, but I'm currently using Hugo to render a static website. My static sites are hosted on a small VPS running FreeBSD with Caddy as the web server. I use Porkbun for domain registration and management. For creating new posts in Hugo, I have Emacs configured to create properly formatted Markdown files in the correct location. I write the posts in Emacs. When finished, I run a little shell script that builds the site and uploads it to the server. I don't use any fancy Github deployment actions or anything. I just render the site locally and use rsync to push changes.

I've used nearly every blogging platform ever created. I've even written several of my own. Each platform has something I love about it, and when I start to miss whatever that thing is, I'll switch back to it. And so on. Sometimes moving to a new blogging platform gets the writing juices flowing. Sometimes it's just something to do when I'm bored and don't have anything to say. I would love to be the type of person who started a WordPress (or whatever) blog in the noughts and never changed anything. So many of my posts have bad links or missing images due to moving from platform to platform. It's frustrating for both me and my readers. I suppose what I'd do differently is pick a process and stick with it. Maybe focus on writing instead of tinkering with themes and platforms and such. Blogs are simple things, really, and overthinking everything has caused me nothing but trouble.

I'm running my static sites on a small, $5/month (plus $1 for backups) VPS at Vultr, so it costs very little. I pay another $5/month for Tinylytics to watch traffic/views. So I'm in for around $11/month. The Ghost blog costs $15/month at MagicPages. One other cost is domain registrations, which adds up to maybe $50/year. I have no interest in trying to make money from blogging, even if it were feasible.

I hesitate to recommend specific blogs, since that means leaving out so many others. I'll just pick a few at random from my RSS reader. Most of the blogs I follow are by people writing about their lives and interests. I'm less inclined to follow Capital-B Bloggers or industry-specific blogs these days. I'm interested in people, not companies. May I just suggest to anyone reading this, if you're even remotely interested in starting a blog, do it! 😁

Now that you're done reading the interview, go check the blog and subscribe to the RSS feed. If you're looking for more content, go read one of the previous 108 interviews.
Make sure to also say thank you to Nicolas Magand and the other 120 supporters for making this series possible.

0 views