> It’s tough to make predictions, especially about the future. – Yogi Berra

I’ve been an Emacs fanatic for over 20 years. I’ve built and maintained some of the most popular Emacs packages, contributed to Emacs itself, and spent countless hours tweaking my configuration. Emacs isn’t just my editor – it’s my passion, and my happy place. Over the past year I’ve also been spending a lot of time with Vim and Neovim, relearning them from scratch and having a blast contrasting how the two communities approach similar problems. It’s been a fun and refreshing experience.1

And lately, like everyone else in our industry, I’ve been playing with AI tools – Claude Code in particular – watching the impact of AI on the broader programming landscape, and pondering what it all means for the future of programming. Naturally, I keep coming back to the same question: what happens to my beloved Emacs and its “arch nemesis” Vim in this brave new world?

I think the answer is more nuanced than either “they’re doomed” or “nothing changes”. Predicting the future is obviously hard, but it’s fun to speculate about it. Every major industry shift presents plenty of risks and opportunities for those involved in it, so I want to spend a bit of time ruminating over the risks and opportunities for Emacs and Vim.

VS Code is already the dominant editor by a wide margin, and it’s going to get first-class integrations with every major AI tool – Copilot (obviously), Codex, Claude, Gemini, you name it. Microsoft has every incentive to make VS Code the best possible host for AI-assisted development, and the resources to do it. On top of that, purpose-built AI editors like Cursor, Windsurf, and others are attracting serious investment and talent. These aren’t adding AI to an existing editor as an afterthought – they’re building the entire experience around AI workflows.
They offer integrated context management, inline diffs, multi-file editing, and agent loops that feel native rather than bolted on. Every developer who switches to one of these tools is a developer who isn’t learning Emacs or Vim keybindings, isn’t writing Elisp, and isn’t contributing to our ecosystems. The gravity well is real.

I’ve never tried Cursor or Windsurf, simply because they are essentially forks of VS Code, and I can’t stand VS Code – I tried it several times over the years and never felt productive in it, for a variety of reasons.

Part of the case for Emacs and Vim has always been that they make you faster at writing and editing code. The keybindings, the macros, the extensibility – all of it is in service of making the human more efficient at the mechanical act of coding. But if AI is writing most of your code, how much does mechanical editing speed matter? When you’re reviewing and steering AI-generated diffs rather than typing code character by character, the bottleneck shifts from “how fast can I edit” to “how well can I specify intent and evaluate output”. That’s a fundamentally different skill, and it’s not clear that Emacs or Vim have an inherent advantage there.

The learning curve argument gets harder to justify, too. “Spend six months learning Emacs and you’ll be 10x faster” is a tough sell when a junior developer with Cursor can scaffold an entire application in an afternoon.2

VS Code has Microsoft. Cursor has venture capital. Emacs has… a small group of volunteers and the FSF. Vim had Bram, and now has a community of maintainers. Neovim has a small but dedicated core team. This has always been the case, of course, but AI amplifies the gap. Building deep AI integrations requires keeping up with fast-moving APIs, models, and paradigms. Well-funded teams can dedicate engineers to this full-time; volunteer-driven projects move at the pace of people’s spare time and enthusiasm.
Let’s go all the way: what if programming as we know it is fully automated within the next decade? If AI agents can take a specification and produce working, tested, deployed software without human intervention, we won’t need coding editors at all. Not Emacs, not Vim, not VS Code, not Cursor. The entire category becomes irrelevant. I don’t think this is likely in the near term, but it’s worth acknowledging as a possibility. The trajectory of AI capabilities has surprised even the optimists. (I was initially an AI skeptic, but the rapid advancements last year eventually changed my mind.)

Here’s the thing almost nobody is talking about: Emacs and Vim have always suffered from the obscurity of their extension languages. Emacs Lisp is a 1980s Lisp dialect that most programmers have never seen before. VimScript is… VimScript. Even Lua, which Neovim adopted specifically because it’s more approachable, is niche enough that most developers haven’t written a line of it.

This has been the single biggest bottleneck for both ecosystems. Not the editors themselves – they’re incredibly powerful – but the fact that customizing them requires learning an unfamiliar language, and most people never make it past copying snippets from blog posts and READMEs. I felt incredibly overwhelmed by Elisp and VimScript when I was learning Emacs and Vim for the first time, and I imagine I wasn’t the only one. I started to feel really productive in Emacs only after putting in quite a lot of time to actually learn Elisp properly. (I never bothered to do the same for VimScript, though, and admittedly I’m not too eager to master Lua either.)

AI changes this overnight. You can now describe what you want in plain English and get working Elisp, VimScript, or Lua. “Write me an Emacs function that reformats the current paragraph to 72 columns and adds a prefix” – done. “Configure lazy.nvim to set up LSP with these keybindings” – done.
The extension language barrier, which has been the biggest obstacle to adoption for decades, is suddenly much lower.

After 20+ years in the Emacs community, I often have the feeling that a relatively small group – maybe 50 to 100 people – is driving most of the meaningful progress. The same names show up in MELPA, on the mailing lists, and in bug reports. This isn’t a criticism of those people (I’m proud to be among them), but it is a structural weakness. A community that depends on so few contributors is fragile.

And it’s not just Elisp and VimScript. The C internals of both Emacs and Vim (and Neovim’s C core) are maintained by an even smaller group. Finding people who are both willing and able to hack on decades-old C codebases is genuinely hard, and it’s only getting harder as fewer developers learn C at all.

AI tools can help here in two ways. First, they lower the barrier for new contributors – someone who understands the concept of what they want to build can now get AI assistance with the implementation in an unfamiliar language. Second, they help existing maintainers move faster. I’ve personally found that AI is excellent at generating test scaffolding, writing documentation, and handling the tedious parts of package maintenance that slow everything down.

The Emacs and Neovim communities aren’t sitting idle. There are already impressive AI integrations – gptel, aider.el, and copilot.el on the Emacs side; avante.nvim, codecompanion.nvim, and copilot.lua on the Neovim side – and that’s just a sample. Building these integrations isn’t as hard as it might seem – the APIs are straightforward, and the extensibility of both editors means you can wire up AI tools in ways that feel native. With AI assistance, creating new integrations becomes even easier. I wouldn’t be surprised if the pace of plugin development accelerates significantly.

Here’s an irony that deserves more attention: many of the most powerful AI coding tools are terminal-native. Claude Code, Aider, and various Copilot CLI tools all run in the terminal. And what lives in the terminal? Emacs and Vim.3

Running Claude Code in an Emacs buffer or a Neovim terminal split is a perfectly natural workflow. You get the AI agent in one pane and your editor in another, with all your keybindings and tools intact. There’s no context switching to a different application – it’s all in the same environment. This is actually an advantage over GUI-based AI editors, where the AI integration is tightly coupled to the editor’s own interface. With terminal-native tools, you get to choose your own editor and your own AI tool, and they compose naturally.

Emacs’s “editor as operating system” philosophy is uniquely well-suited to AI integration. It’s not just a code editor – it’s a mail client (Gnus, mu4e), a note-taking system (Org mode), a Git interface (Magit), a terminal emulator, a file manager, an RSS reader, and much more. AI can be integrated at every one of these layers. Imagine an AI assistant that can read your org-mode agenda, draft email replies in mu4e, help you write commit messages in Magit, and refactor code in your source buffers – all within the same environment, sharing context. No other editor architecture makes this kind of deep, cross-domain integration as natural as Emacs does.

Admittedly, I stopped using Emacs as my OS a long time ago, and these days I use it mostly for programming and blogging. (I’m writing this article in Emacs with the help of ) Still, I’m only one Emacs user, and many people are probably using it in a more holistic manner.

One of the most underappreciated benefits of AI for Emacs and Vim users is mundane: troubleshooting. Both editors have notoriously steep learning curves and opaque error messages. “Wrong type argument: stringp, nil” has driven more people away from Emacs than any competitor ever did. AI tools are remarkably good at explaining cryptic error messages, diagnosing configuration issues, and suggesting fixes. They can read your init file and spot the problem. They can explain what a piece of Elisp does.
They can help you understand why your keybinding isn’t working. This dramatically flattens the learning curve – not by making the editor simpler, but by giving every user access to a patient, knowledgeable guide. I don’t really need AI assistance to troubleshoot anything in my Emacs setup, but it’s been handy occasionally in Neovim-land, where my knowledge is relatively modest by comparison.

There’s at least one documented case of someone returning to Emacs after years away, specifically because Claude Code made it painless to fix configuration issues. They’d left for IntelliJ because the configuration burden got too annoying – and came back once AI removed that barrier. “Happy f*cking days I’m home again,” as they put it. If AI can bring back lapsed Emacs users, that’s a good thing in my book.

Let’s revisit the doomsday scenario. Say programming is fully automated and nobody writes code anymore. Does Emacs die? Not necessarily. Emacs is already used for far more than programming. People use Org mode to manage their entire lives – tasks, notes, calendars, journals, time tracking, even academic papers. Emacs is a capable writing environment for prose, with excellent support for LaTeX, Markdown, AsciiDoc, and plain text. You can read email, browse the web, manage files, and yes, play Tetris.

Vim, similarly, is a text editing paradigm as much as a program. Vim keybindings have colonized every text input in the computing world – VS Code, IntelliJ, browsers, shells, even Emacs (via Evil mode). Even if the Vim program fades, the Vim idea is immortal.4

And who knows – maybe there’ll be a market for artisanal, hand-crafted software one day. “Locally sourced, free-range code, written by a human in Emacs.” I’d buy that t-shirt. And I’m fairly certain those artisan programmers won’t be using VS Code. So even in the most extreme scenario, both editors have a life beyond code. A diminished one, perhaps, but a life nonetheless.
I think what’s actually happening is more interesting than “editors die” or “editors are fine”. The role of the editor is shifting. For decades, the editor was where you wrote code. Increasingly, it’s becoming where you review, steer, and refine code that AI writes. The skills that matter are shifting from typing speed and editing gymnastics to specification clarity, code reading, and architectural judgment.

In this world, the editor that wins isn’t the one with the best code completion – it’s the one that gives you the most control over your workflow. And that has always been Emacs and Vim’s core value proposition. The question is whether the communities can adapt fast enough. The tools are there. The architecture is there. The philosophy is right. What’s needed is people – more contributors, more plugin authors, more documentation writers, more voices in the conversation. AI can help bridge the gap, but it can’t replace genuine community engagement.

Not everyone in the Emacs and Vim communities is enthusiastic about AI, and the objections go beyond mere technophobia. There are legitimate ethical concerns that are going to be debated for a long time:

- **Energy consumption.** Training and running large language models requires enormous amounts of compute and electricity. For communities that have long valued efficiency and minimalism – Emacs users who pride themselves on running a 40-year-old editor, Vim users who boast about their sub-second startup times – the environmental cost of AI is hard to ignore.
- **Copyright and training data.** LLMs are trained on vast corpora of code and text, and the legality and ethics of that training remain contested. Some developers are uncomfortable using tools that may have learned from copyrighted code without explicit consent. This concern hits close to home for open-source communities that care deeply about licensing.
- **Job displacement.** If AI makes developers significantly more productive, fewer developers might be needed. This is an uncomfortable thought for any programming community, and it’s especially pointed for editors whose identity is built around empowering human programmers.

These concerns are already producing concrete action. The Vim community recently saw the creation of EVi, a fork of Vim whose entire raison d’être is to provide a text editor free from AI integration. Whether you agree with the premise or not, the fact that people are forking established editors over this tells you how strongly some community members feel. I don’t think these concerns should stop anyone from exploring AI tools, but they’re real and worth taking seriously. I expect to see plenty of spirited debate about this on emacs-devel and the Neovim issue tracker in the years ahead.

> The future ain’t what it used to be. – Yogi Berra

I won’t pretend I’m not worried. The AI wave is moving fast, the incumbents have massive advantages in funding and mindshare, and the very nature of programming is shifting under our feet. It’s entirely possible that Emacs and Vim will gradually fade into niche obscurity, used only by a handful of diehards who refuse to move on.

But I’ve been hearing that Emacs is dying for 20 years, and it’s still here. The community is small but passionate, the editor is more capable than ever, and the architecture is genuinely well-suited to the AI era. Vim’s situation is similar – the core idea is so powerful that it keeps finding new expression (Neovim being the latest and most vigorous incarnation).

The editors that survive won’t be the ones with the flashiest AI features. They’ll be the ones whose users care enough to keep building, adapting, and sharing. That’s always been the real engine of open-source software, and no amount of AI changes that. So if you’re an Emacs or Vim user: don’t panic, but don’t be complacent either. Learn the new AI tools (if you’re not fundamentally opposed to them, that is). Pimp your setup and make it awesome. Write about your workflows.
Help newcomers. The best way to ensure your editor survives the AI age is to make it thrive in it. Maybe the future ain’t what it used to be – but that’s not necessarily a bad thing.

That’s all I have for you today. Keep hacking!

Some of the AI integrations already available for Emacs and Neovim:

- gptel – a versatile LLM client that supports multiple backends (Claude, GPT, Gemini, local models)
- ellama – an Emacs interface for interacting with LLMs via llama.cpp and Ollama
- aider.el – Emacs integration for Aider, the popular AI pair programming tool
- copilot.el – GitHub Copilot integration (I happen to be the current maintainer of the project)
- elysium – an AI-powered coding assistant with inline diff application
- agent-shell – a native Emacs buffer for interacting with LLM agents (Claude Code, Gemini CLI, etc.) via the Agent Client Protocol
- avante.nvim – a Cursor-like AI coding experience inside Neovim
- codecompanion.nvim – a Copilot Chat replacement supporting multiple LLM providers
- copilot.lua – native Copilot integration for Neovim
- gp.nvim – ChatGPT-like sessions in Neovim with support for multiple providers

1. If you’re curious about my Vim adventures, I wrote about them in Learning Vim in 3 Steps. ↩︎
2. Not to mention you’ll probably have to put in several years in Emacs before you’re actually more productive than you were with your old editor/IDE of choice. ↩︎
3. At least some of the time. Admittedly I usually use Emacs in GUI mode, but I always use (Neo)vim in the terminal. ↩︎
4. Even Claude Code has vim mode. ↩︎
I'm on medical leave recovering from surgery. Before I went under, I wanted to ship one thing I'd been failing to build for months: a sponsor panel at sponsors.xeiaso.net. Previous attempts kept dying in the GraphQL swamp. This time I vibe coded it — pointed agent teams at the problem with prepared skills and let them generate the gnarly code I couldn't write myself. And it works.

Go and GraphQL are oil and water. I've held this opinion for years and nothing has changed it. The library ecosystem is a mess: shurcooL/graphql requires abusive struct tags for its reflection-based query generation, and the code generation tools produce mountains of boilerplate. All of it feels like fighting the language into doing something it actively resists. GitHub removing the GraphQL explorer made this even worse. You used to be able to poke around the schema interactively and figure out what queries you needed. Now you're reading docs and guessing. Fun.

I'd tried building this panel before, and each attempt died in that swamp. I'd get partway through wrestling the GitHub Sponsors API into Go structs, lose momentum, and shelve it. At roughly the same point each time: when the query I needed turned out to be four levels of nested connections deep and the struct tags looked like someone fell asleep on their keyboard.

Vibe coding was a hail mary. I figured if it didn't work, I was no worse off. If it did, I'd ship something before disappearing into a hospital for a week.

Vibe coding is not "type a prompt and pray." Output quality depends on the context you feed the model. Templ — the Go HTML templating library I use — barely exists in LLM training data. Ask Claude Code to write Templ components cold and it'll hallucinate syntax that looks plausible but doesn't compile. Ask me how I know.

Wait, so how do you fix that?
I wrote four agent skills (templ-syntax, templ-components, templ-htmx, and templ-http) to load into the context window. With these loaded, the model copies patterns from authoritative references instead of inventing syntax from vibes. Most of the generated Templ code compiled on the first try, which is more than I can say for my manual attempts. Think of it like giving someone a cookbook instead of asking them to invent recipes from first principles. The ingredients are the same, but the results are dramatically more consistent.

I pointed an agent team at a spec I'd written with Mimi. The spec covered the basics: OAuth login via GitHub, query the Sponsors API, render a panel showing who sponsors me and at what tier, store sponsor logos in Tigris. I'm not going to pretend I wrote the spec alone. I talked through the requirements with Mimi and iterated on it until it was clear enough for an agent team to execute. The full spec is available as a gist if you want to see what "clear enough for agents" looks like in practice.

One agent team split the spec into tasks and started building. A second reviewed output and flagged issues. Meanwhile, I provisioned OAuth credentials in the GitHub developer settings, created the Neon Postgres database, and set up the Tigris bucket for sponsor logos. Agents would hit a point where they needed a credential, I'd paste it in, and they'd continue — ops work and code generation happening in parallel.

The GraphQL code the agents wrote is ugly. Raw query strings with manual JSON parsing that would make a linting tool weep. But it works. The shurcooL approach uses Go idioms, sure, but it requires so much gymnastics to handle nested connections that the cognitive load is worse. Agent-generated code is direct: send this query string, parse this JSON, done. I'd be embarrassed to show it at a code review. I'd also be embarrassed to admit how many times I failed to ship the "clean" version. This code exists because the "proper" way kept killing the project.
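To make "raw query string, manual JSON parsing" concrete, here is a minimal sketch of what that style looks like. This is not the actual code from the repo — the function names (`parseSponsors`, `fetchSponsors`) and response structs are illustrative, though the query fields follow GitHub's public GraphQL schema:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

// The raw query: no graphql struct tags, no codegen. The nested
// connection is still nested, but as a string it reads top to bottom.
const sponsorsQuery = `
query {
  viewer {
    sponsorshipsAsMaintainer(first: 100) {
      nodes {
        sponsorEntity {
          ... on User { login }
          ... on Organization { login }
        }
        tier { monthlyPriceInDollars }
      }
    }
  }
}`

// sponsorsResponse mirrors only the slice of the payload we care about;
// everything else in the response is simply ignored by the decoder.
type sponsorsResponse struct {
	Data struct {
		Viewer struct {
			SponsorshipsAsMaintainer struct {
				Nodes []struct {
					SponsorEntity struct {
						Login string `json:"login"`
					} `json:"sponsorEntity"`
					Tier struct {
						MonthlyPriceInDollars int `json:"monthlyPriceInDollars"`
					} `json:"tier"`
				} `json:"nodes"`
			} `json:"sponsorshipsAsMaintainer"`
		} `json:"viewer"`
	} `json:"data"`
}

// parseSponsors is the "manual JSON parsing" half: decode the raw
// payload and flatten it into login -> monthly dollars.
func parseSponsors(payload []byte) (map[string]int, error) {
	var resp sponsorsResponse
	if err := json.Unmarshal(payload, &resp); err != nil {
		return nil, err
	}
	out := make(map[string]int)
	for _, n := range resp.Data.Viewer.SponsorshipsAsMaintainer.Nodes {
		out[n.SponsorEntity.Login] = n.Tier.MonthlyPriceInDollars
	}
	return out, nil
}

// fetchSponsors is the "send this query string" half: POST the query
// to GitHub's GraphQL endpoint with a bearer token.
func fetchSponsors(token string) (map[string]int, error) {
	body, err := json.Marshal(map[string]string{"query": sponsorsQuery})
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequest("POST", "https://api.github.com/graphql", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Authorization", "Bearer "+token)
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	var buf bytes.Buffer
	if _, err := buf.ReadFrom(resp.Body); err != nil {
		return nil, err
	}
	return parseSponsors(buf.Bytes())
}

func main() {
	sponsors, err := fetchSponsors(os.Getenv("GITHUB_TOKEN"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	for login, dollars := range sponsors {
		fmt.Printf("%s: $%d/month\n", login, dollars)
	}
}
```

No linter will love it, but every line is obvious, and adding a field means editing one string and one struct rather than re-deriving a reflection-friendly type hierarchy.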
I'll take ugly-and-shipped over clean-and-imaginary.

The full stack:

- Go for the backend, because that's what I know and what my site runs on
- Templ for HTML rendering, because I'm tired of 's limitations
- HTMX for interactivity, because I refuse to write a React app for something this simple
- PostgreSQL via Neon for persistence
- GitHub OAuth for authentication
- GitHub Sponsors GraphQL API for the actual sponsor data
- Tigris for sponsor logo storage — plugged it in and it Just Works™

Org sponsorships are still broken. The schema for organization sponsors differs enough from individual sponsors that it needs its own query path and auth flow. I know what the fix looks like, but it requires reaching out to other devs who've cracked GitHub's org-level sponsor queries. The code isn't my usual style either — JSON parsing that makes me wince, variable names that are functional but uninspired, missing error context in a few places. I'll rewrite chunks of this after I've recovered.

The panel exists now, though. It renders real data. People can OAuth in and see their sponsorship status. Before this attempt, it was vaporware. I've been telling people "just ship it" for years. Took vibe coding to make me actually do it myself.

I wouldn't vibe code security-critical systems or anything I need to audit line-by-line. But this project had stopped me cold on every attempt, and vibe coding got it across the line in a weekend. Skills made the difference here. Loading those four documents into the context window turned Claude Code from "plausible but broken Templ" into "working code on the first compile." I suspect that gap will only matter more as people try to use AI with libraries that aren't well-represented in training data.

This sponsor panel probably won't look anything like it does today in six months. I'll rewrite the GraphQL layer once I find a pattern that doesn't make me cringe. Org sponsorships still need work. HTMX might get replaced. But it exists, and before my surgery, shipping mattered more than polish.

The sponsor panel is at sponsors.xeiaso.net. The skills are in my site's repo under .

The four skills:

- templ-syntax: Templ's actual syntax, with enough detail that the model can look up expressions, conditionals, and loops instead of guessing.
- templ-components: Reusable component patterns — props, children, composition. Obvious if you've used Templ, impossible to infer from sparse training data.
- templ-htmx: The gotchas when combining Templ with HTMX. Attribute rendering and event handling trip up humans and models alike.
- templ-http: Wiring Templ into handlers properly — routes, data passing, request lifecycle.
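A skill is, at heart, just a reference document the agent loads into context. I haven't seen the actual files in the repo, but a skill like templ-syntax plausibly looks something like this — the frontmatter fields and layout here are my assumption, not the real thing:

```markdown
---
name: templ-syntax
description: Reference for Templ's template syntax. Load before writing or editing .templ files.
---

# Templ syntax quick reference

Components are Go functions declared with the `templ` keyword:

    templ hello(name string) {
        <div>Hello, { name }!</div>
    }

- `{ expr }` renders a Go expression, HTML-escaped.
- `if`/`else` and `for` blocks use plain Go syntax inside markup.
- Call another component with `@hello("world")`.
```

The point is less the format and more the content: small, authoritative, copy-from-this examples that crowd out whatever the model half-remembers from training.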
> Those who cannot remember the past are condemned to repeat it.

A sentence I never really liked, and what is happening with AI and software project reimplementations shows all the limits of such an idea. Many people are protesting the fairness of rewriting existing projects using AI. But a good portion of such people were already in the field during the 90s: they followed the final part (started in the ‘80s) of the deeds of Richard Stallman, when he and his followers were reimplementing the UNIX userspace for the GNU project. The same people that are now against AI rewrites cheered, back then, for the GNU project’s actions (rightly so, from my point of view – I cheered too).

Stallman is not just a programming genius; he is also the kind of person that has a broad vision across disciplines, and among other things he was well versed in the nuances of copyright. He asked the other programmers to reimplement the UNIX userspace in a specific way: a way that would make each tool unique and recognizable compared to the original. Either faster, or more feature rich, or scriptable – qualities that would serve two different goals: to make GNU Hurd better and, at the same time, to provide a protective layer against litigation. If somebody were to claim that the GNU implementations were not limited to copying ideas and behaviors (which is legal) but also copied “protected expressions” (that is, the source code verbatim), the added features and the deliberate push towards certain design directions would provide a counterargument that judges could understand.

He also asked them to always reimplement the behavior itself while avoiding looking at the actual implementation, working from specifications and the real-world mechanics of the tool, as tested manually by executing it. Still, it is fair to guess that many of the people working on the GNU project were likely exposed to, or had access to, the UNIX source code.
When Linus reimplemented UNIX by writing the Linux kernel, the situation was somewhat more complicated, with an additional layer of indirection. He was exposed to UNIX just as a user and apparently had no access to the UNIX source code. On the other hand, he was massively exposed to the Minix source code (an implementation of UNIX, but using a microkernel), and to the book describing that implementation as well. But, in turn, when Tanenbaum wrote Minix, he did so after being massively exposed to the UNIX source code. So SCO, during the IBM litigation, had a hard time trying to claim that Linux contained any protected expressions.

Yet, when Linus used Minix as an inspiration, not only was he very familiar with something (Minix) implemented with knowledge of the UNIX code, but (more interestingly) the license of Minix was restrictive – it became open source only in 2000. Still, even in such a setup, Tanenbaum protested about the architecture (in the famous exchange), not about copyright infringement. So we can reasonably assume Tanenbaum considered rewrites fair, even though Linus was exposed to Minix (and Tanenbaum himself had followed a similar process when writing Minix).

# What the copyright law really says

To put all this in the right context, let’s zoom in on copyright’s actual perimeter: the law says you must not copy “protected expressions”. In the case of software, a protected expression is the code as it is, with the same structure, variables, functions, and exact mechanics of how specific things are done – unless they are known algorithms (a standard quicksort or a binary search can be implemented in a very similar way without being a violation). The problem is when the business logic of the program matches the original implementation perfectly, almost line by line.
Otherwise, the copy is lawful and need not obey the original license, as long as it is pretty clear that the code is doing something similar but is not cut & pasted, mechanically translated to some other language, or aesthetically modified just to look a bit different (this is exactly the kind of bad-faith maneuver a court will try to identify). I have the feeling that every competent programmer reading this post knows perfectly well what a *reimplementation* is and how it looks: there will be inevitable similarities, but the code will be clearly not copied.

If this is the legal setup, why do people care about clean room implementations? Well, the reality is that a clean room is just an optimization in case of litigation – it makes it simpler to win in court. Being exposed to the original source code of some program, if the exposure is only used to gain knowledge about its ideas and behavior, is fine. Besides, we are all happy to have Linux today, and the GNU userspace, together with many other open source projects that followed a similar path. I believe rules must be applied both when we agree with their ends and when we don’t.

# AI enters the scene

So, reimplementations were always possible. What changes now is that they are brutally faster and cheaper to accomplish. In the past, you had to hire developers, or be enthusiastic and passionate enough to create a reimplementation yourself, whether out of business aspirations or because you wanted to share it with the world at large.
Now, you can start a coding agent and proceed in two ways. The first: turn the implementation into a specification, and then, in a new session, ask the agent to reimplement it, possibly forcing specific qualities – make it faster, or make the implementation incredibly easy to follow and understand (a good trick to end up with an implementation very far from the original, given that a lot of code seems to be designed for the opposite goal), or make it more modular, or resolve some fundamental limitation of the original implementation. All these hints make it much simpler to significantly diverge from the original design. LLMs, used in this way, don’t produce copies of what they saw in the past; and at the end you can still use an agent to carefully verify whether there is any violation and, if so, replace the occurrences with novel code.

The second approach is apparently less rigorous, but potentially very good in the real world: provide the source code itself, and ask the agent to reimplement it in a completely novel way, using the source code both as the specification and in order to drive the implementation as far as possible away from the code itself. Frontier LLMs are very capable; they can use something even as a reference of what to explicitly avoid copying, and carefully try different implementation approaches.

If you ever attempted something like the above, you know how the “uncompressed copy” really is an illusion: agents write software in a very “organic” way, committing errors, changing design many times because of limitations that become clear only later, starting with something small and adding features progressively. And often, during this already chaotic process, we massively steer their work with our prompts, hints, and wishes. Many ideas are consolatory precisely because they are false: the “uncompressed copy” is one of those. But still, the process of rewriting is now so simple that many people are disturbed by it.
There is a more fundamental truth here: the nature of software changed, and reimplementations under different licenses are just one instance of how that nature was transformed forever. Instead of combating each manifestation of automatic programming, I believe it is better to build a new mental model, and adapt. # Beyond the law I believe that organized societies prosper if laws are followed, yet I do not blindly accept a rule just because it exists, and I question things based on my ethics: this is what allows individuals, societies, and the law itself to evolve. We must ask ourselves: is copyright law ethically correct? Does the speed-up AI brings to an existing process fundamentally change the process itself? One thing that allowed software to evolve much faster than most other human fields is that the discipline is less anchored to patents and protections (and this, in turn, is likely due to the sharing culture around software). If copyright law were more stringent, we would likely not have what we have today. Is the protection of the interests of single individuals and companies more important than the general evolution of human culture? I don't think so; and, besides, copyright law is a common playfield: the rules are the same for all. Moreover, it is not a stretch to say that despite this more relaxed approach, software remains one of the fields where it is simplest to make money; it does not look like the business side was hurt by the ability to reimplement things. Probably the contrary is true: think of how many businesses were made possible by an open source software stack (not that OSS is mostly made of copies, but it definitely inherited many ideas from past systems). I believe that, even with AI, all those fundamental tensions remain valid.
Reimplementations are cheap to make, but this is the new playfield for all of us, and just reimplementing things in an automated fashion, without putting something novel inside, in terms of ideas, engineering, or functionality, will have modest value in the long run. What will matter is exactly how you create something: is it well designed, interesting to use, supported, somewhat novel, fast, documented, and useful? Moreover, this time the imbalance of force points in the right direction: big corporations have always had the ability to spend obscene amounts of money to copy systems, provide them in a way that is irresistible to users (free for many years, for instance, only to later switch model), and position themselves as leaders of ideas they didn't really invent. Now small groups of individuals can do the same to big companies' software systems: they can compete on ideas, now that a synthetic workforce is cheap for many. # We stand on the shoulders of giants There is another fundamental idea that we all need to internalize. Software is created and evolved as an incremental, continuous process, where each new innovation builds on what somebody else invented before us. We are all very quick to build something and believe we "own" it, which is correct, if we stop at the exact code we wrote. But we build on top of work and ideas already done, and given that the current development of IT is due to the fundamental paradigm that ideas and behaviors are not covered by copyright, we need to accept that reimplementation is a fair process. If a reimplementation doesn't contain any novelty, maybe it is a lazy effort? That's possible; yet it is fair, and nobody is violating anything. Still, if we want to be good citizens of the ecosystem, we should try, when replicating some work, to also evolve it, to invent something new: to specialize the implementation for a lower memory footprint, or make it more useful in certain contexts, or less buggy: the Stallman way.
In the case of AI, we are making, almost collectively, the error of deciding whether a technology is good or bad for software and humanity in isolation. AI can unlock a lot of good things in the field of open source software. Many passionate individuals write open source because they hate their day job and want to make something they love, or because they want to be part of something bigger than economic interests. A lot of open source software is written in free time, or with severe constraints on the number of people allocated to the project, or, even worse, with limiting conditions imposed by the companies paying for the development. Now that code matters a little less every day compared to ideas, open source can be strongly accelerated by AI. The four hours allocated over the weekend will bear 10x the fruit, in the right hands (AI coding is not for everybody, just as good coding and design are not for everybody). Linux device drivers can be implemented by automatically disassembling some proprietary blob, for instance. Or what could be a barely maintained library can turn into a project that can be handled well in a more reasonable amount of time. Before AI, we witnessed the commodification of software: less quality, focus only on the money, no care whatsoever for minimalism and respect for resources: just piles of mostly broken bloat. More hardware power, more bloat, less care. It was already going very badly. It is not obvious, nor automatic, that AI will make it worse, and the ability to reimplement other software systems is part of a bigger picture that may restore some interest and sanity in our field.
As you no doubt know by now, we Emacs users have the Teenage Mutant Ninja Power. Expert usage of a Heroes in a Hard Shell is no exception. Pizza Time! All silliness aside, the plethora of options available to the Emacs user for executing shell commands in "terminals", real or fake, can be overwhelming. There are several built-in options, and third-party packages expand the list even further. The most interesting shell by far is the one that's not a shell but a Lisp REPL that looks like a shell: Eshell. That's the one I'd like to focus on now. But first: why would you want to pull your shell work into Emacs? The more you get used to it, the easier it becomes to answer: because all your favourite text selection and manipulation shortcuts remain available to you. Remember how stupendously difficult it is just to shift-select and yank/copy/whatever-you-want-to-call-it text in your average terminal emulator? That's why. In Emacs, I can move the point around in that shell buffer however I want. I can search inside that buffer, since everything is just text, however I want. Even the easiest solution, firing off the vanilla built-in shell, which in my case runs Zsh, will net you most of these benefits. And then there's Eshell: the Lisp-powered shell that's not really a shell but does a really good job of pretending it is. With Eshell you can interact with everything else you've got up and running inside Emacs. Want to dump a command's output into a buffer at point, inspect what's hooked into LSP mode, or define your own commands and call them straight from the prompt? Eshell makes it possible to mix Elisp and your typical Bash-like syntax. The only problem is that Eshell isn't a true terminal emulator and doesn't support full-screen terminal programs and fancy TTY stuff. That's where Eat: Emulate A Terminal comes in. The Eat minor mode is compatible with Eshell: as soon as you execute a command-line program, it takes over.
There are four input modes available for sending text to the terminal in case your Emacs shortcuts clash with those of the program. It solves all my problems: long-running processes work, interactive programs like gdu work, and so on. Yet the default Eshell mode is a bit bare-bones, so obviously I pimped the hell out of it. Here's a short summary of what my Bakemacs shelling.el config alters:

- Customize the startup behaviour.
- Integrate Consult into history search: this replaces the default "i-search backward" and is a gigantic improvement, as Consult lets me quickly and visually finetune my search through all previous commands. These are also saved on exit (increase the history size while you're at it).
- Improve interrupting to immediately kill a process or deactivate the mark.
- The big one: replace completion with a custom completion-at-point system (see below).
- When typing a path, backspace kills the entire last directory instead of just a single character. This speeds up my path commands by a lot.
- Bind a shortcut to a convenient function that sends input to Eshell & executes it.
- Change the prompt into a simple one, to more easily copy-paste things in and out of that buffer. This also integrates with prompt navigation, meaning I can very easily jump back to a previous command and its output!
- Move most of the prompt info to the modeline, such as the working directory and optional Git information.
- Make listings sort directories first, to align with my Dired setup; the usual listing switch doesn't work since the listing command is an Elisp function.
- Bind a shortcut to a convenient pop-to-eshell-buffer & new-eshell-tab function that takes the current perspective into account.
- Add font-lock so output comes with syntax highlighting.
- Create a command that changes into the directory backing the current buffer's contents.
- Create a command that stays on the current Tramp host while jumping to an absolute path; a plain absolute path always navigates to your local HDD root, which is surprising if you're used to regular SSH sessions instead of Emacs's Tramp.
- Give Eshell dedicated space at the top as a side window, quick to summon and dismiss.
- Customise more shortcuts to help with navigation: UP and DOWN just move the point, even at the last line (which never works in a conventional terminal), while separate bindings cycle through command history.
- Customise more aliases.

Here's a short video demonstrating some of these features. The reason for ditching the built-in completion is simple: it's extremely slow over Tramp. Just pressing TAB while working on a remote machine takes six seconds to load a simple directory structure of a few files; what's up with that? I've been profiling my Tramp connections, and connecting to the local NAS over SSH is very slow, apparently because the completion can't do a single listing and process that info into an autocomplete pop-up. Yet I wanted to keep the Corfu/Cape behaviour that I'm used to in other buffers, so I created my own completion-at-point function that dispatches smartly to other internals. If the point is at the command position and…

- it's a path: complete it directly.
- it's a local dir cmd: wrap the file source to filter on dirs only. Cape is dumb and by default also returns files.
- it's an Elisp function: complete it with Elisp completion.
- else it's a shell command. These are now cached by expanding all folders on the PATH with a fast Perl command.

If the point is at an argument and…

- it's a variable: create a super CAPF to list both Elisp and shell vars (also cached)!
- it's a buffer or process: fine, built-in completion, can you handle this? Are you sure?
- it's a remote dir cmd: complete it remotely.
- it's (still) a local dir cmd: see above.
- In all other cases, it's probably a file argument: fall back to plain file completion.

I'm sure there are holes in this logic, but so far it's been working quite well for me. Cape is very fast, as is my own shell command/variable cache. The added bonus is having access to nerd icons, which I use to distinguish Elisp vars from external shell vars when completing, as there are only a handful of shell variables and a huge number of Elisp ones. I also learned the hard way that you should cache stuff listed in your modeline, as it gets continuously redrawn when scrolling through your buffer. The details can be found in my shelling.el config; just to be on the safe side, I disabled the Git/project-specific stuff on remote connections to avoid more Tramp snailness. The last cool addition: make use of Emacs's new Completion Preview mode, but only for recent commands. That means I temporarily remap the completion source as soon as TAB is pressed. Otherwise, the preview might also show things that I don't really want. The video showcases this as well. Happy (e)shelling!

Related topics: emacs. By Wouter Groeneveld on 8 March 2026. Reply via email.
Introduction Before deep learning was a thing, if you had \(N\) vectors and you wanted to match the closest one to some noisy version, you would use kNN (\(k\) nearest neighbours). This implies comparing the input to all candidates, and usually weighting the final answer by the distance \(d_i\) to the top \(k\) matches. If \(k=N\) and the weighting has the form \(\exp(-d_i / \rho)\), this becomes the Nadaraya–Watson kernel regressor, which turns out to have the same form as the venerable "attention" mechanism from deep learning.
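To make the correspondence concrete, here is a small sketch in plain JavaScript (scalar keys and values for simplicity; the function name is mine): the Nadaraya–Watson prediction is a softmax over negative scaled distances, which is exactly the shape of an attention layer's weighted average.

```javascript
// Nadaraya–Watson kernel regression with an exponential kernel:
// prediction = sum_i softmax_i(-d_i / rho) * v_i, where d_i = |query - key_i|.
// With k = N neighbours this is the same form as attention over N stored pairs.
function nadarayaWatson(query, keys, values, rho) {
  // Unnormalized kernel weights exp(-d_i / rho).
  const w = keys.map((k) => Math.exp(-Math.abs(query - k) / rho));
  // Softmax denominator.
  const z = w.reduce((a, b) => a + b, 0);
  // Weighted average of the stored values.
  return w.reduce((acc, wi, i) => acc + (wi / z) * values[i], 0);
}
```

With a tiny `rho` the weights collapse onto the nearest key and this degenerates into plain 1-NN; with a large `rho` it averages all values, analogous to raising the temperature in attention.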
The HTML Sanitizer API allows multiple ways to customize the default allow list and this blog post aims to describe a few variations and tricks we came up with while writing the specification. Examples in this post will use configuration dictionaries. These dictionaries might be used …
When Eelco Dolstra, father of Nix, descended from the mountain tops and enlightened us all, one of the main commandments for Nix was to eschew all uses of the Filesystem Hierarchy Standard (FHS). The FHS is the "find libraries and files by convention" dogma Nix abandons in the pursuit of purity. What if I told you that was a lie? 😑 Nix was explicitly designed to eliminate standard FHS paths to guarantee reproducibility. However, graphics drivers represent a hard boundary between user-space and kernel-space. The user-space driver library must match the host OS's kernel module and the physical GPU. Nearly all derivations do not bundle graphics drivers because they have no way of predicting the hardware or host kernel the binary will run on. What about NixOS? Surely, we know what kernel and drivers we have there!? 🤔 Well, if we modified every derivation to include the correct driver, it would cause massive rebuilds for every user and make the NixOS cache effectively useless. To solve this, NixOS & Home Manager introduce an intentional impurity: a global path where derivations expect to find the graphics drivers. We've just re-introduced a convention path à la FHS. 🫠 Unfortunately, that leaves users who run Nix on other Linux distributions in a bad state, documented in issue #9415, which has been open since 2015. If you try to install and run any Nix application that requires graphics, you'll be hit with exactly the kind of error Nix was designed to thwart. There are a couple of workarounds for those of us who use Nix on alternate distributions: nixGL, a runtime script that injects the library, or manually creating your own driver path and symlinking it to the drivers from the host. For those of us who cling to the beautiful purity of Nix, however, it feels like a sad but ultimately necessary trade-off. Thou shall not use FHS, unless you really need to.
I work full time, while also studying part time, volunteering, and blogging here, together with fitness, other hobbies and keeping up with things, feeling available to people most of the time. What helps me do that, especially when I am chronically ill? Obviously, the fewer sick days and symptoms you have, the more energy you have, the faster you are and the more time you'll have. You can't discipline yourself out of having an uncontrolled illness. It's a bit unpredictable how I'll feel or when the next flare-up comes, so when I feel good, I lock in and try to make the most of it, because I can't count on tomorrow. That will make up for the days when I can do less or nothing at all. It can be household-related stuff, studying or exercise. It's difficult initially: you want to enjoy yourself and live your life during good days instead of doing The Things That Need Doing. One too many experiences where you banked on "just doing it next week" and then couldn't do anything will make you take this more seriously, though. For example, I studied for 12 hours on Sunday and 8 hours on Monday, and couldn't do much Tuesday through Friday because work and other things drained me too much. I also dedicate a day where I feel especially inclined to do something to doing the most of it, so it is done for the rest of the week or the things are scheduled; good examples are blog posts, cleaning, or my volunteer work (doing multiple case translations back to back instead of spread out throughout the week). Sometimes I get up and notice today will not be good. I could sit there forcing myself to do the thing I thought I would do, or that I should do, and struggle along for hours, making myself worse and getting worse results, then mope around doing nothing while wishing I could do that one thing. Instead, I find something that needs doing that is manageable in that state, even if it is not the most urgent and very low on the priority list.
You need failure points for that, something like "If it seems hopeless after doing it for 10 minutes, allow yourself to switch." At the same time, I also allow myself to hold off on starting a task sometimes (occasionally even boring myself on purpose) and end up coming around to it, suddenly feeling ready for it 2 hours later. Yesterday, I was supposed to study hard for my upcoming exam, but I had a really bad headache all day that just wouldn't go away. Of course I was mad I couldn't study, but after it did not go away or change, I just did other, lower-priority stuff that was easy and needed doing, like re-organizing my Obsidian, entering more passwords into my password manager that weren't in there yet, and transferring stuff from my Discord server to my Obsidian. It didn't help for the exam, of course, but it was on my list and now I don't need to do it some time later. I'll still call that a win and the best use of my time, compared to the alternatives. In my experience, it all balances out: if I wake up on a day I thought I'd study and do more of this other thing instead, that frees up more studying time in the future. When I do struggle to set priorities because everything feels equally urgent and doable, or I am afraid I'm not giving enough attention to something, I assign weekly days or goals if the type of to-do permits it. For example, for months I struggled to find time to do a case for my volunteer work, but since deciding Fridays to Sundays are for doing at least one a week, I've been able to do that consistently. That lessens the decision fatigue, and by offering myself three days, I give myself more flexibility in case anything comes up or I feel sick. I enjoy that it gives me a break from thinking about it for most days of the week; giving myself a chance of missing it also makes it easier to do it. As for fixed schedules: I can't adhere to those at all, they are too rigid.
If I say I'll start something at 8am and then wake up later that day for other reasons, the entire day plan is messed up! I can't deal with that. So, no fixed time blocks and slots, and no Pomodoro. I hate that stuff. I know the things I have to do, and they are arranged like a decision tree in my mind. Can I do the top thing? Then do it until I can't anymore (done, lacking focus, sick of it). Then go through the list until I land at the next thing that fits my mood and energy levels. I have trouble getting myself to start something based on an arbitrary start time or cutting off activities prematurely, so it doesn't work for me to say "I'll work on this from 10am to 3pm." I'll work on it as long or as short as it happens, starting and switching when I am ready. I'd also rather work on the thing I end up randomly feeling drawn to that day instead of what past-me thought I should do. I flow between tasks, so a break from one thing can be work on another (taking a break from studying to do volunteer work, write a blog post, answer emails etc.). I also cannot keep up streaks most of the time, because I need breaks and have worse days where I shouldn't push through just for an arbitrary number, especially when it's about fitness stuff. It's useless to try and emulate the lifestyle of an internet personality and pretend your best time to work on something is 6am when it isn't for you personally. The best time for me to work on medium to easy stuff is from 10am to 4pm, and after 6pm I work best on harder, more focus-heavy tasks. That's the opposite of the advice usually given to people. I just like it when the world winds down and it is dark outside. There's no use in trying to change myself or working against my internal clock, and I also don't want to waste time perfecting some rigid morning routine or work system over just... doing the work.
I notice some people just do one thing after the other when they could combine them more sensibly. The easiest example: don't stare at the pot until it boils (or the pan until it's hot) and then go cut the stuff that goes into it; do the cutting while the pot or pan heats up, so everything is ready at the same time. This is likely something you already do, but identify other areas in your life where you are "waiting for the pot to boil". You can do other things while your skincare or your conditioner sets, you can prepare something else while your tea steeps, the bathtub fills, the paint dries, the compiler runs or the software downloads, and so on. You can listen to lectures while doing chores. These small things accumulate. While you've got the stew on the stove, you might feel paralyzed because "food isn't ready yet, but until it is, I can't really start or continue doing anything else, because then I am interrupted by checking on the food". In that time, you could have already done some dishes, cleaned the kitchen, tidied a corner, or taken out the trash so you don't have to do it later. That way, nothing accumulates to the point where it takes an hour to clean and becomes a whole thing that eats into your daily time. Invest the 5-10 minutes here and there and chain things together sensibly so it never gets that far. My wife struggles with this at times, so she asks me how to best time and order the things she needs to do. Sometimes I slide back into that mindset, but mostly I just accept now that everything has its purpose; it's either work, rest, play, or socializing, and all are equally important. I see each one as a prerequisite for the others. That helps me not beat myself up internally over things, which would only cause pressure, anxiety, and guilt. If I chat with some people while I should be doing something else, so what? In 30 minutes I'll get back to it, and I got my fill of some interaction.
If I exercise instead of sitting down to study, that's great; it means I'm counteracting all the sitting at the desk that studying often necessitates. If I write a blog post instead of studying, that gets it out of my head and done, so I can fully focus on studying later. I journal, draw, and watch YouTube videos? Great relaxation and play; I need this for the other days where I study most of the day. I recently started tracking activities with a timer so my worries can no longer lie to me about spending too little time on some things. It helped me commit even more to the tasks, because I wanted to press play again as soon as possible and hesitated to pause. It also lets me spot time wasters and pockets of time that could maybe be used better. But also: time isn't everything, even when using the good days to the fullest (Point 2). It's just as good to invest time consistently in small ways, and it's better to work smart, not hard. Earlier in this post, I mentioned 12 hours and 8 hours of studying; that much isn't usually necessary for me. I only do it now because I have 4 exams this month and have to make up for the fact that I couldn't study much at all from November to January, due to catching a cold, my old medication no longer working and causing a flare-up of my autoimmune illnesses, switching to new medication, my birthday, Christmas and NYE, and feeling mentally unwell at the start of the year. It happens, and this is how I have to manage it, but it isn't the default. If I can't get myself to do something because of fear, stress and a feeling of powerlessness, I break it down into smaller subtasks and tell myself I only have to do it for 5 minutes. That gets the ball rolling. If that doesn't help because it's more about mental health and psychological fatigue, I focus first on smaller and easy tasks like getting dressed, making food/tea, watering plants, tidying up a tiny area, some self care etc.
to feel capable and productive again, then I try to tackle the bigger task. As a general word on time management: if you look at super busy people around you and wonder how they manage it, it likely also has to do with the following:

- They have a partner and family stepping up to take care of some things.
- They have no or few familial obligations (they don't have to visit grandparents all the time, or take care of the elderly and disabled in their family).
- They have no children; or they have a nanny or a partner doing most of that work.
- They are rarely home because the thing demands a lot of travel or outside time. (The less you are at home, the less dirty it gets. They likely stay more in hotels, or eat at work/the cafeteria, and spend their time elsewhere, where they don't generate so much general dirt and dishes; it also warrants a lot fewer trips to the grocery store. What doesn't change is laundry, but thanks to the washing machine (and potentially the dryer), they can just let that run while away.)
- Instead of having to make time to meet friends and align schedules, they get their social fix from their work (coworkers, conferences, events, panels etc.).
- They're high up enough that they can delegate some work tasks to others.
- It has become routine to them, so they're quicker at it, almost on autopilot.
- They have no commute, or a severely reduced one compared to you; or they can use the commute for something else because they don't drive (passenger seat, train, subway...).
- They either don't have social media or don't feel sucked in by it, spending little time on it.

In my case, I don't have to work on weekends, I work from home 3 days a week, I don't have children, my real-life friends live far away so we can't meet up often, I have a wife who helps with the household, I have no social media, and I have no familial obligations. Work is also slow for me most of the time, with 5 or more hours of having nothing left to do.

Reply via email. Published 07 Mar, 2026.
All good things must come to an end, and today is that day for one of my projects, the 512kb Club. I started the 512kb Club back in November 2020, so it's been around 5.5 years. It's become a drain and I'm ready to move on. As of today I won't be accepting any new submissions to the project. At the time of writing there are 25 PRs open for new submissions; I'll work through them, then disable the ability to submit pull requests. Over the years there have been nearly 2,000 pull requests, and there are currently around 950 sites listed on the 512kb Club. Pretty cool, but it's a lot of work to manage - there's reviewing new submissions (a steady stream of pull requests), cleaning up old sites, updating sites, etc. It's more than I have time to do. I'm also trying to focus my time on other projects, like Pure Commons. It's sad to see this kind of project fall by the wayside, but life moves on. Having said that, if you think you want to take over the 512kb Club, let's have a chat. There are some prerequisites though:

- We need to know each other. I'm not going to hand the project over to someone I don't know, sorry.
- You probably need to be familiar with Jekyll and Git.

I'm probably going to get a lot of emails with offers to help (which is fantastic), but if we've never interacted before, I won't be moving forward with your kind offer. After reading the above, if we know each other and you're still interested, use the email button below and we can have a chat about you potentially taking over. By taking over, I will expect you to:

- Take ownership of the domain, so you will be financially responsible for renewals.
- Take ownership of the GitHub repo, so you will be responsible for all pull requests, issues and anything else Git related.
- Be responsible for all hosting and maintenance of the project - the site is currently hosted on my personal Vercel account, which I will be deleting after handing off.
- Be a good custodian of the 512kb Club and continue to maintain it in its current form.

If you're just looking to take over and use it as a means to slap ads on it and live off the land, I'd rather it go to landfill, and will just take the site down. That's why I only want someone I know and trust to take it over. I think I've made my point now. 🙃 If there's no-one prepared to take over, I plan to do one final export of the source from Jekyll, then upload that to my web server, where it will live until I decide to no longer renew the domain. I'll also update the site with a message stating that the project has been sunset and there will be no more submissions. If you don't wanna see that happen, please get in touch. Thanks for reading this post via RSS. RSS is ace, and so are you. ❤️ You can reply to this post by email, or leave a comment.
Parse Server is one of those projects that sits quietly beneath a lot of production infrastructure. It powers the backend of a meaningful number of mobile and web applications, particularly those that started on Parse's original hosted platform before it shut down in 2017 and needed somewhere to migrate. The project currently has over 21,000 stars on GitHub. I recently spent some time auditing its codebase and found four security vulnerabilities. Three of them share a common root: a fundamental gap between what a credential is documented to do and what the server actually enforces. The fourth is an independent issue in the social authentication adapters that is arguably more severe: a JWT validation bypass that allows an attacker to authenticate as any user on a target server using a token issued for an entirely different application. The Parse Server team was responsive throughout and coordinated fixes promptly. All four issues have been patched. Parse Server is an open-source Node.js backend framework that provides a complete application backend out of the box: a database abstraction layer (typically over MongoDB or PostgreSQL), a REST and GraphQL API, user authentication, file storage, push notifications, Cloud Code for serverless functions, and a real-time event system. It is primarily used as the backend for mobile applications and is the open-source successor to Parse's original hosted backend-as-a-service platform. Parse Server authenticates API requests using one of several key types. The master key grants full administrative access to all data, bypassing all object-level and class-level permission checks. It is intended for trusted server-side operations only. Parse Server also exposes a read-only master key option. Per its documentation, this key grants master-level read access: it can query any data, bypass ACLs for reading, and perform administrative reads, but it is explicitly intended to deny all write operations.
It is the kind of credential you might hand to an analytics service, a monitoring agent, or a read-only admin dashboard: enough power to see everything, but no ability to change anything. That contract is what three of these four vulnerabilities break. The implementation checks whether a request carries master-level credentials by testing a single master flag on the auth object. The problem is that read-only master key authentication sets both the master flag and a read-only flag, and a large number of route handlers only check the former. The read-only flag is set but never consulted, which means the read-only restriction exists in concept but not in enforcement. Cloud Hooks are server-side webhooks that fire when specific Parse Server events occur: object creation, deletion, user signup, and so on. Cloud Jobs are scheduled or manually triggered background tasks that can execute arbitrary Cloud Code functions. Both are powerful primitives: Cloud Hooks can exfiltrate any data passing through the server's event stream, and Cloud Jobs can execute arbitrary logic on demand. The routes that manage Cloud Hooks and Cloud Jobs (creating new hooks, modifying existing ones, deleting them, and triggering job execution) are all guarded by master key access checks. Those checks verify only that the requesting credential has the master flag set. Because the read-only master key satisfies that condition, a caller holding only the read-only credential can fully manage the Cloud Hook lifecycle and trigger Cloud Jobs at will. The practical impact is data exfiltration via Cloud Hook. An attacker who knows the read-only master key can register a new Cloud Hook pointing to an external endpoint they control, then watch as every matching Parse Server event (user signups, object writes, session creation) is delivered to them in real time. The read-only key, intended to allow passive observation, can be turned into an active wiretap on the entire application's event stream. The fix adds explicit read-only rejection checks to the Cloud Hook and Cloud Job handlers.
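The flag confusion can be sketched in a few lines of JavaScript. This is a hypothetical model, not Parse Server's actual code; the object shapes and names are mine, standing in for the master and read-only markers the post describes.

```javascript
// Hypothetical model of key-based auth where the read-only master key
// yields an auth object with BOTH the master and read-only flags set.
function authFromKey(key, config) {
  if (key === config.masterKey) return { isMaster: true, isReadOnly: false };
  if (key === config.readOnlyMasterKey) return { isMaster: true, isReadOnly: true };
  return { isMaster: false, isReadOnly: false };
}

// Vulnerable guard: consults only the master flag, so the read-only
// credential sails through write-capable routes (hooks, jobs, files).
const allowWriteVulnerable = (auth) => auth.isMaster === true;

// Fixed guard: master-level AND not read-only.
const allowWriteFixed = (auth) => auth.isMaster === true && auth.isReadOnly !== true;
```

The fix is exactly the extra conjunct: every write-capable route must reject when the read-only marker is present, rather than merely requiring master-level access.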
Parse Server's Files API exposes endpoints for uploading and deleting files. Both routes are guarded by a middleware that checks whether the incoming request has master-level credentials. Like the Cloud Hooks routes, this check only tests isMaster and never consults isReadOnly. The root cause traces through three locations in the codebase. Where the read-only auth object is constructed (lines 267–278), it is built with isMaster: true. In the files router (lines 107–113), the delete route applies the master-key middleware as its only guard. At lines 586–602 of the same file, the delete handler calls through to the files controller without any additional read-only check in the call chain. The consequence is that a caller with only the readOnlyMasterKey can upload arbitrary files to the server's storage backend or permanently delete any existing file by name. The upload vector is primarily an integrity concern: poisoning stored assets. The deletion vector is an availability concern: an attacker can destroy application data (user avatars, documents, media) that may not have backups, and depending on how the application is structured, deletion of certain files could cause cascading application failures. The fix adds isReadOnly rejection to both the file upload and file delete handlers. This is the most impactful of the three issues. The /loginAs endpoint is a privileged administrative route intended for master-key workflows: it accepts a user ID parameter and returns a valid, usable session token for that user. The design intent is to allow administrators to impersonate users for debugging or support purposes. It is the digital equivalent of a master key that can open any door. The route's handler lives in the users router (lines 339–345) and is mounted at lines 706–708. The guard condition rejects requests where isMaster is false. Because the readOnlyMasterKey produces an auth object where isMaster is true — and because there is no isReadOnly check anywhere in the handler or its middleware chain — the read-only credential passes the gate and the endpoint returns a fully usable session token for any user ID provided. That session token is not a read-only token.
It is a normal user session token, indistinguishable from one obtained by logging in with a password. It grants full read and write access to everything that user's ACL and role memberships permit. An attacker with the readOnlyMasterKey and knowledge of any user's object ID can silently mint a session as that user and then act as them with complete write access — modifying their data, making purchases, changing their email address, deleting their account, or doing anything else the application allows its users to do. There is no workaround other than removing the readOnlyMasterKey from the deployment or upgrading. The fix is a single guard added to the /loginAs handler that rejects the request when isReadOnly is true. This vulnerability is independent of the readOnlyMasterKey theme and is the most severe of the four. It sits in Parse Server's social authentication layer — specifically in the adapters that validate identity tokens for Sign in with Google, Sign in with Apple, and Facebook Login. When a user authenticates via one of these providers, the client receives a JSON Web Token signed by the provider. Parse Server's authentication adapters are supposed to verify this token: they check the signature, the expiry, and, critically, the audience claim — the field that specifies which application the token was issued for. Audience validation is what prevents a token issued for one application from being used to authenticate against a different application. Without it, a validly signed token from any Google, Apple, or Facebook application in the world can be used to authenticate against any Parse Server that trusts the same provider. The vulnerability arises from how the adapters handle missing configuration. For the Google and Apple adapters, the audience is passed to JWT verification via the clientId configuration option. When clientId is not set, the adapters do not reject the configuration as incomplete — they silently skip audience validation entirely. The JWT is verified for signature and expiry only, and any valid Google or Apple token from any app will be accepted.
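A minimal sketch of the missing step (illustrative code, not the adapters' actual implementation; signature and expiry verification are assumed to happen elsewhere). The fix amounts to failing closed when no expected audience is configured:

```javascript
// Illustrative audience check, not Parse Server's actual adapter code.
// Signature and expiry are assumed verified already; this shows only the
// audience step that the vulnerable adapters skipped.
function checkAudience(payload, expectedAudience) {
  if (!expectedAudience) {
    // Vulnerable behavior: silently skip the check when unconfigured.
    // Fixed behavior: fail closed and refuse to authenticate.
    throw new Error('expected audience is not configured');
  }
  // Per the JWT spec, the aud claim may be a string or an array of strings.
  const aud = Array.isArray(payload.aud) ? payload.aud : [payload.aud];
  return aud.includes(expectedAudience);
}
```

With this step in place, a validly signed token minted for a different application carries a different aud value and is rejected, which is exactly the property the bypass defeated.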
For Facebook Limited Login, the situation is worse: the vulnerability exists regardless of configuration. The Facebook adapter validates the configured app IDs as the expected audience for the Standard Login (Graph API) flow. However, the Limited Login path — which uses JWTs rather than Graph API tokens — never passes them to JWT verification at all. The code path simply does not include the audience parameter in the verification call, meaning no configuration value, however correct, can prevent the bypass on the Limited Login path. The attack is straightforward. An attacker creates or uses any existing Google, Apple, or Facebook application they control, signs in to obtain a legitimately signed JWT, and then presents that token to a vulnerable Parse Server's authentication endpoint. Because audience validation is skipped, the token passes verification. Combined with the ability to specify which Parse Server user account to associate the token with, this becomes full pre-authentication account takeover for any user on the server — with no credentials, no brute force, and no interaction from the victim. The fix enforces the audience settings (clientId for Google/Apple, appIds for Facebook) as mandatory configuration and passes them correctly to JWT verification for both the Standard Login and Limited Login paths on all three adapters.
Vulnerabilities:

- CVE-2026-29182: Cloud Hooks and Cloud Jobs bypass readOnlyMasterKey
- CVE-2026-30228: File creation and deletion bypass readOnlyMasterKey
- CVE-2026-30229: /loginAs allows readOnlyMasterKey to gain full access as any user
- CVE-2026-30863: JWT audience validation bypass in the Google, Apple, and Facebook adapters

Disclosure timeline:

- CVE-2026-29182: GHSA-vc89-5g3r-cmhh — fixed in 8.6.4, 9.4.1-alpha.3
- CVE-2026-30228: GHSA-xfh7-phr7-gr2x — fixed in 8.6.5, 9.5.0-alpha.3
- CVE-2026-30229: GHSA-79wj-8rqv-jvp5 — fixed in 8.6.6, 9.5.0-alpha.4
- CVE-2026-30863: GHSA-x6fw-778m-wr9v — fixed in 8.6.10, 9.5.0-alpha.11

Parse Server repository: github.com/parse-community/parse-server
I’ve been using AI in my job a lot more lately — and it’s becoming an explicit expectation across the industry. Write more code, deliver more features, ship faster. You know what this makes me think about? Vim. I’ll explain myself, don’t worry. I like Vim. Enough to write a book about the editor, and enough to use Vim to write this article. I’m sure you’ve encountered colleagues who swear by their Vim or Emacs setups, or you might be one yourself. Here’s the thing most people get wrong about Vim: it isn’t about speed. It doesn’t necessarily make you faster (although it can), but what it does is keep you in the flow. It makes text editing easier — it’s nice not having to hunt down the mouse or hold an arrow key for exactly three and a half seconds. You can just delete a sentence. Or replace text inside the parentheses, or maybe swap parentheses for quotes. You’re editing without interruption, and it gives your brain space to focus on the task at hand. On the surface, AI tools look similar. They promise the same thing Vim delivers: less friction, more flow, your brain freed up to think about the hard stuff. And sometimes they actually deliver on that promise! I’ve had sessions where an AI assistant helped me skip past the tedious scaffolding and jump straight into the interesting architectural problem. There’s lots of good here. Still, I think the difference between AI and Vim explains a lot of the discomfort engineers are feeling right now. When I use Vim, the output is mine. Every keystroke, every motion, every edit — it’s a direct translation of my intent. Vim is a transparent tool: it does exactly what I tell it to do, nothing more. The skill floor and ceiling are high, but the relationship is honest. I learn a new motion, I understand what it does, and I can predict its behavior forever. There’s no hallucination. ci( will always change text inside parentheses. It won’t sometimes change the whole paragraph because it misunderstood the context.
AI tools have a different relationship with their operator. The output looks like yours, reads like yours, and certainly looks more polished than what you would produce on a first pass. But it isn’t a direct translation of your intent. Sometimes it’s a fine approximation. Sometimes it’s subtly wrong in ways you won’t catch until a hidden bug hits production. This is what I’d call the depth problem. When I use Vim, nobody can tell from reading my code whether I wrote it in Vim, VS Code, or Notepad. The tool is invisible in the artifact. And that’s fine, great even, because the quality of the output still depends entirely on me. My understanding of the problem, my experience with the codebase, my judgment about edge cases, my ability to produce elegant code: all of that shows up in the final product, regardless of which editor I used to type it up. AI inverts this. The tool is extremely visible in the artifact (it shapes the output’s style, structure, and polish) but the operator’s skill level becomes invisible. Everything comes out looking equally competent. You can’t tell from a pull request whether the author spent thirty minutes carefully steering the AI through edge cases or just hit accept on the first suggestion. That’s a huge problem, really. Because before, a bad pull request was easy to spot. Oftentimes a junior engineer would give you “hints” by not following the style guides or established conventions, which eventually tipped you off and led you to discover a major bug or missed corner case. AI output, though, always looks polished. We’ve lost a key indicator, the kind that makes an engineer’s spidey sense tingle. Now every line of code, every pull request is a suspect. And that’s exhausting. I just read Ivan Turkovic’s excellent AI Made Writing Code Easier. It Made Being an Engineer Harder (thanks for the share-out, Ben), and I couldn’t agree more with his core observation. The gap between “looking done” and “being right” is growing, and it’s growing fast.
You know what’s annoying? When your PM can prototype something in an afternoon and expects you to get that prototype “the rest of the way done” by Friday. Or the same day, if they’re feeling particularly optimistic about what “the rest of the way” means (my PMs are wonderful and thankfully don’t do this). But either way I don’t blame them, honestly. The prototype looks great. It’s got real-ish data, it handles the happy path, and it even has a loading spinner. It looks like a product. And if I could build this in two hours with an AI tool - well, how hard could it be for a full-time engineer to finish it up? The answer, of course, is that the last 10% of the work is 90% of the effort. Edge cases, error handling, validation, accessibility, security, performance under load, integration with existing systems, observability - none of that is visible in a prototype, and AI tools are exceptionally good at producing work that doesn’t have any of it. The prototype isn’t 90% done. It 90% looks good. Of course there’s an education component here - understanding the difference between surface level polish and structural soundness. But there’s a deeper problem here too, and it’s hard to solve with education alone. My friend and colleague Sarah put this better than I could: we’re going to need lessons in empathy. Here’s what she means. When a PM can spin up a working prototype in an afternoon using AI, they start to believe - even subconsciously - that they understand what engineering involves. When an engineer uses AI to generate user-facing documentation, they start to think the tech writer’s job is trivial. When a designer uses AI to write frontend code, they wonder why the team needs a dedicated frontend engineer. And none of these people are wrong about what they experienced. The PM really did build a working prototype. The engineer really did produce passable documentation. 
But the conclusion that they “did the other person’s job,” and that the job is therefore easy, is completely wrong. Speaking of Sarah. Sarah is a staff user experience researcher. It’s Doctor Sarah, actually. And I had the opportunity to contribute to a research paper, and I used AI to structure my contributions, and I was oh-so-proud of the work because it looked exactly like what I’ve seen in countless research papers I’ve read over the years. And Sarah scanned through my contributions, and was real proud of me. Until she sat down to read what I wrote, and had to rewrite just about everything I “contributed” from scratch. AI gives everyone a surface-level ability to contribute across almost any domain or role. And surface-level ability is the most dangerous kind, because it comes with surface-level understanding and full-depth confidence. Modern knowledge jobs are often understood by their output. Tech writers by the documents produced, designers by the mocks, and software engineers by code. But none of those artifacts are the core skills of each role. Tech writers are really good at breaking down complex concepts in ways the majority of people can understand and internalize. Designers build intuition and understanding of how people behave and engage with all kinds of stuff. Software engineers solve problems. AI tools can’t do those things. The path forward isn’t to gatekeep or to dismiss AI-generated contributions. It’s to build organizational empathy: a genuine understanding that every discipline has depth that isn’t visible from the outside, and that a tool which lets you produce artifacts in another person’s domain doesn’t mean you understand that domain. This is, admittedly, not a new problem. Engineers have underestimated designers since the dawn of software. PMs have underestimated engineers for just as long. But AI is pouring fuel on this particular fire by making everyone feel like a competent generalist.
I don’t want to be the person writing yet another “AI is ruining everything” essay. Frankly, there are enough of those. AI tools are genuinely useful - I use them daily, they make certain kinds of work better, and they’re here to stay. The scaffolding, the boilerplate, the “I know exactly what this should look like but I don’t want to type it out” moments - AI is great for those. Just like Vim is great for the “I need to restructure this method” moments. A few things I think help, borrowing from Turkovic’s recommendations and adding some of my own: Draw clear boundaries around AI output. A prototype is a prototype, not a product. AI-generated code is a first draft, not a pull request. Making this explicit - in team norms, in review processes, in how we talk about work - helps close the gap between appearance and reality. Invest in education, not just adoption. Rolling out AI tools without teaching people how to evaluate their output is like handing someone Vim without explaining modes. They’ll produce something, sure, but they won’t understand what they produced. And unlike Vim, where the failure mode is in your file, the failure mode with AI is shipping code that looks correct and isn’t. Build empathy across disciplines. This is Sarah’s point, and I think it’s the most important one. If AI makes it easy for anyone to produce surface-level work in any domain, then we need to get much better at respecting the depth beneath the surface. That means engineers sitting with PMs to understand their constraints, PMs shadowing engineers through the painful parts of productionization, and everyone acknowledging that “I made a thing with AI” is the beginning of a conversation, not the end of one. Protect your flow. This is the Vim lesson. The best tools are the ones that serve your intent without distorting it. If an AI tool is helping you think more clearly about the problem, great. 
If it’s generating so much output that your job has shifted from “solving problems” to “reviewing AI’s work” - that’s not flow. That’s a different job, and it might not be the one you signed up for. I keep coming back to this: Vim is a good tool because it does what I mean. The gap between my intent and the output is zero. AI tools are useful, sometimes very useful, but that gap is never zero. Knowing when the gap matters and when it doesn’t - that’s a core skill for where we are today. P.S. Did this piece need a Vim throughline? No it didn’t. But I enjoyed shoehorning it in regardless. I hear that’s going around lately. All opinions expressed here are my own. I don’t speak for Google.
This is a small JavaScript snippet that removes the most annoying website elements. It's really simple: it removes anything that doesn't scroll with the page, and re-enables scrolling if it's been disabled. This gets rid of cookie popups/banners, recommendation sidebars, those annoying headers that follow you down the page, etc., etc. If you don't want to mess around with the JS console, you can drag this link into your bookmark bar, and run it by clicking the bookmark: Cleanup Site If you need to manually create the bookmark, here's the URL: (On mobile Chrome, you will have to click the bookmark from the address bar instead of the menu.) This is a typical website before the script: ... and after: One click to get all your screen space back. It even works on very complex sites like social media — great for when you want to read a longer post without constant distractions. As a bonus, I made these to fix bad color schemes: Force dark mode ... and ... Force light mode
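Based on the description above, a minimal reconstruction of such a cleanup script might look like this — a sketch of the idea, not necessarily the exact code behind the bookmarklet:

```javascript
// Sketch of a cleanup script like the one described above (not the exact
// bookmarklet). The rule: elements positioned fixed or sticky don't scroll
// with the page, so they get removed.
function shouldRemove(position) {
  return position === 'fixed' || position === 'sticky';
}

function cleanupSite(doc) {
  for (const el of Array.from(doc.querySelectorAll('*'))) {
    const pos = doc.defaultView.getComputedStyle(el).position;
    if (shouldRemove(pos)) el.remove();
  }
  // Re-enable scrolling in case the site disabled it.
  doc.documentElement.style.overflow = 'auto';
  doc.body.style.overflow = 'auto';
}

// As a bookmarklet this would run against the live page, e.g.:
// javascript:(() => { cleanupSite(document); })();
```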
HN Skins 0.3.0 is a minor update to HN Skins, a web browser userscript that adds custom themes to Hacker News and allows you to browse HN with a variety of visual styles. This release includes fixes for a few issues that slipped through earlier versions. For example, the comment input textbox now uses the same font face and size as the rest of the active theme. The colour of visited links has also been slightly muted to make it easier to distinguish them from unvisited links. In addition, some skins have been renamed: Teletype is now called Courier and Nox is now called Midnight. Further, the font face of several monospace-based themes is now set to the generic monospace keyword instead of a specific font family. This allows the browser's preferred monospace font to be used. The font face of the Courier skin (formerly known as Teletype) remains set to Courier. This will never change, because the sole purpose of this skin is to celebrate this legendary font. To view screenshots of HN Skins or install it, visit github.com/susam/hnskins.
TLDR: Use require-trusted-types-for 'script' in your CSP and nothing besides Trusted Types objects works at DOM sinks, essentially removing all DOM-XSS risks. I was a guest on the ShopTalk Show podcast to talk about Trusted Types and the HTML Sanitizer API. Feel free to listen to the whole episode, if you want to …
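For reference, a minimal policy header enabling this looks like the following; the trusted-types directive and the policy name myPolicy are optional placeholders here, while require-trusted-types-for 'script' is the part that locks down the DOM sinks:

```
Content-Security-Policy: require-trusted-types-for 'script'; trusted-types myPolicy
```

With this header in place, assignments to sinks like innerHTML only accept Trusted Types objects created through a registered policy, and plain strings are rejected.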
Round One of Mad CSS is out on YouTube!
Welcome back to This Week in Stratechery! As a reminder, each week, every Friday, we’re sending out this overview of content in the Stratechery bundle; highlighted links are free for everyone. Additionally, you have complete control over what we send to you. If you don’t want to receive This Week in Stratechery emails (there is no podcast), please uncheck the box in your delivery settings. On that note, here were a few of our favorites this week. This week’s Sharp Tech video is on why Amazon is ramping AI spending. Anthropic and the Military. This week’s Stratechery Interview with Gregory Allen of the Center for Strategic and International Studies was one of my favorite conversations of the year so far. After a week of overheated rhetoric in every direction, Ben and Greg talk through the parallels and differences between AI and nuclear weapons, how the military uses autonomous weapons and the state of the art in 2026, and Allen provides some great insight into the process of contracting with the U.S. military and Anthropic’s process, specifically. I’d recommend this to anyone who’s been reading about the Anthropic standoff all week, as it was the best treatment of the issues that I’ve seen anywhere. — Andrew Sharp U.S. History and Our Political Present. On Sharp Text this week, I offered my own thoughts on the Anthropic mess, including a tour of American history that makes clear the government leaning on private businesses is not new, legal challenges have been common, and particularly given the security implications of AI, the tension here is not particularly surprising. More importantly, I find myself exhausted by the way everyone processes political controversies these days, including warnings about a dire American future that are now a daily occurrence online. Come for Anthropic, then, and stay for my one great hope for the future. — AS Apple Goes Downmarket.
Apple released an entirely new Mac, and, for the first time in a long time (maybe ever?), the overriding motivation was to be cheap. We discuss John’s hands-on experience with the MacBook Neo on Dithering; it is both a Tim Cook special — no iPhone chip will go to waste! — and the exact opposite of the super thin MacBook that I wanted a sequel to. — Ben Thompson Anthropic and Alignment — Anthropic is in a standoff with the Department of War; while the company’s concerns are legitimate, its position is intolerable and misaligned with reality. Technological Scale and Government Control, Paramount Outbids Netflix for Warner Bros. — Why government is not the primary customer for tech companies, and is Netflix relieved that they were outbid for Warner Bros.? Anthropic’s Skyrocketing Revenue, A Contract Compromise?, Nvidia Earnings — Anthropic’s enterprise business is reaching escape velocity, which increases the importance of finding a compromise with the government. Then, agents dramatically increase demand for Nvidia chips, even if they threaten software. An Interview with Gregory Allen About Anthropic and the U.S. Government — An interview with Gregory Allen about Anthropic’s dispute with the U.S. government. The End of the World As We Know It — On Anthropic’s standoff with the U.S. government and the exhausting nature of modern news commentary. Anthropic and the U.S. Government MacBook Neo Thyristors Did to Power What Transistors Did to Logic Vancomycin: The Iconic Antibiotic of Last Resort All Eyes on Iran; Two Sessions Questions; Alibaba, DeepSeek and Distillation; Another UK Spying Scandal An Emergency Bullseye Designation, Reviewing a Surprisingly Eventful Week, Remembering the 2011 Lockout League The Anthropic Mess Continues, Frontier AI and the Uncertain Future of Law, Q&A on Netflix, Dating Apps, F1
I recently started a new platform where I sell my books and courses, and on this website I needed to send account-related emails to my users for things such as email address verification and password reset requests. The option that is most often suggested is to use a paid email service such as Mailgun or SendGrid; sending emails on your own is, according to the Internet, too difficult. Because the prospect of adding yet another dependency on Big Tech is depressing, I decided to go against the general advice and roll my own email server. And sure, it wasn't trivial, but it wasn't all that hard either! Are you interested in hosting your own email server, like me? In this article I'll tell you how to go from nothing to being able to send emails that are accepted by all the big email players. My main concern is sending, but I will also cover the simple solution that I'm using to receive emails and replies.
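As a preview of what getting accepted by the big players involves, deliverability typically hinges on a few DNS records. A hedged sketch — example.com, the "mail" selector, and the addresses are placeholders, and your exact policy values may differ:

```
; SPF: which hosts are allowed to send mail for the domain
example.com.                  IN TXT "v=spf1 a mx -all"

; DKIM: public key used to verify message signatures (selector "mail")
mail._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=<base64-public-key>"

; DMARC: policy for mail that fails SPF/DKIM alignment
_dmarc.example.com.           IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc@example.com"
```

A matching reverse-DNS (PTR) record for the sending IP is usually required as well.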
It has now been a month since I started playing with Claude Code “for real” and by now I’ve mostly switched to Codex CLI: it is much snappier—who would imagine that a “Rewrite in Rust” would make things tangibly faster—and the answers feel more to-the-point than Claude’s to me. As part of this experiment, I decided to go all-in with the crazy idea of vibecoding a project without even looking at the code. The project I embarked on is an Emacs module to wrap a CLI ticket tracking tool designed to be used in conjunction with coding agents. Quite fitting for the journey, I’d say. In this article, I’d like to present a bunch of reflections on this relatively-simple vibecoding journey. But first, let’s look at what the Emacs module does. Oh, you saw em dashes and thought “AI slop article”? Think again. Blog System/5 is still humanly written. Subscribe to support it! CLI-based ticket tracking seems to be a necessity to support driving multiple agents at once, for long periods of time, and to execute complex tasks. A bunch of tools have shown up to track tickets via Markdown files in a way that the agents can interact with. The prime example is Beads by Steve Yegge . I would have used it if I hadn’t read otherwise, but then the article “A ‘Pure Go’ Linux environment, ported by Claude, inspired by Fabrice Bellard” showed up and it contained this gem, paraphrased by yours truly: Beads is a 300k SLOC vibecoded monster backed by a 128MB Git repository, sporting a background daemon, and it is sluggish enough to increase development latency… all to manage a bunch of Markdown files. Like, WTH. The article went on to suggest Ticket (tk) instead: a pure shell implementation of a task tracking tool backed by Markdown files stored in a directory in your repo. This sort of simple tool is my jam and I knew I could start using it right away to replace the ad-hoc text files I typically write. 
Once I installed the tool and created a nixpkgs package for it —which still requires approval, wink wink—I got to creating a few tickets. As I started using Ticket more and more to keep a local backlog for my EndBASIC compiler and VM rewrite, I started longing for some sort of integration in Doom Emacs. I could edit the Markdown files produced by tk just fine, of course, but I wanted the ability to find them with ease and to create new tickets right from the editor. Normally, I would have discarded this idea because I don’t know Elisp. However, it quickly hit me: “I can surely ask Claude to write this Emacs module for me”. As it turns out, I could, and within a few minutes I had a barebones module that gave me rudimentary ticket creation and navigation features within Emacs. I didn’t even look at the code, so I continued down the path of refining the module via prompts to fix every bug I found and implement every new idea I had. By now, ticket.el works reasonably well and fulfills a real need I had, so I’m pretty happy with the result. If you care to look, the nicest thing you’ll find is a tree-based interactive browser that shows dependencies and offers shortcuts to quickly manipulate tickets. tk doesn’t offer these features, so these are all implemented in Elisp by parsing the tickets’ front matter and implementing graph building and navigation algorithms. After all, Elisp is a much more powerful language than the shell, so this was easier than modifying tk itself. Should you want to try this out, visit jmmv/ticket.el on GitHub for instructions on how to install this plugin and to learn how to use it. I can’t promise it will function on anything but Doom Emacs even if the vibewritten README claims that it does, but if it doesn’t, feel free to send a PR. Alright, so it’s time for those reflections I promised. Did it work? Well, yes!
It took a fair amount of prodding to convince the AI that certain features it implemented didn’t work, but with little effort in additional prompts, I was able to fix them in minutes. A big part of why the AI failed to come up with fully working solutions upfront was that I did not set up an end-to-end feedback cycle for the agent. If you take the time to do this and tell the AI exactly what it must satisfy before claiming that a task is “done”, it can generally one-shot changes. But I didn’t do that here. At some point I asked the agent to write unit tests, and it did, but those seem to be insufficient to catch “real world” Emacs behavior: even when the tests pass, I still find that features are broken when trying to use them. And for the most part, the failures I’ve observed have always been about wiring shortcuts, not about bugs in program logic. I think I’ve only come across one case in which parentheses were unbalanced. Could I have built this myself, without AI? Certainly not. While learning Lisp and Elisp has been in my backlog for years and I’d love to learn more about these languages, I just don’t have the time nor sufficient interest to do so. Furthermore, without those foundations already in place, I would just not have been able to create this at all. AI agents allowed me to prototype this idea trivially, for literal pennies, and now I have something that I can use day to day. It’s quite rewarding in that sense: I’ve scratched my own itch with little effort and without making a big deal out of it. Am I proud of the result? Nope. Even though I just said that getting the project to work was rewarding, I can’t feel proud about it. I don’t have any connection to what I have made and published, so if it works, great, and if it doesn’t… well, too bad. This is… not a good feeling. I actually enjoy the process of coding probably more than getting to a finished product.
I like paying attention to the details because coding feels like art to me, and there is beauty in navigating the thinking process to find a clean and elegant solution. Unfortunately, AI agents pretty much strip this journey out completely. At the end of the day, I have something that I can use, though I don’t feel it is mine. Was this a productivity loss, then? Not really, and it supports why people keep bringing up the Jevons paradox. Yes, I did prompt the agent to write this code for me but I did not just wait idly while it was working: I spent the time doing something else, so in a sense my productivity increased because I delivered an extra new thing that I would have not done otherwise. One interesting insight is that I did not require extended blocks of free focus time—which are hard to come by with kids around—to make progress. I could easily prompt the AI in a few minutes of spare time, test out the results, and iterate. In the past, if I ever wanted to get this done, I’d have needed to make the expensive choice of using my little free time on this at the expense of other ideas… but here, the agent did everything for me in the background. Did I learn anything along the way? Other than how to better prompt the AI and the sort of failures to routinely expect? No. I’m as clueless as ever about Elisp. If you were to ask me to write a new Emacs module today, I would have to rely on AI to do so again: I wouldn’t be able to tell you how long it might take me to get it done nor whether I would succeed at it. And if the agent got stuck and was unable to implement the idea, I would be lost. This is a very different feeling from other tasks I’ve “mastered”. If you ask me to write a CLI tool or to debug a certain kind of bug, I know I’ll succeed and have a pretty good intuition on how long the task is going to take me. But by working with AI on a new domain… I just don’t, and I don’t see how I could build that intuition. This is uncomfortable and dangerous.
You can try asking the agent to give you an estimate, and it will, but funnily enough the estimate will be in “human time” so it won’t have any meaning. And when you try working on the problem, the agent’s stochastic behavior could lead you to a super-quick win or to a dead end that never converges on a solution. Is the result slop, then? Of course it is. Regardless, I just don’t care in this specific case. This is a project I started to play with AI and to solve a specific problem I had. The solution works and it works sufficiently well that I just don’t care how it’s done: after all, I’m not going to turn this Emacs module into “my next big thing”. The fact that I put the code as open source on GitHub is because it helps me install this plugin across all machines in which I run Doom Emacs, not because I expect to build a community around it or anything like that. If you care about using the code after reading this text and you are happy with it, that’s great, but that’s just a plus. I opened the article ranting about Beads’ 300K SLOC codebase, and “bloat” is maybe the biggest concern I have with pure vibecoding. From my limited experience, coding agents tend to take the path of least resistance when adding new features, and most of the time this results in duplicating code left and right. Coding agents rarely think about introducing new abstractions to avoid duplication, or even about moving common code into auxiliary functions. They’ll do great if you tell them to make these changes—and profoundly confirm that the refactor is a great idea—but you must look at their changes and think through them to know what to ask. You may not be typing code, but you are still coding in a higher-level sense. But left unattended, you’ll end up with vast amounts of duplication: aka bloat. I fear we are about to see an explosion of slow software like we have never imagined before.
And there is also the cynical take: the more bloat there is in the code, the more context and tokens agents need to understand it, so the more you have to pay their providers to keep up with the project. And speaking of open source… we must ponder what this sort of coding process means in this context. I’m worried that vibecoding can lead to a new type of abuse of open source that is hard to imagine: yes, yes, training the AI models has already been done by abusing open source, but that’s nothing compared to what might come in terms of taking over existing projects or drowning them with poor contributions. I’m starting to question my preference for BSD-style licenses all along… and this is such an interesting and important topic that I have more to say, but I’m going to save those thoughts for the next article. Vibecoding has been an interesting experiment. I got exactly what I wanted with almost no effort but it all feels hollow. I’ve traded the joy of building for the speed of prompting, and while the result is useful, it’s still just “slop” to me. I’m glad it works, but I’m worried about what this means for the future of software. Visit ticket and ticket.el to play with these tools if you are curious or need some sort of lightweight ticket management system for your AI interactions.