Latest Posts (20 found)

Nix is a lie, and that’s ok

When Eelco Dolstra, father of Nix, descended from the mountain tops and enlightened us all, one of the main commandments for Nix was to eschew all uses of the Filesystem Hierarchy Standard (FHS). The FHS is the “find libraries and files by convention” dogma Nix abandons in the pursuit of purity. What if I told you that was a lie? 😑

Nix was explicitly designed to eliminate standard FHS paths (such as /lib or /usr/lib) to guarantee reproducibility. However, graphics drivers represent a hard boundary between user-space and kernel-space. The user-space library (libGL and its vendor-specific friends) must match the host OS’s kernel module and the physical GPU. Nearly all derivations do not bundle graphics drivers because they have no way of predicting the hardware or host kernel the binary will run on.

What about NixOS? Surely, we know what kernel and drivers we have there!? 🤔 Well, if we modified every derivation to include the correct driver, it would cause massive rebuilds for every user and make the NixOS cache effectively useless. To solve this, NixOS & Home Manager introduce an intentional impurity: a global path at /run/opengl-driver where derivations expect to find the graphics drivers. We’ve just re-introduced a convention path à la FHS. 🫠

Unfortunately, that leaves users who run Nix on other Linux distributions in a bad state, which is documented in issue #9415, open since 2015. If you try to install and run any Nix application that requires graphics, you’ll be hit with exactly the kind of “cannot find library” error Nix was designed to thwart. There are a couple of workarounds for those of us who use Nix on alternate distributions:

- nixGL, a runtime script that injects the driver library at launch
- manually hacking: creating your own /run/opengl-driver and symlinking it to the drivers from your host distribution

For those of us who cling to the beautiful purity of Nix, however, it feels like a sad but ultimately necessary trade-off. Thou shalt not use FHS, unless you really need to.
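The manual workaround can be sketched in a couple of shell commands. This is a hedged illustration, not an official recipe: the host's driver library directory varies by distribution, so the commands are demonstrated against a scratch directory (on a real system you would target /run/opengl-driver itself, as root):

```shell
# Sketch of the "create your own /run/opengl-driver" workaround.
# On a real system (as root) this would be something like:
#   mkdir -p /run/opengl-driver
#   ln -sfn /usr/lib/x86_64-linux-gnu /run/opengl-driver/lib   # path varies by distro!
# Demonstrated here in a scratch directory so it runs unprivileged:
root=$(mktemp -d)
mkdir -p "$root/run/opengl-driver"
ln -sfn /usr/lib "$root/run/opengl-driver/lib"
readlink "$root/run/opengl-driver/lib"
```

Nix-built binaries that look for their GL driver under the convention path then pick up the host's libraries, which is exactly the FHS-style impurity described above.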


Sunsetting The 512kb Club

All good things must come to an end, and today is that day for one of my projects, the 512kb Club. I started the 512kb Club back in November 2020, so it's been around 5.5 years. It's become a drain and I'm ready to move on. As of today I won't be accepting any new submissions to the project. At the time of writing this, there are 25 PRs open for new submissions; I'll work through them, then will disable the ability to submit pull requests. Over the years there have been nearly 2,000 pull requests, and there are currently around 950 sites listed on the 512kb Club. Pretty cool, but it's a lot of work to manage - there's reviewing new submissions (which is a steady stream of pull requests), cleaning up old sites, updating sites, etc. It's more than I have time to do. I'm also trying to focus my time on other projects, like Pure Commons. It's sad to see this kind of project fall by the wayside, but life moves on. Having said that, if you think you want to take over 512kb Club, let's have a chat. There are some pre-requisites though:

- We need to know each other. I'm not going to hand the project over to someone I don't know, sorry.
- You probably need to be familiar with Jekyll and Git.

I'm probably going to get a lot of emails with offers to help (which is fantastic), but if we've never interacted before, I won't be moving forward with your kind offer. After reading the above, if we know each other, and you're still interested, use the email button below and we can have a chat about you potentially taking over. By taking over, I will expect you to:

- Take ownership of the domain, so you will be financially responsible for renewals.
- Take ownership of the GitHub repo, so you will be responsible for all pull requests, issues and anything else Git related.
- Be responsible for all hosting and maintenance of the project - the site is currently hosted on my personal Vercel account, which I will be deleting after handing off.
- Be a good custodian of the 512kb Club and continue to maintain it in its current form.

If you're just looking to take over and use it as a means to slap ads on it, and live off the land, I'd rather it go to landfill, and will just take the site down. That's why I only want someone I know and trust to take it over. I think I've made my point now. 🙃 If there's no-one prepared to take over, I plan to do one final export of the source from Jekyll, then upload that to my web server, where it will live until I decide to no longer renew the domain. I'll also update the site with a message stating that the project has been sunset and there will be no more submissions. If you don't wanna see that happen, please get in touch.

Thanks for reading this post via RSS. RSS is ace, and so are you. ❤️ You can reply to this post by email, or leave a comment.

devansh Today

Four Vulnerabilities in Parse Server

Parse Server is one of those projects that sits quietly beneath a lot of production infrastructure. It powers the backend of a meaningful number of mobile and web applications, particularly those that started on Parse's original hosted platform before it shut down in 2017 and needed somewhere to migrate. The project currently has over 21,000 stars on GitHub. I recently spent some time auditing its codebase and found four security vulnerabilities. Three of them share a common root: a fundamental gap between what the readOnlyMasterKey is documented to do and what the server actually enforces. The fourth is an independent issue in the social authentication adapters that is arguably more severe: a JWT validation bypass that allows an attacker to authenticate as any user on a target server using a token issued for an entirely different application. The Parse Server team was responsive throughout and coordinated fixes promptly. All four issues have been patched.

Parse Server is an open-source Node.js backend framework that provides a complete application backend out of the box: a database abstraction layer (typically over MongoDB or PostgreSQL), a REST and GraphQL API, user authentication, file storage, push notifications, Cloud Code for serverless functions, and a real-time event system. It is primarily used as the backend for mobile applications and is the open-source successor to Parse's original hosted backend-as-a-service platform.

Parse Server authenticates API requests using one of several key types. The masterKey grants full administrative access to all data, bypassing all object-level and class-level permission checks. It is intended for trusted server-side operations only. Parse Server also exposes a readOnlyMasterKey option. Per its documentation, this key grants master-level read access: it can query any data, bypass ACLs for reading, and perform administrative reads, but is explicitly intended to deny all write operations.
It is the kind of credential you might hand to an analytics service, a monitoring agent, or a read-only admin dashboard: enough power to see everything, but no ability to change anything. That contract is what three of these four vulnerabilities break. The implementation checks whether a request carries master-level credentials by testing a single flag, isMaster, on the auth object. The problem is that readOnlyMasterKey authentication sets both isMaster and isReadOnly, and a large number of route handlers only check the former. The isReadOnly flag is set but never consulted, which means the read-only restriction exists in concept but not in enforcement.

Cloud Hooks are server-side webhooks that fire when specific Parse Server events occur: object creation, deletion, user signup, and so on. Cloud Jobs are scheduled or manually triggered background tasks that can execute arbitrary Cloud Code functions. Both are powerful primitives: Cloud Hooks can exfiltrate any data passing through the server's event stream, and Cloud Jobs can execute arbitrary logic on demand. The routes that manage Cloud Hooks and Cloud Jobs (creating new hooks, modifying existing ones, deleting them, and triggering job execution) are all guarded by master key access checks. Those checks verify only that the requesting credential has isMaster set. Because the readOnlyMasterKey satisfies that condition, a caller holding only the read-only credential can fully manage the Cloud Hook lifecycle and trigger Cloud Jobs at will. The practical impact is data exfiltration via Cloud Hook. An attacker who knows the readOnlyMasterKey can register a new Cloud Hook pointing to an external endpoint they control, then watch as every matching Parse Server event (user signups, object writes, session creation) is delivered to them in real time. The read-only key, intended to allow passive observation, can be turned into an active wiretap on the entire application's event stream. The fix adds explicit isReadOnly rejection checks to the Cloud Hook and Cloud Job handlers.
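The class of bug behind all three readOnlyMasterKey issues can be sketched in a few lines of JavaScript. This is a simplified illustration, not Parse Server's actual code: the function names are invented, and only the two flags discussed above are modeled.

```javascript
// Simplified model of the auth objects the two keys produce.
const masterAuth = { isMaster: true, isReadOnly: false };
const readOnlyAuth = { isMaster: true, isReadOnly: true }; // both flags set!

// Vulnerable guard: consults isMaster only, so the read-only key passes.
function enforceMasterKeyAccess(auth) {
  if (!auth.isMaster) throw new Error('master key required');
}

// Fixed guard for write routes: additionally rejects the read-only key.
function enforceWritableMasterKeyAccess(auth) {
  if (!auth.isMaster) throw new Error('master key required');
  if (auth.isReadOnly) throw new Error('read-only master key cannot write');
}

enforceMasterKeyAccess(readOnlyAuth);            // passes -- the bypass
// enforceWritableMasterKeyAccess(readOnlyAuth)  // would throw -- the fix
```

The read-only restriction only exists if every privileged write route performs the second check; any handler that stops at isMaster silently grants the read-only key full power.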
Parse Server's Files API exposes endpoints for uploading and deleting files — and . Both routes are guarded by , a middleware that checks whether the incoming request has master-level credentials. Like the Cloud Hooks routes, this check only tests and never consults . The root cause traces through three locations in the codebase. In at lines 267–278, the read-only auth object is constructed with . In at lines 107–113, the delete route applies as its only guard. At lines 586–602 of the same file, the delete handler calls through to without any additional read-only check in the call chain. The consequence is that a caller with only can upload arbitrary files to the server's storage backend or permanently delete any existing file by name. The upload vector is primarily an integrity concern — poisoning stored assets. The deletion vector is a high-availability concern — an attacker can destroy application data (user avatars, documents, media) that may not have backups, and depending on how the application is structured, deletion of certain files could cause cascading application failures. The fix adds rejection to both the file upload and file delete handlers. This is the most impactful of the three issues. The endpoint is a privileged administrative route intended for master-key workflows — it accepts a parameter and returns a valid, usable session token for that user. The design intent is to allow administrators to impersonate users for debugging or support purposes. It is the digital equivalent of a master key that can open any door. The route's handler, , is located in at lines 339–345 and is mounted as at lines 706–708. The guard condition rejects requests where is false. Because produces an auth object where is true — and because there is no check anywhere in the handler or its middleware chain — the read-only credential passes the gate and the endpoint returns a fully usable for any provided. That session token is not a read-only token. 
It is a normal user session token, indistinguishable from one obtained by logging in with a password. It grants full read and write access to everything that user's ACL and role memberships permit. An attacker with the readOnlyMasterKey and knowledge of any user's object ID can silently mint a session as that user and then act as them with complete write access: modifying their data, making purchases, changing their email address, deleting their account, or doing anything else the application allows its users to do. There is no workaround other than removing the readOnlyMasterKey from the deployment or upgrading. The fix is a single guard added to the handler that rejects the request when isReadOnly is true.

This vulnerability is independent of the readOnlyMasterKey theme and is the most severe of the four. It sits in Parse Server's social authentication layer, specifically in the adapters that validate identity tokens for Sign in with Google, Sign in with Apple, and Facebook Login. When a user authenticates via one of these providers, the client receives a JSON Web Token signed by the provider. Parse Server's authentication adapters are supposed to verify this token: they check the signature, the expiry, and, critically, the audience claim, the field that specifies which application the token was issued for. Audience validation is what prevents a token issued for one application from being used to authenticate against a different application. Without it, a validly signed token from any Google, Apple, or Facebook application in the world can be used to authenticate against any Parse Server that trusts the same provider. The vulnerability arises from how the adapters handle missing configuration. For the Google and Apple adapters, the audience is passed to JWT verification via the clientId configuration option. When clientId is not set, the adapters do not reject the configuration as incomplete; they silently skip audience validation entirely. The JWT is verified for signature and expiry only, and any valid Google or Apple token from any app will be accepted.
For Facebook Limited Login, the situation is worse: the vulnerability exists regardless of configuration. The Facebook adapter validates the configured app IDs as the expected audience for the Standard Login (Graph API) flow. However, the Limited Login path, which uses JWTs rather than Graph API tokens, never passes the expected audience to JWT verification at all. The code path simply does not include the audience parameter in the verification call, meaning no configuration value, however correct, can prevent the bypass on the Limited Login path. The attack is straightforward. An attacker creates or uses any existing Google, Apple, or Facebook application they control, signs in to obtain a legitimately signed JWT, and then presents that token to a vulnerable Parse Server's authentication endpoint. Because audience validation is skipped, the token passes verification. Combined with the ability to specify which Parse Server user account to associate the token with, this becomes full pre-authentication account takeover for any user on the server, with no credentials, no brute force, and no interaction from the victim. The fix enforces the audience settings (clientId for Google/Apple, the app IDs for Facebook) as mandatory configuration and passes them correctly to JWT verification for both the Standard Login and Limited Login paths on all three adapters.
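The audience gap can be reduced to a few lines. This sketch is illustrative only: signature and expiry verification are elided, and the function name is invented; it models just the claim check that the vulnerable code paths skipped.

```javascript
// Does the token's aud claim match the application we expect? Mirrors
// the semantics of passing an `audience` option to a JWT library.
function audienceAccepted(payload, expectedClientId) {
  if (expectedClientId == null) {
    // Vulnerable behavior: missing configuration silently disables the
    // check, so a token minted for ANY application is accepted.
    return true;
  }
  const aud = Array.isArray(payload.aud) ? payload.aud : [payload.aud];
  return aud.includes(expectedClientId);
}

// A validly signed token issued for an attacker-controlled app:
const foreignToken = { sub: 'victim-user-id', aud: 'attacker-app' };

audienceAccepted(foreignToken, undefined);      // true  -- the bypass
audienceAccepted(foreignToken, 'my-client-id'); // false -- the fix
```

Making the expected audience mandatory turns the first branch from a silent bypass into a configuration error, which is essentially what the fix does.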
Disclosure timeline and fixes:

- CVE-2026-29182 (Cloud Hooks and Cloud Jobs bypass readOnlyMasterKey): GHSA-vc89-5g3r-cmhh - fixed in 8.6.4 and 9.4.1-alpha.3
- CVE-2026-30228 (File Creation and Deletion bypass readOnlyMasterKey): GHSA-xfh7-phr7-gr2x - fixed in 8.6.5 and 9.5.0-alpha.3
- CVE-2026-30229 (/loginAs allows readOnlyMasterKey to gain full access as any user): GHSA-79wj-8rqv-jvp5 - fixed in 8.6.6 and 9.5.0-alpha.4
- CVE-2026-30863 (JWT Audience Validation Bypass in Google, Apple, and Facebook Adapters): GHSA-x6fw-778m-wr9v - fixed in 8.6.10 and 9.5.0-alpha.11

Parse Server repository: github.com/parse-community/parse-server


AI, Vim, And the illusion of flow

I’ve been using AI in my job a lot more lately — and it’s becoming an explicit expectation across the industry. Write more code, deliver more features, ship faster. You know what this makes me think about? Vim. I’ll explain myself, don’t worry. I like Vim. Enough to write a book about the editor, and enough to use Vim to write this article. I’m sure you’ve encountered colleagues who swear by their Vim or Emacs setups, or you might be one yourself. Here’s the thing most people get wrong about Vim: it isn’t about speed. It doesn’t necessarily make you faster (although it can), but what it does is keep you in the flow. It makes text editing easier — it’s nice not having to hunt down the mouse or hold an arrow key for exactly three and a half seconds. You can just delete a sentence. Or replace text inside the parentheses, or maybe swap parentheses for quotes. You’re editing without interruption, and it gives your brain space to focus on the task at hand. AI tools look this way on the surface. They promise the same thing Vim delivers: less friction, more flow, your brain freed up to think about the hard stuff. And sometimes they actually deliver on that promise! I’ve had sessions where an AI assistant helped me skip past the tedious scaffolding and jump straight into the interesting architectural problem. There’s lots of good here. Still, I think the difference between AI and Vim explains a lot of the discomfort engineers are feeling right now. When I use Vim, the output is mine. Every keystroke, every motion, every edit — it’s a direct translation of my intent. Vim is a transparent tool: it does exactly what I tell it to do, nothing more. The skill floor and ceiling are high, but the relationship is honest. I learn a new motion, I understand what it does, and I can predict its behavior forever. There’s no hallucination. ci( will always change text inside parentheses (c for change, i for inside). It won’t sometimes change the whole paragraph because it misunderstood the context.
AI tools have a different relationship with their operator. The output looks like yours, reads like yours, and certainly looks more polished than what you would produce on a first pass. But it isn’t a direct translation of your intent. Sometimes it’s a fine approximation. Sometimes it’s subtly wrong in ways you won’t catch until a hidden bug hits production. This is what I’d call the depth problem. When I use Vim, nobody can tell from reading my code whether I wrote it in Vim, VS Code, or Notepad. The tool is invisible in the artifact. And that’s fine, great even - because the quality of the output still depends entirely on me. My understanding of the problem, my experience with the codebase, my judgment about edge cases, my ability to produce elegant code - all of that shows up in the final product, regardless of which editor I used to type it up. AI inverts this. The tool is extremely visible in the artifact - it shapes the output’s style, structure, and polish - but the operator’s skill level becomes invisible. Everything comes out looking equally competent. You can’t tell from a pull request whether the author spent thirty minutes carefully steering the AI through edge cases or just hit accept on the first suggestion. That’s a huge problem, really. Because before, a bad pull request was easy to spot. Oftentimes a junior engineer would give you “hints” by not following the style guides or established conventions, which eventually tips you off and leads you to discover a major bug or missed corner case. Well, AI output always looks polished. We lost a key indicator which makes engineering spidey sense tingle. Now every line of code, every pull request is a suspect. And that’s exhausting. I just read Ivan Turkovic’s excellent AI Made Writing Code Easier. It Made Being an Engineer Harder (thanks for the share-out, Ben), and I couldn’t agree more with his core observation. The gap between “looking done” and “being right” is growing, and it’s growing fast. 
You know what’s annoying? When your PM can prototype something in an afternoon and expects you to get that prototype “the rest of the way done” by Friday. Or the same day, if they’re feeling particularly optimistic about what “the rest of the way” means (my PMs are wonderful and thankfully don’t do this). But either way I don’t blame them, honestly. The prototype looks great. It’s got real-ish data, it handles the happy path, and it even has a loading spinner. It looks like a product. And if I could build this in two hours with an AI tool - well, how hard could it be for a full-time engineer to finish it up? The answer, of course, is that the last 10% of the work is 90% of the effort. Edge cases, error handling, validation, accessibility, security, performance under load, integration with existing systems, observability - none of that is visible in a prototype, and AI tools are exceptionally good at producing work that doesn’t have any of it. The prototype isn’t 90% done. It 90% looks good. Of course there’s an education component here - understanding the difference between surface level polish and structural soundness. But there’s a deeper problem here too, and it’s hard to solve with education alone. My friend and colleague Sarah put this better than I could: we’re going to need lessons in empathy. Here’s what she means. When a PM can spin up a working prototype in an afternoon using AI, they start to believe - even subconsciously - that they understand what engineering involves. When an engineer uses AI to generate user-facing documentation, they start to think the tech writer’s job is trivial. When a designer uses AI to write frontend code, they wonder why the team needs a dedicated frontend engineer. And none of these people are wrong about what they experienced. The PM really did build a working prototype. The engineer really did produce passable documentation. 
But the conclusion that they “did the other person’s job” - and that the job is therefore easy - is completely wrong. Speaking of Sarah. Sarah is a staff user experience researcher. It’s Doctor Sarah, actually. And I had the opportunity to contribute to a research paper, and I used AI to structure my contributions, and I was oh-so-proud of the work because it looked exactly like what I’ve seen in countless research papers I’ve read over the years. And Sarah scanned through my contributions, and was real proud of me. Until she sat down to read what I wrote, and had to rewrite just about everything I “contributed” from scratch. AI gives everyone a surface-level ability to contribute across almost any domain or role. And surface-level ability is the most dangerous kind, because it comes with surface-level understanding and full-depth confidence. Modern knowledge jobs are often understood by their output: tech writers by the documents produced, designers by the mocks, and software engineers by code. But none of those artifacts are the core skills of each role. Tech writers are really good at breaking down complex concepts in ways the majority of people can understand and internalize. Designers build intuition and understanding of how people behave and engage with all kinds of stuff. Software engineers solve problems. AI tools can’t do those things. The path forward isn’t to gatekeep or to dismiss AI-generated contributions. It’s to build organizational empathy - a genuine understanding that every discipline has depth that isn’t visible from the outside, and that a tool which lets you produce artifacts in another person’s domain doesn’t mean you understand that domain. This is, admittedly, not a new problem. Engineers have underestimated designers since the dawn of software. PMs have underestimated engineers for just as long. But AI is pouring fuel on this particular fire by making everyone feel like a competent generalist.
I don’t want to be the person writing yet another “AI is ruining everything” essay. Frankly, there are enough of those. AI tools are genuinely useful - I use them daily, they make certain kinds of work better, and they’re here to stay. The scaffolding, the boilerplate, the “I know exactly what this should look like but I don’t want to type it out” moments - AI is great for those. Just like Vim is great for the “I need to restructure this method” moments. A few things I think help, borrowing from Turkovic’s recommendations and adding some of my own: Draw clear boundaries around AI output. A prototype is a prototype, not a product. AI-generated code is a first draft, not a pull request. Making this explicit - in team norms, in review processes, in how we talk about work - helps close the gap between appearance and reality. Invest in education, not just adoption. Rolling out AI tools without teaching people how to evaluate their output is like handing someone Vim without explaining modes. They’ll produce something, sure, but they won’t understand what they produced. And unlike Vim, where the failure mode is in your file, the failure mode with AI is shipping code that looks correct and isn’t. Build empathy across disciplines. This is Sarah’s point, and I think it’s the most important one. If AI makes it easy for anyone to produce surface-level work in any domain, then we need to get much better at respecting the depth beneath the surface. That means engineers sitting with PMs to understand their constraints, PMs shadowing engineers through the painful parts of productionization, and everyone acknowledging that “I made a thing with AI” is the beginning of a conversation, not the end of one. Protect your flow. This is the Vim lesson. The best tools are the ones that serve your intent without distorting it. If an AI tool is helping you think more clearly about the problem, great. 
If it’s generating so much output that your job has shifted from “solving problems” to “reviewing AI’s work” - that’s not flow. That’s a different job, and it might not be the one you signed up for. I keep coming back to this: Vim is a good tool because it does what I mean. The gap between my intent and the output is zero. AI tools are useful, sometimes very useful, but that gap is never zero. Knowing when the gap matters and when it doesn’t - that’s a core skill for where we are today. P.S. Did this piece need a Vim throughline? No it didn’t. But I enjoyed shoehorning it in regardless. I hear that’s going around lately. All opinions expressed here are my own. I don’t speak for Google.


HN Skins 0.3.0

HN Skins 0.3.0 is a minor update to HN Skins, a web browser userscript that adds custom themes to Hacker News and lets you browse HN with a variety of visual styles. This release fixes a few issues that slipped through earlier versions. For example, the comment input textbox now uses the same font face and size as the rest of the active theme. The colour of visited links has also been slightly muted to make it easier to distinguish them from unvisited links. In addition, some skins have been renamed: Teletype is now called Courier and Nox is now called Midnight. Further, the font face of several monospace-based themes is now set to the generic monospace keyword instead of a specific font. This allows the browser's preferred monospace font to be used. The font face of the Courier skin (formerly known as Teletype) remains explicitly set to Courier. This will never change, because the sole purpose of this skin is to celebrate this legendary font. To view screenshots of HN Skins or install it, visit github.com/susam/hnskins . Read on website | #web | #programming | #technology
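The font-face change described above can be illustrated with a small CSS sketch. The selectors here are invented for illustration; the real userscript's rules differ:

```css
/* Before (illustrative): a hard-coded face overrode the user's preference.
   .theme-teletype pre { font-family: "Courier New", monospace; } */

/* After: the generic keyword defers to the browser's configured
   monospace font for the monospace-based themes. */
.theme-mono pre,
.theme-mono code,
.theme-mono textarea {
  font-family: monospace;
}
```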

Stratechery Yesterday

2026.10: Higher Powers and Lower Macs

Welcome back to This Week in Stratechery! As a reminder, each week, every Friday, we’re sending out this overview of content in the Stratechery bundle; highlighted links are free for everyone. Additionally, you have complete control over what we send to you. If you don’t want to receive This Week in Stratechery emails (there is no podcast), please uncheck the box in your delivery settings. On that note, here are a few of our favorites this week. This week’s Sharp Tech video is on why Amazon is ramping AI spending.

Anthropic and the Military. This week’s Stratechery Interview with Gregory Allen of the Center for Strategic and International Studies was one of my favorite conversations of the year so far. After a week of overheated rhetoric in every direction, Ben and Greg talk through the parallels and differences between AI and nuclear weapons and how the military uses autonomous weapons and the state of the art in 2026, and Allen provides some great insight into the process of contracting with the U.S. military and Anthropic’s process, specifically. I’d recommend this to anyone who’s been reading about the Anthropic standoff all week, as it was the best treatment of the issues that I’ve seen anywhere. — Andrew Sharp

U.S. History and Our Political Present. On Sharp Text this week, I offered my own thoughts on the Anthropic mess, including a tour of American history that makes clear the government leaning on private businesses is not new, legal challenges have been common, and, particularly given the security implications of AI, the tension here is not particularly surprising. More importantly, I find myself exhausted by the way everyone processes political controversies these days, including warnings about a dire American future that are now a daily occurrence online. Come for Anthropic, then, and stay for my one great hope for the future. — AS

Apple Goes Downmarket.
Apple released an entirely new Mac, and, for the first time in a long time (maybe ever?), the overriding motivation was to be cheap. We discuss John’s hands-on experience with the MacBook Neo on Dithering; it is both a Tim Cook special — no iPhone chip will go to waste! — and also the exact opposite of the super-thin MacBook that I wanted a sequel to. — Ben Thompson

- Anthropic and Alignment — Anthropic is in a standoff with the Department of War; while the company’s concerns are legitimate, its position is intolerable and misaligned with reality.
- Technological Scale and Government Control, Paramount Outbids Netflix for Warner Bros. — Why government is not the primary customer for tech companies, and is Netflix relieved that they were outbid for Warner Bros.?
- Anthropic’s Skyrocketing Revenue, A Contract Compromise?, Nvidia Earnings — Anthropic’s enterprise business is reaching escape velocity, which increases the importance of finding a compromise with the government. Then, agents dramatically increase demand for Nvidia chips, even if they threaten software.
- An Interview with Gregory Allen About Anthropic and the U.S. Government — An interview with Gregory Allen about Anthropic’s dispute with the U.S. government.
- The End of the World As We Know It — On Anthropic’s standoff with the U.S. government and the exhausting nature of modern news commentary.
- Anthropic and the U.S. Government
- MacBook Neo
- Thyristors Did to Power What Transistors Did to Logic
- Vancomycin: The Iconic Antibiotic of Last Resort
- All Eyes on Iran; Two Sessions Questions; Alibaba, DeepSeek and Distillation; Another UK Spying Scandal
- An Emergency Bullseye Designation, Reviewing a Surprisingly Eventful Week, Remembering the 2011 Lockout League
- The Anthropic Mess Continues, Frontier AI and the Uncertain Future of Law, Q&A on Netflix, Dating Apps, F1


How to Host your Own Email Server

I recently started a new platform where I sell my books and courses, and on this website I needed to send account-related emails to my users for things such as email address verification and password reset requests. The reasonable option that is often suggested is to use a paid email service such as Mailgun or SendGrid; sending emails on your own is, according to the Internet, too difficult. Because the prospect of adding yet another dependency on Big Tech is depressing, I decided to go against the general advice and roll my own email server. And sure, it wasn't trivial, but it wasn't all that hard either! Are you interested in hosting your own email server too? In this article I'll tell you how to go from nothing to being able to send emails that are accepted by all the big email players. My main concern is sending, but I will also cover the simple solution that I'm using to receive emails and replies.

Blog System/5 Yesterday

Reflections on vibecoding ticket.el

It has now been a month since I started playing with Claude Code “for real” and by now I’ve mostly switched to Codex CLI: it is much snappier—who would imagine that a “Rewrite in Rust” would make things tangibly faster—and the answers feel more to-the-point than Claude’s to me. As part of this experiment, I decided to go all-in with the crazy idea of vibecoding a project without even looking at the code. The project I embarked on is an Emacs module to wrap a CLI ticket tracking tool designed to be used in conjunction with coding agents. Quite fitting for the journey, I’d say. In this article, I’d like to present a bunch of reflections on this relatively-simple vibecoding journey. But first, let’s look at what the Emacs module does. Oh, you saw em dashes and thought “AI slop article”? Think again. Blog System/5 is still humanly written. Subscribe to support it! CLI-based ticket tracking seems to be a necessity to support driving multiple agents at once, for long periods of time, and to execute complex tasks. A bunch of tools have shown up to track tickets via Markdown files in a way that the agents can interact with. The prime example is Beads by Steve Yegge . I would have used it if I hadn’t read otherwise, but then the article “A ‘Pure Go’ Linux environment, ported by Claude, inspired by Fabrice Bellard” showed up and it contained this gem, paraphrased by yours truly: Beads is a 300k SLOC vibecoded monster backed by a 128MB Git repository, sporting a background daemon, and it is sluggish enough to increase development latency… all to manage a bunch of Markdown files. Like, WTH. The article went on to suggest Ticket (tk) instead: a pure shell implementation of a task tracking tool backed by Markdown files stored in a directory in your repo. This sort of simple tool is my jam and I knew I could start using it right away to replace the ad-hoc text files I typically write. 
Once I installed the tool and created a nixpkgs package for it—which still requires approval, wink wink—I got to creating a few tickets. As I started using Ticket more and more to keep a local backlog for my EndBASIC compiler and VM rewrite, I started longing for some sort of integration in Doom Emacs. I could edit the Markdown files produced by just fine, of course, but I wanted the ability to find them with ease and to create new tickets right from the editor.

Normally, I would have discarded this idea because I don’t know Elisp. However, it quickly hit me: “I can surely ask Claude to write this Emacs module for me”. As it turns out, I could, and within a few minutes I had a barebones module that gave me rudimentary ticket creation and navigation features within Emacs. I didn’t even look at the code, so I continued down the path of refining the module via prompts to fix every bug I found and implement every new idea I had.

By now, works reasonably well and fulfills a real need I had, so I’m pretty happy with the result. If you care to look, the nicest thing you’ll find is a tree-based interactive browser that shows dependencies and offers shortcuts to quickly manipulate tickets. doesn’t offer these features, so these are all implemented in Elisp by parsing the tickets’ front matter and implementing graph building and navigation algorithms. After all, Elisp is a much more powerful language than the shell, so this was easier than modifying itself.

Should you want to try this out, visit jmmv/ticket.el on GitHub for instructions on how to install this plugin and to learn how to use it. I can’t promise it will function on anything but Doom Emacs even if the vibewritten claims that it does, but if it doesn’t, feel free to send a PR.

Alright, so it’s time for those reflections I promised. Well, yes!
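The "parse the front matter, build a dependency graph" idea described above can be sketched outside Elisp too. This Python sketch is illustrative only: the front-matter field names ("id", "deps") are assumptions for demonstration, not Ticket's actual schema.

```python
# Sketch of the kind of logic the article describes ticket.el doing in
# Elisp: parse each ticket's front matter and build a dependency graph.
# The field names ("id", "deps") are illustrative assumptions, not
# necessarily Ticket's real schema.
import re

def parse_front_matter(text):
    """Extract key: value pairs from a '---'-delimited front matter block."""
    m = re.match(r"---\n(.*?)\n---", text, re.DOTALL)
    fields = {}
    if m:
        for line in m.group(1).splitlines():
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return fields

def build_graph(tickets):
    """Map each ticket id to the list of ids it depends on."""
    graph = {}
    for text in tickets:
        fm = parse_front_matter(text)
        graph[fm["id"]] = [d for d in fm.get("deps", "").split() if d]
    return graph

tickets = [
    "---\nid: T1\ndeps: T2 T3\n---\nRewrite parser",
    "---\nid: T2\ndeps:\n---\nDesign AST",
    "---\nid: T3\ndeps: T2\n---\nWrite tests",
]
print(build_graph(tickets))  # {'T1': ['T2', 'T3'], 'T2': [], 'T3': ['T2']}
```

From a graph like this, a tree-based browser only needs a traversal to display dependencies.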
It took more-or-less prodding to convince the AI that certain features it implemented didn’t work, but with little effort in additional prompts, I was able to fix them in minutes. A big part of why the AI failed to come up with fully working solutions upfront was that I did not set up an end-to-end feedback cycle for the agent. If you take the time to do this and tell the AI what exactly it must satisfy before claiming that a task is “done”, it can generally one-shot changes. But I didn’t do that here.

At some point I asked the agent to write unit tests, and it did that, but those seem to be insufficient to catch “real world” Emacs behavior because even if the tests pass, I still find that features are broken when trying to use them. And for the most part, the failures I’ve observed have always been about wiring shortcuts, not about bugs in program logic. I think I’ve only come across one case in which parentheses were unbalanced.

Certainly not. While learning Lisp and Elisp has been in my backlog for years and I’d love to learn more about these languages, I just don’t have the time nor sufficient interest to do so. Furthermore, without those foundations already in place, I would just not have been able to create this at all. AI agents allowed me to prototype this idea trivially, for literal pennies, and now I have something that I can use day to day. It’s quite rewarding in that sense: I’ve scratched my own itch with little effort and without making a big deal out of it.

Nope. Even though I just said that getting the project to work was rewarding, I can’t feel proud about it. I don’t have any connection to what I have made and published, so if it works, great, and if it doesn’t… well, too bad. This is… not a good feeling. I actually enjoy the process of coding probably more than getting to a finished product.
I like paying attention to the details because coding feels like art to me, and there is beauty in navigating the thinking process to find a clean and elegant solution. Unfortunately, AI agents pretty much strip this journey out completely. At the end of the day, I have something that I can use, though I don’t feel it is mine.

Not really, and this supports why people keep bringing up the Jevons paradox. Yes, I did prompt the agent to write this code for me but I did not just wait idly while it was working: I spent the time doing something else, so in a sense my productivity increased because I delivered an extra new thing that I would have not done otherwise. One interesting insight is that I did not require extended blocks of free focus time—which are hard to come by with kids around—to make progress. I could easily prompt the AI in a few minutes of spare time, test out the results, and iterate. In the past, if I ever wanted to get this done, I’d have needed to make the expensive choice of using my little free time on this at the expense of other ideas… but here, the agent did everything for me in the background.

Other than how to better prompt the AI and the sort of failures to routinely expect? No. I’m as clueless as ever about Elisp. If you were to ask me to write a new Emacs module today, I would have to rely on AI to do so again: I wouldn’t be able to tell you how long it might take me to get it done nor whether I would succeed at it. And if the agent got stuck and was unable to implement the idea, I would be lost. This is a very different feeling from other tasks I’ve “mastered”. If you ask me to write a CLI tool or to debug a certain kind of bug, I know I’ll succeed and have a pretty good intuition on how long the task is going to take me. But by working with AI on a new domain… I just don’t, and I don’t see how I could build that intuition. This is uncomfortable and dangerous.
You can try asking the agent to give you an estimate, and it will, but funnily enough the estimate will be in “human time” so it won’t have any meaning. And when you try working on the problem, the agent’s stochastic behavior could lead you to a super-quick win or to a dead end that never converges on a solution.

Of course it is. Regardless, I just don’t care in this specific case. This is a project I started to play with AI and to solve a specific problem I had. The solution works and it works sufficiently well that I just don’t care how it’s done: after all, I’m not going to turn this Emacs module into “my next big thing”. The fact that I put the code as open source on GitHub is because it helps me install this plugin across all machines in which I run Doom Emacs, not because I expect to build a community around it or anything like that. If you care about using the code after reading this text and you are happy with it, that’s great, but that’s just a plus.

I opened the article ranting about Beads’ 300K SLOC codebase, and “bloat” is maybe the biggest concern I have with pure vibecoding. From my limited experience, coding agents tend to take the path of least resistance to adding new features, and most of the time this results in duplicating code left and right. Coding agents rarely think about introducing new abstractions to avoid duplication, or even to move common code into auxiliary functions. They’ll do great if you tell them to make these changes—and profoundly confirm that the refactor is a great idea—but you must look at their changes and think through them to know what to ask. You may not be typing code, but you are still coding in a higher-level sense. But left unattended, you’ll end up with vast amounts of duplication: aka bloat. I fear we are about to see an explosion of slow software like we have never imagined before.
And there is also the cynical take: the more bloat there is in the code, the more context and tokens agents need to understand it, so the more you have to pay their providers to keep up with the project.

And speaking of open source… we must ponder what this sort of coding process means in this context. I’m worried that vibecoding can lead to a new type of abuse of open source that is hard to imagine: yes, yes, training the AI models has already been done by abusing open source, but that’s nothing compared to what might come in terms of taking over existing projects or drowning them with poor contributions. I’m starting to question my preference for BSD-style licenses all along… and this is such an interesting and important topic that I have more to say, but I’m going to save those thoughts for the next article.

Vibecoding has been an interesting experiment. I got exactly what I wanted with almost no effort but it all feels hollow. I’ve traded the joy of building for the speed of prompting, and while the result is useful, it’s still just “slop” to me. I’m glad it works, but I’m worried about what this means for the future of software.

Visit ticket and ticket.el to play with these tools if you are curious or need some sort of lightweight ticket management system for your AI interactions.

Robin Moffatt Yesterday

AI will fuck you up if you’re not on board

AI slop is ruining the internet. Given half a chance AI will delete your inbox or worse (even if you work in Safety and Alignment at Meta):

Nothing humbles you like telling your OpenClaw “confirm before acting” and watching it speedrun deleting your inbox. I couldn’t stop it from my phone. I had to RUN to my Mac mini like I was defusing a bomb.

Stone Tools Yesterday

Lotus 1-2-3 on the PC w/DOS

What would a piece of software have to do today to make you cheer and applaud upon seeing a demo? I don't mean the "I'm attending a keynote and this is expected, please don't glower at me, Mr. Pichai," polite-company type of applause. I mean the "Everything's different now." kind.

For that, the bar is pretty high these days. "Photorealistic" fight scenes between Brad Pitt and Tom Cruise against an apocalyptic cityscape are generated out of nothing but a wish, and social media, smelling the cynical desperation, can offer no more than a clenched-teeth grimace. Within 48 hours the cold light of the epic battle has faded, leaving no residual heat.

A sense of awe was easier to elicit back in the golden era. Bill Atkinson scrubbed out some pixels with an eraser in MacPaint to thunderous applause. Andy Warhol did a flood fill on an image capture of Debbie Harry, leaving an audience enraptured. Perhaps miracles work best when they're minor.

Mitch Kapor has been on the receiving end of the adulation. As CEO of the newly-formed Lotus Corporation, Kapor watched demos of their flagship product 1-2-3 generate significant light and heat with the crowds. In a 2004 interview with the Computer History Museum, Kapor said, "You could with one-click see the graph from your spreadsheet. You could not do that before. That was the killer feature when we demo’d it. I mean, literally, people used to applaud – as hard as it is to believe."

He knew all too well the struggles of the VisiCalc crowd, having previously built VisiPlot and VisiTrend for VisiCorp. Those programs worked with VisiCalc data to draw graphs, but required a lot of disk swapping to move in and out of the various programs when fine-tuning charts and graphs. 48K on the Apple 2 made it essentially impossible to fit all of the software into memory at once, but they could at least put everything onto the same diskette, Kapor reasoned. Eliminating that song and dance would be useful to the customers.
Depicted as a literal song-and-dance in their advertising.

In an interview in Founders at Work, Kapor said, "At various times I raised a number of ideas with the publisher about combining (VisiCalc and VisiPlot onto one disk) and they weren't interested at all. I don't think they really saw me as an equal. They saw me, when I was there as a product manager, as an annoyance—as a marginal person without experience or credentials who was kind of a pest. And I suppose I was kind of a pest."

He said the feeling was mutual, and that was basically it for his employment with Personal Software and the VisiCalc team. He let them buy him out (i.e. the juicy royalties he was receiving for VisiPlot and VisiTrend) for $1.2M, then took that money and went off to build the better mousetrap he had tried to pitch.

Lotus 1-2-3 would quickly become the "killer app" for the nascent IBM-PC, doing for that system what VisiCalc had done earlier for Apple. 1-2-3's success (and corporate in-fighting between Personal Software and VisiCorp) drove VisiCalc sales into the ground almost immediately. Two years later, Lotus would buy out Personal Software. One year later, Lotus would kill VisiCalc. Today, Microsoft Excel documentation still references Lotus 1-2-3, not VisiCalc.

I have no 1-2-3 experience going into this. I always thought "1-2-3" referred to its relationship to numbers. "1, 2, 3. Row numbers. Numbers in a spreadsheet. Mathy number stuff. I get it." I honestly had no idea "1-2-3" indicated something more. I'm learning that VisiCalc walked so 1-2-3 could run (over VisiCalc's ashes in a Sherman tank).

I have one goal in learning Lotus 1-2-3. I want to understand what it did that was so superior to my beloved VisiCalc that it practically wiped them out in the first year of launch. Kapor had projected first year 1-2-3 sales of US$1M, but did US$53M instead. That's not just a little better than VisiCalc, that's "VisiWho?" dominance.
VisiCalc is a spreadsheet and 1-2-3 is a spreadsheet, so what's the big fuss? First, the platform of choice, the IBM-PC running PC-DOS (MS-DOS, to those buying it separately), affords two big wins right off the bat. 80-column text mode makes the Apple 2's 40 columns feel claustrophobic (and perhaps a bit un-business-like?). The greatly expanded memory of the 16-bit PC, max 640K vs. the 8-bit Apple 2's 48K, lets far more complex worksheets fill out those roomy 80 columns.

As Lotus Corporation and magazines and Wikipedia pages and other blogs love to point out, the true game-changer is contained in the program's very name. "1-2-3" refers to the three components of this "integrated software" package. "1" is the spreadsheet capability, which surpassed most contemporaries handily in speed, being written in x86 assembly (until Release 3). "2" is for those graphing tools which had Kapor's audiences applauding. "3" was intended to be a word processor, but according to programmer Jonathan Sachs, "I was a few weeks into working on the word processing part, and I was getting bogged down. That's about when Context MBA came out, and I got a look at what they had done."

"What they had done" was integrate a word processor, communications, and database, along with the spreadsheet and graphics components. Context 1-2-3-4-5, as it were. When Sachs saw the database, that felt to him like a more natural fit and "3" was re-implemented as a database. "It would be a heck of a lot easier to implement," he noted. Woz bless our lazy programmers.

The upshot is 1-2-3 plays nicely with last post's focus, dBase, which feels like a particularly powerful combination. I feel a tingle when skills picked up on a previous exploration pay dividends later. Deluxe Paint + Scala paid off similarly. Is this what it feels like to "level up?"

Obtaining literature on Lotus 1-2-3 is only difficult in the "overchoice" sense.
I expected to find a lot of books, but perhaps not the "What have I gotten myself into?" existential dread of 1,000 hits on archive.org. It wasn't just books; that period had an interesting side phenomenon of "software vendor published enthusiast magazines." Companies like Aldus, Corel and Oracle all had self-titled publications on newsstands.

Lotus Corporation did as well with LOTUS Magazine. Published monthly by Lotus Corporation, it debuted with the May 1985 issue (probably on newsstands late March, early April). The tagline, "Computing for Managers and Professionals," oriented itself toward the decision makers, the ones with purchasing power. A poll of Lotus software users revealed, "Most of you see the computer primarily as a tool and are not interested in computing, per se." Toward that end, the magazine took a different tack than the BYTEs and PC Magazines of the time. It was to be no-nonsense, non-techno-babble, short, easy-to-digest articles about computing from the manager's perspective.

"What's all this I keep hearing about 'floopy disks' and 'rams' and 'memories' and such and so on? It's enough to drive a reasonable business computerist straight to distraction!" says the frazzled corporate executive trope. There there, fret not! LOTUS Magazine feels your pain and addresses it with the cover story of issue 1. "The world of computer memory has enough complexity and high-tech jargon to drive the most reasonable business computerist straight to distraction," leads in to "An Inside Look at Computer Memory" by T.R. Reid. The article explains the differences between RAM and ROM, floppies and hard disks, and so on, unfurrowing the knitted brows of befuddled mid-'80s business executives.

When it got into the 1-2-3 of it all, LOTUS Magazine didn't pull its punches. Articles were short, around four pages, and assumed a higher level of analytical aptitude than IT aptitude.
Lots of charts of formulas, macro definitions with explanations, tips and tricks for faster data entry, and so on fill out the pages. That ran for about seven years, until the December 1992 issue, when publishing duties transferred to PC Magazine as PC Magazine: LOTUS Edition. It was PC Magazine with a mini-magazine's worth of Lotus-specific content appended each month, as a special imprint. That ran until August 1995, marking a 10-year publication run which would have exceeded my prediction by about eight years.

After judging books entirely by their covers, I've chosen the official Lotus manuals for 1.0A, 2.2, and 3.4, and two compilations of tips and tricks previously published in LOTUS Magazine. I flip through other stuff as well, but honestly nothing is holding my attention this time around; they all read the same, "dry and boring." 1,000 pages or more for some of those books and they didn't have room for even one joke? I promise at least seven in this post alone. See if you can spot them all!

Launching into the program proper brings me to the expected "I'm a spreadsheet!" grid layout, with column and row labels, arrow-key controllable cell cursor, and a blank area at the top for VisiCalc-y stuff. Let's go.

As an intermediate-level VisiCalc user, I am delighted my menu muscle memory pays immediate dividends. Clearly Lotus welcomes defectors and even makes life easier on everyone by taking advantage of the 80-column display. VisiCalc's single-letter menu mnemonics are enhanced in 1-2-3 by simply spelling it all out on-screen. Full menu item names are always visible, yet still accessible by single-letter commands. From the jump, 1-2-3 makes a strong case for itself, providing improved usability and discoverable tools.

Before digging in too deeply, I should note that 1-2-3 does all of the VisiCalc things.
A1-style cell references, slash menu, fixed and relative cell references, @ functions including transcendentals, range specifier, prefix for values, and on and on. It adds, it subtracts, it calculates interest. 1-2-3 "Yes, and..."s VisiCalc from there.

We gain a lot, but there is a notable absence: the upper-right status check. VisiCalc shows calculation order, arrow-key toggle, and free memory in that spot. Those are all gone in 1-2-3 and good riddance, frankly. On the PC I have full arrow keys and more RAM than Woz; 1-2-3 sees my full 16MB of DOS Extended memory. There is no stopping me.

1-2-3 also says nuts to VisiCalc's "calculation order" (by row or by column) hoo-hah and introduces "minimal recalculation." From the almost comically straightforwardly named book Lotus 1-2-3, Release 2.3, "When 1-2-3 recalculates a worksheet, only those formulas directly affected by a change in the data are recalculated." I am living large here in 1989, or 1991, or whatever year I'm pretending it is this week.

Even VisiCalc's gets a glow up. You know it today as and , both of which were present in 1-2-3 Release 1 back in 1983. At this rate, 1-2-3 is flirting dangerously close to "expected spreadsheet behavior in 2026." Don't get my hopes up, Lotus. There's only down from there.

The more I encounter this, the more I wonder if we gave up on it too soon. This could be "blogger overly immersed in their subject matter" brain, but I'm growing to oftentimes prefer two-line horizontal menus over modern GUI menus. I find the left-right, up-down, left-right, up-down, scanning through GUI menus kind of tiring. With the two-line menu, I can step through top-level options with the left/right arrow keys, eyes focused on line two as I scan sub-menu items. It also provides something GUI menus don't: an immediate explanation of a menu item before committing its action to the document. If a menu item is not a sub-menu, line two describes it. It's easy to audit features in an unknown program.
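The "minimal recalculation" idea quoted from the Release 2.3 manual above boils down to dependency tracking. A toy Python sketch of the principle (the cell names and dependency table are made up for illustration):

```python
# Toy illustration of "minimal recalculation": after a cell changes,
# only formulas that (transitively) depend on it are recomputed,
# instead of re-sweeping the whole sheet by row or by column.
deps = {            # formula cell -> set of cells it reads
    "B1": {"A1"},
    "C1": {"B1"},
    "D1": {"A2"},   # unrelated to A1
}

def affected(changed, deps):
    """Return the set of formula cells needing recalculation."""
    dirty, frontier = set(), {changed}
    while frontier:
        cell = frontier.pop()
        for formula, reads in deps.items():
            if cell in reads and formula not in dirty:
                dirty.add(formula)
                frontier.add(formula)
    return dirty

print(sorted(affected("A1", deps)))  # ['B1', 'C1'] -- D1 is left alone
```

Changing A1 dirties B1 directly and C1 transitively; D1 never needs a recalc, which is exactly the win over VisiCalc's fixed sweep order.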
Also, every menu item has a keyboard shortcut; just type the first letter. This requires creativity by the developer when naming menu items such that each has a unique first letter, but it also creates a de-facto mnemonic for the user. Don't discount muscle memory!

There's one "drawback," but I'll try to make a case for it. Specifically, it is probably impossible to fit everything in a modern GUI menu into a two-line scheme. There's just too much! I suggest the horizontal menu-bar solves this precisely because of that design constraint. If there's too much, the menu needs to be simplified. "Problem solved," the author asserted.

This has to be one of 1-2-3's greatest contributions to modern spreadsheets. It still exists, just open up your modern spreadsheet of choice and try it. Enter 1 through 5 down the A column. Starting with B2, enter the formula and copy it down a few rows. Old hands know that a symbol in a cell reference fixes that row or column of the reference, otherwise references are relative. That's a huge step up from VisiCalc's "all or nothing" approach to cell references. Put in a formula and copy it through to other cells. For every cell reference, in every copy of the formula, VisiCalc prompts the user for "relative or fixed?" It is a complete drag, and Woz help you the day that formula needs updating.

The approach is superior, allowing us to embed relativity into the formula itself. Then, copying a formula across cells copies our intent as a natural course. It's simple to understand and hard to mess up: my favorite combination.

While it can't load non-1-2-3 documents natively, Lotus does provide a nice translation tool for helping us get data out of the heavy hitters of the day. From a Stone Tools perspective, this handles everything I need so far, as VisiCalc and dBase are both accounted for and work as advertised.
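The fixed-vs-relative reference behavior described above — the "$" convention 1-2-3 popularized — is mechanical enough to sketch. A simplified Python model (single-letter columns only, purely illustrative):

```python
# How copying a formula adjusts references: parts marked with "$" stay
# pinned, unmarked parts shift with the copy. Simplified sketch: columns
# A-Z only, for illustration.
import re

def shift_ref(ref, d_col, d_row):
    """Shift one reference like 'A1', '$A1', 'A$1', or '$A$1'."""
    col_abs, col, row_abs, row = re.match(
        r"(\$?)([A-Z])(\$?)(\d+)", ref).groups()
    if not col_abs:
        col = chr(ord(col) + d_col)      # relative column: slides
    if not row_abs:
        row = str(int(row) + d_row)      # relative row: slides
    return col_abs + col + row_abs + row

# Copying =$A1*B1 down one row: the $A column stays pinned,
# every relative row number slides.
formula = "$A1*B1"
copied = re.sub(r"\$?[A-Z]\$?\d+",
                lambda m: shift_ref(m.group(0), 0, 1), formula)
print(copied)  # $A2*B2
```

One pass over the formula captures the user's intent for every future copy, with no "relative or fixed?" prompt per reference as in VisiCalc.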
Translation works both ways, so bringing in dBase data, messing around with it in 1-2-3, and going back out to dBase is possible, though there are cautions in doing so. One notable thing to watch out for is "deleted" records. dBase only "marks for deletion" (until a .PACK command), and that flag won't survive transit. A small inconvenience, all things considered.

In the top-level menu is the shiny new option, the "2" in "1-2-3." I know exactly what I want: a pie chart of game software genres imported from dBase II.

The options for are straightforward, and the limitations are self-evident. Notably, look at the "Ranges" settings. Range sets value labels which will appear along the X-axis. Ranges through define six, and only six, ranges of data to plot on the graph. That's it. Everything else you see is "make it pretty."

Within the confines of my self-imposed time capsule, my only point of reference thus far is VisiCalc and its clones. Through that lens, I'm blown away by Lotus 1-2-3. I mean, come on, 3-D bar charts?! Am I living in the world of TRON right now?! The applause is well-earned, Mitch. Bravo! Encore, even!

Now, Mr. Kapor, if you'll excuse me a moment, I need to have a quick, private chat with my readers. Yes, sorry, I'll only be a moment. Hello dear readers. Mitch can't hear us, yeah? We're safe? OK, between you and me, that graphing tool is a little underwhelming, huh? There's a lot we can do to make a graph look as pretty as possible for screens and printers of the time, but the core graphing options themselves are kind of anemic.

Here's Google Sheets making the pie chart I'd hoped 1-2-3 could generate. However, 1-2-3 cannot do this because it can only graph strict numeric values; strings, like "genre" types, return blank charts. 1-2-3 also can't coalesce data, like we see Sheets doing above. To achieve my goal, I'll need to figure out a different approach. (Plus, maybe I've discovered a DOSBox-X bug?)
It's not fair to judge past tools as being "inferior" just because they don't live up to 2026 standards. Still, what I'm trying to do must have been one of the first things many business owners wanted to do, right? Am I storing my data in a style that hadn't been popularized yet? Is my 2026 brain making life more difficult for my 1991 doppelgänger unnecessarily? How does one graph out the count of each unique genre?

Alright, this is going to get complicated, so I think a diagram is in order. This actually explains a lot about the Lotus 1-2-3 approach to data in general, how to manipulate it, how to query it, and generally how to interface with the more complex functions of the program. Having imported the dBase list of CP/M games from the dBase article, let's extract a list of all titles that are of genre "Simulation." I'll use a subset of the total data so everything fits on screen for demonstration purposes and perform (aka , aka The Notorious DQU, aka Query's L'il Helper)

A worksheet is not just rows and columns of data. It also serves as a control mechanism for defining interactions with the data. A worksheet has columns up to IV (256) and rows up to 8192. What do we do with 2,000,000+ cells? In true Dwarf Fortress fashion, we section off areas ("ranges" in 1-2-3 speak) and designate functions to those areas.

First, I have my data as the main table, field names at top. Then, I need to set up my query criteria. This is a separate portion of the worksheet, with the fields I want to query against and room below to accept the criteria definition. Think of it like building a little query request form. Then, Lotus needs a place to spit out the results. Again, I set up a little "form" to receive the data. Put in whichever field names are of interest in the final data capture.

Now, what if there are multiple queries I want to re-use from time to time? Painful as it sounds, I must set up multiple query forms, one for each query I expect to re-use.
So, re-copy all of the field headers of interest into a new portion of the worksheet. Re-copy the field headers for the output range. Put in the new query criteria. Do another extraction. Keep dividing the worksheet up into all of the various queries one might need to reuse. Each lives in its own little area of the worksheet, so maybe now's a good time to start labeling things? Maybe mentally divide the worksheet into "my queries live over here, in Q-Town" and "my results live over there, in Resultsville" and so on.

For my stated goal, I need the unique list of genres for my game list and the count of each genre within the data set. From the previous section, I know how to extract a list of unique genres. To count them, can count all non-empty records which match my criteria. Lemme draw up another diagram here.

After extracting the list of unique values for "Genre", I get a column of results as seen at in the image above. Notice the criteria at is empty? By not specifying anything, that equates to matching any "Genre". Next, I need to reformat that column into countable criteria for . Just like in a query, criteria consists of two vertically contiguous cells, the top of which is the field name and the bottom holds the parameter. The field name must be physically, immediately above each and every genre I want to count.

will transpose a range of vertical or horizontal cells into their mirror universe opposite. That's how I generated the horizontal list at . A of the field name across row 15 generated nice pairings, perfect for use with . The cell formula outlined in yellow is essentially the same across , each lightly modified to point to a different criteria range. That calculates the count for each genre in column , and column holds my titles.

Now I have what I need to generate the chart I wanted (aforementioned pie chart drawing bug notwithstanding). Here it is in glorious 3-D from the future (of the past)!
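Stripped of the worksheet geography, the criteria-range dance above is just "extract the unique genres, then count matching records per genre." A Python sketch of the same two steps (the field names follow the article's game list; the records themselves are invented for illustration):

```python
# The query/criteria workflow in modern terms: extract unique genres,
# then count matching records per genre -- what the worksheet does with
# an extract range plus one criteria pair per genre. Records are made up.
records = [
    {"title": "Zork I",     "genre": "Adventure"},
    {"title": "Sargon",     "genre": "Strategy"},
    {"title": "Adventure",  "genre": "Adventure"},
    {"title": "Flight Sim", "genre": "Simulation"},
]

# "Extract unique" step: one entry per distinct genre.
genres = sorted({r["genre"] for r in records})

# "Count per criteria" step: one count per genre, ready to graph.
counts = {g: sum(1 for r in records if r["genre"] == g) for g in genres}
print(counts)  # {'Adventure': 2, 'Simulation': 1, 'Strategy': 1}
```

Two comprehensions replace a day of criteria ranges, transposes, and per-genre count formulas; the output pairs are exactly what a pie chart wants.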
Frustratingly, figuring all of that out took the better part of a day. But now I know! If only there were some way to make it easier. There are issues with my solution thus far, many of which boil down to the physical spaces assigned to hold queries and results and transformations and data. If I bring in new data with new genres, new result lists could physically lengthen and overlap one another. Planning a physical map for the worksheet is a priority. Building out the sheet, especially keeping cell references flexible to changes in data, is a drag. I'd also like to generate a graph from the new sheet arrangement, with just a simple hot-key. Like all great developers, I want to be lazy.

The first step toward the promised land of laziness is "hard work," unfortunately. Hard work can be captured and reused, luckily, as Lotus 1-2-3 features "Friend of the Blog": macros. VisiCalc didn't have it, and 1-2-3's implementation is robust enough that many books were devoted to understanding and taming it. Here's a simple macro, which hints at its latent power.

Custom menus are easy to build. Selecting an option could trigger a longer automation task, simplifying a multi-step process, or something as simple as a help menu. Macros are stored... (say it with me now) ...in the worksheet. Yep, whatever map you had in mind for dividing up the worksheet into query-related fiefdoms, redistrict once more to hold macro definitions.

Custom menus are an easy way to illustrate macro structure. Here's a dumb example. The text in column A is mostly comments to organize our worksheet and thoughts. represents the keyboard shortcut assigned to the macro, accessed by . is a reference to a named cell range. Named ranges are an important improvement over VisiCalc. Once defined, a range can be invoked by name anywhere a range is expected. Assuming a cell range as has been assigned a name like , is totally valid. is a range defined as . is a range defined as .
Notice a range only needs to define the start of a macro definition. Macro execution will read each cell in order down a given column until the first empty cell. range names are interpreted by 1-2-3 as macro keyboard shortcuts automatically. The convention shown, of a human-readable label to the immediate left of a range by the same name, is so common it has its own menu shortcut. applied to column A will auto-assign column B cells to the names in A.

To a certain extent, a named range can function like a programming "goto". In the macro case, it's saying "Goto the range named and continue executing the macro from there." Programmers in the readership are salivating at the deviously complex ways this "goto labeling" could be abused. Combine it with decision making through and iteration through and the possibility space opens wide.

After doing dBase work last post, I noted that I had accidentally become a dBase developer without even trying; the dBase scripting language was precisely equivalent to the commands issued at the dot prompt. I'm not so lucky with 1-2-3.

Setting up a macro which issues a simple string of commands is easy enough, and reads (mostly) like how I'd type it at the menu, akin to Bank Street Writer's approach to macros. For example, will issue to bring up the slash menu, access the (W)orksheet menu, then the (C)olumn sub-menu, and finally (H)ide a column. ~ issues "enter", which at this point in the menu navigation will commit the prompt default, i.e. the current position of the cursor. Just like that, hiding the current column just became a single keystroke.

There is also a menu tool which is "record every keystroke I do from now." That recording will be output into the worksheet. Apply a range name to that and it transforms into a macro. Very nice! That said, 1-2-3 macros go from zero to 100 pretty quickly and are visually difficult to parse and reason out.
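The execution rule described above — read cells down a column until the first empty one, with goto-style jumps to other named ranges — can be modeled in a few lines. A toy Python sketch; the range names, cell contents, and "goto" spelling are invented for illustration and are not real 1-2-3 macro syntax:

```python
# Toy model of the macro execution rule: cells are read top to bottom
# until the first empty cell, and a goto-style jump sends execution to
# another named range. Range names and contents are made up.
ranges = {
    "\\H": ["step one", "goto FINISH", "never reached"],
    "FINISH": ["step two", ""],   # empty cell ends execution
}

def run(name, ranges, executed=None):
    executed = [] if executed is None else executed
    for cell in ranges[name]:
        if cell == "":                    # first empty cell: stop
            return executed
        if cell.startswith("goto "):      # jump to another named range
            return run(cell[5:], ranges, executed)
        executed.append(cell)             # "execute" the keystrokes
    return executed

print(run("\\H", ranges))  # ['step one', 'step two']
```

Note how "never reached" is skipped: once execution branches away, it never returns, which is exactly what makes goto-labeling so deviously abusable.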
One must be super-duper intimately familiar with every command in the slash menu, plus the macro-specific vocabulary. Lotus understood things could get hairy pretty quickly and added a debugging tool to help make sense of things. enters mode, which executes macros one line at a time. The status bar at the bottom of the screen explains what is being run, so when something goes wrong I know who to blame.

OK, are you ready to dig in and implement macros which simplify the queries and procedure discussed earlier? <cracking knuckles> Well, I'm not. <uncracks knuckles back to stiffness> The macro system has proven too complicated to feel any sense of control or mastery beyond Baby's First Macro™. With a couple more weeks' study I think I could achieve my goal. Unfortunately, for this post, I am defeated.

The "3" in "1-2-3": 1-2-3 can function as a database. A very simple, limited, one-row-equals-one-record, 8192 record max, 256 field max, flat database. Let's be honest, oftentimes that's more than enough. I showed examples of querying earlier, and that's as fancy as it gets for this. We can sort records ascending/descending by up to two keys, find and replace values, find records which match a search query, and extract those records into another area of the spreadsheet. And nothing else (at least for Releases 2.x).

Sorting dBase II data by genre.

It may seem I'm giving this aspect of the program short shrift, but so did Lotus. In their own manual for Release 2.2, macros have 300 pages devoted to them. Database functionality has 50, and the first 20 of those are instructions for typing in dummy data. Sorting, querying, finding, and extracting, the meat and potatoes of database-ing, warrant a mere 20 pages total. It's a useful feature and I'm glad it's here. It's enough to handle most of my meager needs. Beyond that, there's not much to say, except to note its legacy.
It was an obvious idea to anyone who touched VisiCalc for more than five minutes, so its development feels inevitable. Do some database work in Excel tonight and light a candle for 1-2-3.

A very nice feature of 1-2-3 that fits right in with its "integrated" approach is what we would call today "plug-ins" or "extensions," but which Lotus calls "add-ins." 1-2-3 shipped with a few. For example, one expanded macros by letting them live in-memory, for use across worksheets. Normally the only macros accessible to a worksheet are those defined within itself. Man, VisiCalc is just getting lapped by 1-2-3's ingenuity, huh?

According to a PC Magazine article about the state of add-ins, many business-people lived inside 1-2-3 all day long and wanted to do everything from within its confines. The 3rd-party add-in after-market happily commodified those desires. In addition to obvious ideas, like automated save/backup utilities, or industry-specific analysis tools, add-ins could mold 1-2-3 into almost anything. Complete word processors, entire graphic subsystem replacements for complicated graphing needs, expert system logic, and non-linear function solvers were injected into the program. Oracle offered a way to connect to their external SQL databases from within the snug confines of 1-2-3's security blanket.

The Lotus approach, being a product of lower-memory days, is both annoying and useful. Add-ins can be, though are not by default, loaded at app startup. Add-ins must be "activated" one-by-one to gain access to their extended powers, or "deactivated" to make room for other add-ins or a larger worksheet. I have enough memory, so I'm not in trouble here, though I'm sure it's easy to imagine that on a 512K system manual memory management was a real thing.

Between macros and add-ins, 1-2-3 becomes an ecosystem unto itself, like dBase or HyperCard. One thing I don't like about Lotus's approach is how it can bifurcate the user experience.
That's seen clearly with their own WYSIWYG add-in. With Release 2.3, Lotus included this add-in to help a world transitioning from textual interfaces into the flash and sizzle of OS/2, Windows, and Mac GUI interfaces. It's DOS for the GUI-envious and, frankly, I'm cold on it. It's not integrated elegantly, feels sluggish, and makes the program more difficult to use.

Activating WYSIWYG switches the application from terminal mode to graphics mode, so already as a DOSBox-X user I'm annoyed at losing my lovely TrueType text. That's not Lotus's fault, but a blogger's gotta have his standards. The big usability problem is how the functionality of the program now splits in two. The menu works as before, but we also have a new menu for all things WYSIWYG. So, when you want to use a menu command, you must remember which menu holds that command. Many options appear at first blush to be the same as their counterparts, but they control WYSIWYG-specific parameters of those functions. Usually.

That's not to say the add-in isn't useful for cell styling, or placing graphs into a worksheet directly. Making documents look nice is important after all. The boss needs to be impressed with those Q3 projection charts, even when they forecast doom. Especially then, probably! Release 3 embraced WYSIWYG as its main and only interface, no add-in required, which is probably why I keep gravitating to the 2.x releases. I'd chalk it up to being a stubborn old man, but the recent embrace of TUI interfaces by the Hacker News crowd seems to have me in good company.

I'm writing this part on February 22. Two days prior, a project called "Pi for Excel: AI sidebar add-in for Excel" released and got good traction on Hacker News. As I noted in the XPER column, our current "AI" boom is the biggest, but not the first.
English language interactions, first by keyboard and fingers-crossed-one-day-by-voice-if-AI-technology-continues-along-our-projected-path-of-wishes-and-dreams, were available as add-ins to various programs. Databases in particular were a notable target for those experiments. Consider how English-like dBase's user interface is, and it doesn't take a huge leap to understand why developers felt something closer to true English was within reach. Symantec's Q&A had its natural language "Intelligent Assistant" built right in. R:BASE tried it with their CLOUT add-in, promising a user could query, "Which warehouses shipped more red and green argyle socks than planned?" The spreadsheet Silk promised built-in English language control over its tools.

Like those self-published magazines at the start of this article, Lotus didn't want to miss out on this English parser party either. (For this exploration I must drop down into R2.01.) Released for US$150 in late 1986, HAL is a memory-resident wrapper to 1-2-3. We launch HAL directly, which in turn launches 1-2-3. Its advertising explains the gimmick well enough. "Lotus HAL gives you the ability to perform 1-2-3 tasks using simple English phrases."

What I've seen in my early time with it can honestly feel kind of magical. Look at how easily it generates monthly column headers.

That's pretty slick, I can't deny it. Similarly tedious actions are promised to be eased greatly by "requesting" HAL to do the heavy lifting. Here, I'm stepping through a quick tutorial to have HAL build an entire spreadsheet. I never touch the formula; I only describe it by intent.

HAL only recognizes the first three letters of anything. "Name" and "Names" and "Namaste" are all the same to well-meaning, but a bit dimwitted, HAL. As is the case for all such English-like languages for the time, it's English only within a generous definition of the word.
Ultimately, we're learning to speak 1-2-3's specific dialect and vocabulary. PC Magazine's February 1987 HAL review was the cover story: "HAL comes with a 250-page manual. It is as important to read this manual as it is to read the 1-2-3 manual. All the commands are described as rigidly as the syntax of any command-line interface." That it takes a 250-page manual to explain how to speak "English" with HAL perhaps makes an argument against its own existence?

The base 640K of DOS must hold both programs in memory at the same time, so this is a nice piece of corroborating history for those who think software today is too bloated. An industry-defining spreadsheet with graphing and database capabilities close to modern expectations, an online help system, plus a natural language interface, all run together in less than 1MB of RAM. There's the retro-computing dopamine hit I've been hoping for!

HAL doesn't just provide an English-language interface to 1-2-3's native tools, it brings its own unique toys to the Release 2.01 sandbox. I do need to emphasize the release version here, because some of these tools were later worked into the product proper over time. That said, HAL worked hard to be your friend.

Even though HAL controls 1-2-3, interfacing with it still feels bolted on. brings up the HAL dialog box, which isn't hard to remember, but never feels natural. Even after setting the HAL request dialog to remain on screen, it feels tenuous. Sometimes it toggles off after navigating a menu option, or the request box will intercept commands I wanted to do through the normal slash menu. It's in the way more than I expected, and I couldn't find a balance between "when I want it" and "when I don't."

PC Magazine also felt that HAL is a bit of a kludge. Charles Petzold wrote in his review, "Is HAL really a natural-language interface for 1-2-3? Is it useful? Will it revolutionize the computer industry? Are menus dead? My answers are: Not really. Often.
Give me a break. No way."

This is all academic, because Lotus killed HAL. It has been difficult to find sales figures, though in a Raymond Chen post we catch a glimpse of the Softsel Hot List for December 1986. HAL hit the top 10 (along with other, future blog subjects), moving up the charts over the previous three weeks. On the other hand, it was only available for Releases 1A through 2.01, the pre-WYSIWYG releases, and never returned.

Earlier I poked at macros, hoping to make charting "count by genre" easier, and failed. Then I got to ponderin' if HAL might be able to do it for me. Shockingly, HAL can, through its special vocabulary word "tabulate." It makes those previously complex actions, the ones I diagrammed earlier, so simple to perform I don't really need a macro (though I could make one). Check out this 80's magic.

We are supposed to be able to execute HAL requests via to have the system output the 1-2-3 commands HAL puts together to get the job done. It's a peek inside HAL's brain, basically. If I watch HAL think, maybe it can teach me a better way to do all of the busywork I slogged through earlier?

In 1962's Diffusion of Innovations, author Everett Rogers described five characteristics individuals consider when adopting new solutions to existing problems. If VisiCalc was the "existing problem," how well did Lotus 1-2-3 make its case as the "new solution?"

In the VisiCalc post I talked about how much of its DNA is seen in modern spreadsheets. I see now that an equal case can be made for Lotus 1-2-3. I'd phrase it as VisiCalc contributed the "look," and 1-2-3 contributed the "feel" we've come to expect. Where VisiCalc was life-changing for number crunchers, 1-2-3 positioned itself as an engine for business and executed that vision almost perfectly. Having gotten to know 1-2-3 over the past weeks, I can now say, "I get it." I see what the fuss was about and, truth be told, I'm a convert. Sorry, VisiCalc, you know I love you!
But the next time I reach for a spreadsheet, I'm reaching for 1-2-3.

Ways to improve the experience, notable deficiencies, workarounds, and notes about incorporating the software into modern workflows (if possible). Obviously, it depends on what you're trying to do. For business work, it doesn't play well in groups unless you're the CEO and can dictate, "OK people, we're all switching to DOS now." For personal projects, it meets many common needs and doesn't feel too much like compromise, aside from the graphing. Heck, the DOS version supports mouse control, and you can always turn on WYSIWYG mode to approximate modernity.

We're also in luck with Y2K compatibility. Even Release 1.0 supports dates up to the year 2099. Let's take a moment of silent appreciation for yet another 1-2-3 foresight which keeps its spirit alive and kicking here in the 21st century.

DOSBox-X 2026.01.02, Windows x64 build. I updated from the 2025.12 build mid-investigation. CPU set to 286. DOS reports as v6.22. Windows folder mounted as drive C:\ holds multiple Lotus installations. 2x (forced) scaling; 80 columns x 25 lines. I flipped back and forth with TrueType text mode (this is moot for 1-2-3's WYSIWYG mode).

Lotus 1-2-3 Releases 2.01, 2.2, 2.3, 2.4, and 3.4 all get exercised to some extent; you'll see that reflected in the screenshots. I mostly gravitate toward R2.3; it does what I need without bogging me down in feature creep. "Sharpening the Stone" explains getting DOSBox-X to work with R3.x. dBase III Plus for compatibility testing with 1-2-3.

Undoing your last action. It's almost worth installing HAL just for this, though it is a little dangerous that is the keyboard shortcut. Entering a sequential list of days, months, letters, or numbers automatically, though I wonder if macros could duplicate this to a certain degree. Linking a cell in one worksheet to data in another. Release 2.3 has this. Referring to columns and rows by name is a very neat trick.
In fact, it's so neat I'm going to ask you to remember this fact for a later article. Just keep it tucked away in the part of your mind devoted to spreadsheet history, as we all have. The cell-row-bellum, I think it's called? (I refuse to apologize.)

Worksheet "auditing" can identify cell relationships/dependencies, or list out all formulas in use by a table in natural English. Auditing would become an add-in in later 2.x releases. Find and replace; change all instances of a product name, for example. Macros can mix HAL English with native 1-2-3 macro commands.

"Relative advantage is the degree to which an innovation is perceived as better than the idea it supersedes." 1-2-3 received applause for one-button graphing. Check.

"Compatibility is the degree to which an innovation is perceived as being consistent with...past experiences, and needs of potential adopters." 1-2-3 shipped with a VisiCalc translation tool and its interface is clearly built to make VisiCalc users comfortable. Check.

"Complexity is the degree to which an innovation is perceived as difficult to understand and use." 1-2-3 was initially praised for the simplicity with which a user could get up to speed. Its adoption of high-level VisiCalc concepts, like the slash menu, @ functions, and A1 cell references, helped. Check.

"Trialability is the degree to which an innovation may be experimented with on a limited basis." Trial disks for software during the 80's and 90's weren't so prevalent; there was a lot of "blind faith" in software purchasing. I can't find any widespread cases of 1-2-3 demo disks circulating. No check.

"Observability is the degree to which the results of an innovation are visible to others." If the live demos, prevalent advertising, and magazine write-ups didn't convince you, 1-2-3 made it clear in the product name itself that you're getting 3x what VisiCalc delivers. Check.

As with ThinkTank, DOSBox-X provided a simple, pain-free experience to get Lotus running.
Multi-disk installs are handled well, but could be improved. Specifically, the "Swap Disk" option when loading up a stack of disks into the A: drive could use a selector and/or indicator of which disk is currently loaded. in autoexec.bat to auto-mount at launch. Revision 3.4 would not run until I explicitly set in DOSBox-X.

I noted the pie graph bug in Release 2.x. I suspect, but cannot prove, that some x86 assembly call is being mangled by DOSBox-X. 86Box, which strives to be as pedantically accurate a simulation of real-world hardware as possible, does not exhibit this issue. However, setting up 86Box comes with a whole day of learning about the parts and pieces of assembling one's own raw DOS system from virtual components, installing from diskettes, and all of the old-school troubleshooting that entails. It's a commitment, is what I'm saying.

I found that DOSBox-X would run the for Release 2.2, but failed to run it for Releases 2.3 and 2.4. can launch and run without issue. is a front-end utility to launch auxiliary programs like GraphPrint.

If you're mounting a system folder as a "hard drive" in DOSBox-X, it is trivial to extract your data files. The Lotus utility "Translate" is handy for moving data between formats. I found that native .wk1 files open in LibreOffice, as-is. From there, you have any number of modern exporting options, though you might find some quirks from time to time. Check your formulas, just in case!

I'd recommend checking out Tavis Ormandy's site. He's smarter than me and performs magic I didn't think possible, like pulling live stock data as JSON into 1-2-3. He also got the Unix build to work natively in Linux.
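For anyone replicating the auto-mount setup, a minimal sketch of the [autoexec] section of a DOSBox-X config looks like this; the host path is my own example, not the author's:

```ini
[autoexec]
# mount a host folder as DOS drive C: and switch to it at startup
MOUNT C C:\retro\lotus
C:
```

With the folder mounted this way, your worksheet files live on the host filesystem, which is what makes extracting data files trivial.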


Eric Schwarz

This week on the People and Blogs series we have an interview with Eric Schwarz, whose blog can be found at schwarztech.net. Tired of RSS? Read this in your browser or sign up for the newsletter. People and Blogs is supported by the "One a Month" club members. If you enjoy P&B, consider becoming one for as little as 1 dollar a month.

Hi! I'm Eric Schwarz and my online "home" has been SchwarzTech. I grew up in Indiana in the United States and had a knack for anything involving computers from a young age. Although my first computer was a very-old Radio Shack TRS-80, I quickly shifted to an Apple IIgs and later playing with various used Macs. I really appreciated the intentional, but flawed aspects of Apple's products in the late-1980s and early 1990s.

Despite my technology background, I went to college to work in media, especially audio/video production, but between the devaluation of a lot of creative jobs and the 2008 financial crisis/recession, I stuck around for more schooling, getting a graduate degree in Information & Communication Sciences, basically a mix of information technology, telecom, and a bit of business. From there, I ended up working in higher education, moving through different roles in an IT department at a small college, the bulk of which involved network engineering. A couple of years ago, my now-fiancée and I uprooted for her work and I'm at a different university, still doing a variety of IT things. I really enjoy working on a small team because it means you get to do a little bit of everything!

I've found that it's really nice to balance the structured, break/fix things from my day job with creative pursuits and projects outside of work. Like many that have been interviewed here, I dabble in photography, have done some various audio and video projects, and seem to be my friends' go-to for graphic design-related things. Other than those, I appreciate a good TV show or movie, maybe satisfying my college-self a little bit.
I've gotten into following the National Women's Soccer League (NWSL) as well as some of the minor-league sports that are in our city. I love trying new foods and visiting new places (as cliché as that sounds), just because there's so much of the world to explore and experience—I think that makes one a more well-rounded, empathetic person.

I don't quite remember the origin story for the name other than that it was going to be the name for my software "business" (remember, I was a kid!) when I was writing software on the TRS-80. None of that really lasted and I reused the name when I created a personal site on GeoCities. In the late 1990s, the Internet was a weird patchwork of personal sites, academic resources, and still rough-around-the-edges corporate sites. I think we were all learning what this could be used for as we went along and I was no exception. Initially, it was a landing page of sorts when I was writing about tech elsewhere, including Low End Mac and the long-defunct MacWeekly. Eventually, getting a new iBook G3 and wanting to expand my topics led me to turning my site into a blog. I think that second generation of the site was my attempt to compete with some of the larger players at the time, mixing in product reviews, longform opinion articles, news stories, and even a few guest writers. At that time, my family still had a big analog C-Band satellite dish at home and I was able to tune in to the live feeds of the Macworld Expo keynotes, so I could "live blog" those from afar, too. iLounge, MacOpinion, Think Secret, and TUAW were some of the sites I looked up to. By the time I was in college, it was a lot to balance courses, a campus job, somewhat of a social life, and the site scaled back a little, but was still very much a fun hobby of mine.
Like many other bloggers, my site's third-generation morphed into more of a format similar to John Gruber's Daring Fireball : longform articles mixed with linked-out items that have a couple of paragraphs of commentary (I call them "Snippets.") I liked the format, as it allowed me to share things I found interesting or worth talking about. However, I found that in recent years so much of the tech industry has started to feel like a parody of itself. I felt like I had to cover stories because of their importance, rather than because I wanted to. After realizing that, I've started to shift my content a bit and my goal is to get back to content that celebrates my relationship with technology and even things that can be more lasting. That might be leading to a "fourth-generation" of the site. As I touched on a little earlier, I think my creative process got a bit hijacked by so much bad news around "Big Tech"—while I've tried to avoid my site becoming a cheerleader for Apple, that's the corner of the tech world that I've lived in for the past 30+ years (if you count the Macs and Apple IIs I used in school before I had my own.) Inspiration and sources come from a variety of areas: other blogs and things in my RSS reader, links on social media, tech stories from the larger media outlets. I think for Snippets, it's something that I feel is important to share or that I have strong feelings for. Those are often a bit more off-the-cuff and get a quick proofread before publishing. If it's something longer-form, I'll take some time, edit as I go, maybe have someone look over portions if something isn't quite working for me, and then publish. In terms of research, I try to link to outside sources that can provide additional context, older posts of my own that can add some historical context, while still maintaining and assuming that most of my readers have an above-average grasp on a lot of the topics. It's a bit of writing-for-me and I hope others will join me on the ride. 
While I'd love to say that I have a certain ritualistic place that I write, the truth is that sometimes it's just wherever I am. I don't love writing from my phone, but sometimes due to travel or between things at work, I might hammer out a quick post. I do think that I've gotten my home-office to be a comfortable place to sit down and focus on writing, with cozy lighting and everything set up. When I was working at my last job, I'd often grab a laptop or iPad and work from a nearby coffee shop—I think getting out of my then-apartment and having a more intentional time for writing with fewer distractions helped. Since moving, I haven't done that as much. If I think of some of my favorite "let's go write" moments, it's often on a moody, rainy day where there's some ambient noise from outside while I work. I have found that taking a break and letting something sit for a day or two has been a more important thing than location. Trying to force oneself to write when your head and heart aren't in it just doesn't seem to work for me. I set up my site on WordPress about twenty years ago when I outgrew server-side includes. It took a little while to wrestle the templates to work like my previously-carefully-crafted stylesheets. In some ways WordPress has gotten really bloated for my needs, but it works well enough and I have yet to find something to easily replace it with all the random things I've bolted onto my theme over the years. I'm in the process of re-evaluating some of my services, but right now I'm using IONOS (formerly 1&1) for hosting, which I had originally started with when they set up shop in the United States. My domains are with Hover at the moment. As for what I use to create my site, I'm currently using a Mac mini (M4), iPad mini (A17 Pro), and iPhone 15. On the Mac, BBEdit or directly on the web are where I'll do my writing. On the iOS side, I do a lot of writing in iA Writer. 
I'm still using Panic's Coda and Code Editor (formerly Diet Coda) for a lot of file management/coding. Considering how long both have been discontinued, finding suitable replacements for both at my desk and mobile is on my to-do list.

Other than the name being sometimes hard to spell, I don't think I'd necessarily pick something else. The beauty of it is that I'm not necessarily tied down to Apple/Mac-specific content and I can adapt it over time. I think of how many sites were Mac-something or iPod-something and then had to abruptly (and sometimes awkwardly) rename to fit the changing scope of content. I think for a CMS, I might want something a bit "lighter," but WordPress has allowed me to adapt the site for my changing content numerous times.

I find it to be relatively inexpensive to run the site with hosting running me about US$100/year and then US$20/domain on average. I make some of that back with the single ad through the Carbon network, but I don't necessarily want to have more ads than that. Since it's a hobby for me, I'm not looking to make a lot of money, but I understand for folks who want or need to and don't begrudge that. I've toyed with the idea of letting people support the site, but I'm also not sure if it's worth the trouble.

To try to avoid repeating anyone who has already been interviewed, I went through my RSS feeds to find a few that I immediately skip to when I see a new post: Brent Simmons is behind NetNewsWire and I started following his writing soon after I discovered NetNewsWire years ago, and got to follow the story of how that piece of software changed hands numerous times. Stephen Hackett is someone whose content and knowledge I can really relate to, so it's interesting to see his take on a lot of tech. Matthew Haughey covers a lot of different topics, but manages to craft a post that is always so damn fascinating.
Mike Davidson doesn't blog as much these days, but he was another person whose work I followed way back in the mid-2000s and looked up to when I was interested in the convergence of traditional media and the Web. Jedda, Keenan, Lou Plummer, Nick Heer, Riccardo Mori, and Louie Mantia were already in the series, but I always enjoy when something new comes along from them, too. I have a few odds and ends that I wasn't quite sure where to fit elsewhere. First, I wanted to mention my side-project, The Chaos League , a blog that followed a similar format as SchwarzTech, but focused on the NWSL. This was a fantastic distraction coming out of the pandemic as it gave me an outlet that wasn't tech. Unfortunately, in the last few years, coverage from large media outlets and the public's appetite for short-form video content have kind of killed a lot of interest in bloggers covering that space. It's currently on hiatus and I'm not sure what the next step, if any, will be. Other than shamelessly plugging what I’ve done, I wanted to comment that this was a really fun exercise to think over my place online and what it means to me—thanks again for the opportunity! Now that you're done reading the interview, go check the blog and subscribe to the RSS feed . If you're looking for more content, go read one of the previous 131 interviews . People and Blogs is possible because kind people support it.

Kev Quirk Yesterday

How Many Holes Does a Straw Have?

I was recently listening to an episode of The Rest Is Science, specifically the episode The Evolution Of The Butthole. As always, Hannah and Michael put on a great show and I came away thinking about its contents. In it, they asked: how many holes does a straw have? And my default response was something like: Why, they have 2 holes, silly! One at each end. You probably don't need it, dear reader, but here's a handy-dandy diagram of what I'm talking about... 2 holes, right?

Then Michael asked "okay, how many holes does a doughnut have?" Bah! More simple questions! A doughnut obviously has 1 hole, right? RIGHT?! Here's another diagram (look, I know you're a clever person, and you don't need a diagram of a bloody straw, or a doughnut, but we're going with it, okay). We're all on the same page here, right folks? A straw clearly has 2 holes, and a doughnut obviously has 1.

This is where it gets interesting. Michael now flips the script and, quite frankly, blows my fucking mind. He said: But isn't a straw just an elongated doughnut? What. The. Actual. Fuck? A straw is just an elongated doughnut (albeit not as tasty). So does a straw have 1 hole? Does a doughnut have 2 holes? I don't know. I'm questioning my life decisions at this point. It's all too hard. Can any of you tell me how many holes a straw (or a doughnut) has?

Thanks for reading this post via RSS. RSS is ace, and so are you. ❤️ You can reply to this post by email, or leave a comment.

Gabe Mays Yesterday

Main Character 🦸‍♂️

I’m working on a new app called Main Character. It’s a gamified productivity app where you earn XP and level up for completing tasks & tracking habits. Tasks run on a kanban board and habits show up on a GitHub-style consistency graph. Basic tasks + habit tracking are live today and I use it daily. Long term I’m turning it into an AI orchestration…

Harper Reed Yesterday

Note #726

Made a surprise visit to Colorado to hang with the parents. Thank you for using RSS. I appreciate you. Email me

Harper Reed Yesterday

Note #724

We have been having a few friends over to the office for Friday afternoon. Club. Come hang out. HMU for an invite to the next one. Thank you for using RSS. I appreciate you. Email me

Xe Iaso Yesterday

Advice for staying in the hospital for a week

As I mentioned in my last couple posts , I recently got out of the hospital after a week-long stay. I survived the surgery, I survived the recovery, and now I'm home with some hard-won wisdom about what it's actually like to be stuck in a hospital bed for seven straight days. If you or someone you love is about to go through something similar, here's what I wish someone had told me. None of this is medical advice. I'm a software engineer who spent a week as a patient, not a doctor. Talk to your actual medical team about actual medical things. There is no way in hell you are going to be productive at anything. I cannot stress this enough. Whatever you're imagining — "oh I'll catch up on reading" or "maybe I'll do some light code review" — no . Stop. Depending on the procedure that landed you there, you're not going to be able to focus long enough to do anything that matters. Your brain is going to be running on fumes, painkillers, and whatever cursed cocktail of medications they have you on. Don't fight it. The name of the game is distraction. Wait, so what do you actually do all day? Scroll your phone. Watch terrible TV. Stare at the ceiling and have thoughts that feel profound but absolutely are not. Let your brain do whatever it wants. You've earned the right to be completely useless for a while. Bring a tablet loaded with comfort shows and don't feel guilty about any of it. Here's the thing nobody tells you: inside the hospital, time ceases to exist. All your memories from the stay get lumped together into one big amorphous blob. Was that conversation with the nurse on Tuesday or Thursday? Did you eat lunch today or was that yesterday? Genuinely impossible to tell. This is a well-documented phenomenon. Between disrupted sleep cycles, medication effects, and the complete absence of normal environmental cues, your brain has nothing to anchor memories to. It's not you being broken — it's the environment. 
Try not to have any meaningful conversations during this time. You're not going to remember them, and that's going to feel terrible later when someone references something heartfelt they said to you and you just... have nothing. Save the deep talks for when you're home and your brain is actually recording again. Don't even imagine having any meaningful thoughts during your hospital stay. They will evaporate.

Okay, this one is weirdly specific, but it came up constantly. Cables that glow when you plug them in are great because you can find them in the dark. Your hospital room is going to be a mess of wires and tubes, you need to charge your phone, and finding the cable end at 2 AM without turning on a light feels like a genuine victory.

But here's the problem: cables that glow when you plug them in are horrible because they glow in the dark. When you're desperately trying to sleep — which you will be, constantly, because the sleep in hospitals is atrocious — that little LED glow becomes your nemesis. Neither option is good. There is no middle ground. Pick your poison. I ended up draping a washcloth over the cable connector at night. Low-tech solutions for low-tech problems.

Everything is going to be simultaneously too bright and too dark. The hallway fluorescents bleed under the door at all hours. Someone will come check your vitals at 3 AM with a flashlight. Meanwhile, during the day, the curtains don't quite block the sun and the overhead lights have exactly two settings: "interrogation room" and "off." You're going to have to grin and bear it. Bring a sleep mask if you can. It won't fix the problem, but it'll take the edge off enough that you might actually get a few consecutive hours of rest.

Your ability to focus is going to be gone. Absolutely decimated. Do not fight it.
Some days will be better than others — I had one afternoon where I could actually read a few pages of something before my brain wandered off — but mostly you're going to be operating at the cognitive level of someone who's been awake for 36 hours straight.

So your advice for a week in the hospital is basically "give up on everything"?

My advice is to stop pretending you're going to be a functional human being and just let yourself recover. That is the productive thing to do. Recovery is the job. Everything else can wait. Brainrot yourself. Watch the same comfort show for the fifth time. Scroll through memes. Let your attention span be whatever it wants to be. You've earned it.

Honestly, the biggest thing I took away from my hospital stay is that the hardest part isn't the medical stuff — it's the expectations you put on yourself. Let those go. Be a potato. Heal. The world will still be there when you get out, and it'll make a lot more sense when your brain isn't marinating in hospital vibes and post-op medication. Be kind to yourself. You're going through something hard.

0 views

Are Design Tools Relevant Anymore?

I was a product designer for a few years. I had switched careers to design after suffering burnout as a software engineer. During those years, my entire day was spent in Figma, building high-fidelity mockups, leading workshops and creating prototypes. While Figma helped me move quickly, rapidly iterating after receiving user feedback, the engineer part of me always felt it was a throwaway step. You build something, only to then have somebody else build it again in code.

I recently had to put on my design hat again, putting together interactive prototypes around a few redesign ideas. At first, I reached for Figma, but after fiddling around for an hour, I decided to go a different route. While prototyping in Figma used to be faster than building in code, that’s no longer true. With Claude Code, building out frontend components is fast. Much faster than messing with layers, frames and symbols in Figma. Let me explain.

Enterprise apps have well-defined brand guidelines. Colors, type, scale. They are often built off an existing component library (think Bootstrap, shadcn). This means you can use Claude in a way that follows the look and feel of your application, constrained to the components the development team leverages. The rails help keep Claude from going off into the deep end. Design then becomes focused on solving the user’s problem through UX, with less fiddling around with UI. I can open Freeform on my iPad, sketch something out, and prompt Claude to leverage our foundation to make my sketch a reality. Then, I can dig into the code and tweak things to be just right.

The result is a more interactive, true-to-life prototype that gives your engineering team a head start with coded components. You get better feedback from users and stakeholders, as it’s easier to visualize what the final product looks like. You discover pitfalls that might not have shown up until an engineer was halfway into the card.
On top of all that, you move a lot faster. You’re designing and building in one step rather than two, giving your engineering team a head start once designs are finalized.

So then, what’s the point of Figma and Sketch? You can tell Figma is battling with this reality by pushing Figma Make. The issue is, it’s too constrained and produces poor results. You can’t link it to existing coded components, Tailwind configs, etc. On the other hand, using my approach requires a technical background. You need to guide it with framework suggestions and foundational setup, and be able to take over and tweak things yourself.

That said, in the shorter term there’s likely still a place for Figma and Sketch at the table. Designing using the method I talked about requires a technical background; otherwise your results will be all over the place, and small tweaks will be next to impossible. As the technology gets better, though, I’ll be surprised if Figma and Sketch survive the next couple of years.

0 views
Carlos Becker Yesterday

You'll never see my child's face

I became a dad recently, and I’m not publishing a bunch of photos of my kid like most parents do. Some people started asking me why, so here it is.

0 views
ava's blog 2 days ago

[bearblog carnival] my favorite meme

For the Bearblog Carnival of March, I wanna briefly add in my own favorite meme! Or at least, one of them. There are so many I could add... I'm choosing a specific YouTube video, a YouTube Poop. A YouTube Poop (or YTP, often shortened in the title) is a type of video remixing that edits pre-existing media like ads, movies, TV series, game cutscenes, and so on. The point is to edit the video and sound so that the material suddenly shows or says new things. They usually have some crass or silly humor, other memes, and vulgar, immature and nonsensical jokes. This format has existed since 2004, and new ones are still being made!

The skill lies in cutting it so that the new sentence sounds as if it was really said, or almost like it, while still being obvious that it was cut. Basically, making it credible via amazing (non-AI) editing skills (correct intonation, not as choppy, finding creative ways to string sounds and words together), while also showing via other means that it is not the original and not meant to be taken seriously. YTPs even reference each other or each other's creators sometimes, and a popular sound to edit in is 'soooos' or 'jooj'. Kami also picked a YTP, but one by very tall bart, a creator I also enjoy, but I really love DaThings and cs188.

Specifically, my favorite YTP is Wonder Bros. (You can turn subtitles on, they are always properly subtitled by hand!) This YTP edits a Nintendo ad for the Super Mario Bros Wonder Switch game to make silly statements about the game's contents - new characters, features, maps. The Urineurineurineurineurine badge, being in grill form to bust out of prison... I have watched this YTP so many times, I know it by heart at this point. It's also frequently referenced by me and my wife in real life. For example, as we have been on a bread-baking journey recently, I usually say "Bowser spreads his new bread across the land!" whenever a new bread is finished.
Whenever a nun pops up anywhere (visually or as a word), one of us says: "Oh! A nun! Interesting!". For a while, we have also just randomly said "Standees nuts". When something goes wrong, my wife says "Dangnabbit, Yoshi.", and when I feel silly, I try to emulate the motion of Elephant Mario and make the zazazazaoowie-wowie sound at 03:05 (as best as I can). Whenever it fits, usually because of a sound or seeing the word, I'll say "You can also eeuurgh." or "You can use it to bust out of prison! Nifty." We no longer call mushrooms mushrooms, we say shushrooms, even in our grocery list. Whenever someone is wearing a good outfit, we say "Mario's wylin. Just look at that drip!". Whenever work is weird or I feel awkward about an email I sent or something, I say "This [word that fits] is normal." in the same tone. I know I even said "Up to four people can breathe the air for a bit." at some point.

Writing this all out, I wasn't even aware of just how much it has infiltrated my life! I thought it was just 3-4 things; now this is slightly embarrassing even! But it's funny, and I love it. It's not even the only YTP we reference. We also reference this Garfield YTP from cs188, specifically presentspresentspresentspresentspresentspresentspresentspresentspresentspresentspresentspresentspresents, and opening the door just to cough (we even have that as a soundbite to play in voice calls). I even sing the song that starts at 2:50, and the one at 4:20; last time it was while we were walking on the street :D

Maybe in some future post or carnival, I'll focus more on the written/image memes I like! Published 05 Mar, 2026

0 views