Latest Posts (20 found)

dBASE on the Kaypro II

The world that might have been has been discussed at length. In one possible world, Gary Kildall's CP/M operating system was chosen over MS-DOS to drive IBM's then-new "Personal Computer." In that world, Bill Gates's hegemony over the trajectory of computing history never happened, Kildall wasn't constantly debunking the myth that an airplane joyride denied him Microsoft-level industry dominance, and in all likelihood he'd still be alive and innovating to this day. Kildall's story is pitched as a "butterfly flaps its wings" inflection point that changed computing history. The truth, of course, is that many points along our timeline led to Kildall's fade and untimely death. Rather, I'd like to champion what Kildall did. Kildall did co-host Computer Chronicles with Stewart Cheifet for seven years. Kildall did create the first CD-ROM encyclopedia. Kildall did develop (and coin the term for) what we know today as the BIOS. Kildall did create CP/M, the first widespread, mass-market, portable operating system for microcomputers, possible because of said BIOS. CP/M did dominate the business landscape until the DOS era, with 20,000+ software titles in its library. Kildall did sell his company, Digital Research Inc., to Novell for US$120M. Kildall did good.

Systems built to run Kildall's CP/M were prevalent, all built around the same 8-bit limits: an 8080 or Z80 processor and up to 64KB RAM. The Osborne 1, a 25lb (11kg) "portable" which sold for $1795 ($6300 in 2026), was the talk of the West Coast Computer Faire in 1981. The price was sweet, considering it came bundled with MSRP $1500 in software, including Wordstar and Supercalc. Andy Kay's company, Non-Linear Systems, debuted the Kaypro II (the "I" only existed in prototype form) the following year at $1595, $200 less (and four pounds heavier) than the Osborne. Though slower than an Osborne, it arguably made it easier to do actual work, with a significantly larger screen and beefier floppy disk capacity.

Within the major operating system of its day, on popular hardware of its day, ran the utterly dominant relational database software of its day. PC Magazine, February 1984, said, "Independent industry watchers estimate that dBASE II enjoys 70 percent of the market for microcomputer database managers." Similar to past subjects HyperCard and Scala Multimedia, Wayne Ratliff's dBASE II was an industry unto itself, not just for data management but for programmability, a legacy which lives on today as xBase. Strangely enough, dBASE also decided to attach "II" to its first release, a marketing maneuver to make the product appear more advanced and stable at launch. I'm sure the popularity of the Apple II had nothing to do with anyone's coincidentally similar roman numeral naming scheme whatsoever. Written in assembly, dBASE II squeezed maximum performance out of minimal hardware specs.

This is my first time using both CP/M and dBASE. Let's see what made this such a power couple. I'm putting on my tan suit and wide brown tie for this one. As the owner of COMPUTRON/X, a software retail shop, I'm in Serious Businessman Mode™. I need to get inventory under control, snake the employee toilet, do profit projections, and polish a mind-boggling amount of glass and chrome. For now, I'll start with inventory and pop in this laserdisc to begin my dBASE journey.
While the video is technically for 16-bit dBASE III , our host, Gentry Lee of Jet Propulsion Laboratory, assures us that 8-bit dBASE II users can do everything we see demonstrated, with a few interface differences. This is Gail Fisher, a smarty pants who thinks she's better than me. Tony Lima, in his book dBASE II for Beginners , concurs with the assessment of dBASE II and III 's differences being mostly superficial. Lima's book is pretty good, but I'm also going through Mastering dBASE II The Easy Way , by Paul W. Heiser, the official Kaypro dBASE II Manual, and dBase II for the First Time User by Alan Freedman. That last one is nicely organized by common tasks a dBASE user would want to do, like "Changing Your Data" and "Modifying Your Record Structure." I find I return to Freedman's book often. As I understand my time with CP/M, making custom bootable diskettes was the common practice. dBASE II is no different, and outright encourages this, lest we risk losing US$2000 (in 2026 dollars) in software. Being of its time and place in computing history, dBASE uses the expected UI. You know it, you love it, it's "a blinking cursor," here called "the dot prompt." While in-program is available, going through the video, books, and manual is a must. dBASE pitches the dot prompt as a simple, English language interface to the program. for example sets the default save drive to the B: drive. You could never intuit that by what it says, nor guess that it even needs to be done, but when you know how it works, it's simple to remember. It's English only in the sense that English-like words are strung together in English-like order. That said, I kind of like it? creates a new database, prompting first for a database name, then dropping me into a text entry prompt to start defining fields. This is a nice opportunity for me to feign anger at The Fishers, the family from the training video. Fancy-pants dBASE III has a more user-friendly entry mode, which requires no memorization of field input parameters. Prompts and on-screen help walk Gail through the process. In dBASE II , a field is defined by a raw, comma-delimited string. Field definitions must be entered in the order indicated on-screen. is the data type for the field, as string, number, or boolean. This is set by a one-letter code which will never be revealed at any time, even when it complains that I've used an invalid code. Remind me to dog-ear that page of the manual. For my store, I'm scouring for games released for CP/M. Poking through Moby Games digs up roughly 30 or so commercial releases, including two within the past five years . Thanks, PunyInform ! My fields are defined thusly, called up for review by the simple command. The most frustrating part about examining database software is that it doesn't do anything useful until I've entered a bunch of data. At this stage in my learning, this is strictly a manual process. Speaking frankly, this part blows, but it also blows for Gail Fisher, so my schadenfreude itch is scratched. dBASE does its best to minimize the amount of keyboard shenanigans during this process, and in truth data entry isn't stressful. I can pop through records fairly quickly, if the raw data is before me. The prompt starts at the first field and (not !) moves to the next. If entry to a field uses the entire field length (as defined by me when setting up the fields earlier), the cursor automatically jumps to the next field with a PC-speaker beep. 
I guess dBASE is trying to "help," but when touch typing I'm looking at my data source, not the screen. I don't know when I'm about to hit the end of a field, so I'm never prepared when it switches input fields and makes that ugly beep. More jarring is that if the final field of a record is completely filled, the cursor "helpfully" jumps to the beginning of a new record instantly, with no opportunity to read or correct the data I just input. It's never not annoying. Gail doesn't have these issues with dBASE III and her daughter just made dinner for her. Well, I can microwave a burrito as well as anyone so I'm not jealous. I'm not.

In defining the fields, I have already made two mistakes. First, I wanted to enter the critic score as a decimal value so I could get the average. Number fields, like all fields, have a "width" (the maximum number of characters/bytes to allocate to the field), but also a "decimal places" value, and as I type these very words I see now my mistake. Rubber ducking works. I tricked myself into thinking "width" was for the integer part, and "decimal places" was appended to that. I see now that, like character fields, I need to think of the entire maximum possible number as being the "width." Suppose the value we expect to record has 2 decimal places, plus a decimal point, a leading 0, and potentially a sign. That means the "width" should be 5, with 2 "decimal places" (of those 5). Though I'm cosplaying as a store owner, I'm apparently cosplaying as a store owner that sucks! I didn't once consider pricing! Gah, Gail is so much better at business than I am! Time to get "sorta good." Toward that end, I have my to-do list after a first pass through data entry.

Modifying dBASE "structures" (the field/type definitions) can be risky business. If there is no data yet, feel free to change whatever you want. If there is pre-existing data, watch out. dBASE will at least do you the common decency of warning you about the pile you're about to step into. Modifying a database structure is essentially verboten; rather, we must juggle files to effect a structure change. dBASE lets us have two active files, called "work areas," open simultaneously: a primary and a secondary. Modifications to these are read from or written to disk in the moment; 64K can't handle much live data. It's not quite "virtual memory" but it makes the best of a tight situation. When wanting to change data in existing records, the obvious command sounds like a good choice, but another actually winds up being more useful: it will focus in on specified fields for immediate editing across all records. It's simple to step through fields making changes. I could edit everything at once, but I'm finding it safer while learning to make small incremental changes or risk losing a large body of work. Make a targeted change, save, make another change, save, etc.

I laughed every time Gentry Lee showed up, like he's living with The Fishers as an invisible house gremlin. They never acknowledge his presence, but later he eats their salad! Being a novice at dBASE is a little dangerous, and MAME has its own pitfalls. I have been conditioned over time to reach for the same key whenever I want to "back out" of a process. This shuts down MAME instantly. When it happens, I swear The Fishers are mocking me, just on the edge of my peripheral vision, while Gentry Lee helps himself to my tuna casserole.

dBASE is a relational database. Well, let's be less generous and call it "relational-ish." The relational model of data was defined by Edgar F.
Codd in 1969 where "relation is used here in its accepted mathematical sense." It's all set theory stuff; way over my head. Skimming past the nerd junk, in that paper he defines our go-to relationship of interest: the join. As a relational database, dBASE keeps its data arranged VisiCalc style, in rows and columns. So long as two databases have a field in common, which is defined, named, and used identically in both , the two can be "joined" into a third, new database. I've created a mini database of developer phone numbers so I can call and yell at them for bugs and subsequent lost sales. I haven't yet built up the grin-and-bear-it temperament Gail possesses toward Amanda Covington. Heads will roll! You hear me, Lebling? Blank?! 64K (less CP/M and dBASE resources) isn't enough to do an in-memory join. Rather, joining creates and writes a completely new database to disk which is the union of two databases. The implication being you must have space on disk to hold both original databases as well as the newly joined database, and also the new database cannot exceed dBASE 's 65,535 record limit after joining. In the above , means and means , so we can precisely specify fields and their work area of origin. This is more useful for doing calculations at time, like to join only records where deletes specific records, if we know the record number, like . Commands in dBASE stack, so a query can define the target for a command, as one would hope and expect in 2026. Comparisons and sub-strings can be used as well. So, rather than deleting "Infocom, Inc." we could: The command looks for the left-hand string as a case-sensitive sub-string in the right-hand string. We can be a little flexible in how data may have been input, getting around case sensitivity through booleans. Yes, we have booleans! Wait, why am I deleting any Infocom games? I love those! What was I thinking?! Once everything is marked for deletion, that's all it is: marked for deletion. It's still in the database, and on disk, until we do real-deal, non-reversible, don't-forget-undo-doesn't-exist-in-1982, destruction with . Until now, I've been using the command as a kind of ad-hoc search mechanism. It goes through every record, in sequence, finding record matches. Records have positions in the database file, and dBASE is silently keeping track of a "record pointer" at all times. This represents "the current record" and commands without a query will be applied to the currently pointed record. Typing in a number at the dot prompt moves the pointer to that record. That moves me to record #3 and display its contents. When I don't know which record has what I want, will move the pointer to the first match it finds. At this point I could that record, or to see a list of records from the located record onward. Depending on the order of the records, that may or may not be useful. Right now, the order is just "the order I typed them into the system." We need to teach dBASE different orders of interest to a stripmall retail store. While the modern reaction would be to use the command, dBASE's Sort can only create entirely new database files on disk, sorted by the desired criteria. Sort a couple of times on a large data set and soon you'll find yourself hoarding the last of new-old 5 1/4" floppy disk stock from OfficeMax, or being very careful about deleting intermediary sort results. SQL brainiacs have a solution to our problem, which dBASE can also do. An "index" is appropriate for fast lookups on our columnar data. 
We can index on one or more fields, remapping records to the sort order of our heart's desire. Only one index can be used at a time, but a single index can be defined against multiple fields. It's easier to show you. When I set the index to "devs" and , that sets the record pointer to the first record which matches my find. I happen to know I have seven Infocom games, so I can for fields of interest. Both indexes group Infocom games together as a logical block, but within that block Publisher order is different. Don't get confused, the actual order of files in the database is betrayed by the record number. Notice they are neither contiguous nor necessarily sequential. would rearrange them into strict numerical record order. An Index only relates to the current state of our data, so if any edits occur we need to rebuild those indexes. Please, contain your excitement. Munging data is great, but I want to understand my data. Let's suppose I need the average rating of the games I sell. I'll first need a count of all games whose rating is not zero (i.e. games that actually have a rating), then I'll need a summation of those ratings. Divide those and I'll have the average. does what it says. only works on numeric fields, and also does what it says. With those, I basically have what I need. Like deletion, we can use queries as parameters for these commands. dBASE has basic math functions, and calculated values can be stored in its 64 "memory variables." Like a programming language, named variables can be referenced by name in further calculations. Many functions let us append a clause which shoves a query result into a memory variable, though array results cannot be memorized this way. shoves arbitrary values into memory, like or . As you can see in the screenshot above, the rating of CP/M games is (of 100). Higher than I expected, to be perfectly honest. As proprietor of a hot (power of positive thinking!) software retail store, I'd like to know how much profit I'll make if I sold everything I have in stock. I need to calculate, per-record, the following but this requires stepping through records and keeping a running tally. I sure hope the next section explains how to do that! Flipping through the 1,000 pages of Kaypro Software Directory 1984 , we can see the system, and CP/M by extension, was not lacking for software. Interestingly, quite a lot was written in and for dBASE II, bespoke database solutions which sold for substantially more than dBASE itself. Shakespeare wrote, "The first thing we do, let's kill all the lawyers." Judging from these prices, the first thing we should do is shake them down for their lunch money. In the HyperCard article I noted how an entire sub-industry sprung up in its wake, empowering users who would never consider themselves programmers to pick up the development reigns. dBASE paved the way for HyperCard in that regard. As Jean-Pierre Martel noted , "Because its programming language was so easy to learn, millions of people were dBASE programmers without knowing it... dBASE brought programming power to the masses." dBASE programs are written as procedural routines called Commands, or .CMD files. dBASE helpfully includes a built-in (stripped down) text editor for writing these, though any text editor will work. Once written, a .CMD file like can be invoked by . As Martel said, I seem to have become a dBASE programmer without really trying. Everything I've learned so far hasn't just been dot prompt commands, it has all been valid dBASE code. 
A command at the dot prompt is really just a one-line program. Cool beans! Some extra syntax exists purely for the purpose of development. With these tools, menus which add a veneer of approachability to a dBASE database are trivial to create. Commands are interpreted, not compiled (that would come later), so how were these solutions sold to lawyers without bundling a full copy of dBASE with every Command file? For a while dBASE II was simply a requirement to use after-market dBASE solutions. The 1983 release of dBASE Runtime changed that, letting a user run a file, but not edit it. A Command file bundled with Runtime was essentially transformed into a standalone application. Knowing this, we're now ready to charge 2026 US$10,000 per seat for case management and tracking systems for attorneys. Hey, look at that, this section did help me with my profit calculation troubles. I can write a Command file and bask in the glow of COMPUTRON/X's shining, profitable future.

During the 8 -> 16-bit era bridge, new hardware often went underutilized as developers came to grips with what the new tools could do. Famously, VisiCalc's first foray onto 16-bit systems didn't leverage any of the expanded RAM on the IBM-PC and intentionally kept all known bugs from the 8-bit Apple II version. The word "stopgap" comes to mind. Corporate America couldn't just wait around for good software to arrive. CP/M compatibility add-ons were a relatively inexpensive way to gain instant access to thousands of battle-tested business software titles. Even a lowly Coleco ADAM could, theoretically, run WordStar and Infocom games, the thought of which kept me warm at night as I suffered through an inferior Dragon's Lair adaptation. They promised a laserdisc attachment!

For US$600 in 1982 ($2,000 in 2026) your new-fangled 16-bit IBM-PC could relive the good old days of 8-bit CP/M-80. Plug in XEDEX's "Baby Blue" ISA card with its Z80B CPU and 64K of RAM and the world is your slowly decaying oyster. That RAM is also accessible in 16-bit DOS, serving dual-purpose as a memory expansion for only $40 more than IBM's own bare bones 64K board. PC Magazine's February 1982 review seemed open to the idea of the card, but was skeptical it had long-term value. XEDEX suggested the card could someday be used as a secondary processor, offloading tasks from the primary CPU to the Z80, but never followed through on that threat, as far as I could find.

Own an Apple II with an 8-bit 6502 CPU but still have 8-bit Z80 envy? Microsoft offered a Z80 daughter-card with 64K RAM for US$399 in 1981 ($1,413 in 2026). It doesn't provide the 80-column display you need to really make use of CP/M software, but is compatible with such add-ons. It was Bill Gates's relationship with Gary Kildall as a major buyer of CP/M for this very card that started the whole ball rolling with IBM, Gates's purchase of QDOS, and the rise of Microsoft. A 16K expansion option could combine with the Apple II's built-in 48K memory, to get about 64K for CP/M usage. BYTE Magazine's November 1981 review raved, "Because of the flexibility it offers Apple users, I consider the Softcard an excellent buy." Good to know!

How does one add a Z80 processor to a system with no expansion slots? Shove a Z80 computer into a cartridge and call it a day, apparently. This interesting, but limited, footnote in CP/M history does what it says, even if it doesn't do it well. Compute!'s Gazette wrote, "The 64 does not make a great CP/M computer.
To get around memory limitations, CP/M resorts to intensive disk access. At the speed of the 1541, this makes programs run quite slowly." Even worse for CP/M users is that the slow 1541 can't read CP/M disks. Even if it could, you're stuck in 40-column mode. How were users expected to get CP/M software loaded? We'll circle back to that a little later.

At any rate, Commodore offered customers an alternative solution. Where its older brother had to make do with a cartridge add-on, the C128 takes a different approach. To maintain backward compatibility with the C64, it includes a 6510-compatible processor, the 8502. It also wants to be CP/M compatible, so it needs a Z80 processor. What to do, what to do? Maybe they could put both processors into the unit? Is that allowed? Could they do that? They could, so they did. CP/M came bundled with the system, which has a native 80-column display in CP/M mode. It is ready to go with the newer, re-programmable 1571 floppy drive. Unfortunately, its slow bus speed forces the Z80 to run at only 2MHz, slower even than a Kaypro II. Compute!'s Gazette said in their April 1985 issue, "CP/M may make the Commodore 128 a bargain buy for small businesses. The price of the Commodore 128 with the 1571 disk drive is competitive with the IBM PCjr." I predict rough times ahead for the PCjr if that's true!

Atari peripherals have adorable industrial design, but were quite expensive thanks to a strange system design decision. The 8-bit system's nonstandard serial bus necessitated specialized data encoding/decoding hardware inside each peripheral, driving up unit costs. For example, the Atari 810 5 1/4" floppy drive cost $500 in 1983 (almost $2,000 in 2026) thanks to that special hardware, yet only stored a paltry 90K per disk. SWP straightened out the Atari peripheral scene with the ATR8000. Shenanigans with special controller hardware are eliminated, opening up a world of cheaper, standardized floppy drives of all sizes and capacities. It also accepts Centronics parallel and RS-232C serial devices, making tons of printers, modems, and more compatible with the Atari. The device also includes a 16K print buffer and the ability to attach up to four floppy drives without additional controller board purchases. A base ATR8000 can replace a whole stack of expensive Atari-branded add-ons, while being more flexible and performant. The saying goes, "Cheaper, better, faster. Pick any two." The ATR8000 is that rare device which delivered all three. Now, upgrade that box with its CP/M compatibility option, adding a Z80 and 64K, and you've basically bought a second computer. When plugged into the Atari, the Atari functions as a remote terminal into the unit, using whatever 40/80-column display adapter you have connected. It could also apparently function standalone, accessible through any terminal, no Atari needed. That isn't even its final form. The Co-Power-88 is a 128K or 256K PC-compatible add-on to the Z80 CP/M board. When booted into the Z80, that extra RAM can be used as a RAM disk to make CP/M fly. When booted into the 8088, it's a full-on PC running DOS or CP/M-86. Tricked out, this eight pound box would set you back US$1000 in 1984 ($3,000 in 2026), but it should be obvious why this is a coveted piece of kit for the Atari faithful to this day.

For UK£399 in 1985 (£1288 in 2026; US$1750) Acorn offered a Z80 with dedicated 64K of RAM.
According to the manual, the Z80 handles the CP/M software, while the 6502 in the base unit handles floppies and printers, freeing up CP/M RAM in the process. For the unit plugged into the side of the BBC Micro, the manual suggests desk space clearance of 5 feet wide and 2 1/2 feet deep. My god. Acorn User June 1984 declared, "To sum up, Acorn has put together an excellent and versatile system that has something for everyone." I'd like to note that glowing review was almost exclusively thanks to the bundled CP/M productivity software suite. Their evaluation didn't seem to try loading off-the-shelf software, which caused me to narrow my eyes, and stroke my chin in cynical suspicion. Flip through the manual to find out about obtaining additional software, and it gets decidedly vague. "You’ll find a large and growing selection available for your Z80 personal computer, including a special series of products that will work in parallel with the software in your Z80 pack."

Like the C128, the Coleco ADAM was a Z80-native machine, so CP/M can work without much fuss, though the box does proclaim "Made especially for ADAM!" Since we don't have to add hardware (well, we need a floppy; the ADAM only shipped with a high-speed cassette drive), we can jump into the ecosystem for about US$65 in 1985 ($200 in 2026). Like other CP/M solutions, the ADAM really needed an 80-column adapter, something Coleco promised but never delivered. Like Dragon's Lair on laserdisc! As it stands, CP/M scrolls horizontally to display all 80 columns. This version adds ADAM-style UI for its quaint(?) roman numeral function keys. OK, CP/M is running! Now what?

To be honest, I've been toying with you this whole time, dangling the catnip of CP/M compatibility. It's time to come clean and admit the dark side of these add-on solutions. There ain't no software! Even when the CPU and CP/M version were technically compatible, floppy disc format was the sticking point for getting software to run on any given machine. For example, the catalog for Kaypro software in 1984 is 896 pages long. That is all CP/M software and all theoretically compatible with a BBC Micro running CP/M. However, within that catalog, everything shipped expressly on Kaypro-compatible floppy discs. Do you think a Coleco ADAM floppy drive can read Kaypro discs? Would you be even the tiniest bit shocked to learn it cannot? Kaypro enthusiast magazine PRO illustrates the issue facing consumers back then. Let's check in on the Morrow Designs (founded by Computer Chronicles sometimes co-host George Morrow!) CP/M system owners. How do they fare? OK then, what about that Baby Blue from earlier? The Microsoft Softcard must surely have figured something out. The Apple II was, according to Practical Computing, "the most widespread CP/M system" of its day. Almost every product faced the same challenge. On any given CP/M-80 software disk, the byte code is compatible with your Z80, if your floppy drive can read the diskette. You couldn't just buy a random CP/M disk, throw it into a random CP/M system, and expect it to work, which would have been a crushing blow to young me hoping to play Planetfall on the ADAM.

So what could be done? There were a few options, none of them particularly simple or straightforward, especially to those who weren't technically-minded. Some places offered transfer services. XEDEX, the makers of Baby Blue, would do it for $100 per disk. I saw another listing for a similar service (different machine) at $10 per disk.
Others sold the software pre-transferred, as noted on a Coleco ADAM service flyer. A few software solutions existed, including Baby Blue's own Convert program, which shipped with their card and "supports bidirectional file transfer between PC-DOS and popular CP/M disk formats." They also had the Baby Blue Conversion Software which used emulation to "turn CP/M-80 programs into PC-DOS programs for fast, efficient execution on Baby Blue II." Xeno-Copy, by Vertex Systems, could copy from over 40+ disk formats onto PC-DOS for US$99.50 ($313 in 2026); their Plus version promised cross-format read/write capabilities. Notably, Apple, Commodore, Apricot, and other big names are missing from their compatibility list. The Kermit protocol , once installed onto a CP/M system disk, could handle cross-platform serial transfers, assuming you had the additional hardware necessary. "CP/M machines use many different floppy disk formats, which means that one machine often cannot read disks from another CP/M machine, and Kermit is used as part of a process to transfer applications and data between CP/M machines and other machines with different operating systems." The Catch-22 of it all is that you have to get Kermit onto your CP/M disk in the first place. Hand-coding a bare-bones Kermit protocol (CP/M ships with an assembler) for the purposes of getting "real" Kermit onto your system so you could then transfer the actual software you wanted in the first place, was a trick published in the Kermit-80 documentation . Of course, this all assumes you know someone with the proper CP/M setup to help; basically, you're going to need to make friends. Talk to your computer dealer, or better yet, get involved in a local CP/M User's Group. It takes a village to move Wordstar onto a C64. I really enjoyed my time learning dBASE II and am heartened by the consistency of its commands and the clean interaction between them. When I realized that I had accidentally learned how to program dBASE , that was a great feeling. What I expected to be a steep learning curve wasn't "steep" per se, but rather just intimidating. That simple, blinking cursor, can feel quite daunting at the first step, but each new command I learned followed a consistent pattern. Soon enough, simple tools became force multipliers for later tools. The more I used it, the more I liked it. dBASE II is uninviting, but good. On top of that, getting data out into the real world is simple, as you'll see below in "Sharpening the Stone." I'm not locked in. So what keeps me from being super enthusiastic about the experience? It is CP/M-80 which gives me pause. The 64K memory restriction, disk format shenanigans, and floppy disk juggling honestly push me away from that world except strictly for historical investigations. Speaking frankly, I don't care for it. CP/M-86 running dBASE III+ could probably win me over, though I would probably try DR-DOS instead. Memory constraints would be essentially erased, DOSBox-X is drag-and-drop trivial to move files in and out of the system, and dBASE III+ is more powerful while also being more user-friendly. Combine that with Clipper , which can compile dBASE applications into standalone .exe files, and there's powerful utility to be had . By the way, did you know dBASE is still alive ? Maybe. Kinda? Hard to say. The latest version is dBASE 2019 (not a typo!), but the site is unmaintained and my appeal to their LinkedIn for a demo has gone unanswered. 
Its owner, dBase LTD, sells dBASE Classic which is dBASE V for DOS running in DOSBox, a confession they know they lost the plot, I'd humbly suggest. An ignominious end to a venerable classic. Ways to improve the experience, notable deficiencies, workarounds, and notes about incorporating the software into modern workflows (if possible). When working with CP/M disk images, get to know cpmtools . This is a set of command line utilities for creating, viewing, and modifying CP/M disk images. The tools mostly align with Unix commands, prefixed with Those are the commands I wound up using with regularity. If your system of choice is a "weirdo system" you may be restricted in your disk image/formatting choices; these instructions may be of limited or no help. knows about Kaypro II disk layouts via diskdefs. This Github fork makes it easy to browse supported types. Here's what I did. Now that you can pull data out of CP/M, here's how to make use of it. Kaypro II emulation running in MAME. Default setup includes Dual floppies Z80 CPU at 2.4MHz dBase II v2.4 See "Sharpening the Stone" at the end of this post for how to get this going. Personally, I found this to be a tricky process to learn. Change the of the rating field and add in that data. Add pricing fields and related data. Add more games. and allow decision branching does iterations and will grab a character or string from the user prints text to screen at a specific character position and give control over system memory will run an assembly routine at a known memory location For this article I specifically picked a period-authentic combo of Kaypro II + CP/M 2.2 + dBASE II 2.4. You don't have to suffer my pain! CP/M-86 and dBASE III+ running in a more feature-rich emulator would be a better choice for digging into non-trivial projects. I'm cold on MAME for computer emulation, except in the sense that in this case it was the fastest option for spinning up my chosen tools. It works, and that's all I can say that I enjoyed. That's not nothing! I find I prefer the robust settings offered in products like WinUAE, Virtual ADAM, VICE , and others. Emulators with in-built disk tools are a luxury I have become addicted to. MAME's interface is an inelegant way to manage hardware configurations and disk swapping. MAME has no printer emulation, which I like to use for a more holistic retro computing experience. Getting a working, trouble-free copy of dBASE II onto a Kaypro II compatible disk image was a non-trivial task. It's easier now that I know the situation, but it took some cajoling. I had to create new, blank disks, and copy CP/M and dBASE over from other disk images. Look below under "Getting Your Data into the Real World" to learn about and how it fits into the process. Be careful of modern keyboard conventions, especially wanting to hit to cancel commands. In MAME this will hard quit the emulator with no warning! Exported data exhibited strange artifacts: The big one: it didn't export any "logical" (boolean) field values from my database. It just left that field blank on all records. Field names are not exported. Garbage data found after the last record; records imported fine. On Linux and Windows (via WSL) install thusly : view the contents of a CP/M disk image. Use the flag to tell it the format of the disk, like for the Kaypro II. 
: format a disk image with a CP/M file system : copy files to/from other disk or to the host operating system : remove files from a CP/M disk image : for making new, blank disk image files (still needs to be formatted) : makes a blank disk image to single-sided, double-density specification : formats that blank image for the Kaypro II : copies "DBASE.COM" from the current directory of the host operating system into the Kaypro II disk image. : displays the contents of the disk : copies "FILE.TXT" from the disk image into the current directory of the host operating system (i.e. ) dBASE has built-in exporting functionality, so long as you use the extension when saving ( in dBASE lingo). That creates a bog-standard ASCII text file, each record on its own line, comma-delimited (and ONLY comma-delimited). It is not Y2K compatible, if you're hoping to record today's date in a field. I tackled this a bit in the Superbase post . It is probably possible to hack up a Command file to work around this issue, since dates are just strings in dBASE . dBASE II doesn't offer the relational robustness of SQL. Many missing, useful tools could be built in the xBase programming language. It would be significant work in some cases; maybe not worth it or consider if you can do without those. Your needs may exceed what CP/M-80 hardware can support; its 8-bit nature is a limiting factor in and of itself. If you have big plans , consider dBASE III+ on DOS to stretch your legs. (I read dBASE IV sucks) The user interface helps at times, and is opaque at other times. This can be part of the fun in using these older systems, mastering esoterica for esoterica's sake, but may be a bridge too far for serious work of real value. Of course, when discussing older machines we are almost always excluding non-English speakers thanks to the limitations of ASCII. The world just wasn't as well-connected at the time.
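To make the cpmtools workflow above concrete, here is a minimal sketch of the image juggling described; the kpii diskdef name, the image geometry, and the file names are my assumptions, so check them against the diskdefs file that ships with your build:

    # blank image roughly sized for a Kaypro II SSDD disk (40 tracks x 10 sectors x 512 bytes)
    dd if=/dev/zero of=work.img bs=512 count=400
    # lay down a CP/M file system using the Kaypro II definition
    mkfs.cpm -f kpii work.img
    # copy dBASE from the host into user area 0 of the image
    cpmcp -f kpii work.img DBASE.COM 0:DBASE.COM
    # list the image's directory to confirm it landed
    cpmls -f kpii work.img
    # pull an exported text file back out to the host's current directory
    cpmcp -f kpii work.img 0:GAMES.TXT .

Attach work.img as a floppy in MAME and CP/M sees the files like any other disk.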


The death of software, the A24 of software

Steven Sinofsky recently published Death of Software. Nah. , arguing via historical case studies that AI will not kill software any more than previous technological shifts killed their respective incumbents. I agree with the headline thesis. But I think his media analogy deserves a sharper look, because it actually complicates his optimism in ways worth taking seriously. He writes that there is "vastly more media today than there was 25 years ago," pointing to streaming as evidence that disruption creates abundance rather than destruction. This is telling, because I agree with both sides of the glass: The shift to streaming has not killed media. But it has, to put it mildly, made the aggregate quality of the product worse, and in doing so shifted the value generated away from creative labor and towards platforms and capital. Warner Bros. is, to hear some people say it, the last great conventional studio producing consistently risky and high-quality work that advances the medium forward; Netflix, Apple, et al do put out some extremely great stuff, but the vast majority of their budget goes to things like Red Notice — films designed with their audiences' revealed preferences (i.e., browsing their phone while the film is on) in mind. And yet! The greatest studio of the past decade was also a studio founded in, essentially, the past decade — A24, in 2012. I think it's uncontroversial to say that no other studio has had a higher batting average, and they've done it the right way: very pro-auteur, very fiscally disciplined, focusing more on an overall portfolio brand and strong relationships than the need for Yet Another Tentpole Franchise. A24 didn't succeed despite the streaming era — they succeeded because of it. The explosion of mediocre content created a vacuum for taste, for curation, for a brand that stood for something. When everything is abundant and most of it is forgettable, the scarce thing is discernment . The interesting question isn't "will there be more software?" — it's who captures the value, and what excellence looks like in a world of abundance. (Kicker: A24 just took a round of additional funding from Thrive Capital last year. The market, it seems, agrees.) There will be more software, not less, in the future. The quality of that software — as defined by the heuristics of yesteryear — will be lower.


The evolution of OpenAI's mission statement

As a USA 501(c)(3) the OpenAI non-profit has to file a tax return each year with the IRS. One of the required fields on that tax return is to "Briefly describe the organization’s mission or most significant activities" - this has actual legal weight to it as the IRS can use it to evaluate if the organization is sticking to its mission and deserves to maintain its non-profit tax-exempt status. You can browse OpenAI's tax filings by year on ProPublica's excellent Nonprofit Explorer . I went through and extracted that mission statement for 2016 through 2024, then had Claude Code help me fake the commit dates to turn it into a git repository and share that as a Gist - which means that Gist's revisions page shows every edit they've made since they started filing their taxes! It's really interesting seeing what they've changed over time. The original 2016 mission reads as follows (and yes, the apostrophe in "OpenAIs" is missing in the original ): OpenAIs goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. We think that artificial intelligence technology will help shape the 21st century, and we want to help the world build safe AI technology and ensure that AI's benefits are as widely and evenly distributed as possible. Were trying to build AI as part of a larger community, and we want to openly share our plans and capabilities along the way. In 2018 they dropped the part about "trying to build AI as part of a larger community, and we want to openly share our plans and capabilities along the way." In 2020 they dropped the words "as a whole" from "benefit humanity as a whole". They're still "unconstrained by a need to generate financial return" though. Some interesting changes in 2021. They're still unconstrained by a need to generate financial return, but here we have the first reference to "general-purpose artificial intelligence" (replacing "digital intelligence"). They're more confident too: it's not "most likely to benefit humanity", it's just "benefits humanity". They previously wanted to "help the world build safe AI technology", but now they're going to do that themselves: "the companys goal is to develop and responsibly deploy safe AI technology". 2022 only changed one significant word: they added "safely" to "build ... (AI) that safely benefits humanity". They're still unconstrained by those financial returns! No changes in 2023... but then in 2024 they deleted almost the entire thing, reducing it to simply: OpenAIs mission is to ensure that artificial general intelligence benefits all of humanity. They've expanded "humanity" to "all of humanity", but there's no mention of safety any more and I guess they can finally start focusing on that need to generate financial returns! Update : I found loosely equivalent but much less interesting documents from Anthropic . You are only seeing the long-form articles from my blog. Subscribe to /atom/everything/ to get all of my posts, or take a look at my other subscription options .
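The fake-dates trick is just git's date environment variables. A rough sketch of the kind of loop involved (the file names and the December 31st timestamps here are illustrative, not the exact ones used):

    git init openai-mission && cd openai-mission
    for year in 2016 2017 2018 2019 2020 2021 2022 2023 2024; do
      cp "../mission-$year.txt" mission.txt    # one extracted statement per filing year
      git add mission.txt
      GIT_AUTHOR_DATE="$year-12-31T12:00:00" \
      GIT_COMMITTER_DATE="$year-12-31T12:00:00" \
      git commit --allow-empty -m "Mission statement from the $year Form 990"
    done
    # push the repo to a Gist (Gists are ordinary git repositories) and its
    # revisions page renders each year's change as a diff

The --allow-empty flag keeps years with no wording change in the history, so the timeline stays complete.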


The Guestbook Is Back

What’s going on, Internet? Guestbooks are one of my favourite relics of the old web. My old guestbook stopped working after the database behind it shut down, and I’ve been meaning to bring it back ever since. Well, it’s finally here. The new guestbook is powered by webweav.ing , built by yequari and available to 32-Bit Cafe members. It provides web components that handle the form and comments, making it easy to drop into any site. If you’re a member, I’d recommend checking it out. Go ahead and sign the guestbook . Say what’s up. Hey, thanks for reading this post in your feed reader! Want to chat? Reply by email or add me on XMPP , or send a webmention . Check out the posts archive on the website.


Is the Mac having a BMW’s Neue Klasse moment?

In the last couple of months, we have seen plenty of rants , reports, analysis , and other exposés about the state of Apple software, whether it is about their bad icon design , bad icon implementation , neglect , more neglect , and plain worrisome trends . The most damning thing of all? All of these complaints are valid at the same time, and, coming from Mac enthusiasts and connoisseurs, they carry a lot of weight. This collective reaction is strong because Apple is not a brand usually associated with poor quality, odd design choices, or a lack of attention to detail. It is particularly notable on the Mac, arguably the most prominent Apple software product when it comes to enthusiasm about the brand and what they stand for. Today, some of the Apple observers and critics are almost in shock of how fast things went bad. There were warning signs before, but the core foundations of what makes the Mac a great computing platform didn’t seem threatened. The problems seemed limited to a few bugs and side apps that were quickly filed under mishaps , and the growing popularity of non-native apps that ignore Mac conventions . Now, even MacOS itself is plagued with symptoms of the “unrefined” disease. Is MacOS becoming another Windows? A couple of years ago, circa 2021, I was using a Windows computer for work. It was fine. Not great, not bad, it was just OK. Most of the tools I have to use at work live in the browser, and I managed to find peace with the few apps I was using, most of them Electron-based, like Obsidian. When I eventually got an M1 MacBook Air as a replacement, it was a breath of fresh air. Not because I’m a Mac user since 2006, but because the Mac is not fine or just OK: it’s great. Mac apps, the “real” Mac apps, are indeed very good. They feel part of the system, whereas on Windows it’s hard to distinguish between a web-wrapped app and a native app. They all feel the same. Ty Bolt said it best writing about Panic’s Nova (emphasis mine): Nova is one of the best pieces of software I’ve ever used. It’s refined and polished and there’s no equivalent on Linux and Windows. It has its own personality, but also feels like an extension of the operating system. Which is a hallmark of a great Mac app. Folks in the community call them Mac-assed Mac apps. These apps are what make MacOS really great. The best apps I have used are all Mac apps. For me, this quote is what the Mac is all about. But with all the current issues documented on MacOS Tahoe, it is not as easy to look down on Windows as it once was. For users like me, who appreciate a certain level of precision and craftsmanship in software and love Apple because of that — especially the Mac — this trend is worrisome. We know that Apple is not going away, but the Apple we love seems distracted. We worry that the Mac won’t ever feel like the Mac we love today again. We worry that our habits, our taste, and our commitments to a platform will become pointless and dépassés . We worry because there is not a proper alternative to the Mac environment. Users with a different set of tastes, values, and habits, users who may use a Mac for their best-in-class chips, but not for its software, won't understand. Some users who already use and love Linux or Windows (and easily switch between the two), for their set of tastes, values, and habits, won't understand. Users who use a Mac just to live inside a Chrome/Electron landscape of apps won't understand. This period of neglect may be over soon. It may go on for another few years. 
It may also be all downhill from here. We just don't know. We have to wait, we have to hope, and we have to continue pointing out what feels off about the platform we love. The most cynical will point to the obvious, saying that Mac enthusiasts are not where the money is these days for Apple. This would explain a lot, and it's very tempting to think that way. But I thought of something that may sound like wishful thinking: What if Apple is having its own BMW-Neue-Klasse moment? For BMW, Neue Klasse is the name of their brand reset, their upcoming generation of cars, from the design language to the production platform to the actual vehicle models. It was announced a few years ago, in the midst of the transition to the electric-first era. For BMW, this meant reaffirming the brand, getting back to its roots , and embracing what makes BMW a well-loved and praised car manufacturer. This kind of transition takes a lot of time, effort, and money. Between the announcement and today, brand enthusiasts and critics have perceived a regression in quality and finish , and have felt that the brand has lost touch with its premium foundations and with what makes them love it in the first place. Optimists and apologists will explain this by saying that BMW has put all their best talents and resources towards the Neue Klasse. They will tell you that the current line of models and its related perceived-quality issues are temporary while they reallocated some of their best teams , a necessary low to set things anew, with the upcoming generation of vehicles. As far as I can understand, the reasoning is that BMW knew it had enough brand capital to absorb a few awkward design cycles and perceived drops in interior quality. They surfed on their existing reputation while spending a lot of resources on a platform reset, hoping for a smooth transition. It may hurt them a little , but they considered it a small price to pay to be able to embrace this new era confidently, and regain what was lost. I want to imagine that the same thing is happening at Apple. What if the last couple of years were a transition for Apple? Unlike BMW, Apple would not share their own Neue Klasse vision: they would just unveil it when it’s ready and keep it a secret until then. Meanwhile, their best engineers, designers, and product people are reassigned and working hard on a new generation of MacOS, something that is a big step forward. Maybe Apple thinks that, for the current lineup, helped by the greatest hardware the Mac ever had, the limited resources and ongoing problems are an acceptable compromise, for now. * 1 Mark Gurman would probably have shared the scoop if that were what was really happening, but I’ll keep hoping this “Mac reset” is actually happening and good (and not a failed renaissance). After all, the Neue Klasse era could end up being a disaster, and the worrying signs we’re seeing are actually just the beginning of the end. For Apple, if we are indeed witnessing the first signs of a company that has lost its touch, if we are already at a point of no return when it comes to MacOS quality, the potential downfall won’t be nearly as consequential as it could be for BMW. Apple could lose money for decades and still be one of the richest companies in the world. Without the Mac (just 6% of revenue ), Apple would post similar financial reports for years to come. * 2 For the Mac enthusiasts like myself, there are only three upcoming scenarios in my mind right now. 
One, the Mac we love returns, either in its current form or as a “new class” of Mac (MacOS XX?), and all of this will just be a bad memory. Two, the Mac keeps on getting worse and worse to the point of driving long-time users away, and it ends up getting replaced with yet another version of iOS on MacBooks. Three, all operating systems end up being background tasks in the A.I. era anyway, and Apple knows this and doesn’t bother anymore.

1. This is maybe what happened back in the butterfly keyboard era: Apple were working on the Apple-silicon Macs, and focused most of their resources towards that, hence the Mac computers of that era being underserved. I am clearly speculating, but you get my point.

2. I wonder what part of these 6% the Mac enthusiasts are responsible for. Maybe 5%? 10%? I’m pretty sure most of the Mac revenue comes from users who won’t pay attention to all of this.


2026.07: Aggregators and AI

Welcome back to This Week in Stratechery! As a reminder, each week, every Friday, we’re sending out this overview of content in the Stratechery bundle; highlighted links are free for everyone. Additionally, you have complete control over what we send to you. If you don’t want to receive This Week in Stratechery emails (there is no podcast), please uncheck the box in your delivery settings. On that note, here were a few of our favorites this week. This week’s Stratechery video is on Microsoft and Software Survival.

Individualization at Scale. Spotify had a fantastic result in its quarterly earnings, but I thought the earnings call commentary — it was former CEO and founder Daniel Ek’s last — was worth a deeper examination into the reality of modern network companies. Spotify is a music streaming service that everyone is familiar with, but the actual experience of every Spotify user is unique. That’s a feature of every monolithic network-effects company, and explains why such companies are poised to be big winners from AI. — Ben Thompson

CapEx Explosions and Distinctions. The most amazing thing I read this week was a note from a Sharp Tech emailer who observed that the combined projected capital expenditures of Amazon, Google and Meta in 2026 — more than $700 billion — net out to nearly two-thirds of the annual budget for the U.S. Department of Defense. Where is that money going, and how scared should investors be? The answer there varies. On Monday, Ben explained why Google’s plans make perfect sense given the nature of the business and their results over the past few years. Tuesday’s Update told a different story about Amazon: their spending is defensible, but shareholders who are anxious have a few good reasons to be. — Andrew Sharp

The Interviewer Becomes the Interviewee. As Ben’s friend and colleague, I loved the inversion we got from this week’s Stratechery Interview, cross-published from Stripe President John Collison’s Cheeky Pint podcast. This was Collison interviewing Ben, talking about everything from the difference between pre- and post-smartphone Japan, Meta’s allergy to advertising evangelism, why Ben doesn’t cover TikTok as much, and Stratechery’s business model (which was in part enabled by Stripe). The 90-minute conversation was a delight and is also available to watch on YouTube, if you’d like to see what Stripe’s on-premise pub looks like and get pretty jealous. — AS

Google Earnings, Google Cloud Crushes, Search Advertising and LLMs — Google announced a massive increase in CapEx that blew away expectations; the company’s earnings results explain why the increase is justified.

Amazon Earnings, CapEx Concerns, Commodity AI — Amazon’s massive CapEx increase makes me much more nervous than Google’s, but it is understandable.

Spotify Earnings, Individualized Networks, AI and Aggregation — Spotify’s nature as a content network means that AI is a sustaining technology, particularly because they have the right business model in place.

An Interview with Ben Thompson by John Collison on the Cheeky Pint Podcast — An interview with me by John Collison on the Cheeky Pint podcast about AI, ads, and the history of Stratechery.

Takaichi, Tanking and Legalization Lessons — On landslide elections in Japan, fixing a mess in the NBA, and a defining political challenge for the next generation in the United States.
Ferrari Luce
Apple Losing Control
The Great Golden Age of Antibiotics
The Epochal Ultra-Supercritical Steam Turbine
Pending Taiwan Arms Sales; Jimmy Lai Sentenced; Takaichi Secures a Supermajority; AI Models as Propaganda Vectors
Radical Transparency Roundup, The Tanking Race Gets Disgusting, Giannis Takes a Share in Kalshi
Spotify Spreads Its Wings, CapEx Explosions and Distinctions, Q&A on Viral AI Tweets, Anthropic, Giannis


Moving away from Nextcloud

I have used Nextcloud for a long time. In fact, I have used Nextcloud from before it was Nextcloud - before the fork of Owncloud. And while I have not used many of its features - just sync, calendar, and contacts - I’ve been a very happy user for a long time. Until a year or so ago, at least.

I’ve had a worry, at the back of my mind, for a while, that Nextcloud is trying to do too much. A collaborative document editor. An email client. A voice/video conferencing tool, and so on. I’m sure that, in some contexts, this is amazing, and convenient. For me, as someone who typically prefers a piece of software to do one thing well, it left me a bit uneasy. But that was not, in itself, enough of a reason for me to switch.

A year or so ago, I had problem after problem keeping files in sync. I routinely got error messages about the database (or files; I don’t quite remember) being locked. And, for me, that was the mainstay of Nextcloud, and indeed the reason why I started to use it in the first place. I tried all sorts of things, including setting up redis, and trying other memcache options, even though I am the only regular user. I could not get it to sync reliably. And I really did try, using the voluminous logs to try to determine what was going wrong. But I failed.

And so I started considering other options. Did I actually need Nextcloud at all? I’ve moved to Syncthing for syncing, and so far, that has been working fine. It is fast, and appears to be reliable. I should probably write about it at some point.

Using Nextcloud to sync photos from my phone was not too bad, but from Sandra’s iPhone, it did not work well. I have switched to Immich for photo sync / gallery, and I’ve been very happy with it.

For contacts and calendar sync - DAV - I am using Radicale. The main annoyance is that Sandra cannot invite me (or anyone) to appointments using the iOS or macOS calendar. For me, I’ve just given Sandra write access to my calendar, so that she can add events directly, but it is far from ideal. I’ve tried using Radicale’s server-side email functionality, and that is not suitable for my needs, as it sends out far too many emails. But, for now, Radicale is tolerable, even if I might try to find another option at some point.

And that just leaves the directories which I share via Nextcloud and mount in my file browser. Stuff that I don’t need on my computer, but still want to access. For that, I’m going back to samba. It works.

And so, once I’ve finalised this and tested it and given it some time to bed in, I will turn off the Nextcloud server.
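For what it’s worth, the samba side of this is minimal. Something like the following is all it takes - the share name, path, and user below are placeholders rather than my actual setup:

    # /etc/samba/smb.conf on the server
    [archive]
        path = /srv/archive
        read only = no
        valid users = neil

    # on the client, mount it where the file browser can see it
    sudo mount -t cifs //server/archive /mnt/archive -o username=neil,uid=$(id -u)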

Martin Fowler Yesterday

Fragments: February 13

I've been busy traveling this week, visiting some clients in the Bay Area and attending The Pragmatic Summit. So I've not had as much time as I'd hoped to share more thoughts from the Thoughtworks Future of Software Development Retreat . I'm still working through my notes and posting fragments - here are some more: ❄        ❄ What role do senior developers play as LLMs become established? As befits a gathering of many senior developers, we felt we still have a bright future, focusing more on architectural issues than the messy details of syntax and coding. In some cases, folks who haven't done much programming in the last decade have found LLMs allow them to get back to that, and managing LLM agents has a lot of similarities to managing junior developers. One attendee reported that although their senior developers were very resistant to using LLMs, when those senior developers were involved in an exercise that forced them to do some hands-on work with LLMs, a third of them were instantly converted to being very pro-LLM. That suggests that practical experience is important to give senior folks credible information to judge the value, particularly since there have been striking improvements to models in just the last couple of months. As was quipped, some negative opinions of LLM capabilities "are so January". ❄        ❄ There's been much angst posted in recent months about the fate of junior developers, as people are worried that they will be replaced by untiring agents. This group was more sanguine about this, feeling that junior developers will still be needed, if nothing else because they are open-minded about LLMs and familiar with using them. It's the mid-level developers who face the greatest challenges. They formed their career without LLMs, but haven't gained the level of experience yet to fully drive them effectively in the way that senior developers do. LLMs could be helpful to junior developers by providing an always-available mentor, capable of teaching them better programming. Juniors should, of course, have a certain skepticism of their AI mentors, but they should be skeptical of fleshy mentors too. Not all of us are as brilliant as I like to think that I am. ❄        ❄ Attendee Margaret-Anne Storey has published a longer post on the problem of cognitive debt . I saw this dynamic play out vividly in an entrepreneurship course I taught recently. Student teams were building software products over the semester, moving quickly to ship features and meet milestones. But by weeks 7 or 8, one team hit a wall. They could no longer make even simple changes without breaking something unexpected. When I met with them, the team initially blamed technical debt: messy code, poor architecture, hurried implementations. But as we dug deeper, the real problem emerged: no one on the team could explain why certain design decisions had been made or how different parts of the system were supposed to work together. The code might have been messy, but the bigger issue was that the theory of the system, their shared understanding, had fragmented or disappeared entirely. They had accumulated cognitive debt faster than technical debt, and it paralyzed them. I think this is a worthwhile topic to think about, but as I ponder it, I look at it in a similar way to how I look at Technical Debt . Many people focus on technical debt as the bad stuff that accumulates in a sloppy code base - poor module boundaries, bad naming etc.
The term I use for bad stuff like that is cruft . I use the technical debt metaphor as a way to think about how to deal with the costs that the cruft imposes. Either we pay the interest - making each further change to the code base a bit harder, or we pay down the principal - doing explicit restructuring and refactoring to make the code easier to change. What is this separation of the cruft and the debt metaphor in the cognitive realm? I think the equivalent of cruft is ignorance - both of the code and the domain the code is supporting. The debt metaphor then still applies: either it costs more to add new capabilities, or we have to make an explicit investment to gain knowledge. The debt metaphor reminds us that which one we do depends on the relative costs of each. With cognitive issues, those costs apply to both the humans and The Genie . ❄        ❄ Many of us have long been advocating for initiatives to improve Developer Experience (DevEx) to improve the effectiveness of software development teams. Laura Tacho commented: "The Venn Diagram of Developer Experience and Agent Experience is a circle." Many of the things we advocate for developers also enable LLMs to work more effectively too. Smooth tooling and clear information about the development environment help LLMs figure out how to create code quickly and correctly. While there is a possibility that The Genie's Galaxy Brain can comprehend a confusing code base, there's growing evidence that good modularity and descriptive naming are as good for the transformer as they are for more squishy neural networks. This is getting recognized by software development management, leading to efforts to smooth the path for the LLM. But as Laura observed, it's sad that this implies that the execs won't make the effort for humans that they are making for the robots. ❄        ❄ IDEs still have a future, but need to incorporate LLMs into their working. One way is to use LLMs to support things that cannot be done with deterministic methods, such as generating code from natural language documents. But there are plenty of tasks where you don't want to use an LLM - they are a horribly inefficient way to rename a function, for example. Another role for LLMs is to help users use the IDE effectively - after all, modern IDEs are complex tools, and few users know how to get the most out of them. (As a long-time Emacs user, I sympathize.) An IDE can help the user select when to use an LLM for a task, when to use the deterministic IDE features, and when to choreograph a mix of the two. Say I have "person" in my domain and I want to change it to "contact". It appears in function names, field names, documentation, test cases. A simple search-replace isn't enough. But rather than have the LLM operate on the entire code base, maybe the LLM chooses to use the IDE's refactoring capabilities on all the places it sees - essentially orchestrating the IDE's features. An attendee noted that analysis of renames in an IDE indicated that they occur in clusters like this, so it would be a useful capability. ❄        ❄ Will two-pizza teams shrink to one-pizza teams because LLMs don't eat pizza - or will we have the same size teams that do much more? I'm inclined to the latter: there's something about the two-pizza team size that effectively balances the benefits of human collaboration with the costs of coordination.
That also raises a question about the shape of pair programming, a question that came up during the panel I had with Gergely Orosz and Kent Beck at The Pragmatic Summit. There seems to be a common notion that the best way to work is to have one programmer driving a few (or many) LLM agents. But I wonder if two humans driving a bunch of agents would be better, combining the benefits of pairing with the greater code-generative ability of The Genies. ❄        ❄        ❄        ❄        ❄ Aruna Ranganathan and Xingqi Maggie Ye write in the Harvard Business Review: In an eight-month study of how generative AI changed work habits at a U.S.-based technology company with about 200 employees, we found that employees worked at a faster pace, took on a broader scope of tasks, and extended work into more hours of the day, often without being asked to do so. While this may sound like a dream come true for leaders, the changes brought about by enthusiastic AI adoption can be unsustainable, causing problems down the line. Once the excitement of experimenting fades, workers can find that their workload has quietly grown and feel stretched from juggling everything that's suddenly on their plate. That workload creep can in turn lead to cognitive fatigue, burnout, and weakened decision-making. The productivity surge enjoyed at the beginning can give way to lower quality work, turnover, and other problems. ❄        ❄        ❄        ❄        ❄ Camille Fournier : The part of "everyone becomes a manager" in AI that I didn't really think about until now was the mental fatigue of context switching and keeping many tasks going at once, which of course is one of the hardest parts of being a manager and now you all get to enjoy it too. There's an increasing feeling that there's a shift coming to our profession where folks will turn from programmers engaged with the code to supervisory programmers herding a bunch of agents. I do think that supervisory or not, programmers will still be accountable for the code generated under their watch, and it's an open question whether increasing context-switching will undermine the effectiveness of driving many agents. This would lead to practices that seek to harvest the parallelism of agents while minimizing the context-switching. Whatever route we go down, I expect a lot of activity in exploring what makes an effective workflow for supervisory programming in the coming months.

Pete Warden Yesterday

Announcing Moonshine Voice

Today we're launching Moonshine Voice , a new family of on-device speech to text models designed for live voice applications, and an open source library to run them . They support streaming , doing a lot of the compute while the user is still talking so your app can respond to user speech an order of magnitude faster than alternatives , while continuously supplying partial text updates. Our largest model has only 245 million parameters , but achieves a 6.65% word error rate on HuggingFace's OpenASR Leaderboard compared to Whisper Large v3, which has 1.5 billion parameters and a 7.44% word error rate. We are optimized for easy integration with applications, with prebuilt packages and examples for iOS , Android , Python , MacOS , Windows , Linux , and Raspberry Pis . Everything runs on the CPU with no NPU or GPU dependencies, and the code and streaming models are released under an MIT License . We've designed the framework to be "batteries included", with microphone capture, voice activity detection, speaker identification (though our diarization has room for improvement), speech to text, and even intent recognition built-in, and available through a common API on all platforms. As you might be able to tell, I'm pretty excited to share this with you all! We've been working on this for the last 18 months, and have been dogfooding it in our own products, and I can't wait to see what you all build with it. Please join our Discord if you have questions, and if you do find it useful, please consider giving the repository a star on GitHub; that helps us a lot.

matduggan.com Yesterday

The Small Web is Tricky to Find

One of the most common requests I've gotten from users of my little Firefox extension ( https://timewasterpro.xyz ) has been more options around the categories of websites that you get returned. This required me to go through and parse the website information to attempt to put them into different categories. I tried a bunch of different approaches but ended up basically looking at the websites themselves, seeing if there was anything that looked like a tag or a hint on each site. This is the end conclusion of my effort at putting stuff into categories. Unknown just means I wasn't able to get any sort of data about it. This is the result of me combining Ghost, Wordpress and Kagi Small Web data sources. Interestingly one of my most common requests is "I would like less technical content" which as it turns out is tricky to provide because it's pretty hard to find. These sites sorta exist, but less technical users don't seem to have bought into the value of the small web and owning your own web domain (or if they have, I haven't been able to figure out a reliable way to find them). This is an interesting problem, especially because a lot of the tools I would have previously used to solve this problem are....basically broken. It's difficult for me to really use Google web search to find anything at this point even remotely like "give me all the small websites" because everything is weighted to steer me away from that towards Reddit. So anything that might be a little niche is tricky to figure out. So there's no point in building a web extension with a weighting algorithm to return less technical content if I cannot find a big enough pool of non-technical content to surface. It isn't that these sites don't exist, it's just that we never really figured out a way to reliably surface "what is a small website". So from a technical perspective I have a bunch of problems. I think I can solve....some of these, but the more I work on the problem the more I'm realizing that the entire concept of "the small web" had a series of pretty serious problems. First I need to reliably sort websites into a genre, which can be a challenge when we're talking about small websites because people typically write about whatever moves them that day. Most of the content on a site might be technical, but some of it might not be. Big sites tend to be more precise with their SEO settings but small sites that don't care don't do that, so I have fewer reliable signals to work with. Then I need to come up with a lot of different feeding systems for independent websites. The Kagi Small Web was a good starting point, but Wordpress and Ghost websites have a much higher ratio of non-technical content. I need those sites, but it's hard to find a big batch of them reliably. Once I have the type of website as a general genre and I have a series of locations, then I can start to reliably distribute the types of content you get. Google was the only place on Earth sending any traffic there. Because Google was the only one who knew about it, there never needed to be another distribution system. Now that Google is broken, it's almost impossible to recreate that magic of becoming the top of the list for a specific subgenre without a ton more information than I can get from public records.
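For illustration, here is a minimal sketch of that tag-sniffing idea in Python: fetch a site's homepage and guess a genre from its meta keywords, meta description, and title. The category buckets, the guess_category helper, and the reliance on plain urllib and regexes are my own illustrative assumptions, not the extension's actual code.

```python
# Illustrative sketch (not the extension's real code): guess a site's genre
# from hints in its own HTML, i.e. meta keywords, meta description, and <title>.
import re
import urllib.request

# Hypothetical keyword buckets; a real list would need to be much larger.
CATEGORIES = {
    "technology": {"programming", "software", "linux", "developer", "code", "tech"},
    "food": {"recipe", "cooking", "baking", "food"},
    "travel": {"travel", "hiking", "trip", "itinerary"},
    "personal": {"journal", "diary", "life", "family"},
}

def fetch_html(url: str) -> str:
    """Download the homepage; for small sites this is usually enough."""
    req = urllib.request.Request(url, headers={"User-Agent": "genre-sniffer/0.1"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")

def extract_hints(html: str) -> str:
    """Pull out the bits of markup that tend to say what a site is about."""
    hints = []
    for name in ("keywords", "description"):
        # Assumes the common attribute order name="..." content="...".
        pattern = r'<meta[^>]+name=["\']' + name + r'["\'][^>]+content=["\']([^"\']*)["\']'
        match = re.search(pattern, html, re.IGNORECASE)
        if match:
            hints.append(match.group(1))
    title = re.search(r"<title[^>]*>(.*?)</title>", html, re.IGNORECASE | re.DOTALL)
    if title:
        hints.append(title.group(1))
    return " ".join(hints).lower()

def guess_category(url: str) -> str:
    """Return the best-matching bucket, or 'unknown' if nothing matches."""
    try:
        words = set(re.findall(r"[a-z]+", extract_hints(fetch_html(url))))
    except Exception:
        return "unknown"  # unreachable site, no usable markup, etc.
    scores = {cat: len(words & keywords) for cat, keywords in CATEGORIES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

if __name__ == "__main__":
    print(guess_category("https://example.com"))
```

As the post notes, plenty of small sites never set these meta tags at all, which is exactly why so many of them end up in the "unknown" bucket.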

ava's blog Yesterday

focus timer

I recently felt like I couldn't trust my own judgment anymore about how much time I put into things. I could sit at my desk for 8 hours, but did I really study that much? Work that much? Volunteer that much? Blog that much? What about my breaks (chatting, videos, toilet, kitchen)? Even if I felt like I did a lot that day, I wasn't sure how much already, since so much bled together. I do not work in fixed increments (Pomodoro etc.) or set specific times when to start something. I just flow from one thing to another. I could summarize and translate a case, and then midway take a break to chat or watch a video, which then could inspire a blog post I'd write, and then I make some food, I have an idea for some pixel art and draw it, then after that I start studying, and when I need a break I continue the case again... it sounds a bit messier than it is in practice. It warps my perception though, especially because it all happens in the same location and on the same device. So I needed a lightweight timer that would keep the time, let me label it, and then log it in a file. In the end, I could see how much I did of each thing in the file. I asked for recommendations first, then couldn't really find what I was looking for otherwise, so I settled on AI-generating a solution for me. I couldn't add learning Python onto my busy schedule and waste it on a measly timer when I should be doing other things, and I needed it right that day, so I thought that's the perfect dirty work for an LLM for once. The timer has a 'Start' button that switches to 'Pause' once it is pressed, and 'Stop' opens a dialogue window to assign a label (= type in a word). After a label is assigned, it gets saved to a CSV file. In the CSV file, it shows date, current local time, the given label, and the timer time. The way this is read depends on your locale and how you set the separator options. For me, it is set like this: Different locale or separator detection can show the numbers in separate columns instead. If anyone needs it, here is the AI-generated code with some manual edits by me (added symbols, adjusted how date and time is displayed in the CSV). Probably silly as hell code, but what do I know. Put it into a file and allow it to run as executable. Reply via email Published 13 Feb, 2026
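For illustration, here is a minimal sketch of such a timer in Python with tkinter: a Start/Pause toggle, a Stop button that asks for a label, and one CSV row per session. It is not the original AI-generated script from the post; the file name (focus_log.csv) and the exact column layout are assumptions.

```python
# Minimal illustrative focus timer: Start/Pause toggle, Stop asks for a label,
# then one row (date, time, label, seconds) is appended to a CSV file.
# This is a sketch, not the script from the post; names and layout are assumed.
import csv
import time
import tkinter as tk
from datetime import datetime
from tkinter import simpledialog

LOGFILE = "focus_log.csv"  # assumed file name

class FocusTimer:
    def __init__(self, root):
        self.root = root
        self.elapsed = 0.0       # accumulated seconds from earlier runs
        self.started_at = None   # monotonic timestamp while running, else None
        self.display = tk.StringVar(value="00:00:00")
        tk.Label(root, textvariable=self.display, font=("Courier", 24)).pack(padx=20, pady=10)
        self.start_btn = tk.Button(root, text="Start", command=self.toggle)
        self.start_btn.pack(side="left", padx=10, pady=10)
        tk.Button(root, text="Stop", command=self.stop).pack(side="right", padx=10, pady=10)
        self.tick()

    def toggle(self):
        # The Start button becomes Pause while the timer is running.
        if self.started_at is None:
            self.started_at = time.monotonic()
            self.start_btn.config(text="Pause")
        else:
            self.elapsed += time.monotonic() - self.started_at
            self.started_at = None
            self.start_btn.config(text="Start")

    def current(self):
        running = time.monotonic() - self.started_at if self.started_at else 0.0
        return self.elapsed + running

    def tick(self):
        secs = int(self.current())
        self.display.set(f"{secs // 3600:02d}:{(secs % 3600) // 60:02d}:{secs % 60:02d}")
        self.root.after(500, self.tick)

    def stop(self):
        # Ask for a label, append one CSV row, then reset the timer.
        label = simpledialog.askstring("Label", "What was this session for?", parent=self.root)
        if label:
            now = datetime.now()
            with open(LOGFILE, "a", newline="") as f:
                csv.writer(f).writerow(
                    [now.strftime("%Y-%m-%d"), now.strftime("%H:%M"), label, int(self.current())])
        self.elapsed = 0.0
        self.started_at = None
        self.start_btn.config(text="Start")

if __name__ == "__main__":
    root = tk.Tk()
    root.title("Focus timer")
    FocusTimer(root)
    root.mainloop()
```

Each stopped session would then append a line like 2026-02-13,14:05,study,1800; how those values show up in columns depends on locale and separator settings, as the post notes.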


David Cain

This week on the People and Blogs series we have an interview with David Cain, whose blog can be found at raptitude.com . Tired of RSS? Read this in your browser or sign up for the newsletter . The People and Blogs series is supported by Markus Heurung and the other 116 members of my "One a Month" club. If you enjoy P&B, consider becoming one for as little as 1 dollar a month. I'm a Canadian blogger and entrepreneur. I started doing this back when I was in a totally different line of work. I was a surveyor for an engineering company, and where I live the industry slows down in the winter because of the harsh cold, so I began poking around on the internet a little more than usual. That led to discovering blogs, and the possibility of doing that for a living. I had always been into writing, so having a way to publish my thoughts and for interested parties to read them and care was a revelation. That was 2008 or so, when the internet was a very different place. Social media was a niche and nerdy thing, big companies had no idea how to use the internet, and we were not all algorithmized. I miss that time. Aside from what I write about (see below), I'm into indoor climbing, reading, religion, history, and lifting weights. I'm also into the idea of the "Oldschool Internet." As you know if you're over 30, the internet used to feel different than it does now. It was freer, more creative and weird, and less dominated by big platforms and algorithms. I have a deep, deep nostalgia for it and I wish I could recreate that feeling. When I was goofing around on the internet at work I found a blog about blogging for a living, and one day decided I would do that. I had always been interested in the inner world of the human being. I was always thinking about this conundrum of having a mind and a body. You have no instruction manual, and you have to go and live a life and try to be happy. I sat down and listed like a hundred obscure ideas I'd been wanting to tell the world. What I didn't realize is that my obsession with the inner human world and managing the human condition was due to having undiagnosed ADHD, which made ordinary life stuff very complicated and difficult. My challenges led me to reading piles of self-help and spiritual-flavored stuff. A lot of it was crap but I did learn quite a bit about making the most of the mess that is human life, and shared what I found. The blog I started was called Raptitude . It was just a made-up word, combining "rapt" and "aptitude." The idea is that you can get better at appreciating life, at being rapt by the day-to-day experience of being alive. Many of my posts were little tricks I'd figured out for getting yourself to do things, not realizing it was coming from a rather crippling psychiatric condition. I finally got diagnosed at age 40, after twelve years of blogging. I always tried to stay away from writing in the kind of mushy, therapeutic tone that dominates the self-help and spiritual space. I wrote about weird and hypothetical things instead, and I found an audience pretty quickly. This year I launched a second site to help other "productivity-challenged" people. It's called How to Do Things , and it's more practical and less philosophical than Raptitude, and is aimed at adults with ADHD. Today my writing is more focused, less wild. But Raptitude is the same blog it was 17 years ago when I first launched it. I have ideas all the time and take voice notes when I'm out and about. If I'm home I just mind-dump into a text document.
Later I go through my ideas and find one I think I could actually write about. I play around with it, find an angle, and start typing. I do a lot of moving things around, cutting and pasting. Sometimes I'll write 3 or 4 thousand words and end up with a 1200-word post. Sometimes I even delete the original idea and just riff on a tangential idea. It is not an efficient or structured process, it's just habit. I take forever to write posts, even now. I don't do drafts exactly, I just barf out the idea, try to find a bottom-line point, then revise what I've written to point to that bottom-line idea. I do a couple of passes to try to shorten it, which just as often ends up lengthening it. Then I add pictures with funny captions so people don't get bored and publish it. I don't involve anyone else in the writing and there are typos sometimes. I have a home office and that's pretty much exclusively where I work. Everything I need is there, my desk has a lot of space, I have multiple monitors. I play instrumental music. Classical or ambient electronic. I've worked in coffee shops, and I do get inspired by being out in the world. But I always feel guilty about taking up their seats for too long, and the travel time seems like a waste so I don't do that much. I have always used WordPress, and self-host on BigScoots. I love the host and am so glad I switched from a large, well-known terrible company I will not name. WordPress is good and a lot less clunky than it used to be. Today I would just do a Substack. I still might switch to Substack one day. It seems like a well-contained environment that eliminates a lot of technical and design considerations that can suck up writing time. You're also built into a network of other writers and readers. What I would do differently is learn to make a kind of content that doesn't take long to make. I take forever to do one piece and it is still hard. Another thing I'd do differently is define my topic more narrowly. I write about anything pertaining to human life, which makes it difficult to know what to write about, and difficult to do any marketing or intentional growth, because there is no identifiable crowd or demographic that I know would be into my "topic." It costs a fortune, all told, because it's a business and not just a blog. Hosting isn't bad – a few hundred dollars a year. I pay someone on a monthly basis to update and maintain the site and deal with downtime and crashes and other stuff that used to blow up my life once a year or so. I'm not a super savvy technical person so this is necessary. The highest cost is the email management system, which is essential for the layers and layers of emails I send. With 40,000 people in the system it costs over $400 a month. There may be cheaper options but switching would be too big a pain. I also have tons of little subscription costs that have become necessary for product delivery (Dropbox for example). Altogether my monthly business expenses are more than my rent. I make a full-time living from my blog by offering products to my readers. I also have a Patreon. The whole operation would be way cheaper to run if I didn't sell anything. I am all for monetizing personal blogs. Good content is hard to make and takes time, and if you want to offer something bigger than blog posts, you have to charge for it or it doesn't get made. I am a fan of David Pinsof's Everything is Bullshit and Scott Alexander's Astral Codex Ten , both of which are Substacks now. Mostly I read books these days.
I just want to say this was a lot of fun. Not to be the old man in the room but the internet has changed immensely since I started in 2008. Part of what has dropped away (at least for me) has been being in the “world” of blogs. Answering these questions and reading other people’s answers on your site has reminded me that some semblance of that community spirit still exists. Thanks for keeping it alive. Now that you're done reading the interview, go check the blog and subscribe to the RSS feed . If you're looking for more content, go read one of the previous 128 interviews . Make sure to also say thank you to Brennan Kenneth Brown and the other 116 supporters for making this series possible.

iDiallo Yesterday

Factional Drift: We cluster into factions online

Whenever one of my articles reaches some popularity, I tend not to participate in the discussion. A few weeks back, I told a story about me, my neighbor and a UHF remote . The story took on a life of its own on Hackernews before I could answer any questions. But reading through the comment section, I noticed a pattern in how comments form. People were not necessarily talking about my article. They had turned into factions. This isn't a complaint about the community. Instead it's an observation I made many years ago but didn't have the words to describe. Now I have the articles to explore the idea. The article asked this question: is it okay to use a shared RF remote to silence a loud neighbor ? The comment section on hackernews split into two teams. Team Justice, who believed I was right to teach my neighbor a lesson. And then Team Boundaries, who believed I was "a real dick". But within hours, the thread stopped being about that question. People self-sorted into tribes, not by opinion on the neighbor, but by identity. The tinkerers joined the conversation. If you only looked through the comment section without reading the article, you'd think it was a DIY thread on how to create a UHF remote. They turned the story into one about gadget showcasing. TV-B-Gone, Flipper Zeros, IR blasters on old phones, a guy using an HP-48G calculator as a universal remote. They didn't care about the neighbor. They cared about the hack. Then came the apartment warriors. They bonded over their shared suffering experienced when living in an apartment. Bad soundproofing, cheap landlords, one person even proposed a tool that doesn't exist yet, a "spirit level for soundproofing". The story was just a mirror for their own pain. The diplomats quietly pushed back on the whole premise. They talked about having shared WhatsApp groups, politely asking, and collective norms. A minority voice, but a distinct one. Why hack someone when you can have a conversation? The Nostalgics drifted into memories of old tech. HAM radios, Magnavox TVs, the first time a remote replaced a channel dial. Generational gravity. Back in my days... Nobody decided to join these factions. They just replied to the comment that felt like their world, and the algorithm and thread structure did the rest. Give people any prompt, even a lighthearted one, and they will self-sort. Not into "right" and "wrong," but into identity clusters. Morning people find morning people. Hackers find hackers. The frustrated find the frustrated. You discover your faction. And once you're in one, the comments from your own tribe just feel more natural to upvote. This pattern might be true for this article, but what about others? I have another article that has gone viral twice . On this one the question was: Is it ethical to bill $18k for a static HTML page? Team Justice and Team Boundaries quickly showed up. "You pay for time, not lines of code," the defenders argued. "Silence while the clock runs is not transparent," the others criticized. But then the factions formed. People self-sorted into identity clusters, each cluster developed its own vocabulary and gravity, and the original question became irrelevant to most of the conversation. Stories about money and professional life pull people downward into frameworks and philosophy. The pricing philosophers exploded into a deep rabbit hole on Veblen goods, price discrimination, status signaling, and perceived value. Referenced books, studies, and the "I'm Rich" iPhone app. This was the longest thread.
The corporate cynics shared war stories about use-it-or-lose-it budgets, contractors paid to do nothing, and organizational dysfunction. Veered into a full government-vs-corporations debate that lasted dozens of comments. The professional freelancers dispensed practical advice. Invoice periodically, set scope boundaries, charge what you're worth. They drew from personal contractor experience. The ethicists genuinely wrestled with whether I did the right thing. Not just "was it legal" but "was it honest." They were ignored. The psychology undergrads were fascinated by the story. Why do people Google during a repair job and get fired? Why does price change how you perceive quality? Referenced Cialdini's "Influence" and ran with it. Long story short, a jeweler was trying to move some turquoise and told an assistant to sell them at half price while she was gone. The assistant accidentally doubled the price, but the stones still sold immediately. The kind of drift between the two articles was different. The remote thread drifted laterally: people sorted by life experience and hobby (gadget lovers found gadget lovers, apartment sufferers found apartment sufferers). The $18k thread drifted deep: people sorted by intellectual framework (economists found economists, ethicists found ethicists, corporate cynics found corporate cynics). The $18k thread even spawned nested debates within subfactions. The Corporate Cynics thread turned into a full government-vs-corporations philosophical argument that had nothing to do with me or the article. But was all this something that just happens with my articles? I needed an answer. So I picked a recent article I enjoyed by Mitchell Hashimoto . And it was about AI, so this was perfect to test if these patterns exist here as well. Now here is a respected developer who went from AI skeptic to someone who runs agents constantly. Without hype, without declaring victory, just documenting what worked. The question becomes: Is AI useful for coding, or is it hype? The result wasn't entirely binary. I spotted 3 groups at first. Those in favor said: "It's a tool. Learn to use it well." Those against it said: "It's slop. I'm not buying it." But then a third group. The fence-sitters (I'm in this group): "Show me the data. What does it cost?" And then the factions appeared. The workflow optimizers used the article as a premise to share their own agent strategy. Form an intuition on what the agent is good at, frame and scope the task so that it is hard for the AI to screw up, small diffs for faster human verification. The defenders of the craft dropped full on manifestos. “AI weakens the mind” then references The Matrix. "I derive satisfaction from doing something hard." This group isn't arguing AI doesn't work. They're arguing it shouldn't work, because the work itself has intrinsic value. The history buffs joined the conversation. There was a riff on early aircraft being unreliable until the DC-3, then the 747. Architects moving from paper to CAD. They were framing AI adoption as just another tool transition in a long history of tool transitions. They're making AI feel inevitable, normal, obvious. The Appeal-to-Mitchell crowd stated that Mitchell is a better developer than you. If he gets value out of these tools you should think about why you can't. The flamewar kicked in! Someone joked: "Why can't you be more like your brother Mitchell?" The Vibe-code-haters added to the conversation. The term 'vibe coding' became a battleground. 
Some using it mockingly, some trying to redefine it. There was an argument that noted the split between this thread (pragmatic, honest) and LinkedIn (hyperbolic, unrealistic). A new variable from this thread was the author's credibility, plus he was replying in the threads. Unlike with my articles, the readers came to this thread with preconceived notions. If I claimed that I am now a full time vibe-coder, the community wouldn't care much. But not so with Mitchell. The quiet ones lose. The Accountants, the Fence-Sitters, they asked real questions and got minimal traction. "How much does it cost?" silence. "Which tool should I use?" minimal engagement. The thread's energy went to the factions that told a better story. One thing to note is that the Workflow Optimizers weren't arguing with the Skeptics. The Craft Defenders weren't engaging with the Accountants. Each faction found its own angle and stayed there. Just like the previous threads. Three threads. Three completely different subjects: a TV remote story, an invoice story, an AI adoption guide. Every single one produced the same underlying architecture. A binary forms. Sub-factions drift orthogonally. The quiet ones get ignored. The entertaining factions win. The type of drift changes based on the article. Personal anecdotes (TV remote) pull people sideways into shared experience. Professional stories ($18k invoice) pull people down into frameworks. Prescriptive guides (AI adoption) pull people into tactics and philosophy. But the pattern, like the way people self-sort, the way factions ignore each other, the way the thread fractures, this remained the same. The details of the articles are not entirely relevant. Give any open-ended prompt to a comment section and watch the factions emerge. They're not coordinated. They're not conscious. They just... happen. For example, the Vibe-Code Haters faction emerged around a single term "vibe coding." The semantic battle became its own sub-thread. Language itself became a faction trigger. Now that you spotted the pattern, you can't unsee it. That's factional drift.

Brain Baking Yesterday

Why Parenting Is Similar To JavaScript Development

Here's a crazy thought: to me, parenting feels very similar to programming in JavaScript. The more I think about it, the more convinced I am. If you're an old fart that's been coding stuff in JavaScript since its inception, you'll undoubtedly be familiar with Douglas Crockford's bibles , or to be more precise, that one tiny booklet from 2008, JavaScript: The Good Parts . That book, covered by a cute O'Reilly butterfly, is only 172 pages long. Contrast that with any tome attempting to do a "definitive guide", like David Flanagan's, which is 1093 pages thick. Ergo, one starts thinking: only a small part of JavaScript is inherently good . And that was 18 years ago. Since then, the EcmaScript standard threw new stuff on top in a steady yearly fashion, giving us weird and wonderful things (Promise chaining! Constants that aren't constants! Private members with that look weirder than ! Nullish coalescing?? Bigger integers!) that arguably can be called syntactic sugar to try and disguise the bitter taste that is released slowly but surely if you chew on JS code long enough. If that's not confusing enough, the JS ecosystem has evolved enormously as well: we now have 20+ languages built on top of JS that compile/transpile to it. We have TypeScript that has its own keyword that has nothing to do with , go nuts! We have ClojureScript that lets you write your React Native components in Clojure that compiles to JS that compiles to Java with Expo that compiles your app! We have and and and god-knows-what-else that replaces and possibly also ? At this point, I'm starting to transpile JS into transpiration. Parenting often feels like Javascript: The Good Parts versus JavaScript: The Definitive Guide . With our two very young children, there are many, many (oh so many) moments where we feel like we're stumbling around in the dark, getting lost in that thick tome that dictates the things that we should be doing. When the eldest has yet another I'll-just-throw-myself-on-the-floor-here moment and the youngest keeps on puking and yelling because he just discovered rolling on his tummy, I forget The Good Parts . To be perfectly frank, in those moments, I often wonder if Crockford had been lying to us. Are there even any good parts at all? We all know JS was cobbled together overnight because Netscape needed "some" language to make static pages a bit more dynamic. A language for the masses! What a monster it has become—in both positive and negative sense. It often feels like Wouter doesn't exist anymore. Instead, there's only daddy. It has been months since I last touched a book, notebook, or fountain pen. It has been months since my wife & I did something together to strengthen our relationship which currently is being reduced to snapping at each other because we're still not perfectly synced when it comes to educational rules. Perhaps just writing and publishing this is reassurance for myself: proof of existence. Hi! This is not a bot! JavaScript is a big mess. Parenting feels like that as well. The ecosystem around JS rapidly changes and only the keenest frontend developer is able to keep up. I have no idea how to keep up with parenting. During our day-to-day struggles, you barely notice that the kids are growing and changing, but when you look back, you're suddenly surprised yet another milestone has passed. Is that part of the Good Parts or the Bad Parts ? Maybe Flanagan's Definitive Guide should be used to smack people on the head that do not obey the latest EcmaScript standard best practices.
I often have the feeling of getting smacked on the head when trying to deal with yet another kid emergency situation. I'm exhausted. Last week I yelled so hard at our eldest that she and I both started crying—she on the outside, me on the inside. I have no idea who I am anymore. I'm not like that. But it seems that I am. Our children successfully managed to bring out the worst in ourselves, even parts that I didn't know were there. I'll let you be the judge of whether that bit belongs in the Good Parts . Yet I love JS. I love its dynamic duck type system (fuck TypeScript), I love its functional , , roots, I love prototypal inheritance. But I often forget about it because it's buried in all that contemporary mud. Of course I love my children, but right now, I can't say that I love parenting, because it's buried in all that attention demanding and shouting that reduces our energy meters to zero in just a few minutes. My wife made a thoughtful remark the other day: We're no longer living. At this point, we're merely surviving. Every single day. As I write this, it's almost 17:30. The kids spent the day at my parents so I don't even have the right to complain. Every minute now, they can come back and the bomb will explode again. There's a little voice in my head that says "just get to the cooking, get them to eat and shove them in bed. Only an hour and a half left." I don't know if that's sad or not. I need to get cooking. Only an hour and a half left. Don't blame me, I no longer live. We're merely surviving. If someone manages to write Parenting: The Good Parts in only 172 pages, let me know. Related topics: / javascript / parenting / By Wouter Groeneveld on 13 February 2026.  Reply via email .

Ruslan Osipov Yesterday

What I won’t write about

I've been writing a lot more over the past year - in fact, I've written at least once a week , and this is article number 60 within the past year. I did this for many reasons: to get better at writing, to get out of a creative rut, to play around with different writing voices, but also because I wanted to move my blog from a dry tech blog to something I myself am a little more excited about. I started this blog in 2012, documenting my experiences with various programming tools and coding languages. I felt like I contributed by sharing tutorials, and having some public technical artifacts helped during job searches. Over the years I branched out - short reviews for books I've read, recounts of my travel (and turning my Prius into a car camper to do so), notes on personal finance… All of this shares a theme: descriptive writing. I feel most confident describing and recounting events and putting together tutorials. It's easy to verify if I'm wrong - an event either happened or didn't, the tool either worked - or didn't. And I was there the whole time. That kind of writing doesn't take much soul and grit, and while it's pretty good at drawing traffic to the site (eh, which is something I don't particularly care about anymore ), I wouldn't call it particularly fulfilling. Creatively, at least. I'm scared to share opinions, because opinions vary and don't have ground truth. It's easier to be completely wrong, or to look like a fool. I don't want to be criticised for my writing. Privacy is a concern too - despite writing publicly, I consider myself to be a private person. So, after 13 years of descriptive writing, I made an effort to experiment in 2025. I wrote down some notes on parenthood, my thoughts on AI and Warhammer , nostalgia, identity, ego… I wrote about writing, too. It's been a scary transition, and it still is. I have to fight myself to avoid putting together yet another tutorial or an observation on modal interfaces . I've been somewhat successful though, as I even wrote a piece on my anxiety about sharing opinions . But descriptive writing continues sneaking in, trying to reclaim the field. You see, I write under my own name. I like the authenticity this affords me, and it's nice not having to make a secret blog (which I will eventually accidentally leak, knowing my forgetfulness). I mean this blog has been running for 14 years now, that's gotta count for something. But writing under my own name also presents a major problem. It's my real name. If you search for "Ruslan Osipov", my site's at the top. I don't hide who I am, and you can quickly confirm my identity by going to my about page . This means that friends, colleagues, neighbors, bosses, government officials - anyone - can easily find my writing. If there are people out there who don't like me - for whatever reason - they can read my stuff too. The more I write, the more I learn that good writing is 1) passionate and 2) vulnerable (it's also well structured, but I have no intention of restructuring this essay - so you'll just have to sit with my fragmented train of thought). It's easy to write about things I'm passionate about. I get passionate about everything I get involved in - from parenting and housework to my work. I write this article in Vim, and I'm passionate enough about that to write a book on the subject . Vulnerability is hard. Good writing is raw, it makes the author feel things, and leaves little bits and pieces of the author scattered on the page. You just can't fake authenticity.
But here’s the thing - real life is messy. Babies throw tantrums, work gets stressful, the world changes in the ways you might not like. That isn’t something you want the whole world to know. Especially if that world involves a prospective employer, for example. So you have to put up a facade, and filter topics that could pose risk. I’m no fool: I’m not going to criticize the company that pays me money. I like getting paid money, it buys food, diapers, and video games. I still think it’s a bit weird and restrictive that a future recruiter is curating my writing today. The furthest I’m willing to push the envelope here is my essay on corporate jobs and self-worth . Curation happens to more than the work-related topics of course. And that might even be a good thing. I don’t just reminisce about my upbringing. It’s a brief jumping off point into my obsession with productivity . Curation is just good taste. You’re not getting my darkest, messiest, snottiest remarks. You’re getting a loosely organized, tangentially related set of ideas. Finding that gradient has been exciting. So, here’s what I won’t write about. I won’t share too many details about our home life. I won’t complain about a bad day at work. I won’t badmouth people. But I will write about what those things feel like - the tiredness, the frustration, the ego.


Attack of the SaaS clones

I cloned Linear's UI and core functionality using Claude Code in about 20 prompts. Here's what that means for SaaS companies.


The Final Bottleneck

Historically, writing code was slower than reviewing code. It might not have felt that way, because code reviews sat in queues until someone got around to picking them up. But if you compare the actual acts themselves, creation was usually the more expensive part. In teams where people both wrote and reviewed code, it never felt like "we should probably program slower." So when more and more people tell me they no longer know what code is in their own codebase, I feel like something is very wrong here and it's time to reflect. Software engineers often believe that if we make the bathtub bigger , overflow disappears. It doesn't. OpenClaw right now has north of 2,500 pull requests open. That's a big bathtub. Anyone who has worked with queues knows this: if input grows faster than throughput, you have an accumulating failure. At that point, backpressure and load shedding are the only things that keep a system operating. If you have ever been in a Starbucks overwhelmed by mobile orders, you know the feeling. The in-store experience breaks down. You no longer know how many orders are ahead of you. There is no clear line, no reliable wait estimate, and often no real cancellation path unless you escalate and make noise. That is what many AI-adjacent open source projects feel like right now. And increasingly, that is what a lot of internal company projects feel like in "AI-first" engineering teams, and that's not sustainable. You can't triage, you can't review, and many of the PRs cannot be merged after a certain point because they are too far out of date. And the creator might have lost the motivation to actually get it merged. There is huge excitement about newfound delivery speed, but in private conversations, I keep hearing the same second sentence: people are also confused about how to keep up with the pace they themselves created. Humanity has been here before. Many times over. We already talk about the Luddites a lot in the context of AI, but it's interesting to see what led up to it. Mark Cartwright wrote a great article about the textile industry in Britain during the industrial revolution. At its core was a simple idea: whenever a bottleneck was removed, innovation happened downstream from that. Weaving sped up? Yarn became the constraint. Faster spinning? Fibre needed to be improved to support the new speeds until finally the demand for cotton went up and that had to be automated too. We saw the same thing in shipping that led to modern automated ports and containerization. As software engineers we have been here too. Assembly did not scale to larger engineering teams, and we had to invent higher level languages. A lot of what programming languages and software development frameworks did was allow us to write code faster and to scale to larger code bases. What it did not do up to this point was take away the core skill of engineering. While it's definitely easier to write C than assembly, many of the core problems are the same. Memory latency still matters, physics is still our ultimate bottleneck, algorithmic complexity still makes or breaks software at scale. When one part of the pipeline becomes dramatically faster, you need to throttle input. Pi is a great example of this. PRs are auto-closed unless people are trusted. It takes OSS vacations . That's one option: you just throttle the inflow. You push back against your newfound powers until you can handle them. But what if the speed continues to increase? What downstream of writing code do we have to speed up?
Sure, the pull request review clearly turns into the bottleneck. But it cannot really be automated. If the machine writes the code, the machine better review the code at the same time. So what ultimately comes up for human review would already have passed the most critical possible review of the most capable machine. What else is in the way? If we continue with the fundamental belief that machines cannot be accountable, then humans need to be able to understand the output of the machine. And the machine will ship relentlessly. Support tickets of customers will go straight to machines to implement improvements and fixes, for other machines to review, for humans to rubber stamp in the morning. A lot of this sounds both unappealing and reminiscent of the textile industry. The individual weaver no longer carried responsibility for a bad piece of cloth. If it was bad, it became the responsibility of the factory as a whole and it was just replaced outright. As we’re entering the phase of single-use plastic software, we might be moving the whole layer of responsibility elsewhere. But to me it still feels different. Maybe that’s because my lowly brain can’t comprehend the change we are going through, and future generations will just laugh about our challenges. It feels different to me, because what I see taking place in some Open Source projects, in some companies and teams feels deeply wrong and unsustainable. Even Steve Yegge himself now casts doubts about the sustainability of the ever-increasing pace of code creation. So what if we need to give in? What if we need to pave the way for this new type of engineering to become the standard? What affordances will we have to create to make it work? I for one do not know. I’m looking at this with fascination and bewilderment and trying to make sense of it. Because it is not the final bottleneck. We will find ways to take responsibility for what we ship, because society will demand it. Non-sentient machines will never be able to carry responsibility, and it looks like we will need to deal with this problem before machines achieve this status. Regardless of how bizarre they appear to act already. I too am the bottleneck now . But you know what? Two years ago, I too was the bottleneck. I was the bottleneck all along. The machine did not really change that. And for as long as I carry responsibilities and am accountable, this will remain true. If we manage to push accountability upwards, it might change, but so far, how that would happen is not clear.

ava's blog 2 days ago

when exercise started helping me

Nowadays, exercising really always saves me without fail. I realized that today, after again feeling absolutely terrible but then dragging myself out of bed to at least walk on my foldable treadmill. I started wondering when this change exactly happened and what led to it, because I used to hate exercise. I didn't understand people who said it helped with depression. When did it truly start being a reliable way to improve my mental state? What I struggled with back then were most definitely access, energy and health . I neither had a gym membership, nor did I have gym equipment at home. Wanting to exercise consisted of pulling out some yoga mat to do crunches like once a year, or going out for a run. Both suck when you haven't built it up over weeks or months! It was immediately difficult, painful and exhausting. My undiagnosed autoimmune diseases added more pain on top; I was just too inflamed to really work out well or even recover for days on end, and I dealt with a lot of fatigue on top of everything. That makes starting and keeping at it almost impossible, except for unexpected good phases. Without at least showing up semi-regularly, I made no progress, and every attempt I did make was immediately very exhausting with no reward. I felt like I couldn't last long enough in a session or exercise regimen to even reap the benefits. It didn't help at all that I immediately always chose something rather difficult or exhausting, as if I had to jump onto a level at which I expected a "default" human being to be at. So what changed is: I was diagnosed and found a working treatment. This one is big; so much pain and fatigue gone. Training results finally showed and made getting motivated and back on track easier. Some exercise even started helping with the residual pain and symptoms. I searched for things to do that were easier on me. I shouldn't immediately run or do crunches. Instead, even just walking, yoga, and some easy Pilates are enough, and more manageable to someone in my position. They are easier to pick back up after a few weeks and allow great control over varying the difficulty. With running, for example, I had no room to vary anything; even just the act of running was so exhausting back then that adjusting speed made no difference. With other forms of movement, I could build something without feeling totally exhausted. I signed up for the gym and just made showing up and walking on the treadmill a goal, and I watched videos or listened to podcasts. This was needed, because when I started it, I was still recovering from a really bad flare up and couldn't be trusted to walk around unsupervised in the forest somewhere. At the gym while just walking, I could slowly build up my exercise tolerance and endurance while seeing it as a sort of "me time" with some enjoyable videos, and with people around in case I suddenly started feeling dizzy or anything, and with some rails to hold on to. By saving videos for this time, I made it more entertaining and had something to look forward to on it. I invested in a spinning bike, and later in a foldable treadmill for at home use. I sometimes feel too bad physically or mentally to make it to the gym (or it is closed), and this enables me to still work out without being discouraged by my issues, time or weather. It also takes away the calculation of "Is it even worth showing up?" if I might just feel like 20 minutes of treadmill that day. Better 20 minutes than nothing! 
With all that, I slowly built up enough of a baseline fitness for me that wouldn't make training annoying and just exhausting. It was easier to get back in after a break, and every time I had to take one, I had lost less progress than before. I got better and better at finding my sweet spot, neither under- nor overexercising. The more times I actually pushed myself to exercise despite feeling awful mentally and left it happier, the more it didn't feel like an outlier, but a guaranteed outcome. That made it easier to show up despite everything. It's still hard, but I know now that it is basically like a button to improve my mood, and who doesn't want that? That behavior just keeps getting reinforced every time I can get myself out of a hole with this. It gets harder and harder to convincingly tell myself " No, this time will be different; you'll feel the same or worse when you do this. You should stay in bed instead. " Lying down has a much worse track record: It never makes me feel better. Reply via email Published 12 Feb, 2026
It's still hard, but I know now that it is basically like a button to improve my mood, and who doesn't want that?

Rik Huijzer 2 days ago

Jesse Strang in 2019

At a shooting at a high school in Tumbler Ridge, Canada, 10 people were shot, as reported at 03:30 in Dutch national media. There are two suspect names going around, however. For example, on Feb 11, 2026, 7:25 PM IST, the _Hindustan Times_ reported the name Jesse Strang. A few hours later, however, the name was suddenly reported as Jesse Van Rootselaar without mention of the previous name. For Jesse Strang, there is the following YouTube video posted on the 18th of March 2019. Screensh...
