Stone Tools

dBASE on the Kaypro II

The world that might have been has been discussed at length. In one possible world, Gary Kildall's CP/M operating system was chosen over MS-DOS to drive IBM's then-new "Personal Computer." In that world, Bill Gates's hegemony over the trajectory of computing history never happened. Kildall wasn't constantly debunking the myth of an airplane joyride which supposedly cost him Microsoft-levels of industry dominance. He'd likely be alive and innovating in the industry to this day. Kildall's story is pitched as a "butterfly flaps its wings" inflection point that changed computing history. The truth is, of course, there were many points along our timeline which led to Kildall's fade and untimely death. Rather, I'd like to champion what Kildall did.

Kildall did co-host Computer Chronicles with Stewart Cheifet for seven years. Kildall did create the first CD-ROM encyclopedia. Kildall did develop (and coin the term for) what we know today as the BIOS. Kildall did create CP/M, the first widespread, mass-market, portable operating system for microcomputers, made possible by said BIOS. CP/M did dominate the business landscape until the DOS era, with 20,000+ software titles in its library. Kildall did sell his company, Digital Research Inc., to Novell for US$120M. Kildall did good.

Systems built to run Kildall's CP/M were prevalent, all built around the same 8-bit limits: an 8080 or Z80 processor and up to 64KB RAM. The Osborne 1, a 25lb (11kg) "portable" which sold for $1795 ($6300 in 2026), was the talk of the West Coast Computer Faire in 1981. The price was sweet, considering it came bundled with MSRP $1500 in software, including WordStar and SuperCalc. Andy Kay's company, Non-Linear Systems, debuted the Kaypro II (the "I" only existed in prototype form) the following year at $1595, $200 less (and four pounds heavier) than the Osborne.
Though slower than an Osborne, it arguably made it easier to do actual work, with a significantly larger screen and beefier floppy disk capacity. Within the major operating system of its day, on popular hardware of its day, ran the utterly dominant relational database software of its day. PC Magazine, February 1984, said, "Independent industry watchers estimate that dBASE II enjoys 70 percent of the market for microcomputer database managers." Similar to past subjects HyperCard and Scala Multimedia, Wayne Ratliff's dBASE II was an industry unto itself, not just for data management, but for programmability, a legacy which lives on today as xBase. Strangely enough, dBASE also decided to attach "II" to its first release, a marketing maneuver to make the product appear more advanced and stable at launch. I'm sure the popularity of the Apple II had nothing to do with anyone's coincidentally similar roman numeral naming scheme whatsoever. Written in assembly, dBASE II squeezed maximum performance out of minimal hardware specs.

This is my first time using both CP/M and dBASE. Let's see what made this such a power couple. I'm putting on my tan suit and wide brown tie for this one. As the owner of COMPUTRON/X, a software retail shop, I'm in Serious Businessman Mode™. I need to get inventory under control, snake the employee toilet, do profit projections, and polish a mind-boggling amount of glass and chrome. For now, I'll start with inventory and pop in this laserdisc to begin my dBASE journey. While the video is technically for 16-bit dBASE III, our host, Gentry Lee of Jet Propulsion Laboratory, assures us that 8-bit dBASE II users can do everything we see demonstrated, with a few interface differences. This is Gail Fisher, a smarty pants who thinks she's better than me. Tony Lima, in his book dBASE II for Beginners, concurs with the assessment that dBASE II and III's differences are mostly superficial.
Lima's book is pretty good, but I'm also going through Mastering dBASE II The Easy Way, by Paul W. Heiser, the official Kaypro dBASE II Manual, and dBASE II for the First Time User by Alan Freedman. That last one is nicely organized by common tasks a dBASE user would want to do, like "Changing Your Data" and "Modifying Your Record Structure." I find I return to Freedman's book often. As I understand my time with CP/M, making custom bootable diskettes was the common practice. dBASE II is no different, and outright encourages this, lest we risk losing US$2000 (in 2026 dollars) in software. Being of its time and place in computing history, dBASE uses the expected UI. You know it, you love it, it's "a blinking cursor," here called "the dot prompt." While in-program help is available, going through the video, books, and manual is a must. dBASE pitches the dot prompt as a simple, English language interface to the program. SET DEFAULT TO B, for example, sets the default save drive to the B: drive. You could never intuit that by what it says, nor guess that it even needs to be done, but when you know how it works, it's simple to remember. It's English only in the sense that English-like words are strung together in English-like order. That said, I kind of like it? CREATE creates a new database, prompting first for a database name, then dropping me into a text entry prompt to start defining fields. This is a nice opportunity for me to feign anger at The Fishers, the family from the training video. Fancy-pants dBASE III has a more user-friendly entry mode, which requires no memorization of field input parameters. Prompts and on-screen help walk Gail through the process. In dBASE II, a field is defined by a raw, comma-delimited string. Field definitions must be entered in the order indicated on-screen. The data type for the field, as string, number, or boolean, is set by a one-letter code which will never be revealed at any time, even when it complains that I've used an invalid code.
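To make that concrete, here is roughly what defining a structure looks like at the dot prompt. This is a sketch from my reading of the dBASE II manual; the field names are my own placeholders, not necessarily the structure used in this article:

```
. SET DEFAULT TO B
. CREATE
ENTER FILENAME: games
ENTER RECORD STRUCTURE AS FOLLOWS:
FIELD   NAME,TYPE,WIDTH,DECIMAL PLACES
001     title,C,30
002     publisher,C,20
003     year,N,4
004     rating,N,5,2
```

A bare RETURN on an empty field line ends the definition. The one-letter codes the program never volunteers are C (character), N (numeric), and L (logical, i.e. boolean).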
Remind me to dog-ear that page of the manual. For my store, I'm scouring for games released for CP/M. Poking through MobyGames digs up roughly 30 or so commercial releases, including two within the past five years. Thanks, PunyInform! My fields are defined thusly, called up for review by the simple DISPLAY STRUCTURE command. The most frustrating part about examining database software is that it doesn't do anything useful until I've entered a bunch of data. At this stage in my learning, this is strictly a manual process. Speaking frankly, this part blows, but it also blows for Gail Fisher, so my schadenfreude itch is scratched. dBASE does its best to minimize the amount of keyboard shenanigans during this process, and in truth data entry isn't stressful. I can pop through records fairly quickly, if the raw data is before me. The prompt starts at the first field, and RETURN (not Tab!) moves to the next. If entry to a field uses the entire field length (as defined by me when setting up the fields earlier), the cursor automatically jumps to the next field with a PC-speaker beep. I guess dBASE is trying to "help," but when touch typing I'm looking at my data source, not the screen. I don't know when I'm about to hit the end of a field, so I'm never prepared when it switches input fields and makes that ugly beep. More jarring is that if the final field of a record is completely filled, the cursor "helpfully" jumps to the beginning of a new record instantly, with no opportunity to read or correct the data I just input. It's never not annoying. Gail doesn't have these issues with dBASE III and her daughter just made dinner for her. Well, I can microwave a burrito as well as anyone so I'm not jealous. I'm not. In defining the fields, I have already made two mistakes. First, I wanted to enter the critic score as a decimal value so I could get the average.
Number fields, like all fields, have a "width" (the maximum number of characters/bytes to allocate to the field), but also a "decimal places" value, and as I type these very words I see now my mistake. Rubber ducking works. I tricked myself into thinking "width" was for the integer part, and "decimal places" was appended to that. I see now that, like character fields, I need to think of the entire maximum possible number as being the "width." Suppose we expect to record a value like 0.25. There are 2 decimal places, and a decimal point, and a leading 0, and potentially a sign, as in 0.25 or -0.25. So that means the "width" should be 5, with 2 "decimal places" (of those 5). Though I'm cosplaying as a store owner, I'm apparently cosplaying as a store owner that sucks! I didn't once consider pricing! Gah, Gail is so much better at business than I am! Time to get "sorta good." Toward that end, I have my to-do list after a first pass through data entry. Modifying dBASE "structures" (the field/type definitions) can be risky business. If there is no data yet, feel free to change whatever you want. If there is pre-existing data, watch out. MODIFY STRUCTURE will at least do the common decency of warning you about the pile you're about to step into. Modifying a database structure is essentially verboten; rather, we must juggle files to effect a structure change. dBASE lets us have two active files, called "work areas," open simultaneously: a PRIMARY and a SECONDARY. Modifications to these are read from or written to disk in the moment; 64K can't handle much live data. It's not quite "virtual memory" but it makes the best of a tight situation. When wanting to change data in existing records, the EDIT command sounds like a good choice, but CHANGE actually winds up being more useful. CHANGE will focus in on specified fields for immediate editing across all records. It's simple to step through fields making changes.
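A quick sketch of that targeted-editing pattern, assuming a rating field like mine (the file and field names are placeholders):

```
. USE games
. CHANGE FIELD rating
```

CHANGE then walks record by record, presenting only the named field for editing, which is far less perilous than having every field of every record live under the cursor.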
I could BROWSE to edit everything at once, but I'm finding it safer while learning to make small incremental changes or risk losing a large body of work. Make a targeted change, save, make another change, save, etc. I laughed every time Gentry Lee showed up, like he's living with The Fishers as an invisible house gremlin. They never acknowledge his presence, but later he eats their salad! Being a novice at dBASE is a little dangerous, and MAME has its own pitfalls. I have been conditioned over time to hit ESC when I want to "back out" of a process. This shuts down MAME instantly. When it happens, I swear The Fishers are mocking me, just on the edge of my peripheral vision, while Gentry Lee helps himself to my tuna casserole. dBASE is a relational database. Well, let's be less generous and call it "relational-ish." The relational model of data was defined by Edgar F. Codd in 1969, where "relation is used here in its accepted mathematical sense." It's all set theory stuff; way over my head. Skimming past the nerd junk, in that paper he defines our go-to relationship of interest: the join. As a relational database, dBASE keeps its data arranged VisiCalc style, in rows and columns. So long as two databases have a field in common, which is defined, named, and used identically in both, the two can be "joined" into a third, new database. I've created a mini database of developer phone numbers so I can call and yell at them for bugs and subsequent lost sales. I haven't yet built up the grin-and-bear-it temperament Gail possesses toward Amanda Covington. Heads will roll! You hear me, Lebling? Blank?! 64K (less CP/M and dBASE resources) isn't enough to do an in-memory join. Rather, joining creates and writes a completely new database to disk which is the union of two databases.
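Sketching the two-work-area join just described, with placeholder database and field names, and syntax as I understand it from the dBASE II manual:

```
. SELECT PRIMARY
. USE games
. SELECT SECONDARY
. USE devs
. SELECT PRIMARY
. JOIN TO gamedev FOR publisher = S.publisher
```

The result, gamedev, is a brand-new third database written out to disk.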
The implication being you must have space on disk to hold both original databases as well as the newly joined database, and also the new database cannot exceed dBASE's 65,535 record limit after joining. In a JOIN, P. means PRIMARY and S. means SECONDARY, so we can precisely specify fields and their work area of origin. This is more useful for doing calculations at JOIN time, like using the FOR clause to join only records matching a condition. DELETE removes specific records, if we know the record number. Commands in dBASE stack, so a query can define the target for a command, as one would hope and expect in 2026. Comparisons and sub-strings can be used as well. So, rather than deleting "Infocom, Inc." by record number, we could query for it with the $ operator, which looks for the left-hand string as a case-sensitive sub-string in the right-hand string. We can be a little flexible in how data may have been input, getting around case sensitivity through booleans. Yes, we have booleans! Wait, why am I deleting any Infocom games? I love those! What was I thinking?! Once everything is marked for deletion, that's all it is: marked for deletion. It's still in the database, and on disk, until we do real-deal, non-reversible, don't-forget-undo-doesn't-exist-in-1982 destruction with PACK. Until now, I've been using FOR queries as a kind of ad-hoc search mechanism. They go through every record, in sequence, finding matches. Records have positions in the database file, and dBASE is silently keeping track of a "record pointer" at all times. This represents "the current record," and commands without a query will be applied to the currently pointed record. Typing in a number at the dot prompt moves the pointer to that record; typing 3 moves me to record #3, and DISPLAY shows its contents. When I don't know which record has what I want, LOCATE will move the pointer to the first match it finds. At this point I could DISPLAY that record, or list the records from the located record onward. Depending on the order of the records, that may or may not be useful.
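Pulling those pieces together in one hedged transcript sketch (field names are mine; the * comment lines are command-file syntax, added here only for readability):

```
* substring matching with the $ operator marks records
. DELETE FOR "Infocom" $ publisher
* second thoughts: marks can still be cleared
. RECALL ALL
* once PACKed, though, marked records are gone for good
. PACK
* moving the record pointer directly, then by search
. GOTO 3
. DISPLAY
. LOCATE FOR year > 1983
. DISPLAY
```

RECALL is the escape hatch the narrative demands: it unmarks everything before PACK can do its irreversible thing.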
Right now, the order is just "the order I typed them into the system." We need to teach dBASE different orders of interest to a stripmall retail store. While the modern reaction would be to use the SORT command, dBASE's SORT can only create entirely new database files on disk, sorted by the desired criteria. Sort a couple of times on a large data set and soon you'll find yourself hoarding the last of new-old 5 1/4" floppy disk stock from OfficeMax, or being very careful about deleting intermediary sort results. SQL brainiacs have a solution to our problem, which dBASE can also do. An "index" is appropriate for fast lookups on our columnar data. We can index on one or more fields, remapping records to the sort order of our heart's desire. Only one index can be used at a time, but a single index can be defined against multiple fields. It's easier to show you. When I set the index to "devs" and FIND a developer, that sets the record pointer to the first record which matches my find. I happen to know I have seven Infocom games, so I can DISPLAY NEXT 7 for fields of interest. Both indexes group Infocom games together as a logical block, but within that block Publisher order is different. Don't get confused: the actual order of records in the database is betrayed by the record number. Notice they are neither contiguous nor necessarily sequential. Turning the index off would rearrange them into strict numerical record order. An index only relates to the current state of our data, so if any edits occur we need to rebuild those indexes. Please, contain your excitement. Munging data is great, but I want to understand my data. Let's suppose I need the average rating of the games I sell. I'll first need a count of all games whose rating is not zero (i.e. games that actually have a rating), then I'll need a summation of those ratings. Divide those and I'll have the average. COUNT does what it says. SUM only works on numeric fields, and also does what it says. With those, I basically have what I need.
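The average-rating recipe, sketched at the dot prompt. The variable names are my own, and this leans on the memory variables discussed in a moment; treat it as a plausible reading of the manual rather than gospel:

```
. USE games
. COUNT FOR rating > 0 TO cnt
. SUM rating TO total FOR rating > 0
. ? total / cnt
```

The ? command evaluates and prints an expression, so the last line spits out the average directly.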
Like deletion, we can use queries as parameters for these commands. dBASE has basic math functions, and calculated values can be stored in its 64 "memory variables." Like a programming language, named variables can be referenced by name in further calculations. Many commands let us append a TO clause which shoves a query result into a memory variable, though array results cannot be memorized this way. STORE shoves arbitrary values into memory. As you can see in the screenshot above, the average rating of CP/M games (out of 100) is higher than I expected, to be perfectly honest. As proprietor of a hot (power of positive thinking!) software retail store, I'd like to know how much profit I'll make if I sold everything I have in stock. I need a per-record calculation, but this requires stepping through records and keeping a running tally. I sure hope the next section explains how to do that! Flipping through the 1,000 pages of Kaypro Software Directory 1984, we can see the system, and CP/M by extension, was not lacking for software. Interestingly, quite a lot was written in and for dBASE II: bespoke database solutions which sold for substantially more than dBASE itself. Shakespeare wrote, "The first thing we do, let's kill all the lawyers." Judging from these prices, the first thing we should do is shake them down for their lunch money. In the HyperCard article I noted how an entire sub-industry sprung up in its wake, empowering users who would never consider themselves programmers to pick up the development reins. dBASE paved the way for HyperCard in that regard. As Jean-Pierre Martel noted, "Because its programming language was so easy to learn, millions of people were dBASE programmers without knowing it... dBASE brought programming power to the masses." dBASE programs are written as procedural routines called Commands, or .CMD files. dBASE helpfully includes a built-in (stripped down) text editor for writing these, though any text editor will work.
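A Command file for my running-tally profit problem might look like the following. This is a sketch, assuming hypothetical price, cost, and qty fields that my inventory database would need anyway:

```
* PROFIT.CMD -- total profit if all stock sold
USE games
STORE 0 TO tally
DO WHILE .NOT. EOF
   STORE tally + (price - cost) * qty TO tally
   SKIP
ENDDO
? tally
RETURN
```

SKIP advances the record pointer one record, and the loop runs until EOF reports we've walked off the end of the file.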
Once written, a .CMD file can be invoked with the DO command. As Martel said, I seem to have become a dBASE programmer without really trying. Everything I've learned so far hasn't just been dot prompt commands; it has all been valid dBASE code. A command at the dot prompt is really just a one-line program. Cool beans! dBASE offers some extra syntax for the purpose of development. With these tools, designing menus which add a veneer of approachability to a dBASE database is trivial. Commands are interpreted, not compiled (that would come later), so how were these solutions sold to lawyers without bundling a full copy of dBASE with every Command file? For a while, dBASE II was simply a requirement to use after-market dBASE solutions. The 1983 release of dBASE Runtime changed that, letting a user run a Command file, but not edit it. A Command file bundled with Runtime was essentially transformed into a standalone application. Knowing this, we're now ready to charge 2026 US$10,000 per seat for case management and tracking systems for attorneys. Hey, look at that, this section did help me with my profit calculation troubles. I can write a Command file and bask in the glow of COMPUTRON/X's shining, profitable future. During the 8-bit to 16-bit era bridge, new hardware often went underutilized as developers came to grips with what the new tools could do. Famously, VisiCalc's first foray onto 16-bit systems didn't leverage any of the expanded RAM on the IBM-PC and intentionally kept all known bugs from the 8-bit Apple II version. The word "stopgap" comes to mind. Corporate America couldn't just wait around for good software to arrive. CP/M compatibility add-ons were a relatively inexpensive way to gain instant access to thousands of battle-tested business software titles. Even a lowly Coleco ADAM could, theoretically, run WordStar and Infocom games, the thought of which kept me warm at night as I suffered through an inferior Dragon's Lair adaptation. They promised a laserdisc attachment!
For US$600 in 1982 ($2,000 in 2026), your new-fangled 16-bit IBM-PC could relive the good old days of 8-bit CP/M-80. Plug in XEDEX's "Baby Blue" ISA card with its Z80B CPU and 64K of RAM and the world is your slowly decaying oyster. That RAM is also accessible in 16-bit DOS, serving dual-purpose as a memory expansion for only $40 more than IBM's own bare-bones 64K board. PC Magazine's February 1982 review seemed open to the idea of the card, but was skeptical it had long-term value. XEDEX suggested the card could someday be used as a secondary processor, offloading tasks from the primary CPU to the Z80, but never followed through on that threat, as far as I could find. Own an Apple II with an 8-bit 6502 CPU but still have 8-bit Z80 envy? Microsoft offered a Z80 daughter-card with 64K RAM for US$399 in 1981 ($1,413 in 2026). It doesn't provide the 80-column display you need to really make use of CP/M software, but is compatible with such add-ons. It was Bill Gates's relationship with Gary Kildall, as a major buyer of CP/M for this very card, that started the whole ball rolling with IBM, Gates's purchase of QDOS, and the rise of Microsoft. A 16K expansion option could combine with the Apple II's built-in 48K memory to get about 64K for CP/M usage. BYTE Magazine's November 1981 review raved, "Because of the flexibility it offers Apple users, I consider the Softcard an excellent buy." Good to know! How does one add a Z80 processor to a system with no expansion slots? Shove a Z80 computer into a cartridge and call it a day, apparently. This interesting, but limited, footnote in CP/M history does what it says, even if it doesn't do it well. Compute!'s Gazette wrote, "The 64 does not make a great CP/M computer. To get around memory limitations, CP/M resorts to intensive disk access. At the speed of the 1541, this makes programs run quite slowly." Even worse for CP/M users is that the slow 1541 can't read CP/M disks. Even if it could, you're stuck in 40-column mode.
How were users expected to get CP/M software loaded? We'll circle back to that a little later. At any rate, Commodore offered customers an alternative solution. Where its older brother had to make do with a cartridge add-on, the C128 takes a different approach. To maintain backward compatibility with the C64 it includes a 6510-compatible processor, the 8502. It also wants to be CP/M compatible, so it needs a Z80 processor. What to do, what to do? Maybe they could put both processors into the unit? Is that allowed? Could they do that? They could, so they did. CP/M came bundled with the system, which has a native 80-column display in CP/M mode. It is ready to go with the newer, re-programmable 1571 floppy drive. Unfortunately, its slow bus speed forces the Z80 to run at only 2MHz, slower even than a Kaypro II. Compute!'s Gazette said in their April 1985 issue, "CP/M may make the Commodore 128 a bargain buy for small businesses. The price of the Commodore 128 with the 1571 disk drive is competitive with the IBM PCjr." I predict rough times ahead for the PCjr if that's true! Atari peripherals have adorable industrial design, but were quite expensive thanks to a strange system design decision. The 8-bit system's nonstandard serial bus necessitated specialized data encoding/decoding hardware inside each peripheral, driving up unit costs. For example, the Atari 810 5 1/4" floppy drive cost $500 in 1983 (almost $2,000 in 2026) thanks to that special hardware, yet only stored a paltry 90K per disk. SWP straightened out the Atari peripheral scene with the ATR8000. Shenanigans with special controller hardware are eliminated, opening up a world of cheaper, standardized floppy drives of all sizes and capacities. It also accepts Centronics parallel and RS-232C serial devices, making tons of printers, modems, and more compatible with the Atari.
The device also includes a 16K print buffer and the ability to attach up to four floppy drives without additional controller board purchases. A base ATR8000 can replace a whole stack of expensive Atari-branded add-ons, while being more flexible and performant. The saying goes, "Cheaper, better, faster. Pick any two." The ATR8000 is that rare device which delivered all three. Now, upgrade that box with its CP/M compatibility option, adding a Z80 and 64K, and you've basically bought a second computer. When plugged into the Atari, the Atari functions as a remote terminal into the unit, using whatever 40/80-column display adapter you have connected. It could also apparently function standalone, accessible through any terminal, no Atari needed. That isn't even its final form. The Co-Power-88 is a 128K or 256K PC-compatible add-on to the Z80 CP/M board. When booted into the Z80, that extra RAM can be used as a RAM disk to make CP/M fly. When booted into the 8088, it's a full-on PC running DOS or CP/M-86. Tricked out, this eight-pound box would set you back US$1000 in 1984 ($3,000 in 2026), but it should be obvious why this is a coveted piece of kit for the Atari faithful to this day. For UK£399 in 1985 (£1288 in 2026; US$1750), Acorn offered a Z80 with dedicated 64K of RAM. According to the manual, the Z80 handles the CP/M software, while the 6502 in the base unit handles floppies and printers, freeing up CP/M RAM in the process. For the unit plugged into the side of the BBC Micro, the manual suggests desk space clearance of 5ft wide and 2 1/2ft deep. My god. Acorn User, June 1984, declared, "To sum up, Acorn has put together an excellent and versatile system that has something for everyone." I'd like to note that glowing review was almost exclusively thanks to the bundled CP/M productivity software suite. Their evaluation didn't seem to try loading off-the-shelf software, which caused me to narrow my eyes and stroke my chin in cynical suspicion.
Flip through the manual to find out about obtaining additional software, and it gets decidedly vague. "You'll find a large and growing selection available for your Z80 personal computer, including a special series of products that will work in parallel with the software in your Z80 pack." Like the C128, the Coleco ADAM was a Z80-native machine, so CP/M can work without much fuss, though the box does proclaim "Made especially for ADAM!" Since we don't have to add hardware (well, we need a floppy; the ADAM only shipped with a high-speed cassette drive), we can jump into the ecosystem for about US$65 in 1985 ($200 in 2026). Like other CP/M solutions, the ADAM really needed an 80-column adapter, something Coleco promised but never delivered. Like Dragon's Lair on laserdisc! As it stands, CP/M scrolls horizontally to display all 80 columns. This version adds ADAM-style UI for its quaint(?) roman numeral function keys. OK, CP/M is running! Now what? To be honest, I've been toying with you this whole time, dangling the catnip of CP/M compatibility. It's time to come clean and admit the dark side of these add-on solutions. There ain't no software! Even when the CPU and CP/M version were technically compatible, floppy disc format was the sticking point for getting software to run on any given machine. For example, the catalog for Kaypro software in 1984 is 896 pages long. That is all CP/M software and all theoretically compatible with a BBC Micro running CP/M. However, within that catalog, everything shipped expressly on Kaypro-compatible floppy discs. Do you think a Coleco ADAM floppy drive can read Kaypro discs? Would you be even the tiniest bit shocked to learn it cannot? Kaypro enthusiast magazine PRO illustrates the issue facing consumers back then. Let's check in on the Morrow Designs (founded by Computer Chronicles sometime co-host George Morrow!) CP/M system owners. How do they fare? OK then, what about that Baby Blue from earlier?
The Microsoft SoftCard must surely have figured something out. The Apple II was, according to Practical Computing, "the most widespread CP/M system" of its day. Almost every product faced the same challenge. On any given CP/M-80 software disk, the byte code is compatible with your Z80, if your floppy drive can read the diskette. You couldn't just buy a random CP/M disk, throw it into a random CP/M system, and expect it to work, which would have been a crushing blow to young me hoping to play Planetfall on the ADAM. So what could be done? There were a few options, none of them particularly simple or straightforward, especially to those who weren't technically minded. Some places offered transfer services. XEDEX, the makers of Baby Blue, would do it for $100 per disk. I saw another listing for a similar service (different machine) at $10 per disk. Others sold the software pre-transferred, as noted on a Coleco ADAM service flyer. A few software solutions existed, including Baby Blue's own Convert program, which shipped with their card and "supports bidirectional file transfer between PC-DOS and popular CP/M disk formats." They also had the Baby Blue Conversion Software, which used emulation to "turn CP/M-80 programs into PC-DOS programs for fast, efficient execution on Baby Blue II." Xeno-Copy, by Vertex Systems, could copy from over 40 disk formats onto PC-DOS for US$99.50 ($313 in 2026); their Plus version promised cross-format read/write capabilities. Notably, Apple, Commodore, Apricot, and other big names are missing from their compatibility list. The Kermit protocol, once installed onto a CP/M system disk, could handle cross-platform serial transfers, assuming you had the additional hardware necessary.
"CP/M machines use many different floppy disk formats, which means that one machine often cannot read disks from another CP/M machine, and Kermit is used as part of a process to transfer applications and data between CP/M machines and other machines with different operating systems." The Catch-22 of it all is that you have to get Kermit onto your CP/M disk in the first place. Hand-coding a bare-bones Kermit protocol (CP/M ships with an assembler), for the purpose of getting "real" Kermit onto your system so you could then transfer the actual software you wanted in the first place, was a trick published in the Kermit-80 documentation. Of course, this all assumes you know someone with the proper CP/M setup to help; basically, you're going to need to make friends. Talk to your computer dealer, or better yet, get involved in a local CP/M User's Group. It takes a village to move WordStar onto a C64. I really enjoyed my time learning dBASE II and am heartened by the consistency of its commands and the clean interaction between them. When I realized that I had accidentally learned how to program dBASE, that was a great feeling. What I expected to be a steep learning curve wasn't "steep" per se, but rather just intimidating. That simple, blinking cursor can feel quite daunting at the first step, but each new command I learned followed a consistent pattern. Soon enough, simple tools became force multipliers for later tools. The more I used it, the more I liked it. dBASE II is uninviting, but good. On top of that, getting data out into the real world is simple, as you'll see below in "Sharpening the Stone." I'm not locked in. So what keeps me from being super enthusiastic about the experience? It is CP/M-80 which gives me pause. The 64K memory restriction, disk format shenanigans, and floppy disk juggling honestly push me away from that world except strictly for historical investigations. Speaking frankly, I don't care for it.
CP/M-86 running dBASE III+ could probably win me over, though I would sooner try DR-DOS instead. Memory constraints would be essentially erased, DOSBox-X is drag-and-drop trivial to move files in and out of the system, and dBASE III+ is more powerful while also being more user-friendly. Combine that with Clipper, which can compile dBASE applications into standalone .exe files, and there's powerful utility to be had. By the way, did you know dBASE is still alive? Maybe. Kinda? Hard to say. The latest version is dBASE 2019 (not a typo!), but the site is unmaintained and my appeal to their LinkedIn for a demo has gone unanswered. Its owner, dBase LTD, sells dBASE Classic, which is dBASE V for DOS running in DOSBox, a confession they know they lost the plot, I'd humbly suggest. An ignominious end to a venerable classic. Ways to improve the experience, notable deficiencies, workarounds, and notes about incorporating the software into modern workflows (if possible). When working with CP/M disk images, get to know cpmtools. This is a set of command line utilities for creating, viewing, and modifying CP/M disk images. The tools mostly align with Unix commands, prefixed with "cpm": cpmls, cpmcp, cpmrm, and friends. Those are the commands I wound up using with regularity. If your system of choice is a "weirdo system" you may be restricted in your disk image/formatting choices; these instructions may be of limited or no help. cpmtools knows about Kaypro II disk layouts via diskdefs. This Github fork makes it easy to browse supported types. Here's what I did. Now that you can pull data out of CP/M, here's how to make use of it. Kaypro II emulation running in MAME. Default setup includes: dual floppies, a Z80 CPU at 2.4MHz, and dBASE II v2.4. See "Sharpening the Stone" at the end of this post for how to get this going. Personally, I found this to be a tricky process to learn. Change the width of the rating field and add in that data. Add pricing fields and related data. Add more games.
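For reference, my understanding of the cpmtools disk-building workflow looks roughly like the following. The image name and sizes are my own placeholders, and the kpii diskdef label is an assumption worth checking against your local diskdefs file:

```
# make a blank 200K image (the Kaypro II's SSDD raw capacity)
dd if=/dev/zero of=work.img bs=1024 count=200
# lay down a Kaypro II CP/M filesystem
mkfs.cpm -f kpii work.img
# copy dBASE in from the host, then list the disk to confirm
cpmcp -f kpii work.img DBASE.COM 0:
cpmls -f kpii work.img
```

The 0: in the cpmcp destination is the CP/M user area; every file here lands in user area zero.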
There are commands to allow decision branching, do iterations, grab a character or string from the user, print text to the screen at a specific character position, take control over system memory, and run an assembly routine at a known memory location. For this article I specifically picked a period-authentic combo of Kaypro II + CP/M 2.2 + dBASE II 2.4. You don't have to suffer my pain! CP/M-86 and dBASE III+ running in a more feature-rich emulator would be a better choice for digging into non-trivial projects. I'm cold on MAME for computer emulation, except that in this case it was the fastest option for spinning up my chosen tools. It works, and that's all I can say that I enjoyed. That's not nothing! I find I prefer the robust settings offered in products like WinUAE, Virtual ADAM, VICE, and others. Emulators with in-built disk tools are a luxury I have become addicted to. MAME's interface is an inelegant way to manage hardware configurations and disk swapping. MAME has no printer emulation, which I like to use for a more holistic retro computing experience. Getting a working, trouble-free copy of dBASE II onto a Kaypro II compatible disk image was a non-trivial task. It's easier now that I know the situation, but it took some cajoling. I had to create new, blank disks, and copy CP/M and dBASE over from other disk images. Look below under "Getting Your Data into the Real World" to learn about cpmtools and how it fits into the process. Be careful of modern keyboard conventions, especially wanting to hit Esc to cancel commands. In MAME this will hard quit the emulator with no warning! Exported data exhibited strange artifacts. The big one: it didn't export any "logical" (boolean) field values from my database; it just left that field blank on all records. Field names are not exported. Garbage data is found after the last record, though the records themselves imported fine. On Linux and Windows (via WSL), cpmtools installs easily from your package manager. cpmls: view the contents of a CP/M disk image.
Use the -f flag to tell it the format of the disk, the Kaypro II's diskdef in this case. mkfs.cpm: format a disk image with a CP/M file system. cpmcp: copy files to/from another disk or to the host operating system. cpmrm: remove files from a CP/M disk image. dd: for making new, blank disk image files (they still need to be formatted). In sequence: make a blank disk image to the single-sided, double-density specification; format that blank image for the Kaypro II; copy "DBASE.COM" from the current directory of the host operating system into the Kaypro II disk image; display the contents of the disk; and copy "FILE.TXT" from the disk image back into the current directory of the host operating system. dBASE has built-in exporting functionality, so long as you use the right file extension when saving. That creates a bog-standard ASCII text file, each record on its own line, comma-delimited (and ONLY comma-delimited). It is not Y2K compatible, if you're hoping to record today's date in a field. I tackled this a bit in the Superbase post. It is probably possible to hack up a Command file to work around this issue, since dates are just strings in dBASE. dBASE II doesn't offer the relational robustness of SQL. Many of the missing, useful tools could be built in the xBase programming language. It would be significant work in some cases; maybe not worth it, so consider whether you can do without those. Your needs may exceed what CP/M-80 hardware can support; its 8-bit nature is a limiting factor in and of itself. If you have big plans, consider dBASE III+ on DOS to stretch your legs. (I read dBASE IV sucks.) The user interface helps at times, and is opaque at other times. This can be part of the fun in using these older systems, mastering esoterica for esoterica's sake, but may be a bridge too far for serious work of real value. Of course, when discussing older machines we are almost always excluding non-English speakers thanks to the limitations of ASCII. The world just wasn't as well-connected at the time.

Stone Tools 4 weeks ago

Scala Multimedia on the Commodore Amiga

The ocean is huge. It's not only big enough to separate landmasses and cultures, but also big enough to separate ideas and trends. Born and raised in the United States, I couldn't understand why the UK was always eating so much pudding. Please forgive my pre-internet cultural naiveté. I should also be kind to myself for thinking the Video Toaster was the be-all-end-all for video production and multimedia authoring on the Amiga. Search Amiga World metadata on Internet Archive for "toaster" and "scala" and you'll see my point. "Toaster" brings up dozens of top-level hits, and "Scala" gets zero. The NTSC/PAL divide was as vast as the ocean. From the States, cross either ocean and Scala was everywhere, including a full copy, physical-dongle copy protection removed, distributed on the cover disk of CU Amiga Magazine, issue 96. Listening to Scala founder Jon Bøhmer speak of Scala's creation in an interview on The Retro Hour, it's clear his early intuition about the Amiga's potential in television production built Scala into an omnipresent staple across multiple continents. Intuition alone can't build an empire. Bøhmer also had gladiatorial-like aggression to maintain his dominance in that market. As he recounted, "A Dutch company tried to make a Scala clone, and they made a mistake of putting...the spec sheet on their booth and said all those different things that Scala didn't have yet. So I took that spec sheet back to my developers (then, later) lo and behold before those guys had a bug free version out on the street, we had all their features and totally eradicated their whole proposal." Now, of course I understand that it would have been folly to ignore the threat. Looked at from another angle, Scala had apparently put themselves in a position where their dominance could face a legitimate threat from a disruptor. Ultimately, that's neither here nor there; in the end, Scala had early momentum and could swing the industry in their direction.
Scala (the software) remains alive and well even now, in the digital signage authoring and playback software arena. You know the stuff, like interactive touchscreens at restaurant checkouts, or animated displays at retail stores. As with the outliner/PIM software in the ThinkTank article , the world of digital signage is likewise shockingly crowded. Discovering this felt like catching a glimpse of a secondary, invisible world just below the surface of conscious understanding. Scala didn't find success without good reason. It solved some thorny broadcast production issues on hardware that was alone in its class for a time. A unique blend of software characteristics (multitasking, IFF, ARexx) turned an Amiga running Scala into more than the sum of its parts. Scala by itself would have made rumbles. Scala on the Amiga was seismic. At heart, I'm a print guy. Like anyone, I enjoy watching cool video effects, and I once met Kiki Stockhammer in person. But my brain has never been wired for animation or motion design. My 3D art was always static; my designs were committed to ink on paper. I liked holding a physical artifact in my hands at the end of the design process. Considering the sheer depths of my video naivete, for this investigation I will need a lot of help from the tutorials. I'll build the demo stuff from the manual, and try to push myself further and see where my explorations take me. CU Amiga Magazine issues 97 - 102 contain Scala MM300 tutorials as well, so I'll check those out for a man-on-the-streets point of view. The first preconception I need to shed is thinking Scala is HyperCard for the Amiga. It flirts with certain concepts, but building Myst with this would be out of reach for most people. I'll never say it's "impossible," as I don't like tempting the Fates that way, but it would need considerable effort and development skills. A little terminology is useful before we really dig in. 
I usually start an exploration of GUI applications by checking out the available menus. With Scala, there aren't any. I don't mean the menubar is empty, I mean there isn't a menubar, period. It does not exist. I am firmly in Scala Land and Scala's vision of how multimedia work gets done. As with PaperClip, I find its opinionated interface comforting. I have serious doubts about common assumptions of interface homogeneity being a noble goal, but that's a discussion for a future post. Despite its plain look, what we see when the program launches is richly complex. Anything in purple (or whatever your chosen color scheme uses) is clickable, and if it has its own boundaries it does its own thing. Across the top we have the Scala logo, program title bar, and the Amiga Workbench "depth gadget." Clicking the logo is how we save our project and/or exit the program. Then we have what is clearly a list, and judging from interface cues it's a list of pages. This list ("script") is akin to a HyperCard stack with transitions ("wipes") between cards ("pages"). Each subsection of any given line item is its own button for interfacing with that specific aspect of the page. It's approachable and nonthreatening, and to my mind encourages just clicking on things to see what happens. The bottom sixth holds an array of buttons that would normally be secreted away under standard Amiga GUI menus. On the one hand, this means if you see it, it's available; no poking through dozens of greyed-out items. On the other hand, keyboard shortcuts and deeper tools aren't exposed. There's no learning through osmosis here. Following the tutorial, the first thing to do is define my first "page." Click "New," choose a background as a visual starting point if you like, click "OK," choose a resolution and color depth (this is per-screen, not per-project), and click "OK" to finish. The program steps me through the process; it is clear how to proceed.
The design team for Scala really should be commended for the artistic craftsmanship of the product. It is easy to put something professional together with the included backgrounds, images, and music. Everything is tasteful and (mostly) subdued, if occasionally "of its time" aesthetically. Thanks to IFF support, if you don't like the built-in assets, you can create your own in one of the Amiga's many paint or music programs. That visual care extends to the included fonts, which are a murderers' row of well-crafted classics. All the big stars are here! Futura, Garamond, Gill Sans, Compact, and more. Hey, is that Goudy I see coming down the red carpet? And behind them? Why, it's none other than Helvetica, star of its own hit movie that has the art world buzzing! And, oh no! Someone just threw red paint all over Franklin Gothic. What a shame, because I'm pretty sure that's a pleather dress. The next screen is where probably 85% of my time will be spent. One thing I've noticed with the manual is a lack of getting the reader up to speed on the nomenclature of the program. This screen contains the "Edit Menu," but is that what I should call this screen? The "Edit Menu" screen? Screen layouts are called "pages." Is this the "Page Edit" screen? Anyway, the "Edit Menu" gives a lot of control, both fine and coarse, for text styling, shape types, creating buttons, setting the color palette, coordinating object reveals, and more. Some buttons hide extra options, for styling or importing other resources, and it could be argued the interface works against itself a bit. As Scala has chosen to eschew typical Amiga GUI conventions, they walk a delicate line of showing as much as possible while avoiding visual confusion. It never feels overwhelming, but only just; it could stand to borrow popup menus from the MacOS playbook, rather than cycling through options. Entering text is simple; click anywhere on the screen and begin typing.
Where it gets weird is how Scala treats all text as one continuous block. Every line is ordered by Y-position on screen, but every line is connected to the next. Typing too much on a given line will spill over into the next line down, wherever it may be, and however it may be styled. Text weirdness in the Edit Screen. (I think I had a Trapper Keeper in that pattern.) The unobtrusive buttons "IN" and "OUT" on the left define how the currently selected object will transition into or out of the screen. Doing this by mouse selection is kind of a drag, as there is no visible selection border for the object being modified. There is an option to draw boxes around objects, but there is no differentiation of selected vs. unselected objects, except when there is. It's a bit inconsistent. The "List" button reveals a method for assigning transitions and rearranging object timings precisely. It quickly became my preferred method for anything more complex than "a simple piece of text flies into view." As a list we can define only a pure sequence. Do a thing. Do a second thing. Do a third thing. The end. Multiple items can be "chained" to perform precisely the same wipe as the parent object, with no variation. It's a grouping tool, not a timing tool. "List" editing of text effect timings. Stay tuned for the sequel: "celeriac and jicama." I'm having a lot of fun exploring these tools, and have immediately wandered off the tutorial path just to play around. Everything works like I'd expect, and I don't need to consult the manual much at all. There are no destructive surprises nor wait times. I click buttons and see immediate results; my inquisitiveness is rewarded. Pages with animation are all good and well, but it is interactivity which elevates a Scala page over the stoicism of a PowerPoint slide. That means it's time for the go-to interaction metaphor: the good ole' button.
Where HyperCard has the concept of buttons as objects, in Scala a button is just a region of the screen. It accepts two events, mouse enter and mouse click, though it burdens these simple actions with the confusing names "mark" and "select." I mix up these terms constantly in my mind. To add a button, draw a box. Alternately, click something you've drawn and a box bound to that object's dimensions will be auto-generated. Don't be fooled! That box is not tethered to the object. It just happens to be sized precisely to the object's current dimensions and position on screen, as a helpful shortcut to generate the most-likely button for your needs. Button interactions can do a few things. First, a button can adjust colors within its boundaries. Amiga palettes use indexed color, so color swaps are trivial and pixel-perfect. Have some white text that should highlight in red when the mouse enters it? Set the "mark" (mouse enter) palette to remap white to red. Same for "select" (mouse click): a separate palette remap could turn the white to yellow on click. Why am I talking about this when I can just show you? I intentionally drew the button to be half the text height to illustrate that the button has no relation to the text itself. Color remapping occurs within button boundaries. The double palettes represent the current palette (top), and the remapped palette (bottom). Buttons can also contain simple logic, setting or reading global variable states to determine how to behave at any given moment. IF-THEN statements can likewise be embedded to route presentation order based on those variables. So, a click could add +1 to a global counter, then if the counter is a certain value it could transition to a corresponding page. If we feel particularly clever with indexed-color palette remapping, it is possible to give the illusion of complete image replacement. Buttons do not need any visible attributes, nor do they need to be mouse-clicked to perform their actions.
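The palette-remap trick described above is easier to see in miniature. A sketch of the idea, not Scala's implementation: the screen is a grid of palette indices, and "highlighting" just swaps entries in the palette used at draw time (slot numbers and colors here are illustrative).

```python
# A sketch of indexed-color button highlighting. Pixels store palette slots,
# not colors, so a "mark" (mouse-enter) event only has to install a remapped
# palette -- the pixel data never changes.
WHITE, RED = 1, 2                      # palette slots (illustrative)
palette = {0: (0, 0, 170), WHITE: (255, 255, 255), RED: (255, 0, 0)}

def render(screen, pal):
    # Resolve indices to RGB at draw time
    return [[pal[idx] for idx in row] for row in screen]

def mark_palette(pal, remap):
    # Build the palette used while the pointer is inside the button
    return {idx: pal[remap.get(idx, idx)] for idx in pal}

screen = [[0, WHITE, WHITE, 0]]        # one row of "text" on a background
normal = render(screen, palette)
hovered = render(screen, mark_palette(palette, {WHITE: RED}))
print(normal[0][1], "->", hovered[0][1])   # the white pixel now draws as red
```

Because only the palette changes, the swap is instantaneous and pixel-perfect, which is exactly why it was so cheap on Amiga hardware.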
If "Function Keys" are enabled at the Scala "System" level, the first 10 buttons on a page are automatically linked to F1 - F10. A sample script which ships with Scala demonstrates F-Key control over a page in real-time, altering the values of sports scores by set amounts. This is a clever trick, and with deeper thought opens up interesting possibilities. If every page in a script were to secretly contain such a set of buttons, a makeshift control panel could function like a "video soundboard" of sorts. F-Keys could keep a presentation dynamic, perhaps reacting to live audience participation. I mention this for no particular reason and it is not a setup for a later reveal. ahem Once we've made some pages, it's time to stitch them together into a proper presentation, a "script" in Scala parlance. This all happens in the "Main Menu," which works similarly to the "List" view when editing page text elements, with a few differences. "Wipe" is the transition from the previous page to the selected page. If you want to wipe "out" from a page with transition X, then wipe "in" to the next page with transition Y, a page must be added in between to facilitate that. The quality of the real-time wipe effects surprises me. Again, my video naivete is showing, because I always thought the Amiga needed specialized hardware to do stuff like this, especially when there is video input. The wipes are fun, if perhaps a little staid compared to the Toaster's. In Scala's defense, they remain a bit more timeless in their simplicity. "Pause" controls, by time or frame count, how long to linger on a page before moving on to the next one. Time can be relative to the start of the screen reveal, or absolute so as to coordinate Scala animations with known timestamps on a pre-recorded video source. A mouse click can also be assigned as the "pause," waiting for a click to continue. "Sound" attaches a sound effect, or a MOD music file, to the reveal.
There are rudimentary tools for adjusting pitch and timing, and even for trimming sounds to fit. An in-built sampler makes quick, crunchy, low-fidelity voice recordings, for when you need to add a little extra pizazz in a pinch, or to rough out an idea to see how it works. Sometimes the best tool for the job is the one you have with you. There are hidden tools on the Main Menu. Like many modern GUI table views, the gap between columns is draggable. Narrowing the "Name" column reveals two hidden options to the right: Variables and Execute. Now I'm finally getting a whiff of HyperCard . Unlike HyperCard , these tools are rather opaque and non-intuitive. Right off the bat, there is no built-in script editor. Rather, Scala is happy to position itself as one tool in your toolbox, not to provide every tool you need out of the box. It's going to take some time to get to know how these work, perhaps more than I have allocated for this project, but I'll endeavor to at least come to grips with these. The Scala manual says, "The Scala definition of variables (closely resembles) ARexx, since all variable operators are performed by ARexx." After 40 years, I guess it's time to finally learn about ARexx. ARexx is the Amiga implementation of the REXX scripting language . From ARexx User's Reference Manual, "ARexx is particularly well suited as a command language. Command programs, sometimes called "scripts" or "macros", are widely used to extend the predefined commands of an operating system or to customize an applications program." This is essentially the Amiga's AppleScript equivalent, a statement which surely has a pedant somewhere punching their 1084 monitor at my ignorance. Indeed, the Amiga had ARexx before Apple had AppleScript, but not before Apple had HyperCard . 
Amiga Magazine , August 1989, described it thusly, "Amiga's answer to HyperCard is found in ARexx, a programming and DOS command language, macro processor, and inter-process controller, all rolled into one easy-to-use command language." "Easy-to-use" you say? Commodore had their heart in the right place, but the "Getting Acquainted" section of the ARexx manual immediately steers hard into programmer-speak. From the jump we're hit with stuff like, "(ARexx) uses the double-precision math library called "mathieeedoubbas.library" that is supplied with the Amiga WorkBench disk, so make sure that this file is present in your LIBS: directory. The distribution disk includes the language system, some example programs, and a set of the INCLUDE files required for integrating ARexx with other software packages." I know exactly what I'd have thought back in the day. What is a "mathieeedoubbas?" What is a "library?" Is "LIBS" and "library" the same thing? What is "double-precision?" What is "INCLUDE"? What is a "language system?" You, manual, said yourself on page 2, "If you are new to the REXX language, or perhaps to programming itself, you should review chapters 1 through 4." So far, that ain't helpin'. Luckily for young me, now me knows a thing or two about programming and can make sense of this stuff. Well, "sense" in the broadest definition only. What this means for Scala is that we have lots of options for handling variables and logic in our project. The manual says, "Any ARexx operators and functions can be used (in the variable field)." However, a function like "Say," which outputs text to console, doesn't make any sense in a Scala context, so I'm not always 100% clear where lie the boundaries of useful operators and functions. 
In addition to typical math functions and simple string concatenation, ARexx gives us boolean and equality checks, bitwise operators, random number generation, string to digit conversion, string filtering and trimming, the current time, and a lot more. Even checking for file existence works, which possibly carried over from Scala's roots as a modem-capable automated remote video-titler. Realistically, there's only so much we can do given the tiny tiny OMG it's so small interface into which we type our expressions. My aspirations are scoped by the interface design. This is not necessarily a bad thing, IMHO. "Small, sharp tools" is a handy mental scoping model. Variables are global, starting from the page on which they're defined. So page 1 cannot reach variables defined on page 2. A page can display the value of any currently defined variable by using a special prefix in the on-screen text. I was trying to do Cheifet's melt effect, but I couldn't get animated brushes to work in Scala. Still, I was happy to get even this level of control over genlock/Scala interplay. "Execution" in the Main Menu means "execute a script." Three options are available: Workbench, CLI, and ARexx. For a feature that gets two pages in the manual with extra-wide margins, this is a big one, but I get why it only receives a brief mention. The only other recourse would be to include hundreds of pages of training material. "It exists. Have fun." is the basic thrust here. "Workbench" can launch anything reachable via the Workbench GUI, the same as double-clicking it. This is useful for having a script set up the working environment with helper apps, so an unpaid intern doesn't forget to open them. For ARexx stuff, programs must be running to receive commands, for example. "CLI" does the same thing as Workbench, except for AmigaDOS programs; programs that don't have a GUI front-end. Maybe open a terminal connection or monitor a system resource.
"ARexx" of course runs ARexx scripts. For a program to accept ARexx commands, it must have an active REXX port open. Scala can send commands, and even its own variable data, to a target program to automate it in interesting ways. I saw an example of drawing images in a paint program entirely through ARexx scripting. Scala itself has an open REXX port, meaning its own tools can be controlled by other programs. In this way, data can flow between software, even from different makers, to form a little self-enclosed, automation ecosystem. One unusually powerful option is that Scala can export its own presentation script, which includes information for all pages, wipes, timings, sound cues, etc, as a self-contained ARexx script. Once in that format, it can be extended (in any text editor) with advanced ARexx commands and logic, perhaps to extract data from a database and build dynamic pages from that. Now it gets wild. That modified ARexx file can then be brought back into Scala as an "Execute" ARexx script on a page. Let me clarify this. A Scala script, which builds and runs an entire multi-page presentation, can itself be transformed into just another ARexx script assigned to a single page of a Scala project. One could imagine building a Scala front-end with a selection of buttons, each navigating on-click to a separate page which itself contains a complete, embedded presentation on a given topic. Scripts all the way down. There's one more scripting language Scala supports, and that's its own. Dubbed Scala Lingo (or is it Lingua?), when we save a presentation script we're saving in Lingo. It's human-readable and ARexx-friendly, which is what made it possible to save a presentation as an ARexx script in the previous section. Here's pure Lingo. This is a 320x200x16 (default palette) page, solid blue background with fade in. It displays one line of white text with anti-aliasing. The text slides in from the left, pauses 3 seconds, then slides out to the right. 
Here's the same page as an ARexx script. It looks like all we have to do is wrap each line of Lingo in single quotes and add a little boilerplate. So, we have Scala on speaking terms with the Amiga and its applications, already a thing that could only be done on this particular platform at the time. Scala's choice of platform was further benefited by one of the Amiga's greatest strengths. That was thanks to the "villain" of the PaperClip article, Electronic Arts. The hardware and software landscape of the 70s and 80s was a real Wild West, anything-goes, invent-your-own-way period of experimentation. Ideas could grow and bloom and wither on the vine multiple times over the course of a decade. Why, enough was going on that a guy could devote an entire blog to it all. ahem While this was fun for the developers, who had an opportunity to put their own stamp on the industry, for end-users it could create a bit of a logistical nightmare. Specifically, apps tended to be siloed, self-contained worlds which read and wrote their own private file types. Five different art programs? Five different file formats. Data migration was occasionally supported, as with VisiCalc's use of DIF (data interchange format) to store its documents. DIF was not a "standard" per se, but rather a set of guidelines for storing document data in ASCII format. Everyone using DIF could roll their own flavor and still call it DIF, like Lotus did in extending (but not diverging from) VisiCalc's original. Microsoft's DIF variant broke with everyone else, a fact we'll just let linger in the air like a fart for a moment. Let's really breathe it in, especially those of us on Windows 11. More often than not, especially in the case of graphics and sound, DIF-like options were simply not available. Consider The Print Shop on the Apple II. When its sequel, The New Print Shop, arrived, it couldn't even open graphics from the immediately previous version of itself.
A converter program was included to bring original Print Shop graphics into New Print Shop. On the C64, the Koala file format became semi-standard for images, simply by virtue of its popularity. Even so, there was a market for helping users move graphics across applications on the exact same hardware. While other systems struggled, programs like Deluxe Video on the Amiga were bringing in Deluxe Music and Deluxe Paint assets without fuss. A cynic will say, "Well yeah, those were all EA products so of course they worked together." That would be true in today's "silos are good, actually" regression of computing platforms into rent extractors. But, I will reiterate once more, there was genuinely a time when EA was good to its users. They didn't just treat developers as artists, they also empowered users in their creative pursuits. EA had had enough of the file format wars. They envisioned a brighter future and proposed an open file standard to achieve precisely that. According to Dave Parkinson's article "A bit IFFy," in Amiga Computing Magazine, issue 7, "The origins of IFF are to be found in the (Apple) Macintosh's clipboard, and the file conventions which allow data to be cut and pasted between different Mac applications. The success of this led Electronic Arts to wonder — why not generalize this?" Why not, indeed! In 1985, working directly in conjunction with Commodore, the Electronic Arts Interchange File Format 1985 was introduced; IFF for short. It cannot be overstated how monumental it was in unlocking the Amiga's potential as a creative workhorse. From the Scala manual, "Unlike other computers, the Amiga has very standardized file formats for graphics and sound. This makes it easy to exchange data between different software packages. This is why you can grab a video image in one program, modify it in another, and display it in yet another."
I know it's hard for younger readers to understand the excitement this created, except to simply say that everything in computing has its starting point. EA and the Amiga led the charge on this one. So, what is it? From "A Quick Introduction to IFF" by Jerry Morrison of Electronic Arts, "IFF is a 2-level standard. The first layer is the "wrapper" or "envelope" structure for all IFF files. Technically, it's the syntax. The second layer defines particular IFF file types such as ILBM (standard raster pictures), ANIM (animation), SMUS (simple musical score), and 8SVX (8-bit sampled audio voice)." To assist in the explanation of the IFF file format, I built a Scala presentation just for you, taken from the Amiga ROM Kernel Reference Manual. This probably would have been better built in Lingo, rather than fiddling with the cumbersome editing tools, which don't handle overlapping objects well. What's done is done. I used the previously mentioned "link" wipe to move objects as groups. IFF is a thin wrapper around a series of data "chunks." It begins with a declaration of what type of IFF this particular file is, known as its "FORM." Above we see the ILBM "FORM," probably the most prevalent image format on the Amiga. Each chunk has its own label, describes how many bytes long it is, and is then followed by that many data bytes. That's really all there is to it. IDs for the FORM and the expected chunks are spec'd out in the registered definition document. Commodore wanted developers to always try to use a pre-existing IFF definition for data when possible. If there was no such definition, say for ultra-specialized data structures, then a new definition should be drawn up. "To prevent conflicts, new FORM identifications must be registered with Commodore before use," says the Amiga ROM Kernel Reference Manual. In Morrison's write-up on IFF, he likened it to ASCII.
When ASCII data is read into a program, it is sliced, diced, mangled, and whatever else needs to be done internally to make the program go. However, the data itself is on disk in a format unrelated to the program's needs. Morrison described a generic system for storing data, of whatever type, in a standardized way which separated data from software implementations. At its heart, IFF first declares what kind of data it holds (the FORM type), then that data is stored in a series of labelled chunks. The specification of how many chunks a given FORM needs, the proper labels for those chunks, the byte order for the raw data, and so on are all in the FORM's IFF definition document. In this way, anyone could write a simple IFF reader that follows the registered definition, et voilà! Deluxe Paint animations are suddenly a valid media resource for Scala to consume. It can be confusing when hearing claims of "IFF compatibility" in magazines or amongst the Amiga faithful, but this does not mean that any random Amiga program can consume any random IFF file. The burden of supporting various FORMs still rests on each individual developer. FORM definitions which are almost identical, yet slightly different, were allowed. For example, the RGBN image FORM is "almost identical to" ILBM, with differences in one chunk and the requirement of a new chunk. "Almost identical" is not "identical," and so though both RGBN and ILBM are wrapped in standardized IFF envelopes, a program must explicitly support the ones of interest. Prevalent support for any given FORM type came out of a communal interest to make it standard. Cooperation was the unsung hero of the IFF format. "Two can do something better than one" has been on infinite loop in my mind since 1974. How evergreen is that XKCD comic about standards? Obviously, given we're not using it these days, IFF wound up being one more format on the historical pile. We can find vestiges of its DNA here and there, but not the same ubiquity.
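That "simple IFF reader" really is simple. A minimal sketch of the two-level structure, following the published spec (big-endian 32-bit lengths, chunks padded to an even byte count); the "TEST"/"NAME" FORM here is entirely made up for demonstration:

```python
import struct

def iff_chunks(data: bytes):
    # Layer 1: the envelope -- "FORM", a 32-bit big-endian length, a type ID
    if data[0:4] != b"FORM":
        raise ValueError("not an IFF FORM")
    (size,) = struct.unpack(">I", data[4:8])
    form_type = data[8:12].decode("ascii")
    yield "FORM", form_type
    # Layer 2: labelled chunks -- 4-byte ID, 32-bit length, then the data
    pos, end = 12, 8 + size
    while pos < end:
        cid = data[pos:pos + 4].decode("ascii")
        (clen,) = struct.unpack(">I", data[pos + 4:pos + 8])
        yield cid, data[pos + 8:pos + 8 + clen]
        pos += 8 + clen + (clen & 1)   # chunks are padded to even length

# A tiny hand-built FORM with one chunk, just to exercise the walker
blob = b"FORM" + struct.pack(">I", 4 + 8 + 5 + 1) + b"TEST" \
     + b"NAME" + struct.pack(">I", 5) + b"kilda" + b"\x00"
for label, payload in iff_chunks(blob):
    print(label, payload)
```

A reader like this can skip any chunk it doesn't recognize, which is the whole trick: old programs survive new chunk types untouched.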
There were moves to adopt IFF across other platforms. Tom Hudson, he of DEGAS Elite and CAD-3D , published a plea in the Fall 1986 issue of START Magazine for the Atari ST development crowd to adopt IFF for graphics files. He's the type to put up, not shut up, and so he also provided an IFF implementation on the cover disk, and detailed the format and things to watch out for. Though originally inspired by Apple, IFF seemed, in Apple's eyes, to only have a place within a specific niche. AIFF, the Audio Interchange File Format, essentially standardized audio on the Mac, much like ILBM did for Amiga graphics. Despite being an IFF variant registered with Commodore, Scala doesn't recognize it in my tests. So, again, IFF itself wasn't a magical panacea for all file format woes. That fact was recognized even back in the 80s. In Amazing Computing , July 1987, John Foust asked "Is IFF Really a Standard?" and answered, "Although the Amiga has a standard file format, it does not mean Babel has been avoided." He noted that programs can interpret IFF data incorrectly, resulting in distorted images, or outright failure. Ah well, nevertheless . Side note: One might reasonably believe TIFF to be a successful variant of IFF. Alas, TIFF shares "IFF" in name only and stands for "tagged image file format." One more side note: Microsoft also did to IFF what they did to DIF. fart noise The last major feature of note is Scala's extensibility. In the Main Menu list view, we have columns for various page controls. The options there can be expanded by including EX modules, programs which control external systems. This feels adjacent to HyperCard's XCMDs and XFCNs, which could extend HyperCard beyond its factory settings. EX modules bundled with Scala can control Sony Laserdisc controllers, enable MIDI file playback, control advanced Genlock hardware, and more. Once installed as a "Startup" item in Scala , these show up in the Main Menu and are as simple to control as any of Scala's built-in features.
As an EX module, it is also Lingo scriptable, so the opportunity to coordinate complex hardware interactions all through point-and-click is abundant. I turned on WinUAE's MIDI output and set it to "Microsoft GS Wave Table." In Amiga Workbench, I enabled the MIDI EX for Scala . On launch, Scala showed a MIDI option for my pages so I loaded up Bohemian-Rhapsody-1.mid . Mamma mia, it worked! I haven't found information about how to make new EXes, nor am I clear what EXes are available beyond Scala's own. However, here at the tail end of my investigation, Scala is suddenly doing things I didn't think it could do. The potential energy for this program is crazy high. No, I'm not going to be doing that any time soon, but boy do I see the appeal. Electronic Arts's documentation quoted Alan Kay for the philosophy behind the IFF standard, "Simple things should be simple, complex things should be possible." Scala upholds this ideal beautifully. Making text animate is simple. Bringing in Deluxe Paint animations is simple. Adding buttons which highlight on hover and travel to arbitrary pages on click is simple. The pages someone would typically want to build, the bread-and-butter stuff, is simple. The complex stuff though, especially ARexx scripting, is not fooling around. I tried to script Scala to speak a phrase using the Amiga's built-in voice synthesizer and utterly failed. Jimmy Maher wrote of ARexx in The Future Was Here: The Commodore Amiga , "Like AmigaOS itself, it requires an informed, careful user to take it to its full potential, but that potential is remarkable indeed." While Scala didn't make me a video convert, it did retire within me the notion that the Toaster was the Alpha and Omega of the desktop video space. Interactivity, cross-application scripting, and genlock all come together into a program that feels boundless. In isolation, Scala is not a killer app. It becomes one when used as the central hub for a broader creative workflow.
A paint program is transformed into a television graphics department. A basic sampler becomes a sound booth. A database and a little Lingo becomes an editing suite. Scala really proves the old Commodore advertising slogan correct, "Only Amiga Makes it Possible." I'm accelerating the cycle of nostalgia. Now, we long for "four months ago." The more I worked with Scala , the more I wanted to see how close I could get to emulating video workflows of the day. Piece by piece over a few weeks I discovered the following (needs WinUAE , sorry) setup for using live Scala graphics with an untethered video source in a Discord stream. Scala can't do video switching*, so I'm locked to whatever video source happens to be genlocked to WinUAE at the moment. But since when were limitations a hindrance to creativity? * ARexx and EX are super-powerful and can extend Scala beyond its built-in limitations, but I don't see an obvious way to explore this within WinUAE. This is optional, depending on your needs, but it's the fun part. You can use whatever webcam you have connected just as well. Camo Camera can stream mobile phone video to a desktop computer, wirelessly no less. Camo Camera on the desktop advertises your phone as a webcam to the desktop operating system. So, install that on both the mobile device and desktop, and connect them up. WinUAE can see the "default" Windows webcam, and only the default, as a genlock source; we can't select from a list of available inputs. It was tricky getting Windows 11 to ignore my webcam and treat Camo Camera as my default, but I got it to work. When you launch WinUAE , you should see your camera feed live in Workbench as the background. So far, so good. Next, in Scala > Settings turn on Genlock. You should now see your camera feed in Scala with Scala's UI overlaid. Now that we have Scala and our phone's video composited, switch over to OBS Studio . Set the OBS "Source" to "Window Capture" on WinUAE.
Adjust the crop and scale to focus in on the portion of the video you're interested in broadcasting. On the right, under "Controls" click "Start Virtual Camera." Discord, Twitch , et al are able to see OBS as the camera input for streaming. When you can see the final output in your streaming service of choice (I used Discord 's camera test to preview), design the overlay graphics of your heart's desire. Use that to help position graphics so they won't be cut off due to Amiga/Discord aspect ratio differences. While streaming, interactivity with the live Scala presentation is possible. If you build the graphics and scripts just right, interesting real-time options are possible. Combine this with what we learned about buttons and F-Keys, and you could wipe to a custom screen like "Existential Crisis - Back in 5" with a keypress. Headline transitions were manually triggered by the F-Keys, just to pay off the threat I made earlier in the post. See? I set 'em up, I knock 'em down. I also wrote a short piece about Cheifet , because of course I did. Ways to improve the experience, notable deficiencies, workarounds, and notes about incorporating the software into modern workflows (if possible).

WinUAE v6.0.2 (2025.12.21) 64-bit on Windows 11
Emulating an NTSC Amiga 1200
2MB Chip RAM, 8MB Z2 Fast RAM
AGA Chipset
68020 CPU, 24-bit addressing, no FPU, no MMU, cycle-exact emulation
Kickstart/Workbench 3.1 (from Amiga Forever )
Windows directory mounted as HD0:
For that extra analog spice, I set up the video Filter as per this article

Scala Multimedia MM300
Cover disk version from CU Amiga Magazine , issue 96 (no copy protection)
I didn't have luck running MM400 , nor could I find a MM400 manual
Also using Deluxe Paint IV and TurboText

Nothing to speak of. The "stock" Amiga 1200 setup worked great. I never felt the need to speed boost it, though I did give myself as much RAM as possible.
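As a reference point, the emulation settings above map roughly onto a WinUAE `.uae` configuration file like the one below. This is a sketch from memory of WinUAE's config format, not a verified drop-in file: key names can differ between WinUAE versions, the genlock option in particular should be double-checked, and the paths are placeholders.

```ini
; Approximate WinUAE .uae settings for the Amiga 1200 setup described above.
cpu_model=68020
cpu_24bit_addressing=true
cpu_cycle_exact=true
chipset=aga
ntsc=true
chipmem_size=4          ; in 512KB units: 4 = 2MB Chip RAM
fastmem_size=8          ; 8MB Zorro II Fast RAM
kickstart_rom_file=C:\Amiga\kick31.rom        ; placeholder path
filesystem2=rw,HD0:Work:C:\Amiga\Shared,0     ; Windows directory as HD0:
genlock=true            ; option name may vary by version
```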
I'll go ahead and recommend Deluxe Paint IV over III as a companion to Scala , because it supports the same resolutions and color depths. If you wind up with a copy of Scala that needs the hardware dongle, WinUAE emulates that as well. Under are the "red" (MM200) and "green" (MM300 and higher) variants I'm not aware of any other emulators that offer a Genlock option. I did not encounter any crashes of the application nor emulator. One time I had an "out of chip RAM" memory warning pop up in Scala . I was unclear what triggered it, as I had maxed out the chip RAM setting in WinUAE . Never saw it again after that. I did twice have a script become corrupted. Scripts are plain text and human-readable, so I was able to open it, see what was faulting, and delete the offending line. So, -6 points for corrupting my script; +2 points for keeping things simple enough that I could fix it on my own. F-Keys stopped working in Scala 's demonstration pages. Then they started working again. I think there might have been an insidious script error that looked visually correct but was not. Deleting button variable settings and resetting them got it working again. This happened a few times. I saw some unusual drawing errors. Once, when a bar of color touched the bottom-right edge of the visible portion of the screen, extra pixels were drawn into the overscan area. Another time, I had the phrase "Deluxe Paint" in Edit Mode, but when I viewed the page it only said "Deluxe Pa". Inspecting the text in "List" mode revealed unusual characters (the infinity symbol?!) had somehow been inserted into the middle of the text. I outlined one option under "Bonus: Streaming Like It's 1993" above. OBS recording works quite well and is what I used for this post. WinUAE has recording options, but I didn't have a chance to explore them. I don't yet know how to export Scala animations into a Windows-playable format. For 2026, it would surely be nice to have native 16:9 aspect ratio support.
Temporary script changes would be useful. I'd love to be able to turn off a page temporarily to better judge before/after flow. It can be difficult to visualize an entire project flow sometimes. With page transitions, object transitions, variable changes, logic flow, and more, understanding precisely what to do to create a desired effect can get a little confusing. Scala wants to maintain a super simple interface almost to its detriment. Having less pretty, more information dense, "advanced" interface options would be welcome. I suppose that's what building a script in pure ARexx is for. I'd like to be able to use DPaint animated brushes. Then I could make my own custom "transition" effects that mix with the Scala page elements. Maybe it's possible and I haven't figured out the correct methodology? The main thing I wanted was a Genlock switch, so I could do camera transitions easily. That's more of a WinUAE wishlist item though.

Stone Tools 1 month ago

PaperClip on the Atari 8-Bit

The Atari line of 8-bit computers has always been a bit of a chimera to me. Internals designed by Jay Miner, whose later work would form a foundational technology in my career path, with the Amiga's famous chipset; industrial design which never seemed to know quite how to position itself, moving me through phases of dry indifference and unquenchable technolust . Regardless, this line of systems has its own story to tell; one I have only recently begun to explore. Starting life as a hardware ROM chip on the Commodore PET and ending life as just another brand name in Electronic Arts's mausoleum, PaperClip squeezed in a unique Atari version along the way. Considered a triumph at the time, topping sales charts for over a year (across versions), PaperClip had considerable staying power. ANTIC Magazine sold it in their "The Catalog," a kind of ANTIC-approved curated set of software, alongside CAD-3D . ANTIC went further still, using it on Atari hardware to produce their Atari magazine, sending stories to their photo-typesetting service by modem. PaperClip looks friendly, has a cute little kickstand for its manual, was used to produce professional work, and it's on a system which, had a butterfly flapped its wings at just the right time, might have been the Amiga's little brother. It's high time we got to know one another. And let me state right upfront: there are no Clippy jokes nor references for this entire article. "No lazy jokes in 2026" is my New Year's resolution to you. Hello and welcome to 2026. How was your holiday? Did you receive one of those new-fangled Commodore 64s ? How about a shiny, new Intellivision ? Perhaps Atari's latest hot 3D game, I, Robot , was waiting in your wooden clogs? Wait, what year is it again? I had a pleasant holiday, and gave a lot of thought to my hopes and dreams for the new year. Of course, a big goal is to continue using and discussing the productivity software of yesteryear, both popular and obscure.
In fact, I'm doing so right now, typing this very post into PaperClip on the Atari XE. With zero previous experience on Atari 8-bits going into this write-up, I have to say that first impressions of both the system and PaperClip are strong. Once booted, PaperClip 's UX demands immediate conversation. The top "Status Line" shows us, in inverse text, free memory in "lines of text," a "Paste" value (I'll talk about later), and cursor position, where "Col" shows the column on screen, and "Line" shows the line of the screen-formatted document. This concept of "lines" will be discussed later. At the bottom is an ever-present "Command Line." When a keyboard command is issued, say to save the document, the bottom bar is where interfacing with the command occurs. I appreciate having a consistent spot for this, though my instinct says this area could have been put to better use. While typing, this area only shows program title and copyright information. Maybe common editing commands could have been shown instead? Chrome aside, the big standout is its font. PaperClip uses a custom-built typeface for everything. It's kind of chunky, displays 40-characters across and 20 lines (18 for text, 2 for UI chrome), has lower and upper case, ascenders and descenders, and serifs. Serifs are exceedingly rare in this kind of software, on 8-bit hardware of this era. To quell the Atari nerds, I will note the program works in Antic hardware "Mode 3." This is a special text-only mode which requires a re-definition of every character you want to type. Nothing is pre-defined, so this can be used for anything, such as alternate languages, mathematical symbols, superscript and subscript, and the like. While I have quibbles about the design of some of these letterforms, especially the funky capital "I", the font has a friendly, easy-on-the-eyes design. I'd even call it "opinionated," being that it evokes a specific sense of the program's mood. These are warm pixels, if that makes any sense. 
The warmness extends to the manual as well. I really like what they did with this, or at least did for a time before switching to a more traditional format. It is horizontal, spiral-bound along the top edge. Instructions show how to origami the packaging into a stand, upon which the manual sits. It's like a little companion buddy sitting next to you while you work, conceptually like the little dude on the box cover art. Supposedly! I don't have access to the real thing, I'm sorry to say, but the intent is evident and appreciated. The tone of the manual is likewise friendly, stepping users of the day through the terminology needed to make sense of word processing as a concept. It's front-loaded with typical "it's like a typewriter but better" genre explanations, and generally does not assume the user knows anything about anything to do with computers. Generally . Getting started with typing is as simple as you'd imagine and hope. The only non-obvious knowledge required before writing is how to save a document. PaperClip supports multiple disk drives, so throw a blank disk in, hit , name the file targeting the disk as and you're good to go. According to the manual "all editing commands in PaperClip are done with and then another key." The adherence to a common key combination to invoke commands is nice, but the mnemonics themselves aren't always immediately intuitive. To print we use "O." To save, we use "W" to "write" the file to disk. To get the word count we use "1". Knowing some makes other mnemonics self-evident. "Write" is complemented by "read", so "R" will load a saved file. A few annoyances are cropping up, even at this early stage. PaperClip consistently drops letters while typing, requiring quite a bit of due-diligence on my part to backspace and make corrections as I notice them. I set Altirra's "keypress mode" to "baked" as it says this is best for productivity applications, thinking it must be related to my problem. 
Then I found a contemporary review that complained about the exact same thing on real hardware, so I guess that means I'm having an "authentic" experience. Hurray? Every screen line of text ends with a little dot. According to the manual, this indicates "the end of the line" but when every line has a dot, that's effectively the same as not having a dot. There is a use case where this makes sense, as when line lengths are defined to be wider than the screen, but it still adds visual clutter I don't particularly enjoy. PaperClip III removed this, BTW. There also seems to be a strange limit to where the cursor can sit. If I type up to column 37 the cursor stops moving to the right, but the line of text itself shifts left off-screen by a few characters to let me type up to column 40. This behavior must surely be imposed by the "Mode 3" calls? The net effect is that the text is always in motion, my cursor jerking about from line to line depending on whether the text wrap wants to shift or the line itself wants to slide over. I can mostly ignore it, but it's odd behavior. Text enters in insert mode, not overtype, and can be switched on the fly. Word wrap happens immediately, no delay in calculating line length. immediately cancels whatever command action you might be in the middle of configuring. The entire program is snappy and performant. PaperClip was released for the Commodore PET as one component of their Execudesk software suite. The developer of that, Steve Douglas, then created a version for the Commodore 64. PaperClip on the Atari was a complete machine language rewrite which shared only the name and perhaps general guidance on feature requirements. According to developer Steve Ahlstrom, Batteries Included wanted a version which took advantage of the strengths of its host system. The basis for the Atari version was literally the line editor from the popular Action! programming language. The Action! 
manual says of its own line editor, "If you have used a program editor before, you will notice that the Action! Editor is far more sophisticated than most others: in fact, it could almost be called a word processor because it does so much." That was apparently taken to heart by Batteries Included in thinking they had a quick path to a great word processor with a couple of simple licensing agreements, including the PaperClip brand. The lawyers were the true heroes all along?! You know what is surprisingly not bothering me at all? 40-column mode. I thought I would be driven to madness, but actually it is quite the opposite. I find it focuses me on the important thing: the words. If I had to do a lot of printing in the 80s, 40 columns would be a problem. I am a very visual person and I like things to be "just so," as close to WYSIWYG as possible. As a blogger in 2026, all I need is the text; the blogging platform handles the rest. PaperClip handles Markdown almost perfectly; there is no backtick in the custom font, but that's not a deal-breaker. I've done more with less. One of PaperClip's nicest features is dual window editing. This seems to come directly from its roots as the Action! line editor, in which this can also be found. (a key unique to the Atari keyboard) will open text window 2, into which an entirely different document may be started or loaded. will then toggle between the two text windows, making it easy to jot notes in one, and commit to the main corpus in the other, for example. Remember though that both windows share the same "Free" memory, so there are hard limits to this magnanimous gift. However, they also share the same "Paste" buffer, making it a snap to copy/paste between documents. Each window can load a different document, but two views into the same document are not possible. That's a little odd, because the line editor from Action! does support that.
To be fair you can load the same document twice, but the two copies are independent of one another. Changes in one window are not reflected in the other. What is a "line" of text? It depends on who you ask, I suppose. If we ask the screen, it would be "about 40 characters." If I ask my printer, it will depend on my printer and may be as few as 20 characters, as in the case of the 4.5" wide paper on an Atari 1020 plotter. 80 characters is the typical promised land for letter-sized or A4 paper, as when printing on the Atari 1029 dot matrix printer. I may also decide on an arbitrary width of my choosing, and PaperClip allows me to set this for myself. The EDIT menu is invoked by the key, then , which steps me through a list of application settings. Cursor behavior, top window height, left inset margin, screen colors, and line length are user-settable. I wanted to see what my 40-column width document would look like in 20-columns. Doing so wiped my document from memory, with no warning, no prompt, thanks for nothing, there's the door. I just lost five paragraphs of work. That led me to turn ON one of the other settings: auto-save. We can set to auto-save after a given number of characters, which I've put at 300 now because I'm paranoid AF. A "bell" sound warns when it's about to auto-save, then it writes everything to a temp file on the drive of choice. I had hoped it would overwrite my actual document, but at least my work is safe. The point I want to make about lines returns to the "Free" and "Paste" counters in the top status bar. Those are counting "lines" as defined by the above-mentioned EDIT menu settings. Set line length to 20 and the Free value doubles from the default 40 calculation. Set it to 80 and get half the number of Free lines. The Paste value is similarly the number of "lines" held in the paste buffer. This is all to say that I'm never really clear how much more I can write. I just don't think in terms of arbitrary "lines" like this. 
Lines x characters per line = Total Characters. Just show me remaining characters, please. Likewise, paste should tell me how many characters are in the buffer; that would be a far more useful metric for knowing where I stand. Why won't you do this for me, Herbie? I thought we were friends, Herbie?! Sometimes programmers get a little tunnel-visioned in how they approach solutions to user problems. Consider this passage from the manual discussing how to set the column for the left margin inset. Values can be 0, 1, or 2. The manual says, "Computers count strangely. Their first number is 0, while we humans are used to starting with 1." Now, correct me if I'm wrong, but this program was written for humans to use. And, correct me if I'm wrong, but it is relatively trivial to subtract 1 from a number in machine language, right? So why this weird lesson about 0-based numbering? The developer could allow "human" numbering while quietly shifting the value into "computer" numbering behind the scenes. I always feel weird when manuals reference "inside baseball" terminology for no good reason. It adds that little bit of cognitive load to the learning process, and more distressingly it presents extra-nerdy gatekeeping to the software. A word processor should not require a writer to learn about zero-based numbering. The more I research software, the more this barrier becomes obvious to me and I start to see it everywhere. It begs the question, "Were computers ever user friendly?" How many people were put off by a constant barrage of these small, subtle context shifts? A computer should match the human frame of reference, not vice versa. PaperClip... Herbie ... don't worry , it's not your fault; I still enjoy your company! So far. ( ominous_string_instruments.wav ) Editing in PaperClip has many of the tools you'd expect in a modern word processor, or at least close facsimiles. 
Specifically, we get full cut/copy/paste, through to Mark a block of text, which can be cut or copied, and will Paste it down, keeping it in the Paste buffer for further pasting if you like. As stated earlier, this works across documents in split screen mode. Cursor control happens as you'd hope, via arrow keys and keyboard modifiers for faster navigation between words, lines, and pages. While the bog standard tools are unsurprising, PaperClip has more than a few interesting takes on typical word processor functionality. will perform letter and word swap respectively. Position the cursor on a letter or word and insta-transpose it with the letter or word immediately to the left of the current scope. will do a find and replace, from where the cursor is positioned; no backward searching here. The twist is in its scope. One of PaperClip 's most interesting ideas is "batch" files and the manipulation of multiple files at once. By associating files as a batch, find and replace can do its job through all files in the batch. We don't have to open each one up individually ourselves; the computer does the heavy lifting for us. Dance, monkey, dance! Modern text editors, like Visual Studio Code , allow for find and replace amongst all files in a project. I suspected that given PaperClip's origins as a code editor in Action! this batch processing magic must have been inherited from there, but no! It's unique to PaperClip . I'm not aware of any other word processor, including modern ones, that do this. The next trick does in fact come from its roots in Action! . Tags are named bookmarks for fast navigation through a document. Tag names can only be one character, but can be any character. sets a tag by id, and will Go to a tag id. But, beware, the program continues its pattern of adding just a pinch of evil to great features. "Tags are not saved as part of your text when you write your text to a disk file. Tags are lost if you do any editing on the line containing them." 
Fun while it lasted. Normally I wouldn't discuss much about printer-related features, but PaperClip's options are robust, and Altirra provides dot matrix printer emulation, so we have an opportunity for a little fun here. There are some interesting options available, keeping in mind that PaperClip is not WYSIWYG, but rather more like WYGIWYHFC (what you get is what you hope, fingers crossed). Bold, italic, underline, left/center/right alignment, super/subscripting, and full justification are available. Even if the screen doesn't show it to you live, the options for your output are plentiful. The on-screen representation can be visually tough to decipher at times. The above, printed on the emulated printer, produces the below. One other printer-only option I wanted to note is that PaperClip can do math. That doesn't look like much. Again, like other functions, this interesting utility has its own tradeoff. We'll only get the math to math when we print the document. There are a lot of things printing can do that can't be immediately seen. Print Preview can render these options, though it lacks the Commodore 64's faux-80-column preview mode. Tabular data with automatic columnar calculations, auto-build a table of contents, dual column layout, mail merge, headers, footers and more are all possible using embedded printer codes. One other non-printing option that is pretty useful, especially for group editing: comments. (then period) will hide text from that character up to the next carriage return. Pretty useful, especially considering tags don't save with the document. Maybe use comments and Find by keyword instead? Getting words onto screen is nice, but a word processor should endeavor to make life easier for the writer. PaperClip takes a stab at this through macros. This sounds like a decidedly programmery feature, and surely must be inherited from the Action! line editor. But no! Again, this feature is unique to PaperClip .
Now, between you and me, I don't see a whole lot of value in this particular implementation. It is simply, "type a bunch of text with a shortcut." I prefer Bank Street Writer+, as its macros can do anything the user can do with the keyboard, which includes issuing editing commands. When would I use this? I suppose I could embed my street address into a macro? Maybe there is some ridiculously long word that I need to type repeatedly? I can imagine back in the day using it to inject complex printer command codes with a stroke. Otherwise, this is another PaperClip feature that elicits an extra "sigh." Ah, but the sighs get heavier and more plentiful, just you wait. Macros can be stored in separate files to load up specific shortcuts for specific needs. Sounds useful, but here's another "sigh" moment. No, scratch that, this goes beyond. See if you can spot it in the video below as I load in a macro set. Did you see that madness?! My entire document, dumped without warning, just to load a macro file. All the manual says is, "Enter the name of your Macro File. PaperClip will read the file into the Macro Buffer and the Macros are now ready for your use." That statement is true only in the most pedantic way, "You never asked if I'd also delete your work, so don't hold me accountable for your lack of inquisitiveness." Dumping unsaved work unceremoniously is a major strike against PaperClip . It's not just this time either. More than once, seemingly innocuous features did wholesale resets of the input buffer. I've lost work multiple times, which is particularly frustrating because I enjoy the PaperClip writing experience. At any rate, after repairing my lost text, here's the macro at work. Spell check will definitely not catch that typo. One of the tricks about this kind of archaeology is it isn't always obvious which manual goes with which version of the software.
It seems I've been using a 1.x manual, which hid a 2.0 special feature, a memory-resident spellcheck. I just needed to know the prescribed rite (and find the dictionary file) to invoke it. brings up a menu of spellcheck options, including a simple audit of unknown words. Some modern words are unknown, of course, but some omissions are inexplicable. PaperClip will step through all unknown words, giving us a chance to accept as-is, retype it, or teach PaperClip about its existence for the future. But that isn't enough, of course; there is always one extra gotcha with PaperClip . Once you've told PaperClip to learn a word, you must then save the learned words out to a supplemental dictionary, and explicitly reload that as a separate action when you want to use it later; it cannot be appended to the default dictionary. Flipping through this v2.1 manual, hunting for other hidden goodies, I see they added some notes that would have been uniquely helpful to me days earlier. I'm reminded of the wisdom of the computer elders, "If a bug art found too late, a caution ye may state. If eleven art reveal'd, thy code must yet be heal'd." The first software I looked at for this blog was Deluxe Paint , by Electronic Arts. I noted that, hard as it is to believe now, there was a time when EA was cool. These days, boy they sure do have a poor reputation, absorbing other companies and digesting them for whatever scant nutrients remain in the marrow, don't they? Since 1987 they've taken in about 40 companies, including big names like PopCap Games, Codemasters, Respawn Entertainment, Maxis, and Bullfrog. An early glimpse at how EA would manage their acquisitions was with Origin. The maker of Ultima was quite infamously pushed to get Ultimas 8 and 9 out the door before either was fully cooked, with 9 in particular being a borderline unplayable mess. EA kept the Origin brand on artificial respiration, notably to maintain Ultima Online for a few years. 
In 2004, Origin was removed from its iron lung and allowed the dignity of death. If only there had been some warning, some indicator of how EA would handle acquisitions. Gosh, if only.

EA founder and CEO Trip Hawkins seems to have made two acquisitions during his tenure. First, in 1984, he picked up Organic Software, acquiring developer Mike Posehn's Get Organized, "the first personal information manager" (claims Posehn). Even that early in EA's history, they were seen primarily as a game company. InfoWorld said at the time, "Electronic Arts' reputation as a designer of games for the Apple computer may be a roadblock (to success in the productivity market)." Posehn said of Get Organized, "GO was a cool product but was ahead of its time. After lackluster sales...EA dropped the product." Posehn was later encouraged by Trip to do a little game called Desert Strike. Whatever productivity dream Trip had when purchasing Organic Software apparently didn't die. A second opportunity to kick-start a new market for EA's growth was right around the corner, and Trip would take it.

Founded in 1978, Batteries Included began life as a distributor of Commodore calculators and watches. They did actually include batteries free with every purchase; the company name was a direct reference to that business practice. User-friendliness was in their hearts from day one. In 1985, ROM Magazine dubbed Batteries Included "Canada's Atari." The Toronto-based software and hardware company was probably best known for PaperClip and DEGAS, a painting program for the Atari ST. DEGAS was written by Tom Hudson, the developer of CAD-3D, which I covered a few months ago. Man, everything is connected, ain't it? By 1985, PaperClip had sold "in the hundred thousand range." Director of Product Development, Michael Reichmann, said of Batteries Included's future, "Fast growth is always a problem, and we face it."
In 1986 there was a lot of public talk and concrete promises about PaperClip expanding onto new platforms. B.I. essentially had only a bare-minimum engineering staff; development was outsourced to contract programmers who received royalties on sales. B.I. put those developers to work on an Apple II version of PaperClip as well as PaperClip Elite, a 16-bit maturation of the product for DOS, Amiga, and the Atari ST. Elite was to include light desktop publishing features and "idea processing." Reichmann was promoted to president of Batteries Included around the end of 1985, and promised the ST software under the "Integrated Software" line for 1986. TPUG magazine, August 1986, noted that the IS line had slipped its shipping date to "late summer" of '86. (Remember, in print publishing an "August" issue would hit newsstands in July, which means the story was probably written in May/June.)

Perhaps the luster had worn off. Perhaps Reichmann felt the draw toward other endeavors (he would go on to be a prolific digital photographer and blogger on the subject). Whatever the case, Reichmann left Batteries Included in 1987, before any of the expanded PaperClip line had shipped. Ian Chadwick said of the event, "With (Reichmann) went a lot of the drive and determination that kept B.I. going. The remaining management was indecisive and insecure. We marched towards the inevitable: the sale of the company."

In 1987, having grown to over 60 employees, Batteries Included went into receivership, and was purchased by Electronic Arts. This put the software developers, who again were not employees of B.I., in the lurch. They were not informed of the troubles B.I. found itself in, and were quite literally left holding their software bag. They had code, but nobody to publish it. Deal-making shenanigans kept them in the dark too long, and by the time the lights came on, their code no longer held the relevance or cachet necessary to get it published.
Chadwick recounted the story of developer Scott Northmore, who was deep in development of B.I.'s Consultant Elite for the ST. "Scott had brought Consultant Elite to the beta test stage, but what now? EA held the rights to all products, even those in development, and were slow to release them— even those they had no intention to publish. Scott finally got the rights for his program back in December— nine months or so after the sale. For nine months he couldn't legally do anything with Consultant. All that work didn't generate any income."

While EA had certainly done a lot to progress digital creative workflows, success in general home productivity never really arrived. Music Construction Set apparently sold over a million units, and Deluxe Paint remains legendary. Other software titles made rumbles, but weren't earthshaking. Before Deluxe Paint, Dan Silva got together with Tim Mott and others to take "ideas from Xerox PARC" and use that knowledge (especially Mott's experience there) to build a "user friendly" word processor, Cut & Paste. "Cute, but feeble," said The Whole Earth Software Catalog.

I cannot claim to know what was in the hearts and minds of the principals of Electronic Arts at the time, but we do know what happened and when. While Trip Hawkins was running the show, he greenlit a number of productivity applications and acquired two productivity software companies. I believe it is safe to say he envisioned EA as more than just a game company, and PaperClip in particular was singled out as the title to keep the Batteries Included brand alive. In 1987, EA released PaperClip III under the "Batteries Included" banner, squeezing a little juice out of the title and brand name. It was well received, though it came with a little twist of the knife, as PaperClip seems to have a habit of doing. In focusing their efforts exclusively on the C64/128, all other platforms were abandoned.
The product announcements for various platforms just a year prior were officially dissolved. In 1988, EA pushed to get desktop publishing onto the C64 with PaperClip Publisher. It had nothing to do with PaperClip nor Batteries Included in their original incarnations. It was not a product B.I. had been working on that EA acquired. Rather, it was a newly commissioned port of a pre-existing product, sold in a box which happened to have a B.I. logo at the top. After Publisher was, well, nothing. That was the beginning and end of both products and the Batteries Included brand. No further updates were published for either program.

According to a post by Deluxe Paint co-developer Dallas Hodgson, "By the time the first EA Sports titles hit the shelves, it was pretty clear that the company wasn't interested in PC applications anymore, and killed off the entire Tools (formerly Creativity) division." Deluxe Music 2 and Deluxe Paint V were the final releases in that lineup. Trip left EA to found 3DO in 1991, and Larry Probst took the reins. Probst went on to snatch up some 25 companies, Origin being one of the first. What happened with Origin perhaps shouldn't have been too much of a surprise, hindsight being what it is. In a sense, the heads of Organic Software and Batteries Included were already on pikes at the moat when Origin was brought inside the gates.

Given the timeline of events, and the marketing push behind the PaperClip Elite line, I can't shake the feeling that buried in the vaults of EA lie near-complete versions of PaperClip for the Apple II, DOS, Amiga, and the Atari ST in particular. Who's with me for an Ocean's Eleven style adventure this year?

Let me preface my conclusion by saying I like PaperClip, quite a lot. It is easy on the eyes, and just plain nice to use. It's quick, responsive, has a robust set of core features plus some bonuses, and is overall trying its best to be my friend. I like it. I like it, but.
Too many features have hard-to-swallow negative side effects, and I live in constant fear that I might "try something" and lose 30 minutes of work. Sometimes people fall in love with "dangerous" characters, and I kinda sorta get it, in my own retro nerd way. Using PaperClip feels so good, I want it to be an on-going relationship. I tried the C64 version, and it felt off; the Atari version is superior, in my estimation. But PaperClip, Herbie, buddy, you burned me a couple of times too many to make me love you. It wasn't always your fault, but the net effect is the same. In 1985, I would have adjusted and changed to meet you. Here in 2026, I must say goodbye. You'll remain in my heart, but not on my hard drive.

Ways to improve the experience, notable deficiencies, workarounds, and notes about incorporating the software into modern workflows (if possible).

- Altirra 4.3.1 on Windows 11
- 65XE/130XE in NTSC mode
- 320K Compy RAM expansion
- Screen effects turned on for bloom, slight screen curvature
- 1025 Printer emulation
- Speed at 100% typical XE, with Warp mode toggled during disk access
- PaperClip 2.0 (* I did eventually find 2.1 mid-way through the project.)

You must, must, must set Altirra's default disk access from virtual read/write to read/write or you will lose your work when you least expect it. Emulators sometimes have this strange notion of saving to disk virtually, and must be manually told to write the virtual changes to the actual host file system. I had been saving constantly, with the disk in "VRW" mode which dutifully "saved to disk," but only in pantomime. After a hard reset of the emulator to test something I lost a large chunk of work. I don't understand the thinking behind this virtual file saving. When would I ever want to pretend to save? The TRS80GP emulator did the same thing and that also bit me more than once. I think that once the virtual mode is turned off it stays off, but now I'm super paranoid. 320K Compy is enough RAM for the emulator.
The "Rambo" memory type, even at the same 320K, doesn't provide as much free RAM as the "Compy" option. Likewise, choosing values over 320K, hoping to game the system to write Cryptonomicon 2, seems to max out at the 320K Compy limit.

I personally recommend using the "Screen Effects" for this one. With a little screen curvature, bloom, and subtle scanline intensity, I found the presentation matched well with the delightful font. It made PaperClip feel cozy.

Turn on "auto-save" and set it to a low character limit. At worst, you'll basically get a prompt every so many typed characters reminding you to save. At best, you'll have rotating temp files providing snapshots of the work in progress. The temp files are shared by all documents, so don't rely on them as your sole source of truth for any given document. It's there to help you in a pinch, that's all.

Emulators for other systems: macOS has "Atari800MacX" which released v6.1.2 at the start of 2026. Linux has "Atari800" last updated at the end of 2023.

Work was lost multiple times while writing this article. Those losses are discussed in the article and above, in the note about the emulator. The lesson learned? It is impossible to save "too often." I did not encounter any crashes of the application nor the emulator itself.

Altirra has built-in support for this. Open the disk image which contains your document, and a list of files on disk is presented. Right-click on the file to export, and that does the trick.

The main feature I found myself desiring is more on-screen visualization of formatting options. Maybe I naively thought that with the font being a custom font, bold, italic, superscript, etc. could be displayed. The fact we have to print to see the effects is a bit of a drag and limits the usefulness of those features for a modern audience. It's overly printer-centric in its design philosophy.

Stone Tools 1 month ago

XPER on the Commodore 64

In 1984, Gary Kildall and Stewart Cheifet covered "The Fifth Generation" of computing and spoke with Edward Feigenbaum, the father of "expert systems." Kildall started the show saying AI/expert systems/knowledge-based systems (it's all referred to interchangeably) represented a "quantum leap" in computing. "It's one of the most promising new softwares we see coming over the horizon." One year later, Kildall seemed pretty much over the whole AI scene. In an episode on "Artificial Intelligence" he did nothing to hide his fatigue from the guests: "AI is one of those things that people are pinning to their products now to make them fancier and to make them sell better." He pushed back hard against the claims of the guests, and seemed less-than-impressed with an expert system demonstration.

The software makers of those "expert systems" begged to differ. There is a fundamental programmatic difference in the implementation of expert systems which enables a radical reinterpretation of existing data, they argued. Guest Dr. Hubert Dreyfus re-begged to re-differ, suggesting it should really be called a "competent system." Rules-based approaches can only get you about 85% of the way toward expertise; it is intuition which separates man from machine, he posited.

I doubt Dreyfus would have placed as high as 85% competence on a Commodore 64. The creator of XPER, Dr. Jacques Lebbe, was undeterred, putting what he knew of mushrooms into it to democratize his knowledge. XPER, he reasoned, could do the same for other schools of knowledge even on humble hardware. So, just how much expertise can one cram into 64K anyway?

But what is an "expert system," precisely? According to Edward Feigenbaum, creator of the first expert system DENDRAL, in his book The Fifth Generation: Artificial Intelligence and Japan's Computer Challenge to the World, "It is a computer program that has built into it the knowledge and capability that will allow it to operate at the expert's level.
(It is) a high-level intellectual support for the human expert." That's a little vague, and verges on over-promising. Let's read on. "Expert systems operate particularly well where the thinking is mostly reasoning, not calculating - and that means most of the world's work." Now he's definitely over-promising. After going through the examples of expert systems in use, it boils down to a system which can handle combinatorial decision trees efficiently.

Let's look at an example. A doctor is evaluating a patient's symptoms. A way to visualize her thought process for a diagnosis might take the below form. An expert system says, "That looks like a simple decision tree. I happen to know someone who specializes in things like that, hint hint." XPER is a general-purpose tool for building such a tree from expert knowledge, carrying the subtle implication (hope? prayer?) that some ephemeral quality of the decision-making process might also be captured as a result. Once the tree is captured, it is untethered from the human expert and can be used by anyone.

XPER claims you can use it to build lots of interesting things. It was created to catalog mushrooms, but maybe you want to build a toy. How about a study tool for your children? Let's go for broke and predict the stock market! All are possible, though I'm going to get ahead of your question and say one of those is improbable.

I have a couple of specific goals this time. First, the tutorial is a must-do, just look at this help menu. This is the program trying to HELP ME. After I get my head around that alphabet soup, I want to build a weather predictor. The manual explicitly states it as a use case and by gum I'ma gonna do it. I'm hoping that facts like, "Corns ache something intolerable today" and "Hip making that popping sound again" can factor into the prediction at some point. First things first, what does this program do?
I don't mean in the high-level, advertising slogan sense, I mean "What specific data am I creating and manipulating with XPER?" It claims "knowledge" but obviously human knowledge will need to be molded into XPER knowledge somehow. Presently, we don't speak the same language. XPER asks us to break our knowledge down into three discrete categories, with the following relationships of object, feature, and attribute:

My Gen-X BS alarm is ringing that something's not fully formed with this method for defining knowledge. Can everything I know really be broken down into three meager components and simple evaluations of their qualities? Defining objects happens in a different order than querying, which makes it a little fuzzy to understand how the two relate. We define objects as collections of attributes, but we query against attributes to uncover the matching objects.

The program is well-suited to taxonomic identification. Objects like mushrooms and felines have well-defined, observable attributes that can be cleanly listed. A user of the system could later go through attribute lists to evaluate, "If a feline is over 100kg, has stripes, and climbs trees, which feline might it be?"

For a weather predictor, I find it difficult to determine what objects I should define. My initial thought was to model "a rainy day" but that isn't predictive. What I really want is to be able to identify characteristics which lead into rainy days. "Tomorrow's rain" is an attribute on today's weather, I have naively decided. This is getting at the heart of what XPER is all about; it is a vessel to hold data points. Choosing those data points is the real work, and XPER has nothing to offer the process. This is where the manual really lets us down. In the Superbase 64 article, I noted how the manual fails by not explaining how to transition from the "old way" to the "new way" of data cataloging.
For a program which suggests building toys from it, the XPER manual doesn't provide even a modicum of help in understanding how to translate my goals into XPER objects. The on-disk tutorial database of "felines" shows how neatly concepts like "cat identification" fit into XPER framing. Objects are specific felines like "jaguar," "panther," "mountain lion." Features suggest measurable qualities like "weight," "tree climbing," "fur appearance," "habitat" etc. Attributes get more specific, as "over 75kg," "yes," "striped," and "jungle."

For the weather predictor, the categories of data are similarly precise. "Cloud coverage," "temperature," "barometer reading," "precipitation," "time of year," "location," and so forth may serve our model. Notice that for felines we could only define rough ranges like "over 75kg" and not an exact value. We cannot set a specific weight and ask for "all cats over some value." XPER contains no tools for "fuzzy" evaluations and there is no way to input continuous data.

Let's look at the barometer reading, as an example. Barometer data is hourly, meaning 24 values per day. How do I convert that into a fixed value for XPER? To accurately enter 24 hours of data, I would need to set up hourly barometer features and assign 30? 50? possible attributes for the barometer reading. Should we do the same for temperature? Another 24 features, each with 30 or more attributes, one per degree change? Precipitation? Cloud coverage? Wind speed and direction? Adorableness of feline? Besides the fact that creating a list of every possible barometric reading would be a ridiculous waste of time, it's not even possible in the software; a project is limited to a fixed number of features and attributes.

We must think deeply about what data is important to our problem, and I'd say that not even the expert whose knowledge is being captured would know precisely how to structure XPER for maximum accuracy. The Fifth Generation warns us: "GiT GuD" as the kids say. (Do they still say that?!
) The graphic above, output by XPER's "Printer" module, reveals the underlying data structure of the program. Its model of the data is called a "frame," a flat 2-D graph where objects and attributes collide. That's it. Kind of anticlimactic, I suppose, but it imbues our data with tricks our friend Superbase can't perform.

First, this lets us query the data in human-relatable terms, as a kind of Q&A session with an expert. "Is it a mammal?" "Does it have striped fur?" "Does it go crazy when a laser pointer shines at a spot on the floor?" Through a session, the user is guided toward an object, by process of elimination, which matches all known criteria, if one exists.

Second, we can set up the database to exclude certain questions depending upon previous answers. "What kind of fur does it have?" is irrelevant if we told it the animal is a fish, and features can be set up to have such dependencies. This is called a father/son relationship in the program, and also a parent/child relationship in the manual. "Fur depends on being a mammal," put simply.

Third, we can do reverse queries to extract new understandings which aren't immediately evident. In the feline example it isn't self-evident, but can be extracted, that "all African felines which climb trees have retractile claws." For the weather predictor I hope to see if "days preceding a rainy day" share common attributes.

The biggest frustration with the system is how all knowledge is boxed into the frame, and the weather predictor feels it acutely. With zero relationship between data points, trends cannot be identified. Questions which examine change over time are not possible, just "Does an object have an attribute, yes or no?" To simulate continuous data, I need to pre-bake trends of interest into each object's attributes. For example, I know the average barometric pressure for a given day, but because XPER can't evaluate prior data, it can't evaluate if the pressure is rising or falling.
Since it can't determine this for itself, I must encode that as a feature like "Barometric Trend" with attributes "Rising," "No Change," and "Falling." The more I think about the coarseness with which I am forced to represent my data, the more clear it is to me how much is being lost with each decision. That 85/15 competency is looking more like 15/85 the other direction. Collecting data for the weather predictor isn't too difficult. I'm using https://open-meteo.com to pull a spreadsheet on one month of data. I'll coalesce hourly readings, like barometric pressure, into average daily values. Temperature will be a simple "min" and "max" for the day. Precipitation will be represented as the sum of all precipitation for the day. And so on. As a professional not-a-weather-forecaster, I'm pulling whatever data strikes me as "interesting." In the spirit of transparency, I mentally abandoned the "expert" part of "expert system" pretty early on. This guy *points at self with thumbs* ain't no expert. Having somewhat hippy-dippy parents, I've decided that Mother Earth holds secrets which elude casual human observation. To that end, I'm including "soil temperature (0 - 7cm)" as a data point, along with cloud coverage, and relative humidity to round out my data for both systematic and "I can't spend months of my life on this project" reasons. After collecting November data for checkpoint years 2020, 2022, and 2024, actually entering the data is easier than expected. XPER provides useful F-Key shortcuts which let me step through objects and features swiftly. What I thought would take days to input wound up only being a full afternoon. Deciding which data I want, collecting it, and preparing it for input was the actual work, which makes sense. Entering data is easy; becoming the expert is hard. Even as I enter the data, I catch fleeting glimpses of patterns emerging and they're not good. It's an interesting phenomenon, having utterly foreign data start to feel familiar. 
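The coalescing into daily values described above is simple to script. A minimal sketch of the reductions I describe (my own illustration, not the open-meteo API; the field names and sample readings are hypothetical):

```python
# Reduce a day's hourly readings to the single daily values XPER will
# hold: pressure becomes a daily average, temperature a min/max pair,
# and precipitation a daily sum.

def daily_summary(hourly):
    """hourly: list of dicts with 'pressure_hpa', 'temp_c', 'precip_mm'."""
    pressures = [h["pressure_hpa"] for h in hourly]
    temps = [h["temp_c"] for h in hourly]
    return {
        "pressure_avg": round(sum(pressures) / len(pressures), 1),
        "temp_min": min(temps),
        "temp_max": max(temps),
        "precip_total": sum(h["precip_mm"] for h in hourly),
    }

# Two fabricated readings stand in for a full 24-hour day.
sample = [
    {"pressure_hpa": 1012.0, "temp_c": 8.0, "precip_mm": 0.0},
    {"pressure_hpa": 1014.0, "temp_c": 12.0, "precip_mm": 1.5},
]
print(daily_summary(sample))
```

Each resulting daily value then gets binned into one of XPER's discrete attributes by hand.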
Occasionally I accidentally correctly predict whether the next day's weather has rain. Am I picking up on some subliminal pattern? If so, might XPER "see" what I'm seeing? I'm not getting my hopes up, but I wonder if early fascination with these forays into AI was driven by a similar feeling of possibility? We're putting information into a system and legitimately not knowing what will come out of the processing. There is a strong sense of anticipation; a powerful gravity to this work. It is easy to fool myself into believing I'm unlocking a cheat code to the universe. Compare this to modern day events if you feel so inclined.

At the same time, there's obviously not enough substance to this restricted data subset. As I enter that soil temperature data, 90% of the values keep falling into the same bin. My brainstorm for this was too clever by half, and wrong. As well, as I enter data I find sometimes that I'm entering exactly the same information twice in a row, but the weather results are different enough as to make me pause.

Expert systems have a concept of "discriminating" and "non-discriminating" features. If a given data point for every object in a group of non-eliminated objects is the same, that data point is said to be "non-discriminating." In other words, "it don't matter" and will be skipped by XPER during further queries. The question then is, whose fault is this? Did I define my data attributes incorrectly for this data point or is the data itself dumb? I can only shrug, "Hey, I just work here."

XPER has a bit of a split personality. Consider how a new dataset is created. From the main menu, enter the Editor. From there you have four options. First, I choose the seemingly redundantly named "Initializing Creating" option. Then I set up any features, attributes, and objects I know about, return to this screen, and save. Later I want to create new objects. I choose "Creating" and am asked, "Are you sure y/n" Am I sure? Am I sure about what?
I don't follow, but yes, I am sure I want to create some objects. I answer yes and I'm back at a blank screen, my data wiped. That word "initializing" is doing the heavy lifting on this menu. "Initialize" means "first time setup of a dataset," which also allows, almost as a side effect, the user an opportunity to input whatever data happens to be known at that moment. "Initial Creation" might be more accurate? Later, when you want to add more data, that means you now want to edit your data, called "revising" in XPER. "Initializing Creating" is only ever used the very first time you start a new data set; "Revising" is for every time you append/delete afterward.

The prompts and UI are unfortunately obtuse and unhelpful. "Are you sure y/n" is too vague to make an informed decision. The program would benefit greatly from a status bar displaying the name of the current in-memory dataset, whether it has been saved or not, and a hint on how close we are to the database limit. Prompts should be far more verbose, explaining intent and consequence. A status bar showing the current data set would be especially useful because of the other weird quirk of the program: how often it dumps data to load in a new part of the program. XPER is four independent programs bound together by a central menu. Entering a new area of the program means effectively loading a new program entirely, which requires its own separate data load. If you see the prompt "Are you sure y/n" what it really means is, "Are you sure you want me to forget your data, because the next screen you go to will not preserve it. y/n" That's wordy, but honest.

With that lesson learned, I'm adding three more data points to the weather predictor: temperature trend, barometric trend, and vapor pressure deficit (another "gut feeling" choice on my part). Trends should make up a little for the lack of continuous data.
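Deriving those trend features is mechanical once the daily averages exist. A sketch of how consecutive readings could be binned into XPER-style attributes (my own illustration; the 1.0 hPa dead band is an arbitrary assumption, not anything XPER or the manual prescribes):

```python
# Turn two consecutive daily barometric averages into the discrete
# "Barometric Trend" attribute XPER can actually store: "Rising",
# "No Change", or "Falling". Small wobbles inside the dead band
# collapse to "No Change".

def trend(yesterday_hpa, today_hpa, dead_band=1.0):
    delta = today_hpa - yesterday_hpa
    if delta > dead_band:
        return "Rising"
    if delta < -dead_band:
        return "Falling"
    return "No Change"

print(trend(1010.0, 1013.5))  # a sharp overnight climb: Rising
print(trend(1013.5, 1013.2))  # within the dead band: No Change
```

The same shape works for the temperature trend; only the dead band changes.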
This will give me a small thread of data which leads into a given day, the data for that day, and a little data leading out into the next day. That fuzzes up the boundaries. It feels right, at the very least. Appending the new information is easy and painless. Before, I used F3/F4 to step through all features of a given object. This time I'm using F5/F6 to step through a single feature across all objects. This only took about fifteen minutes.

I'm firmly in "manual memory management" territory with this generation of hardware. Let's see where we sit relative to the maximum potential. Features like this really make one appreciate the simple things in life like a mouse, GUI checklists, and simple grouping mechanisms.

XPER can compare objects or groups of objects against one another, identifying elements which are unique to one group or the other. You get two groups, full stop. Items in those groups, and only those groups, will be compared when using the compare command. We can put objects individually into one of those two groups, or we can create an object definition and request that "all objects matching this definition go into group 1 or 2". This is called a STAR object. I created two star objects: one with tomorrow's weather as rain, and one with tomorrow's weather as every type except rain. Groups were insta-built with a single command referencing my "rainy day" star object. I can ask for an AND or OR comparison between the two groups, and with any luck some attribute will be highlighted (inverted text) or marked as being unique or exclusive to one group or the other. If we find something, we've unlocked the secret to rain prediction! Take THAT, Cobra Commander! Contrary to decades of well-practiced Gen-X cynicism, I do feel a tiny flutter of "hope" in my stomach. Let's see what the XPER analysis reveals!

The only thing unique between rainy days and not is the fact that it rained.
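Under the hood, that group comparison is plain set arithmetic over the frame. A sketch of the idea in modern terms (my own toy data and naming, not XPER's file format): build two groups by predicate, the way a STAR object does, then ask which feature/attribute pairs appear in one group and never the other.

```python
# Each object is a row of the frame: a mapping of feature -> attribute.
days = {
    "Nov 01": {"cloud": "overcast", "trend": "Falling", "tomorrow": "rain"},
    "Nov 02": {"cloud": "clear",    "trend": "Rising",  "tomorrow": "dry"},
    "Nov 03": {"cloud": "overcast", "trend": "Falling", "tomorrow": "rain"},
    "Nov 04": {"cloud": "overcast", "trend": "Rising",  "tomorrow": "dry"},
}

def attrs(group):
    """All feature/attribute pairs seen anywhere in a group of objects."""
    return {(f, a) for obj in group for f, a in obj.items()}

# The predicate plays the role of the STAR object.
rainy = [d for d in days.values() if d["tomorrow"] == "rain"]
dry   = [d for d in days.values() if d["tomorrow"] != "rain"]

# Pairs exclusive to rainy days, ignoring the answer feature itself.
exclusive = {(f, a) for f, a in attrs(rainy) - attrs(dry) if f != "tomorrow"}
print(exclusive)
```

In this rigged toy the falling trend pops out as exclusive to rainy days; on my real dataset, as noted, nothing did but the rain itself.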
The Jaccard Distance, developed by Grove Karl Gilbert in 1884, is a measure of the similarity/diversity between two sets (as in "set theory" sets). The shorter the distance, the more "similar" the compared sets are. XPER can measure this distance between objects: a single command, given the object ID of interest, will run a distance check of that object against all other objects. On my weather set with about 90 objects, it took one minute to compare Nov. 1, 2020 with all other days at 100% C64 speed. Not bad!

What can we squeeze out of this thing? By switching into "Inquirer" mode, then loading up the data set of interest, a list of top-level object features is presented. Any features not masked by a parent feature are "in the running" as possible filters to narrow down our data. So, we start by entering what we know about our target subject. One by one, we fill in information by selecting a feature then selecting the attribute(s) of that feature, and the database updates its internal state, quietly eliminating objects which fall outside our inquiry.

Another command will look at the "remaining objects," meaning "objects which have not yet been eliminated by our inquiry so far." Running it against the "jaguar," we can ask XPER to tell us which features, in order, we should answer to narrow down to the jaguar as quickly as possible. It's kind of ranking the features in order of importance to that specific object. It sounds a bit like feature weighting, but it's definitely not. XPER isn't anywhere close to that level of sophistication. In this data set, if I answer "big" for "prey size" I immediately zero in on the jaguar, it being the currently most-discriminating feature for that feline.

You might be looking at this and wondering how, exactly, this could possibly predict the weather. You and me both, buddy. The promise of Fifth Gen systems and the reality are colliding pretty hard now.
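As an aside, the Jaccard arithmetic mentioned above is tiny. A sketch of the set formula applied to two days treated as sets of feature/attribute pairs (my own illustration and toy values, not XPER's implementation):

```python
# Jaccard distance between two objects, each treated as the set of
# feature/attribute pairs it holds: 1 - |A ∩ B| / |A ∪ B|.
# Identical objects score 0.0; fully disjoint objects score 1.0.

def jaccard_distance(a, b):
    a, b = set(a), set(b)
    return 1 - len(a & b) / len(a | b)

nov1 = {("cloud", "overcast"), ("trend", "Falling"), ("humidity", "high")}
nov2 = {("cloud", "overcast"), ("trend", "Rising"),  ("humidity", "high")}

print(jaccard_distance(nov1, nov2))  # 0.5: two shared pairs out of four total
```

Computing this for one object against ~90 others is trivial today; XPER's one minute on the C64 was honest work for the hardware.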
Feigenbaum and "The Fifth Generation" have been mentioned a few times so far, so I should explain that a little. Announced in 1981, started in 1982, and lasting a little more than a decade, "The Fifth Generation" of computing was Japan's moniker for an ambitious nationwide initiative. The report of Japan's announcement, Fifth Generation Computer Systems: Proceedings of the International Conference on Fifth Generation Computer Systems, Tokyo, Japan, October 19-22, 1981, laid out four goals for the effort.

In Fifth Generation Computers: Concepts, Implementation, and Uses (1986), Peter Bishop wrote, "The impact on those attending the conference was similar to that of the launch of the Soviet Sputnik in 1957." During a hearing before the Committee on Science and Technology in 1981, Representative Margaret Heckler said, "When the Soviets launched Sputnik I, a remarkable engineering accomplishment, the United States rose to the challenge with new dedication to science and technology. Today, our technology lead is again being challenged, not just by the Soviet Union, but by Japan, West Germany, and others."

Scott Armstrong, writing for The Christian Science Monitor in 1983 in an article titled "Fuchi - Japan's computer guru," said, "The debate now - one bound to intensify in the future - is whether the US needs a post-Sputnik-like effort to counter the Japanese challenge. Japan's motive (reflects) a sense of nationalism as much as any economic drive." Innovation Policies for the 21st Century: Report of a Symposium (2007) remarked of Japan's Fifth Generation inroads into supercomputers, "This occasioned some alarm in the United States, particularly within the military." It would be fair to say there was "Concern," with a capital C.
In 1989's The Fifth Generation: The Future of Computer Technology, Jeffrey Hsu and Joseph Kusnan (in a book separate from Feigenbaum's The Fifth Generation) noted that Japan actually had three research projects under way. The "Fifth Generation" proper was specifically the software side, which the conference claimed "will be knowledge information processing systems based on innovative theories and technologies that can offer the advanced functions expected to be required in the 1990's overcoming the technical limitations inherent in conventional computers." Expert systems played a huge role during the AI boom of the 80s, possibly by distancing themselves from "AI" as a concept, focusing instead on far more plausible goals. An expert system is adjacent to, but isn't really, "artificial intelligence." This Google N-Gram chart shows how "expert system" had more traction than the ill-defined "artificial intelligence." Though they do contain interesting heuristics, there is no "intelligence" in an expert system. Even the state of the art demonstrated on Computer Chronicles looked no more "intelligent" than a Twine game. That sounds non-threatening; I don't think anyone ever lost a job to a Choose Your Own Adventure book. In those days, even something that basic had cultural punch. Feigenbaum's The Fifth Generation foreshadowed today's AI climate, if perhaps a bit blithely. He wasn't alone. In 1985, Aldo Cimino, of Campbell's Soup Co., had his 43 years of experience trouble-shooting canned soup sterilizers dumped onto floppy by knowledge engineers before he retired. They called it "Aldo on a Disk" for a time. He didn't mind, and made no extra money off the brain dump, but said the computer "only knows 85% of what he does." Hey, that's the same percentage Hubert Dreyfus posited at the start of this article! That system was retired about 10 years later, suffering from the same thing that a lot of expert systems of the day did: brittleness.
From the paper, "Expert Systems and Knowledge-Based Engineering (1984-1991)" by Jo Ann Oravec, "Brittleness (inability of the system to adapt to changing conditions and input, thus producing nonsensical results) and “knowledge engineering bottlenecks” were two of the more popular explanations why early expert system strategies have failed in application." Basically, such systems were inflexible to changing inputs (that's life), and nobody wanted to spend the time or money to teach them the new rules. The Campbell's story was held up as an exemplar of the success possible with such systems, and even it couldn't keep its job. It was canned. (Folks, jokes like that are the Stone Tools Guarantee™ an AI didn't write this.) Funnily enough, the battles lost during the project may have actually won the war. There was a huge push toward parallelism in compute during this period. You might be familiar with a particularly gorgeous chunk of hardware called the Connection Machine. Japan's own highly parallel computers, the Parallel Inference Machines (PIM), running software built with their own bespoke programming language, KL1, seemed like the future. Until it didn't. PIM and Thinking Machines and others all fell to the same culprit. Any gains enjoyed by parallel systems were relatively slight and the software to take advantage of those parallel processors was difficult to write. In the end the rise of fast, cheap CPUs evaporated whatever advantages parallel systems promised. Today we've reversed course once more on our approach to scaling compute. As Wikipedia says, "the hardware limitations foreseen in the 1980s were finally reached in the 2000s" and parallelism became fashionable once more. Multi-core CPUs and GPUs with massive parallelism are now put to use in modern AI systems, bringing Fifth Generation dreams closer to reality 35 years after Japan gave up. In a "Webster's defines an expert system as..." sense, I suppose XPER meets a narrow definition. 
It can store symbolic knowledge in a structured format and allow non-experts to interrogate expert knowledge and discover patterns within. That's not bad for a Commodore 64! If we squint, it could be mistaken for a "real" expert system at a distance, but it's not a "focuser." It borrows the melody of expert systems, yet is nowhere near the orchestral maneuverings of its true "fifth generation" brothers and sisters. Because XPER lacks inference, the fuzzy result of inquiry relies on the human operator to make sense of it. Except for mushroom and feline taxonomy, you're unlikely to get a "definitive answer" to queries. Rather, the approach is to put in data and hope to narrow the possibility space down enough to have something approachable. Then, look through that subset and see if a tendency can be inferred. The expert was in our own hearts all along. Before I reveal the weather prediction results, we must heed an omen from page 10 of the manual. I'm man enough to admit my limits: I'm a dummy. When a dummy feeds information into XPER, the only possible result is that XPER itself also becomes a dummy. With that out of the way, here's a Commodore 64, using 1985's state-of-the-art AI expert system, predicting tomorrow's weather over two weeks. What did all that effort produce? Honestly, not a lot. Ultimately, this wound up being far more "toy" than "productivity," much to my disappointment. A lot of that can be placed on me, for not having an adequate sense of the program's limitations going in. Some of that's on XPER though, making promises it clearly can't keep. Perhaps that was pie-in-the-sky thinking, and a general "AI is going to change everything" attitude. Everyone was excited for the sea-change! It was used for real scientific data analysis by real scientists, so it would be very unfair of me to dismiss it entirely.
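The inquiry workflow itself — pick a feature, answer with an attribute, watch inconsistent objects quietly drop away — amounts to repeated set filtering. A minimal Python sketch, with feature names and objects invented to echo the feline example (this is my own illustration, not XPER's storage format):

```python
# Hypothetical objects: name -> {feature: attribute}.
cats = {
    "jaguar": {"prey size": "big",    "coat": "spotted"},
    "ocelot": {"prey size": "small",  "coat": "spotted"},
    "puma":   {"prey size": "medium", "coat": "plain"},
}

def inquire(objects, answers):
    """Eliminate every object inconsistent with the answers given so far."""
    remaining = dict(objects)
    for feature, attribute in answers.items():
        remaining = {name: feats for name, feats in remaining.items()
                     if feats.get(feature) == attribute}
    return remaining

# Answering "big" for "prey size" immediately zeroes in on the jaguar,
# it being the most-discriminating feature for that feline in this set.
print(list(inquire(cats, {"prey size": "big"})))  # ['jaguar']
```

Note there is no reasoning here, only elimination — which is exactly why the human operator is left to supply the "expert" part.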
On the other hand, there were contemporary expert systems on desktop microcomputers which provided far more robust implementations and the advantages of real heuristic evaluation. In that light, XPER can't keep up, though it is a noble effort. Overall, I had fun working with it. I honestly enjoyed finding and studying the data, and imagining what could be accomplished by inspecting it just right. Notice that XPER was conspicuously absent during that part of the process, though. Perhaps the biggest takeaway is "learning is fun," but I didn't need XPER to teach me that. Ways to improve the experience, notable deficiencies, workarounds, and notes about incorporating the software into modern workflows (if possible):

Setup:
- VICE, in C64 mode
- Speed: ~200% (quirk noted later)
- Snapshots are in use; very handy for database work
- Drives: XPER seems to only support a single drive
- XPER v1.0.3 (claims C128, but that seems to be only in "C64 Mode")

XPER's limits:
- 250 objects
- 50 features
- 300 attributes (but no more than 14 in any given feature)

Japan's four Fifth Generation goals:
- To increase productivity in low-productivity areas.
- To meet international competition and contribute toward international cooperation.
- To assist in saving energy and resources.
- To cope with an aged society.

Japan's three research projects:
- Superspeed Computer Project (the name says it all)
- The Next-Generation Industries Project (developing the industrial infrastructure to produce components for a superspeed computer)
- Fifth-Generation Computer Project

Notes:
- Any time we're in C64 land, disk loading needs Warp mode turned on.
- Actual processing of data is fairly snappy, even at normal 100% CPU speed; certainly much faster than Superbase. I suspect XPER is mostly doing bitwise manipulations, nothing processing intensive.
- XPER did crash once while doing inquiry on the sample feline database.
- Warp mode sometimes presents repeating input, sometimes not. I'm not entirely certain why it was inconsistent in that way.
- XPER seems to only recognize a single disk drive.
Want to move your data into another program? Don't even think about it; you're firmly in XPER Land. XPER 2 might be able to import your data, though. You'll still be in XPER Land, but you won't be constrained by the C64 any longer. As a toy, it's fine. For anything serious, it can't keep up even with its contemporaries:
- No fuzzy values
- No weighted probabilities/certainties
- No forward/backward chaining
- Limited "explanation" system, as in "Why did you choose that, XPER?" (demonstrated by another product in Computer Chronicles 1985 "Artificial Intelligence" episode)
- No temporal sequences (i.e. data changes over time)
- No ability to "learn" or self-adapt over time
- No inference
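For contrast, here is what the missing forward chaining looks like: an inference loop that fires any rule whose premises are all known facts, adding each conclusion as a new fact until nothing changes. This is a minimal Python sketch with invented weather rules, exactly the kind of step XPER never takes on its own:

```python
# Invented rules: (set of premises, conclusion).
rules = [
    ({"overcast", "falling pressure"}, "rain likely"),
    ({"rain likely", "near freezing"}, "snow possible"),
]

def forward_chain(facts, rules):
    """Fire every satisfied rule, repeating until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"overcast", "falling pressure", "near freezing"}, rules)
print(sorted(derived))
```

Note the second rule only fires because the first one produced "rain likely" — chained conclusions are the whole point, and the part XPER leaves to the human.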

Stone Tools 2 months ago

HyperCard on the Macintosh

Throughout Computer Chronicles ' 19 years on-air, various operating systems had full episodes devoted to them, like Macintosh System 7, UNIX, and Windows 95. Only one piece of consumer software had an entire episode devoted to it. You can see and hear Stewart Cheifet's genuine excitement watching Bill Atkinson show it off. Later, Cheifet did a second full episode on it. HyperCard was a "big deal." Big, new things are scary. In a scathing, paranoid, accidentally prescient article for Compute Magazine 's April 1988 issue, author Sheldon Leemon wrote of HyperCard , "But if this (hypertext) trend continues, we may soon see things like interactive household appliances. Imagine a toaster that selects bread darkness based on your mood or how well you slept the night before. We should all remember that HyperCard and hypertext both start with the word hype. And when it comes to hype, my advice is 'just say no.'" Well, you can't make Leemonade without squeezing a few Leemons, and this Leemon was duly squeezed. "Do we really want to give hypertext to young school children, who already have plenty of distractions? We really don't want him to click on the section where the Chinese invent gunpowder and end up in a chemistry lesson on how to create fireworks in the basement." ( obligatory ironic link ) Leemon-heads were in the minority, obviously. Steve Wozniak called Atkinson's brainchild "the best program ever written." So did David Dunham . There was a whole magazine devoted to it . Douglas Adams said , "( HyperCard ) has completely transformed my working life." The impact it made on the world is felt even today. Cyan started life as a HyperCard stack developer and continues to make games. Wikipedia was born from early experiments in HyperCard . The early web was strongly influenced by HyperCard 's (HyperTalk) vision. Cory Doctorow's first programming job was in HyperCard . Bret Victor built " HyperCard in the World ," which evolved into Dynamicland.
With a pedigree like the above, it is no spoiler to say that HyperCard is good. But I must remove the spectacles of nostalgia and evaluate it fairly. A lot has happened since its rise and fall, both technologically and culturally. Can HyperCard still deliver in a vibe-coded TypeScript world? Version 2.2 of HyperCard is significant for a few notable reasons: it adds AppleScript support, it adds a script debugger, and it marks the return of HyperCard from Claris back into Apple's fold. Reviewing and evaluating HyperCard is a bit like trying to review and evaluate "The Internet." And MacPaint . And a full application development suite. A sane man would skedaddle upon seeing the 1,000 (!) pages of The Complete HyperCard 2.0 Handbook, 3rd Edition by Danny Goodman, but not this man. Make of that what you will. It's difficult to choose a specific task for this post. I'll build the sample project from the Handbook , but let's note what the book says about its own project, "In a sense, the exercise we’ll be going through in this chapter is artificial, because it implies not only that we had a very clear vision of what the final stack would look like, but that we pursued that vision unswervingly. In reality, nothing could be further from the truth." I have no such clear vision. It's kind of like staring at a blank sheet of paper and asking myself, "What should I make?" I could fold it into origami, use it as the canvas for a watercolor painting, or stick it into a Coleco ADAM SmartWriter and type a poem onto it. Art needs boundaries, and I don't yet know HyperCard 's. So, I'll just start at the beginning, launch it, and see where it takes me. Launching HyperCard takes me to the Home "stack," where a "stack" is a group of related "cards" and a card is data supercharged with interaction. In beginner's terms, it's fair to think of a stack as an application, though it requires HyperCard to run.
( HyperCard can build stand-alone apps, but that's not a first-time user's experience). Atkinson does mean to evoke the literal image of a stack of 3x5 index cards, each holding information and linked by relationships you define. Buttons provide the means to act on a card, stepping through them in order, finding related cards by keyword, searching card data, or triggering animations. All of this is possible, trivially so. At first blush that doesn't sound particularly interesting, but MYST was built in it , should you have any doubt it punches above its weight class. Today, I can describe a stack as being "like a web site" and each card as being "like a page of that site," an intellectual shorthand which didn't exist during HyperCard's heyday. To use another modern shorthand, "Home" is analogous to a smartphone's Home screen, almost suspiciously so. You can even customize it by adding or deleting stacks of personal interest to make it your own. Beyond Cyberpunk (video contains intense flashing strobe effects) pushed HyperCard boundaries in its own way. A web version is available , minus most of the original charm. Walking through the Home card, the included stacks provide concrete examples illustrating the power of HyperCard's development tools. Two notable features are present, though they are introduced so subtly it would be easy to overlook them. The first is the database functionality the program gives you for free. Open the Appointments or Addresses stacks, enter some information, and it will be available on next launch as searchable data. It's stored as a flat-file, nothing fancy, and it's easier than Superbase , which was already pretty easy. The second is that after entering new data into a stack, you don't have to save; HyperCard saves automatically. It happens so transparently it almost tricks you into thinking all apps behave this way, but no, Atkinson specifically hated the concept of saving.
He thought that if you type data into your computer and yank the power plug, your data should be as close to perfect as possible. This "your data is safe" behavior is inherent to every stack you use or build. You don't have to opt-in. You don't have to set a flag. You don't have to initialize a container. You don't need to spin up a database server. You don't even have to worry about how to transfer the data to another system; the data is all stored within the data resource of the stack itself. Just copy the stack to another computer and be assured your data comes with you. There is one downside to this behavior for the typical Macintosh end-user. If you want to tinker around with a stack, take it apart, and see how it's built, you must make sure you are working with a copy of that stack! As saving happens automatically, it can be easy to forget that your changes are permanent, "I didn't hit save! What happened to my stack?" Thus, an original stack risks getting junked up or even irreparably broken due to your experiments. "Save your changes" behavior is taught to us by every other Macintosh program, but HyperCard bucked the careful conditioning Mac users had learned over the years. At its most basic level, without even wanting to make one's own stacks, HyperCard offers quite a lot. Built-in stacks give the user an address book, a phone directory, an appointment calendar, a simple graph maker, and the ability to run (and inspect!) the thousands of stacks created by others. The bundled stacks are easy to use, but far from being "robust" utilities. That said, they're prettier and easier to use than a lot of the type-in programs from the previous 8-bit era and you're free to modify them to suit your needs, even just aesthetically. Free stacks were available on BBS systems, bundled with books, or on cover disks for magazines. HyperCard offered a first glimpse at something slantingly adjacent to the early world wide web.
Archive.org has thousands of stacks you can look through to get a sense of the breadth of the community. Learn about naturalism, read Hitchhiker's Guide to the Galaxy (official release!), or practice your German. There are TWO different stacks devoted to killing the purple children's dinosaur, Barney. Zines, expanded versions of the bundled stacks, games, and other esoterica were available to anyone interested in learning more about clams and clam shell art. I am being quite sincere when I say, "What's not to love?" Content consumption is fine and dandy, but it is on content creation that HyperCard focuses the bulk of its energies. With so many stacks expressing so many ideas, and reading how many of those were made by average people with no programming experience, the urge to join that community is overwhelming. Cards are split into two conceptual domains: the background and the foreground. In modern presentation software like PowerPoint or Google Slides , these are equivalent to the template theme (the stuff that tends to remain static) and the slide proper (the per-slide dynamic attributes). The layers of each domain start with a graphic layer fixed to the "back." Every object added to the domain, like a button, is placed on its own numbered layer above that graphic layer, and those can be reordered. It's simple enough to get one's mind around, but the tools don't do a particularly good job of helping the user visualize the current order of a card's elements. Each element must be individually inspected to learn where it lives relative to other layers (objects). An "Inspector" panel would be lovely. HyperCard has basic and advanced tools for creating the three primary elements which compose a card: text, graphics, and buttons. These elements can exist on background and/or foreground layers as you wish, keeping in mind that foreground elements get first dibs on reacting to user actions.
Text is put down as a "field" which can be formatted, typed into, edited, copy/pasted to & from, and made available to be searched. That grants instant database superpowers to the stack. Usually a field holds an amount of text which fits visually on the card, but it can also be presented as a scrollable sub-window to hold a much larger block of text for when your ideas are just too dang big for the fixed card size. Control over text formatting is more robust than expected. Kerning is non-existent, but font, size, character styles, alignment, and line spacing are available. Macintosh bitmap fonts shipped in pre-built sizes, meaning they were hand-drawn expressly to look their best at those sizes. Scaling text is allowed, but you may need to swallow your aesthetic pride. Or draw the text yourself? "Draw the text yourself" is a real option, thanks to the inclusion of what seems to be a complete implementation of MacPaint 1.x . The tools you know and love are all here, with selectable brush width, paint/fill patterns, lasso tool, shapes both filled and open, spray can, and bitmap fonts (if you don't need that text to be searchable). Yes, even the fabled eraser which drew such admiration during Atkinson's first MacPaint public demo is yours. Yesterday's big deal is HyperCard 's "no big deal." These tools are much more fleshed out than they first appear, as modifier keys unlock all kinds of helpful variants. Hold down the right modifier key while drawing with the pencil tool to constrain it horizontally or vertically. Hold down another while using the brush tool to invert its usage into erasure. And so on. The tool palette itself "tears off" from the menu and is far more useful in that state. Double-clicking palette icons reveals yet further tricks: the pencil tool opens "fat bits" mode, the eraser clears the screen. The Handbook devotes over 80 pages to the drawing functions. I'll just say that if you can think it, you can draw it.
Remember two gotchas: there's only one level of undo, and all freehand drawing happens in a single layer . The pixels you put down overwrite the pixels that are there, period. The inclusion of a full paint program makes it really fun to have an idea, sketch it out, see how it looks, try it, and seamlessly move back and forth between art and design tools (and coding tools). The ease of switching contexts feels natural and literal sketches instantly become interactive prototypes. Or final art, if you like! Who am I to judge? It's kind of startling to be given so much freedom in the tools. As an aside, I took a quick peek at modern no-code editor AirTable and tried to build a simple address book. Beyond the mandatory signup and frustration I felt poking around the tools, I wasn't allowed to place a header graphic without paying a subscription fee. Progress! What is hypermedia without hyperlinks? In HyperCard these are implemented as buttons, and if you've ever poked around in JavaScript on the web, you already have a good "handle" (wink wink) on how they work. Like text fields, they have a unique ID, a visual style, and can trigger scripts. Remember, HyperCard debuted with scripting in 1987 and similar client-side scripting didn't appear in web browsers until Netscape Navigator 2.0 circa 1995. This was bleeding edge stuff. Adding an icon to a button is a little weird, thanks to classic Macintosh "resource forks." All images are stored in this special container, located within the stack file itself. You can't just throw a bunch of images into a folder with a stack and access them freely. Like the lack of multiple undo, this requires a bit of "forget what you know, visitor from the future." Knowing icon modification is a pain in the butt, Atkinson helpfully added an entire icon editor mini-program to HyperCard . Typically you would have to modify these using ResEdit, a popular, free tool from Apple which allowed users to visually inspect an application's resource fork.
Here's a 543-page manual all about it. (Were authors paid by the cubic centimeter back then?) With ResEdit , all sorts of fun things could be tweaked in applications and even the Finder. You could redraw icons shown during system level alerts and bomb events, or the fill patterns used to draw progress bars. You could hide or show menus in an application, change sound effects, and more. It's dangerous territory, screwing around with system resources, but it's kind of fun because it's dangerous. Hack the system! Buttons can be styled in any number of normal, typical, Macintosh-y ways, but can also be transparent. A transparent button is just a rectangle defining a "hotspot" on the screen, especially useful on top of an image which already visually presents itself as "clickable." To add a hyperlink to text, draw a transparent button on top of that text, wire it up, and you're done. I imagine you can already see the problem. Rewrite the text. Now you have to manually reposition your button to overlay the new position of the rewritten text, a fix which lasts exactly as long as the text never gets moved or edited again. Sure hope the text didn't split onto two lines during the move. HyperCard does have a sneaky way to fake this up programmatically, but HTML hyperlinks in text would prove to be an unquestionable improvement. Yet, HyperCard speeds ahead of yet-to-arrive HTML once more with image hyperlinks. Draw or paste in a picture, say a map of Europe, then draw transparent buttons directly on top of each country. When you're done, it looks like a normal map, but now has clickable countries, which could be directed to transition with a wipe to an information page about the clicked country without ever touching a script. HyperCard 's links are a little brain-dead to be sure, but they are also conceptually very, very easy to grasp.
What I really enjoy about the HyperCard approach is how it leverages existing knowledge of GUIs and extends that a little into something familiar yet significantly more powerful. Sound may be the biggest gap in HyperCard 's tool-set. This is not to say that sound effects are completely missing, but they are not given nearly the same thoughtful attention as graphics and scripting are. For reference, where graphics get 80+ pages in the manual, sound gets less than 10. You can only attach sound effects to your cards through scripting. That can be simple beeps, a sequence of notes in an instrument voice, or a prerecorded sound file. Given the lavish set of tools for drawing, I did honestly expect to have at minimum a piano keyboard for inputting simple compositions. There were music-making stacks to assist with simple compositions, shunting off responsibility to third party software. That's not a crime, per se, but does feel like a noteworthy gap in an otherwise robust tool-set. In the manual, a no-code tutorial builds a custom Daily To-Do stack, with hyperlink buttons, searchable text, and custom art in just 30 pages. By the end of the tutorial the user has hand-crafted a useful application to personal specifications, which can even be shared with the world, if desired. Not a bad day's work, and I'd be hard-pressed to duplicate that feat today, to be perfectly honest. This is a deeply empowering program. Even with just 30 minutes of work the user has the beginnings of something interesting . The gap between the user's daily apps and what she's able to build in a weekend feels smaller than it ever has before. Success looks achievable. To-do lists, address books, recipe cards, and the like are all well and good, but every artist eventually feels that urge to push forward and move beyond . At this point I'm only 1/3 through the book, so what could the other 600 pages possibly have to talk about?
The same thing the rest of this post will: programming. I know, I know, for a lot of people this is a boring, obtuse topic. Believe me, I understand. A lot of people were put off by programming until HyperCard made it accessible. In the January 1988 Compute Magazine (Leemon's rant shows up a few months later), David D. Thornburg noted, "it proves that the proper design of a language can open up programming to people who would never think of themselves as programmers." This is backed up by firsthand quotes during HyperCard 's 25th anniversary. The fact is, if you've poked around in HyperCard at all you've already been programming , you just didn't know it. We call it "no-code" now, though I'd argue HyperCard is more like a coding butler. There is code, you just aren't required to write it for many common tasks. "No code" only applies to you , the end-user. HyperCard is programming on your behalf. HyperTalk, Dan Winkler's contribution to the project, is HyperCard 's bespoke scripting language. Patterned after Pascal, a popular development language for the Macintosh at the time ( Photoshop was originally developed in Pascal ), HyperTalk was designed to be as easy to read and write as possible. In so doing, it attempts to tear down the gates of programming and offer equal access to its community. At its core, HyperCard is a collection of objects which send and receive messages. HyperTalk takes an object-oriented approach to stack development. There are four types of HyperCard objects: stack, card, button, and text field. Let's consider the humble button. A button can receive a message, like "the user clicked the mouse on me." When that occurs, the button has an opportunity to respond to that message in some fashion. Maybe it displays a graphic image. Maybe it plays a sound. Maybe it sends its own message to a different object to do something else. Scripts define when and how objects process messages.
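To make that concrete, here are four successive revisions of one button's message handler, sketched in real HyperTalk syntax (the stack name "Addresses" is my own invention for illustration):

```hypertalk
on mouseUp
  beep                 -- when the mouse button is released, beep
end mouseUp

on mouseUp
  beep 3               -- or beep three times
end mouseUp

on mouseUp
  go to next card      -- or move to the next card of this stack
end mouseUp

on mouseUp
  visual effect dissolve
  go to last card of stack "Addresses"  -- a different stack, with a transition
end mouseUp
```

Each version replaces the last; a button carries one mouseUp handler at a time, and you grow it a line at a time as your ambitions grow.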
In HyperCard 's visual editor, scripts are kind of "inside" objects. Double-click a button to poke at its internal anatomy, with the script being its "brain." Even if you don't know how to write a HyperTalk script, you can probably read it without much difficulty. Baby steps. Pressing a button on your mouse moves that button physically "down," and lifting your finger allows it to move back "up." So this script says "when the mouse button is released, beep." Want three beeps? Forget the beeps, let's go to the next card of this stack. No, not this stack, go to the last card of a different stack. Want to add a visual transition using a special effect and also do the other stuff? This compositional approach to development helps build skills at a natural pace. Add new behaviors as you learn and see the result immediately. Try new stuff. Tinker. Guess at how to do something. Saw something neat in another stack? Open it and copy out what you like. Experiment. Share. Play. HyperCard wants us to have fun. I am having fun. HyperTalk provides fast iteration on ideas and allows us to describe our intent in terminology similar to what we have learned over time as end-users. The perceived distance between "desire" and "action" is shortened considerably, even if this comes with unexpected gotchas. The big gotcha can be a bit of a rude awakening, as English-ish languages tend to be. At first, they seem so simple, but in truth they are not as flexible as true natural language. Going back to the earlier examples, can you intuit which of the following will work and which will error? They all seem reasonable, but only the third one works. This is where the mimicry of English fails the language, because English-ish suggests to a newcomer a free-form expression of thought which HyperTalk cannot hope to understand. Programmers understand that function names and the parameters which drive them are necessarily rigid and ordered.
A more programmer-y definition of the command exposes its hidden structure: those familiar with typical coding conventions will immediately understand that HyperTalk (often) requires a similarly specific order of parameters to a command. We can't free-associate. We must rather adhere to the language's hidden order. HyperCard comes with help documentation in the form of searchable stacks, complete with sample scripts to test and borrow from as one grows accustomed to its almost-English sensibilities. Still, it can absolutely be frustrating when something that appears perfectly valid fails. Another knock against HyperTalk's implementation in HyperCard is the code editor itself. It is so bare-bones I thought I had missed something when installing the program. It will format your code indentation, and that's it. At no point will you receive any warning of any kind of mistake. It happily accepts whatever you write. Only upon trying to run a script will errors surface. On the one hand it is fast and easy to write a script and test it. But it still requires extra steps which could have been avoided had the editor behaved a bit more like Script Editor , the AppleScript development tool bundled with Mac OS. Script Editor watches your back a little more diligently. Despite the not-quite-English frustrations, it is still comfortably ahead of any other option of the day. The "Hello World" of HyperCard is a fully functional to-do management application. What a good feeling that engenders in a Mac user dipping a cautious toe into development waters. That feeling builds trust in the system and oneself, and maybe, just maybe, grows a desire to keep learning. The full list of things you can do with HyperTalk is too vast to cover.
Here's a teensy weensy super tiny overview of a much longer list, just to whet your appetite: example properties you can set on objects, a sampling of built-in functions, and plenty of boolean, arithmetic, logical, string, and type operators to solve a wide range of common problems. If you're missing a function, write a new function and use it in your scripts the same as any native command. If some core functionality is missing, HyperCard can be extended via XCMDs and XFCNs, which are new commands and functions built in native code. These can do things like add color support, access SQL databases, digitize sounds, and even let you use HyperCard to compile XCMDs inside of HyperCard itself. Real ouroboros stuff, that. With HyperCard 2.2, AppleScript was added as a peer development language. At one point, HyperTalk was considered to become not just HyperCard 's scripting language, but the system-wide scripting language for the entire Macintosh ecosystem. AppleScript was developed instead, taking obvious cues from HyperTalk, while throwing all kinds of shade on Dave Winer's ( that guy again !) Frontier . Ultimately, Frontier got Sherlocked . AppleScript allows for scripting control over the system and its applications. Here's a sample (circa System 7.5.5). Like HyperTalk, you can probably understand that even if you can't write it off the top of your head. Through this synergy, external applications can be controlled directly by HyperCard . PageMaker 5.0 even shipped with a HyperCard stack. In the video below, I'm clicking a HyperCard button, prompting for text, then shoving that text into an external text editor and applying styling. The elephant in the room: all of this talk about using plain English to program has many of you shouting at the screen. Don't worry, I hear you loud and clear. I agree, we should talk about a modern natural language programming environment. Let's talk about Inform 7.
For those who don't know, Inform has been a stalwart programming language for the development of interactive fiction (think Zork and other text adventures) for decades. For the longest time, Inform 6 was the standard-bearer, and it looked like "real programming code," complete with semicolons, so you know it was "serious." The Inform 6 version describes the room to the player, defines where the exits lead, and specifies that the room has light (from the "Cloak of Darkness" sample project).

In the early 2000s, Inform creator Graham Nelson had an epiphany. Text adventure engines have an in-game parser which accepts English-language player input and maps it to code. Could a similar parser accept English-language code and map it to code? Inform 7 is the result of that research, and it attempts something significantly more dramatic than HyperTalk. Let's see how to describe that same room from earlier. I know you might be incredulous, but this is legit Inform 7 code which compiles and plays. Inform 7 certainly looks far more natural than the mimicry of HyperTalk. This looks like I can have a conversation with the system and it will do what I want.

Attempt 1: This does not work. How embarrassing. Let me try that again. Attempt 2: I have to say "holds," not "has," to give the stone to the player. The "if" section continues to fail. Success! Though we had to take a suspiciously programmery approach to get it to work. Also, we use "holds" to give the object to the player, but use "carries" to check inventory.

Obviously, the emphasis here is that it "looks like" I can write whatever I want, but look too hard and the trick of all such systems is revealed: to program requires programming, and programming requires structure. Learning how to structure one's thoughts to build a program which works as expected is the real lesson to learn. That lesson can sometimes be obfuscated by the friendly languages.

Alright, alright, I'll talk about vibe coding, but what, realistically, is there to say?
It's a new phenomenon with very little empirical evidence to support or refute the claims of its efficacy. One barrier it may remove is reliance on English as the base language for development tasks. A multi-lingual HyperCard-alike could be something special indeed.

I asked ChatGPT to recreate a HyperCard stack for me in HTML with this prompt. Here's the result. Here's the code. It seems to work, but I don't know web development well enough to verify it. Nor is there impetus for me to learn to understand it. I could just as easily have asked for this in "the best language for this purpose." Unlike HyperTalk, this approach doesn't ask me to participate in the process which achieved the result; the result itself is all I'm asked to evaluate. When I asked for a change, I received an entirely different design and layout, but it did contain the functional change. Was that battle won or lost? I also have no idea how to test this, because my spec was also vibes. I could write a complete spec and ask the LLM to build to that, I suppose. There are people in software development who do exactly that, and they are not called "coders." This is "vibe product management." I'm unqualified to determine if that's good enough, but I can say that there is at least one person who seems quite happy with her vibe coding results. While I'm pretty sure her project could be built in HyperCard in an hour, HyperCard doesn't exist. Of course novices like her will turn to LLMs. What other option is there?

I would like to point out, however, that with "vibe coding" we aren't seeing the same Cambrian explosion of new life as we did after HyperCard debuted.

The struggle is real

So I sure have spent a good amount of time talking about the pitfalls and quicksand of using natural language as a programming language. Once we've built simple tools with ease, we quickly learn how much we don't know about programming when we try more advanced techniques.
There appears to be a barrier beyond which it makes development harder. This has been covered by many people over the years. Dave Winer, of ThinkTank fame, had thoughts on this: "Hypercard was sold this way, as was COBOL. You'd tell the system, in English, what you wanted and it would make it for you. Only trouble is the programming languages only looked superficially like English, to someone who didn't know anything about programming. Once you got into it, it was programming." Yep.

In the February 1988 Compute! Magazine, Thornburg concluded his review of HyperCard, "My feeling at this time is that HyperCard lowers the barrier to creating applications for the Mac by quite a bit, but it still requires the discipline and planning required for any programming task." Fair enough.

Edsger W. Dijkstra had thoughts on the matter of natural language as a programming language. He said of this pursuit, "When all is said and told, the 'naturalness' with which we use our native tongues boils down to the ease with which we can use them for making statements the nonsense of which is not obvious." It's true: it can be hard to describe to another human what we want, let alone a computer. We humans are nothing if not walking contradictions, in word and action. If you'll indulge me, I'd like to issue my bold rebuttal to all of this.

So what?

So what if these languages aren't mathematically rigorous enough for "serious" programming? So what if they're hard to scale? So what if we sometimes get caught up in English-ish traps? So what if using these tools creates "bad" (Dijkstra's word) habits which prove hard to overcome? I'm not naive; I wouldn't run a nuclear reactor on HyperTalk. However, I'm concerned that movements in programming "purity" have also gatekept the hobbyist population. Thousands of people built thousands of stacks in HyperCard. Businesses were born from it. Work got done. Non-programmers built software that helped themselves or helped other people.
Isn't that the whole point of programming, to help people solve problems? HyperCard and HyperTalk should have set a new baseline for what our computers do right out of the box. If this is your idea of no-code Nirvana, I'm not going to stand in your way. I'm not going to join you, but I'm not going to stop you.

Plot twist

There is a case study of a Photoshop artist working for a major retail advertising department who didn't know a thing about programming, despite many attempts. It was precisely the English-ish language of AppleScript which finally allowed the principles and structure of programming to "click." He has worked as a professional iOS developer for almost 20 years now. I doubt you're biting your nails in suspense, "Who could this mystery person possibly be?!" It was me. There is a direct line, a single red thread on the conspiracy cork-board, between my exposure to AppleScript and my current job as an iOS engineer. Seeing people's work become just a little bit easier with the AppleScript tools I built was incredibly gratifying. Those benefits were tangible, measurable. What I built was as real as any "proper" application. When I outgrew AppleScript, I moved on. Whatever bad habits I had learned, I unlearned. Was this a difficult path toward software engineering enlightenment? Perhaps, but it was my path, and it was thanks to tools which were willing to meet me halfway.

Get hyped

I think everyone should absolutely use HyperCard, but probably not for the reason you think. I do not kid myself into believing HyperCard can be a useful daily driver, except for the most tenacious of retro-enthusiasts. Like other retro software, if you want to build something for yourself and it's useful, then that's great! But the browser won the hypertext war, period. I can quote every positive review. I can enumerate every feature. I can show you stacks in motion, and you wouldn't be wrong to shrug in response. I get it. You're a worldly individual; you've seen it all before. I don't think you've felt it before, though. HyperCard must be touched to be understood. So do it.
Build a few cards, a small stack even, and appreciate how HyperCard's fluidity matches that of human expression. Feel the ease with which your ideas become reality. Build something beautiful. Now throw it all away. Then you'll understand that the only way to appreciate its brilliance is to have it taken away. When you're back in the present day, wondering why a 20GB application can't afford the same flexibility as this 1MB dynamo, then you'll understand. "Why can't I change this? Why can't I make it mine?" Such questions will cut you in a dozen small ways every day, until you're nothing but scar tissue. Then you'll understand.

I don't think you'd be reading this blog if you didn't believe, deep down in your core, "things could be better." HyperCard is concrete evidence which supports that belief. And it was created and killed by the same company which voiced precisely the same "things could be better" conviction in their "1984" commercial. Apple called for a revolution. I'm calling for the next one.

My final "To Do" stack. I went beyond the book and made large, animated daily tabs (a tricky exploration of HyperCard's boundaries, as they exceed the 32x32px max icon size) and a search field.

Sharpening the Stone

I had a lot of trouble setting up my work environment this time around. The primary stumbling block is that Basilisk II, which initially made the whole Mac environment setup easy, has a Windows timing bug which renders HyperCard animations unusably slow. Mini vMac works great with HyperCard, and feels very snappy, but it couldn't handle a disk over 2GB (I had built a 20GB hard drive for Basilisk II). So I tried to build a new disk image in Mini vMac, but it has a weird issue where multi-disk installations "work" except the system ejects the first disk it sees to "accept" the next disk in the install process. That disk was the disk I was trying to install the operating system onto, so the whole endeavor became a comedy of errors.
I had to go back into Basilisk II to build a 2GB disk for use in Mini vMac.

There's not really a foolproof way to do this. HyperCard's stack format has been reverse-engineered and made compatible after-the-fact by a few modern attempts to revive the platform. Importing a stack into a modern platform is possible, as is conversion to HTML. There's going to be a lot of edge cases and broken things during this process, but it's worth it if the stack is awesome. You did build an awesome stack, didn't you?! (I don't think any of these ford the AppleScript river, though.)

My setup:
Mini vMac v36.04 for x64 on Windows 11
Running at 4x speed
Magnification at 2x
Macintosh System 7.5.5 (last version Mini vMac supports)
Adobe Type Manager v3.8.1
StickyClick v1.2
AppleScript additions
8MB virtual RAM (Mini vMac default)
HyperCard 2.2 w/scripting additions
Emulator Improvements

I found 4x speed in the emulator felt very nice; max speed for installations. Install the ImportFl and ExportFl utilities for Mini vMac to get data into and out of the virtual Macintosh from your host operating system easily. The emulator never crashed, though I definitely had troubles with Macintosh apps crashing or running out of memory. While I could get the emulator to boot with my virtual hard drive, I couldn't get that to persist during a "system reboot." Maybe there's a setting I overlooked. Don't forget to "Get Info" on your apps and manually set their memory usage. Extensions enable classic Mac conveniences. I recommend StickyClick, otherwise you have to continuously hold the mouse button down when using menus.

Decker looks very much like HyperCard, but looks only. No importing of original stacks, and the scripting language is completely different, but that adorable drawing app built in it looks great! Source code for Decker here! LiveCode is trying to keep the dream alive in a modern context and can apparently import HyperCard stacks (with caveats). (Note: a completely new version, with obligatory AI, went live just a few days before this post!) HyperNext Studio might be interesting to some.
Stacksmith seems to have stalled out in development, but could be fun to play around with. HyperCard Simulator looks quite interesting and has a stack importer. You can use it in Japanese, too! I imported Apple's "HyperTalk Reference" stack without issue, though its hyperlinks don't work; I can step through cards. I also see misaligned images and buttons at times, but otherwise it's a great presentation. It can also export a stack to HTML. WyldCard, a Java implementation, was updated in 2024. It needs a little know-how to get your Java build environment set up, and I'm unclear if it can import original stacks. hypercard-stack-importer says it will convert a stack to HTML.

A few lingering downsides of HyperCard itself:
Built-in sound support is woeful.
What you build is essentially constrained to the classic Macintosh ecosystem (though you may find a way to convert the stack; see above).
The script editor is too bare-bones for those who aspire to Myst-like greatness.
Color support is a 3rd-party afterthought.
Textual hyperlinks can really only be faked through button overlays. It is possible to use HyperTalk to grab a clicked word in a text field, which might be enough context to make a decision upon.
HyperTalk can be frustrating when implementing advanced ideas. YMMV.

Stone Tools 3 months ago

Bank Street Writer on the Apple II

Stop me if you've heard this one. In 1978, a young man wandered into a Tandy Radio Shack and found himself transfixed by the TRS-80 systems on display. He bought one just to play around with, and it wound up transforming his life from there on. As it went with so many, so too did it go with lawyer Doug Carlston. His brother, Gary, initially unimpressed, warmed up to the machine during a long Maine winter. The two thus smitten mused, "Can we make money off of this?" Together they formed a developer-sales relationship, with Doug developing Galactic Saga and third brother Don developing Tank Command. Gary's sales acumen brought early success, and Broderbund was officially underway.

Meanwhile in New York, Richard Ruopp, president of Bank Street College of Education, a kind of research center for experimental and progressive education, was thinking about how emerging technology fit into the college's mission. Writing was an important part of their curriculum, but according to Ruopp, "We tested the available word processors and found we couldn't use any of them." So, experts from Bank Street College worked closely with consultant Franklin Smith and software development firm Intentional Educations Inc. to build a better word processor for kids. The fruit of that labor, Bank Street Writer, was published by Scholastic exclusively to schools at first, with Broderbund taking up the home distribution market a little later. Bank Street Writer would dominate home software sales charts for years, and its name would live on as one of the sacred texts, like Lemonade Stand or The Oregon Trail. Let's see what lessons there are to learn from it yet.

1916: Founded by Lucy Sprague Mitchell, Wesley Mitchell, and Harriet Johnson as the "Bureau of Educational Experiments" (BEE), with the goal of understanding in what environment children best learn and develop, and of helping adults learn to cultivate that environment.
1930: BEE moves to 69 Bank Street. (Will move to 112th Street in 1971, for space reasons.)
1937: The Writer's Lab, which connects writers and students, is formed.
1950: BEE is renamed Bank Street College of Education.
1973: Minnesota Educational Computing Consortium (MECC) is founded. This group would later go on to produce The Oregon Trail.
1983: Bank Street Writer, developed by Intentional Educations Inc., published by Broderbund Software, and "thoroughly tested by the academics at Bank Street College of Education." Price: $70.
1985: Writer is a success! Time to capitalize! Bank Street Speller $50, Bank Street Filer $50, Bank Street Mailer $50, Bank Street Music Writer $50, Bank Street Prewriter (published by Scholastic) $60.
1986: Bank Street Writer Plus $100. Bank Street Writer III (published by Scholastic) $90. It's basically Plus with classroom-oriented additions, including a 20-column mode and additional teaching aids.
1987: Bank Street Storybook, $40.
1992: Bank Street Writer for the Macintosh (published by Scholastic) $130. Adds limited page layout options, HyperCard-style hypertext, clip art, punctuation checker, image import with text wrap, full color, sound support, "Classroom Publishing" of fliers and pamphlets, and electronic mail.

With word processors, I want to give them a chance to present their best possible experience. I do put a little time into trying the baseline experience many would have had with the software during the height of its popularity. "Does the software still have utility today?" can only be fairly answered by giving the software a fighting chance. To that end, I've gifted myself a top-of-the-line (virtual) Apple //e running the last update to Writer, the Plus edition.

You probably already know how to use Bank Street Writer Plus. You don't know you know, but you do know, because you have familiarity with GUI menus and basic word processing skills.
All you're lacking is an understanding of the vagaries of data storage and retrieval as necessitated by the hardware of the time, but once armed with that knowledge you could start using this program without touching the manual again. It really is as easy as the makers claim.

The simplicity is driven by a very subtle, forward-thinking user interface. Of primary interest is the upper prompt area. The top three lines of the screen serve as an ever-present, contextual "here's the situation" helper. What's going on? What am I looking at? What options are available? How do I navigate this screen? How do I use this tool? Whatever you're doing, whatever menu option you've chosen, the prompt area is already displaying information about which actions are available right now in the current context. As the manual states, "When in doubt, look for instructions in the prompt area." The manual speaks truth.

For some, the constant on-screen prompting could be a touch overbearing, but I personally don't think it's so terrible to know that the program is paying attention to my actions and wants me to succeed. The assistance isn't front-loaded, like so many mobile apps, nor does it interrupt, like Clippy. I simply can't fault the good intentions, nor can I really think of anything in modern software that takes this approach to user-friendliness.

The remainder of the screen is devoted to your writing and works like any other word processor you've used. Just type, move the cursor with the arrow keys, and type some more. I think most writers will find it behaves "as expected." There are no Electric Pencil-style over-type surprises, nor VisiCalc-style arrow key manipulations. What seems to have happened is that in making a word processor that is easy for children to use, they accidentally made a word processor that is just plain easy. The basic functionality is drop-dead simple to pick up by just poking around, but there's quite a bit more to learn here.
To do so, we have a few options for getting to know Bank Street Writer in more detail. There are two manuals, by virtue of the program's educational roots. Bank Street Writer was published by both Broderbund (for the home market) and Scholastic (for schools), and each tailored its own manual to its respective demographic. Broderbund's manual is cleanly designed, easy to understand, and gets right to the point. It is not as "child focused" as reviews at the time might have you believe. Scholastic's is more of a curriculum to teach word processing, part of the 80s push for "computers in the classroom." It's packed with student activities, pages that can be copied and distributed, and (tellingly) information for the teacher explaining "What is a word processor?"

Our other option for learning is on side 2 of the main program disk. Quite apart from the program proper, the disk contains an interactive tutorial. I love this commitment to the user's success, though I breezed through it in just a few minutes, being a cultured word processing pro of the 21st century. I am quite familiar with "menus," thank you very much.

As I mentioned at the top, the screen is split into two areas: prompt and writing. The prompt area is fixed, and can neither be hidden nor turned off. This means there's no "full screen" option, for example. The writing area runs in high-res graphics mode so as to bless us with the gift of an 80-character-wide display. Being a graphics display also means the developer could have put anything on screen, including a ruler, which would have been a nice formatting helper. Alas.

Bank Street offers limited preference settings; there's not much we can do to customize the program's display or functionality. The upshot is that as I gain confidence with the program, the program doesn't offer to match my ability. There is one notable trick, which I'll discuss later, but overall there is a missed opportunity here for adapting to a user's increasing skill.
Kids do grow up, after all.

As with Electric Pencil, I'm writing this entirely in Bank Street Writer. Unlike the keyboard/software troubles there, here in 128K Apple //e world I have Markdown luxuries. The emulator's amber mode is soothing to the eyes and soul. Mouse control is turned on and works perfectly, though it's much easier and faster to navigate by keyboard, as God intended. This is an enjoyable writing experience.

Which is not to say the program is without quirks. Perhaps the most unfortunate one is how little writing space 128K RAM buys for a document. At this point in the write-up I'm at about 1,500 words, and BSW's memory check function reports I'm already at 40% of capacity. So the largest document one could keep resident in memory at one time would run about 4,000 words max? Put bluntly, that ain't a lot. Splitting documents into multiple files is pretty much forced upon anyone wanting to write anything of length. Given floppy disk fragility, especially with children handling them, perhaps that's not such a bad idea. However, from an editing point of view, it is frustrating to recall which document I need to load to review any given piece of text.

Remember also, there's no copy/paste as we understand it today. Moving a block of text between documents is tricky, but possible. BSW can save a selected portion of text to its own file, which can then be "retrieved" (inserted) at the current cursor position in another file. In this way the diskette functions as a memory buffer for cross-document "copy/paste." Hey, at least there is some option available.

Flipping through old magazines of the time, it's interesting just how often Bank Street Writer comes up as the comparative reference point for home word processors over the years. If a new program had even the slightest whiff of trying to be "easy to use," it was invariably compared to Bank Street Writer.
Likewise, there were any number of writers and readers of those magazines talking about how they continued to use Bank Street Writer, even though so-called "better" options existed. I don't want to oversell its adoption by adults, but it most definitely was not a children-only word processor, by any stretch. I think the release of Plus embraced a more mature audience. In schools it reigned supreme for years, including the Scholastic-branded version of Plus called Bank Street Writer III. There were add-on "packs" of teacher materials for use with it. There was also Bank Street Prewriter, a tool for helping to organize themes and thoughts before committing to the act of writing, including an outliner, as popularized by ThinkTank. (It's always interesting when influences ripple through the industry like this.)

Of course, the Scholastic approach was built around the idea of teachers having access to computers in the classroom. And THAT was built on the idea of teachers feeling comfortable enough with computers to seamlessly merge them into a lesson plan. Sure, the kids needed something simple to learn, but let's be honest, so did the adults.

There was a time when attaching a computer to anything meant a fundamental transformation of that thing was assured and imminent. For example, the "office of the future" (as discussed in the Superbase post) had a counterpart in the "classroom of tomorrow." In 1983, Popular Computing said, "Schools are in the grip of a computer mania." Steve Jobs took advantage of this, skating to where the puck would be, by donating Apple IIs to California schools. In October 1983, Creative Computing did a little math on that plan: $20M in retail donations cost Apple $5M gross, offset by $4M in tax credits. Apple could donate a computer to every elementary, middle, and high school in California for a net outlay of only $1M.
Jobs lobbied Congress hard to pass a national version of the same "Kids Can't Wait" bill, which would have extended federal tax credits for such donations. That never made it to law, for various political reasons. But the California initiative certainly helped position Apple as the go-to system for computers in education. By 1985, Apple would dominate fully half of the education market. That would continue into the Macintosh era, though Apple's dominance diminished slowly as cheaper, "good enough" alternatives entered the market. Today, Apple is #3 in the education market, behind Windows and Chromebooks. It is a fair question to ask, "How useful could a single donated computer be to a school?" Once it's in place, then what? Does it have function? Does anyone have a plan for it? Come to think of it, does anyone on staff even know how to use it? When Apple put a computer into (almost) every school in California, they did require training. Well, let's say lip service was paid to the idea of the aspiration of training. One teacher from each school had to receive one day's worth of training to attain a certificate which allowed the school to receive the computer. That teacher was then tasked with training their coworkers. Wait, did I say "one day?" Sorry, I meant about one HOUR of training. It's not too hard to see where Larry Cuban was coming from when he published Oversold & Underused: Computers in the Classroom in 2001. Even for schools with more than a single system, he notes, "Why, then, does a school's high access (to computers) yield limited use? Nationally and in our case studies, teachers... mentioned that training in relevant software and applications was seldom offered... (Teachers) felt that the generic training available was often irrelevant to their specific and immediate needs." From my perspective, and I'm no historian, it seems to me there were four ways computers were introduced into the school setting.
The three most obvious were: one or more computers at the classroom level; a school-level "computer lab" with one or more systems; or no computers at all. I personally attended schools of all three types. What I can say the schools had in common was how little attention, if any, was given to the computer and how little my teachers understood them. An impromptu poll of friends aligned with my own experience. Schools didn't integrate computers into classwork, except when classwork was explicitly about computers. I sincerely doubt my time playing Trillium's Shadowkeep during recess was anything close to Apple's vision of a "classroom of tomorrow." The fourth approach to computers in the classroom was significantly more ambitious. Apple tried an experiment in which five public school sites were chosen for a long-term research project. In 1986, the sites were given computers for every child in class and at home. They reasoned that for computers to truly make an impact on children, the computer couldn't just be a fun toy they occasionally interacted with. Rather, it required full integration into their lives. Now, it is darkly funny to me that having achieved this integration today through smartphones, adults work hard to remove computers from school. It is also interesting to me that Apple kind of led the way in making that happen, although in fairness they don't seem to consider the iPhone to be a computer. America wasn't alone in trying to give its children a technological leg up. In England, the BBC spearheaded a major drive to get computers into classrooms via a countrywide computer literacy program. Even in the States, I remember watching episodes of BBC's The Computer Programme on PBS. Regardless of Apple's or the BBC's efforts, the long-term data on the effectiveness of computers in the classroom has been mixed, at best, or even an outright failure.
Apple's own assessment of their "Apple Classrooms of Tomorrow" (ACOT) program after a couple of years concluded, "Results showed that ACOT students maintained their performance levels on standard measures of educational achievement in basic skills, and they sustained positive attitudes as judged by measures addressing the traditional activities of schooling." Which is a "we continue to maintain the dream of selling more computers to schools" way of saying, "Nothing changed." In 2001, the BBC reported, "England's schools are beginning to use computers more in teaching - but teachers are making "slow progress" in learning about them." Then in 2015 the results were "disappointing": "Even where computers are used in the classroom, their impact on student performance is mixed at best." Informatique pour tous, France 1985: Pedagogy, Industry and Politics by Clémence Cardon-Quint described the French attempt at computers in the classroom as "an operation that can be considered both as a milestone and a failure." Computers in the Classrooms of an Authoritarian Country: The Case of Soviet Latvia (1980s–1991) by Iveta Kestere and Katrina Elizabete Purina-Bieza shows the introduction of computers to have drawn stark power and social divides, while pushing prescribed gender roles of computers being "for boys." Teachers Translating and Circumventing the Computer in Lower and Upper Secondary Swedish Schools in the 1970s and 1980s by Rosalía Guerrero Cantarell noted, "the role of teachers as agents of change was crucial. But teachers also acted as opponents, hindering the diffusion of computer use in schools." Now, I should be clear that things were different in the higher education market, as with PLATO in the universities. But in the primary and secondary markets, Bank Street Writer's primary demographic, nobody really knew what to do with the machines once they had them.
The most straightforwardly damning assessment is from Oversold & Underused where Cuban says in the chapter "Are Computers in Schools Worth the Investment?", "Although promoters of new technologies often spout the rhetoric of fundamental change, few have pursued deep and comprehensive changes in the existing system of schooling." Throughout the book he notes how most teachers struggle to integrate computers into their lessons and teaching methodologies. The lack of guidance in developing new ways of teaching means computers will continue to be relegated to occasional auxiliary tools trotted out from time to time, not integral to the teaching process. "Should my conclusions and predictions be accurate, both champions and skeptics will be disappointed. They may conclude, as I have, that the investment of billions of dollars over the last decade has yet to produce worthy outcomes," he concludes. Thanks to my sweet four-drive virtual machine, I can summon both the dictionary and thesaurus immediately. Put the cursor at the start of a word and hit or to get an instant spot check of spelling or synonyms. Without the reality of actual floppy disk access speed, word searches are fast. Spell-checking can be performed on the full document, which does take noticeable time to finish. One thing I really love is how cancelling an action or moving forward to the next step of a process is responsive and immediate. If you're growing bored of an action taking too long, just cancel it with ; it will stop immediately. The program feels robust and unbreakable in that way. There is a word lookup, which accepts wildcards, for when you kinda-sorta know how to spell a word but need help. Attached to this function is an anagram checker which benefits greatly from a virtual CPU boost. But it can only do its trick on single words, not phrases. Earlier I mentioned how little the program offers a user who has gained confidence and skill.
That's not entirely accurate, thanks to its most surprising super power: macros. Yes, you read that right. This word processor designed for children includes macros. They are stored at the application level, not the document level, so do keep that in mind. Twenty can be defined, each consisting of up to 32 keystrokes. Running keystrokes in a macro is functionally identical to typing by hand. Because the program can be driven 100% by keyboard alone, macros can trigger menu selections and step through tedious parts of those commands. For example, to save our document periodically we need to do the following every time: That looks like a job for to me. Defining a macro to save, with overwrite, the current file. After it is defined, I execute it, which happens very quickly in the emulator. Watch carefully. If you can perform an action through a series of discrete keyboard commands, you can make a macro from it. This is freeing, but also works to highlight what you cannot do with the program. For example, there is no concept of an active selection, so a word is the smallest unit you can directly manipulate due to keyboard control limitations. It's not nothin' but it's not quite enough. I started setting up markdown macros, so I could wrap the current word in or for italic and bold. Doing the actions in the writing area and noting the minimal steps necessary to achieve the desired outcome translated into perfect macros. I was even able to make a kind of rudimentary "undo" for when I wrap something in italic but intended to use bold. This reminded me that I haven't touched macro functionality in modern apps since my AppleScript days. Lemme check something real quick. I've popped open LibreOffice and feel immediately put off by its Macros function. It looks super powerful; a full dedicated code editor with watched variables for authoring in its scripting language. Or is it languages? Is it Macros or ScriptForge? What are "Gimmicks?" Just what is going on?
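For contrast with those modern systems, the essence of a BSW-style macro (twenty slots, up to 32 keystrokes apiece, replayed exactly as if typed) can be modeled in a few lines of Python. This is a toy sketch of the idea only; the class, the slot name, and key labels like "ESC" and "RET" are my inventions, not BSW's actual keys:

```python
# Toy model of a Bank Street Writer-style keystroke macro table.
# ASSUMPTIONS: slot/key names are mine; BSW's real save-key sequence
# is not reproduced here.

MAX_MACROS = 20       # BSW offers twenty macro slots...
MAX_KEYS = 32         # ...each holding up to 32 keystrokes

class MacroTable:
    def __init__(self):
        self.macros = {}  # macro name -> list of keystrokes

    def define(self, name, keystrokes):
        if name not in self.macros and len(self.macros) >= MAX_MACROS:
            raise ValueError("all twenty macro slots are in use")
        if len(keystrokes) > MAX_KEYS:
            raise ValueError("macros hold at most 32 keystrokes")
        self.macros[name] = list(keystrokes)

    def play(self, name, send_key):
        # Playback is just feeding keystrokes to the same handler the
        # keyboard uses -- indistinguishable from hand-typing.
        for key in self.macros[name]:
            send_key(key)

# A hypothetical "save with overwrite" macro stepping through menus:
typed = []
table = MacroTable()
table.define("save", ["ESC", "S", "RET", "RET", "RET"])
table.play("save", typed.append)
```

Note there is no second language to learn: defining a macro demands nothing beyond the keystrokes the user already types by hand, which is exactly BSW's trick.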
Google Docs is about the same, using Javascript for its "Apps Script" functionality. Here's a Stack Overflow post where someone wants to select text and set it to "blue and bold" with a keystroke and is presented with 32 lines of Javascript. Many programs seem to have taken a "make the simple things difficult, and the hard things possible" approach to macros. Microsoft Word reportedly has a "record" function for creating macros, which will watch what you do and let you play back those actions in sequence. (à la Adobe Photoshop's "actions") This sounds like a nice evolution of the BSW method. I say "reportedly" because it is not available in the online version and so I couldn't try it for myself without purchasing Microsoft 365. I certainly don't doubt the sky's the limit with these modern macro systems. I'm sure amazing utilities can be created, with custom dialog boxes, internet data retrieval, and more. The flip side is that a lot of power has been stripped from the writer and handed over to the programmer, which I think is unfortunate. Bank Street Writer allows an author to use the same keyboard commands for creating a macro as for writing a document. There is a forgotten lesson in that. Yes, BSW's macros are limited compared to modern tools, but they are immediately accessible and intuitive. They leverage skills the user is already known to possess. The learning curve is a straight, flat line. Like any good word processor, user-definable tab stops are possible. Bringing up the editor for tabs displays a ruler showing tab stops and their type (normal vs. decimal-aligned). Using the same tools for writing, the ruler is similarly editable. Just type a or a anywhere along the ruler. So, the lack of a ruler I noted at the beginning is now doubly-frustrating, because it exists! Perhaps it was determined to be too much visual clutter for younger users?
Again, this is where the Options screen could have allowed advanced users to toggle on features as they grow in comfort and ambition. From what I can tell in the product catalogs, the only major revision after this was for the Macintosh, which added a whole host of publishing features. If I think about my experience with BSW these past two weeks, and think about what my wish list for a hypothetical update might be, "desktop publishing" has never crossed my mind. Having said all of that, I've really enjoyed using it to write this post. It has been solid, snappy, and utterly crash-free. To be completely frank, when I switched over to LibreOffice, a predominantly native app for Windows, it felt laggy and sluggish. Bank Street Writer feels smooth and purpose-built, even in an emulator. Features are discoverable and the UI always makes it clear what action can be taken next. I never feel lost, nor do I worry that an inadvertent action will have unknowable consequences. The impression of it being an assistant to my writing process is strong, probably more so than with many modern word processors. This is cleanly illustrated by the prompt area, which feels like a "good idea we forgot." (I also noted this in my ThinkTank examination) I cannot lavish such praise upon the original Bank Street Writer, only on this Plus revision. The original is 40-columns only, spell-checking is a completely separate program, there's no thesaurus, no macros, a kind of bizarre modal switch between writing/editing/transfer modes, no arrow key support, and other quirks of its time and target system (the original Apple 2). Plus is an incredibly smart update to that original, increasing its utility 10-fold without sacrificing ease of use. In fact, it's actually easier to use, in my opinion, than the original, and comes just shy of being something I could use on a regular basis. Bank Street Writer is very good! But it's not quite great.
Ways to improve the experience, notable deficiencies, workarounds, and notes about incorporating the software into modern workflows (if possible).

Setup:
- AppleWin 32-bit 1.31.0.0 on Windows 11
- Emulating an Enhanced Apple //e
- Authentic machine speed (enhanced disk access speed)
- Monochrome (amber) for clean 80-column display
- Disk II controller in slot 5 (enables four floppies, total)
- Mouse interface in slot 4
- Bank Street Writer Plus

The three school settings referenced in the post:
- At the classroom level there are one or more computers.
- At the school level there is a "computer lab" with one or more systems.
- There were no computers.

The save sequence referenced in the post:
- Hit (open the File menu)
- Hit (select Save File)
- Hit three times (stepping through default confirmation dialogs)

I find that running at 300% CPU speed in AppleWin works great. No repeating key issues, and the program is well-behaved. Spell check works quickly enough to not be annoying and I honestly enjoyed watching it work its way through the document. Sometimes there's something to be said for slowing the computer down to swift human speed, to form a stronger sense of connection between your own work and the computer's work. I did mention that I used a 4-disk setup, but in truth I never really touched the thesaurus. A 3-disk setup is probably sufficient. The application never crashed; the emulator was rock-solid. CiderPress2 works perfectly for opening the files on an Apple ][ disk image. Files are of a file extension which CiderPress2 tries to open as disassembly, not text. Switch "Conversion" to "Plain Text" and you'll be fine.

This is a program that would benefit greatly from one more revision. It's very close to being enough for a "minimalist" crowd. There are four key pieces missing for completeness:
- Much longer document handling
- Smarter, expanded dictionary, with definitions
- Customizable UI, display/hide: prompts, ruler, word count, etc.
- Extra formatting options, like line spacing, visual centering, and so on.
For a modern writer using hyperlinks, this can trip up the spell-checker quite ferociously. It doesn't understand, nor can it be taught, pattern-matching against URLs to skip them.

Stone Tools 3 months ago

ThinkTank on the PC w/DOS

There's just no denying or sugar-coating it: I'm getting older. My brain ain't what it once was. How many times have I forgotten to take the reusable grocery bag with me to the store? It's hanging ON THE FRONT DOOR AS I EXIT, and I still forget. Yet, with age comes ideas. Even good ones sometimes! Like the grocery bag, they too can be forgotten as I scramble to commit them, but get sidetracked by Affinity Designer wanting to apply an update. "What was I doing? Why did I open Designer? Ha! That YouTuber's cat just sneezed!" And thus, the moment is lost. I've long been peripherally aware of "outliners" which promise to help capture and develop my fleeting thoughts. "Idea processors," "personal knowledge managers," "mind mappers" - software of this ilk has multiplied in recent years, but I wanted to try the one that started it all. I'm a "back to basics" kind of guy, after all. At first glance ThinkTank looks threadbare, but a 240-page manual and a 388-page (!) companion book suggest a richer tapestry. Have I unjustly miscategorized this software as a mere weapons-grade text indenter? Is there actually a better way to think? Note: Dollar amounts in parentheses are inflation-adjusted for 2025. ThinkTank is launched in DOS by an oddly named .exe file, for reasons I cannot imagine. A handsomely arranged text-mode splash screen welcomes me in. The date prompt happens with every launch, and the program most likely believes the year is nineteen twenty-five, not twenty twenty-five. Nothing we can do about that, so with an "n" we begin. First impressions don't do much to dissuade me from my gut reaction of, "We need a 400 page book for this?" Don't get me wrong, I like it! The screen is essentially blank, save for the 4-line prompt area. Contextual (sometimes insufficient) help is displayed there. I find it comforting having ever-present guidance on screen at all times, like a helper watching over my shoulder. I like it, even if I think it could be executed better.
(spoilers: we'll see it done better in the next post) Unlike the mnemonics of VisiCalc, ThinkTank keyboard shortcuts almost feel random. Top-level menu commands can be triggered directly without entering the menu, while secondary menu commands (under extra/F10) can mostly, but not always, be triggered directly. With the command menu open, options are chosen by arrow keys or a shortcut key, displayed flanking the left and right of the bottom bar. In the screenshot above, "insert" will "add new headline(s)" and can be activated by the "Ins" (Insert) key on an extended keyboard. "Insert" will "add". Word choices like that are trivial in the long run, but also make me "hmm...." as they did some reviewers. As a self-professed "idea processor," what precisely does ThinkTank want to help me build? The brainchild of Dave Winer, the core conceit is to surface the invisible computer science notion of a data "tree" into a visual structure. You see this nested structure everywhere now, even just navigating the contents of your hard drive in list view. Folders, which contain files and folders, which contain further files and folders, ad infinitum. To start entering an outline for this very post, I type a header in Insert mode, hit Return, type in another header at the same level, and continue down recording top-level thoughts. Left and right arrow will indent/unindent to capture subordinate ideas. This is fast and efficient for "first capture" of an idea, no doubt about it. When done, every header is auto-prefixed with either a + or a -, meaning "has nested data" or not. + headers can be expanded and collapsed to reveal/hide subordinate data; said data may or may not be currently visible. Visually, there is no distinction between the "expanded/collapsed" states, as disclosure triangles tend to signify these days. At this point, even Electric Pencil could serve as a simple outliner if this is all you need, and Pencil's first manual was a mere 26 pages.
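The nested structure just described, headers whose subordinate data can be folded away, is a plain tree with a collapsed flag. A minimal Python sketch (the class and method names are mine, not anything from ThinkTank):

```python
# Minimal model of the outline-as-tree idea. Names are my own,
# not ThinkTank's internals.

class Node:
    def __init__(self, title):
        self.title = title
        self.children = []     # subordinate headers/paragraphs
        self.collapsed = False

    def prefix(self):
        # The header prefix: "+" if nested data exists at all, "-" if not.
        return "+" if self.children else "-"

    def visible(self, depth=0):
        # Walk only what's on screen: collapsed subtrees stay hidden.
        yield depth, self
        if not self.collapsed:
            for child in self.children:
                yield from child.visible(depth + 1)

root = Node("Stone Tools post")
intro, body = Node("Intro"), Node("Body")
root.children += [intro, body]
body.children.append(Node("Macros"))

body.collapsed = True   # fold the subtree away
shown = [node.title for _, node in root.visible()]
# "Macros" is now hidden, but moving `body` would drag it along,
# since the subtree travels with its header. A "clone" falls out for
# free: put the same Node object under two parents and an edit made
# through either shows up in both.
```

Because a subtree hangs off its header, moving or deleting the header carries everything below it, just as the program behaves.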
ThinkTank justifies its existence through specialized tools for preserving and manipulating data as a tree. William R. Hershey wrote for BYTE Magazine in May 1984, "Computers use trees all the time in their internal workings. Users, however, are seldom aware of them because programs reveal only the forest. With Thinktank 's tree structure out in the open, we can expect to see some very interesting uses made of this program." As far as I can tell, ThinkTank led the way on this. "Folding" data at the tree level really is a nice way to visually simplify a project, to focus on a specific sub-unit of information. Headings can be reordered, "promoted" in hierarchy, and so on. If a heading has subheadings, manipulation of the major heading affects all nested data as a singular unit, like how moving a folder also moves the files it contains. You get it. We know it can build an outline, but ThinkTank promises much more than that. Can it process my ideas? Can it help me think? In my most humble (and correct!) opinion, the user interface for a tool devoted to ideas and thoughts should be as second-nature as possible. Contrary to a previous post, this is one time where friction really is bad, due to the stated goals of the program. In this case, the friction exists in our minds and ThinkTank is explicitly offering to function as WD-40. ThinkTank takes a stab at effortless idea recording by adopting the number pad as a kind of central control panel for navigation. The first impression is that this will handle much more minute-to-minute editing than it actually does. It's useful enough for exploring a finished outline, but doesn't prove particularly helpful when trying to process and refine the outline. In practice, I find I jump around the keyboard a lot, from number pad to main keys to F-keys. There's a lot to remember in the usage of the program, even as I'm simultaneously trying not to forget my good ideas. This is not quite the smooth process I'd hoped for. 
I'm not feeling flow. Flow escapes me in part due to software usage rules which don't come naturally. The book I'm studying, Mastering ThinkTank on the IBM-PC by Jonathan Kamin, spends an inordinate amount of time talking about two things. First, it is chockablock with stories about fictional characters in the fictional company Sky High Technologies, and how those people use ThinkTank individually and as a team. Entire chapters are devoted to these people. Second, it talks about the rules of tool usage, and there are a lot of them. Many describe, with fairly long-winded stories, what will be manipulated at any given moment when enacting a menu action, and where to position the cursor to achieve a desired outcome. For reference, in the command glossary, the "Copy" function has five different entries, with five different keyboard commands, describing the unique rules around copying under various circumstances. "Delete" has six such entries. For crying out loud, one key alone is so convoluted it has an entire section in the back of the book devoted to its quirks. The key modern software would have you expect to handle indentation-level editing is, in ThinkTank, used to "mark" headers. Marked headers receive a 🔹 symbol in place of the + or -, which has the unfortunate side-effect of removing header state identification. Today we call this "selecting" and here it's been... let's call it "overthought." Marking can be done in three ways: by the key for individual items, by the "mark" key to mark "all" or "none" of the items in a header group, or by "keyword" then to "mark" every item which contains your search keyword. Three ways to mark, all with different key commands and under different menus. Once we have our marked elements, we can "gather" them. This means to cut all marked items out of their current positions and paste them into a new top-level group with the fixed name "gathered outlines."
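A hedged Python sketch of that mark-and-gather behavior, using the keyword variant (the flat header-plus-subitems layout is my simplification, not ThinkTank's file format):

```python
# Sketch of mark-and-gather as described in the post: marked headers
# are cut from wherever they sit and pasted under a new top-level
# group with the fixed name "gathered outlines".
# ASSUMPTION: an outline is a list of (header, [subitems]) pairs.

def gather(outline, keyword):
    """Mark every header containing `keyword`, cut the marked items
    (subitems travel with them), and append them as a new
    "gathered outlines" group."""
    kept, marked = [], []
    for header, subitems in outline:
        if keyword.lower() in header.lower():
            marked.append((header, subitems))   # marked, then cut
        else:
            kept.append((header, subitems))
    if marked:
        kept.append(("gathered outlines", marked))
    return kept

shop = [
    ("Inventory: floppies", ["dBASE II", "WordStar"]),
    ("Payroll", []),
    ("Inventory: manuals", []),
]
result = gather(shop, "inventory")
```

Subitems travel with their marked header, matching the rule that manipulating a header affects all of its nested data as a unit.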
In this way, it is proposed, we can quickly rearrange our outline to reflect new insights gleaned by inspecting the ideas committed during the initial capture. Keyword mark-and-gather presumes the user has applied keywords to headers carefully, so as to facilitate this impromptu rearrangement. Put bluntly, to get the most out of ThinkTank we must meet it halfway, subtly changing our way of thinking to align more with the program. The book explicitly acknowledges this. The "cloning" function further opens up the possibility of using ThinkTank more like a database. Clone a header to another spot and those two headers maintain a quantum link, signified by the symbol. Once again we lose insight into the + or - status of cloned items. That said, changing one clone instantly changes all clones, a nice "could only be done with a computer" enhancement over paper methods. Outlines are not restricted to mere "headings with subheadings." While full-blown media attachments cannot be embedded, as later, more advanced outliners (and operating systems) allow, we can add long-ish chunks of text. At any level of an outline, tap and an inline word processor, complete with rulers and tab stops, springs into existence. Yes, ThinkTank has word processing functions built right in, with caveats. There's no spellcheck or thesaurus, and formatting options are restricted to whatever you can do with a tab key and the space bar. Additionally, each subhead can hold only one block of text, up to a maximum of about 20,000 characters each. Once typed, those text blocks can be moved around at the header level easily. Moving text between blocks, or between headers, is basically easy if you remember that is "paste." (the program will not inform you of this, unlike other tools which do) Hershey's review in BYTE notes, "Differences in the sequence of commands for creating and editing paragraphs is bothersome.
Within the same paragraph, the New mode, for example, requires a different set of cursor moves than the Edit mode..." My intention was to write this entire article in ThinkTank and once again, Hershey and I agree, "I originally intended to write this review entirely with Thinktank. But the editing routine for Thinktank paragraphs is so cumbersome that I decided to use the Word Juggler program from Quark, which now has me spoiled. I still believe, however, in the value of Thinktank for organizing information and writing outlines." In essence, arranging top-level thoughts in an outline form is a good way to mentally organize ideas. As a way to write a fleshed-out document, it's frustrating having to break up longer texts into dozens of little chunks. But, I did use this tool to capture a fleeting sentence here and there. At the end of the day, the document editor is a delightful addition while simultaneously being too much to bother with. Is there a German word for this? There are quite a lot of fumbles in the user experience. I'll start with how a simple toggle state for a header has three separate keys for expanding and collapsing. Just hitting should collapse/expand as applicable, no? No. We need to collapse and to expand. A third key almost functions as a toggle, because why use the two existing keys when you can assign a third key while also subtly altering its usage? There is a very nice function called "hoisting" which "zooms in" and filters out everything but the selected header level and its sub-items. "dehoist" zooms back out. "promote" will shift a selected header (and sub-data) "up" a level, but there is no "demote" equivalent. Doing that requires a "move" which might not capture all previously promoted data. Deleting a header can be undone by, counterintuitively, selecting "delete" again, then "undo." ThinkTank also offers alphabetic sorting, which feels counter to a tool framed around the organization of ideas rather than lexicon.
At whatever level your cursor is at in the tree hierarchy, the "alpha" function will sort all headers of the same level in ascending order, dragging their subordinate items along with them. No, you can't do the opposite and get descending sort. Outside of database-style header entries, I can't see how to work this into the creative process. But, it's available and does give the program flexibility to be used in non-obvious ways. Of course, every time I say "opposite" I mean "opposite to my personal way of thinking." Like grammar, software also has the concepts of "verbs" and "nouns." "promote" is a verb, which we can select as our intended action, then we select a header, the "noun" in this case, onto which to apply that verb. The "opposite" way to consider this would be to select a noun, then choose which verb to apply to it. This, for example, is how marking keywords works. Choose the noun "keyword" then the verb "mark." ThinkTank is inconsistent in choosing one approach over the other. Sometimes it's verb-first, sometimes it's noun-first. I am not proposing either is "better" than the other. I simply want to note this as a philosophical difference in how two people may approach the same problem. In fact, this all points to the core, central matter which lingers over this entire software genre. It's a deep question not even a Juggalo can answer. This gets to the heart of my interest in, and frustration with, this software. When it works like me, it's great and when it doesn't, it's annoying; there's not a lot of in-between. In fact, I bristle when it works counter to intuition. It's almost a personal insult. But why? Could this be a driving force behind so many attempts to re-envision and/or re-create such software? Have you seen how many apps there are today? Drummer , OmniOutliner , Workflowy , Dynalist , Checkvist , TreeLine , Scrivener , Cloud Outliner , Capacities , Logseq , DevonThink , CarbonFin Outliner , Tinderbox , and Roam . 
Should have kept an outline of outliners; feels like I'm forgetting something. 🤔 "I'm going to build an outliner that works!" must be a common developer thought for there to be so many competing products. Heck, I've even thought it a few times myself while working with ThinkTank. Heck again, Dave Winer himself seems unable to resist the siren's call. His repo for Concord was updated just last year; that marks 50 years of development on the matter by the same man. Interestingly, Winer didn't conceive of the outliner as a writer's tool, but rather for software developers. Developers rejected it outright, but writers gravitated to it, much to his surprise. He developed ThinkTank with their needs in mind, though I think it is important to note that he himself was not a writer. Some think outlining should happen early and first, to kickstart the writing process. One study published in 2023 concluded, "From the results, can be concluded that there is a significant effect of the Outline Technique on Students' Writing Skills in Coherent Paragraphs... in the 2022/2023 academic year." Conversely, Peter Elbow, Distinguished Professor of English at the University of Massachusetts, abandoned outlining entirely. Winer's core functionality of "outlining" has been incorporated into any number of products, including venerable Microsoft Word back in v3.0 for DOS. In a way it has become "just the way we do things." That ubiquity definitely speaks to the lasting allure of the power of the outline. There remains a subset of the population, however, who are convinced that outliners are but a baby step toward something greater. We don't need outliners, we need, like, super-outliners! Knowledge itself must be tamed and wrangled into submission. Personal knowledge managers are the evolution of the outliner into something greater, for which 17K weekly visitors to r/PKMS are clearly on the hunt.
I had no idea this category of software attracted users so (I will choose my word carefully here) passionate(?) about finding the "right" or "best" PKM workflow. As a man who maintains a blog about productivity software pre-1995, I am certainly not one to judge another's passions. Even with the research I've done, I'm fuzzy on where the boundaries lie between outliners, PKMs, mind mappers, "note takers" and so on. Is a word processor which contains an outliner fair game in the PKM world, or is that a withering insult to the genre? Did I just get myself added to a "PKM Morons" outline by asking that question? My initial plan was to sample the latest outliners, as suggested by the Reddit forums, and see what they offer compared to ThinkTank. I'm sorry to say, I gave up; there are just too many. Outside of very focused, simple outliners, most felt heavy and cognitively burdensome. In fact, I've come out the other side of this research believing that no idea processor can ever win, because the concept of "idea" is itself not a fixed thing. How we think drifts over time, sometimes subtly, sometimes not. Just when we believe we've captured the shape of our own mind, it slips through our fingers like mercury. Using software like this has felt a bit like trying to sculpt with quicksilver. I committed myself to giving ThinkTank a fair shake and organized this post with it before committing anything to the blog. I did find it useful for organizing my initial thoughts, but then I found I didn't really need the advanced tools. The high-level structuring that proved useful could have been done (and has been to date) in any word processor or blogging platform. I can envision scenarios where the advanced tools would be useful. The book offers good examples, mostly centered around using ThinkTank like a Rolodex or light database. So use it more like a PKM than an idea processor?
The core concept, that my thoughts are so scattered that I'll transcribe them willy-nilly to rearrange later, simply doesn't match the way I think. As I thought of new ideas, I added them into their appropriate tree position at that moment. When I was done, everything was naturally self-ordered as a result.

Where the program really failed me, personally, was in feeling denied a flow-state. There is a mechanical feeling to its tools which was, to me, at odds with its stated goal. Menu actions get the job done at a level which does technically do what was asked, but whose aftermath tends to require no small amount of housekeeping to reorganize everything back into a tidy structure. You have to really love moving things around, promoting and demoting, copying and cloning, and generally just fiddling about with your outline. I can imagine there are personality types who are deeply attracted to this kind of tinkering, but it didn't come naturally to me. Bullet dodged? It takes two to love, and ThinkTank is making a good effort, even if it fumbles on the UI. The onus is on me to extend equal love back, but I can't. Sorry, ThinkTank, it's not you, it's me.

Ways to improve the experience, notable deficiencies, workarounds, and notes about incorporating the software into modern workflows (if possible).

DOSBox-X did a fantastic job for me right out of the gate. All I really did to enhance my user-experience was edit the configuration file with a mount line. This mounted ThinkTank upon program launch, ready to go. I didn't encounter any reason to boost the "cycle speed" for the emulator during my work sessions. Probably my outlines were too simplistic to push the program in any significant way. With DOSBox-X, saving data goes straight to the native OS file system, so there is no trouble getting data "out" of the emulator. The real trick is in getting the data into a format that is useful. ThinkTank can save data via its menu option, writing straight to the mounted drive.
This gives three options: formatted, word processor, structured. The word processor option specifically means WordStar compatible, which ultimately means just a raw text file, each header on its own line, formatting removed. Another option replaces all of the indentations with decimal numbering for each header. The remaining option gives us, perhaps, the best shot at automating the format and massaging it into Markdown (for example). All indentations are replaced with text prefix markers. The format is rigid and consistent, meaning a find/replace routine should be able to swap those prefixes out for Markdown equivalents fairly easily. I took a stab at it in Python, though I'm not completely happy with the final result.

Links:
- Winer's "antique" releases
- OPML 2.0 spec
- Concord (Little Outliner's engine)

My setup:
- DOSBox-X 2025.10.07, Windows x64 build
- Default hardware configuration (3000 cycles/ms)
- ThinkTank folder mounted as drive E:\
- 2x (forced) scaling
- TrueType text (sorry to the bitmap purists!)
- ThinkTank v2.41NP for DOS (courtesy Dave Winer's website, link above)

Neither the application nor OS ever crashed. It was a stable, smooth, snappy experience start-to-finish. For my tastes, it separates the outlining from the writing a little too discretely. I'd like to be able to slowly change an outline into a finished work, but that's pretty much out of the question with these tools. A memory-resident version of ThinkTank called Ready! allowed a slightly stripped-down version of the outliner to be called up with a hot key. It would be possible then to have the outliner and word processor running concurrently, which would mostly alleviate my concern, as it would be easy to jump between programs on-the-fly. Inconsistent terminology and keyboard usage can be frustrating to learn. Tools can be half-baked. Why just-and-only "ascending alphabetic" sort order?
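Since the prefix-marked export is rigid and consistent, the find/replace idea can be sketched in a few lines. Note that the marker scheme below (one "+" per outline depth) is a hypothetical stand-in, not ThinkTank's documented output; swap the regex for whatever prefixes your export actually contains.

```python
import re

# Hypothetical prefix scheme: one "+" per outline level. The real
# ThinkTank export markers may differ; adjust the regex to match.
MARKER = re.compile(r"^(\++)\s*(.*)$")

def outline_to_markdown(lines):
    """Swap depth-prefix markers for indented Markdown list items."""
    out = []
    for line in lines:
        m = MARKER.match(line)
        if not m:
            continue  # skip blank or unmarked lines
        depth = len(m.group(1)) - 1
        out.append("  " * depth + "- " + m.group(2))
    return "\n".join(out)
```

Feeding it lines like `"+ Inventory"` and `"++ Software"` yields a nested Markdown bullet list, ready to paste into a blog post.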


CAD-3D on the Atari ST

There are wizards among us who can bend hardware like a Uri Geller spoon to perform tricks thought impossible. Bill Budge springs to mind, with Steve Wozniak calling Pinball Construction Set the "greatest program ever written for an 8-bit machine." A pinball physics simulation, table builder, paint program, software distribution system, and more, driven by one of the first point-and-click GUIs, all in 48K on an Apple II. Likewise, Bill Atkinson seemed able to produce literal magic on the Macintosh's 68000 processor. Even when he felt a task was impossible, when pushed he'd regroup, rethink, and come back with an elegant solution. QuickDraw and HyperCard are legendary, not just in what they could do, but in how they did it.

Meanwhile, over on the Atari ST, Tom Hudson was producing a steady string of minor miracles. With CAD-3D, he pushed the machine beyond what many thought possible while also creating something that had its users drooling at the prospect of advanced systems yet to come. For the most part, the ST crowd had to wait essentially forever for machines that were up to the mathematical task. Hudson, frustrated with Atari's broken promises, and anxious to continue pushing the limits of 3D modeling and rendering, defected to DOS. Atari would die, but Hudson's work would live and grow. You know it today as 3ds Max. Let's see how it started.

This quite literally marks my first time using an Atari ST GEM environment and software. I don't anticipate any serious transition pains coming from an Amiga/Mac background. The desktop has a cursor and a trashcan; I should be fine. My first 3D software experience was generating magazine illustrations in Infini-D on the Macintosh around 1996. Since then, form-Z, Poser, Bryce, Strata Studio Pro, and Cinema 4D came and went; these days it's just Blender. It's fair to say I have a healthier-than-average amount of experience going into CAD-3D. I found two tutorials worth looking into.
The first is in the manual; always a good starting point. The second is a mini-tutorial which ran in Atari ST Review, issue 8, December 1992. The cover disk, a literal 3.5" floppy disk glued to the cover, included CAD-3D 1.0 and 2.0 on it. As far as I can tell, it was the full featured software, not stripped-down "demos." A special offer in the magazine gave readers a discount on ordering the manual for £34 (about $50 US, $115 adjusted for 2025). Those suckers could have saved a bunch of money if they'd waited 30 years like I did.

The Atari ST emulator STEEM comes bundled with a free operating system ROM replacement called EmuTOS. At first, it seems like a nice alternative to GEM/TOS until I try to run CAD-3D. I have obtained a TOS which works. I am accepting NO follow-up questions. Upon boot, I get a desktop icon of a filing cabinet (?) for floppy disk B (??) even though I don't have a disk in that drive (???). Do I want a "blitter?" CAD-3D wants "medium resolution" (640x200), which forces GEM to draw icons and cursors at half-width, giving the interface an elongated look. Click-and-drag operations in GEM need a half-second pause to "grab" a tool, lest the cursor slide off the intended drag target. Coming from the Mac and Amiga, I get distinct parallel universe vibes.

The aspect of GEM driving me most crazy is the menu bar. It is summoned by mere cursor proximity, no click required, but requires a click outside the menu to dismiss. Inadvertent menus which cover the tool I want to interact with require an extra dismissal step so I can continue with the interface element they obscured. It's maddening.

Launching the app is relatively quick, and I can kick the emulator into overdrive, which exhibits rreeppeeaattiinngg kkeeyyss. But this will be necessary from time to time, as it would for any older system trying to do the math this program demands. Once launched, I'm hit with a pretty intense set of tools and windows.
The left 1/3 of the screen is a fixed set of iconography for the tools. These were hidden away under menus in v1.0, but now they're front and left-of-center. The four-way split view on the right 2/3 of the screen is par for the modeling course. What's different here is the Camera view is "look, but don't touch." I spent a lot of time trying to move things around in there before I remembered "RTFM."

There was a moment early on with Electric Pencil when I felt attuned to its way of thinking, summoning menu commands without reading the manual. Deluxe Paint was the same, the tools doing intuitively what I expected. I really enjoy such moments when an affinity between the interface metaphor and my exploration is rewarded. CAD-3D is resisting this, preferring to remain inscrutable for now.

Screenshots in the manual contain a lot more fine detail than I see while using the program. In the GEM desktop preferences there was a greyed out "high resolution" option, unlocked by setting the emulator itself to a beefier 4MB MegaSTE. This hardware upgrade brings the display up to Macintosh-style high quality B&W, which feels very nice to use, but for this I want color, so back to medium-res for me.

While the top/right/front views function similarly to modern modelers, these have one notable difference. They are not "cameras" into those views; they are showing you the full, literal totality of a cube which contains your entire 3D world. Need more space? Make your objects very small. Objects are too small to see? Well, that's just how we do things here in 1986. It feels claustrophobic, but is also a simple mental model for comprehending "the universe" of your scene.

What I'm quickly learning is how the program is configured to conserve processing time and memory at all times. Changing a value, say camera zoom, doesn't change anything until you "apply" it. Shades of VisiCalc's "recalculate" function.
The low memory inherent to the hardware is subverted by a "split the advanced functions out into separate products" approach. Complex extrusions are relegated to Cyber Sculpt. Model texturing is available in Cyber Texture. Advanced compositing can be done in Cyber Paint. The memory budget is accessed by the big button just left of the center of the screen. This will show how close you are to maximizing the vertex/face count of the scene. Again, shades of VisiCalc's free-memory counter ticking down to zero as you work.

Adjusting objects, by scale or position, requires selecting them. Despite the mouse-and-pointer GUI, there is no direct manipulation of objects to be found here. If you enjoy grabbing handles on objects and dragging them into scale and position, you're going to have some hard habits to break. Object selection is convoluted; the basic tenet is "visible = selected." The "Objects" modal window provides a button for every object. Each button toggles its associated object's visibility after you click "OK." Selections here will modify the currently active group, designated by the tools mid-screen.

Scaling sliders (left side of screen) affect every visible object along the active view's axes. So horizontal/vertical in "Top" view scales different axes than "Front" view. Per-object scaling is possible, but only if you deselect all but one object. Switch from "Scale" to "Rotate" and the scaling sliders switch their function accordingly. Selections can be rotated around one of three pivot points: view center, selection center, or an arbitrary point within a given view. Sliders control horizontal and vertical rotation, but the third axis can also be rotated upon, though for some reason it was decided this should be via a pop-up window pie chart.

The ST in medium resolution can display up to 16 colors. Applying a color to an object is setting the "brightest" color for rendering that object.
Any other colors within that color's (user-definable) group will fill in the mid and darker tones. Simple palette gradients make for "realistic" lighting, but bright orange highlights with fluorescent green shading is also possible, if you want to fight the power.

In the toolbox, at the bottom, is an unassuming button. This is a boolean tool, presented as a kind of equation builder called "Object Join Control." Choose an action: Add, Subtract, And, or Stamp. Select the first object, then the second object, and name the resulting object; "Add" and "Subtract" operations read like little equations, and so on. With this tool we have what we need to sculpt complex, asymmetric figures. To paraphrase Michelangelo, the shape we want is already inside the primitive volume. We just have to remove the superfluous parts and set it free. If only I were so talented.

Like sculptors of yore, setting shapes free can take a lot of time. Click "OK" on a join operation and prepare to wait, depending on the number of faces involved. I put down a default sphere "3" and a default torus, shrank the torus a bit to intersect with the sphere's circumference, and requested a subtraction. Now I've done it. Even at top emulator speed in overdrive I've been waiting well over 20 minutes for the subtraction to complete. Did I kill it? Is this like in The Prisoner when #6 made the computer self-destruct?

Despite the computation time, there are very good reasons for performing booleans beyond the sculpting power they impart. Earlier I noted that there is a vertex/face count limit on the scene. Intersecting objects added together can reduce their vertex count by eliminating interior and unused faces. It also turns out there is a 40-object limit to our scenes. Adding two objects together reduces them to one, deleting the originals. Understand that there is no undo function.
Only a rigorous discipline of saving before performing such destructive functions will save you from yourself if you needed the original objects. This will become the mantra for this blog: "Save often, kids."

Boolean functions respect object colors, which makes for neat effects. Subtract a yellow sphere from a purple cube and the scoop taken out of the cube will be yellow while the rest of the cube stays purple. A pleasant surprise! The "Stamp" option tattoos the target surface with the shape and color of the second object. Stencil text onto surfaces (provided you have 3D text objects), add decorative filigree, generate multi-color objects, and so on. It kind of depends on how well you can master the stiff, limited extrude tools to generate surfaces worth stamping.

OK, this torus/sphere boolean operation still isn't done, so I'm chalking it up as, "This is a thing CAD-3D cannot do." While waiting for the numbers to crunch, I realized I could create the intended shape manually with a custom lathe. Only while experiencing the computational friction did the second method occur to me. That reminds me of something I've been thinking about since starting this site.

Working with retro-computing means choosing to accept the boundaries of a given machine or piece of software. Working within these boundaries takes real, conscious effort; nothing comes easy. Meanwhile, technology in 2025 is designed to make the journey from "I want something" to "I have something" instantaneous and frictionless. It is a monkey's paw fulfilling wishes, and like a monkey's paw, it can go wrong. Not just "turkey's a little dry" wrong, but "it obscures objective truths" wrong. The first is seen with (say it with me now) the spread of AI into everything. You want something? Prompt and get it. Our every whim or half-considered idea must be rewarded, nay PRAISED! We needn't even prompt, services will prompt on our behalf. Every search delivers what you asked for, even if it delivers lies to do so.
There are plenty of others more qualified to discuss the ramifications of AI on the arts, society, and our very minds. For this article I want to use it to illustrate what much of tech has become: an unchallenging baby's toy. A pacifier.

Another way "friction" is stigmatized as detrimental is, admittedly, a personal bias, but I know I'm not alone. UI density is typically considered "friction," and it's "bad" because a user may disengage from a piece of software. To keep engagement up, interfaces simplify, slowly conflating "user-friendly" with "childlike." The net result is a trend toward UIs with scant few pieces of real information distributed over vast plains of pure white or oversaturated swoops of color. UI/UX professionals like to call it "playful" or "delightful." I don't want to come off as a killjoy against "fun" user interfaces, but I'm an adult. I eat vegetables as well as candy. Where are the vegetables in the modern tech landscape? Where is the roughage which requires me to chew on its ideas?

The industry wants to eliminate friction, but without friction there can be no spark. "Spark" is what I felt struggling against a hyper-strict budget during my publishing days. I found it when examining the depth of Deluxe Paint in the animation controls. It is what I felt when I overcame the Y2K bug in Superbase. I felt it again just now as I realized the lathe solution while waiting for the boolean to finish. Each little struggle forced me to shift my frame of mind, which revealed new opportunities.

If my very first thought is brought to life instantly, with no artistic struggle (one hour of prompting is not a struggle), then why ever "waste time" thinking of second options? Or alternate directions? Even, heaven forbid, throwing ideas away? These common creative pathways are discouraged in a modern computing landscape. Put another way, I can't think of a time when my first idea was my best idea.
Given the protective bubble-wrap our software tends to wrap itself in, perhaps it will not surprise you that computer literacy and computational thinking scores have dropped in the US over the past five years. Some readers may be thinking, "In the Deluxe Paint article you picked on Adobe for airplane cockpit UIs. Isn't that the 'friction' you're describing?" That is complexity, which can cause a type of friction, true. But it is the friction of a rug-burn, not a spark.

Back into the program. I've only touched on it so far, but that "Superview" button is far more important than its obfuscated name suggests. That is the renderer. Double-click it for options, like rendering style and quality, including stereoscopic if you're lucky enough to own the Stereotek 3D glasses. Images, for example previous renders, can be brought in as a background for a new render. All drawing is restricted to the same 16-color palette though, so plan accordingly.

The basic wireframe renderer is quite interesting because it provides real-time interactive view manipulation, like a video game. That makes sense, because the algorithm that drives this view came from a video game. It was purchased by Hudson from Jez San, the creator of the real-time 3D graphics game Starglider. Even if you never touched Starglider, you know San's work today. He was a developer of the Super FX chip for Nintendo, which made Starfox on the SNES possible.

CAD-3D has one more significant trick up its sleeve. Using a simple, clever method of XOR compression, animations can be generated. Turn "ON" animation with the clapboard icon. Set up the scene and camera position for a frame. Capture a render with "Superview." Commit that frame to the sequence with the frame counter icon. Repeat until you're done. It's stop-motion animation, essentially. This is time consuming and requires some blind trust, as there is no previewing your work.
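The XOR trick is easy to demonstrate. Here is a minimal sketch of the principle (my own illustration, not CAD-3D's actual file format): store the first frame whole, then store each later frame as the XOR against its predecessor. Unchanged pixels XOR to zero, which compresses well, and playback simply applies the XOR again.

```python
def encode(frames):
    """First frame stored whole; later frames stored as XOR deltas."""
    deltas = [frames[0]]
    for prev, cur in zip(frames, frames[1:]):
        deltas.append(bytes(a ^ b for a, b in zip(prev, cur)))
    return deltas

def decode(deltas):
    """Playback: XOR each delta onto the previously shown frame."""
    frames = [deltas[0]]
    for d in deltas[1:]:
        frames.append(bytes(a ^ b for a, b in zip(frames[-1], d)))
    return frames
```

A mostly static scene produces deltas that are nearly all zero bytes, which is exactly where the savings come from on a machine with this little memory.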
Luckily, a more elegant, and far more complex, option comes bundled on disk in the form of an entire movie-making scripting language. I tried to understand it, but my utter lack of ability to make movies was exposed. I wanted to at least try the on-disk animation tutorial. Unfortunately, the program which can play back the XOR compression method is nowhere to be found. No disk image I could find contained it.

Regardless, the scope of what was being attempted with this product, and really the entire suite, is clear. ANTIC Software wanted your Atari ST to be nothing short of a full movie-production studio. If there were some way to calculate price per unit of coolness, an ST paired with CAD-3D might be quite high up that chart.

Put simply, no. On its own, CAD-3D lacks modeling and rendering tools which many would consider absolute basics in a modern workflow. Lighting control is restrictive, there are no cast shadows, no dithering to make up for the limited rendering palette, extrusion along splines isn't possible, views into the world are rigid and hard to work in, and basic object selection requires a clunky series of menus and button presses. A theoretical v3.0 could have been amazing.

But I must concede a few points here. First, this is really just one part of a larger package. It's the main part, but not the only part. The bundled scripting was expanded into Cyber Control and the bundled "super extruder" was expanded into Cyber Sculpt, for example. There was a VCR controller, genlock support, stereoscopic 3D glasses support, multiple paint programs, a sound controller, and more. Certain deficiencies are more than adequately compensated for if we take the full suite into account. Second, there's something to be said about the simple aesthetic of CAD-3D. There is absolutely a subset of people out there who just want to play with 3D like a toy, not a career.
I think the success of PicoCAD speaks to this; just look at the fun things people are creating in a 128x128, 16-color modeler in 2025. Third, working within limits is, paradoxically (and also well-acknowledged), creatively freeing. The human need to bend our tools beyond their design is a powerful force. We see it when a working Atari 2600 is built within Minecraft. We see it in the PicoCAD projects I linked to above. Full-screen, full-motion video on a TRS-80 Model III? Hey, why not? In that sense, I can feel a pull toward CAD-3D. I managed to model a none-too-shabby CX40 joystick, and I catch myself wondering now what more I could do. I started feeling a groove with its tools, so how far could I push it? How far could I push myself? I hope you'll understand my positive take on CAD-3D when I say, there is friction to be found here.

Ways to improve the experience, notable deficiencies, workarounds, and notes about incorporating the software into modern workflows (if possible).

As you may expect, enable the best version of the system you can. TOS 4.x seems to be incompatible with CAD-3D, so keep the OS side simple. In fact, it's better to crank up the virtual CPU speed than to fumble around with toggling on/off the warp mode. It's less fiddly and doesn't suffer from key repeat troubles.

My setup:
- Steem SSE 4.2.0 R3 64bit on Windows 11
- Emulating a MegaSTE, 4MB RAM
- TOS v.???? (it doesn't report its version number?)
- Stereo CAD-3D v2.02

Neither the application nor OS ever crashed on me even once. There's a lot to be said for that stability. Converting the 3D data into a format that could be brought into Blender, for example, seems to require a bespoke conversion tool. I've not found such a thing yet. .PI1 render files can be converted into .pngs thanks to XnConvert. I don't know of a way to convert animations, though a capture system like OBS Studio would work in a pinch. You could also render each frame out separately and stack them together into a movie file.
ImageMagick can take a folder of sequentially numbered images and stitch them together into a movie. Rendering quality: the engine can't do dithering, and the flat colors of the limited palette can visually flatten a render that isn't very carefully lit. Getting around object limitations means locking yourself out of scene design flexibility as you "Add" multiple objects together to collapse them into single objects and reduce the object count. The object selection process really hurts.
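For the ImageMagick stitching step mentioned above, a small helper can build the invocation. The `magick` CLI and its `-delay` option (in 1/100ths of a second) are real ImageMagick features; the filenames and output name here are just placeholders.

```python
def magick_cmd(frames, out="animation.gif", delay=8):
    """Build an ImageMagick invocation stitching numbered frames into a movie.

    Sorting the frame names keeps sequentially numbered renders in order.
    """
    return ["magick", "-delay", str(delay), *sorted(frames), out]

# e.g. subprocess.run(magick_cmd(glob.glob("renders/frame*.png")))
```

Handing the result to `subprocess.run` keeps the filenames safely quoted, which matters once emulator folders pick up spaces.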


VisiCalc on the Apple II

Unless Dan Fylstra had the world's largest vest pockets, Steve Jobs's story about "Dan Fylstra walked into my office and pulled a diskette from his vest pocket" to introduce the spreadsheet in 1977 is apocryphal. The punchline, that VisiCalc propelled the Apple II to its early success, is supported by the earnings calls. While VisiCalc remained exclusive to the Apple II, estimates say that 25% of all Apple II sales (at $10K a pop, in 2025 money) were solely for the purpose of running VisiCalc. An un-patented gift to the world, it would go on to be subsumed by the very industry it spawned.

What's surprising in looking at VisiCalc today is how much it got right, straight out of the gate. Dan Bricklin's clear product vision, combined with Bob Frankston's clean programming, produced a timeless, if clunky by modern standards, killer app. Here at Stone Tools, "clunky" does not equal "useless." I have copies of Spreadsheet Applications for Animal Nutrition and Feeding and VisiCalc and I'm ready to ration protein to my swine.

First, Happy Spreadsheet Day for those who practice. Did you buy a big sheet-cake to celebrate? Wait a second. Spreadsheet. Sheet-cake. Spreadsheet-cake. Did I just invent something new?!

To understand VisiCalc's legacy, I'm working through the tutorial that shipped with the software. I think it's important to look at how the software pitched itself to customers. Then, I will examine Spreadsheet Applications for Animal Nutrition and Feeding by Ronald J. Lane and Tim L. Cross. I want perspective on how it can be used to assist business owners of all types, not just the white-collar office executives depicted in the advertising.

Booting into VisiCalc in AppleWin from the Windows 11 desktop is fast and frictionless, though I do have to answer at every launch, "Do you want to use 80 columns (Y/N)?" Of course the answer is YES, but until AppleWin supports the 80-column Videx card, I must reluctantly answer NO.
SuperCalc, a contemporary rival, does run in 80 columns. That's if I can boot SuperCalc. AppleWin complains that it needs 128K of RAM, which is supposedly what it has. For now, I'll leave that mystery to the AppleWin developers. (P.S. - the trouble I had at the time of writing has since been resolved at the time of publishing.)

Once launched, I'm at a screen layout recognizable even to a generation born post-Google; info bar across the top, input bar below it, then the spreadsheet proper. Alphabetic column identifiers run horizontally and numeric row identifiers run vertically down the left-hand side of the screen, inscribing the familiar "A1" system. A kindergartner who knows their numbers and ABCs could find any given cell.

Type a slash and what first appears to be "the entire alphabet" pops up at the top of the screen. Sometimes referred to as the "slash menu," it remained a fixed and expected option in spreadsheets for many years after its introduction, and still exists! It's cryptic at first, but once you know, you know. Further submenus follow a common logic and tend to be similarly mnemonically simple to remember. The formatting menu can set a cell to "dollars and cents." The window menu splits the screen vertically at the cursor position. The insert command adds a new column. Destructive options, like clearing the entire sheet, are always behind a safety prompt. More complex menu options, like cell replication, step through their usage, decision by decision, to perform the action exactingly. It's far more user friendly than one may expect of the time, though an online help system would still be appreciated.

As a time traveler from 2025 visiting 1978, there are absolutely mental adjustments needed. Let's start with the below screenshot: the info bar packs three discrete pieces of unrelated information into one line. As the sheet grows, VisiCalc dynamically allocates RAM to accommodate, and the free memory indicator drops accordingly. A flashing memory indicator means your sheet has outgrown available RAM.
While VisiCalc dynamically allocates RAM, it does not dynamically de-allocate it. Save-and-reload will force the sheet into the smallest memory footprint necessary to run it. Still not enough memory? Quick, start a new spreadsheet and see if you can afford a computer upgrade!

As I noted earlier, it's important to view the software through contemporaneous training material, though it is also impossible to forget what I've learned about spreadsheets since 1978. Gilligan's Island said a bamboo pole to the head can temporarily erase memories, but it hasn't worked yet.

VisiCalc's tutorial has to pull triple-duty. Being the first computerized spreadsheet, it has to help us understand, "What is a computerized spreadsheet?" Then it has to prove, by example, the benefits over traditional pencil and paper methods. Lastly, with foundational knowledge set, it must introduce us to the extended set of tools and purposes thereof. The manual pulls this hat-trick off admirably. The quality of the manual was foremost in the publisher's mind from the start. Personal Software co-founder Peter Jennings, creator of MicroChess, recounts the approach taken toward documentation.

The tutorial is divided into four sections, each of which builds upon the previous. It offers gentle guidance into the world of computers and spreadsheets, carefully navigating the reader through the unfamiliar interface and keyboard commands. One thing I'm finding as I continue the tutorial each day is how easy it is to recall what I learned in previous days. Even after two days away on other matters, I still find previous knowledge to be "sticky." That owes a lot to the intuitiveness of the menu commands; every tool feels logical and carefully considered for the task. Additionally, there is a kind of rudimentary "autocomplete" for functions. Type the start of a function name and VisiCalc will fill in the rest.
You won't get as-you-type autocomplete, but VisiCalc will make a best effort to give you the correct function name even if the best you can do is half-remember it.

The general usage of the program is to move the cursor around on screen and start typing into the highlighted cell. Type text to create a "label" and type numbers to create a "value." This binary distinction essentially persists to this day, even if formatting control over the cells gives us greater flexibility, or "values" have been sub-typed to be more precise in their intent. Values are not restricted to simple numbers or math equations. Built-in functions exist which can be chained and nested with other mathematical functions to create very complex formulas. Functions update themselves based on the latest sheet calculation, and this dynamism obsoleted paper methods instantly. It still feels a little magical to see dozens or hundreds of numbers update in a cascade across a large sheet, each cell dutifully contributing its small part to the overall whole.

"Replication" is one of the most powerful tools in VisiCalc. Cell formulas can be replicated, i.e. copy-pasted, from one-or-a-group of cells to one-or-a-group of cells. When replicating, we are given a chance to shape how each copy of the formula references other cells. Source: Cell A3 adds A1 + A2. Replicate A3 to B3 and we're asked, for each cell reference, if we want it to be relative to the new cell or fixed ("no change" in VisiCalc parlance). We'll answer relative for the first and fixed for the second. Target: Cell B3 now adds B1 (relative) + A2 (fixed). This functionality remains essentially unchanged today. In Excel, click a formula cell and a small dot appears in the bottom right corner. Drag that dot out to marquee a rectangle of cells, inside of which every cell instantly and automatically receives a relative-only copy of the source cell's formula. In VisiCalc I'm prompted for a "relative or fixed?"
decision for every cell reference in every target cell. Replicate a formula with 5 cell references across a column of 100 cells and be ready to answer 5 x 100 prompts. Unfortunate and unavoidable.

Once I have even a mildly complex sheet, perhaps one that includes processing transcendental functions, making changes becomes time consuming, as I have to wait for the entire sheet to update with every change I make to any cell. Setting global recalculation to "manual" turns off the sheet's automatic recalculations with every cell change. When I want to, I can explicitly demand a full recalculation of the sheet with a cute one-keystroke command. This saves a lot of time when making multiple sheet changes, or even setting things up for the first time.

You know what else saves a lot of time? Setting the emulator to run at the fastest speed possible. It's so fast I wasn't even sure it had done anything. It instantly transforms the system from "fun, if slow" to "I could be productive with this." This is not to take away from the simple joy of watching the sheet work, but sometimes enough is enough.

There's an interesting phenomenon in culture where the ideas and language of a particular work of art become so utterly commonplace it can be hard to appreciate the original for what it was at the time. Kind of a Citizen Kane effect. In a sense, using VisiCalc in 2025 feels so familiar it's almost anticlimactic. It's a little hard to remember, but there was a time when this kind of direct manipulation of data simply wasn't commonplace. The display and input of data were often highly separated, for many good system performance reasons. Bricklin and Frankston understood that the real power of their system would be unleashed if it paralleled closely the system it proposed to replace. I think the genius of VisiCalc's design is that mundanity is a feature, not a bug.
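The replication rule described above boils down to something simple, sketched here with single-letter columns for brevity (the function names are mine, not VisiCalc's): each reference answered "relative" shifts by the copy's offset, while a "no change" reference is copied verbatim.

```python
import re

CELL = re.compile(r"([A-Z])([0-9]+)")  # single-letter columns, for brevity

def shift(ref, dcol, drow):
    """Shift a cell reference like 'A1' by a column/row offset."""
    m = CELL.fullmatch(ref)
    return chr(ord(m.group(1)) + dcol) + str(int(m.group(2)) + drow)

def replicate(formula, relative_flags, dcol, drow):
    """Copy a formula, shifting only the references flagged relative.

    relative_flags[i] answers VisiCalc's per-reference prompt for the
    i-th reference: True for "relative," False for "no change."
    """
    flags = iter(relative_flags)
    def repl(m):
        return shift(m.group(0), dcol, drow) if next(flags) else m.group(0)
    return CELL.sub(repl, formula)
```

Replaying the article's example, copying A3's formula one column right with answers "relative, no change" turns `A1+A2` into `B1+A2`.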
In the product timeline I highlighted a few of the notable competitors that arrived shortly after VisiCalc redefined what could be done with a home computer. That is but a small fraction of the "VisiClones" that joined the fray. Here's a taste. Type-in programs proliferated, giving people a simple way to sample the spreadsheet world without undue financial investment. Boxed software spread like crazy, each hoping to capture some small slice of a rapidly growing financial pie: PerfectCalc, CalcStar, AceCalc, DynaCalc, Memocalc, CalcNow, CalcResult, The Spreadsheet, OmniCalc, and so on. These were often quite shameless in their copying of VisiCalc 's layout and usage. Thanks to a very simple file format, as well as support for DIF (Data Interchange Format), it is trivial to open VisiCalc sheets in a clone and continue working, even using the exact same keyboard commands. (Lest Apple feel left out, so too did the Apple 2 have its clones. About 500 of them. ) Applications for Animal Nutrition and Feeding by Ronald J. Lane and Tim L. Cross is an interesting peek into a world I know nothing about, despite having lived on a farm in my youth. After introducing the reader to the concept of spreadsheets, the book goes on to describe all commands and functionality twice: once for VisiCalc and once for SuperCalc . These two were chosen because, "They are very common among agricultural microcomputer users," which is an interesting cultural note. I really enjoy seeing software terminology couched in terms of an agricultural target audience. Here, the concept of boolean logic functions is introduced. The book gets further points because the writers do something I think is all too infrequent in guides of this nature. Typically there is an exclusive focus on vagaries like "getting the most out of VisiCalc ." In this book, each chapter of real-world application begins with a section called "Define and Understand the Problem."
For the chapter "Swine Applications" a full TEN PAGES are devoted to "Define and Understand the Problem." VisiCalc isn't even touched until that's been done, as formulas are constructed and tested by hand well before any cells are defined in a sheet. Even then, the very first thing created is in-sheet documentation, including tips for legible formatting. Yes, a thousand times yes, to that approach. Teach us to fish ! One passage in particular struck me. Please bear with me, I swear I have a point. From the book, "The object in formulating a ration for any farm animal is to supply sufficient amounts of the nutrients that will enable the animal to satisfy its needs for a specific function or functions. For example, if we are feeding a first-calf heifer six months after calving, we must be concerned with the productive functions of lactation, growth, gestation, and possibly extra work if extreme distances must be traveled daily. The actual process of ration formulation may require up to four sets of information." I know the suspense is killing you. The four sets of information are: nutrient requirements, feedstuff composition, actual feed intake, and economic considerations (least-cost ration formulation). Consider now how VisiCalc nimbly adapts to such a specific and esoteric use case. I can't imagine Bricklin and Frankston ever once thought, "Gosh, we need to ensure that users calculating post-calving heifer nutrition are adequately covered!" What they did right was to stick to a clear vision and not let presumptions of usage cloud their development. As noted earlier, mundanity is an intentional feature of the program's design. We can really understand that in the above passage and the flexibility this gives VisiCalc to rise to this agricultural challenge. I may even go so far as to posit that a piece of software passes the "timeless" threshold if it can be used for hog slop protein content as well as Q2 financials at a Wall Street equity firm.
This may be the start of a list of Stone Tools Maxims. VisiCalc can absolutely be productive in 2025, unless you're heavy into graphing. It just can't help you with that at all (though add-ons were released later). I had a lot of fun learning its tools, exploring its capabilities, and seeing it do real-world work. Even in an emulator it felt performant and frictionless. I cloned it for a reason ; it's a good, solid piece of useful software. Despite losing the throne, every modern spreadsheet is, at the foundation, still VisiCalc no matter how much UI chrome has been applied. Check out the below list of features today which started with VisiCalc and you'll understand. The sparkle of today's UI may dazzle, but it's VisiCalc providing the shine.

Getting VisiCalc data into Excel , while keeping formulas intact , is troublesome. Each step of the conversion process requires a different tool, and you may be well-served without going the full distance to Excel . I looked into the published .xlsx XML format and it certainly seems possible to write a direct .vc -> .xlsx conversion utility. The XML specification document is over 400 pages long, but an introspection of a barebones .xlsx file (one with text in A1 and a single digit in A2) appears to contain mostly boilerplate.

It has its charms, and if you're willing to keep your data inside your Apple 2's private world, you could make good use of its tools. It's just fun! The primary factors that might dissuade one from doing anything important in VisiCalc on the Apple 2 in 2025 are:

- AppleWin 1.30.21.0 on Windows 11
- "Enhanced Apple //e" (128K)
- "Use Authentic Machine Speed"
- "Enhanced disk access speed (all drives)"
- VisiCalc VC-208B0-AP2
- 40-column display (80-column won't start)

indicates the calculation direction. is for so VisiCalc will evaluate cell formulas beginning with A1, work its way down to , move to B1, and so on. The alternative is for which steps through horizontally.
If later cell functions reference earlier cell values, and those were calculated out of order, you can wind up with wrong calculations or even errors. Plan your sheet accordingly or run the calculations twice to catch misses.

indicates cursor direction, horizontal or vertical. The Apple 2 only has left and right arrow keys, no up and down (those were introduced on later models). toggles direction allowing two arrows to perform the work of four. Clever, but annoying. Using to "go to" specific cells immediately is often preferred.

is a true remnant of a simpler time, when RAM was so limited we had to quite literally watch usage while working. VisiCalc is showing us how many free kilobytes of memory are available to use.

- A1 notation
- Start a formula with
- Visual representation separate from calculated representation
- Resizeable column widths
- Direct cell manipulation
- @ notation for functions ( Excel supports this hidden feature)
- Entry bar. There, between the column headers and menu bar, what do you see?
- LOOKUP tables. One of my favorite functions in VisiCalc, it cleverly works around certain deficiencies by providing on-the-fly swap outs of data representations.
- Boolean logic. Even today , "The IF function is one of the most popular functions in Excel."

Green (for some reason!)

AppleWin seems well-behaved at 1x, 2x, and Fastest processing modes. I enjoy the speed halfway between 2x and Fastest for the fun of watching things process without also feeling my limited time on earth slipping away. Fastest would probably be most people's preferred mode; transcendental function graphing is "blink and you'll miss it" quick.

Other emulation options

Dan Bricklin received permission to distribute VisiCalc for DOS . I've been testing it a little under DOSBox-X and it's working great. You get full arrow-key support, a lot more control over your virtual processor speed, and files are saved directly to your host operating system drive; no need to "extract" the data.
The above DOS version can be run straight in your web browser , if you just want to play around. microm8 is available for all platforms and has a neat "voxel rendering" trick to breathe funky new life into trusty old software. It also gives direct access to a huge online library of software, so finding disk images is basically a solved problem. Apple ][js is an online javascript emulator which allows you to load disk images from your system, as well as to save data back to your local file system. It is the only emulator I've tried so far that runs VisiCalc in 80-column mode, though it can't draw inverse characters. This makes it almost impossible to use; you can watch the upper left to see where your cursor is in the sheet, I suppose. Virtual ][ for macOS is a very nice emulator which even includes a very cool virtual dot matrix printer emulation as PDF output. An Apple //e physically has all four arrow keys and it seems like they should work, as in SuperCalc . Unfortunately, VisiCalc ignores the extra hardware. I could not find a way to get 80-column display in an emulated Apple //e for VisiCalc , though SuperCalc does work in this mode on the same system.

1. Get the file out of the Apple 2. CiderPress2 does this easily. You can also "Print" your VisiCalc document structure to the Apple 2 emulator "printer." This will give you a text file whose contents are identical to what you'll find inside the .vc file on the Apple 2 virtual floppy image.
2. Install Lotus 1-2-3 for DOS using DOSBox-X; specifically the "Translate" tool.
3. Translate the file from VisiCalc format to a Lotus 1-2-3 v2 .wk1 file.
4. Use LibreOffice 25.8 to open the converted Lotus 1-2-3 .wk1 file. The layout might not be beautiful, but formulas appear to convert properly.
5. "Export" the sheet as Excel 2003 .xls.
6. Excel on the web cannot open 2003 files, but Google Sheets can. Kind of. Sort of. Well, it makes an attempt. Cell references seem to be shifted by one or two, and junk data is inserted at the beginning of each formula. Fix your formulas.
7. "Download" as a Microsoft Excel .xlsx file.
8. Open the .xlsx file in Excel .

Was that so hard? 🤷‍♂️

I keep saying it, but the 40-column display can feel cramped. The finger-gymnastics for missing keys grows tiresome. Graphing is woeful, almost non-existent. Getting your sheet into a modern app doesn't work, in any practical sense. There is a numeric precision limit of 11 significant digits. More than precise enough for many, not nearly precise enough for some.

Stone Tools 4 months ago

Superbase on the Commodore 64

When it comes to databases, I've never been much more than a dabbler. I remember helping dad with PFS:File so he could do mail merge. I remember address books and recipe filers. I once tried committing my comic book collection to ClarisWorks . Regardless of the actual efficacy of those endeavors, working with database management systems never stopped feeling important. I was "getting work done," howsoever illusory it may have been. These days, the average consumer probably shies away from any kind of hardcore database software. Purpose-built apps which manage specific data (address books, invoicing software) do most of our heavy lifting, and basic spreadsheets ( Google Sheets , Notion , Airtable ) tend to fill in the remaining niche gaps. The industry was hell-bent on transforming rapidly improving home computers into productivity powerhouses and database software promised to unlock a chunk of that power. Superbase on the Commodore 64 was itself put to work in forensic medicine in England and to help catch burglars in Florida . Maybe it can help me keep track of who borrowed my VHS copy of Gremlins 2: The New Batch.

The manual has a three-part tutorial, the first two parts of which have an audio component (ripped from cassette tapes). I will absolutely use it for an authentic learning experience. I'm looking forward to some pre-YouTube tutorial content, "What's up everyone, it's ya boy Peter comin' atchu with another Superbase tutorial. If you're enjoying these audio tapes, drop a like on our answering machine and subscribe to AHOY! Magazine. " From first boot, I feel the pain. After the almost instantaneous launching of trs80gp into Electric Pencil last blog, getting Superbase launched in VICE is annoyingly slow. I appreciate a pedantic pursuit of accuracy as much as anyone, but two full minutes to load Superbase is ridiculous, for my 2025 interests. Luckily VICE has a "WARP" mode which runs some 1500% faster, bringing boot time to under 10 seconds.
A C64 one could only dream of is a keyboard stroke away, to enable or dismiss on a whim. How spoiled we are! Here I am, a businessman of 1983, knitted tie looking sharp with my mullet, ready to thrust my 70s HVAC business into the neon-soaked future of 80s information technology. (The company must pivot or die !) First things first, “What is a database?” I wonder, sipping a New York Seltzer. According to the very slow audio tutorial, "It's an electronic filing cabinet!" So far, so good. "And just as in an ordinary filing cabinet, information is stored in batches called 'files', and you can think of Superbase as an office containing a number of electronic filing cabinets." OK, so if Superbase is my office, and my office currently contains seven filing cabinets with 150 files per, I’ll make seven databases to hold my information? "Superbase will allow you to hold up to 15 files in each database." OK, I'm not sure I heard that correctly. Rather than having seven cabinets with 150 files each, I instead have 70 cabinets with 15 files each? Is this the " office of the future ?" Come to think of it, are we even using the same definition of the word "file?" When I ask Marlene to bring me "the Doogan file" I receive a file folder filled with Doogan-related stuff: one client, one file. "Each of the files is made of bits of information known as RECORDS. For example, you may have a file containing names of companies. In that case each company name would be one RECORD." A file which contains only the names of companies? Now I'm learning that records are made of FIELDS. But we were just told that a RECORD is "a bit of information" like a company name. This filing cabinet metaphor is falling apart and I'm only five minutes into a 60-minute tutorial. Not only did society have to learn how to create new tools for moving into the information age, we also had to learn how to teach one another how to use those tools. In Superbase's case, I find the manual mostly OK.
It offers a glossary, sample code, and a robust rundown of each menu and command. What's missing here is an explanation of the mental shift required in moving from analog to digital files. Where a traditional filing cabinet is organized by relation, our C64 will discover relations (though this is not a relational database); a kind of inversion of the physical filing cabinet strategy. Without my 2025 understanding of such things, I would be completely lost right now about how Superbase and databases work. At any rate, working through the tutorial, I do find the operation of the software quite simple so far. Place the cursor where you want to add a field name or field input area and start typing. and set the start and end points of a field, which doubles as a visual way to set the length of that field. The field's reference name is only ever the word to the immediate left of the field entry area. Simple, if inflexible. Setting field types is also easy enough, even if the purpose and usage of the "key" field is never made explicitly clear. It is only ever described as being the field that records will be sorted on by default. Guidance on choosing an appropriate key field and how to format it is essentially nonexistent. Querying records is straightforward, though there is definitely a learning curve. Partials, wildcards, absence of a value, value sets and ranges, and comparatives (values <100, for example) are all possible and chainable. The syntax is relatively clear, even if conventions ( is the wildcard token) have subtly changed. I've now built something like a phone book and entered some sample data. This usage of the database matches my mental model of the object being replaced and I'm feeling somewhat confident. But this is also something I could have built with a type-in BASIC program from Popular Computing Weekly .
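As a rough illustration of those query styles (this only mimics the matching behavior; Superbase's own tokens and syntax differ, and the record data here is invented):

```python
# Illustrating partial/wildcard matches, a comparative, and a value range,
# the query styles described above. Not Superbase syntax; sample data is mine.
import fnmatch

records = [
    {"surname": "Doogan", "balance": 250},
    {"surname": "Dunbar", "balance": 75},
    {"surname": "Marlene", "balance": 120},
]

# Partial match with a wildcard: surnames starting with "D"
partial = [r["surname"] for r in records
           if fnmatch.fnmatch(r["surname"], "D*")]

# Comparative: balances under 100
cheap = [r["surname"] for r in records if r["balance"] < 100]

# Range: balances between 100 and 300 inclusive
mid = [r["surname"] for r in records if 100 <= r["balance"] <= 300]

print(partial)   # ['Doogan', 'Dunbar']
print(cheap)     # ['Dunbar']
print(mid)       # ['Doogan', 'Marlene']
```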
If I put myself in the mindset of someone reading a contemporary book like Business Systems on the Commodore 64 by Susan Curran and Margaret Norman , it is quite unclear how my filing cabinet data and organizational structure translates to floppy disk. With floppy drives, a printer, and more I have spent almost $5000 (in 2025 money) on this system. For that outlay of cash, am I really asking too much for someone to help guide me into a "paperless office?" Speaking of which. George Pake of Xerox PARC (yes, that Xerox PARC ) gave an interview to Businessweek in June 1975 in which he spoke of his vision for a "paperless office." The later spread of that concept into larger circles seems to owe a lot to F.W. Lancaster. In 1978, Lancaster published Toward Paperless Information Systems and spent a full chapter contemplating what a paperless research lab might look like in the year 2000. Lancaster's vision paralleled a fair amount of what we know today as the internet. To readers of the time it was all brand new conceptually, so he spent a lot of time explaining concepts like "keeping a journal on the computer" and how databases could just as easily be located 5000 miles away as 5 feet away. He couldn't quite envision high resolution video displays, and expected graphic data to remain in microfilm/fiche. He could envision "pay as you go" for data access, however. It should be noted that the phrase "paperless office" does not appear in Lancaster's book (it does in his previous book). That phrase had already started an upward trend since before the Pake interview, but in my research it does seem that Lancaster really helped mainstream the concept. Lancaster identified three main functions of computer use in a paperless office. Especially in the 80s, transmit and receive were a long way from being cheap and ubiquitous enough to replace paper between two parties. That sounds obvious, but hype around the "paperless office" made it easy to overlook such flaws. 
Besides, wasn't it a matter of time before the flaws were resolved? Wasn't everyone working toward the same paperless vision? Well that's hard to say, given the slightly mixed messaging of the time. 1983's The Work Revolution by Gail Garfield Schwartz PhD and William Neikirk says explicitly, "we are at the brink of the paperless office." 1982's The Word Processing Handbook by Russell Allen Stultz cautions us, "The notion of a 'paperless office' is just that, a notion." But May 1983's Compute Magazine keeps the dream alive with a multi-page article, "VICSTATION: A Paperless Office" as though it had already arrived and was waiting for you to catch up. Computer magazines and academic investigations were typically cold on the idea of the "paperless office" ever coming to fruition. Rather they saw (quite correctly) that if everyone had simple, easy-to-use publishing tools at their fingertips paper usage would increase . The mainstream, ever one to latch onto a snappy catch phrase, really did seem to push the idea to the masses as an inevitability . A CEO in 1983 really couldn't be blamed for buying into the hype. To not have bought into it would have felt tantamount to corporate negligence. I asked ChatGPT for a modern parallel and all it said was, "Time is a flat circle." Building out anything more advanced than the most rudimentary of rolodexes required a lot of patience and forbidden knowledge. As noted earlier, the manual only gets you so far. There was a decent stream of books published during the early 80s which tried to fill various knowledge gaps. Some would tackle general "using your computer for business" while others would target specific software + hardware combinations. Database Management for the Apple from 1983, the release year for Superbase , has some great illustrations and explanations about databases and how they work conceptually. It digs into how to mentally adjust your thinking from manual filing to electronic filing. 
It also includes fully commented source code in BASIC for an entire database program. A bargain for $12.95 ($40 in 2025), but probably ignored by C64 Superbase users? Unfortunately for us in 1983, the book we Superbase users desperately need won't be published for three more years. Superbase: The Book , by Dr. Bruce Hunt, was published in 1986 by Precision Software Ltd, the very makers of Superbase itself, for $15.95 ($47 in 2025). It straight up acknowledges the lack of help over the years in making the most of Superbase . "Part I: Setting Up a System" addresses almost every single thing I complained about in the tutorial. It contains a mea culpa for failing to help users build anything beyond the most rudimentary of address books. It then moves into "the most important discussion in the book." A conceptual framework for thinking about your existing files, and how to translate them into data that leverages Superbase's power, is well explained with concrete examples. As well, it works diligently to show you that the way files and fields were set up in the tutorials that shipped with Superbase was woefully inadequate for making good use of Superbase . We learned it by watching you! As an example, what were just "firstname" and "lastname" fields in the tutorial are considered here more thoroughly. We are given a proper mental context for why a name is more complex than it first looks. As data , it is better broken into at least five fields: title, initials, first name, surname, suffix. Heck, I'd throw "middle" in the mix as well. Then Dr. Hunt explains what is actually a very powerful idea: record fields don't have to exist exclusively for human-readable output purposes. That is true, and almost counter to the shallow ways fields are treated in the manual, which only ever seemed to consider field data as output to the screen or a printer. "The crucial realization is that you don't need to restrict the fields in the record to the ones that will be printed."
Many examples of private data that you might want to attach to a customer record (for example) are given, as well as ways to use fields solely for the purpose of increasing the flexibility of Superbase's query tools. Lastly, in what felt like the book had thoroughly invaded my mind and read my thoughts directly, an entire section is devoted to understanding key values, how they work, and ideas for generating robust, flexible keys. The remainder of the book continues on in the same fashion, providing straightforward explanations and solutions to common user issues and confusions. It's a solid B+ effort, even if the Apple database book feels more friendly and carefully designed. I'd give this book an A had Precision Software not made its customers wait three years for it. Here in 2025, the further into the tutorial I delve, the more the word "deal-breaker" comes up. I'll start with the format of the "Date" field type, and maybe you can spot the problem? We can enter the date in two ways: means a two-digit year and ONLY two-digits. This restricts our range of possible years to 1900 - 1999. That's right, returning after a 30 year absence: it's the Y2K problem ! Not only does this prevent us from bringing Superbase into the future, but we also cannot log even the recent (relative to 1983) historical past. I had a great-grandmother alive at that time who was born in the late 1800s, yet Superbase cannot calculate her age. Moving on, a feature I enjoy in modern databases (or at least more sophisticated than Superbase ) is input validation. Being able to standardize certain field data against a master file, to ensure data consistency, would be really nice. It's also a bit of a drag that a record's key value can only ever be a text string, even if you only use numbers. The manual gives a specific workaround for this issue which is to pad a number string with leading zeros. This basically equates to no auto-increment for you. 
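The zero-padding workaround exists because text keys sort character by character, so "10" lands before "2". A quick Python demonstration of the problem and the manual's fix:

```python
# Text keys compare character by character, which breaks numeric order.
keys = ["2", "10", "1"]
print(sorted(keys))                  # ['1', '10', '2'] -- wrong numeric order

# The manual's workaround: pad the number string with leading zeros
# so lexicographic order matches numeric order.
padded = [k.zfill(4) for k in keys]
print(sorted(padded))                # ['0001', '0002', '0010']
```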
Something I very much appreciate is that the entire program can be run strictly through textual commands; no F-keys or menus necessary. In fact, I dare say the menus hide the true power of the system, functioning as a "beginner's mode" where the user is expected to graduate to command-line "expert mode" later. Personally, I say just jump straight into expert mode. We can use a convention in a command to read and write values from records. BASIC-style variables can store those values for further processing inside longer, complex commands. As a developer, I'm happy. As a non-developer, this would be an utter brick wall of complexity for which I'd probably hire an expert to help me build a bespoke database solution. "Batch" is similar to "Calc" (itself a free-form or record-specific calculator) which works across a set of records. We can perform a query, store the result as a "list," then "Batch" perform actions or calculations on every record in that list. Very useful, but it comes with a note. "Takes a while" is just south of an outright lie. I must remember that this represents many users' first transition to electronic file management. Anything faster than doing work by hand had already paid for itself; that's true even today. That said, consider this. I ran "Batch" on eight (8!) records to read a specific numeric field, reduce that value by 10%, then write that new, lower value back into each record. Now, further consider that a C64 floppy can hold about 500 records, which seems like a perfectly reasonable amount of data for a business to want to process. ONE AND A QUARTER HOURS! Look, I know it was magical to type a command, hit a button, and have tedious work done while you took a long lunch. I once tasked a Macintosh to a 48-hour render in Infini-D . Here in 2025, I'm balking even at the 6 minute best case scenario in VICE. 
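In modern terms, the query-then-Batch pattern is just "filter the records, then map one calculation over the matches." A small Python sketch of the 10%-reduction run described above (the record and field names are invented for the example):

```python
# The Superbase "Batch" pattern: run a query to build a "list" of records,
# then apply one calculation to every record in that list.
# Record shape and field names here are made up for illustration.
records = [{"item": f"widget-{i}", "price": 100 + i} for i in range(8)]

matches = [r for r in records if r["price"] >= 100]   # the query -> "list"
for r in matches:                                     # the "Batch" pass
    r["price"] = round(r["price"] * 0.9, 2)           # reduce price by 10%

print(records[0]["price"])   # 90.0
```

On a modern machine this runs instantly over eight records (or eight thousand); the C64 needed to fetch and rewrite each record from floppy, which is where the hours went.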
On real hardware, we must also heed the advice from the book Business Systems on the Commodore 64 : In fairness, most of the things I'd want to do are simple lookups and record updates from time to time. Were I stuck on 1982 hardware, it would be possible to mitigate the slow processing by working processing-time into my weekly work schedule. I wouldn't necessarily be "happy" about that situation, and may even start to question my investment if that were the end of the features. Luckily, Superbase offers a killer feature which offsets the speed issue: programmability. The commands we've been using so far are in reality one-line BASIC programs, and more complex, proper programs can be authored in the "Prog" menu. We are now unbound, limited only by our knowledge of BASIC (so I'm quite limited) to extend the program, and work around the "deal-breakers" I encountered earlier. Not every standard BASIC command is available (we can't do graphics, for example), but 40 of the heavy hitters are here plus 50 Superbase- specific additions . I don't want to sound naive, but I was shocked at the depth and robustness, yes even the inclusion of its programming language. It's far more forward thinking than I expected for $99 on a 64K machine. But I also cannot credit the manual with giving too much help with these functions. It's quite bare-bones. After all is said and done, the simple form building and robust search tools have won me over, but the limitations are frustrating. Whether I could make this any kind of a daily driver depends on what I can make of the programmability. It's asking a lot of me to become proficient in BASIC here in 2025. But the journey is its own reward. I press onward. Initially I thought I would build a database of productivity software for the Commodore 64, inspired by Lemon64 . The truth is, after my training to-date I am still a fair distance from accomplishing that, though I can visualize a path to success. 
There are two main issues I need to solve within the confines of Superbase's tools and limitations. Doing so will give more confidence that it is still useful for projects of humble sizes. Thinking of a Lemon64-alike, to constrain the software "genre" field (for example), I need a master list against which to validate my input. Superbase has some interesting commands that appear to do cross-file lookups: The code examples are not particularly instructive, at least not for what I want to do. The linking feature needs a lot more careful attention and practice to leverage. Rethinking my approach to the problem of data conformity, I have come to realize that the answer was right in front of me. All I really need is the humble checkbox. There is no such UI element on a machine which pre-dates the Macintosh and has no GUI operating system, but I can mimic one with a list of genre field names each of a single-character field length. Type anything into a corresponding field to designate that genre. When doing a query for genre, I can search for records whose matching field is "not empty." Faking it is A-OK in my book. Without a working date solution, my options for using Superbase in 2025 are restricted. I can either only track things from the 20th century, or only track things that don't need dates. Neither is ideal. Working on UNIX-based systems professionally all day long, I think it would be nice to get this C64 on board the "epoch time" train. Date representation as a sequential integer feels like a good solution. It would allow me to do easy chronological sorting, do calendar math trivially, and standardize my data with the modern world. However, the C64's signed integers don't have the numeric precision to handle epoch time's per-second precision . A "big numbers" solution could overcome this, but that is a heavy way just to track the year 2000. If I limit myself to per-day precision (ignoring timezones, ahem ), that would cover me from 1970 - 2059. Not bad!
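That 1970 - 2059 window follows from the C64's 16-bit signed integers, which top out at 32,767. A quick cross-check with Python's datetime (my verification, not from the original write-up):

```python
# Commodore BASIC integer variables are 16-bit signed (max 32767).
# Counting whole days from 1970-01-01, the largest representable
# day number lands in the year 2059.
from datetime import date, timedelta

last_day = date(1970, 1, 1) + timedelta(days=32767)
print(last_day.year)    # 2059
```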
I poked around looking for pre-existing BASIC solutions to the Y2K problem and came up empty-handed. Hopping into Pico-8 (my programming sketchpad of choice) I roughed out my idea as a proof of concept. Then, after many "How do I fill an array with data in BASIC?" simpleton questions answered by blogs, forum posts, and wikis I converted my Lua into a couple of BASIC routines which do successfully generate an epoch date from YYYY and back again. Y2K solved!

Snippet from my date <-> epoch converter routines; now it's 2059's Chris's problem.

1 REM human yyyy mm dd to epoch day format
5 REM set up our globals and arrays
10 y=2025:m=8:d=29
11 isleap=0:yd=0:ep=0
15 dim dc%(12)
16 for i=1 to 12
17 read dc%(i)
18 next
99 REM this is the program proper, just a sequence of subroutines
100 gosub 1000
200 gosub 2000
300 gosub 3000
400 print "epoch: ";ep
900 end
999 REM is the current year (y) a leap year or not? 0=yes, 1=no
1000 if y-(int(y/4)*4)>0 then leap=1:goto 1250
1050 leap=0
1100 if y-(int(y/100)*100)>0 then goto 1250
1150 leap=1
1200 if y-(int(y/400)*400)=0 then leap=0
1250 isleap=leap
1300 return
1999 REM calculate number of days that have passed in the current year
2000 yd=dc%(m)
2010 yd=yd+d
2019 REM the leap day only counts once we are past february
2020 if isleap=0 and m>2 then yd=yd+1
2030 return
2999 REM the epoch calculation, includes leap year adjustments
3000 ty=y-1900
3010 p1=int((ty-70)*365)
3020 p2=int((ty-69)/4)
3030 p3=int((ty-1)/100)
3040 p4=int((ty+299)/400)
3050 ep=yd+p1+p2-p3+p4-1
3060 return
4999 REM days passed tally for subroutine at 2000
5000 data 0,31,59,90,120,151
5001 data 181,212,243,273,304,334

--------------------------------------------------------------------------------

5 REM epoch date back to human readable format
10 y=0:m=0:d=0
11 isleap=0:yd=0:ep=20329
15 dim md%(12)
16 for i=1 to 12
17 read md%(i)
18 next
100 gosub 2000
200 print y, m, d
900 end
999 REM is the current year (y) a leap year or not? 0=yes, 1=no
1000 if y-(int(y/4)*4)>0 then leap=1:goto 1250
1050 leap=0
1100 if y-(int(y/100)*100)>0 then goto 1250
1150 leap=1
1200 if y-(int(y/400)*400)=0 then leap=0
1250 isleap=leap
1300 return
1999 REM add days to 1970 Jan 1 counting up until we reach our epoch (ep) target
2000 y=1970:dy=0:td=ep
2049 REM ---- get the year
2050 gosub 1000
2100 if isleap=0 then dy=366
2200 if isleap>0 then dy=365
2300 if td>dy or td=dy then td=td-dy:y=y+1:goto 2050
2399 REM ---- get the month
2400 m=1:dm=0
2500 dm=md%(m)
2700 if m=2 and isleap=0 then dm=dm+1
2800 if td>dm or td=dm then td=td-dm:m=m+1:goto 2500
2899 REM add in the remaining days, +1 because calendars start day 1, not 0
2900 d=td+1
3000 return
4999 REM days-per-month lookup array data
5000 data 31,28,31,30,31,30
5001 data 31,31,30,31,30,31

I'm hedging here as I've had a kind of up-and-down experience with the software. I have the absolute luxury of having the fastest, most tricked out, most infinite storage of any C64 that ever existed in 1983. Likewise, I possess time travel abilities, plucking articles and books from "the future" to solve my problems. I have it made. There are limitations to be sure, starting with the 40-column display. But I also find the limitations kind of liberating? I can't do anything and everything, so I have to focus and zero in on what data is truly important and how to store that data efficiently. The form layout tools are as simplistic as it gets, which also means I can't spend hours fiddling with layouts. Even if the manual let me down, the intention behind its design unlocks a vast untapped power in a Commodore 64. It's almost magical how much it can do with so little. I can easily see why it won over so many reviewers back in the day. Though the cost and complexity would have frustrated me back in the day, in the here and now with the resources available to me, it could possibly meet my needs for a basic, occasional, nuts-and-bolts database.
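The BASIC listings can be sanity-checked against Python's datetime: 2025-08-29, the date hard-coded in the forward routine, should come out as epoch day 20329, the same value the reverse routine starts from.

```python
# Cross-check for the BASIC converters: count whole days between
# 1970-01-01 and 2025-08-29.
from datetime import date

epoch_day = (date(2025, 8, 29) - date(1970, 1, 1)).days
print(epoch_day)    # 20329
```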
It would require learning a fair bit more BASIC to really do genuinely useful things, but overall it's pretty good!

Ways to improve the experience, notable deficiencies, workarounds, and notes about incorporating the software into modern workflows (if possible).

While Warp mode in VICE is very handy, it's only truly useful when I hit slowness due to disk access. I'm sure I'll find more activities that benefit as this blog progresses, but for text-input based productivity tools, warp mode also warps the keyboard input. Utterly unusable. Basically I just use the system at normal speed. When I commit to a long-term action like loading the database, sorting, or something, I temporarily warp until I get feedback that the process is complete.

Superbase The Book tells us that realistically a floppy will accommodate about 480 records. However, 1MB and 10MB hard drives are apparently supported, so storage should be fine with a proper VICE setup.

VICE v3.9 (64-bit, GTK3) on Windows 11
x64sc ("cycle-based and pixel-accurate VIC-II emulation")
drive 8: 1541-II; drive 9: 1581
model settings: C64C PAL
printer 4: "IEC device", file system, ASCII, text, device 1 to .out file
Superbase v3.01 (multi-floppy, 1581-compatible)

Create information
Transmit information
Receive information

"Ultimately, the workstation configuration will probably replace the usual office furnishings as the organization evolves toward the 'paperless office'" - The Office of the Future, Ronald P. Uhlig, 1979

"Transformation of the office into a paperless world began in the early 1980s. Computers have been an integral component of the paperless office concept." - OMNI Future Almanac, 1982

"This information revolution is transforming society through basic changes in our jobs and lifestyles. Indeed, the paperless office of the future and computerized home communications centers are information age miracles not to be hoped for, but expected."
- America Wants to Know: The Issues and the Answers of the Eighties, George Horace Gallup, 1983

Base C64: 1 minute, 6.88 seconds
In WARP: 6.22 seconds

Base C64: 1.25 hours
In WARP: 6 minutes

I want to constrain some data to a standardized set of fixed values.
I want to solve the Y2K problem.

: select a second file, in the same database only, whose records you want to look up (no cross-database lookups)
: specify the specific field in the file against which you want to do lookups
: close the link to the second file
: "reverse" the linked files; the linked becomes primary and vice versa

Key input repeating like the system is demon possessed? Warp mode is probably still on.

A snapshot saves the C64 state, but not the emulator state. So if you have a disk in the drive when you take a snapshot, that disk will not be inserted when you restore state. Save your snapshot with a name that reminds you which diskette should be inserted in which drive to continue smoothly from the snapshot.

Superbase developers understood that data migration and interoperability are critical. We cannot have our data locked down into a proprietary format with no option to move to a different system. Print and Export accept formatting parameters which allow us to effectively duplicate CSV format. Printing with VICE generates an ASCII file. Exporting puts the data onto our virtual disk image. To get data off that disk image into our host operating system, we need to be able to browse disk contents and extract files. On Windows, DirMaster works nicely. For macOS and Linux, the DirMaster dev created dm, a command-line utility for browsing and working with C64 disk image files.

Speed. I'm spoiled, I admit it. For standard searches it's snappy enough, but batch operations are tedious.

Superbase isn't particularly easy to use with multiple floppies. The manual addendum for v3.01 says that a two-drive setup is supported, but I didn't really see how to do that.
The initial data disk formatting routine offered no opportunity to point to drive #9, for example.

I wish VICE would show the name of each .d64 file currently inserted into the virtual floppy drives.

It's a little tough not having access to modern GUI elements in the form builder, like pull-down menus. "Build Your Own" is a powerful, flexible, time-consuming process using Superbase's programming tools. Getting around limitations of the pre-built fields, forms, etc. seems possible with enough BASIC knowledge, time, and desire to commit to Superbase.

Once that data is in there, it's honestly easier to let it stay there than try to work out some export/import function. This may be an issue for your use case.
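To make the Print-to-file escape hatch concrete: suppose, hypothetically, the printer .out file arrives as one fixed-width line per record. A few lines of Python on the host side turn that into proper CSV. The field names and widths below are invented for illustration; the real layout depends entirely on how you set up Superbase's print format.

```python
import csv
import io

# Hypothetical fixed-width layout for an inventory dump;
# actual column widths depend on your Superbase print format.
FIELDS = [("sku", 8), ("title", 24), ("qty", 5)]

def printer_dump_to_csv(text):
    """Slice each fixed-width line into columns and emit CSV."""
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow([name for name, _ in FIELDS])
    for line in text.splitlines():
        if not line.strip():
            continue  # skip blank lines from the printer output
        row, pos = [], 0
        for _, width in FIELDS:
            row.append(line[pos:pos + width].strip())
            pos += width
        writer.writerow(row)
    return out.getvalue()

# A made-up one-record dump, padded to the widths above
sample = "C64-0001" + "Superbase 64".ljust(24) + "00012"
print(printer_dump_to_csv(sample))
```

From there the CSV opens in anything modern, which is the whole point of the exercise.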

Stone Tools 5 months ago

Electric Pencil on the TRS-80

As the story goes, 40-year-old filmmaker Michael Shrayer was producing a Pepsi-Cola commercial in 1974 when he had a crisis of conscience on set. He looked around at the shoot, the production, the people, the apparatus and found a kind of veil lifted from his eyes. Prior to that moment, it was work. Suddenly it was insanity in service to a "dancing bottle cap." Almost on the spot, he swore off Hollywood for good, tore up his union card (forfeiting union pensions in the process!), and moved to California to semi-retire as a beach bum.

With time on his hands, he found a distraction in a new-ish gizmo on the market, the MITS Altair 8800. Once a creator, always a creator, Shrayer put his efforts into learning how the machine worked, and more importantly how to make things with it. That was not an easy task back then on a machine whose base model used physical toggle switches to input byte code. Undeterred, he put together Software Package 1, a collection of public domain development tools, then started work on ESP-1 ("Extended" Software Package 1). Tired of using a typewriter to write documentation, he wondered if he could use the Altair itself to help him compose the document. That software didn't exist, so he wrote it himself. Dubbed the Electric Pencil by him and his wife, he used it to write its own manual, and it began its spread through the home computing scene. Landing on the TRS-80 series in 1978 opened it to a mass market, and the word processor genre was well and truly born.

For the first couple of years Electric Pencil was the only word processing option for home microcomputers. This gave Michael Shrayer what some may call the arrogance, but what I'll call the "work/life balance," to utterly ignore the support phone line come 5pm sharp. Customers literally had no choice but to wait until morning to call again. Shrayer was living the dream, I guess is what I'm saying. (He also said he sold 10,000 copies. At ~$200 a copy.
In late 70s money. Again, "living the dream.")

In 1982, James Fallows called Electric Pencil "satisfying to the soul." My expectations are considerably lower than having my soul soothed, but maybe I can at least find a place for it in my heart? I can tell already this is going to be a little painful. No undo. A bit of a clunky UX with the underlying system. No spell checking. Limited number of visible lines of text. Basic cut-copy-paste is nowhere to be seen. But maybe I don't rely on those things as much as I assume I do in this context?

The primary work I'm committing to is to write this very section, the bulk of the review, within Electric Pencil itself. I'll also use it to revise the document as much as possible, but this blog platform has unique tools I need to finalize the piece. Once I feel comfortable using Pencil, I think I'll take a stab at writing some fan-fiction in it. Any Mork & Mindy fans out there?

I am off to a rough start. The simple fact of the matter is that the interaction between a modern keyboard, an emulator, and Electric Pencil is not strictly harmonious. One of the primary interactions with the software worked best with a key that wasn't even part of the original TRS-80 hardware. I monkey-paw the keyboard, trying to deduce which modern key maps to which retro key and finally figure it out. I write it down on my memo pad and refer to it constantly during my evaluation.

Once I'm into Electric Pencil though, I'm rather struck by its stark simplicity. I am here to write, just me and my words; alone together with no external pressure to do anything but the task at hand. It's nice! I'm definitely reminded of any number of "distraction free word processors" in recent years and dozens of attempts to return to a clean, uncluttered interface for writing. I suppose, like Pepsi executives reinventing soulless product shoots over and over, we are similarly doomed to reinvent "a blank screen with a blinking cursor" from time to time.
Once I start typing though, I realize the blank screen is less a great, untapped plain of fruitful potential and more of a Mad Max style desert. War Boys are at the ready, poised to steal my blood and guzzoline.

Initially I was concerned about how I would translate my writing into something I can use on the blog. It struck me that I kind of don't need to worry about it. Electric Pencil saves in plain ASCII format so Markdown is a realistic option. Rather than bring esoteric file formats of the past to the present, we'll instead bring text formatting of the present into the past. Well, that was the working theory until I learned that there is simply no way to type things like or or and even requires a weird combination. I hadn't mentally prepared myself for the idea that the physical keyboards of the past might present their own writing challenges unique from the software. Still, and exist, making HTML input an option for the missing characters.

I move on and discover organically that deletes a character. The brief adrenaline rush of feeling attuned to the vibe of the program is nice, until I saved the document. Until I tried to save the document.

No lie, this is my sixth attempt at getting things working. Look, I do not claim to be a smart person. I overlooked an obvious warning message or two, simply because I didn't expect them. I was looking to the left for cross-traffic when I should have looked to the right for the coming train.

A fun fact about Electric Pencil is that when you save your work, you are saving from the current location of the cursor to the end of the document. Think for a moment about the implications of that statement. There will be no advance warning when you do this, only an after-the-fact message about what was done. It will obediently wipe out large chunks, or even all, of your document, depending on the current cursor position.
If you make a mistake and lose your work, the manual explicitly puts the blame on you: "The truth of the matter is that...you screwed up." It then says there are recovery techniques you can learn if you buy a copy of the book TRS-80 Disk & Other Mysteries by Harvard Pennington, the same guy who bought the rights to Electric Pencil from Shrayer. The same guy who published this very version of the program. Hate the game, not the player?

But wait, the emulator itself has its own quirks. With the various things that could go wrong having actually gone wrong, dear reader, I doubt you'll be surprised to learn that I lost my work multiple times. I cannot blame Electric Pencil for everything, as the emulator kind of worked against me as well, but the total experience of using this in 2025 was quite frustrating. I persevered for your entertainment. The tip jar is to your right.

Back in the day, Electric Pencil was notorious for losing the user's typing. In Whole Earth Software Review, Spring 1984, Tony Bove and Cheryl Rhodes wrote, "(It) dropped characters if you typed too fast. During “wraparound” it nibbles at your keystrokes while it does what it wants." I did not personally encounter this in my time with the program. It may have been addressed as a bug fix in v2.0z, though I can't find evidence that it was. It may be that the emulator itself provides more consistent keyboard polling than the original hardware did and keeps up with my typing. Or maybe I didn't flex my superior Typing of the Dead skills hard enough?

No, for me the most immediate and continually frustrating aspect of using Electric Pencil is its "overtype" mode. This is a feature you still see in text editors and word processors, maybe hidden under the "Advanced" preferences, requiring a conscious choice to enable it. The modern default is to "insert" when typing. Place a cursor, start typing, and your words are injected at the point of the cursor. The text to the right moves to accommodate the new text.
Overtype, as the name suggests, types over existing words, replacing them. The amount of time I've spent reversing out inadvertent overtyping when I just wanted to add a space, or especially a line break, must surely have subtracted a full hour of my life by now. I have to remember to jump into "insert mode" when I want to retroactively add more space between paragraphs. I'm not one to suggest it is without its merits, though if my life were dependent on finding a reason for its existence my life would be forfeit. But what I can say absolutely is that losing your words to overtype really sucks in a program with no "undo" option.

I mentioned Harvard Pennington earlier, and I want to spend a little time talking about the transfer of Electric Pencil from Shrayer to Pennington. I'm using v2.0z of Electric Pencil and it would be unfair to fail to note Michael Shrayer's absence in its development. According to the v2.0z manual, "Shrayer continued to sell (Electric Pencil) until January of 1981. By this time (it) was losing ground...Michael was not ready to devote his life to the daily chore of...doing business around the world." Harvard Pennington had already published a book written in Electric Pencil and wanted to keep his beloved software alive. Shrayer sold Pencil to Pennington's company International Jewelry Group. Yes, you read that right, "jewelry." But it's OK, they just called themselves IJG and their hard pivot was papered over nicely.

"Pennington got together with...fans and programmers" to create a patched, improved version. Now I say "improved" because I read a review or two that suggested that. I can say absolutely that the command menu in v2.0z is a marked improvement over v1. It helps push some nuclear options (like to clear all text) behind a much-needed protective wall. But it also has, in hindsight perhaps, a couple of baffling decisions.
First, if IJG was trying to improve the product, I don't really understand why the save file mechanism remains so inextricably linked to cursor position. Rather than fix it, a slightly snarky lambasting was added to the manual. Good job team, problem solved?

The biggest change is the utter removal of WordStar-esque cursor controls. v1 had them, v2 doesn't. The cursor control keys are "unused" according to the v2.0z manual. The functionality was removed and replaced with nothing. Why concede a feature to your competitor that is so useful they literally named their own product for it? Just above I even called it WordStar-esque despite Pencil being first-to-market!

In Creative Computing Magazine, November 1984, Pennington, newly elected chairman of the board at IJG, wrote an article about the future of computing. A jewelry salesman turned author and Pencil fanboy, now in charge of stewarding Pencil's future, saw the coming wave of Macintosh-inspired GUIs. What did he think of this sea-change?

"So what is in our future? Windows? A mouse? Icons? This is the hot stuff for 1984. How do we know it is the hot stuff? Because the computer press tells us. And how do they know? Because the marketing people tell them (and they know) because the finance people have determined the next "hot" item. How does the financial community know? They don't. However, no one is going to tell them to take their money elsewhere ... If you can come up with an idea that is bizarre enough, you can probably raise (millions) to bring it to market ... It is hype."

Within two years of this bold, anti-GUI stance, Pennington would sell Electric Pencil to Electric Software Corporation. Later, PC Magazine would review ESC's v3.0 in the February 24, 1987 edition. They praised how much you got for $50, but also called it "not easy to learn" and "not for beginners." By then, with so much competition in the genre it had birthed, Electric Pencil was effectively dead.
As the v1 manual states, "THE BEST WAY TO LEARN TO OPERATE THIS SYSTEM IS TO USE IT !!!" That's proving true! Why did the v2.0z manual remove that statement?! The learning curve is relatively shallow, benefitting from the software being sparse; there's just not that much to learn. From the perspective of a "daily driver," it's not growing on me. Getting into a flow is proving difficult. (Future Chris here; as I come back to edit this document later, the editing flows more freely though I can't claim it is "natural." More like I'm just better at anticipating quirks, and there are plenty.)

Part of editing is rearranging, but we need to forget what we know about "cut, copy, and paste." Those words had not yet been commonly adopted to describe those actions. Instead, we have the "block" tool. adds a visual marker to indicate the start of a block. Move the cursor and do that again to add a second marker at the end of a block. The text between markers is your block. Place the cursor elsewhere in the document and will clone the delimited block into the current cursor position; the original block stays as-is. You can clone as much as you like. "Cut" as we understand it today doesn't exist. deletes the block. It is GONE, no longer in memory at all. Remember also, there is no "undo" in this program, so gone really does mean GONE. Good enough for government work, I guess, just watch your step.

The only feature left worth discussing is find/replace. brings up a modal screen for searching for a word. In James Fallows's discussion of it, he noted that he would use it for complicated words. He would insert a or some other unusual character to stand in for a complex name, for example the Russian surname . Then, when he was done editing he would do a find-and-replace on the character for the surname. It only looks for the literal, case-sensitive text you type, although wild cards may be used. This is also your only method for jumping to known parts of the document.
Search for a word and replace it with nothing to jump from search result to search result.

Aside from some era-specific, esoteric commands, that's basically all Electric Pencil has to offer. It would have been fun if I could have tried the tape-dictation controls to transcribe a conversation. Spell-check isn't part of the base system, though it was available as an add-on. It is bare-bones, utilitarian, and sometimes frustrating for a modern user. It's forcing me to evaluate my personal threshold between "just enough" and "not enough" in such software. For me, this one is "not enough." With a few quality of life additions I suppose it could be sharpened up for the "distraction free" crowd. Maybe v3.0 of Electric Pencil PC is just right, if overtype is your jam? As-is, it is hard to recommend it on the TRS-80 for much more than writing a review of it. But don't worry, I teased some fan-fiction and I am a man of my word.

I'd like to return to the Creative Computing Magazine issue where Pennington kind of poo-pooed windows, mice, and icons, "What is their future? They are here to stay. That does not mean that they will be used." In that same issue, Shrayer also gave his thoughts about the future of computing. Michael Shrayer died October 19, 2006 in Arlington, Texas. He was 72.

Ways to improve the experience, notable deficiencies, workarounds, and notes about incorporating the software into modern workflows (if possible).

Speaking honestly, too much is missing to recommend Electric Pencil on the TRS-80 for a modern workflow.

trs80gp v2.5.5 on Windows 11
Emulating the default TRS-80 Model III
48K RAM
4 floppy drives
TRSDOS v1.3
Electric Pencil 2.0z

Move the cursor to the start of the document with (move to the "beginning")
to open the command tool screen.
type as
Can't remember the filename? type
Don't see your file listed? That's because only lists /pcl files. will list everything on disk, including /txt files like I'm using.
In the menu, go to an empty drive number
Select to get a blank, formatted disk for your work
See how the disk name has around it? That means it is referencing an internal copy of a master blank disk. Your changes to it will not be saved to this disk
In the drive menu for this disk, select and give your disk a name.
In the drive menu for this disk, select for the named disk
Now and select the disk you exported above. This disk is your personal copy, saved to the host operating system, ready to accept your writes.

When you finish working and want to shut down the emulator, check the menu to see if any disks are prefixed with . If so, that disk has changes in memory that have not yet been written to the filesystem. Those changes will be lost if you shut down. Use on that diskette to save its changes out.

trs80gp offers relatively minimal options for speeding up performance. It automatically kicks in "Auto Turbo" mode when accessing the disk drive, so I didn't find it annoyingly slow to read/write even though I'm using virtual floppies. A virtual hard disk is an option, but configuration looks... complex. I'll dig into that later. mode makes a noticeable difference in input and scroll speed. I didn't notice any troubles using that mode with this software. It was definitely a time-saver to set up the Windows shortcut with launch parameters (Properties > Shortcut > Target) to auto-insert program and save disks on launch, enable "Turbo", and set the emulator's typing mode to "Typing Layout" (see Troubleshooting, below)

Options for those not running on Windows

The TRS-80 emulator scene itself seems fairly anemic, especially outside of Windows. There is a Javascript emulator, but it feels a little heavy for my lightweight needs, and the hosted online versions seem to disallow arbitrary disk insertion. I'm completely unclear how to get at my personal data even if I did manage to save my work.
That said, it is open source on Github and may be a better option than my initial tests indicated. So I suppose you could run a Node server? Or run it in a web browser? All of that just to interface with an emulator which runs the software. How many abstraction layers are you willing to put up with?

For a native macOS app, the only option I can recommend is kind of non-native: run trs80gp and trstools in WINE. No other app is maintained to work on modern macOS, or if it runs, it's "experimental" or broken in some fundamental way that renders it unusable. On Linux, I'm still investigating.

Remember: in trs80gp's Diskette menu if you see beside a drive, that means it has been written to virtually but has not been written to the host operating system yet. This can happen if you have a diskette with brackets, indicating a virtual copy of an internally defined master diskette. Export that diskette, stat!

Keyboard input has three options. One is notably called "Typing Layout" and it addresses the issues I encountered with certain character inputs doing a kind of double-input. For example typing always resulted in printing to screen. "Typing Layout" felt much more stable and behaved as expected, though it had its own quirks (see What's Lacking, below)

If you get an error saying , you probably have the cursor at the end of the file and are trying to save. to move to the start of the file before saving. Cannot stress this enough.

In a sense it is easy, and in a sense it is annoying. The easiest way I found to get my data out of TRSDOS world is through the utility trstools. Use it to browse your TRSDOS disk, then simply open a file and copy out the contents. It's just plain ASCII; there are no special word processing code embeds. Caveat! There are no embeds unless you use the printer formatting functions! Then there are absolutely embedded codes and they're unfriendly to a modern workflow. They only apply to doing real printing to real vintage printers, so I recommend ignoring those features.
No undo whatsoever. This bit me more than once thanks to (delete TO end of line) and (delete ENTIRE line) being right next to each other.

You are essentially restricted to no formatting or a subset of markdown formatting.

Getting the emulator keyboard and Electric Pencil to be happy together has simply not panned out for me. If I use "Logical Layout" I get double-input on many keys. If I use "Physical Layout" my muscle memory of where lives (for example) betrays me every time. If I use "Typing Layout" keys like stop working and keyboard commands for marking blocks of text don't work any more. There is no perfect keyboard choice for this program that I can find.

No spell-checking without a secondary package like Blue Pencil or Electric Webster's.

Search is strictly case-sensitive.

For writing a basic skeleton of a document for this very blog, it worked well enough. But to restrict all editing to Electric Pencil means not touching a thing within the Ghost blog platform. That is hard to resist.

Limited keyboard support means writing without certain characters that come up in a modern context quite a lot, like

The default "overtype" mode definitely has an adjustment period to get through, and will surprise you with how often it deletes the first character of a line of text when all you wanted to do was insert a new line.

Getting the data out isn't a horrible process, but adds enough friction to the process to make it frustrating in a rapid write-edit-publish cycle.

The small amount of text on screen at one time makes it difficult to read and scan through long text to find specific passages for editing. If you're a visual editor, it's going to be a rough ride.

This predates Unicode and software-based fonts, so no international writing!
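Since the only real gotcha in getting text out is those printer-formatting embeds, a blunt host-side filter handles the cleanup: keep printable ASCII plus whitespace, drop everything else. This is my own sketch, and it assumes the embeds are control bytes outside the printable range; I haven't catalogued Pencil's actual codes.

```python
def strip_control_codes(raw: bytes) -> str:
    """Keep printable ASCII (0x20-0x7E) plus tab/newline/CR; drop the rest.

    Assumption: Electric Pencil's printer-formatting embeds fall outside
    this range -- the actual codes it writes are unverified.
    """
    keep = set(range(0x20, 0x7F)) | {0x09, 0x0A, 0x0D}
    return bytes(b for b in raw if b in keep).decode("ascii")

# Hypothetical extracted text with two embedded control bytes
print(strip_control_codes(b"Mork \x0ecalling\x0f Orson"))
```

Run against a file pulled out with trstools, this leaves plain text that drops straight into a Markdown workflow.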

Stone Tools 5 months ago

Deluxe Paint on the Commodore Amiga

VisiCalc on the Apple II. Lotus 1-2-3 on the IBM PC. Aldus PageMaker on the Macintosh. Deluxe Paint on the Amiga. The computer industry loves a “killer app,” that unique piece of software that compels consumers to purchase new computer hardware just for the privilege of running it. I can personally attest to Deluxe Paint as it compelled even my technophobic mother to buy into its potential. One day at a small strip-mall in Charlotte, North Carolina, mom went shopping for quilting fabric. I, a teen with less than zero interest in quilting or fabric or being seen with a parent, wandered into the computer store a few doors down to lust over the black and chrome Ataris. Maybe this would be the day I finally saw an Atari 1450XLD in the wild! I was obsessed with the futuristic stylings of that line; they looked powerful . I would get some serious work done . That specific device never released, but what I found instead changed my life. The salesman really didn't have to work too hard that day. So much of the demo of the Commodore Amiga (not yet called the 1000) left us in awe, but it was the art and color-cycle animations in Deluxe Paint that sealed the deal. The decision was made to upgrade the family computer from the ADAM to the Amiga. Though it was my first experience with a mouse, I was pushing pixels around like I was born to it. As I learned the way of the pixel, I remember feeling clever when I devised a way to smooth hard pixel edges (“anti-aliasing,” I later learned) and to checkerboard two colors to simulate a new color (“dithering,” I later learned). Those who used Deluxe Paint continue to chase its high. Would-be successors to its pixel-art throne come and go, but none have clicked in quite the same way. Let's see if a re-connection is possible with my old love after. . . 
(I feel physical pain writing these words) FORTY YEARS

Deluxe Paint III was my last personal experience with the program and was the last version creator Dan Silva developed, so that’s the version I’ve chosen to examine. I remember it being more than enough for me at the time, and I suspect it could yet be a good fit for my (occasional) pixel-art needs. It lacks one key feature that I think I’ll miss: layers. We’ll see how it goes.

To evaluate the program, I need a goal. I will work through Chapter 6 in the manual, titled “Painting Tutorials.” As well, I’ll take a peek at the animation tools introduced with this version. I’m particularly interested in “animpainting” which is, “painting while the frames flip.” That sounds pretty rad.

From Windows desktop, I'm "ready to paint" in about 30 seconds and the sight of that familiar toolbar feels like a small reunion. A slim vertical column, it's neatly packed with the tools you’ll spend nearly all of your time in: brushes plain and dotted, shapes filled and unfilled, airbrush, text, magnification, and my personal favorite "symmetry." Most are summonable by keyboard, even mid-stroke. When present, the UI never feels like an obstruction, but it gives way via F10 for a full-screen canvas, or politely steps aside via F9, hiding but not disabling the menu bar.

The default color palette stokes some dormant ember of love that I realize now will smoulder until my death bed. It's been 40 years, I should have moved on. I thought I had! But feeling again the precise, silky movement of the brush confirms that all future loves will be (fairly or not) compared against Deluxe Paint. As I play around exploring the tools, I do wish I/O polling of the mouse were better. Swiftly-drawn curves are flattened out into a series of discrete lines which only approximate my motion. If I wanted to lay down large washes, I would want fast, perfect, full-screen brush motions. I don't typically work that way, so my personal drawing needs are well-met.
I see demonstrable lag in certain brush effects, however. A fat brush in symmetry mode requires slow, precise movements to let the system "catch up." The “smear” effect is similarly slow, especially considering it just push-scatters pixels along the brush stroke (the 32-color palette can't actually "mix" colors). Combining those into “symmetry smear” just combines the slowness. Clearly we’re at the boundary of what a 68000 CPU @7.16MHz system can pull off with finesse, even with Denise's assistance. That said, even if the system struggles it doesn't fail. *knock on wood* Perhaps a virtual CPU upgrade is in order.

The tutorial begins with a focus on color. Mixing colors, RGB vs HSV, "spreads" (gradients), "ranges" (a set of contiguous palette colors, for flood fill and color-cycling), and other palette manipulations. I'm struck by how Deluxe Paint's UX feels somehow more "color-centric" than current software. For example, here's how gradients are built. Generating a spread just redefines the colors in our palette. There is no relationship between the colors, other than visually. This is great for mixing up new colors, but spreads are made more useful by "ranges." A color “range” is a user-defined contiguous subset of palette colors which has an associated dithering, speed, and direction. Perhaps you’ve seen the color-cycle animations of Mark Ferrari? (Perhaps you’re an Aseprite user who has been patiently waiting since 2019 for color-cycling to be added?) The clever application of colors within a range can produce pleasing color-cycle animation effects.

“Ugly Waves on an Ugly Beach" - Christopher Drum, 2025 (look at the palette to see how the colors are cycled through in position)

Any brush that does flood-fill can flood-fill with a range. One option forced me to check if it exists in Affinity Photo (it does not). Linear gradient has horizontal fill and horizontal line fill.
Horizontal line fill scales the gradient per horizontal line, so each line begins and ends with the start and end colors of the spread. Now we can draw “realistic spheres!” (I guess "realism" had a looser definition back then.)

While working through the tutorial, I jump into modern programs for direct comparisons: DPaint.js, Aseprite, PixiEditor 2.0 (released while I was writing this), Pro Motion NG, and Affinity Photo. Until I started doing this I hadn’t really internalized how much the Adobe-ification (Microsoft-ification?) of palettes and panels and toolbars and menu options which open entirely other screens of UI panels and toolbars and menus has frozen the evolution of creativity software. The UI chrome of modern products suddenly feels boxy and oppressive, and I find my emotional approach to that software reflects the same rigidity.

Dan Silva once told the Computer History Museum I’d like to just sit with that thought a moment. The first version of Deluxe Paint had no menus; it was all keystrokes and it worked fine. A keyboard-friendly mindset continues into Deluxe Paint III. Its robust keyboard controls allow the artist to switch up drawing tools on the fly without having to mouse back over to a toolbar.

Mind you, I’m not lamenting how things “should have” turned out. I’m not angrily shaking my fist at the clouds, onions tied to my belt, “We’re doing it all wrong!” However, I think there is a kernel of knowledge in there about software’s relationship to the artist that is under-explored. The common cry “artists are visual people” dismisses artists as being somehow incapable of learning software which doesn't shove every tool, option, and setting into their faces at all times. Basically, I’m not convinced a cockpit of controls is the best UI for making art.

Returning to the tutorial, now we get to dig into real, practical work. “We will take a relatively plain corporate logo and spice it up,” it says.
The manual is from the 80s, so are pastel triangles with neon squiggles, the "giant shoulder pads" of graphic design, coming our way? Well, we’re here for the software, not to pass judgement on the times. Full steam ahead!

Deluxe Paint has the concept of a “brush.” I say that like you've never seen a brush before, but it is a very core object upon which a lot of Deluxe Paint's advanced functionality rests. Simply put, a brush is an arbitrary selection of pixels. You can freeform paint with it, use it to draw shapes, or just click around stamping down copies. Brushes may be modified by applying irreversible transformations. This is the point of the review where I say a silent prayer of thanks to Messieurs de Casteljau and Bézier and whoever developed unlimited undo. Low-resolution logos with aggressive transformations applied can quickly degenerate into a heap of pixels that resemble the original only if we squint and agree to the fiction.

Multi-color brushes can be used as single-color in "Color" mode, great for laying down pixel-perfect drop shadows. Colors in a brush can be swapped or replaced to quickly experiment with new color combinations. You can "Outline" the brush in the current background color, but the effect is weak, and there are easy, manual ways to produce better results.

Part three of the tutorial digs into a very cool feature: stenciling. You know it as masking, but with indexed colors it works a little differently. We can lock specific colors of our palette to prevent them from being affected by any further paint or fill effects. It’s quite satisfying to see thin strokes pixel-perfectly filled with a dithered gradient, leaving the background untouched. Stenciling is shown as a real-time composite effect, which feels like magic for a system from 1985. If I have a big, multi-color brush loaded, it will peek through unstenciled colors and hide behind stencils without lag. It’s awesome.

Then, a big secret is revealed.
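(A quick aside before the reveal: the indexed-color stencil mechanic is easy to model in code. This is my own toy representation, not Deluxe Paint's actual implementation: the canvas is a grid of palette indices, locked colors live in a set, and a paint operation simply skips any pixel whose current index is locked.)

```python
# Sketch of palette-index stenciling: locked colors are immune to painting.
# The canvas is a list of rows of palette indices (a toy representation).

def paint_rect(canvas, x0, y0, x1, y1, color, locked=frozenset()):
    """Fill a rectangle with `color`, leaving locked palette indices untouched."""
    for y in range(y0, y1 + 1):
        for x in range(x0, x1 + 1):
            if canvas[y][x] not in locked:
                canvas[y][x] = color

# A 3x4 canvas: background color 0 with some strokes in color 1.
canvas = [
    [0, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 1, 0, 0],
]
# Lock color 1 ("stencil" the strokes), then slather color 2 everywhere.
paint_rect(canvas, 0, 0, 3, 2, 2, locked=frozenset({1}))
# The strokes in color 1 survive; everything else becomes color 2.
```

Note that the test is on the pixel's *current* palette index, which is why this works with any tool, fill, or gradient: the stencil check sits below all of them.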
While we have been working on this canvas, there has been a second blank canvas waiting for us the entire time. A single keystroke “jumps” between the two. Use one as a scratchpad, use the second for your artwork proper. Or just draw two pictures? Stamp a copy of your brush down in one, transform it in the other. The program continues to reveal its depth.

In addition to locking individual colors, we can also lock the entire image, defining it as our background. Once set, "Clear" on the canvas reverts back to that locked image, like a permanent underpainting. With the background locked, we can paint on top, then lock those pixels as the "foreground." Having now locked both the background and the foreground, what happens when we try to paint? What other ground is there? Some kind of. . . middle?. . . ground? No, that word isn’t even in the manual. I guess I could try it and. . . GASP! So we are now painting between the background and foreground images, making "layers" tantalizingly close to realization. Even if there had only been three (fore, middle, back), Deluxe Paint could have beaten Fractal Design Painter to the punch. Marvel must surely have a multiverse where that happened.

The tutorial ends with the 3D perspective tools. I remember back in the day being amazed at anything resembling 3D perspective on the computer screen. Even the mundane runway numbers on the tarmac in Flight Simulator seemed like impossible magic. Trying to follow along with the perspective tutorial is proving difficult. The steps required to use it smoothly are complicated, though it is still a neat trick. The keypad is used to apply 3D rotations to the brush. One keystroke, for example, spins the brush backward on the X-axis, which gives us a quick and dirty (emphasis dirty) Star Wars text crawl effect. Brush movement is handled by the mouse for X and Y, and by keyboard for Z. Rotation is handled by the keypad. A modifier allows coarse steps with the keyboard controls, and dedicated keys move forward/backward through Z-space.
There are tools for setting the horizon line, changing angle calculations, determining 3D relativity (screen space vs. object space, which can be separate for movement and rotation), setting a grid for fixed-space snapping, setting the anchor point of the brush around which transformations occur, and more. It’s overwhelming and doesn’t feel coalesced around a solid vision of “how to control objects in 3D space.” My love feels a first twinge of betrayal. Working with this tool is not pleasurable. Often, a 3D movement results in a coarse, jerky response that is visually hard to read, and when it's done I can't immediately understand the current state of the brush. Even the example image provided on disk to show off the feature is lame. It’s a lot of training and learning for a small payout. A novelty.

Animation, on the other hand, is fantastic. Forget what I said before about betrayal! I'm sorry, I should never have doubted you, Deluxe Paint. You're awesome! At its most basic usage, animation is just a flip-book. Set a number of frames (memory permitting!), draw on each frame (keyboard controls for stepping forward and back), then play those frames back in sequence. It's a simple, easy-to-navigate animation method.

“Animpainting” auto-steps the frames while you draw. You can do something akin to "puppeteering" a brush. Take a “ball” brush, move to frame 1, hold down the Amiga key (left ALT on a non-Amiga keyboard), and start drawing. As you draw, the brush paints itself across frames. Played back, your brush movement has defined the animation movement.

The relatively complex “Move” tool lets you precisely define a rudimentary motion path across a desired number of frames. Using the oddly-named “Dist” and “Angle” settings, you numerically define the position and orientation in 3D space where the brush should appear in the final frame. This is a slow process, so thankfully we can “Preview” the motion as a wireframe before committing to a render.
When satisfied, “Draw” will go frame by frame, moving, rendering, and frame-stepping your brush automatically. This single-handedly sold me on the Perspective tool I soured on earlier. This blog is not about games, but we can throw in a sly reference sometimes as a treat.

Where "animpainting" captures your brush across frames, an "anim brush" captures animation frames as a brush. As you move, the brush steps through its own frames, animating while you paint. These ideas can be combined into "animpainting with an anim brush," such that each frame of your brush is applied in turn to each frame of the canvas. In this way you can animate (say) a bird flying along any arbitrary path by simply painting a brush stroke. The brush stroke is your animation path.

"Anim brushes" can be used in conjunction with other tools the same as any other brush can. Turn on airbrush and we can have "airbrushed animpainting with anim brushes." Then turn on symmetrical drawing to get "symmetrical airbrushed animpainting with anim brushes." Then turn on color-cycling to get "color-cycled symmetrical airbrushed animpainting with anim brushes."

Consider something as simple as drawing in symmetry mode, then applying fills. In Deluxe Paint we can combine our tools into a "symmetry gradient flood-fill" and flood-fill multiple areas symmetrically, simultaneously. In Affinity Photo, "symmetry," "gradient," and "flood fill" are three utterly discrete tools that do not interact with one another. Have I beaten my point to death yet?

I wonder if an overly developer-centric approach to modern software design has locked us into this "one tool at a time" mindset. Keeping tools independent and discrete must surely make it easier for Adobe to add features year-over-year. (Argue in the comments whether that is "good" or even "necessary.") Need a new tool? Just cram another button into one of the panels, slap an adjustment slider on it, and Bob's your uncle.
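Circling back to the animation tools for a moment: both the "Move" tool and anim-brush stepping reduce to simple per-frame arithmetic. Here is a hedged sketch in Python under my own assumptions (names and data shapes are illustrative, and nothing here comes from Deluxe Paint's code): `motion_path` linearly interpolates a brush position across the frame count, ignoring the rotation component, and `animpaint` shows the modulo stepping that pairs each canvas frame with the next cel of an animated brush.

```python
# Toy models of two Deluxe Paint animation ideas (illustrative only).

def motion_path(start, end, frames):
    """The "Move" tool, minus rotation: one (x, y) position per frame,
    linearly interpolated from start to end, inclusive."""
    (x0, y0), (x1, y1) = start, end
    if frames == 1:
        return [start]
    return [(x0 + (x1 - x0) * i / (frames - 1),
             y0 + (y1 - y0) * i / (frames - 1))
            for i in range(frames)]

def animpaint(brush_cels, stroke):
    """Animpainting with an anim brush: the Nth point of the stroke lands
    on canvas frame N and stamps brush cel N modulo the cel count."""
    return [(pos, brush_cels[i % len(brush_cels)])
            for i, pos in enumerate(stroke)]

# Move a brush from (0, 0) to (90, 30) over 4 frames...
path = motion_path((0, 0), (90, 30), 4)
# ...and paint a 2-cel "flapping bird" brush along that same path,
# so the wings alternate up/down as the bird travels.
frames = animpaint(["wings-up", "wings-down"], path)
```

The charm of the real thing is that the "stroke" above is just whatever you draw with the mouse, so the flight path of the bird is literally your pen stroke.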
But returning to Deluxe Paint after all these decades, I understood something I hadn't realized before. Its tools don't just coexist, they collaborate with one another. It feels akin to an artist cobbling together a unique tool from stuff lying about the studio. As the artist using them, I feel a similar collaboration between Deluxe Paint and myself. My sense of creative exploration is encouraged. I’m encouraged to play, not just follow procedure. My mid-range laptop today is some 50,000x (?) more powerful than that Amiga, yet the laptop’s power seems almost exclusively funneled toward the mechanical goals of speed, scale, and rigid precision. I humbly suggest we're overdue in turning that power back toward humanistic goals like play, intuition, and surprise.

Deluxe Paint still holds up, admirably so in fact. Using it again after all these decades, it remains one of the purest forms of digital expression I've ever used. It was a genuine pleasure to reconnect with this beautiful software.

This being the first in the series, an explanation of this section is in order. Here, I will address ways to improve the experience of using the software, point out notable deficiencies, explain workarounds, and in general give pointers about incorporating the software into modern workflows, if possible. A wonderful side-effect of using an emulator on modern hardware is that "hardware upgrades" to the virtual machine are just a click away, not "solder an expensive daughtercard to a motherboard." Not every program handles faster hardware appropriately, but Deluxe Paint III absolutely benefits. Multi-color symmetrical gradient fills are very fast, but not instant. Fast mouse motion is captured more accurately, but again it's not perfect.
My setup:

- WinUAE v6.0.0 (2025.07.02) 64-bit on Windows 11
- Emulating an NTSC Amiga 1000
- 512KB Chip RAM, 512KB Fast RAM
- 68000 CPU, 24-bit addressing, no FPU, no MMU, cycle-exact emulation
- Kickstart 1.3 / Workbench 1.3.5 HDF (from Amiga Forever)
- Windows directory mounted as my “hard drive,” which makes it trivial to access the images I create from Windows
- Deluxe Paint III v3.25

Building a gradient, compared:

- Deluxe Paint: select a start color, click "Spread," click an end color
- Affinity Photo: select a pre-built gradient, adjust colors in a gradient designer

Tweaked WinUAE settings:

- 68030 CPU
- CPU emulation speed 16x
- 8MB of Fast RAM

Options for those not running on Windows: FS-UAE looks like a nice simplification of the UAE setup. vAmiga on macOS offers open-source fallbacks for those who don't own Kickstart and Workbench.

Keypad not working in WinUAE? Check whether a Port is occupied by a keypad override, such as a virtual joystick. “Error while opening DPaint: 103” means you need more RAM allocated to your Amiga virtual machine.

If you have a host system folder set up as a "drive" in WinUAE, you have immediate access to its contents. By far the easiest way to access your work. XnConvert can trivially convert IFF images to any other format you can imagine.

Page-flip animations are saved as a series of individual image files, one per frame, numbered sequentially. After batch converting them to .png, the resultant files can be handled by various free and open-source tools:

- ImageMagick
- GIMP: import images as layers, then export as a GIF animation
- Ezgif.com: drag in your numbered images, then click "Make a GIF!"

Saving out a color-cycle animation is tricky, as it requires an application which understands indexed colors (common) and can also do color-cycling (rare!). Brute-force it with an OBS Studio window capture of the animation as it plays in a WinUAE window (I found Windows Snipping Tool stutters). DeluxePaint.js will load and play color-cycle animation, but doesn't have any video export options.
I also saw mis-colored pixels in its "presentation mode." Pro Motion NG has color-cycling, but doesn't seem to import IFF files.

I must reiterate that Kickstart and Workbench are not free; they are licensed through Amiga Forever. Minimum cost for Kickstart/Workbench 1.3 is US $20.

One level of undo is definitely frustrating at times. Be sure to use your jump page to lay down brushes before altering them. No layers means you have to get pretty creative with the stencils and background/foreground locks. It's not "intuitive" per se, but it is learnable.

Other color modes exist, and were expanded as Amiga hardware improved. Deluxe Paint IV in AGA mode might feel more flexible to many readers.

There have absolutely been useful advancements in pixel-art tools over the years. Aseprite's shading tool is very neat! While I'm not a fan of its UI, Pro Motion NG does a lot to bring Deluxe Paint tools into the modern day. It can even handle symmetry gradient contour fills, and beyond!

Stone Tools 7 months ago

Introducing “Stone Tools”

I grew up with computers. That’s a less-than-revelatory statement for sure, but for a child of the 70s it is a little rare. Growing up in rural Tennessee, no one I knew owned a home computer, nor did most people really understand why someone would even want one. You already had a typewriter for typing, recipe cards for the kitchen, and board games for fun. If you wanted to play on the TV, buy a Pong machine. What more could a computer contribute to life? According to a slick Tandy/Radio Shack salesman, we could be organizing family finances, writing the Great American Novel, and educating the children. Computers are inevitable, the next frontier. Don’t be left behind!

That sales pitch easily convinced my gadget-loving father to bring home a TRS-80 Model I when I was six or so. Dad also bought the “Expansion Interface,” a completely separate box big enough to support the monitor, for its increased RAM and floppy drive support. I recall him pointedly explaining that it must be turned on before the CPU, never the other way around, or I would destroy the entire computer. I quickly learned a new emotion that day: utter fascination combined with equal parts terror.

Thankfully, I never did destroy the computer, but I did destroy a lot of ASCII “H” TIE fighters with my “X” X-wing in Star Wars (a non-TM joint). As well, the London flat opening sequence of Scott Adams’s Pirate Adventure has CRT burn-in on my brain. The first game I purchased with my own lawn-mowing money was Zork, in the ziploc baggie, from a clearance bin in the mall; fond memories of betrayal by the cover art versus the actual gameplay. Games absolutely had a huge impact on the trajectory of my hobby. That said, it would be a dishonest assessment of my computing history to say, “The Model I introduced me to computer games and I played a lot of games and loved playing games and that’s who I am today: a gamer.” You know what else made an equally large impact on me?
“Biorhythm,” a junk-science “calculator” that supposedly determines your physical/emotional/mental high and low points during a month. As a 7-year-old, I bought into the concept wholeheartedly, fueled in no small part by the 70s' obsession with horoscopes and the zodiac. Combine mysticism with a computer, and the feeling of unlocking universal secrets was powerfully strong. The computer was calculating! Something! It MUST be true! Just look at that graph!

With secret knowledge of the universe unlocked, the computer had already paid for itself as far as I was concerned, especially as I had no firm concept of money at that age. But I knew enough to know money was important to daily life, so I also played around with a “recipe cost” calculator. I didn’t have any real idea of how to work it, but it felt important.

So now we have our finances in order, as well as an understanding of our attunement to the universe. What else can we do? Well, we all have things we’re thinking, so it would be nice to write it all down without needing to go out and buy a new ink ribbon for the typewriter. We could even share those thoughts without needing carbon paper or a ditto machine. Sounds like a plan.

It can be hard for anyone born from the 90s onward to understand how magical it felt to type on a keyboard and see your words appear on a TV/monitor. Coming from Pong, word processing was itself a joyful experience. With that enjoyment came a cultural learning process, as we transitioned away from typewriters. We had to learn the benefits, not just the playful joy. That took longer than you might expect, especially for those of us who weren’t growing up in Silicon Valley. Even into the 80s in North Carolina, my junior-high typing class was conducted on manual typewriters. Likewise, the zeitgeist understanding of word processors vs. typewriters had cognitive hurdles to jump, as evidenced by the UI of the Coleco ADAM word processor SmartWRITER.
On-screen, a typewriter platen with roller knobs is displayed, onto which your letters are typed, reassuring the user: “Don’t worry. It looks strange, but this is functionally equivalent to using a typewriter. Squeeze my hand, you’ll be OK! OK, don’t squeeze THAT hard, you’re just typing, not defusing a bomb.” Culturally, we had paradigms to relearn in the transition between “What’s a computer?” and “Everyone has a computer.”

So while I experienced firsthand the leap in graphics fidelity between Pong and ASCII Star Wars, why did I enjoy productivity software so much? What could possibly have fascinated me about “recipe conversions?” I think the answer is pretty simple. I was empowered. The computer helped me feel like I could contribute to family life, and even the world, beyond just mowing the lawn and weeding the garden. I could write a bedtime story for my sister. I could organize our recipes. I could read the stars and produce astrological guidance for family and friends. The potential was tangible.

As technology improved, so too did my ability to harness my newfound empowerment. PFS: File on the TRS-80 Model III, Deluxe Paint and Instant Music on the Amiga, Aldus PageMaker and HyperCard on the Mac, and so many more great productivity tools shaped my personal and professional lives. Of course, I certainly continued to play more than my fair share of games over the decades, and they made a huge impact on me. I was utterly blown away when I saw the arcade-perfect Marble Madness on my Amiga at home. But to say I was “impacted by” software can have two very different meanings. Games impacted me similarly to how the movie Raiders of the Lost Ark did. I had a great time, but I wasn’t motivated to start making games or movies, nor did I become an archaeologist. On the other hand, Deluxe Paint impacted me in a way that changed the course of my life.
That introduced me to creating art on the computer, which prompted me to enroll in the art program at UNC-Charlotte, which introduced me to graphic design. That was reinforced by exposure to Aldus PageMaker while working as a journalist on the school newspaper and features magazine. That’s a completely other kind of impact. I recognize games had just such an impact on some subset of people. The Carmacks and Mechners of the world played games and felt compelled to make games. Still others played and studied games and game history, and continue that obsession into the present day.

The gaming side of retro computing is well-chronicled and continues to be covered by many nostalgia-based content creators. This blog is not about games; it is about the other software. I’m interested in looking holistically at productivity (let's say “non-entertainment”) software, both the contributions and dead-ends to their genres, during the early years of the home computer: 1977–1995, basically the length of OMNI Magazine's print run. That was a period of "anything goes," with a lot of developers experimenting with what was possible on a home computer, until Windows 95 kind of homogenized the industry (though challengers to the hegemony did pop up from time to time).

I do this research recreationally and for my personal projects as well. It struck me that what fascinates me must surely be of interest to others, and I’d love to share what I learn with the community. Surely I can't be the only one who is getting a little bored of so much retro-gaming discourse? Surely?! my voice echoes into the TCP/IP void, recedes, and dies

We can understand the intent of this blog as a spreadsheet with machines on the horizontal axis and software genres on the vertical. Any given intersection forms the basis of a potential article, in the general form of “software on machine.” Posts take the form of a kind of review/journal/retrospective.
I use every piece of software as I write about it, so you'll get a real-time account of my experience learning and using it productively. You'll get a humanistic assessment of the software, its role in history, and a consideration of its viability today.

The title was chosen to reflect that I’m not (necessarily) advocating we all buy Apple IIes and return to the software of the past, “when things were better.” Rather, it's a recognition that early productivity tools were the rough-hewn flint axes which got the job done, even as the industry struggled to understand how best to solve the problem of “getting work done.” They were a necessary evolutionary step before the bronze-age tools of today. As someone who cloned VisiCalc, I understand just how crude those early tools are.

Even so, I find myself continually re-exploring that old software. There is utility in those old tools and interesting ideas to be mined. Recently I stumbled across something that by all accounts should have set the world on fire, but whose ideas needed more time to germinate before blossoming much later. Discoveries like this are not just nostalgic “what ifs” to opine wistfully upon; they can be dormant seeds of the future. Computing moves at such an unrelenting pace, those seeds may lie dormant for any number of reasons: bad marketing, release on a dying platform, too high a price, or even too large a mental leap for the public to “get” at the time.

I see this blog as a way to explore the history of the work tools we use every day. I don’t do this out of misty-eyed sentimentality, but rather pragmatic curiosity. The past isn’t sacred, but it is still useful. Perhaps I can sum it up more succinctly like this: stone tools still cut meat.
