Stone Tools 2 weeks ago

dBASE on the Kaypro II

The world that might have been has been discussed at length. In one possible world, Gary Kildall's CP/M operating system was chosen over MS-DOS to drive IBM's then-new "Personal Computer." As such, Bill Gates's hegemony over the trajectory of computing history never happened. Kildall wasn't constantly debunking the myth of an airplane joyride which denied him Microsoft-level industry dominance. Presumably, he'd be alive and innovating in the industry to this day. Kildall's story is pitched as a "butterfly flaps its wings" inflection point that changed computing history. The truth is, of course, there were many points along our timeline which led to Kildall's fade and untimely death. Rather, I'd like to champion what Kildall did.

Kildall did co-host Computer Chronicles with Stewart Cheifet for seven years. Kildall did create the first CD-ROM encyclopedia. Kildall did develop (and coin the term for) what we know today as the BIOS. Kildall did create CP/M, the first widespread, mass-market, portable operating system for microcomputers, possible because of said BIOS. CP/M did dominate the business landscape until the DOS era, with 20,000+ software titles in its library. Kildall did sell his company, Digital Research Inc., to Novell for US$120M. Kildall did good.

Systems built to run Kildall's CP/M were prevalent, all built around the same 8-bit limits: an 8080 or Z80 processor and up to 64KB RAM. The Osborne 1, a 25lb (11kg) "portable" which sold for $1795 ($6300 in 2026), was the talk of the West Coast Computer Faire in 1981. The price was sweet, considering it came bundled with $1500 (MSRP) in software, including WordStar and SuperCalc. Andy Kay's company, Non-Linear Systems, debuted the Kaypro II (the "I" only existed in prototype form) the following year at $1595, $200 less (and four pounds heavier) than the Osborne.
Though slower than an Osborne, the Kaypro II arguably made it easier to do actual work, with a significantly larger screen and beefier floppy disk capacity. Within the major operating system of its day, on popular hardware of its day, ran the utterly dominant relational database software of its day. PC Magazine, February 1984, said, "Independent industry watchers estimate that dBASE II enjoys 70 percent of the market for microcomputer database managers." Similar to past subjects HyperCard and Scala Multimedia, Wayne Ratliff's dBASE II was an industry unto itself, not just for data management, but for programmability, a legacy which lives on today as xBase. Strangely enough, dBASE also decided to attach "II" to its first release: a marketing maneuver to make the product appear more advanced and stable at launch. I'm sure the popularity of the Apple II had nothing to do with anyone's coincidentally similar Roman numeral naming scheme whatsoever. Written in assembly, dBASE II squeezed maximum performance out of minimal hardware specs.

This is my first time using both CP/M and dBASE. Let's see what made this such a power couple. I'm putting on my tan suit and wide brown tie for this one. As the owner of COMPUTRON/X, a software retail shop, I'm in Serious Businessman Mode™. I need to get inventory under control, snake the employee toilet, do profit projections, and polish a mind-boggling amount of glass and chrome. For now, I'll start with inventory and pop in this laserdisc to begin my dBASE journey. While the video is technically for 16-bit dBASE III, our host, Gentry Lee of Jet Propulsion Laboratory, assures us that 8-bit dBASE II users can do everything we see demonstrated, with a few interface differences. This is Gail Fisher, a smarty pants who thinks she's better than me. Tony Lima, in his book dBASE II for Beginners, concurs with the assessment of dBASE II and III's differences being mostly superficial.
Lima's book is pretty good, but I'm also going through Mastering dBASE II The Easy Way, by Paul W. Heiser, the official Kaypro dBASE II Manual, and dBASE II for the First-Time User by Alan Freedman. That last one is nicely organized by common tasks a dBASE user would want to do, like "Changing Your Data" and "Modifying Your Record Structure." I find I return to Freedman's book often.

As I understand my time with CP/M, making custom bootable diskettes was the common practice. dBASE II is no different, and outright encourages this, lest we risk losing US$2000 (in 2026 dollars) in software. Being of its time and place in computing history, dBASE uses the expected UI. You know it, you love it, it's "a blinking cursor," here called "the dot prompt." While in-program help is available, going through the video, books, and manual is a must. dBASE pitches the dot prompt as a simple, English-language interface to the program. SET DEFAULT TO B:, for example, sets the default save drive to the B: drive. You could never intuit that by what it says, nor guess that it even needs to be done, but when you know how it works, it's simple to remember. It's English only in the sense that English-like words are strung together in English-like order. That said, I kind of like it?

CREATE creates a new database, prompting first for a database name, then dropping me into a text entry prompt to start defining fields. This is a nice opportunity for me to feign anger at The Fishers, the family from the training video. Fancy-pants dBASE III has a more user-friendly entry mode, which requires no memorization of field input parameters. Prompts and on-screen help walk Gail through the process. In dBASE II, a field is defined by a raw, comma-delimited string. Field definitions must be entered in the order indicated on-screen. The type is the data type for the field: string, number, or boolean. This is set by a one-letter code which will never be revealed at any time, even when it complains that I've used an invalid code.
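Since those one-letter codes took some digging to learn (C for character, N for numeric, L for logical, as I understand dBASE II's scheme), here's how a field-definition string breaks down, sketched in Python. The parser and its example strings are my own illustration, not anything dBASE ships with:

```python
# A sketch (mine, not from the manual) of the comma-delimited field
# definitions dBASE II expects, e.g. "TITLE,C,30" or "RATING,N,5,2".
# Assumed one-letter type codes: C (character), N (numeric), L (logical).
def parse_field_def(defn):
    parts = [p.strip() for p in defn.split(",")]
    if len(parts) not in (3, 4):
        raise ValueError("expected NAME,TYPE,WIDTH[,DECIMALS]")
    name, ftype, width = parts[0], parts[1].upper(), int(parts[2])
    if ftype not in ("C", "N", "L"):
        raise ValueError("invalid type code: " + ftype)
    decimals = int(parts[3]) if len(parts) == 4 else 0
    return {"name": name, "type": ftype, "width": width, "decimals": decimals}

print(parse_field_def("RATING,N,5,2"))
```

The point is only that the whole definition rides on field order and those unadvertised codes, which is exactly what trips up a first-timer at the dot prompt.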
Remind me to dog-ear that page of the manual. For my store, I'm scouring for games released for CP/M. Poking through MobyGames digs up roughly 30 or so commercial releases, including two within the past five years. Thanks, PunyInform! My fields are defined thusly, called up for review by the simple DISPLAY STRUCTURE command.

The most frustrating part about examining database software is that it doesn't do anything useful until I've entered a bunch of data. At this stage in my learning, this is strictly a manual process. Speaking frankly, this part blows, but it also blows for Gail Fisher, so my schadenfreude itch is scratched. dBASE does its best to minimize the amount of keyboard shenanigans during this process, and in truth data entry isn't stressful. I can pop through records fairly quickly, if the raw data is before me. The prompt starts at the first field, and Return (not Tab!) moves to the next. If entry to a field uses the entire field length (as defined by me when setting up the fields earlier), the cursor automatically jumps to the next field with a PC-speaker beep. I guess dBASE is trying to "help," but when touch typing I'm looking at my data source, not the screen. I don't know when I'm about to hit the end of a field, so I'm never prepared when it switches input fields and makes that ugly beep. More jarring is that if the final field of a record is completely filled, the cursor "helpfully" jumps to the beginning of a new record instantly, with no opportunity to read or correct the data I just input. It's never not annoying. Gail doesn't have these issues with dBASE III, and her daughter just made dinner for her. Well, I can microwave a burrito as well as anyone, so I'm not jealous. I'm not.

In defining the fields, I have already made two mistakes. First, I wanted to enter the critic score as a decimal value so I could get the average.
Number fields, like all fields, have a "width" (the maximum number of characters/bytes to allocate to the field), but also a "decimal places" value, and as I type these very words I see now my mistake. Rubber ducking works. I tricked myself into thinking "width" was for the integer part, and "decimal places" was appended to that. I see now that, like character fields, I need to think of the entire maximum possible number as being the "width." Suppose we expect to record a value like 0.85. There are 2 decimal places, a decimal point, a leading 0, and potentially a sign, as in -0.85 or +0.85. So that means the "width" should be 5, with 2 "decimal places" (of those 5). Though I'm cosplaying as a store owner, I'm apparently cosplaying as a store owner that sucks! I didn't once consider pricing! Gah, Gail is so much better at business than I am! Time to get "sorta good." Toward that end, I have my to-do list after a first pass through data entry.

Modifying dBASE "structures" (the field/type definitions) can be risky business. If there is no data yet, feel free to change whatever you want. If there is pre-existing data, watch out. MODIFY STRUCTURE will at least do you the common decency of warning you about the pile you're about to step into. Modifying a database structure is essentially verboten; rather, we must juggle files to effect a structure change. dBASE lets us have two active files, called "work areas," open simultaneously: a PRIMARY and a SECONDARY. Modifications to these are read from or written to disk in the moment; 64K can't handle much live data. It's not quite "virtual memory," but it makes the best of a tight situation.

When wanting to change data in existing records, the EDIT command sounds like a good choice, but CHANGE actually winds up being more useful. CHANGE will focus in on specified fields for immediate editing across all records. It's simple to step through fields making changes.
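That width arithmetic is easy to get wrong (I did), so here it is as a small Python sketch. This is my own restatement of the rule, not dBASE code:

```python
# Sketch of the width arithmetic above: dBASE II's "width" counts every
# character of the printed number -- sign, digits, and the decimal point.
def numeric_width(int_digits, decimals, signed=True):
    width = int_digits + decimals
    if decimals > 0:
        width += 1   # the decimal point itself takes a column
    if signed:
        width += 1   # leave room for a leading - or +
    return width

# -0.85: one integer digit (the leading 0), two decimal places -> width 5
print(numeric_width(1, 2))  # 5
```

In other words, "decimal places" are carved out of the width, not added to it.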
I could BROWSE to edit everything at once, but I'm finding it safer while learning to make small incremental changes or risk losing a large body of work. Make a targeted change, save, make another change, save, etc. I laughed every time Gentry Lee showed up, like he's living with The Fishers as an invisible house gremlin. They never acknowledge his presence, but later he eats their salad!

Being a novice at dBASE is a little dangerous, and MAME has its own pitfalls. I have been conditioned over time to hit Esc when I want to "back out" of a process. This shuts down MAME instantly. When it happens, I swear The Fishers are mocking me, just on the edge of my peripheral vision, while Gentry Lee helps himself to my tuna casserole.

dBASE is a relational database. Well, let's be less generous and call it "relational-ish." The relational model of data was defined by Edgar F. Codd in 1969, where "relation is used here in its accepted mathematical sense." It's all set theory stuff; way over my head. Skimming past the nerd junk, in that paper he defines our go-to relationship of interest: the join. As a relational database, dBASE keeps its data arranged VisiCalc style, in rows and columns. So long as two databases have a field in common, which is defined, named, and used identically in both, the two can be "joined" into a third, new database. I've created a mini database of developer phone numbers so I can call and yell at them for bugs and subsequent lost sales. I haven't yet built up the grin-and-bear-it temperament Gail possesses toward Amanda Covington. Heads will roll! You hear me, Lebling? Blank?!

64K (less CP/M and dBASE resources) isn't enough to do an in-memory join. Rather, joining creates and writes a completely new database to disk which is the union of two databases.
The implication being you must have space on disk to hold both original databases as well as the newly joined database, and also the new database cannot exceed dBASE's 65,535 record limit after joining. In the above, P. means PRIMARY and S. means SECONDARY, so we can precisely specify fields and their work area of origin. This is more useful for doing calculations at JOIN time, like joining only records where some condition holds across both work areas.

DELETE deletes specific records, if we know the record number, like DELETE RECORD 3. Commands in dBASE stack, so a query can define the target for a command, as one would hope and expect in 2026. Comparisons and sub-strings can be used as well. So, rather than deleting by the full string "Infocom, Inc." we could use the $ operator, which looks for the left-hand string as a case-sensitive sub-string in the right-hand string. We can be a little flexible in how data may have been input, getting around case sensitivity through booleans. Yes, we have booleans! Wait, why am I deleting any Infocom games? I love those! What was I thinking?! Once everything is marked for deletion, that's all it is: marked for deletion. It's still in the database, and on disk, until we do real-deal, non-reversible, don't-forget-undo-doesn't-exist-in-1982, destruction with PACK.

Until now, I've been using the FOR clause as a kind of ad-hoc search mechanism. It goes through every record, in sequence, finding record matches. Records have positions in the database file, and dBASE is silently keeping track of a "record pointer" at all times. This represents "the current record," and commands without a query will be applied to the currently pointed record. Typing in a number at the dot prompt moves the pointer to that record. That moves me to record #3, and DISPLAY shows its contents. When I don't know which record has what I want, LOCATE will move the pointer to the first match it finds. At this point I could display that record, or list the records from the located record onward. Depending on the order of the records, that may or may not be useful.
Right now, the order is just "the order I typed them into the system." We need to teach dBASE different orders of interest to a strip-mall retail store. While the modern reaction would be to use the SORT command, dBASE's SORT can only create entirely new database files on disk, sorted by the desired criteria. Sort a couple of times on a large data set and soon you'll find yourself hoarding the last of new-old 5 1/4" floppy disk stock from OfficeMax, or being very careful about deleting intermediary sort results.

SQL brainiacs have a solution to our problem, which dBASE can also do. An "index" is appropriate for fast lookups on our columnar data. We can index on one or more fields, remapping records to the sort order of our heart's desire. Only one index can be used at a time, but a single index can be defined against multiple fields. It's easier to show you. When I set the index to "devs" and FIND Infocom, that sets the record pointer to the first record which matches my find. I happen to know I have seven Infocom games, so I can display the next seven records' fields of interest. Both indexes group Infocom games together as a logical block, but within that block Publisher order is different. Don't get confused: the actual order of records in the database is betrayed by the record number. Notice they are neither contiguous nor necessarily sequential. Turning the index off would rearrange them into strict numerical record order. An index only reflects the current state of our data, so if any edits occur we need to rebuild those indexes. Please, contain your excitement.

Munging data is great, but I want to understand my data. Let's suppose I need the average rating of the games I sell. I'll first need a count of all games whose rating is not zero (i.e. games that actually have a rating), then I'll need a summation of those ratings. Divide those and I'll have the average. COUNT does what it says. SUM only works on numeric fields, and also does what it says. With those, I basically have what I need.
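The count-then-sum-then-divide recipe is worth seeing end to end. A Python sketch of the same arithmetic, with made-up sample ratings (the real numbers live in my database, not here):

```python
# The average-rating recipe from the text, mirroring what COUNT and SUM
# with a query would produce; the ratings are invented sample data.
ratings = [85, 0, 72, 90, 0, 64]           # 0 means "unrated"

count = sum(1 for r in ratings if r != 0)  # a COUNT with a query
total = sum(r for r in ratings if r != 0)  # a SUM with the same query
average = total / count
print(round(average, 2))                   # 77.75
```

The key detail is filtering the zeroes out of both the count and the sum; forget one side and the average silently skews low.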
Like deletion, we can use queries as parameters for these commands. dBASE has basic math functions, and calculated values can be stored in its 64 "memory variables." Like a programming language, named variables can be referenced by name in further calculations. Many commands let us append a TO clause which shoves a query result into a memory variable, though array results cannot be memorized this way. STORE shoves arbitrary values into memory, like STORE 5 TO PROFIT. As you can see in the screenshot above, the average rating of CP/M games (out of 100) is higher than I expected, to be perfectly honest. As proprietor of a hot (power of positive thinking!) software retail store, I'd like to know how much profit I'll make if I sold everything I have in stock. I need to calculate, per-record, the profit on the stock, but this requires stepping through records and keeping a running tally. I sure hope the next section explains how to do that!

Flipping through the 1,000 pages of Kaypro Software Directory 1984, we can see the system, and CP/M by extension, was not lacking for software. Interestingly, quite a lot was written in and for dBASE II: bespoke database solutions which sold for substantially more than dBASE itself. Shakespeare wrote, "The first thing we do, let's kill all the lawyers." Judging from these prices, the first thing we should do is shake them down for their lunch money. In the HyperCard article I noted how an entire sub-industry sprung up in its wake, empowering users who would never consider themselves programmers to pick up the development reins. dBASE paved the way for HyperCard in that regard. As Jean-Pierre Martel noted, "Because its programming language was so easy to learn, millions of people were dBASE programmers without knowing it... dBASE brought programming power to the masses."

dBASE programs are written as procedural routines called Commands, or .CMD files. dBASE helpfully includes a built-in (stripped down) text editor for writing these, though any text editor will work.
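Speaking of Command files, the per-record profit tally I wanted earlier maps onto exactly the kind of loop one automates. Here it is sketched in Python; the field names and figures are my inventions for illustration:

```python
# The per-record running tally a dBASE Command file would loop over.
# Field names (cost, price, stock) and the sample data are assumptions.
inventory = [
    {"title": "Zork I",   "cost": 25.00,  "price": 39.95,  "stock": 3},
    {"title": "WordStar", "cost": 300.00, "price": 495.00, "stock": 2},
]

profit = 0.0
for record in inventory:  # step through records, one at a time
    profit += (record["price"] - record["cost"]) * record["stock"]
print("%.2f" % profit)    # 434.85
```

Stepping through records while accumulating into a variable is precisely what the dot prompt alone can't do and a Command file can.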
Once written, a .CMD file can be invoked with the DO command. As Martel said, I seem to have become a dBASE programmer without really trying. Everything I've learned so far hasn't just been dot prompt commands, it has all been valid dBASE code. A command at the dot prompt is really just a one-line program. Cool beans! Some extra syntax exists for the purpose of development: decision branching, loops, user input, positioned screen output, and direct memory access. With these tools, menus which add a veneer of approachability to a dBASE database are trivial to create. Commands are interpreted, not compiled (that would come later), so how were these solutions sold to lawyers without bundling a full copy of dBASE with every Command file? For a while dBASE II was simply a requirement to use after-market dBASE solutions. The 1983 release of dBASE Runtime changed that, letting a user run a file, but not edit it. A Command file bundled with Runtime was essentially transformed into a standalone application. Knowing this, we're now ready to charge 2026 US$10,000 per seat for case management and tracking systems for attorneys. Hey, look at that, this section did help me with my profit calculation troubles. I can write a Command file and bask in the glow of COMPUTRON/X's shining, profitable future.

During the 8-to-16-bit era bridge, new hardware often went underutilized as developers came to grips with what the new tools could do. Famously, VisiCalc's first foray onto 16-bit systems didn't leverage any of the expanded RAM on the IBM PC and intentionally kept all known bugs from the 8-bit Apple II version. The term "stopgap" comes to mind. Corporate America couldn't just wait around for good software to arrive. CP/M compatibility add-ons were a relatively inexpensive way to gain instant access to thousands of battle-tested business software titles. Even a lowly Coleco ADAM could, theoretically, run WordStar and Infocom games, the thought of which kept me warm at night as I suffered through an inferior Dragon's Lair adaptation. They promised a laserdisc attachment!
For US$600 in 1982 ($2,000 in 2026) your new-fangled 16-bit IBM-PC could relive the good old days of 8-bit CP/M-80. Plug in XEDEX's "Baby Blue" ISA card with its Z80B CPU and 64K of RAM and the world is your slowly decaying oyster. That RAM is also accessible in 16-bit DOS, serving dual-purpose as a memory expansion for only $40 more than IBM's own bare bones 64K board. PC Magazine's February 1982 review seemed open to the idea of the card, but was skeptical it had long-term value. XEDEX suggested the card could someday be used as a secondary processor, offloading tasks from the primary CPU to the Z80, but never followed through on that threat, as far as I could find.

Own an Apple II with an 8-bit 6502 CPU but still have 8-bit Z80 envy? Microsoft offered a Z80 daughter-card with 64K RAM for US$399 in 1981 ($1,413 in 2026). It doesn't provide the 80-column display you need to really make use of CP/M software, but is compatible with such add-ons. It was Bill Gates's relationship with Gary Kildall, as a major buyer of CP/M for this very card, that started the whole ball rolling with IBM, Gates's purchase of QDOS, and the rise of Microsoft. A 16K expansion option could combine with the Apple II's built-in 48K memory to get about 64K for CP/M usage. BYTE Magazine's November 1981 review raved, "Because of the flexibility it offers Apple users, I consider the Softcard an excellent buy." Good to know!

How does one add a Z80 processor to a system with no expansion slots? Shove a Z80 computer into a cartridge and call it a day, apparently. This interesting, but limited, footnote in CP/M history does what it says, even if it doesn't do it well. Compute!'s Gazette wrote, "The 64 does not make a great CP/M computer. To get around memory limitations, CP/M resorts to intensive disk access. At the speed of the 1541, this makes programs run quite slowly." Even worse for CP/M users is that the slow 1541 can't read CP/M disks. Even if it could, you're stuck in 40-column mode.
How were users expected to get CP/M software loaded? We'll circle back to that a little later. At any rate, Commodore offered customers an alternative solution. Where its older brother had to make do with a cartridge add-on, the C128 takes a different approach. To maintain backward compatibility with the C64 it includes a 6510-compatible processor, the 8502. It also wants to be CP/M compatible, so it needs a Z80 processor. What to do, what to do? Maybe they could put both processors into the unit? Is that allowed? Could they do that? They could, so they did. CP/M came bundled with the system, which has a native 80-column display in CP/M mode. It is ready to go with the newer, re-programmable 1571 floppy drive. Unfortunately, its slow bus speed forces the Z80 to run at only 2MHz, slower even than a Kaypro II. Compute!'s Gazette said in their April 1985 issue, "CP/M may make the Commodore 128 a bargain buy for small businesses. The price of the Commodore 128 with the 1571 disk drive is competitive with the IBM PCjr." I predict rough times ahead for the PCjr if that's true!

Atari peripherals have adorable industrial design, but were quite expensive thanks to a strange system design decision. The 8-bit system's nonstandard serial bus necessitated specialized data encoding/decoding hardware inside each peripheral, driving up unit costs. For example, the Atari 810 5 1/4" floppy drive cost $500 in 1983 (almost $2,000 in 2026) thanks to that special hardware, yet only stored a paltry 90K per disk. SWP straightened out the Atari peripheral scene with the ATR8000. Shenanigans with special controller hardware are eliminated, opening up a world of cheaper, standardized floppy drives of all sizes and capacities. It also accepts Centronics parallel and RS-232C serial devices, making tons of printers, modems, and more compatible with the Atari.
The device also includes a 16K print buffer and the ability to attach up to four floppy drives without additional controller board purchases. A base ATR8000 can replace a whole stack of expensive Atari-branded add-ons, while being more flexible and performant. The saying goes, "Cheaper, better, faster. Pick any two." The ATR8000 is that rare device which delivered all three. Now, upgrade that box with its CP/M compatibility option, adding a Z80 and 64K, and you've basically bought a second computer. When plugged into the Atari, the Atari functions as a remote terminal into the unit, using whatever 40/80-column display adapter you have connected. It could also apparently function standalone, accessible through any terminal, no Atari needed. That isn't even its final form. The Co-Power-88 is a 128K or 256K PC-compatible add-on to the Z80 CP/M board. When booted into the Z80, that extra RAM can be used as a RAM disk to make CP/M fly. When booted into the 8088, it's a full-on PC running DOS or CP/M-86. Tricked out, this eight-pound box would set you back US$1000 in 1984 ($3,000 in 2026), but it should be obvious why this is a coveted piece of kit for the Atari faithful to this day.

For UK£399 in 1985 (£1288 in 2026; US$1750) Acorn offered a Z80 with a dedicated 64K of RAM. According to the manual, the Z80 handles the CP/M software, while the 6502 in the base unit handles floppies and printers, freeing up CP/M RAM in the process. Plugged into the side of the BBC Micro, the manual suggests desk space clearance of 5 feet wide and 2 1/2 feet deep. My god. Acorn User June 1984 declared, "To sum up, Acorn has put together an excellent and versatile system that has something for everyone." I'd like to note that glowing review was almost exclusively thanks to the bundled CP/M productivity software suite. Their evaluation didn't seem to try loading off-the-shelf software, which caused me to narrow my eyes and stroke my chin in cynical suspicion.
Flip through the manual to find out about obtaining additional software, and it gets decidedly vague. "You’ll find a large and growing selection available for your Z80 personal computer, including a special series of products that will work in parallel with the software in your Z80 pack."

Like the C128, the Coleco ADAM was a Z80-native machine, so CP/M can work without much fuss, though the box does proclaim "Made especially for ADAM!" Since we don't have to add hardware (well, we need a floppy; the ADAM only shipped with a high-speed cassette drive), we can jump into the ecosystem for about US$65 in 1985 ($200 in 2026). Like other CP/M solutions, the ADAM really needed an 80-column adapter, something Coleco promised but never delivered. Like Dragon's Lair on laserdisc! As it stands, CP/M scrolls horizontally to display all 80 columns. This version adds ADAM-style UI for its quaint(?) Roman numeral function keys. OK, CP/M is running! Now what?

To be honest, I've been toying with you this whole time, dangling the catnip of CP/M compatibility. It's time to come clean and admit the dark side of these add-on solutions. There ain't no software! Even when the CPU and CP/M version were technically compatible, floppy disk format was the sticking point for getting software to run on any given machine. For example, the catalog for Kaypro software in 1984 is 896 pages long. That is all CP/M software, and all theoretically compatible with a BBC Micro running CP/M. However, within that catalog, everything shipped expressly on Kaypro-compatible floppy disks. Do you think a Coleco ADAM floppy drive can read Kaypro disks? Would you be even the tiniest bit shocked to learn it cannot? Kaypro enthusiast magazine PRO illustrates the issue facing consumers back then. Let's check in on the Morrow Designs (founded by Computer Chronicles sometimes co-host George Morrow!) CP/M system owners. How do they fare? OK then, what about that Baby Blue from earlier?
The Microsoft Softcard must surely have figured something out. The Apple II was, according to Practical Computing, "the most widespread CP/M system" of its day. Almost every product faced the same challenge. On any given CP/M-80 software disk, the byte code is compatible with your Z80, provided your floppy drive can read the diskette. You couldn't just buy a random CP/M disk, throw it into a random CP/M system, and expect it to work, which would have been a crushing blow to young me hoping to play Planetfall on the ADAM. So what could be done? There were a few options, none of them particularly simple or straightforward, especially to those who weren't technically-minded.

Some places offered transfer services. XEDEX, the makers of Baby Blue, would do it for $100 per disk. I saw another listing for a similar service (different machine) at $10 per disk. Others sold the software pre-transferred, as noted on a Coleco ADAM service flyer. A few software solutions existed, including Baby Blue's own Convert program, which shipped with their card and "supports bidirectional file transfer between PC-DOS and popular CP/M disk formats." They also had the Baby Blue Conversion Software, which used emulation to "turn CP/M-80 programs into PC-DOS programs for fast, efficient execution on Baby Blue II." Xeno-Copy, by Vertex Systems, could copy from over 40 disk formats onto PC-DOS for US$99.50 ($313 in 2026); their Plus version promised cross-format read/write capabilities. Notably, Apple, Commodore, Apricot, and other big names are missing from their compatibility list. The Kermit protocol, once installed onto a CP/M system disk, could handle cross-platform serial transfers, assuming you had the additional hardware necessary.
"CP/M machines use many different floppy disk formats, which means that one machine often cannot read disks from another CP/M machine, and Kermit is used as part of a process to transfer applications and data between CP/M machines and other machines with different operating systems." The Catch-22 of it all is that you have to get Kermit onto your CP/M disk in the first place. Hand-coding a bare-bones Kermit protocol (CP/M ships with an assembler) for the purposes of getting "real" Kermit onto your system so you could then transfer the actual software you wanted in the first place, was a trick published in the Kermit-80 documentation . Of course, this all assumes you know someone with the proper CP/M setup to help; basically, you're going to need to make friends. Talk to your computer dealer, or better yet, get involved in a local CP/M User's Group. It takes a village to move Wordstar onto a C64. I really enjoyed my time learning dBASE II and am heartened by the consistency of its commands and the clean interaction between them. When I realized that I had accidentally learned how to program dBASE , that was a great feeling. What I expected to be a steep learning curve wasn't "steep" per se, but rather just intimidating. That simple, blinking cursor, can feel quite daunting at the first step, but each new command I learned followed a consistent pattern. Soon enough, simple tools became force multipliers for later tools. The more I used it, the more I liked it. dBASE II is uninviting, but good. On top of that, getting data out into the real world is simple, as you'll see below in "Sharpening the Stone." I'm not locked in. So what keeps me from being super enthusiastic about the experience? It is CP/M-80 which gives me pause. The 64K memory restriction, disk format shenanigans, and floppy disk juggling honestly push me away from that world except strictly for historical investigations. Speaking frankly, I don't care for it. 
CP/M-86 running dBASE III+ could probably win me over, though I would probably try DR-DOS instead. Memory constraints would be essentially erased, DOSBox-X is drag-and-drop trivial for moving files in and out of the system, and dBASE III+ is more powerful while also being more user-friendly. Combine that with Clipper, which can compile dBASE applications into standalone .exe files, and there's powerful utility to be had. By the way, did you know dBASE is still alive? Maybe. Kinda? Hard to say. The latest version is dBASE 2019 (not a typo!), but the site is unmaintained and my appeal to their LinkedIn for a demo has gone unanswered. Its owner, dBase LTD, sells dBASE Classic, which is dBASE V for DOS running in DOSBox: a confession, I'd humbly suggest, that they know they lost the plot. An ignominious end to a venerable classic.

Ways to improve the experience, notable deficiencies, workarounds, and notes about incorporating the software into modern workflows (if possible). When working with CP/M disk images, get to know cpmtools. This is a set of command line utilities for creating, viewing, and modifying CP/M disk images. The tools mostly align with Unix commands, prefixed with "cpm" (cpmls, cpmcp, cpmrm, and so on). Those are the commands I wound up using with regularity. If your system of choice is a "weirdo system" you may be restricted in your disk image/formatting choices; these instructions may be of limited or no help. cpmtools knows about Kaypro II disk layouts via diskdefs. This GitHub fork makes it easy to browse supported types. Here's what I did. Now that you can pull data out of CP/M, here's how to make use of it.

Kaypro II emulation running in MAME. Default setup includes:
- Dual floppies
- Z80 CPU at 2.4MHz
- dBASE II v2.4
See "Sharpening the Stone" at the end of this post for how to get this going. Personally, I found this to be a tricky process to learn.

My to-do list:
- Change the width of the rating field and add in that data.
- Add pricing fields and related data.
- Add more games.
IF and DO CASE allow decision branching; DO WHILE does iterations; WAIT and ACCEPT will grab a character or string from the user; @ ... SAY prints text to screen at a specific character position; PEEK and POKE give control over system memory; and CALL will run an assembly routine at a known memory location. For this article I specifically picked a period-authentic combo of Kaypro II + CP/M 2.2 + dBASE II 2.4. You don't have to suffer my pain! CP/M-86 and dBASE III+ running in a more feature-rich emulator would be a better choice for digging into non-trivial projects. I'm cold on MAME for computer emulation, except that in this case it was the fastest option for spinning up my chosen tools. It works, and that's about all I can say I enjoyed. That's not nothing! I find I prefer the robust settings offered in products like WinUAE, Virtual ADAM, VICE, and others. Emulators with in-built disk tools are a luxury I have become addicted to. MAME's interface is an inelegant way to manage hardware configurations and disk swapping. MAME has no printer emulation, which I like to use for a more holistic retro computing experience. Getting a working, trouble-free copy of dBASE II onto a Kaypro II compatible disk image was a non-trivial task. It's easier now that I know the situation, but it took some cajoling. I had to create new, blank disks, and copy CP/M and dBASE over from other disk images. Look below under "Getting Your Data into the Real World" to learn about cpmtools and how it fits into the process. Be careful of modern keyboard conventions, especially wanting to hit Esc to cancel commands. In MAME this will hard quit the emulator with no warning! Exported data exhibited strange artifacts. The big one: it didn't export any "logical" (boolean) field values from my database; it just left that field blank on all records. Field names are not exported. Garbage data appeared after the last record, though the records themselves imported fine. On Linux and Windows (via WSL), install cpmtools from your package manager. cpmls: view the contents of a CP/M disk image.
Use the -f flag to tell it the format of the disk, like -f kpii for the Kaypro II. mkfs.cpm: format a disk image with a CP/M file system. cpmcp: copy files to/from another disk or to the host operating system. cpmrm: remove files from a CP/M disk image. dd (or similar): for making new, blank disk image files (still needs to be formatted). The sequence: dd makes a blank disk image to single-sided, double-density specification; mkfs.cpm formats that blank image for the Kaypro II; cpmcp copies "DBASE.COM" from the current directory of the host operating system into the Kaypro II disk image; cpmls displays the contents of the disk; and cpmcp copies "FILE.TXT" from the disk image into the current directory of the host operating system (i.e. "."). dBASE has built-in exporting functionality, so long as you use the right file extension when saving (COPY, in dBASE lingo). That creates a bog-standard ASCII text file, each record on its own line, comma-delimited (and ONLY comma-delimited). It is not Y2K compatible, if you're hoping to record today's date in a field. I tackled this a bit in the Superbase post. It is probably possible to hack up a Command file to work around this issue, since dates are just strings in dBASE. dBASE II doesn't offer the relational robustness of SQL. Many missing, useful tools could be built in the xBase programming language. It would be significant work in some cases; maybe not worth it, so consider whether you can do without those. Your needs may exceed what CP/M-80 hardware can support; its 8-bit nature is a limiting factor in and of itself. If you have big plans, consider dBASE III+ on DOS to stretch your legs. (I read dBASE IV sucks.) The user interface helps at times, and is opaque at other times. This can be part of the fun in using these older systems, mastering esoterica for esoterica's sake, but may be a bridge too far for serious work of real value. Of course, when discussing older machines we are almost always excluding non-English speakers thanks to the limitations of ASCII. The world just wasn't as well-connected at the time.
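Those export quirks (no header row, blank logical fields, garbage after the last record) practically beg for a cleanup pass once the file reaches the modern side. Here's a minimal Python sketch of such a pass; the field names and sample data are invented for illustration, but the three repairs match the artifacts described above.

```python
import csv
import io

# Hypothetical field names -- the dBASE II export contains no header row,
# so we have to supply our own (these imagine a small inventory database).
FIELDNAMES = ["title", "qty", "in_stock"]

def clean_export(raw_text, fieldnames):
    """Tidy a dBASE II comma-delimited export:
    - prepend a header row (the export has none),
    - drop trailing garbage lines with the wrong field count,
    - default blank logical (boolean) fields to 'F'.
    Assumes the last column is the logical field."""
    rows = [fieldnames]
    for line in raw_text.splitlines():
        fields = line.split(",")
        if len(fields) != len(fieldnames):
            continue  # garbage after the last record
        # dBASE II leaves logical fields blank on export; default to false.
        if fields[-1].strip() == "":
            fields[-1] = "F"
        rows.append([f.strip() for f in fields])
    out = io.StringIO()
    csv.writer(out).writerows(rows)
    return out.getvalue()

sample = "ZORK I,12,T\nPLANETFALL,3,\n\x1a\x00garbage"
print(clean_export(sample, FIELDNAMES))
```

From there, the cleaned CSV drops straight into a spreadsheet or sqlite3 import without complaint.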

Martin Fowler 2 weeks ago

Fragments: February 9

Some more thoughts from last week’s open space gathering on the future of software development in the age of AI. I haven’t attributed any comments since we were operating under the Chatham House Rule , but should the sources recognize themselves and would like to be attributed, then get in touch and I’ll edit this post. ❄                ❄ During the opening of the gathering, I commented that I was naturally skeptical of the value of LLMs. After all, the decades have thrown up many tools that have claimed to totally change the nature of software development. Most of these have been little better than snake oil. But I am a total, absolute skeptic - which means I also have to be skeptical of my own skepticism. ❄                ❄ One of our sessions focused on the problem of “cognitive debt”. Usually, as we build a software system, the developers of that system gain an understanding of both the underlying domain and the software they are building to support it. But once so much work is sent off to LLMs, does this mean the team no longer learns as much? And if so, what are the consequences of this? Can we rely on The Genie to keep track of everything, or should we take active measures to ensure the team understands more of what’s being built and why? The TDD cycle involves a key (and often under-used) step to refactor the code. This is where the developers consolidate their understanding and embed it into the codebase. Do we need some similar step to ensure we understand what the LLMs are up to? When the LLM writes some complex code, ask it to explain how it works. Maybe get it to do so in a funky way, such as asking it to explain the code’s behavior in the form of a fairy tale. ❄                ❄ OH: “LLMs are drug dealers, they give us stuff, but don’t care about the resulting system or the humans that develop and use it”. Who cares about the long-term health of the system when the LLM renews its context with every cycle?
❄                ❄ Programmers are wary of LLMs not just because folks are worried for their jobs, but also because we’re scared that LLMs will remove much of the fun from programming. As I think about this, I consider what I enjoy about programming. One aspect is delivering useful features - which I only see improving as LLMs become more capable. But, for me, programming is more than that. Another aspect I enjoy about programming is model building. I enjoy the process of coming up with abstractions that help me reason about the domain the code is supporting - and I am concerned that LLMs will cause me to spend less attention on this model building. It may be, however, that model-building becomes an important part of working effectively with LLMs, a topic Unmesh Joshi and I explored a couple of months ago. ❄                ❄ In the age of LLMs, will there still be such a thing as “source code”, and if so, what will it look like? Prompts, and other forms of natural language context can elicit a lot of behavior, and cause a rise in the level of abstraction, but also a sideways move into non-determinism . In all this is there still a role for a persistent statement of non-deterministic behavior? Almost a couple of decades ago, I became interested in a class of tools called Language Workbenches . They didn’t have a significant impact on software development, but maybe the rise of LLMs will reintroduce some ideas from them. These tools rely on a semantic model that the tool persists in some kind of storage medium, that isn’t necessarily textual or comprehensible to humans directly. Instead, for humans to understand it, the tools include projectional editors that create human-readable projections of the model. Could this notion of a non-human deterministic representation become the future source code? One that’s designed to maximize expression with minimal tokens? ❄                ❄ OH: “Scala was the first example of a lab-leak in software.
A language designed for dangerous experiments in type theory escaped into the general developer population.” ❄                ❄                ❄                ❄                ❄ elsewhere on the web Angie Jones on tips for open source maintainers to handle AI contributions I’ve been seeing more and more open source maintainers throwing up their hands over AI generated pull requests. Going so far as to stop accepting PRs from external contributors. But yo, what are we doing?! Closing the door on contributors isn’t the answer. Open source maintainers don’t want to hear this, but this is the way people code now, and you need to do your part to prepare your repo for AI coding assistants. ❄                ❄                ❄                ❄                ❄ Matthias Kainer has written a cool explanation of how transformers work with interactive examples Last Tuesday my kid came back from school, sat down and asked: “How does ChatGPT actually know what word comes next?” And I thought - great question. Terrible timing, because dinner was almost ready, but great question. So I tried to explain it. And failed. Not because it is impossibly hard, but because the usual explanations are either “it is just matrix multiplication” (true but useless) or “it uses attention mechanisms” (cool name, zero information). Neither of those helps a 12-year-old. Or, honestly, most adults. Also, even getting to start my explanation was taking longer than a tiktok, so my kid lost attention span before I could even say “matrix multiplication”. I needed something more visual. More interactive. More fun. So here is the version I wish I had at dinner. With drawings. And things you can click on. Because when everything seems abstract, playing with the actual numbers can bring some light. A helpful guide for any 12-year-old, or a 62-year-old that fears they’re regressing. 
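Kainer's complaint that "it is just matrix multiplication" is true but useless can be softened with a toy. Below is a minimal Python sketch of single-head attention for one query; the three "words" and their two-number meaning vectors are invented, and no real model works at this scale, but the mechanics (score each key against the query, softmax the scores, blend the values) are the same ones his interactive examples animate.

```python
import math

def softmax(xs):
    """Turn arbitrary scores into positive weights that sum to 1."""
    exps = [math.exp(x - max(xs)) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Single-head attention for one query vector: dot-product scores
    (scaled by sqrt of dimension), softmax into weights, then a
    weighted blend of the value vectors."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    blended = [sum(w * v[i] for w, v in zip(weights, values))
               for i in range(len(values[0]))]
    return weights, blended

# Three toy "words", each with a made-up 2-number meaning vector.
keys   = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
values = [[10.0, 0.0], [0.0, 10.0], [5.0, 5.0]]
query  = [1.0, 0.0]   # "which words look like me?"

weights, blended = attention(query, keys, values)
print(weights, blended)
```

The words most similar to the query get the largest weights, so the blended result leans toward their values; that is the whole trick, repeated across many heads and layers.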
❄                ❄                ❄                ❄                ❄ In my last fragments , I included some concerns about how advertising could interplay with chatbots. Anthropic have now made some adverts about concerns about adverts - both funny and creepy. Sam Altman is amused and annoyed.

Stone Tools 4 weeks ago

Scala Multimedia on the Commodore Amiga

The ocean is huge. It's not only big enough to separate landmasses and cultures, but also big enough to separate ideas and trends. Born and raised in the United States, I couldn't understand why the UK was always eating so much pudding. Please forgive my pre-internet cultural naiveté. I should also be kind to myself for thinking the Video Toaster was the be-all-end-all for video production and multimedia authoring on the Amiga. Search Amiga World metadata on Internet Archive for "toaster" and "scala" and you'll see my point. "Toaster" brings up dozens of top-level hits, and "Scala" gets zero. The NTSC/PAL divide was as vast as the ocean. From the States, cross either ocean and Scala was everywhere, including a full copy, physical-dongle copy protection removed, distributed on the cover disk of CU Amiga Magazine, issue 96. Listening to Scala founder Jon Bøhmer speak of Scala's creation in an interview on The Retro Hour, it's clear his early intuition about the Amiga's potential in television production built Scala into an omnipresent staple across multiple continents. Intuition alone can't build an empire, though. Bøhmer also had the gladiatorial aggression needed to maintain his dominance in that market. As he recounted, "A Dutch company tried to make a Scala clone, and they made a mistake of putting...the spec sheet on their booth and said all those different things that Scala didn't have yet. So I took that spec sheet back to my developers (then, later) lo and behold before those guys had a bug free version out on the street, we had all their features and totally eradicated their whole proposal." Now, of course I understand that it would have been folly to ignore the threat. Looked at from another angle, Scala had apparently put themselves in a position where their dominance could face a legitimate threat from a disruptor. Ultimately, that's neither here nor there; in the end, Scala had early momentum and could swing the industry in their direction.
Scala (the software) remains alive and well even now, in the digital signage authoring and playback software arena. You know the stuff, like interactive touchscreens at restaurant checkouts, or animated displays at retail stores. As with the outliner/PIM software in the ThinkTank article , the world of digital signage is likewise shockingly crowded. Discovering this felt like catching a glimpse of a secondary, invisible world just below the surface of conscious understanding. Scala didn't find success without good reason. It solved some thorny broadcast production issues on hardware that was alone in its class for a time. A unique blend of software characteristics (multitasking, IFF, ARexx) turned an Amiga running Scala into more than the sum of its parts. Scala by itself would have made rumbles. Scala on the Amiga was seismic. At heart, I'm a print guy. Like anyone, I enjoy watching cool video effects, and I once met Kiki Stockhammer in person. But my brain has never been wired for animation or motion design. My 3D art was always static; my designs were committed to ink on paper. I liked holding a physical artifact in my hands at the end of the design process. Considering the sheer depths of my video naivete, for this investigation I will need a lot of help from the tutorials. I'll build the demo stuff from the manual, and try to push myself further and see where my explorations take me. CU Amiga Magazine issues 97 - 102 contain Scala MM300 tutorials as well, so I'll check those out for a man-on-the-streets point of view. The first preconception I need to shed is thinking Scala is HyperCard for the Amiga. It flirts with certain concepts, but building Myst with this would be out of reach for most people. I'll never say it's "impossible," as I don't like tempting the Fates that way, but it would need considerable effort and development skills. A little terminology is useful before we really dig in. 
I usually start an exploration of GUI applications by checking out the available menus. With Scala, there aren't any. I don't mean the menubar is empty, I mean there isn't a menubar, period. It does not exist. I am firmly in Scala Land and Scala's vision of how multimedia work gets done. As with PaperClip, I find its opinionated interface comforting. I have serious doubts about common assumptions of interface homogeneity being a noble goal, but that's a discussion for a future post. Despite its plain look, what we see when the program launches is richly complex. Anything in purple (or whatever your chosen color scheme uses) is clickable, and if it has its own boundaries it does its own thing. Across the top we have the Scala logo, program title bar, and the Amiga Workbench "depth gadget." Clicking the logo is how we save our project and/or exit the program. Then we have what is clearly a list, and judging from interface cues it's a list of pages. This list ("script") is akin to a HyperCard stack with transitions ("wipes") between cards ("pages"). Each subsection of any given line item is its own button for interfacing with that specific aspect of the page. It's approachable and nonthreatening, and to my mind encourages me to just click on things and see what happens. The bottom-sixth holds an array of buttons that would normally be secreted away under standard Amiga GUI menus. On the one hand, this means if you see it, it's available; no poking through dozens of greyed-out items. On the other hand, keyboard shortcuts and deeper tools aren't exposed. There's no learning through osmosis here. Following the tutorial, the first thing to do is define my first "page." Click "New," choose a background as a visual starting point if you like, click "OK", choose a resolution and color depth (this is per-screen, not per-project), and click "OK" to finish. The program steps me through the process; it is clear how to proceed.
The design team for Scala really should be commended for the artistic craftsmanship of the product. It is easy to put something professional together with the included backgrounds, images, and music. Everything is tasteful and (mostly) subdued, if occasionally aesthetically "of its time." Thanks to IFF support, if you don't like the built-in assets, you can create your own in one of the Amiga's many paint or music programs. That visual care extends to the included fonts, which are a murderer's row of well-crafted classics. All the big stars are here! Futura, Garamond, Gill Sans, Compact, and more. Hey, is that Goudy I see coming down the red carpet? And behind them? Why it's none other than Helvetica, star of their own hit movie that has the art world buzzing! And, oh no! Someone just threw red paint all over Franklin Gothic. What a shame, because I'm pretty sure that's a pleather dress. The next screen is where probably 85% of my time will be spent. One thing I've noticed with the manual is a lack of getting the reader up to speed on the nomenclature of the program. This screen contains the "Edit Menu" but is that what I should call this screen? The "Edit Menu" screen? Screen layouts are called "pages." Is this the "Page Edit" screen? Anyway, the "Edit Menu" gives a lot of control, both fine and coarse, for text styling, shape types, creating buttons, setting the color palette, coordinating object reveals, and more. Some buttons hide extra options for styling or importing other resources, and it could be argued the interface works against itself a bit. As Scala has chosen to eschew typical Amiga GUI conventions, they walk a delicate line of showing as much as possible, while avoiding visual confusion. It never feels overwhelming, but only just, and it could stand to borrow popup menus from the MacOS playbook rather than cycling through options. Entering text is simple; click anywhere on the screen and begin typing.
Where it gets weird is how Scala treats all text as one continuous block. Every line is ordered by Y-position on screen, but every line is connected to the next. Typing too much on a given line will spill over into the next line down, wherever it may be, and however it may be styled. Text weirdness in the Edit Screen. (I think I had a Trapper Keeper in that pattern.) The unobtrusive buttons "IN" and "OUT" on the left define how the currently selected object will transition into or out of the screen. Doing this by mouse selection is kind of a drag, as there is no visible selection border for the object being modified. There is an option to draw boxes around objects, but there is no differentiation of selected vs. unselected objects, except when there is. It's a bit inconsistent. The "List" button reveals a method for assigning transitions and rearranging object timings precisely. It is quickly my preferred method for anything more complex than "a simple piece of text flies into view." As a list we can define only a pure sequence. Do a thing. Do a second thing. Do a third thing. The end. Multiple items can be "chained" to perform precisely the same wipe as the parent object, with no variation. It's a grouping tool, not a timing tool. "List" editing of text effect timings. Stay tuned for the sequel: "celeriac and jicama" I'm having a lot of fun exploring these tools, and have immediately wandered off the tutorial path just to play around. Everything works like I'd expect, and I don't need to consult the manual much at all. There are no destructive surprises nor wait times. I click buttons and see immediate results; my inquisitiveness is rewarded. Pages with animation are all good and well, but it is interactivity which elevates a Scala page over the stoicism of a PowerPoint slide. That means it's time for the go-to interaction metaphor: the good ole' button.
Where HyperCard has the concept of buttons as objects, in Scala a button is just a region of the screen. It accepts two events, mouse enter and mouse click, though it burdens these simple actions with the confusing names "mark" and "select". I mix up these terms constantly in my mind. To add a button, draw a box. Alternatively, click something you've drawn and a box bound to that object's dimensions will be auto-generated. Don't be fooled! That box is not tethered to the object. It just happens to be sized precisely to the object's current dimensions and position on screen, as a helpful shortcut to generate the most-likely button for your needs. Button interactions can do a few things. First, a button can adjust colors within its boundaries. Amiga palettes use indexed color, so color swaps are trivial and pixel-perfect. Have some white text that should highlight in red when the mouse enters it? Set the "mark" (mouse enter) palette to remap white to red. Same for "select" (mouse click); a separate palette remap could turn the white to yellow on click. Why am I talking about this when I can just show you? I intentionally drew the button to be half the text height to illustrate that the button has no relation to the text itself. Color remapping occurs within button boundaries. The double palettes represent the current palette (top), and the remapped palette (bottom). Buttons can also contain simple logic, setting or reading global variable states to determine how to behave at any given moment. IF-THEN statements can likewise be embedded to route presentation order based on those variables. So, a click could add +1 to a global counter, then if the counter is a certain value it could transition to a corresponding page. If we feel particularly clever with index color palette remapping, it is possible to give the illusion of complete image replacement. Buttons do not need any visible attributes, nor do they need to be mouse-clicked to perform their actions.
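The indexed-color trick behind those "mark" and "select" remaps is simple to model. Here's a toy Python sketch (the palette, screen, and button region are all invented for illustration): pixels store palette slots rather than colors, so a remap is just an index substitution, applied only inside the button's rectangle.

```python
# Toy model of Amiga indexed color: pixels hold palette indices, not
# RGB values, so a "remap" swaps indices within a region.
WHITE, RED, BLUE = 0, 1, 2
palette = {WHITE: (255, 255, 255), RED: (255, 0, 0), BLUE: (0, 0, 255)}

# A 4x8 screen of palette indices: a row of white "text" on blue.
screen = [[BLUE] * 8 for _ in range(4)]
for x in range(2, 6):
    screen[1][x] = WHITE

def remap_region(screen, region, mapping):
    """Apply a Scala-style 'mark' remap: inside the button rectangle,
    substitute palette indices per the mapping. Pixels outside the
    region keep their original index, which is why a button half the
    height of its text only recolors half the glyphs."""
    x0, y0, x1, y1 = region
    for y in range(y0, y1):
        for x in range(x0, x1):
            screen[y][x] = mapping.get(screen[y][x], screen[y][x])

# "Mouse enter" over a button covering the text: white -> red.
remap_region(screen, (0, 0, 8, 2), {WHITE: RED})
print(screen[1])
```

A "select" remap is the same operation with a second mapping (say, white to yellow), which is why Scala can show two palettes stacked per state.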
If "Function Keys" are enabled at the Scala "System" level, the first 10 buttons on a page are automatically linked to F1 - F10. A sample script which ships with Scala demonstrates F-Key control over a page in real-time, altering the values of sports scores by set amounts. This is a clever trick, and with deeper thought opens up interesting possibilities. If every page in a script were to secretly contain such a set of buttons, a makeshift control panel could function like a "video soundboard" of sorts. F-Keys could keep a presentation dynamic, perhaps reacting to live audience participation. I mention this for no particular reason and it is not a setup for a later reveal. ahem Once we've made some pages, its time to stitch them together into a proper presentation, a "script" in Scala parlance. This all happens in the "Main Menu" which works similarly to the "List" view when editing page text elements, with a few differences. "Wipe" is the transition from the previous page to the selected page. If you want to wipe "out" from a page with transition X, then wipe "in" to next page with transition Y, a page must be added in-between to facilitate that. The quality of the real-time wipe effects surprises me. Again, my video naivete is showing, because I always thought the Amiga needed specialized hardware to do stuff like this, especially when there is video input. The wipes are fun, if perhaps a little staid compared to the Toaster 's. In Scala 's defense, they remain a bit more timeless in their simplicity. "Pause" controls, by time or frame count, how long to linger on a page before moving on to the next one. Time can be relative to the start of the screen reveal, or absolute so as to coordinate Scala animations with known timestamps on a pre-recorded video source. A mouse click can also be assigned as the "pause," waiting for a click to continue. "Sound" attaches a sound effect, or a MOD music file, to the reveal. 
There are rudimentary tools for adjusting pitch and timing, and even for trimming sounds to fit. An in-built sampler makes quick, crunchy, low-fidelity voice recordings, for when you need to add a little extra pizazz in a pinch, or to rough out an idea to see how it works. Sometimes the best tool for the job is the one you have with you. There are hidden tools on the Main Menu. Like many modern GUI table views, the gap between columns is draggable. Narrowing the "Name" column reveals two hidden options to the right: Variables and Execute. Now I'm finally getting a whiff of HyperCard . Unlike HyperCard , these tools are rather opaque and non-intuitive. Right off the bat, there is no built-in script editor. Rather, Scala is happy to position itself as one tool in your toolbox, not to provide every tool you need out of the box. It's going to take some time to get to know how these work, perhaps more than I have allocated for this project, but I'll endeavor to at least come to grips with these. The Scala manual says, "The Scala definition of variables (closely resembles) ARexx, since all variable operators are performed by ARexx." After 40 years, I guess it's time to finally learn about ARexx. ARexx is the Amiga implementation of the REXX scripting language . From ARexx User's Reference Manual, "ARexx is particularly well suited as a command language. Command programs, sometimes called "scripts" or "macros", are widely used to extend the predefined commands of an operating system or to customize an applications program." This is essentially the Amiga's AppleScript equivalent, a statement which surely has a pedant somewhere punching their 1084 monitor at my ignorance. Indeed, the Amiga had ARexx before Apple had AppleScript, but not before Apple had HyperCard . 
Amiga Magazine , August 1989, described it thusly, "Amiga's answer to HyperCard is found in ARexx, a programming and DOS command language, macro processor, and inter-process controller, all rolled into one easy-to-use command language." "Easy-to-use" you say? Commodore had their heart in the right place, but the "Getting Acquainted" section of the ARexx manual immediately steers hard into programmer-speak. From the jump we're hit with stuff like, "(ARexx) uses the double-precision math library called "mathieeedoubbas.library" that is supplied with the Amiga WorkBench disk, so make sure that this file is present in your LIBS: directory. The distribution disk includes the language system, some example programs, and a set of the INCLUDE files required for integrating ARexx with other software packages." I know exactly what I'd have thought back in the day. What is a "mathieeedoubbas?" What is a "library?" Is "LIBS" and "library" the same thing? What is "double-precision?" What is "INCLUDE"? What is a "language system?" You, manual, said yourself on page 2, "If you are new to the REXX language, or perhaps to programming itself, you should review chapters 1 through 4." So far, that ain't helpin'. Luckily for young me, now me knows a thing or two about programming and can make sense of this stuff. Well, "sense" in the broadest definition only. What this means for Scala is that we have lots of options for handling variables and logic in our project. The manual says, "Any ARexx operators and functions can be used (in the variable field)." However, a function like "Say," which outputs text to console, doesn't make any sense in a Scala context, so I'm not always 100% clear where lie the boundaries of useful operators and functions. 
In addition to typical math functions and simple string concatenation, ARexx gives us boolean and equality checks, bitwise operators, random number generation, string-to-digit conversion, string filtering and trimming, the current time, and a lot more. Even checking for file existence works, which possibly carried over from Scala's roots as a modem-capable automated remote video-titler. Realistically, there's only so much we can do given the tiny tiny OMG it's so small interface into which we type our expressions. My aspirations are scoped by the interface design. This is not necessarily a bad thing, IMHO. "Small, sharp tools" is a handy mental scoping model. Variables are global, starting from the page on which they're defined. So page 1 cannot reach variables defined on page 2. A page can display the value of any currently defined variable by using a special prefix in the on-screen text. I was trying to do Cheifet's melt effect, but I couldn't get animated brushes to work in Scala. Still, I was happy to get even this level of control over genlock/Scala interplay. "Execution" in the Main Menu means "execute a script." Three options are available: Workbench, CLI, and ARexx. For a feature that gets two pages in the manual with extra-wide margins, this is a big one, but I get why it only receives a brief mention. The only other recourse would be to include hundreds of pages of training material. "It exists. Have fun." is the basic thrust here. "Workbench" can launch anything reachable via the Workbench GUI, the same as double-clicking it. This is useful for having a script set up the working environment with helper apps, so an unpaid intern doesn't forget to open them. For ARexx stuff, programs must be running to receive commands, for example. "CLI" does the same thing as Workbench, except for AmigaDOS programs; programs that don't have a GUI front-end. Maybe open a terminal connection or monitor a system resource.
"ARexx" of course runs ARexx scripts. For a program to accept ARexx commands, it must have an active REXX port open. Scala can send commands, and even its own variable data, to a target program to automate it in interesting ways. I saw an example of drawing images in a paint program entirely through ARexx scripting. Scala itself has an open REXX port, meaning its own tools can be controlled by other programs. In this way, data can flow between software, even from different makers, to form a little self-enclosed, automation ecosystem. One unusually powerful option is that Scala can export its own presentation script, which includes information for all pages, wipes, timings, sound cues, etc, as a self-contained ARexx script. Once in that format, it can be extended (in any text editor) with advanced ARexx commands and logic, perhaps to extract data from a database and build dynamic pages from that. Now it gets wild. That modified ARexx file can then be brought back into Scala as an "Execute" ARexx script on a page. Let me clarify this. A Scala script, which builds and runs an entire multi-page presentation, can itself be transformed into just another ARexx script assigned to a single page of a Scala project. One could imagine building a Scala front-end with a selection of buttons, each navigating on-click to a separate page which itself contains a complete, embedded presentation on a given topic. Scripts all the way down. There's one more scripting language Scala supports, and that's its own. Dubbed Scala Lingo (or is it Lingua?), when we save a presentation script we're saving in Lingo. It's human-readable and ARexx-friendly, which is what made it possible to save a presentation as an ARexx script in the previous section. Here's pure Lingo. This is a 320x200x16 (default palette) page, solid blue background with fade in. It displays one line of white text with anti-aliasing. The text slides in from the left, pauses 3 seconds, then slides out to the right. 
Here's the same page as an ARexx script. Looks like all we have to do is wrap each line of Lingo in single quotes, and add a little boilerplate. So, we have Scala on speaking terms with the Amiga and its applications, already a thing that could only be done on this particular platform at the time. Scala's choice of platform further benefited from one of the Amiga's greatest strengths. That was thanks to the "villain" of the PaperClip article, Electronic Arts. The hardware and software landscape of the 70s and 80s was a real Wild West, anything goes, invent your own way, period of experimentation. Ideas could grow and bloom and wither on the vine multiple times over the course of a decade. Why, enough was going on that a guy could devote an entire blog to it all. ahem While this was fun for the developers who had an opportunity to put their own stamp on the industry, for end-users it could create a bit of a logistical nightmare. Specifically, apps tended to be siloed, self-contained worlds which read and wrote their own private file types. Five different art programs? Five different file formats. Data migration was occasionally supported, as with VisiCalc's use of DIF (data interchange format) to store its documents. DIF was not a "standard" per se, but rather a set of guidelines for storing document data in ASCII format. Everyone using DIF could roll their own flavor and still call it DIF, like Lotus did in extending (but not diverging from) VisiCalc's original. Microsoft's DIF variant broke with everyone else, a fact we'll just let linger in the air like a fart for a moment. Let's really breathe it in, especially those of us on Windows 11. More often than not, especially in the case of graphics and sound, DIF-like options were simply not available. Consider The Print Shop on the Apple 2. When its sequel, The New Print Shop, arrived it couldn't even open graphics from the immediately previous version of itself.
A converter program was included to bring original Print Shop graphics into New Print Shop. On the C64, the Koala file format became semi-standard for images, simply by virtue of its popularity. Even so, there was a market for helping users move graphics across applications on the exact same hardware. While other systems struggled, programs like Deluxe Video on the Amiga were bringing in Deluxe Music and Deluxe Paint assets without fuss. A cynic will say, "Well yeah, those were all EA products so of course they worked together." That would be true in today's "silos are good, actually" regression of computing platforms into rent extractors. But, I will reiterate once more, there was genuinely a time when EA was good to its users. They didn't just treat developers as artists, they also empowered users in their creative pursuits. EA had had enough of the file format wars. They envisioned a brighter future and proposed an open file standard to achieve precisely that. According to Dave Parkinson's article "A bit IFFy," in Amiga Computing Magazine, issue 7, "The origins of IFF are to be found in the (Apple) Macintosh's clipboard, and the file conventions which allow data to be cut and pasted between different Mac applications. The success of this led Electronic Arts to wonder — why not generalize this?" Why not, indeed! In 1985, working directly in conjunction with Commodore, Electronic Arts introduced the Electronic Arts Interchange File Format 1985; IFF for short. It cannot be overstated how monumental IFF was in unlocking the Amiga's potential as a creative workhorse. From the Scala manual, "Unlike other computers, the Amiga has very standardized file formats for graphics and sound. This makes it easy to exchange data between different software packages. This is why you can grab a video image in one program, modify it in another, and display it in yet another."
I know it's hard for younger readers to understand the excitement this created, except to simply say that everything in computing has its starting point. EA and the Amiga led the charge on this one. So, what is it? From "A Quick Introduction to IFF" by Jerry Morrison of Electronic Arts, "IFF is a 2-level standard. The first layer is the 'wrapper' or 'envelope' structure for all IFF files. Technically, it's the syntax. The second layer defines particular IFF file types such as ILBM (standard raster pictures), ANIM (animation), SMUS (simple musical score), and 8SVX (8-bit sampled audio voice)." To assist in the explanation of the IFF file format, I built a Scala presentation just for you, taken from Amiga ROM Kernel Reference Manual. This probably would have been better built in Lingo, rather than fiddling with the cumbersome editing tools, which (don't) handle overlapping objects well. What's done is done. I used the previously mentioned "link" wipe to move objects as groups. IFF is a thin wrapper around a series of data "chunks." It begins with a declaration of what type of IFF this particular file is, known as its "FORM." Above we see the ILBM "FORM," probably the most prevalent image format on the Amiga. Each chunk has its own label, declares how many bytes long it is, and is then followed by that many data bytes. That's really all there is to it. IDs for the FORM and the expected chunks are spec'd out in the registered definition document. Commodore wanted developers to always try to use a pre-existing IFF definition for data when possible. If there was no such definition, say for ultra-specialized data structures, then a new definition should be drawn up. "To prevent conflicts, new FORM identifications must be registered with Commodore before use," says Amiga ROM Kernel Reference Manual. In Morrison's write-up on IFF, he likened it to ASCII.
When ASCII data is read into a program, it is sliced, diced, mangled, and whatever else needs to be done internally to make the program go. However, the data itself is on disk in a format unrelated to the program's needs. Morrison described a generic system for storing data, of whatever type, in a standardized way which separated data from software implementations. At its heart, IFF first declares what kind of data it holds (the FORM type), then that data is stored in a series of labelled chunks. The specification of how many chunks a given FORM needs, the proper labels for those chunks, the byte order for the raw data, and so on are all in the FORM's IFF definition document. In this way, anyone could write a simple IFF reader that follows the registered definition, et voila! Deluxe Paint animations are suddenly a valid media resource for Scala to consume. It can be confusing when hearing claims of "IFF compatibility" in magazines or amongst the Amiga faithful, but this does not mean that any random Amiga program can consume any random IFF file. The burden of supporting various FORMs still rests on each individual developer. FORM definitions which are almost identical, yet slightly different, were allowed. For example, the RGBN image FORM is "almost identical to" ILBM, differing in the contents of one chunk and requiring one new chunk of its own. "Almost identical" is not "identical," and so though both RGBN and ILBM are wrapped in standardized IFF envelopes, a program must explicitly support the ones of interest. Prevalent support for any given FORM type came out of a communal interest to make it standard. Cooperation was the unsung hero of the IFF format. "Two can do something better than one" has been on infinite loop in my mind since 1974. How evergreen is that XKCD comic about standards? Obviously, given we're not using it these days, IFF wound up being one more format on the historical pile. We can find vestiges of its DNA here and there, but not the same ubiquity.
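To show just how simple the envelope is, here's a toy reader sketch (my own, not from the article; the chunk IDs and bytes in the example are invented). It unwraps a FORM and lists its top-level chunks, honoring the 4-byte big-endian lengths and the even-byte padding the IFF spec requires:

```rust
// Toy IFF walker: unwraps a FORM envelope and lists its top-level
// chunks. IDs are 4 ASCII bytes; lengths are 32-bit big-endian;
// chunk data is padded to an even byte boundary. Real FORMs like
// ILBM define further structure inside individual chunks; this
// sketch reads only the top level and does minimal validation.
fn be_u32(b: &[u8]) -> u32 {
    u32::from_be_bytes([b[0], b[1], b[2], b[3]])
}

/// Returns the FORM type (e.g. "ILBM") and (id, data) pairs for each chunk.
fn parse_iff(bytes: &[u8]) -> Result<(String, Vec<(String, Vec<u8>)>), String> {
    if bytes.len() < 12 || &bytes[0..4] != b"FORM" {
        return Err("not an IFF FORM file".to_string());
    }
    let form_len = be_u32(&bytes[4..8]) as usize; // bytes after the length field
    let form_type = String::from_utf8_lossy(&bytes[8..12]).into_owned();
    let mut chunks = Vec::new();
    let mut pos = 12;
    while pos < 8 + form_len {
        let id = String::from_utf8_lossy(&bytes[pos..pos + 4]).into_owned();
        let len = be_u32(&bytes[pos + 4..pos + 8]) as usize;
        let data = bytes[pos + 8..pos + 8 + len].to_vec();
        pos += 8 + len + (len % 2); // skip the pad byte after odd-length chunks
        chunks.push((id, data));
    }
    Ok((form_type, chunks))
}
```

A real parser would bounds-check every chunk against the file length, but the shape of the format really is just "label, length, bytes" all the way down.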
There were moves to adopt IFF across other platforms. Tom Hudson, he of DEGAS Elite and CAD-3D, published a plea in the Fall 1986 issue of START Magazine for the Atari ST development crowd to adopt IFF for graphics files. He's the type to put up, not shut up, and so he also provided an IFF implementation on the cover disk, and detailed the format and things to watch out for. Though IFF was originally inspired by the Macintosh, Apple seemed to believe it only had a place within a specific niche. AIFF, the Audio Interchange File Format, essentially standardized audio on the Mac, much like ILBM did for Amiga graphics. Despite AIFF being an IFF variant registered with Commodore, Scala doesn't recognize it in my tests. So, again, IFF itself wasn't a magical panacea for all file format woes. That fact was recognized even back in the 80s. In Amazing Computing, July 1987, in an article "Is IFF Really a Standard?" by John Foust, "Although the Amiga has a standard file format, it does not mean Babel has been avoided." He noted that programs can interpret IFF data incorrectly, resulting in distorted images, or outright failure. Ah well, nevertheless. Side note: One might reasonably believe TIFF to be a successful variant of IFF. Alas, TIFF shares "IFF" in name only and stands for "tagged image file format." One more side note: Microsoft also did to IFF what they did to DIF. fart noise The last major feature of note is Scala's extensibility. In the Main Menu list view, we have columns for various page controls. The options there can be expanded by including EX modules, programs which control external systems. This feels adjacent to HyperCard's XCMDs and XFCNs, which could extend HyperCard beyond its factory settings. EX modules bundled with Scala can control Sony Laserdisc controllers, enable MIDI file playback, control advanced Genlock hardware, and more. Once installed as a "Startup" item in Scala, these show up in the Main Menu and are as simple to control as any of Scala's built-in features.
EX modules are also Lingo scriptable, so the opportunity to coordinate complex hardware interactions all through point-and-click is abundant. I turned on WinUAE's MIDI output and set it to "Microsoft GS Wave Table." In Amiga Workbench, I enabled the MIDI EX for Scala. On launch, Scala showed a MIDI option for my pages so I loaded up Bohemian-Rhapsody-1.mid. Mamma mia, it worked! I haven't found information about how to make new EXes, nor am I clear what EXes are available beyond Scala's own. However, here at the tail end of my investigation, Scala is suddenly doing things I didn't think it could do. The potential energy for this program is crazy high. No, I'm not going to be doing that any time soon, but boy do I see the appeal. Electronic Arts's documentation quoted Alan Kay for the philosophy behind the IFF standard, "Simple things should be simple, complex things should be possible." Scala upholds this ideal beautifully. Making text animate is simple. Bringing in Deluxe Paint animations is simple. Adding buttons which highlight on hover and travel to arbitrary pages on click is simple. The pages someone would typically want to build, the bread-and-butter stuff, are simple. The complex stuff though, especially ARexx scripting, is not fooling around. I tried to script Scala to speak a phrase using the Amiga's built-in voice synthesizer and utterly failed. Jimmy Maher wrote of ARexx in The Future Was Here: The Commodore Amiga, "Like AmigaOS itself, it requires an informed, careful user to take it to its full potential, but that potential is remarkable indeed." While Scala didn't make me a video convert, it did retire within me the notion that the Toaster was the Alpha and Omega of the desktop video space. Interactivity, cross-application scripting, and genlock all come together into a program that feels boundless. In isolation, Scala is not a killer app. It becomes one when used as the central hub for a broader creative workflow.
A paint program is transformed into a television graphics department. A basic sampler becomes a sound booth. A database and a little Lingo become an editing suite. Scala really proves the old Commodore advertising slogan correct, "Only Amiga Makes it Possible." I'm accelerating the cycle of nostalgia. Now, we long for "four months ago." The more I worked with Scala, the more I wanted to see how close I could get to emulating video workflows of the day. Piece by piece over a few weeks I discovered the following setup (needs WinUAE, sorry) for using live Scala graphics with an untethered video source in a Discord stream. Scala can't do video switching*, so I'm locked to whatever video source happens to be genlocked to WinUAE at the moment. But since when were limitations a hindrance to creativity? * ARexx and EX are super-powerful and can extend Scala beyond its built-in limitations, but I don't see an obvious way to explore this within WinUAE. This is optional, depending on your needs, but it's the fun part. You can use whatever webcam you have connected just as well. Camo Camera can stream mobile phone video to a desktop computer, wirelessly no less. Camo Camera on the desktop advertises your phone as a webcam to the desktop operating system. So, install that on both the mobile device and desktop, and connect them up. WinUAE can see the "default" Windows webcam, and only the default, as a genlock source; we can't select from a list of available inputs. It was tricky getting Windows 11 to ignore my webcam and treat Camo Camera as my default, but I got it to work. When you launch WinUAE, you should see your camera feed live in Workbench as the background. So far, so good. Next, in Scala > Settings turn on Genlock. You should now see your camera feed in Scala with Scala's UI overlaid. Now that we have Scala and our phone's video composited, switch over to OBS Studio. Set the OBS "Source" to "Window Capture" on WinUAE.
Adjust the crop and scale to focus in on the portion of the video you're interested in broadcasting. On the right, under "Controls" click "Start Virtual Camera." Discord, Twitch, et al are able to see OBS as the camera input for streaming. When you can see the final output in your streaming service of choice (I used Discord's camera test to preview), design the overlay graphics of your heart's desire. Use that to help position graphics so they won't be cut off due to Amiga/Discord aspect ratio differences. While streaming, interactivity with the live Scala presentation is possible. If you build the graphics and scripts just right, interesting real-time options are possible. Combine this with what we learned about buttons and F-Keys, and you could wipe to a custom screen like "Existential Crisis - Back in 5" with a keypress. Headline transitions were manually triggered by the F-Keys, just to pay off the threat I made earlier in the post. See? I set 'em up, I knock 'em down. I also wrote a short piece about Cheifet, because of course I did. Ways to improve the experience, notable deficiencies, workarounds, and notes about incorporating the software into modern workflows (if possible). My setup:

- WinUAE v6.0.2 (2025.12.21) 64-bit on Windows 11
- Emulating an NTSC Amiga 1200
- 2MB Chip RAM, 8MB Z2 Fast RAM
- AGA Chipset
- 68020 CPU, 24-bit addressing, no FPU, no MMU, cycle-exact emulation
- Kickstart/Workbench 3.1 (from Amiga Forever)
- Windows directory mounted as HD0:
- For that extra analog spice, I set up the video Filter as per this article
- Scala Multimedia MM300, cover disk version from CU Amiga Magazine, issue 96 (no copy protection)
- I didn't have luck running MM400, nor could I find an MM400 manual
- Also using Deluxe Paint IV and TurboText

Nothing to speak of. The "stock" Amiga 1200 setup worked great. I never felt the need to speed boost it, though I did give myself as much RAM as possible.
I'll go ahead and recommend Deluxe Paint IV over III as a companion to Scala, because it supports the same resolutions and color depths. If you wind up with a copy of Scala that needs the hardware dongle, WinUAE emulates that as well; both the "red" (MM200) and "green" (MM300 and higher) variants are available. I'm not aware of any other emulators that offer a Genlock option. I did not encounter any crashes of the application nor emulator. One time I had an "out of chip RAM" memory warning pop up in Scala. I was unclear what triggered it, as I had maxed out the chip RAM setting in WinUAE. Never saw it again after that. I did twice have a script become corrupted. Scripts are plain text and human-readable, so I was able to open it, see what was faulting, and delete the offending line. So, -6 points for corrupting my script; +2 points for keeping things simple enough that I could fix it on my own. F-Keys stopped working in Scala's demonstration pages. Then, it started working again. I think there might have been an insidious script error that looked visually correct but was not. Deleting button variable settings and resetting them got it working again. This happened a few times. I saw some unusual drawing errors. Once, when a bar of color touched the bottom right edge of the visible portion of the screen, extra pixels were drawn into the overscan area. Another time, I had the phrase "Deluxe Paint" in Edit Mode, but when I viewed the page it only said "Deluxe Pa". Inspecting the text in "List" mode revealed unusual characters (the infinity symbol?!) had somehow been inserted into the middle of the text. I outlined one option above under "Bonus: Streaming Like It's 1993". OBS recording works quite well and is what I used for this post. WinUAE has recording options, but I didn't have a chance to explore them. I don't yet know how to export Scala animations into a Windows-playable format. For 2026, it would surely be nice to have native 16:9 aspect ratio support.
Temporary script changes would be useful. I'd love to be able to turn off a page temporarily to better judge before/after flow. It can be difficult to visualize an entire project flow sometimes. With page transitions, object transitions, variable changes, logic flow, and more, understanding precisely what to do to create a desired effect can get a little confusing. Scala wants to maintain a super simple interface almost to its detriment. Having less pretty, more information dense, "advanced" interface options would be welcome. I suppose that's what building a script in pure ARexx is for. I'd like to be able to use DPaint animated brushes. Then I could make my own custom "transition" effects that mix with the Scala page elements. Maybe it's possible and I haven't figured out the correct methodology? The main thing I wanted was a Genlock switch, so I could do camera transitions easily. That's more of a WinUAE wishlist item though.

Neil Madden 3 months ago

Monotonic Collections: a middle ground between immutable and fully mutable

This post covers several topics around collections (sets, lists, maps/dictionaries, queues, etc) that I’d like to see someone explore more fully. To my knowledge, there are many alternative collection libraries for Java and for many other languages, but I’m not aware of any that provide support for monotonic collections . What is a monotonic collection, I hear you ask? Well, I’m about to answer that. Jesus, give me a moment. It’s become popular, in the JVM ecosystem at least, for collections libraries to provide parallel class hierarchies for mutable and immutable collections: Set vs MutableSet, List vs MutableList, etc. I think this probably originated with Scala , and has been copied by Kotlin , and various alternative collection libraries, e.g. Eclipse Collections , Guava , etc. There are plenty of articles out there on the benefits and drawbacks of each type. But the gulf between fully immutable and fully mutable objects is enormous: they are polar opposites, with wildly different properties, performance profiles, and gotchas. I’m interested in exploring the space between these two extremes. (Actually, I’m interested in someone else exploring it, hence this post). One such point is the idea of monotonic collections, and I’ll now explain what that means. By monotonic I mean here logical monotonicity : the idea that any information that is entailed by some set of logical formulas is also entailed by any superset of those formulas. For a collection data structure, I would formulate that as follows: If any (non-negated) predicate is true of the collection at time t , then it is also true of the collection at any time t’ > t . For example, if c is a collection and c.contains(x) returns true at some point in time, then it must always return true from then onwards. To make this concrete, a MonotonicList (say) would have an append operation, but not insert , delete , or replace operations. 
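A sketch of what that might look like (my own invention, written in Rust for brevity rather than a JVM language; no existing library is implied): elements can be appended but never removed or replaced, so any true answer from contains or get stays true forever.

```rust
/// Hypothetical append-only list: queries whose true answers can
/// never later become false. There is deliberately no remove,
/// insert, or replace.
struct MonotonicList<T> {
    items: Vec<T>,
}

impl<T: PartialEq> MonotonicList<T> {
    fn new() -> Self {
        MonotonicList { items: Vec::new() }
    }

    /// The only mutation: adding information, never retracting it.
    fn append(&mut self, item: T) {
        self.items.push(item);
    }

    /// Monotonic predicate: once true, always true.
    fn contains(&self, item: &T) -> bool {
        self.items.contains(item)
    }

    /// Monotonic lookup: the element at index i never changes once set.
    fn get(&self, i: usize) -> Option<&T> {
        self.items.get(i)
    }
}
```

Appending can only add true facts about the collection, never retract one.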
More subtly, monotonic collections cannot have any aggregate operations: i.e., operations that report statistics/summary information on the collection as a whole. For example, you cannot have a size method, as the size will change as new items are added (and thus the predicate can become false). You can have (as I understand it) map and filter operations, but not a reduce / fold . So why are monotonic collections an important category to look at? Firstly, monotonic collections can have some of the same benefits as immutable data structures, such as simplified concurrency. Secondly, monotonic collections are interesting because they can be (relatively) easily made distributed, per the CALM principle: Consistency as Logical Monotonicity (insecure link, sorry). This says that monotonic collections are strongly eventually consistent without any need for coordination protocols. Providing such collections would thus somewhat simplify making distributed systems. Interestingly, Kotlin decided to make their mutable collection classes sub-types of the immutable ones: MutableList is a sub-type of List, etc. (They also decided to make the arrows go the other way from normal in their inheritance diagram, crazy kids). This makes sense in one way: mutable structures offer more operations than immutable ones. But it seems backwards from my point of view: it says that all mutable collections are immutable, which is logically false. (But then they don’t include the word Immutable in the super types). It also means that consumers of a List can’t actually assume it is immutable: it may change underneath them. Guava seems to make the opposite decision: ImmutableList extends the built-in (mutable) List type, probably for convenience. Both options seem to have drawbacks. I think the way to resolve this is to entirely separate the read-only view of a collection from the means to update it. 
On the view-side, we would have a class hierarchy consisting of ImmutableList, which inherits from MonotonicList, which inherits from the general List. On the mutation side, we'd have ListAppender and ListUpdater classes, where the latter extends the former. Creating a mutable or monotonic list would return a pair of the read-only list view and the mutator object. This seems to allow the natural sub-type relationships between types on both sides of the divide. It's a sort of CQRS at the level of data structures, but it seems to solve the issue that the inheritance direction for read-only consumers is the inverse of the natural hierarchy for mutating producers. (This has a relationship to covariant/contravariant subtypes, but I'm buggered if I'm looking that stuff up again in my free time). Anyway, these thoughts are obviously pretty rough, but maybe some inklings of ideas if anyone is looking for an interesting project to work on.
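The pair-returning factory described above might be sketched like so (a hedged, single-threaded Rust reconstruction of the idea; the post's own pseudocode is JVM-flavored, and all names here beyond ListAppender are my guesses). The read-only view and the append capability share storage but are handed out as separate values:

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Read-only query side: can look, cannot touch.
struct MonotonicView<T> {
    items: Rc<RefCell<Vec<T>>>,
}

// Separate command side: the only route to mutation.
struct ListAppender<T> {
    items: Rc<RefCell<Vec<T>>>,
}

impl<T: Clone + PartialEq> MonotonicView<T> {
    fn contains(&self, item: &T) -> bool {
        self.items.borrow().contains(item)
    }
    fn get(&self, i: usize) -> Option<T> {
        self.items.borrow().get(i).cloned()
    }
}

impl<T> ListAppender<T> {
    fn append(&self, item: T) {
        self.items.borrow_mut().push(item);
    }
}

/// CQRS at the data-structure level: creation returns the query side
/// (the view) and the command side (the appender) as a pair.
fn monotonic_list<T>() -> (MonotonicView<T>, ListAppender<T>) {
    let items = Rc::new(RefCell::new(Vec::new()));
    (
        MonotonicView { items: Rc::clone(&items) },
        ListAppender { items },
    )
}
```

A consumer handed only the MonotonicView can rely on monotonicity; a fully mutable variant would pair the same kind of view with a richer updater that also allows replacement and removal.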


Improving my Distributed System with Scala 3: Consistency Guarantees & Background Tasks (Part 2)

Improving Bridge Four, a simple, functional, effectful, single-leader, multi-worker distributed compute system optimized for embarrassingly parallel workloads, by providing consistency guarantees and improving overall code quality (or something like that).

emiruz 2 years ago

Advent of Code in Prolog, Haskell, Python and Scala

Here are some Advent of Code solutions: 2023 (Prolog), 2022 (Haskell), 2021 (Python & Scala) (in progress at the time of writing). Here are some comparative notes: My Haskell solutions were mostly < 27 LoC. The Prolog solutions were considerably longer. The Prolog solutions were, on average, much harder to code for me. My Prolog solutions ended up looking rather functional for the most part.


Tiny Telematics: Tracking my truck's location offline with a Raspberry Pi, redis, Kafka, and Flink (Part 2)

Tracking vehicle location offline with a Raspberry Pi, Part 2: Apache Flink, Scala, Kafka, and road-testing.


Functional Programming concepts I actually like: A bit of praise for Scala (for once)

Types, type classes, implicits, tagless-final, effects, and other things: Not everything in the world of functional programming is bleak and overly academic. A view on FP & Scala concepts that someone who loves to complain actually likes.


Scala, Spark, Books, and Functional Programming: An Essay

Reviewing 'Essential Scala' and 'Functional Programming Simplified', while explaining why Spark has nothing to do with Scala, and asking why learning Functional Programming is such a pain. A (maybe) productive rant (or an opinionated essay).

sunshowers 4 years ago

Open and closed universes

Type systems are tools for modeling some aspect of reality. Some types need to represent one of several different choices. How should these different kinds of choices be modeled? If you’re writing an end-user application, none of this matters. You have full control over all aspects of your code, so use whatever is most convenient. However, if you’re writing a library that other developers will use, you may care about API stability and forward compatibility. A simple example is the Option type in Rust, or its analogs like Maybe in Haskell. Part of the definition of Option is that it can only be either Some or None. This is never going to change, so Option values form a closed universe. A simple enum is appropriate for such cases. The main advantage of this approach is that it minimizes the burden on consumers: they know that an Option can only be Some or None, and only have to care about those cases. The main disadvantage is that any sort of new addition would constitute API breakage. You’re expanding a universe you formerly assumed to be closed, and everyone who depends on you needs to be made aware of this. As a library author, you may run into two kinds of open universes: universes open with respect to the library itself, where new choices can be defined by library authors in the future but not by downstream consumers (semi-open universes), and universes open with respect to users of the library, where new choices can be defined by both library authors and consumers (fully open universes). If you’re trying to model semi-open universes, consider using a non-exhaustive enum. For example, you may have an error type in a library that bubbles up other errors, written as a plain enum. The problem with this approach is that errors typically do not form a closed universe. A future version may depend on some other library, and you’d have to add a new option to the error type. Anyone matching on the type would have to handle this new, additional case… unless they’ve specified a wildcard pattern. Some programming languages allow library developers to force consumers to specify a wildcard pattern.
In Rust, you can annotate a type with #[non_exhaustive]. Anyone matching against the error type then has to add a wildcard pattern, which ensures that the library developer can continue to add more options over time without causing breakages. The main downside is that all consumers need to handle the additional wildcard pattern and do something sensible with it 1 . The non-exhaustive enum approach is best suited for semi-open universes. What about situations where consumers should have the ability to add new choices? The best way is often to use a string, or a newtype wrapper around it. For example, one may wish to represent a choice between computer architectures. More architectures may be added in the future, too, but often the library just needs to pass the architecture down to some other tool without doing much processing on it. One way to represent this is with such a newtype 2 , with well-known strings represented as constants. This allows known choices to not be stringly-typed, and currently-unknown ones to be used without having to change the library. The format doesn’t have to be restricted to strings. For example, an architecture value may wish to also carry its bitness. In general, this works best for choices that are used as input types to a library that doesn’t do much processing on them, and whose associated data doesn’t vary by choice. One pattern I’ve seen is to use another variant that represents an unknown value alongside the known ones. The problem with this approach is that it becomes very hard to define equality and comparisons on these types. Is an unknown variant carrying a known name the same as the known variant, or are they different? In practice, this becomes tricky very quickly. In general, this approach is best avoided. The final, most general approach is to use traits, typeclasses or other interface mechanisms. For example, architectures can be represented as a trait in Rust; other library methods can then accept any implementation of that trait, and custom architectures can be defined by consumers. This also works for output types such as returned errors. You can try to downcast the error into a more specific type, but this turns a formerly compile-time check into a runtime one.
A common bug that happens with this approach is version skew: for example, let’s say the library had originally been using one version of a dependency. In a minor release, it upgrades to a newer version and returns errors from that version, while the consumer continues to downcast to the old version’s type. Such cases will fail silently 3 . Traits are similar to the many-strings approach because they both model fully open universes. However, there are some tradeoffs between the two. One benefit of traits is that they offer greater implementation flexibility: trait methods allow for custom behavior on user-defined types. A downside is that traits make each choice in an open universe a type, not a value, so operations like equality become harder to define. It isn’t impossible, though: the trait could require a method which returns a many-strings value, whose contents can then be used for equality checks. Another downside is ergonomics: traits and interfaces are more awkward than values. A sufficiently complex trait may even require some sort of code generation to use effectively. Rust has declarative and procedural macros to help, which work okay but add even more complexity on top. Whether strings are sufficient or full-blown traits are required is a case-by-case decision. In general, the many-strings approach is much simpler and is powerful enough for many kinds of choices; traits provide greater power and are always worth keeping in mind, though. A trait can also model a semi-open universe if you seal it: the library itself may add new implementations of the trait, but downstream consumers cannot, because they do not have access to the private marker trait it requires 4 . Type systems are often used to represent one of several choices. The choices can either be fixed (closed universe), or might continue to change over time (open universe). These are different situations that usually need to be handled in different ways. Hope this helps! The idea of writing this post arose from several conversations with members of the Rust community.
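The two library-side patterns can be sketched together in a few lines (a hedged reconstruction with invented names, not the post's original snippets; the post's string newtype wrapped a copy-on-write string, while a plain String keeps this short):

```rust
// Semi-open universe: the library may add variants later, so
// consumers in other crates are forced to write a wildcard arm.
#[non_exhaustive]
#[derive(Debug)]
pub enum LibError {
    Io,
    Parse,
}

fn describe(e: &LibError) -> &'static str {
    match e {
        LibError::Io => "I/O failure",
        // Within the defining crate the wildcard is optional, but
        // #[non_exhaustive] makes it mandatory for external crates.
        _ => "some other library error",
    }
}

// Fully open universe via the "many strings" pattern: a newtype
// with well-known values as constructors. Consumers can mint new
// choices, and equality falls out of plain value comparison.
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct Arch(pub String);

impl Arch {
    pub fn x86_64() -> Arch {
        Arch("x86_64".to_string())
    }
}
```

Note how the open-universe type has trivial, well-defined equality, which is exactly what the unknown-variant pattern above struggles to provide.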
Thanks to Munin , Jane Lusby , Manish Goregaokar , Michael Gattozzi and Izzy Muerte for reviewing drafts of this post. Updated 2021-08-04: edited some text, and corrected one of the footnotes below to indicate that it is required. Updated 2021-08-13: added a note about more complex versions of the many-strings pattern, expanded on the difference between it and traits, and expanded on what it would mean to unify sealed traits and non-exhaustive enums. Thanks to Manish for the inspiration.

1. I wish that Rust had a way to lint, but not fail to compile, on missing patterns in a non-exhaustive match. ↩︎

2. This implementation uses the Cow<'static, str> type, which with the 'static lifetime stands for either a borrowed string that lasts the duration of the entire program (typically one embedded in the binary) or an owned String created at runtime. If this were just a &'static str, then the implementation below wouldn't be possible because &str references can't allocate memory. ↩︎

3. With static types, this sort of version skew error is usually caught at compile time. With dynamic casting it can only be caught at runtime. You might argue that if the error type comes from a public dependency, an incompatible upgrade to the dependency constitutes a semantic version breakage. However, this is a bit of a boundary case; the library authors might disagree with you and say that their stability guarantees don't extend to downcasting. Semantic versioning is primarily a low-bandwidth way for library developers and consumers to communicate about relative burdens in the event of a library upgrade. ↩︎

4. Given that both sealed traits and non-exhaustive enums model the same kinds of semi-open universes in fairly similar fashions, a direction languages can evolve in is to unify the two. Scala has already done something similar. Rust could gain several features that bring non-exhaustive enums and sealed traits incrementally closer to each other: allow match statements to match sealed traits if a wildcard match is specified; make enum variants their own types rather than just data constructors; automatically convert the return values of trait-returning functions into enums if necessary (the trait doesn't need to be sealed in this case, because the enum would be anonymous and matching on its variants wouldn't be possible). Inherent methods on an enum would need to be "lifted up" to become trait methods; as of Rust 1.54, there are several kinds of methods that can be specified on enums but not on traits, though features like existential types help narrow that gap. ↩︎
