Posts in Design (20 found)
JSLegendDev 2 days ago

Reverting Donovan's Demise back to its original vision.

After I announced the Steam page for my game Donovan's Demise, I received a lot of negative feedback. Many didn't like the new direction of turning the project into a roguelike. After reflection, I decided to return to my original vision, and I have updated the Steam page to reflect this. You can wishlist the game and view more info about it here: https://store.steampowered.com/app/4393750/Donovans_Demise

Here is the game's short description: King Donovan has proclaimed himself God, forcing the people of Hydralia to worship him. Many have attempted to kill him, but none have succeeded. Will you? In this small action RPG with mouse-driven bullet hell combat, defeat relentless foes, grow stronger, and face Donovan!

0 views
iDiallo 2 days ago

You paid for it, you should be comfortable in it

A friend of mine bought a Tesla Roadster back in the early 2010s. At the time, spotting a Tesla on the road was a rare event, maybe even occasion enough to stop and take a picture. I never got the chance to photograph one, let alone drive one, until I met this new friend recently. This was my chance to experience the car firsthand.

We walked to the parking structure to see it. As soon as he opened the door, something looked... off. On the outside, it was a pristine, six-figure roadster. But the inside looked completely custom. Not "custom" in the sense of a professional shop install, but more like the driver himself grabbed a hammer and chisel and made it his own.

First, the driver's seat had been altered. It was much lower than usual and didn't match the passenger seat. My friend stands 6'7", and the Roadster is a tiny car. He physically couldn't fit, so he modified the seat rails to lower it. But that fix created a new problem: the door armrest now dug into his hip. So he took a file to the interior panel, shaved it down, and 3D printed a smaller, ergonomic armrest. He even 3D printed a cup holder for the passenger side so his coffee was within reach.

To me, the idea of taking a Dremel or a file to a $100,000+ car was unimaginable. You must be crazy to do it. He caught the look on my face and shrugged. "Hey, it's my car. I paid for it. I intend to be comfortable in it."

I never thought of it like this. That sentiment stuck with me. Recently, when I read an article by Kent Walters about filing the corners of his MacBook, those same feelings resurfaced. My work MacBook has edges so sharp that I've often felt like I was slicing my wrist on the chassis. I treated this as a design flaw I had to endure. But not Kent. He treated it as an obstacle to be removed. He literally filed down the corners of his laptop to ensure the machine he uses every day was comfortable. I may not have the guts to file my work-issued MacBook, but I'm no stranger to customization...
in software. I modify my tools constantly. I spend days tweaking my IDE, remapping keyboard shortcuts, and writing custom scripts until the software is unrecognizable to anyone else on my team. I don't think twice about rewriting a config file to make the tool fit my brain. When I was a kid, I always had a screwdriver around, fixing devices that weren't really broken. On the home computer, I modified everything. I once deleted all the files to improve performance. It didn't work, but it led to a fruitful career.

But somehow, when it comes to expensive hardware now, I freeze. I treat the physical object as a museum piece to be preserved. I bought a docking station to banish the laptop to a shelf, using an external mouse and keyboard to avoid touching the sharp chassis. I built a complex workaround to accommodate the tool, rather than performing the simple, brutal act of modifying the tool to accommodate me.

We treat our physical tools as if they are on loan from the manufacturer. You'll see a musician buy a vintage guitar but refuse to adjust the action, terrified of ruining the "collector's value." Meanwhile, the working guitarist has sanded down the neck and covered it in stickers because it feels better in their hand. The software engineer accepts the default keybindings to avoid "bad habits," while the power user creates a layout that doubles their speed.

If you own a tool, whether it's a car, a computer, or a line of code, you own the right to change it. The manufacturer designed it for the "average" user, but you are a specific human with specific needs. Remember grandma's couch in the living room? It had that plastic cover on it. It was so uncomfortable, but no one dared to remove it. The plastic was there to preserve the sofa. No one got to enjoy it; instead, everyone accommodated the couch only to preserve its value. A value that no one ever benefited from. Don't let the perceived value of an object stop you from making it truly yours.
A tool with battle scars is a tool that is loved.

0 views
Josh Comeau 2 days ago

Squash and Stretch

Have you ever heard of Disney’s 12 Basic Principles of Animation? In this tutorial, we’ll explore how we can use the very first principle to create SVG micro-interactions that feel way more natural and believable. It’s one of those small things that has a big impact.

0 views
Playtank 3 days ago

Analogue Prototyping

There is a lot to say about prototyping. Chris Hecker talked about advanced prototyping at GDC 2006 and provided a hierarchy of priorities (listed in full near the end of this post). Analogue prototyping comes in right away at Step 1: Don't. By not launching straight into your game engine, you can save giant heaps of time between hypothesis and implementation. You can also figure out what kinds of references will be relevant before you reach Step 4: Gather References.

There's another side to analogue prototyping as well. In the book Challenges for Game Designers, Brenda Romero says: "A painter gets better by making lots of paintings; sculptors hone their craft by making sculptures; and game designers improve their skills by designing lots of games. [...] Unfortunately, designing a complete video game (and implementing it, and then seeing all the things you did right and wrong) can take years, and we'd all like to improve at a faster rate than that."

Using cards, dice, and paper leads to some of the fastest prototyping possible. It can be just ten minutes between idea and test, fitting really well into those two days of Step 2: Just Do It. Of course, it can also take weeks and require countless iterations, but that's part of the game designer's job after all.

This post focuses on what to gain from analogue prototypes of digital games, and the practical process involved. It's also unusually full of real work, since this is something I've done quite a bit for my personal projects and is therefore not under NDA. If you're curious about something or need to tell me I'm wrong, don't hesitate to comment or e-mail me at [email protected].

Why you should care about analogue prototyping when all you want to do is the next amazing digital game may seem like a mystery. A detour that leads to having your fingers glued together and a bunch of leftover paper clippings you can't use for anything.
In Chris Hecker's talk, the first suggestion is that you should cheat before you put too much time into anything else. Since you will be cutting and gluing and sleeving, and some of that work takes time, this counts double with analogue prototypes. The easiest way to cheat is to use proxies. If you have a collection of board games, this is easy. You can also buy some used games cheap or ask friends if they have some lying around that they don't use. Perhaps that worn copy of Monopoly that almost caused a family breakup can finally get some table time again, in a different form.

Aesthetics matter. If you want to take shortcuts with how a game feels to play, getting something that looks the part helps. Go to your local dollar store or secondhand shop and pick up some plastic toys or a game with miniatures similar to what you are after. They can merely be there to act as centerpieces for your prototype.

The easiest and most efficient reference board that exists is a standard chessboard: a square grid of manageable size. You can also use a Go board, with the extra benefit that the Go stones also make excellent proxy components. Beyond those two, you can really use any other board game board too. Just make sure to remember where you got it from if you want to play those games in the future. You can even pick up games with missing parts at yard sales, usually super cheap, and scavenge proxy parts from those.

For some types of games, finding a good real-world map, perhaps even a tourist map or subway map, can be an excellent shortcut. Not just for wargames, but for anything with a spatial component. The guide map from a theme park or museum works, too.

Packs of 52 standard playing cards are fantastic proxies. You can use suits, ladders, make face cards have a different meaning, and much more. Countless prototypes have used these excellent decks to handle anything from combat resolution to hidden information.
It's also possible to go even further and make your own game use regular playing cards and the known poker combos as a feature. Balatro comes to mind.

Many families have a Yatzy set lying around, providing you with a small handful of standard six-sided dice. You can do a lot with just this simple, straightforward randomisation element. But don't limit yourself to six-sided dice if you don't have to. Get yourself a set of Dungeons & Dragons polyhedrals and you'll have four-, eight-, ten-, twelve-, and twenty-sided dice rounding out your randomisation armory.

Just want to make an honorable mention of the fantasy wargame HeroScape, because of its diversity. You can build all manner of strange scenery from just a core HeroScape set and use it effectively to represent almost anything. The same goes for Lego. The main issue with these kinds of proxies is that they can take a lot of space. Particularly HeroScape, since it has a predefined scale. With Lego, you just need to figure out a scale and stick to it.

If there's a game the people you will play with are especially familiar with, you can skip having to design one of your systems by substituting a mechanic from a game you already know. Say, if you know that you will want statistics in your game, you can copy the traditional lineup of six abilities from Dungeons & Dragons, as well as their scale, to get started. Even if you know that you will want a different lineup later, this means you can test the elements that are more unique to your game faster.

An effective way to minimise cut-and-paste time is to print your cards very small, preferably so all of them fit on a single piece of paper. They will be a bit trickier to shuffle this way, but that's rarely an issue in testing. This way, you need less paper and you can cut everything faster. Going from eight cards per sheet to 32 is a pretty big difference. Just avoid miniaturizing to the point that you need a magnifying glass.
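To get a feel for the savings, here is a minimal sketch (the helper is my own, not from the post) that counts how many cards of a given size fit on an A4 sheet in a plain grid, ignoring margins and gutters. The 63 × 88 mm card size is the common poker-card standard.

```python
import math

# A4 sheet in millimetres (portrait orientation).
SHEET_W, SHEET_H = 210, 297

def cards_per_sheet(card_w: float, card_h: float) -> int:
    """How many cards of the given size fit on one A4 sheet,
    laid out in a simple grid with no margins or gutters."""
    return math.floor(SHEET_W / card_w) * math.floor(SHEET_H / card_h)

# Standard poker-size cards (63 x 88 mm): 3 columns x 3 rows.
print(cards_per_sheet(63, 88))          # 9
# Halve both dimensions and the count roughly quadruples.
print(cards_per_sheet(63 / 2, 88 / 2))  # 36
```

In practice you would leave a small gutter between cards for the scissors or guillotine, which costs a column or row here and there, but the quadratic payoff of shrinking cards holds either way.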
There's no need to get fancy with real cardstock. Here are some things you can use. I usually just keep any interesting sheets from deliveries I receive. Say, the sturdy sheet of paper used in a plastic sleeve to make sure a comic book doesn't bend in the mail. Perfect for gluing counters onto.

There are three things to consider with paper: size, weight, and texture.

For size, since I'm in Europe, I use the standardized A-sizes. A0 is a giant piece of paper, A1 is half as big, A2 half as big again, and so on. The standard office paper format is A4, roughly equivalent to U.S. Letter. This can easily be folded into A5 pamphlets. I also keep A3 paper around (twice the size of A4), but that I use for drawing, not printing. I don't have a big enough home to fit a floor printer.

Next is paper weight, measured in grams per square meter (GSM). Most home printers can't handle paper heavier than 120-200 GSM, depending on the model. I always keep standard paper (80 GSM) around, and some heavier papers too. If I print counters or cards I sometimes use the sturdier stock. For reference, Magic cards are printed on 300 GSM black-core paper stock. The black core is there so you can't see through the card, a practice taken directly from the gambling circuit.

Lastly, the paper's texture. If you want to work a little on the presentation, it can be nice to find canvas paper or other sturdier variants. I've found that glossy photo paper is almost entirely useless in my own printer, however, always smearing or distorting the print. So when I buy higher-GSM paper I try to find coarser textures.

There are many different kinds of cardboard, and you should try to keep as many around as possible. Some are good for gluing boards or counters onto, while others can help make your prototype sturdier. This isn't as important as paper, but it gets used frequently enough that it felt worth mentioning.

There will be a lot of rambling about cards later, and how to use them.
For now, I only refer to loose cards you can use to prop up your thin paper printouts. These are not strictly necessary, but they make shuffling easier. I don't play much Magic: The Gathering anymore, but I still have lots and lots of leftover Magic cards, so those are the ones that get used as backing in most of my prototypes.

You can cheaply buy colored wooden cubes as well as glass and plastic beads in bulk. It's not always obvious what you may need, so keeping some different types around can be helpful. More specific pieces, like coins or pawns, can also be useful, but unless these components provide unique affordances, the exact kinds of components you have access to are rarely important. It's usually enough to be able to move them around and separate them into groups.

Storage is another thing that needs solving. If you mostly print paper and iterate on rules, a binder can be quite helpful, especially paired with plastic sleeves so you can group iterations of your rules together and store them easily. If you also need to transport your prototypes, the kinds of storage boxes you find in office supply stores will have you sorted.

You can push your analogue prototyping really far and build a whole workshop: a 3D printer for making scenery and miniatures, a laser cutter for custom MDF components, and a big floor-sized professional printer that takes over a whole room. If you have the space and the resources for that, you do you, but let's focus on the smallest possible toolbox for making analogue prototypes.

If you want to buy a printer, just be aware that all of them still have the same problems of losing connections and failing to print, the same problems that have plagued printers since forever. I use a color laser printer with duplex (double-sided) printing support and the ability to print slightly heavier paper, up to 220 GSM. This has been more than enough for my needs. The duplex feature in particular helps a lot if you want to print rulebooks.
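The A-size halving described earlier follows a simple rule that can be computed directly. A sketch (the function name is mine), using the ISO 216 convention that A0 is 841 × 1189 mm and each subsequent size takes the previous short side as its long side, halving the previous long side rounded down:

```python
def a_series(n: int) -> tuple[int, int]:
    """Width and height in mm of ISO 216 paper size A<n>.
    A0 is defined as 841 x 1189 mm; each subsequent size halves
    the longer side (rounded down to whole millimetres)."""
    w, h = 841, 1189
    for _ in range(n):
        w, h = h // 2, w  # new width = half the old height
    return w, h

print(a_series(0))  # (841, 1189) -- the giant sheet
print(a_series(4))  # (210, 297)  -- standard office paper
print(a_series(5))  # (148, 210)  -- the folded pamphlet size
```

Every size in the series keeps the same √2 aspect ratio, which is exactly why folding an A4 sheet in half yields two A5 pages with no waste.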
Having a good store of pencils and pens, including alcohol- and water-based markers, is more than enough. You can go deeper into the pen rabbit hole by looking at Niklas Wistedt's spectacular tutorial on how to draw dungeon maps: it'll have you covered in the pen and pencil department.

Some tools you keep around to hold piles of paper or cards together. Paper clips are extra handy because they can also be used as improvised sliders pointing at health numbers or other variables. Rubber bands are handy for keeping decks of cards together inside a box and for transportation.

Almost any paper-based activity will be a futile effort without decent scissors on hand. Just beware that cutting things out by hand takes more time than you think. If you have a game with many cards, you may have to put on a couple of episodes of your favorite show as you cut them out. If you need more precision than scissors can provide, the next rung on the cutting ladder is a proper cutting mat, a steel ruler, and a set of good sharp knives. These can be craft scalpels, metal handles with interchangeable blades (Americans insist on calling these "X-Acto knives"), or carpet knives.

Once you have rules and test documents printed, you'll quickly disappear under a veritable ocean of paper. Though smaller sheaves can be pinned together with a paper clip, staplers are even better. A standard small office stapler is enough, but if you want to staple booklets and not just sheaves, it can be worth getting a long-reach stapler capable of punching through 20 sheets or more.

Attaching paper to other paper can be done in more ways than with clips or staples. Sometimes you want glue or adhesive tape. Keeping a standard glue stick and a can of spray glue around is perfect. Regular tape and double-sided tape are also great for many things, even if the main use for tape may just be to make larger-scale maps out of individual pieces of paper.
As mentioned previously, it can take some time to cut out all the cards you want to print. You can cut this time down to a fraction, metaphorically and physically, by getting a paper guillotine. These can usually take a few sheets at a time and will give you clean cuts along identified lines. Yelling "vive la France" when you drop the blade is optional.

Lastly, a more decadent piece of machinery that isn't strictly needed is a paper laminator. These heat up a plastic pocket and melt the edges together to give the paper a plastic surface. It makes the paper much sturdier and has the added benefit of letting you use dry-erase markers to make notes and adjustments right on the sheet itself.

There is a lot of software out there that can be used to make cards, boards, illustrations, and whatever else you may need. The following is merely a list of what I personally use.

Since you will often want to test things at different sizes, vector graphics are generally more useful for board game prototyping than pixel graphics. This is by no means a hard rule, but the resolution of pixel images tends to limit how large you can scale them, while vector graphics have no such limitation. My go-to for vector graphics is Illustrator, but there are alternatives like Affinity Designer available as well.

My other go-to piece of software for analogue shenanigans is InDesign, another Adobe program that can also be replaced by an Affinity equivalent. I'm personally so stuck in the Adobe ecosystem, after decades of regular use, that it's too late for me to switch. You can't teach an old dog new tricks, as the saying goes. InDesign is great for multiple reasons, not least of all its ability to use comma-separated value (CSV) files to populate unique pages or cards with data, a feature called Data Merge.

Speaking of spreadsheets, all system designers have a lovely relationship with their tool of choice.
This can be Microsoft Excel, OpenOffice Calc, or Google Sheets; the many convenient features of spreadsheets are a huge part of our bread and butter. I don't even want to know how many sheets I create in an average year. Very broadly speaking, when making an analogue prototype, I use spreadsheets for several purposes, listed near the end of this post.

The fantastic Tabletop Simulator is not just a great place to play tabletop games, it's also a great place to test your own games. Renowned board game designer Cole Wehrle has recorded some workshops for people interested in this specific adventure, and let's just say that once you have it up and running it will make testing your game a lot easier, especially if the members of your team don't all live in the same city. Its biggest strength is how quickly you can push new versions to anyone with the module already installed. If you share your module through the Steam Workshop, it's even easier. For most professional analogue prototypes, this isn't doable, simply because of NDAs and rights issues.

So much stuff! Let's put it all together. The way I've talked about this, there are really six steps to the process of making an analogue prototype.

The first step is setting a goal, and it is more important than you may think. An analogue prototype can easily become a design detour. Because of this, your goal needs to formulate why you are making this analogue prototype. "Test if it's fun with infinitely respawning enemies" could be a goal. "See what works best: party or individual character" could be another. But it can also be a lot narrower, for example designed to test the gold economy in your game, perhaps even to balance it. The point is that you need a goal, and you need to stick to it and cut everything that doesn't serve it. If you need to test how travelling works on the map, you probably don't need a full-fledged combat system, for example.

Facts are the smallest units of decision in your game's design.
Stuff that every decision maker on your team has agreed on, and that can therefore safely inform your analogue prototype. This can be super broad, like "the player plays a hamster," or more specific, like "the player character always has exactly one weapon." You need these facts to keep your prototype grounded, but you don't necessarily need to refer to them all at once. Pick the ones that are most important to your goal.

With a goal and some facts, you need to figure out what systems you will use. Try to narrow them down more than you may think. Don't make a "combat system," but rather one "attack system" and another "defense system." The reason is that what you are after is the resource exchanges and the dynamics of the interactions. The attack system may take player choices as input and dish out damage as output, while the defense system may accept armor and damage as input and send health loss as output. You can refer to the examples of building blocks in this post for inspiration.

This is where we come to the biggest strength of analogue prototyping: real humans provide a lot more nuance and depth than any prototype, analogue or digital, can do on its own.

One player can take on the role of referee or game master, similar to how it would work in a tabletop role-playing game. In many wargames of the past, this was called an umpire: someone who knew all the rules and acted as a channel between the players and the systems. If you have built a particularly complicated analogue prototype, a good way to test it can be to act as the referee and simply ask players what they want to do, instead of teaching them the details of the rules.

Players can play each other's opponents, representing different factions, interest groups, or feature sets via their analogue mechanics. If you built an analogue prototype of StarCraft, you'd probably do it this way, with three players taking on one faction each.
One player can play the enemies, while another plays the economy system, or the spawning system. The goal here is to put one player in charge of the decisions made within the related system. If someone wants to trade their stock for a new spaceship, and this isn't covered by the rules, the economy player can decide on the exchange rate and the spawning player can say that this spawns a patrol of rival ships. Just take ample notes, so you don't forget the nuances that come out of this process.

There are many different ways to use the components you collected previously, and some of them may not be intuitive at all. The humble die is perhaps the most useful component in your toolbox; just look at the dice list near the end of this post and be amazed. People have been using playing cards for leisure since at least medieval times, and the card list that follows shows why; perhaps cards will fit your needs better than dice.

Humans are spatial beings that think in three dimensions. Even something as simple as a square grid where you put miniatures will create relationships of behind, in front of, far away from, close to, etc. Not all analogue prototypes need this, but if you do need it, there are some alternatives to explore, also listed at the end.

With the fast iterations of analogue prototypes, you can usually just change a word or an image somewhere and print a new page. This means you may have many copies of the same page after a while. To prepare for this situation, make sure to have a system for versioning. It doesn't have to be too involved, especially if you're the only designer working on the prototype, but you need to do something. I usually just iterate a number in the corner of each page: the 3 becomes a 4. I may also write the date, if that seems necessary.
I may also add a colored dot (usually red) to pages that have been deprecated, since just the number itself won’t say much and you may end up referring to the wrong pages if you don’t have an indicator like this. Step 1: Don’t : Steal it, fake it, or rehash stuff you have already made before you start a new prototype. Step 2: Just Do It : If it takes less than two days, just do it. As the saying goes, it’s easier to ask for forgiveness than for permission. Step 3: Fail Early : When something feels like a dud even at an early stage, you can assume that it is in fact a dud. There’s nothing wrong about abandoning a prototype. In fact, learning to kill things early is a skill. Step 4: Gather References : Prototypes can only really help with small problems. Big problems, you must break apart and figure out. Collect references. White papers, mockup screenshots, music, asset store packs, and so on. Anything that helps you understand the problem space. The same psychology applies . Rewards, risk-taking, information overload. Many of our intrinsic and extrinsic motivators are triggered the same by boardgames as by digital games. The distance is not nearly as far as we may tell ourselves. Players can represent complex systems . A player has all the complexity of a living breathing human, making odd decisions and concocting strange plans. This lets you use players as representations of systems, from enemy behaviors to narrative. Analogue games are “pure” systems . If you can’t make sense of your mechanic in its naked form, you can probably not expect your players to make sense of it either. Similar affordances . Generating random numbers with dice, shuffling cards, moving things around a limited space; analogue gaming is always extremely close to digital gaming, even to the point that we use similar verbs and parlance. Holism . Probably the best part of the analogue format is that you can actually represent everything in your game in one way or another. 
It doesn’t have to be a big complex system, as long as you provide something to act as that system’s output. Listing all the actions, components, elements, etc., that are relevant. Just getting things into a list can show you if something is realistic or not. Cross-matrices for fleshing out a game’s state-space. If I know the features I want, and the terrains that exist, a cross-matrix can explore what those mean: a feature-terrain matrix. Notes on playtests. How many players played, what happened, who won and why, etc. Calculators of various kinds, incorporating more spreadsheet scripting. Can be used to check probabilities, damage variation, feature dominance, etc. Session logging. If I want to be more detailed, I can log each action from a whole session and see if there are things that can be added or removed. Set a Goal Identify Facts Systemify the Facts Consider the roles of Players Tie it together with Components Types of dice : you can use any number of sides, and make use of the corresponding probabilities. Dividing a result by the number of sides gives you the probability of that result. So, 1/6 = 0.1666 means there’s a ~17% chance to roll any single side on a six-sided die. Use the dice that best represents the percentage chances you have in mind. Singles : rolling a single die and reading the result. Pretty straightforward. Sums : rolling two or more dice and adding the result together. Pools : rolling a handful of dice and checking for specific individual results or adding them together. Buckets : rolling a lot of dice and checking for specific results. The only reason buckets of dice are separated from dice pools here is because they have a different “feel” to them; they are functionally identical. Add/Subtract : add or subtract one die from the result of another, or use mathematical modifiers to add or subtract from another result. X- or X+ : require specific results per die. 
In these cases X- would mean “X or lower,” and X+ would mean “X or higher.” Patterns : like Yatzy, or what the first The Witcher called “Dice Poker:” you want doubles, triples, full houses, etc. Reroll : allowing rerolls of some or all of the dice you just rolled. Makes the rolling take longer but also provides increased chances of reaching the right result. Some games allow rerolling in realtime and then use other time elements to restrict play. So you can frantically keep trying to get that 6, but if an hourglass runs out first you lose. Spin : spinning the die to the specific side you want. Trigger : if you roll a specific result, something special happens. It could be the natural 20 that causes a critical hit in Dungeons & Dragons , or it can be that a roll of 10 means you roll another ten-sided die and add it to your result. Hide : you roll or you set your result under a cupped hand or physical cup, hiding the result until everyone reveals at the same time or the game rules require it. Statistics : common sense may say that you can’t possibly roll a fifth one after the first four, but in reality you can. Dice are truly random. Shuffle : shuffling cards is a great way to randomise outcomes. This can be done in many different ways, as well, where you shuffle a “bomb” into half of the pile and then shuffle the other half to place on top, for example. There are many ways to mix up how to shuffle a deck of cards. Uniqueness : each card can only be drawn once, which means that you can make each card in a deck unique and you can affect the mathematics of probability by adding multiple copies of the same card. Just like the board game Maria uses standard playing cards but in different numbers. Front and back : the face and back of the cards can have different print on them, or the back can just inform you what kind of card it is so you can shuffle them together in setup. 
Of course, the fact that you can hide the faces for other players is also what makes bluffing in poker interesting. Turn, sideways : what Magic calls “tapping” and other games may call exhausting or something else. Some cards can be turned sideways (in landscape mode instead of portrait mode) by default. Turn, over : flipping a card to its other side can serve to show you new information or to hide its face from everyone around the table. It can represent a card being exhausted, or injured, or other state changes like a person transforming into a werewolf. Over/under : cards can be placed physically over or under other cards, to show various kinds of relationships. An item equipped by a character, or a condition suffered by an army, for example. Card grids : cards can be placed in a grid to generate a board, or to act as a sheet selection for a character. One card could be your character class, another could be a choice of quest, etc. It’s a neat way to test combinations. Hide cards : if you want to get really physical, you can hide cards on your person, under boards, and so on. This was one way you could play Killer , by hiding notes your opponents would find. Card text : if you print your own cards, you can have any text you want on them. Reminders, rules exceptions, etc. Deck composition : how you put decks together will affect how the game plays, and predesigning decks for different tests can be very effective. Perhaps you remove all the goblins in one playtest and have only goblins in another. Deck building : decks can also be constructed through play, similarly to how Slay the Spire works. A style of mechanic where you can start small and then grow in complexity throughout a session. Stats : cards can be in different states. On the table, in your hand, available from an open tableau, shuffled into a deck, discarded to a discard pile, and even removed from the game due to in-game effects. 
Semantics: something that Magic: The Gathering’s designer, Richard Garfield, was particularly good at was figuring out interesting names for the things you were doing. You don’t just play a card, you cast a spell. It’s not a discard pile, it’s your graveyard. These kinds of semantics can be strong nods back to the digital game you are making, or they can serve a more thematic purpose.
Statistics: with every card you draw, the deck shrinks, increasing the chances of drawing the specific card you may want. You are guaranteed to draw every card if you go through a whole deck, which is one of the biggest strengths of decks of cards.
Node or point maps: picture a corkboard with pins and red thread, or just simple circular nodes with lines between them. You can draw this easily on a large sheet of paper and just write simple names next to each circle to provide context.
Sector maps: one step above the node or point map is the sector map, where regions share proximity. Grand strategy games have maps like this, where provinces share borders. Another example is more abstract role-playing games, where a house’s interior might be divided into two sectors and the whole exterior area around it is another sector. It’s excellent for broad-stroke maps.
Square grids: if you want a grid, the square grid is probably the most intuitive. But it has a mathematical problem: a diagonal step covers about 1.4 times the distance of a cardinal step, yet most rules count them the same. This means you need to either disallow diagonals or allow them and account for the distortions that will emerge.
Hexagon grids: these are more accurate and classic wargame fare, but they will also often force you to adapt your art to the grid in ways that are not as intuitive as with a square grid.
Freeform: finally, you can just take any satellite image or nicely drawn map, perhaps an overhead screenshot from a level you’ve made, and use it as a map in a freeform capacity. This may force you to use a tape measure or some other way of measuring distances, but if the distances are not important, that matters a lot less, for example if your game shares sensibilities with Marvel’s Midnight Suns.
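The two “Statistics” entries above describe opposite behaviours: dice are memoryless, while a deck drawn without replacement gets more predictable as it shrinks. A minimal Python sketch of that contrast (my own illustration, not from the article, using a hypothetical 60-card deck holding one copy of the card you want):

```python
from fractions import Fraction

# Dice are memoryless: the chance of a six is 1/6 on every roll,
# no matter what came before.
p_six = Fraction(1, 6)

# A deck is drawn without replacement: if exactly one copy of the
# card you want is somewhere in the remaining pile, the chance it
# is your next draw rises as the deck shrinks.
def next_draw_chance(cards_left: int) -> Fraction:
    return Fraction(1, cards_left)

assert next_draw_chance(60) == Fraction(1, 60)  # fresh deck
assert next_draw_chance(10) == Fraction(1, 10)  # better odds late
assert next_draw_chance(1) == Fraction(1, 1)    # guaranteed on the last card
```

This is why going through a whole deck guarantees seeing every card, while no number of rolls ever guarantees a six.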

0 views

The dumber, the better

Zhenyi Tan, in a blog post titled Ensheinification , writes: Every time I replace something with a new thing, the new thing is worse. My mother-in-law bought a new rice cooker. It has 20 settings and none of them cook good rice. The old one had one button and made perfect rice for 10+ years. I talked to her about it. She said she tried three different rice cookers. The first one made the rice too sticky. The second one had many buttons and bad design. And all the buttons turned out to cook the same way. The third was also full of buttons and also made sticky rice. She went back to ask the shop staff how the buttons worked. Nobody knew. They’re just salespeople. Reading this article, I could almost taste the frustration that I often experience myself when I am in the market for something. The rice cooker is actually a great example of an object that is supposed to do one thing, and do it well. It turns out that last Christmas, my wife got me something I had on my wish list for a while: you guessed it, a rice cooker. But not any rice cooker: this “analogue”, beautiful, and simple Hario rice cooker . No button. No plug. No screen. No LED indicator. Just a rice cooker that whistles when the rice is about to be ready. Is it perfect? No. The rice is very good, every time, but I would not call it perfect. But if I prepare the rice the right way, the results are repeatedly and predictably great . The object itself is well-made too. A nice glass lid, a stainless steel and aluminium body, an easy-to-clean and replaceable whistle part: I think this thing could last decades if I take care of it properly. This article by Zhenyi Tan also reminded me of Bradley Taunt’s My Coffee Maker Just Makes Coffee post that I have shared a few times already : Both digital and industrial design suffer from bloat. Far too often I witness fellow designers over-engineer customer requests. Or they add excessive bloat to new product features. 
It’s almost a rarity these days to find designers who tackle work as single items. Everything expands. Everything needs to do one little extra “cool” thing. Nothing is ever taken away. My new rice cooker and my dear old coffee maker are great examples of this philosophy applied to everyday objects, and the more I think about it, the more satisfying it gets. * 1 As you know, I also love to take away and remove stuff to keep things light and simple . When my soon-to-be brother-in-law first visited our new flat last year, he asked me about the kind of roller shutters we had installed, if they were electrically operated and if I could activate them remotely. I told him that the real estate developer had stuck to manual levers to keep the cost down as much as possible, but we could, if we wanted, easily add a little motor on the side. But I told him that I preferred this manual system anyway. If one day I can’t open or close the shutters, I will know where the problem comes from: a mechanical issue with the roller. If I had a smart system, and if tapping the button on my iPhone screen didn’t do anything, the problem could not only be caused by more things, but also become harder to pinpoint. Is the Wi-Fi working? Do the shutters have internet access? * 2 Should I restart the app or my phone? Does my flat have power? Do I need to reset the connection? Is it a bug? Do I have to update the app? Do I need to give the app access to my location? And finally, is there a mechanical issue with the roller? I get that these modern and more complex solutions exist: some people might prefer them over “dumb” systems, some people may actually need 20+ functions for their rice cooker. But if the price to pay for these is less reliability and simplicity, I wouldn’t count this as progress, but as regression indeed. My coffee maker is this fantastic Braun Aromaster Classic KF 47/1 , in white, and not only do I find that it looks a little Dieter-Rams-esque , but it just works. 
I bought it in 2020, and I plan to keep it for at least another six years. Sounds like a lot these days. ^ This sentence alone should be a warning sign urging us to keep things as dumb as possible. ^

0 views

What does it mean to create with AI?

For some weird reason, I always had some kind of slight “mental hesitation” with the meaning of data encoding versus decoding . Which one goes in what direction? To be honest, I have the same kind of weirdness with other concepts: daylight saving time for instance (are we gaining or losing an hour? I can never tell, sometimes even for many days after a change). So I wanted to create a diagram to illustrate the dichotomy between encoding and decoding, for a course I’m creating on software engineering. So one way to “create with AI” would be to ask one: “Can you please create a diagram to illustrate the difference between data encoding and decoding”.

0 views
Anton Sten 1 week ago

What a UX strategy is — and why most teams should write one

A UX strategy is a short document that says what good looks like for the people using your product, and how the team plans to deliver it. That's it. No frameworks, no McKinsey decks, no 113 slides. When I join a project, one of the first things I do is ask if there's a UX strategy in place. Most of the time there isn't. Sometimes there's a brand book, or a product roadmap, or a Notion page someone wrote a year ago and then forgot. Rarely is there a document that actually says: this is what we mean by a good experience, and this is how we're going to get there. It's not that teams don't care. They almost always do. It's that nobody's written it down, so the care gets spent in fifty different directions and the product ends up feeling like a committee made it. Which, in a way, it did. ## What a UX strategy actually is The phrase trips people up, so I usually pull the two words apart. **UX** is what someone experiences when they use your product. Not what the product does — what it feels like to use. Two apps can have identical feature lists and feel completely different. iPhone and Android. Notion and Confluence. Linear and Jira. The features are the same on paper. The experience isn't. **Strategy**, stripped of the consultant baggage, is three questions. Where are we now. Where do we want to be. How do we get there. That's the whole shape of it. Everything else is detail. Put them together and a UX strategy is a document that answers those three questions specifically about the experience you're building. What's the experience like today, what do you want it to feel like, and what are you actually going to do to close the gap. It's not a deliverable for clients. It's not a marketing document. It's a working tool the team uses to make decisions when nobody's in the room to ask. ## What goes in one I've written a lot of these over the years and the contents vary, but the bones are usually the same. **Where you are now.** A short, honest snapshot. 
Who your users are, what they're trying to do, where they get stuck, and how the current experience compares to alternatives. Not a research report — a summary the team can hold in their heads. If it's longer than two pages, it's not doing its job. **Where you want to be.** This is the part most teams skip, because it requires picking a direction and sticking to it. Not goals like "improve the user experience" — that's not a goal, that's a wish. Specific principles you can actually hold a design decision against. We'll come back to those in a minute, because they're the part that does the most work. **How you'll get there.** The practical bit. What's going to change. Who's going to do it. What you'll stop doing to make room. I'm partial to this section because it's the part most strategies leave out, and it's the reason most strategies don't survive contact with real work. A direction without a plan is a wish list. Length-wise, a good UX strategy is short. A page can be enough. Two is plenty. Anything longer and people will paste it into Claude and ask for the summary — which means the summary is the real strategy, and you wrote the rest for nothing. ## Goals are principles, not action items The most important section in any UX strategy is the one defining what good looks like. And the best way I've found to do that is to write principles, not features. Principles are desired outcomes. They sound like sentences, not roadmap items. Some I've used over the years: **Design for everyone.** Build for the eighty percent, not the loudest twenty. Every team I've worked with has someone — a stakeholder, a power user, a vocal customer — who keeps asking for the next feature. Most of those features serve almost nobody. A principle like this gives the team something to point at when the request comes in, instead of just saying no and feeling bad about it. **Optimize for speed.** Most products are judged on how fast they feel before they're judged on anything else. 
Not literal load time — perceived speed. How quickly something responds. How few steps it takes. People will forgive almost anything if the product feels fast. **Different is good.** Make the important thing obvious. The primary action should be visually distinct, placed where people expect it, and impossible to miss. Insecurity is the root of bad user experiences. If the user is wondering what to do next, you've already failed. **Always start with what's familiar.** Your users spend most of their time in other apps, not yours. Look at the patterns they already know. Borrow shamelessly from the conventions of the industry you're in. Familiarity isn't a lack of imagination — it's respect for the user's time. These are just examples. Yours will be different, and they should be. The point isn't the specific principles, it's the form. Write things you can hold a real design decision against on a Tuesday afternoon when nobody's watching. ## Why this matters more now For most of my career, UX strategy was the kind of document large teams wrote because they could afford to. Smaller teams skipped it. They were busy shipping, and shipping was the hard part. Shipping isn't the hard part anymore. The cost of building has collapsed. Anyone can put a working product on the internet in a weekend. Tools write half the code. AI handles the parts that used to take a junior designer a week. The bottleneck used to be execution, and execution is now nearly free. Which means the hard part is the part that was always hard but easier to ignore: knowing what's worth building, who it's for, and what good would even look like. When making things was expensive, you had to be careful before you started. Now you can start anything, which is exactly why so many teams are shipping a lot of things that nobody needed. A UX strategy used to be a luxury. It's becoming the thing that separates teams who ship useful work from teams who ship a lot of work. 
I've written about this from a couple of different angles — [vibe coding for designers](https://www.antonsten.com/articles/vibe-coding-for-designers/) covers what changes when designers can build, and [simple is hard](https://www.antonsten.com/articles/simple-is-hard/) is about why restraint is the harder discipline. A UX strategy is the document that makes restraint possible. ## What a strategy actually does The thing nobody tells you about UX strategies is that the document itself isn't really the point. The point is the conversations you have while writing it. The disagreements that surface. The assumptions that turn out not to be shared. The "wait, is *that* what we're optimizing for?" moments that happen when you try to put it on paper. The strategy is the artifact. The alignment is the work. When I look back at the projects where the team shipped well and the ones where it didn't, the difference was almost never talent or budget. It was whether the team agreed on what they were actually trying to do. Sometimes that agreement existed without a document. More often it didn't, and the document was what made it real. If you're on a team without a UX strategy, you don't need a long one. You don't need a template. You don't even need to call it a strategy if the word makes people roll their eyes. You need a few pages that say what good looks like, what you're going to do about it, and what you're going to stop doing to make room. Then you need everyone on the team to actually read it. The surprising thing isn't that most teams don't have a UX strategy. It's that most of them are doing fine without one, until suddenly they aren't. A strategy is what you wish you'd written before things got hard. *I wrote a chapter on UX strategy in [Products People Actually Want](https://www.antonsten.com/books/products-people-actually-want/) — if you want the longer version.*

0 views
Jim Nielsen 1 week ago

Prototyping with LLMs

Did you know that Jesus gave advice about prototyping with an LLM? Here’s Luke 14:28-30: Suppose one of you wants to build a tower. Won’t you first sit down and estimate the cost to see if you have enough money to complete it? For if you lay the foundation and are not able to finish it, everyone who sees it will ridicule you, saying, ‘This person began to build and wasn’t able to finish.’ That pretty much sums me up when I try to vibe a prototype . Don’t get me wrong, I’m a big advocate of prototyping . And LLMs make prototyping really easy and interesting. And because it’s so easy, there’s a huge temptation to jump straight to prototyping. But what I’ve been finding in my own behavior is that I’ll be mid-prototyping with the LLM and asking myself, “What am I even trying to do here?” And the thought I have is: “I’d be in a much more productive place right now if I’d put a tiny bit more thought upfront into what I am actually trying to build.” Instead, I just jumped right in, chasing a fuzzy feeling or idea only to end up in a place where I’m more confused about what I set out to do than when I started. Don’t get me wrong, that’s fine. That’s part of prototyping. It’s inherent to the design process to get more confused before you find clarity. But there’s an alternative to LLM prototyping that’s often faster and cheaper: sketching. I’ve found many times that if I start an idea by sketching it out, do you know where I end up? At a place where I say, “Actually, I don’t want to build this.” And in that case, all I have to do is take my sketch and throw it away. It didn’t cost me any tokens or compute to figure that out. Talk about efficiency! I suppose what I’m saying here is: it’s good to think further ahead than the tracks you’re laying out immediately in front of you. Sketching is a great way to do that. (Thanks to Facundo for prompting these thoughts out of me.) Reply via: Email · Mastodon · Bluesky

0 views
Jim Nielsen 1 week ago

The Blandness of Systematic Rules vs. The Delight of Localized Sensitivity

Marcin Wichary brings attention to this lovely dialog in ClarisWorks from 1997: this breaks the rule of button copy being fully comprehensible without having to read the surrounding strings first, perhaps most well-known as the “avoid «click here»” rule. Never Register/​Register Later/​Register Now would solve that problem, but wouldn’t look so neat. This got me thinking: how do you judge when an interface should bend to fit systematic rules vs. assert its own peculiarities and context? The trade-off Marcin points out is real: “Never Register / Register Later / Register Now” is fully self-describing and satisfies the «click here» rule. However, it kills the elegant terseness that makes that dialog so delightful. “Now / Later / Never” is three words with no filler and a perfect parallel structure. It feels like one of those cases where the rule is sound as a guideline but a thoughtful design supersedes the baseline value provided by the rule. Rules, in a way, are useful structures when you don’t want to think more. But more thinking can result in delightful exceptions that prove better than the outcome any rule can provide. I suppose it really is trade-offs everywhere : As software moves towards “scale”, I can’t help but think that systematic rules swallow all decision making because localized exceptions become points of friction — “We can’t require an experienced human give thought and care to the design of every single dialog box.” What scale wants is automated decision making that doesn’t require skill or expertise because those things, by definition, don’t scale. Then again, when you manufacture upon inhuman lines how can you expect humane outcomes? When you choose to make decisions on a case-by-case basis, the result can be highly tailored to the specific context of the problem at hand. However, within a larger system, you can start to lose consistency and coherence across similar UX decision points. 
When you choose to make system rules override the sensitivities of individual cases, you can lose the magic and delight of finding waypoints tailored exclusively to their peculiarities. Reply via: Email · Mastodon · Bluesky

0 views
ava's blog 2 weeks ago

some silly art

Made some silly art of my online friends and myself today, redrawing memes or other images I saw online. I love mango. (this is referencing this meme ) These are Suliman as purple Keroppi and Mono as a mix of his Jiji icon and Googie . >:). ( original art ) This is Kami :3 ( original art from an anime called House of the Sun) Reply via email Published 30 Mar, 2026

0 views
HeyDingus 2 weeks ago

Launchpad was great for uninstalling apps; Spotlight is not

Apple published this video to their Support channel on YouTube yesterday, and it motivated me to get this off my chest: Uninstalling apps on macOS is not as easy as it should be. Yes, I know, I know that you can just drag an app to the trash and technically it’s gone. That’s what Apple recommends doing in its video. But then why are apps like Raycast , CleanMyMac , and AppCleaner able to find leftover files scattered around your system by the deleted app? Maybe it’s just the completionist in me, but I don’t want those files left behind! One thing — the only thing? — I liked about Launchpad was that it made it super obvious how to uninstall (Mac App Store) apps. 1 Just like on your iPad/iPhone, you could click and hold on the app’s icon to send it into “jiggle mode”, and then clicking the ‘X’ would remove it. I could be confident that all the app’s associated bits and bobs would be removed from my system. But that changed with Tahoe. While Spotlight got a huge boost in capability as a whole with clipboard history and actions, it also subsumed Launchpad’s role as the main, well, launcher for apps. But there are no affordances in Spotlight for removing apps like Launchpad had. AppCleaner was my go-to tool back in the day, but now I use Raycast to get the job done with confidence. Raycast’s implementation could offer some inspiration for Apple. After searching for an app within Raycast, a simple ⌘K shortcut reveals a host of actions that can be taken on the app. You can open an app, reveal it in the Finder, quit it, and, yes, uninstall it — among other things. Apple could follow this model and provide an ‘Uninstall App’ action within Spotlight. Spotlight’s interface, seeing as it replaced Launchpad, should offer the same capability for removing apps. And it should be as thorough as on an iPhone or iPad. P.S. I also occasionally use Raycast to quit apps that stubbornly have no icon in the Dock or menu bar and are therefore tricky to quit completely. 
Apps installed outside of the Mac App Store would not display the ‘X’ to remove it. You had to do it the “old fashioned” way of dragging the app to the trash and then hunting down its system files. ↩︎ HeyDingus is a blog by Jarrod Blundy about technology, the great outdoors, and other musings. If you like what you see — the blog posts , shortcuts , wallpapers , scripts , or anything — please consider leaving a tip , checking out my store , or just sharing my work. Your support is much appreciated! I’m always happy to hear from you on social , or by good ol' email .

0 views
ava's blog 2 weeks ago

art feelings

Inspired by Vaudeville Ghost’s make bad art . I’ve always felt a resistance towards learning how to do art “properly”. Over the course of my life so far, I did occasionally look at short tutorials for some things, or booked one-time art workshops, but I just couldn’t find anything in there that made me want to stay at it perfecting it, and I don’t think I kept any technique long term. I also felt restricted by art class in school (despite great grades), which just wanted me to reproduce a style as closely as possible. I know most people in art progress by emulating others and learning the rules before producing stunning art in their own style. They grind practice sessions and drawing exercises and use palettes that have all the right values and complement each other, and they set all the shadows and highlights just right, their use of color underlines the piece. The result is something really amazing and kind to the eyes, but it’s also very technical and mechanical at times. Some of it treats art like this thing you can win, that can be graded finely and put neatly into boxes, and that it’s something you perform for others. That if you follow the rules to a T, the result is always good art. And they’re mostly right about that. But I’ve never had the drive to optimize my art this hard, for it to be checking off a list. I never could see it as a challenge to master and learn specific techniques (aside from some oil stuff I tried). For me, the more I look at what others do in art, the more my creativity and style disappears, and I want to protect that instead. I don’t want to feel limited by having to think about whether I’m doing something right. I know some limits and rules can set others’ art free, or polish the piece, but not mine; too much time spent looking elsewhere and I just emulate others, too many limits and I just stop. Some things in art also sound too serious to me, mathematical and snobbish. 
Like colors are formulas you are only allowed to calculate in a specific way, or a language whose vocabulary lists you learn by heart. I was never good at math, and my difficulties with my mental eye have forced me to be more experimental and see where I end up. Color theory is a law to others and an optional guide for me; I think the rules are not bent and questioned enough. For a lot of things, I think “This only ‘looks right/better’ to you because we are inundated with this style or use of color everywhere.” I hate that people’s style is called wrong due to weird dimensions, weird use of color or not respecting the rules of the medium, but if they stick to it enough and it becomes popular, suddenly it’s “allowed” and taken seriously, analyzed and retroactively has reasons and interpretations applied to it. It only gets legitimized when close enough to an existing style or palatable enough or following some made-up rules. I think if I really tried, my art would be so much better objectively, and it would be nicer for others to see, but simultaneously, it would ruin the experience for me. It would introduce guardrails I don’t wanna have. This used to be a point of shame for me, like I’m choosing to stay uneducated, ignorant, and with unused potential. Then years ago, I read a post by an artist whose art I really like who said they ignored all advice that’s usually given for the medium (you aren’t allowed to do this, only xyz is the proper way to do it!) but now they’re successful with their style. I know people will say a good study will have you learn in a few weeks what could otherwise take you years to learn on your own (if at all), but I am fine with it. I don’t want to become a professional artist, and I don’t wanna become good at this hobby; I just wanna do it when I feel like it. This is also protecting me from the effects of perfectionism. 
Some hobby artists seem like they’re only allowing themselves to enjoy and engage with this hobby if they’re aiming for a specific standard and pretending they’re gonna have to pass an exam about it, because free time has to be productive as well, and they cannot bear to spend time on something that isn’t useful or earning admiration from others. Time is scarce, why throw paint on the paper for fun, if you can follow a YouTube guide in earnest tension and afterwards say you have studied a technique? So much more worth your time in today’s metrics. A while ago I was obsessed with drawing butterflies, currently it’s circles and gradients and colorful waves. Nothing impressive. I would like to draw more pixel art of rooms, more nature landscapes with gouache again, and - surprisingly, after writing all this - I’d love to try the jelly art style. Probably the closest I’ve ever come to wanting to submit to a set of rules, because obviously I need to adhere to them to nail the style. We’ll see. I like my mixed stuff the most. I have a canvas where I mixed acrylics, gouache and makeup. I have another that has acrylic paint and some crystals stitched on it. Honestly, looking back on it all, I think there have also been too many times where I felt like making art for others was a net negative for me, or that my style wasn’t understood or respected and people didn’t go about feedback in a respectful way. Like, if their character feature is big, it’s “stylized”, when I do it it’s “awkward”; and I know it’s because one works within the established rules and one doesn’t, so one is seen as skill and one as an accident or lack of skill. People will always see a person as more skilled when they make art that’s more harmonious to look at, and it seems to me I just don’t consistently create art that looks harmonious to anyone else but me; makes sense, with so many mediums and months or years of not making art. If I wanted to make better art, I’d have to draw more often, and draw like others do. 
I remember a time a teacher scribbled over my art, and I never want to experience that again. So I released that expectation, and I make “bad” art for me. Reply via email Published 27 Mar, 2026

0 views
Stone Tools 2 weeks ago

Aldus PageMaker on the Apple Macintosh

In life, there are love affairs and there are marriages. Deluxe Paint was (and is) an amazing, beautiful piece of software. It taught me so much about color, texture, and painting with light, but more specifically it opened my eyes to the possibilities of digital art as a medium. Yet, as much fun as I had, I never became a digital painter, I don't really do any pixel art these days, and over time the passion faded, never truly gone, but certainly diminished. A love affair. In college, with a declared major in electrical engineering, I took a chance at writing for the school newspaper, at the urging of my English professor. I was hooked from the jump, caught the reporting bug, and learned the ins and outs of journalism. Over the next four years, I became adept at Aldus PageMaker , the heart of our student media production process, fascinated by its ability to amplify the written word. It was because of PageMaker I switched majors to graphic design; we stuck together well into my professional career. A marriage. Sometimes these retrospectives are fun peeks into the past, opportunities to understand computing history a little better. Other times, I'm revisiting a condemned old house I used to live in, in a town I abandoned, finding and dusting off a forgotten jewelry box, inside which sits a tarnished wedding ring. What we had was beautiful, once. With this exploration, I'm honestly not expecting to rekindle any deep love for PageMaker. It taught me much in my youth, lessons I've taken to heart over the years and carry with me to this day. Still, you never know, maybe there's something yet to learn. Only one way to find out. This was the last version released under the Aldus label. Soon thereafter, Aldus merged with Adobe, and this was re-released as Adobe PageMaker 5.0a. I have a very specific project in mind this time around. No, "project" is not quite the right word. It's not a project, it's a calling . Many years ago, one Mr. 
Robert Charles Joseph Edward Sabatini Guccione had a dream. That dream? To compete against Hugh Hefner's Playboy magazine for dominance in the adult erotica print landscape. His dream expanded into a hotel staffed by Penthouse Pets and visited by Saddam Hussein. The dream grew further still into an X-rated box-office bomb starring Malcolm McDowell, Helen Mirren, and Peter O'Toole. Good times. Guccione's eventual wife, Kathy Keeton, had her own dream: a kind of Penthouse magazine, but for the mind . It was to be a heady packaging of art, literature, and investigations into science and the paranormal, presented on high-gloss paper, with a design sensibility that promised intellectual value well beyond the $2.00 cover price. Heck, the liberal use of spot-color metallic ink in every issue was itself worth the $2.00. Edited by Frank Kendig, with Art Direction by Frank Devino, issue one of OMNI Magazine hit newsstands with the October 1978 issue, around the time of the debuts of the Speak & Spell, Intel's 8086 processor, and Space Invaders. As Bob Guccione wrote in the premiere issue's "First Word" publisher's column, "This then, is the editorial promise of OMNI - an original if not controversial mixture of science fact, fiction, fantasy, and the paranormal." The first issue set the table neatly. A story about scientific advances in age-defiance, fiction from Isaac Asimov, an interview with Freeman Dyson (he of the Dyson Sphere joke ), and artistic photography of soap bubbles all combined to take the reader on a magical journey of enlightenment. It worked on me at any rate. OMNI's print run ended in 1995, with flaccid attempts to reanimate its corpse over the years. A new issue appeared on newsstands in 2017 with a cover design I will charitably call, "I guess they tried." That was a cheap shot, and I need to be cautious throwing stones here, as I am about to make my own attempt at designing OMNI Magazine . My name isn't Christopher Hubris Drum for nothing. 
Launching into PageMaker brings back a tidal wave of memories. Good lord, the volume of Diet Coke I drank during long production nights back in the 90s! Even now, placing an image is as reflexive as breathing. However much I mentally prod at the inner crevices of my brain matter, though, the rest of the expert knowledge appears to be long gone.

Considering PageMaker from a digital native's perspective, its tools and way of thinking can be quite anachronistic. Today, we enjoy a kind of fluidity in page design, worrying about things like "responsive design" with flexible, auto-adjusting layouts. That was not such a concern to early desktop publishers, except in coarse-grained measures. What they really needed was a bridge across the mental divide between manual paste-up and the new digital hotness. Consider the tools of the trade at the time: X-Acto blades, point tape, light tables, non-reproducible pens, rubylith/amberlith, vellum, wax machines, PMT machines, photo typesetters, and just so much paper. All but the paper were replaced with a mouse.

A fantastic video showing someone doing manual paste-up, just to give you a sense of the dramatic sea change desktop publishing introduced.

PageMaker provides a digital equivalent of a physical pasteboard. Much like other software I've looked at, especially in the word processing arena, the "Don't worry! Yes, it's on a computer, but we've reproduced a metaphor you understand" approach informs a lot of decisions behind its interface and tool-set. Everything you do is "manually digital," if that makes any sense. User actions are similar to the pre-digital workflow; it just happens on a screen instead of a light table. For example, there are no tools to assist with positioning elements relative to one another. There is no real concept of "layers." There is no such thing as "grouping." If you want text in columns, you must manually lay those down, one at a time.

So here's our blank page in PageMaker and the palettes which are ready to assist.
We have a Letter-sized page, outlined in black, with default margins in pink and purple. In the bottom left are icons for the left and right "master pages" (elements that will be included on every left or right page, respectively) and a little page icon showing that this is a 1-page document and we are on page 1. I really love the cute little pages in the scroll area; it's so easy to immediately jump to a specific page or spread. It's all so simple, it's kind of impossible to forget how to use it.

Toolbox does what it says, offering the selection arrow, lines, text, object rotation, boxes, circles, and image cropping. Styles has pre-built paragraph styles, which can be modified to meet your design spec, and which can "cascade" by basing styles on other styles. Colors are of your own mixing, or can be pulled from licensed libraries, like Pantone, Toyo, Trumatch, and others. There is also a Library palette, for storing reusable objects in your publication, which utterly failed me in my tests.

It might be hard to wrap a modern mind around it, but that's basically it for the palettes. They don't dock with one another (that would come in the Adobe era). There are no hidden sub-palettes. There's just what you see: tools, styles, colors, and the one at the bottom, a context-sensitive "control palette." This is either a merciful culling of modern palette madness, or a frustrating barrier to artistic expression, depending on which side of 2000 you were born on.

Interestingly, during my early poking around in the tools, it isn't so much that I find myself wishing for more palettes as that I simply want a refinement of these. Notice what is not inside the palettes: there's no method for creating new colors or styles, for example. Those are separate options under the Element and Type menus, respectively. A little button would be nice.
While re-familiarizing myself with the forgotten contours of the program, I'm remembering how great the control palette is. It debuted with PageMaker 4 and completely changed the usability of the program. In the image above you can see how the control palette morphs itself to show a core set of commonly used functions specific to the currently selected tool, represented by the left-most icon: box, text, line, and image (top to bottom). The palette gives live stats and mathematically precise control over most aspects of each tool. I find the control palette so adept at handling 90% of what I need to do, I basically don't touch the menus of the program. If color selection could be worked into the palette in some fashion, that would handle another 9.9% of what I need.

The controls for numeric positioning are not just useful, they're basically required. Trying to position anything with precision by hand is futile, which reveals a letdown. The palette shows, in real time, a dragged item's position on the page. When dragging out a guideline from the ruler, we can see where that guideline will fall when released. But guidelines, which need high precision, are special, delicate creatures treated with unique rules in PageMaker. Unlike everything else, guidelines are not page objects, and so they cannot be selected for editing. We can grab them and move them around, but we cannot just click-select one. If we can't select a guideline, we can't fine-tune it with the control palette. It's the one thing we need precision for, yet the one thing we're denied.

This blank page is driving me crazy. Let's get a masthead on there, so I can at least pretend like I'm a real designer for a moment.

Using the control palette to set the masthead.
(measurements derived from here)

Continuing to think of PageMaker as just a big area for building collages out of raw material, this means it also doesn't have any concept of layers. Things are layered, but to find something in a stack means sifting through the stack, item by item, to reach the desired element. This is PageMaker's biggest flaw, and it proves frustrating time and again. To be completely fair, when Aldus PageMaker 5 released in 1993, Adobe Photoshop was at version 2.5 and didn't have layers either. Photoshop wouldn't get layers until version 3, in 1994.

It's particularly frustrating because the simple act of clicking on elements can bring them to the front automatically, as with the main cover image. Once I have it in place, I often find it is obscuring the masthead. So I have to send it to the back, over and over and over again, with every accidental click. That happens a lot, because PageMaker misunderstands my click intent quite frequently, clicking "through" my desired object into the background image. That jumps the image to the front, and here we go again.

This brings up another issue, which is that there is no way to "lock" objects into position. Everything is loosey-goosey and free-form, again mimicking old-school paste-up methodology (I recall dropping paste-up boards and losing a carefully arranged layout or two back in the day). That adherence to the old ways makes sense to me for PageMaker 1 and 2. By version 3, I think the digital nature could have been better explored. By version 4, it absolutely should have been. By version 5, it feels like weaponized incompetence. There is a clear reason QuarkXPress enjoyed a reported 90%+ market dominance in desktop publishing by the time PageMaker 6 came around. Simply put, they embraced the future of layout, not the past. Until they didn't, but that's a story for another day.

Now to flesh out this cover a bit more. One thing that takes getting used to is how much of the design occurs in our imaginations.
The screen is simply too small and too low-resolution to know with 100% certainty that what we see is what we want. That's one reason the control palette is so invaluable: we can know with mathematical certainty that an object is where we intend, despite what we see on screen when zoomed out.

Like EA developing the IFF file format for the Amiga community, Aldus likewise developed TIFF (tagged image file format) to unify image handling on the Macintosh. TIFF was the image standard for continuous tone images in publishing on the Mac, bar none. Of course, images fit for print were pretty heavy objects for the RAM-restricted Macintoshes of old. Lightweight 72dpi images might have been fine for the screen, but 300dpi was needed for output to Linotype for final camera-ready artwork. Here's a dpi vs. lpi explanation, in case a digital-only workflow has shielded you from learning of it.

The cover will need a 9" x 11.5" image at 300dpi in CMYK. Using LZW, the only compression method PageMaker recognizes, that's a 20MB file, and PageMaker only requires 3MB to run. It's efficient at what it does, but something's gotta give. In PageMaker we can link to TIFF files (embedding is also an option), with three on-screen preview options: greyed out, normal, and high resolution. Your choice will depend on your system and the complexity of your layout. If things are chugging too hard, step down. Turn on high to get it right, then turn back to grey to avoid the ulcers of slow screen redraw on your Mac SE.

This may still be taxing to early systems, but we have another option. A common practice in the day was to use FPOs, "For Position Only" images. Those were low-resolution proxies, good enough for a designer to marry text and graphics with some degree of confidence without stressing her computer.
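As a quick sanity check on those numbers (my own back-of-the-envelope arithmetic, not anything PageMaker reports), the raw size of that cover scan works out like this:

```python
# Back-of-the-envelope size of the OMNI cover scan described above.
width_in, height_in, dpi = 9.0, 11.5, 300
channels = 4  # CMYK, one byte per channel

pixels = (width_in * dpi) * (height_in * dpi)  # 2700 x 3450
raw_bytes = pixels * channels
print(f"{pixels:,.0f} pixels, {raw_bytes / 1024**2:.1f} MB uncompressed")
# 9,315,000 pixels, 35.5 MB uncompressed
```

LZW on continuous-tone imagery often only manages somewhere around 2:1, which lands comfortably in the 20MB ballpark quoted above, roughly seven times the RAM PageMaker itself needs to run.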
After delivering digital files to the printer (oftentimes literally handing over floppies, SyQuest, or Zip disks in person), a process for swapping FPOs with print-ready, high-resolution versions of the same images was available to the prepress team. Design in low-rez, output in high-rez. For OMNI, the only printer I have available is the coin-operated color laser copier at the convenience store, so I'm not overly concerning myself with "press ready" on this. However, I don't want to make things artificially easy on myself either. There is no art without pain, as they say.

In reading about the origins of PageMaker, and interviews with and about Aldus's founding by Paul Brainerd, it seems he was a real stickler for typography. Of note, he pushed hard for things like typographer's quotes (curly vs. straight), and so within PageMaker there are quite a few options for setting type "just so." Typographer's quotes can be toggled on as a document default. Text tracking, leading, baseline shift, and kerning are all settable in precise increments by the control palette. Letter-by-letter kerning is also easily achievable, making it simple to set nice, tight TA pairs (a little Guccione callback joke for you there). Despite Brainerd's self-professed love for good type, the Quark crowd lamented PageMaker's typographical controls.

One area in which QuarkXPress and Ventura Publisher had innovated was the tools for laying down columns of text. Those used a "text box" methodology, which is pretty much the standard today. A box could be drawn, delineating an area of the page which should hold text. That box could then be set up to contain columns, gutters, insets, a frame and so on, and the text would flow within accordingly. Move the box and the internal formatting moves with it. It makes too much sense, and so PageMaker doesn't do that.
PageMaker kicks it old-school, forcing us to put down guidelines on the page that show where columns of text should fall, nay where they could fall if one were so inclined. They're mere suggestions, really, and it is up to the designer to place the text within those guidelines, or not. This kind of adheres to the concept of using a grid structure for a page, where the grid can be used rigidly or fluidly, as the designer may choose.

Using an OMNI scan for measurements, I've set up a template for two-page spreads. Notice how column guides fill the page top to bottom. Left and right pages can have different column counts, but a single page cannot. With the box layout methods of Quark and company, if we want to split the layout into 4 columns on top and 3 on the bottom, we can draw two text boxes and assign respective column counts. To do the same thing in PageMaker, we have to set the page to 4 columns, lay out the 4 columns, then change to 3 columns, and lay those out. This kind of futzing about is the drum-beat of using PageMaker, a rhythm of "set a value, do a thing, change that value, do the next thing, reset the value, do another thing" which my muscles remembered long before my brain did.

PageMaker offers a few tools for wrangling long-form publications. The story editor is a lightweight, built-in word processor, with spell check and find-and-replace. Styles can be applied in its stripped-down text view; their effects aren't visible until exiting the story editor, but they are annotated in the margin. It's nice not having to jump out of PageMaker just to do a quick edit.

Throwing everything together into a monolithic document can be unwieldy. It's a far sight better to break the publication into separate documents for work by various contributors simultaneously. PageMaker's book feature will let us link multiple individual documents into one larger, logical construct. Select a set of files, reorder them into their book order, and away you go.
Once those document relations are set, we have a number of tools for helping our reader navigate the tome. A table of contents can be auto-generated, thanks to paragraph styles. Turn on the "Include in table of contents" flag for any given style to get table of contents coalescing for free. The formatting options will probably get you about 60% of the way toward a final layout.

Setting a "next style" for each paragraph style lets me simply type and automatically receive a perfect column header. This exists in page layout software even today; see, we weren't completely hopeless back then!

I can't one-shot an "OMNI perfect" table of contents, but it's a good starting point and saves me from annoying minutiae, like laying down 1-point rules. Automatic page numbering is also available, by positioning page number placeholders on our master pages. When collated, each document will receive the appropriate page numbering relative to that document's position in the complete book.

How about a nifty end-of-book index? It, too, can be auto-generated, though it requires good planning and forethought. Highlight a piece of text and promote it to an index entry; PageMaker gives you an opportunity to tweak the data which drives the index layout, and will generate a text block containing a neatly formatted index. Such an index might need alphabetical ordering, or perhaps some kind of topical ordering, and both are possible. Tools for setting up index topics, and the rules for PageMaker to follow when extracting that data, are available to ease the pain. It takes some playing around, testing the waters, to really get how the pieces fit together into a final index, but it proves to be a fairly robust, data-driven solution to a logistical nightmare.

PageMaker accepts a wide variety of content types.
Various word processor formats, graphic formats in both raster and vector (EPS), and even Lotus 1-2-3 and dBase data can be imported, for those worried I wouldn't tie things neatly back into previous posts. With all of the various pieces on the page, we need to be able to make sure they're linked to the right source documents and stay up to date as our team makes changes.

For a while, there was a publish/subscribe mechanism on the Mac, which danced on the edge of OpenDoc ideas (but was not related, to my knowledge). PageMaker supports this, functioning as a "subscriber," and it is up to other applications to function as data "publishers." If you know OLE on Windows, you know what I'm describing here. Once subscribed to a component, which could be as hyper-specific as a single word from a Microsoft Word document (I tried it before I claimed it!), PageMaker will sense changes to the source data and prompt the designer to keep it up to date. This won't help if the change alters the length of the text and forces a reflow, or if the shape and dimensions of the graphic require a new text wrap. Also, any styling previously applied to the subscribed element will be lost after an update and will need to be reapplied. "Technically, this all works," he said with a shrug.

Honestly, I find it annoying, both to set up and to utilize. PageMaker interrupts right in the middle of working on something else to announce updates to subscribed elements. The Links panel already lists everything placed into the project with each link's update status. It also lets me one-click update all links globally, on my own time, at my own pace, when I'm ready. It's unobtrusive and puts the control back into my hands. Publish/subscribe? More like PUNISH/subscribe, am I right folks?
the audience boos, pelting me with unopened copies of Microsoft BOB

I don't need to dig into this too much, because it's very much a "going to press" feature, and a vigorous interrogation of pre-press technologies falls far outside the scope of this article. In the context of the desktop publishing wars, it is important to note PageMaker's constant catch-up to Quark in the professional arena. Where Aldus was initially content to appeal to a "making flyers at home for church fund-raising bake sales" kind of crowd, Quark had gone for the professional jugular. Generating separations, the component cyan, magenta, yellow, and black layers that, when combined in ink on paper, build our final image, was a major missing piece of a robust publishing strategy. Or at least that was true until QuarkXPress 2 in 1989, itself contemporaneous with PageMaker 3.

Around 1992, Aldus attempted to staunch the bleeding of users to Quark. PageMaker 4.2 came bundled with a standalone application for generating color separations, called Aldus PrePrint. As one review said, "It does the job." I can hear the yawn that accompanied the sentiment.

Finally, four years after QuarkXPress 2, PageMaker 5 integrated color separation generation into the application proper. CMYK and spot color plates can intermingle, plate order can be assigned, colors can be set to overprint/knockout, and line screen/angle are all adjustable. It's all pretty coarse-grained, though. For example, adjusting for dot gain on uncoated paper stock, removing color cast, grey component replacement, adjusting plate levels and curves to compensate for a finicky press; situations like those are far more suited to sophisticated tools like Letraset ColorStudio or even Adobe Photoshop, after it gained CMYK control. For simple, basic, day-to-day separation needs, especially for those on a budget, PageMaker 5 does a fine job; an assertion I can illustrate with a clever video.
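The recombination trick the video relies on is simple per-channel math: each separation tints white paper, and "multiply" blending stacks the inks. Here's a toy sketch of that math for a single pixel (my own illustration of the blending arithmetic, not Affinity's or PageMaker's actual code):

```python
# Toy model of recombining CMYK separations with "multiply" blending.

def plate_as_tint(ink, absorbs):
    """One separation as an RGB layer on white paper: the ink's coverage
    (0.0-1.0) darkens only the channels that ink absorbs."""
    return tuple(1.0 - ink if ch in absorbs else 1.0 for ch in "rgb")

def multiply_layers(*layers):
    """Multiply blending: the per-channel product of all stacked layers."""
    out = (1.0, 1.0, 1.0)
    for layer in layers:
        out = tuple(a * b for a, b in zip(out, layer))
    return out

# Sample pixel: 100% cyan ink plus 50% black ink.
c, m, y, k = 1.0, 0.0, 0.0, 0.5
stacked = multiply_layers(
    plate_as_tint(c, "r"),    # cyan ink absorbs red
    plate_as_tint(m, "g"),    # magenta absorbs green
    plate_as_tint(y, "b"),    # yellow absorbs blue
    plate_as_tint(k, "rgb"),  # black absorbs everything
)
print(stacked)  # (0.0, 0.5, 0.5)
```

Multiplying the tinted plates reproduces the textbook naive CMYK-to-RGB preview, R = (1-C)(1-K) and so on, which is why dragging the layers together on screen looks like ink going down on paper.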
I brought the PDF into Affinity 3, tinted each separation, and set those to "multiply". Dragging them together simulates the final printing effect. I'd say those separations look accurate.

Having made the effort to catch up to QuarkXPress with its color separation utilities, Aldus had further catching up to do with Quark's plug-in architecture. What Lotus 1-2-3 "add-ins" did for spreadsheets, Quark XTensions did for desktop publishing. Aldus had to keep their ball in play, and so introduced Aldus Additions with version 4, expanding the breadth of bundled tools in version 5. In practice, using the tools reveals how weak Aldus's retort to Quark was. For example, PageMaker 5 adds the ability to "group items" through an Addition. Hooray! "PS Group It" and "PS Ungroup It" kind of do what they state, except all selected items must be completely contained within the current page boundaries. If anything sticks off into the pasteboard, it cannot be grouped. Additions are, put simply, a mess of a solution to a real problem. The Macworld PageMaker 5 Bible concurs.

Where Aldus kind of dropped the ball, third-party Additions didn't do much to make up the slack. The biggest package, and one I remember using, was Extensis PageTools, for about $100 in 1994. A visually heavy, kinda Microsoft Word 5-esque toolbar with lots of geegaws and whoozits, character-level styles (PageMaker only did paragraph-level), find and replace for colors, a visual thumbnail document navigator, and more formed a grab bag of solutions to a variety of random PageMaker annoyances. It's not nothin'.

While I was researching the history of desktop publishing, one word came up again and again: democratization. Desktop computers would ostensibly simplify formerly specialized skills into tools so simple anyone could use them. This would drive down production costs, opening print publishing to a wider audience.
It occurred to me to do a Google N-Gram search, and this graph in particular got me thinking about democratization a little more. I wasn't doubting the truth of it all, per se, but the chart gave me a "this needs further investigation" itch I needed to scratch. Searching for "social impact of desktop publishing" turns up surprisingly little, at least in the way that I mean it. There is a good amount of information on the technical side of the discussion, extolling the virtues of PostScript and the cost/time savings gained by the new desktop tools. But we see in the chart that talk of the "digital divide" followed desktop publishing's hype cycle. Those two didn't seem to get a lot of time to chat with one another.

The birth of desktop publishing, the rise of personal laser printers, and the rapidly lowering costs of powerful personal computers all converged to lower the barrier to entry into publishing. There's no denying that. There are many stories talking about production times being cut in half, or typesetting costs being cut by up to 90%. PageMaker: Desktop Publishing on the Macintosh, by Kevin Strehlo, noted that traditional typesetting could run up to US$400/page in 1989, about US$1,000/page in 2026 dollars. 90% off ain't 50% bad.

"Cheaper than ever before" doesn't necessarily mean "cheap." In 1985, a Macintosh 512K ($3,195; $9,700 in 2026) + LaserWriter ($6,995; $21,000) + PageMaker v1.2 (w/PostScript printer font support, $495; $1,500) cost over US$30,000, in 2026 money. Even without the LaserWriter, that's over $11,000. I can appreciate the dramatic reduction in costs, but personally I would still be priced out of joining that revolution, even if I "acquired" certain tools through "alternative means."

Everything I read about the impact on publishing seems to be from the point of view of publishing elites, and the CEOs of the companies involved. Brainerd would often recount a story about a church that was able to do print runs of 600,000 units thanks to PageMaker.
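Just to check my own sums on that cost of entry (using the inflation-adjusted 2026 figures quoted above, which are my conversions, not official prices):

```python
# Totaling the 1985 desktop publishing "cost of entry" in 2026 dollars.
setup_2026 = {
    "Macintosh 512K": 9_700,
    "LaserWriter": 21_000,
    "PageMaker 1.2": 1_500,
}
total = sum(setup_2026.values())
without_printer = total - setup_2026["LaserWriter"]
print(f"Full setup: ${total:,}; without the LaserWriter: ${without_printer:,}")
# Full setup: $32,200; without the LaserWriter: $11,200
```

The LaserWriter, notably, was nearly two-thirds of the whole bill, which is why so many early desktop publishers pooled access to one.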
Dan Putnam, Adobe employee #2, called out a risqué lesbian newsletter and a fundamentalist Christian newsletter as examples representing the breadth of materials PostScript helped enable. If we're talking about empowerment and democratization, I don't particularly want to get that information secondhand from corporate execs. Join me then, won't you, on a small audit of desktop publishing's impact on the rest of us, and let's try to get a sense of how the "revolution" was seen by those who fought in the streets.

Sometimes, the revolutionary street fighting was literal. In 1991, Communist Party of the Soviet Union hardliners attempted to wrest control of the country away from Mikhail Gorbachev and newly-elected president Boris Yeltsin. During the coup attempt, Gorbachev was stolen away and newspaper presses were locked down. According to Brainerd's obituary in GeekWire, Aldus PageMaker played a role in defanging the "Gang of Eight," the core hardliners who staged the coup. As an alternative way to get the pro-democracy word out, flyers carrying Yeltsin's message were created in PageMaker (the story goes) and photocopied for mass distribution.

"During the coup in Moscow all the presses had been shut down. Boris Yeltsin commandeered an HP printer, a PC, and a copy machine. There were pictures of Yeltsin surrounded by people with their hands outreached, trying to get copies of documents that were all produced in PageMaker. It's really a powerful image. It made me very proud," Brainerd said in Inside the Publishing Revolution: The Adobe Story, by Pamela Pfiffner.

Brainerd's obituary states that Aldus later ran an ad with the tagline "We helped create a revolution." That ad ran in the... in the... huh, where did that ad run, anyway? Its existence is corroborated by ex-Aldus employee Gabi Clayton in a Facebook post after Brainerd's death. She recalls having a copy by her desk, but doesn't have it any longer.
I looked high and low for the tagline and couldn't find it on archive.org, in Google Books, or anywhere on the internet at large. My best current guess is that it ran in Aldus Magazine, whose digital archives are almost non-existent. 20 years ago, the Computer History Museum did an interview with Brainerd in which he mentioned neither the event nor the ad, a strange omission, in my opinion. At the end of the interview, he's asked if he has materials to donate to the CHM, which he affirmed. I checked the CHM online archive and found nothing, so I reached out to see if Brainerd ever followed through on that donation. CHM responded saying that he had indeed done so and that they would scan the materials at my request. I have no idea if the ad is amongst those items, and am waiting for the results. I will update this article if new information comes to light.

In the meantime, I thought I'd look through print materials associated with the coup, looking for some tell-tale sign of desktop publishing's involvement in the production of revolutionary materials. That would be a quite literal "democratization" artifact; democracy was precisely what they were fighting for! Harvard keeps a small selection of coup-related materials online for perusal; you can check that stuff out here.

I may have found something that matches the story. I cannot say "this was done in PageMaker." However, if translation tools are to be trusted, this is a celebration of the failure of the coup attempt, including a mocking piece about "How not to stage a coup d'etat." If you've ever tried to do manual text wrap, you'll know that what we see in that sample could only be done digitally. Full text justification with hyphenation, and the slightly staggered baselines (probably shifted due to subhead leading), feels very PageMaker, especially since I encountered the same issue in my OMNI project. It also appears to be a photocopied handout, which matches the publishing methodology of the resistance.
It doesn't prove PageMaker, per se, but it definitely tingles my spider senses.

Sometimes it's very easy to see the before and after of desktop publishing on a publication. A typical layout in smaller publications was a literal typewritten page published as-is, like this example from an early issue of Azania Worker. Columns? Bah, who needs stupid columns! (please don't make us manually type out columns!)

Later, Azania Worker explicitly called out their transition to desktop publishing for tightening up the layouts of their anti-apartheid publication. Bob Symes is credited with handling that, and a hallmark of early struggles with digital typesetting tools, an over-trust of "forced justification," is evident. I remember distinctly playing around with kerning and leading to make articles fit into given spaces in the student newspaper. If an article were a few lines too short, that was nothing an increase in font size or leading by 0.1 points couldn't fix. The overly-tight tracking in the example suggests that cutting text was not a consideration. They were determined to fit every important word into the limited space available, evenifitmeantmakingeverythingruntogether.

A desire to join the desktop publishing revolution was expressed across a few publications I peeked through. As I suggested earlier, pricing still shut some groups out of enjoying the new tools of the trade. It would take more time yet for prices to fall enough to open the doors wider and let more people join in the fun. Lesbian Connection struggled to figure out how to afford to give the people what they wanted, nay demanded: COLUMNS!!! The author then lays out the costs and is excited to deliver. Let's look at the next issue and see those beautiful, highly-demanded columns at work.

Oh, well, maybe next issue? I won't drag this gag on any longer. Over the next two years no transition occurred. The reason for this is explained to their column-desiring audience: it was still too expensive.
Even budgeting for a PC over a Mac, and with laser printer costs having been cut in half or more over the years, the savings still weren't enough. This publication continues to this day, and as they're on the web they clearly made the transition to digital production. But when? Online archives of the print edition stop before that happened. I need closure on this story! I reached out to the editors and tried to make a case for helping me learn when the transition occurred. It seems to me that it would have come with a big hullabaloo, something like, "You demanded it for 20 years, so we're proud to bring you columns!" Unfortunately, my journalistic persuasion skills seem to have atrophied, and I didn't get a response. Maybe someday I'll find out how and when their readers received the columnar layouts they so craved, nay deserved.

Zines, "blogs in print form" I suppose I'd call them today, were and still are an interesting subculture of the publishing scene. Unapologetically hand-crafted, sometimes constructed as literal collage on the kitchen floor, topics ranged from personal ramblings to the adventures of a man who wanted to wash dishes in every state. I defy you to tell me that story wouldn't trend on Hacker News today.

In Notes From Underground: Zines & the Politics of Alternative Culture, author Stephen Duncombe noted a tension for zine makers trying to incorporate desktop publishing into their workflows. The editor of William Wants a Doll, Arielle Greenberg, struggled to use desktop publishing "in a way that didn't dehumanize her zine." Lizzard Amazon, editor of Slut Utopia, wrote, "it is not so hard to use pagemaker," but, "i am still going to write all over this thing in pen at the last minute," and apparently she did. The zine ethos is one steeped in anti-establishment, rejecting utterly the trappings of mass produced media. It is supposed to be an antidote, a vaccine against anything that smacks of corporate influence.
The tools of desktop publishing offer a democratization of professional layout, yet the author suggests that very democratization runs counter to the zine-culture ethos. When the tools are democratized, and by extension homogenized, maintaining the expression of authenticity becomes harder, if not impossible. Duncombe concludes that the internet and web publishing, more so than any of the print desktop publishing tools before, actually fulfilled the original promise of democratization. But, at what cost? "In the zine scene we preach the ethics of DIY and democratic creation but the experience of self-publishing on the Internet demonstrates that when everyone begins to express themselves then there isn't the scale or coherence that encourages the formation of an alternative world-view."

Every technology has its naysayers. Some, like the anti-generative-AI crowd, are right, and just, and 100% correct to fight the dumb AI companies and not let them turn everything we love into room-temperature mayonnaise like the flat-out wrong information that keeps turning up in search results when I'm just a guy trying to do his best to inform his readership about ancient publishing practices and the history of those technologies and is it so terrible to want real information and.... Ahem, excuse me. Let's start again.

In the HyperCard article I noted Sheldon Leemon's reactionary stance to all things hyperlinked: "Do we really want to give hypertext to young school children, who already have plenty of distractions?" Similar naysaying naturally accompanied the advent of desktop publishing. Even those who acknowledged the benefits still felt some sense of loss. As the editor of Tradeswomen said, "we don't have nearly as much fun." It is hard to impress upon a digital-native, remote-only workforce just how fun physical production was. The late nights, the mishaps, the heartaches, the triumphs, of a team united around putting an issue to bed, all felt earned.
In the end, when real newspapers hit the newsstands and students and faculty were reading them over lunch, every person on staff could point to something specific in every tangible artifact and state, "I did that."

Before I close, it's important to acknowledge font handling vis-a-vis desktop publishing back in the day. Font management, printing, and on-screen rendering could be a real struggle at times, so it needs at least a little discussion. I will do this by way of confession. Woz forgive me, for I have sinned; I cheated throughout this post. I used... ah, I'm almost too embarrassed to admit this... I used TrueType fonts. Hold your comments until I've made my case! John Warnock, don't pout!

Fonts for the original Macintosh started life as font "suitcases," a special folder which held system resources and a collection of hand-drawn bitmap fonts at various sizes. Susan Kare kept it real. If a font wasn't explicitly drawn at the size you wanted, it would scale to match your desire, which could result in ugly, chunky, pixelated on-screen text. PostScript fonts could, at the very least, print nicely even when the on-screen representations were ugly. PostScript had two commonly-used font types: Type 1 and Type 3. Type 1 was Adobe's crown jewel, the font standard that included what we know today as font hinting , as well as a coveted secret recipe which Adobe refused to share at the time (see the timeline for more details). Type 3 was a more open, but inferior, standard, and didn't include Type 1's secret sauce. For a time, it was the only option for font vendors who didn't want a licensing agreement with Adobe. Put simply, Type 1 fonts looked better in print. The gulf between on-screen representations and printer output was vast, and TrueType promised to fix that. Announced at Seybold 1989, its core selling point was a single font file that could provide both a clean on-screen representation at any size, as well as sharp printer output.
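The technical difference between the two camps' outlines is concrete: TrueType builds glyphs from quadratic Bézier segments (one control point each), while Type 1 uses cubics (two control points). A minimal Python sketch of the two evaluations, using made-up example points rather than anything from an actual font:

```python
# Illustrative sketch: evaluating a quadratic (TrueType-style) and a
# cubic (Type 1-style) Bezier segment at parameter t in [0, 1].
# The control points below are invented for demonstration.

def quad_bezier(p0, p1, p2, t):
    """Quadratic Bezier: one control point, as in TrueType outlines."""
    u = 1 - t
    return tuple(u*u*a + 2*u*t*b + t*t*c for a, b, c in zip(p0, p1, p2))

def cubic_bezier(p0, p1, p2, p3, t):
    """Cubic Bezier: two control points, as in Type 1 outlines."""
    u = 1 - t
    return tuple(u**3*a + 3*u*u*t*b + 3*u*t*t*c + t**3*d
                 for a, b, c, d in zip(p0, p1, p2, p3))

# Both kinds of segment hit their endpoints at t=0 and t=1; they differ
# in how much shape a single segment can express.
print(quad_bezier((0, 0), (1, 2), (2, 0), 0.5))                 # -> (1.0, 1.0)
print(cubic_bezier((0, 0), (0, 2), (2, 2), (2, 0), 0.5))        # -> (1.0, 1.5)
```

A cubic segment can express more shape per segment, and every quadratic can be represented exactly as a cubic but not the reverse, which is part of why conversion between the two formats is lossy in one direction.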
Inside the Publishing Revolution says, "Gates claimed that TrueType's quadratic splines were far superior to PostScript's Bezier curves." Warnock was beside himself, calling it on stage "the biggest bunch of garbage and mumbo jumbo," and, " on the verge of tears , he said, 'What those people are selling you is snake oil!'" Adobe's immediate response was two-fold: one, open up their proprietary Type 1 spec for all to use, license-free; and two, develop Adobe Type Manager , a system control panel that used PostScript Type 1 font definitions to generate crisp, clean on-screen representations. Once more from Inside the Publishing Revolution : "David Lemon recalls the 'manic' pace of (ATM) development (after the Seybold shock), 'They'd look at me and say, "It's not life or death if we get this out. It's only the future of the company."' Working at a breakneck pace, Adobe brought ATM to market at least a year before any TrueType fonts from Apple or Microsoft appeared. 'If we hadn't gotten ATM out then, we would be living in an all-TrueType world now.'" Their beachhead fortified, ATM became the must-have extension for every Macintosh I ever touched; my recollection of those times is that TrueType fonts were rather snubbed by the Mac design community. All of that said, when sending modern fonts back in time onto older Macs, TrueType has proven to be the path of least resistance by far. I feel irrational shame for using TrueType in this project, eschewing ATM. Forgive me for taking the coward's path!

Alright, it's time to make this OMNI dream a reality, and get these ideas out of the computer and onto paper. I'm excited! Everything you see was generated as PDFs by Adobe Acrobat Distiller 3.1 from PostScript generated by Aldus PageMaker 5.0a. I copied those PDFs over, untouched and unedited, and printed them as-is at the convenience store copier. First, I need to explain the chill that ran down my spine when I held those prints in my hands for the first time.
Here was something tangible, something I crafted myself made physically manifest. I have done this hundreds of times in the past, but I'd forgotten the rush. It was a great feeling. I think this makes the case that design work can be done with PageMaker. Of course it can. It was used in the past, so why wouldn't it be able to continue to do what it was built to do? With Acrobat Distiller , we can generate PDFs that print perfectly on modern systems. Done and done.

Would I choose to use it today? No way. The text workflow is too much of a PITA for anything longer than a few pages; I almost can't believe I used to lay out 80-page magazines in it. The Additions are a fumbling mess. Guideline management is bumpy, although an Extensis Addition can smooth that a little. While I love the Control palette, the palettes in general need yet more refinement to become truly useful, time-saving features. Clicking on images and having them automatically pop to the top of the stack, without any control over layering, is one annoyance too many. This is a case where you really can't go home again. I mean, you can, but you're going to wonder, "Were the walls always this greasy? Did the toilet always back up like this?"

PageMaker literally altered the course of my life, steering me from electrical engineering into graphic design. It was fun at the time, being new and exciting, but it offers little today except as an exploration of the opposing forces at work during the "desktop publishing revolution." I am struck by one curiosity, however. I've been working as a professional software engineer for 20 years. With one exception, everything I've built professionally is gone. The companies folded, the apps were discontinued, contracts were terminated, the products the apps promoted were killed, and so on. There are any number of reasons, but they all converge on the same result: my professional digital legacy has been, and will be, erased. Everything I published with PageMaker still exists.
It's physically in the archives at UNC-Charlotte. It's framed on a business owner's wall from when she was featured on the cover of Business Leader Magazine . It's sitting in a box in someone's attic waiting to be rediscovered. It is often said that what goes on the internet is forever. Yet every digital work I produce lives in someone else's infrastructure, subject to someone else's decisions about what is worth keeping. The work I produced on "bird cage liner" remains free to this day, and no popped stock bubble, no digital decay, no coup d'etat, can stop those ideas from propagating, once let loose in the world. Looks like PageMaker had one more lesson to teach me, after all.

Thanks for reading all the way through. I have a reward for your effort. You may have noticed the OMNI font I used in the layouts, Continuum. It's possible the web crawlers have found it by now, but until then you most likely won't find it easily. It is, in fact, my gift to you. Before I started this blog, I built it from scratch in Affinity and FontForge , using OMNI Magazine as the sole source of truth for all shapes, default leading, and kerning pairs. It was just a for-fun project, to learn how fonts are created and to see if I could get it working on the machines of my youth. There's no point to my gatekeeping it any longer; it's time to set it free. You can grab Continuum on my personal GitHub. I accept bug reports and pull requests, so long as they are backed by real, in-print proof that a change is warranted. Be aware, the goal is not to make a font "inspired by" OMNI ; it is to be the OMNI font, full stop. Maybe you can help me get it there. https://github.com/christopherdrum/continuum

PageMaker native files are not compatible with anything that exists these days. However, the PostScript PageMaker generates works fine with Distiller on classic Mac and Ghostscript on modern systems. The resultant PDF files in either case printed perfectly on a Sharp MX-3631DS color copier.
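For anyone wanting to reproduce the modern half of that pipeline: Ghostscript ships a `ps2pdf` wrapper that distills PostScript into PDF, much as Distiller does on the classic Mac side. A sketch, using a trivial generated PostScript file and placeholder filenames rather than the author's actual output:

```shell
# Create a trivial one-page PostScript file to stand in for PageMaker output.
# "layout.ps" / "layout.pdf" are placeholder filenames for illustration.
printf '%%!PS\n/Times-Roman findfont 24 scalefont setfont\n72 720 moveto (OMNI lives) show showpage\n' > layout.ps

# Distill it to PDF with Ghostscript's ps2pdf wrapper.
ps2pdf -dPDFSETTINGS=/prepress layout.ps layout.pdf
```

The `-dPDFSETTINGS=/prepress` preset keeps images at print resolution instead of downsampling them for screen, which matters if the goal is to send the result to a copier.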
There may be a conversion path by opening PageMaker 5 files in PageMaker 7 , then finding a copy of InDesign CS6 or prior . CS6 should be able to open the PM7 document, thereby converting it to InDesign format. It should technically be possible to open that converted file in a more modern copy of InDesign . This path requires access to software I simply don't have, so this is my best educated guess. Affinity 3 could open the PageMaker PDFs as well, but exhibited a text rendering bug that wasn't found in any other PDF viewer, modern or classic, nor in the final print. I have reported it to the developers.

The full setup used for this project:

Basilisk II v1.1 on Windows 11
Mac IIci w/68040 CPU, 64MB RAM
1024 x 768 24-bit color
Macintosh System 7.5.5
StickyClick v1.2
Suitcase 3.0
Adobe Acrobat 3.01
Microsoft Word 5.1a
StuffIt Deluxe 5.0
GraphicConverter 2.2
TTConverter 1.5
Aldus PageMaker 5.0a
The literal copier I used is inexplicably viewable on Google Maps. A gift! A gift comes!
Sharpening the Stone

Emulator improvements

The Basilisk II emulator itself is solid and I don't have any real issues with it, once I had it set up following precisely the Emaculation instructions . Getting the emulator set up this time around was quite frustrating. Some of it was inadvertently a quagmire of my own creation. Some was just easy to overlook. Some was just plain craziness. Unless you really understand Classic Macintosh systems and how they work, I would recommend building a new VM hard drive from scratch for your DTP work. My disk image carried over from the HyperCard article, and I was rather cavalier with my installs on top of that. This caused nothing but pain, including crashing apps, odd PostScript generation, and more. A full reinstall of System 7.5.5 was step one. That gave me a base system, which I backed up as a "pristine" starting point for the future.
Then, installing apps one by one with testing at each phase helped establish pristine "checkpoint" images I could use as starting points for future projects. If you go for a PageMaker 5.0a installation, be absolutely certain to install the "RSRC patch" files. They are easy to overlook, but are absolutely critical. They fixed my PostScript rendering offset bugs. I don't recommend installing Distiller 3.01 on top of 3.0. I did that and something went wrong, resulting in a flaky application. A pristine install of Distiller 3.01 worked great.

Copy and paste is the biggest frustration with Basilisk II on Windows (apparently other platforms don't have this issue). Once installed, PageMaker wouldn't copy anything. I could copy from any other application, and I could paste into every application, including PageMaker . But I couldn't copy anything while in PageMaker . The helpful experts at the Macintosh Garden forums got me straightened out. It seems that on Windows, the system clipboard must be flushed for copy/paste to work properly in Basilisk II . You'll have to do this again and again while using the program, if you jump out of Basilisk II into Windows and back again. Very annoying.

Anton Sten 2 weeks ago

Taste isn't a screenshot

I keep seeing designers share their "taste libraries." Folders full of screenshots. Apps they admire, interfaces that inspired them, UI details they want to remember. It's a lovely habit. I've done versions of it myself. But I've started to wonder if we've confused collecting taste with having it. There's a difference between recognizing that something is good and understanding why it's good. And an even bigger difference between that and knowing what to leave out. Steve Jobs said it better than I could: "People think focus means saying yes to the thing you've got to focus on. But that's not what it means at all. It means saying no to the hundred other good ideas that there are. You have to pick carefully. I'm actually as proud of the things we haven't done as the things I have done. Innovation is saying no to 1,000 things." A screenshot folder is a yes list. Taste — real taste — is mostly no's.

## The constraint was always judgment

Alfred Lin from Sequoia recently wrote something that stuck with me. In [AI Adoption vs. AI Advantage](https://outlierspath.com/2026/03/23/ai-adoption-vs-ai-advantage/), he argues that for two decades, the binding constraints in software were hiring engineers, writing code, and shipping products. Capital flowed around those bottlenecks. Competitive advantage was often just about who could attract talent and move fast. AI is dissolving those constraints. Code gets generated. Prototypes are instant. Iteration is nearly free. Which means the question is no longer "can we build this?" It's "should we?" Lin's point is that when execution constraints disappear, what's left is judgment. The ability to distinguish signal from noise, to say no to good ideas in favor of great ones, to hold conviction when the data is ambiguous. That's what compounds. Clear thinking compounds. Confused thinking unravels. The gap between good judgment and bad judgment doesn't close when the tools get better. It widens.
## What this looks like in practice

Right now, there are people vibe coding their own todo app, adding it to their todo list, and using it to remind themselves to vibe code a new one. That's not a critique — it's genuinely how you learn to build. But it does make Things an interesting thing to look at. If you've used it, you know it feels different from other task managers. It's not just that it looks good — though it does. It's that it feels considered. Every interaction has been thought about. Nothing is there by accident. That didn't happen because the team had a good Dribbble board. It happened because someone said no to hundreds of features that would have made Things more powerful on paper and worse in practice. The craft is visible. But the restraint is what makes the craft matter. That kind of restraint is hard. It requires conviction. You have to believe — without always being able to prove it — that the thing you're not building is the right call.

## Who this is actually about

I want to be careful here, because I'm not talking about people who are new to building. The weekend builders, the indie hackers spinning up their fifth productivity app — they're learning something genuinely valuable. They're developing intuition by doing. That's how it works. What I'm less sure about is experienced professionals — designers, product people — who've started measuring their output by volume. Fifteen apps shipped. Twenty experiments running. A new launch every week. Shipping a lot isn't the same as building well. And in a world where anyone can ship anything, the signal you're sending with volume isn't "I have great judgment." It's "I haven't figured out what I actually want to say yet." I wrote recently about how [AI will happily design the wrong thing for you](https://www.antonsten.com/articles/ai-will-happily-design-the-wrong-thing-for-you/). The tools are neutral. They amplify whatever you point them at. Strong judgment gets faster and more focused.
Weak judgment gets noisier. The tools don't fix the underlying problem. They just make it more visible.

## The harder skill

So what does it actually take to develop judgment? Not a bigger screenshot folder. Not more launches. Not faster iteration for its own sake. It takes slowing down enough to ask whether the thing you're building is worth building at all. Whether the feature you're adding is solving a real problem or just filling a roadmap. Whether the app you're designing needs to exist, or whether you're building it because you can. That question — *should we?* — is harder than it sounds. It requires understanding users well enough to know what they actually need, not just what they say they want. It requires understanding the business well enough to know what moves the needle. It requires enough confidence in your own judgment to say no even when someone is excited about the idea. Taste, in the sense that actually matters, is the accumulation of those decisions. Not the screenshots you've saved. The calls you've made — especially the ones where you chose not to build something. The tools have never been more capable. That's real, and it's exciting. But capability without judgment is just a faster way to build the wrong thing. The ceiling has gone up. That's good news — for people who already know what matters.

matduggan.com 2 weeks ago

I Can't See Apple's Vision

I don't typically write about Apple stuff. It's the most written-about company on earth. Every product launch gets the kind of forensic scrutiny normally reserved for plane crashes and celebrity divorces. Mostly though, I feel like a line cook at a Denny's talking trash about whether the French Laundry has lost their way. I'm back here microwaving a Grand Slam and opining about Thomas Keller's sauce work. The engineers I know personally at Apple are, on average, much more talented than me. They work harder, they do it for decades without a break, and none of them have ever shipped a feature while still wearing pajama pants at 2 PM. It seems insane for someone of my mediocre talent to critique them. It also feels a little dog-pile-y. Apple employees know Tahoe sucks. They know it the way you know your haircut is bad — they don't need strangers on the internet confirming it. And to be fair, there's genuinely great work buried inside Tahoe: the clipboard manager, the automation APIs, a much-improved Spotlight. But visually it's gross, and that matters when your entire brand identity is "we're the ones who care about design." Instead, I want to talk about a bigger problem, and one that I do feel qualified to talk about because I am very guilty of committing this sin: I don't see a cohesive vision for MacOS and WatchOS. This, more than one bad release, seems far worse to me and dangerous for the company. Since this is already 2000 words as a draft I'll save WatchOS for another time. I'm verbose, but even I have limits. Now, to be clear, this isn't across every product . iPadOS has a strong vision, and that team has the strength of its convictions to change approaches. The different stabs at solving the windowing problem on the iPad, making it so that you still have an iPad experience while being able to do multiple things at the same time, are proof of that. iOS has an incredibly strong vision for what the product is and isn't and how the software works with that.
VisionOS and tvOS are less strong, but visionOS is still finding its footing in a brand new world. The Apple TV hardware and software are in a weirdly good position even though nothing has changed about them in what feels like geological time. I've purchased every version of the Apple TV, and with the exception of that black glass remote — the one that felt like it was designed by someone who had never held a remote, or possibly a physical object — everything has been pretty good. I'm still not clear how storage works on the Apple TV and I don't think anybody outside of Apple does either. I'm not even sure Apple knows. But somehow it's fine. But with watchOS and MacOS we have two software stacks that seem to be letting down the great hardware they are installed in. They seem to be evolving in random directions with no clear end goal in mind. I used to be able to see what OS X was aiming for, even if it didn't hit that goal. Now, with two of Apple's platforms, I'm not able to see anything except a desire to come up with something to show as this year's release. When I got my first Mac — an iBook G3 — the experience was like test-driving a Ferrari that someone had fitted with a lawnmower engine. You'd click on the hard drive icon and wait. And wait. And in those few seconds of waiting, you'd think: man, this would be incredible if the hardware could keep up. The software had somewhere it wanted to go. The hardware just couldn't get it there yet. This trend continued for a long time on OS X, where you'd see Apple really pushing the absolute limits of what it could get away with. After the rock-solid stability of 10.4, Apple took a lot of swings with 10.5 and they didn't all land. The first time you opened the Time Machine UI and the entire thing crawled to an almost-crash, you'd think boy, maybe this wasn't quite ready for prime time . But this entire time there wasn't really a question, ever, that there was a vision for what this looked like.
The progression of OS X from the beta onward was this: OS X tried to accommodate you, not the other way around. When you look at these screenshots I'm always surprised how light the touch is. There isn't a lot of OS here to the user. Almost everything is happening behind the scenes and the stuff you do see is pretty obvious. The first time I thought "oh man, they've lost the thread" was Notifications. On iOS, Notifications make sense — you've got apps buried in folders three screens deep, so a unified system for surfacing what's happening is genuinely useful. On macOS, this design makes absolutely no sense at all. You can see your applications. They're right there. In the Dock. Which is also right there. This is the beginning of this feeling of "we aren't sure what we're doing here with the Mac anymore". iOS users like Notifications so maybe you dorks will too? It consumes a huge amount of screen real estate, it was never (and still isn't) clear what should and shouldn't be a notification. Even opening up mine right now it's filled with garbage that doesn't make sense to notify me about. A thing has completed running the thing that I asked it to run? Why would I need to know that? There is also already a clear way to communicate this information to me. The application icon adds an exclamation point or bounces up and down in the dock. With Notifications you end up with just garbage noise taking up your screen for no reason. Maybe worse, it's not even garbage designed with the Mac in mind. It's just like random crap nobody cares about that looks exactly like iOS Notifications. The issue with copying everything from iOS is that it's like copying someone's homework — except they go to a different school, in a different country, studying a different subject. It's not just wrong in the way where you tried and failed. It's wrong in a way that makes everyone who encounters it deeply uncomfortable. The teacher doesn't even know where to begin. They just stare at it. 
For years afterwards it seemed like the purpose of MacOS was just to port iOS features to the Mac years after their launch on iOS. Often these didn't make much sense or hadn't had a lot of effort expended in making them very Mac-y. Like there was clearly a favorite child with iOS, then a sassy middle child with iPadOS and then, like a 1980s sitcom where there was a contract dispute, "another child" you saw every 5th episode run down the stairs in the background with no lines. Me at home would shout at my TV "I knew they didn't kill you off, MacOS!". Now with Tahoe there's clearly some sort of struggle happening inside of the team. And here's what's maddening — buried inside this visual catastrophe, someone at Apple is doing incredible work. Clipboard management has been table stakes in the third-party ecosystem for years. Apple finally added a version that handles 90% of use cases. It's classic Sherlocking: Apple shows up ten years late to the party, brings a decent bottle of wine, and somehow half the guests leave with them. Same with Spotlight. Spotlight hasn't gotten a ton of love in years. Suddenly it's really competing with third-party tools. If you're searching for a file, you can filter it based on where the file is stored. Type the name of a directory, press the Tab key, and then type the name of the file before pressing Enter. This is great! We finally have keyword search for stuff like . Application shortcuts for opening stuff with things like for Firefox is nice. Assign a quick key like “se” to Send Email . Type it in Spotlight, hit enter, and compose your message. This is all classic Apple thinking, which is "how can we make the Mac as good as possible such that you, the user, don't need to download any third-party applications to get a nice experience".
You don't need a word processor; you have a word processor and a spreadsheet application and presentation software and a PDF viewer and a clipboard manager and a system launcher and automation APIs etc etc etc. This is a vision that is consistent throughout the entire system's history: how can we help you do the things you need to do more easily. But the reason why I'm stressed as someone who is pretty invested in the ecosystem is that the visual stuff is so bad. And not just bad, but negligent: we didn't test how it was gonna look under a bunch of situations, so that's now someone else's problem. Whenever I get a Finder sidebar covering folder contents so I have to resize the window every time, or the Dock freaks out and refuses to come back out, it feels like I installed one of those OS X skins for a Linux distro. I buy Apple stuff cause it's nice to look at, and this is horrible to look at. Why is this so big? Why did you cut off the word "Finder" from Force Quit? Everywhere you look there's a million of these papercuts. We have a resolution on our laptop screens that would have made people collapse in 2005; why must we waste all of it on UI elements? Also you can't grab window edges, as shown by the best post ever written here: https://noheger.at/blog/2026/01/11/the-struggle-of-resizing-windows-on-macos-tahoe/ Why is there so much empty space between everything? Why are there six ways to do literally everything? Why did we copy the concept of Control Center from iOS at all if there's very little limit on screen real estate and we could already do this from the menu bar? So we're going to keep the Mac menu bar, but we're going to add a full iPad control system, and then we're going to use the iPad control system to manage the menu bar . I will say the "Start Screen Saver" one makes me laugh because it's a mistake I would make in CSS. The text is too long so the button is giant, but we didn't resize the icon, so it looks crazy.
Now, do we need the same text inside the button as outside of it? No, and that leads me to the other banger. It's pretty clear the two white boxes inside of "Scene or Accessory" were supposed to be text, Scene on the top and then Accessory on the bottom, but SwiftUI couldn't do that so they left the placeholder. Somewhere there is a Jira ticket to come back to this that got trashed. Also, complete aside: has anyone in the entire fucking world ever run Shazam from a Mac? What scenario are we designing for here? I hear a banger at the coffee shop so I hold my MacBook Pro up over my head like John Cusack in Say Anything , hoping it catches enough audio before my arms give out? "Recognize Music" is in my menu bar, taking up space that could be used for literally anything else, on the off chance I need to identify a song using a device that weighs four pounds and has no microphone worth using in a noisy room. If you are going to copy iPadOS's homework, you need to think about it for 30 seconds . So my hope is that the improvement camp wins. That the people who built the better Spotlight and the clipboard manager and the automation APIs are the ones who get to set the direction. Because right now it feels like the best work on macOS is being done in spite of the overall vision, not because of it. Like someone's sneaking vegetables into a toddler's mac and cheese. The good stuff is in there — you just have to eat around a lot of neon orange nonsense to find it. Steve Jobs talked about creative people having to persuade five layers of management to do what they know is right. I don't know how many layers there are now. But I know what it looks like when the creative people are losing that argument, and I know what it looks like when they're winning it. Right now, on macOS, it looks like both are happening at the same time, in the same release, on the same screen. And that's scarier than any one bad design choice. It's Unix, but you never need to know that.
All the power, none of the beard. You get the stability of a server OS without ever having to type into anything. Everything annoying is abstracted away. Drivers? Gone. "Installing" an application? You drag it into a folder. That's it. That's the install. It felt like the computer was meeting you more than halfway — it was practically doing your job for you and then apologizing for not doing it sooner. If it seems like it should work, it works. Double-click a PDF, it opens. Put in a DVD, it plays. Drag an app to the Applications folder and it becomes an application. This sounds obvious now, but in 2003 this was like witchcraft if you were coming from Windows. But it was also serious. It wasn't cluttered with stupid bullshit. It was designed for people who made things — with real font management, color calibration, the works. The OS tried to stay out of your way. Your content was the show; everything else was stagecraft.

David Bushell 3 weeks ago

Top ten Figma betrayals

Figma is the industry standard for painting pretty pictures of websites. It’s where designers spend my designated dev time pushing pixels around one too many artboards. Figma promises to remove the proverbial fence between design and development. In reality it provides the comfort of an ideal viewport that doesn’t exist. I don’t mind Figma (the software), although I prefer Penpot myself. I still dabble in the deceptive arts of web design. Don’t be thinking I’m out here hating on designers. I like to stick my nose inside a Figma file and point out issues before they escalate. Below I cover classic Figma betrayals that I bet you’ve experienced. Betrayals happen when software promises more than it can deliver.

Take a gander at this amazing website design I whipped up in Figma to illustrate the most common betrayals. I told you I was a designer! I’ll evolve this design throughout the post. Figma has deemed 1440×1024 to be “Desktop” resolution so I’ve started there. In this mockup I’ve added a full-width banner of our hero Johnny Business.

I’ve built this website more times than I care to remember. I’ll repeat the same question here that I ask every time I build it: what happens at other viewport sizes? Do I scale the banner proportionally? On wider viewports this is likely to push content out of sight. It might even require scrolling to see the entire image on Johnny’s ultra-wide 8K. The phrase “above the fold” will be spoken in a Teams call; can we avoid that? Do I also set a maximum height on the banner? This is going to decapitate poor Johnny! He paid a lot for that haircut.

What are we doing below the “Desktop” viewport, by the way? Let’s design for the 402×874 resolution Figma calls “iPhone 17” because it was first on the list. Note the absolutely perfect crop of Johnny’s sockless businessing. Okay, next question: how do we move between “mobile” and “desktop”? That’s a very specific focal point. We can’t just change it willy-nilly! Code has rules; logic.
A website must be responsive between all breakpoints. Are we going to use multiple images? At what breakpoint do they swap? Because that perfectly cropped mobile image doesn’t scale up very far.

Hold the phone! A shadow stakeholder has asked for a redesign to “make it pop!” The ultra-wide problem has been solved with a centred fixed-width style. Is that the intention? Does either the banner or header stretch to the edge of the viewport? More importantly, that image and text have no room to move. I’ve only reduced the viewport by 200 pixels and it’s already crashing into Johnny’s face. Are we expecting breakpoints every 100 pixels? — No, wait! Please don’t spend more time designing more breakpoints! Okay, I’ll hold until more breakpoints are designed. Are we extending my development deadline? No. Okay.

As development continues I’ve got more bad news to share. Figma is very happy allowing us to enter arbitrary line breaks for the perfect text fit. That’s not how the web works. One of these options is probably what we’ll see if text is left to break naturally. Yes, we can technically allow for a manual line break. That’s a pain in the content management system, but sure. Text is still forced to wrap on a smaller viewport, then what? Oh, that? Now you want the manual line break to magically disappear? (╯°□°)╯︵ ┻━┻

I lied when I said “top ten” Figma betrayals. The issues above can appear in hundreds of guises across any component. If you’re betrayed once you’ll be hit again and again. Figma is not exactly conducive to responsive web design. Designing more breakpoints often leads to more questions, not fewer. Another betrayal I pull my hair out over is the three-card pattern packed with content. This leads to an immediate breakpoint where one card drops awkwardly below. I dread this because the word “carousel” will be uttered and my sobbing is heard far and wide. Carousels are not a content strategy.
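For what it’s worth, the “multiple images” question from above usually gets answered in code with art direction via the `<picture>` element. A minimal sketch, where the 768px breakpoint and the file names are hypothetical placeholders, not anything from an actual design:

```html
<!-- Art direction: tight mobile crop by default, wide hero above 768px.
     Breakpoint and file names are hypothetical. -->
<picture>
  <source media="(min-width: 768px)" srcset="hero-johnny-wide.jpg">
  <img src="hero-johnny-mobile.jpg" alt="Johnny Business, socklessly businessing"
       style="width: 100%; height: auto; object-fit: cover;">
</picture>
```

The browser handles the swap; a human still has to decide where that breakpoint lives and which crop to commission, which is exactly the conversation the mockup never forces.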
I was once inspecting a Figma file only to witness the enemy cursor drive by and drop several dots underneath an image. The audacity! Figma betrayals are classic waterfall mistakes that are solved by human conversation. Developers need to be part of the design process to ask these questions. Content authors should be involved before and not after a design is complete. You’ll note I never answered the questions above because what might work for my fictional design isn’t universal. On a tangential topic Matthias Ott notes: Think about what actually happens when a designer and an engineer disagree about an interaction pattern. There’s a moment of tension – maybe even frustration. The engineer says it’ll be fragile. The designer says it’s essential for the experience. Neither is wrong, necessarily. But the conversation – if your process allows for it to happen – that back-and-forth where both sides have to articulate why they believe what they believe, is where the design becomes robust and both people gain experience. Not in the Figma file. Not in the pull request. In the friction between two people who care about different things and are forced to find a shared answer. The Shape of Friction - Matthias Ott Figma is not friction-free and that’s fine. We can’t expect any software in the hands of a single person to solve problems alone. Software doesn’t know what questions to ask. Not then with Clippy, not now with Copilot. Humans should talk to one another, not the software. Together we can solve things early the easy way, or later the hard way. One thing that has kept me employed is the ability to identify questions early and not allow Fireworks, Photoshop, Sketch, XD, and now Figma to lead a project astray. Thanks for reading! Follow me on Mastodon and Bluesky . Subscribe to my Blog and Notes or Combined feeds.

flowtwo.io 3 weeks ago

Fundamentals of Software Architecture

A handshake should be firm, but not overpowering. Look the person in the eye; looking away while shaking someone’s hand is a sign of disrespect, and most people will notice that. Also, don’t keep the handshake going too long. Two or three seconds are all you need. — Richards & Ford, Fundamentals of Software Architecture , Ch. 32, para. 87

I swear, I find a lot of value in reading books about software. But I take issue with the length of some of them. When I'm 600 pages into an 800-page technical book, and I'm reading something barely tangential to the book's topic, like detailed instructions on how to shake hands... I get a bit annoyed. I think it's because every author wants to make their book "the definitive reference on X", whatever X is, so they feel the need to include stuff about leadership, soft skills, etc. Technical books like this could be more approachable if they kept to a more concise topic. My two cents.

Anyways, Fundamentals of Software Architecture was written by Mark Richards and Neal Ford. It's a thorough cataloguing of every popular architectural style and their pros/cons. It introduces a lot of terminology, with the goal of defining how to evaluate and explain the architectural qualities of a system—qualities like availability, coupling, fault tolerance, etc. This post is mostly a summary of the architectural topics covered by the book; I've added some personal commentary on system coupling and AI near the end.

According to Richards and Ford, the 3 laws of software architecture are:

1. Everything in software architecture is a trade-off
2. Why is more important than how
3. Most architecture decisions aren’t binary but rather exist on a spectrum between extremes

They added the 3rd law in the book's 2nd edition. It sorta just feels like a different way of phrasing the 1st law, but I think they're trying to highlight that any architectural decision is never "absolute", i.e. most systems don't perfectly align to any one architectural style.
A system might lean towards microservices architecture but have elements of other patterns too, for example. "As I have evolved, so has my understanding of the Three Laws. You cannot be trusted with your own system architecture." — Claude For mostly my own sake, I've briefly summarized each of the architecture styles covered by the book. Just 1 or 2 sentences explaining what it is and when you should use it—I'm aiming for brevity here, like a crib sheet. Pictured: Enterprise Service Java Beans from the Neolithic era. Thought to be a tribute to Sun Microsystems It's important to understand how to define a system's boundaries. In the book, the authors define the concept of an architectural quantum which is the "smallest part of the system that runs independently". The system might be your entire microservice architecture, but if one part of it can function independently of other parts of the system, it forms its own architectural quantum. So how does an architectural quantum run independently if it has to communicate with other parts of the system? The critical part is how the communication happens—whether it's synchronous or asynchronous: The dependency turns them into a single architectural quantum. Asynchronous communication can help detangle architectural quanta because it removes that dynamic dependency — Richards & Ford, Ch. 21, para. 48 If the operation of System A requires information from System B, then it's coupled to System B and they form a single architectural quantum. This means that System A's characteristics are impacted by System B's characteristics. If System A needs to be fast, we must ensure System B is fast, and consistently fast. At my current company, every service is associated with a reliability tier. The service's tier determines many of its operational requirements. For instance, a tier 0 system (the highest tier) needs to be deployed in multiple regions for redundancy. It needs an on-call engineer, clearly defined SLAs, etc. 
But if a tier-0 system needs to retrieve data from a lower-tier system as part of its operation, all of a sudden the lower-tier system needs to be a tier-0 system. They become coupled.

In practice, there's some nuance here. Just because you call another service via HTTP and block the current process waiting for a response doesn't mean the two services are fully coupled. As long as there's fallback functionality that doesn't constitute an error state, they needn't be considered coupled. If your service needs to be fast and the other service isn't reliably fast, you may implement a strict timeout and then fall back to some degraded functionality in the event the request times out. As an example, consider a new user recommendation system being built by your company's ML team. Your tier-0 homepage rendering service can still attempt to retrieve user recommendations from this new system, but as long as you can fall back to some other functionality (like just choosing the user's recently viewed content) we don't need to group that recommendation system in with our service and its strict functional requirements.

The 2nd edition of this book was published in April 2025. So of course, AI was brought up a lot. In general, the authors' stance was that AI is not an effective replacement for human architects—and they didn't seem optimistic that it could ever be. Why?

Because, as we’ve demonstrated in this book, everything in software architecture is a trade-off. LLMs are great for understanding knowledge, but to this day, they still lack the wisdom necessary to make appropriate decisions. That wisdom includes so much context that it’s much faster for the architect to solve a business problem by themselves than to teach an LLM all about the problem and its extended environment and context. The fact that we’ve included eight other intersections to be concerned about should be evidence enough that this is a daunting task. — Richards & Ford, Ch. 33, para. 80

While I agree that the amount of context necessary to properly make architectural decisions is hard to shove into an LLM's context window right now, I don't believe that'll be the case for long. I have a feeling the opinions in this book will become outdated quite soon. Also, despite the authors' insistence that "architecture is the stuff you can’t Google or ask an LLM about", I fully believe that AI tools are an indispensable tool for researching architectural decisions. They can explore the problem domain more completely and much faster than any human could. They can also illuminate trade-offs and nuances you might have missed. The fact that the authors never mentioned this in their statements on AI utility is a major oversight. Every job function in software development, from junior dev to CTO, should be leveraging AI tooling at this point.

Like I mentioned at the start, I found FoSA to be a bit bloated. Also, the book didn't really cover what I was looking for. I wanted a book that described more specific architectural patterns for solving common technical challenges like cache invalidation, database replication, etc. Instead, it focuses exclusively on the overall system layout—how the domain boundaries are divided and what the physical topology looks like. And how to shake someone's hand properly.

I also think the book tried too hard to quantify complex system characteristics. I don't find much use in assigning a 1-to-5-star rating for the "maintainability" of a "microkernel" architecture style (which is 3/5 according to the book)—simply because both the characteristic and the style itself are too vaguely defined to warrant a rating. I'm certain you could build your microkernel system to have poor maintainability OR incredible maintainability. There's too much ambiguity to extract any conclusions from these assessments.
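To make the degraded-fallback idea from the reliability-tier discussion concrete, here's a minimal sketch of the timeout-plus-fallback pattern. Everything here is hypothetical: the function names, the 200 ms budget, and the recommendation data are illustrative, not from the book.

```javascript
// Sketch: race a dependency call against a timeout so a tier-0 service
// degrades gracefully instead of inheriting the dependency's latency.
function withTimeout(promise, ms, fallbackValue) {
  // Timer resolves with a degraded result if the real call is too slow.
  const timer = new Promise((resolve) =>
    setTimeout(() => resolve({ degraded: true, value: fallbackValue }), ms)
  );
  const wrapped = promise.then((value) => ({ degraded: false, value }));
  return Promise.race([wrapped, timer]);
}

// Hypothetical slow recommendation call (resolves after 1 second)...
const slowRecommendations = new Promise((resolve) =>
  setTimeout(() => resolve(["ml-pick-1", "ml-pick-2"]), 1000)
);
// ...and the degraded fallback: the user's recently viewed content.
const recentlyViewed = ["recent-1", "recent-2"];

withTimeout(slowRecommendations, 200, recentlyViewed).then((result) => {
  console.log(result.degraded); // true: the 200 ms budget was exceeded
  console.log(result.value);    // the fallback list, not the ML picks
});
```

Because the timeout path is a normal (non-error) result, the homepage service's characteristics no longer depend on the recommendation service's, which is the book's criterion for keeping them in separate architectural quanta.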
Still, in general, FoSA is an interesting book that tackles one of the more complex and less formally researched areas of software development. Architectural decisions are the hardest to make due to their consequences and trade-offs, so knowing the patterns that have worked for others is a great starting point.

Layered architecture

What is it: Technically partitioned: presentation, business, persistence, and database layers, for example. Typically a monolithic application with a monolithic database. Very common, especially in legacy systems.
When to use it: Small, low-budget applications. But it can scale surprisingly well.

Modular monolith

What is it: Another monolithic style, i.e. a singularly deployed application. The system is divided by business domain instead of technical functionality. Domains are called "modules". The goal is to minimize communication between modules as much as possible.
When to use it: If teams are domain-focused and using domain-driven development, it's a good starting architecture. Can later migrate to a distributed architecture more easily.

Pipeline architecture

What is it: Topology consists of pipes and filters. Filters perform business logic; pipes coordinate and transfer data. Systems have a unidirectional data flow; it can be monolithic or distributed.
When to use it: Suitable for systems with one-way, ordered processing steps. ETL pipelines, etc.

Microkernel architecture

What is it: Topology consists of a core system (the "microkernel") and plug-ins. Plug-ins are optional and provide extensible functionality to the system. Traditionally monolithic with a single database. Plug-ins shouldn't access the database directly.
When to use it: Installable desktop applications, or domains that address a wide market and require many custom rules and functionalities for each customer.
Service-based architecture

What is it: Distributed architecture with a separately deployed user interface, coarse-grained domain-centric remote services, and a monolithic database. Basically microservices but with coarser service boundaries and a single shared database, or just a few.
When to use it: When the system is of significant complexity and serves a wide enough user base that the benefits of a distributed architecture outweigh the costs. Can be a stepping stone towards other distributed architectures.

Event-driven architecture

What is it: Distributed system using mostly asynchronous communication. Consists of event publishers, brokers, and processors (the services). The central communication unit is an event, as opposed to a request.
When to use it: Systems that require flexible, dynamic processing and need to scale to lots of concurrent users. Applications where eventual consistency is tolerable and immediate acknowledgement isn't needed.

Space-based architecture

What is it: A complicated distributed infrastructure of scalable processing units that are supported by replicated and/or distributed caches. There is a shared "data grid" that handles data syncing between units and reading/writing from the database. This removes the database bottleneck from the system—database access isn't needed for processing requests.
When to use it: Applications with very high concurrent user volume and high traffic variability, AND a low need for data consistency between users. Race conditions and data conflicts will be unavoidable in this system.

Service-oriented architecture (SOA)

What is it: A legacy architectural style that uses abstract service layers and operations orchestrated by a shared "enterprise service bus" which knows which services to call to complete operations. Uses generic components to increase code re-use.
When to use it: If you've taken a time machine back to the 90s and you have to write enterprise software.

Microservices architecture

What is it: Domain-driven architecture that enforces strict API boundaries and minimizes coupling between domains. Duplication is favoured over re-use where possible. Each service should "do one thing" and ideally have its own database.
When to use it: Systems that are highly modular and have high enough load to justify the scalability and performance benefits compared to the development and operational costs.

Jim Nielsen 3 weeks ago

Re: People Are Not Friction

Dave Rupert puts words to the feeling in the air: the unspoken promise of AI is that you can automate away all the tasks and people who stand in your way. Sometimes I feel like there’s a palpable tension in the air as if we’re waiting to see whether AI will replace designers or engineers first. Designers empowered by AI might feel those pesky nay-saying, opinionated engineers aren’t needed anymore. Engineers empowered with AI might feel like AI creates designs that are good enough for most situations. Backend engineers feel like frontend engineering is a solved problem. Frontend engineers know scaffolding a CRUD app or an entire backend API is simple fodder for the agent. Meanwhile, management cackles in their leather chairs saying “Let them fight…”

It reminds me of something Paul Ford said: The most brutal fact of life is that the discipline you love and care for is utterly irrelevant without the other disciplines that you tend to despise. Ah yes, that age-old mindset where you believe your discipline is the only one that really matters. Paradoxically, the promise of AI to every discipline is that it will help bypass the tedious-but-barely-necessary tasks (and people) of the other pesky disciplines. AI whispers in our ears: “everyone else’s job is easy except yours”.

But people matter. They always have. Interacting with each other is the whole point! I look forward to a future where, hopefully, decision makers realize: “Shit! The best products come from teams of people across various disciplines who know how to work with each other, instead of trying to obviate each other.”

Reply via: Email · Mastodon · Bluesky

Kev Quirk 3 weeks ago

Another ANOTHER New Lick of Paint

So it turns out I didn't like the mustard yellow and steel blue design that I created a couple of weeks ago. It just didn't sit well with me, and if I look back over my design history, the designs that have stuck over the years are invariably grey with a splash of colour. Problem was, I didn't really know how I was going to redesign the site.

Then, one day, I was talking with Sven via email and I visited his blog (also running Pure Blog, for the record 🎉), and I immediately knew that was the kind of design I was looking for. Its simplicity is just lovely, and so easy to read. So I set about making my own version of Sven's lovely design. I didn't want it to be exactly the same as his, but I also didn't think my design would turn out quite as close to his as it did - I suppose that goes to show how much I like his site. :-) I've spoken to Sven and he's good with me effectively copying his design.

For posterity (as I'm likely to change it again in the future) here's what the design currently looks like:

I'm still not 100% sold on the font (but it is growing on me), and I'm not sure about having yellow in one place but blue everywhere else. So I may change a couple of things subtly. Having said all that, overall I'm the happiest with the design I've been since moving to Pure Blog. Finally, I'd like to thank Sven for allowing me to steal his wonderful design.

What do you guys think? Leave a comment below, or reply by email.

Thanks for reading this post via RSS. RSS is ace, and so are you. ❤️ You can reply to this post by email, or leave a comment.
