JSLegendDev 2 days ago

Why Text in Vampire Survivors Used to Look Like This

In 2022, Vampire Survivors, a game where you destroy hordes of enemies just by moving around, was released. It became extremely popular and spawned a whole genre by itself.

At the time, I was working as a software developer for a company whose product was a complex web application; in other words, I was a web developer. I wasn't really interested in game development, as the working conditions and pay were known to be less than stellar. However, I quickly realized that the tools used to make web applications could also be used to develop games. Since I could make games with the skills and tooling I was already familiar with, I decided to try it out as a hobby to see if I'd enjoy it.

As time passed, I got interested in the various ways one could use a web developer's skill set to make games. After some research, I found out that Vampire Survivors was originally made in JavaScript (the programming language natively supported in all web browsers) using a game framework called Phaser. A game framework is essentially a game engine without the UI. While not every game framework is fully featured, Phaser certainly was: it packed a lot of features out of the box, so you didn't have to reinvent the wheel.

I wanted to give Phaser a try after seeing high-profile games made with it, like Vampire Survivors and PokéRogue (a Pokémon roguelike fan game). However, as I started learning the framework, I quickly gave up because the documentation was confusing. You also needed to write a lot more code to achieve the same results as the alternative I was already using, called KAPLAY. I therefore stuck with KAPLAY and left Phaser behind for a while.

As I got more comfortable in my game dev journey, I wanted to make a small 2D RPG with a combat system similar to Undertale's. In my game, you would avoid projectiles and attack by stepping on attack zones, and the player would be expected to learn to dodge various attack patterns.
I also wanted to publish the game on Steam. Prior to this, all the games I made were mostly playable on the web.

— SPONSORED SEGMENT —

Speaking of publishing a game on Steam: since games made in JavaScript run in the browser, you might be wondering how it's possible to release them on Steam. For this purpose, most JavaScript-based games are packaged as downloadable desktop apps via technologies like Electron or NW.js. The packaging process consists of bundling a full Chromium browser alongside the game, which results in bloated app sizes. Technologies like Tauri offer a different approach: instead of packaging a full browser alongside your app, they use the web engine already installed on your operating system, which results in leaner builds. That said, regardless of the technology used, you'll have to spend a considerable amount of time setting it up before being able to export your game for Windows, Mac, and Linux. In addition, extra work is required to integrate the Steamworks API, which allows implementing, among other things, Steam achievements and cloud saves, features players have come to expect.

If only there were a tool that made both the packaging process and the Steamworks API integration seamless. Fortunately, that tool exists, and it's today's sponsor: GemShell. GemShell allows you to package your JavaScript-based games for Windows, Mac, and Linux in what amounts to a single click. It:

- Produces tiny executables with near-instant startup, avoiding the Chromium bloat by using the system's WebView.
- Provides full access to Steamworks directly via an intuitive JavaScript API.
- Has built-in asset encryption to protect your code.
- Offers native capabilities allowing you to access the host's file system.

For more info, visit 👉 https://gemshell.dev/
To get the tool, visit 👉 https://l0om.itch.io/gemshell

Have a tool or product you want featured in a sponsored segment?
Contact me at [email protected]

I got started on my RPG project and things progressed pretty smoothly until I ran into performance issues. By this point, I'd been developing the game in secret and wasn't planning to reveal it yet, but I realized I could gather valuable performance feedback by sharing my progress publicly, so I decided to do just that. It turned out that, while KAPLAY was easy to learn and build games in, it unfortunately wasn't performant enough: FPS would tank in the bullet hell sections of my game. After all kinds of optimizations, I got the frame rate to be good enough, but I didn't feel confident it would stay that way as development continued. I initially thought of moving forward regardless but quickly changed my mind, thinking of all the potential negative reviews I would get on Steam for poor performance.

This led me to halt my game's development. I needed to learn a more performant game framework or engine, which unfortunately meant restarting my game from scratch. At least I could reuse the same assets. Among the options I considered were Phaser and the Godot game engine. Since my game was written in JavaScript, I thought trying Phaser again would save me time: I wouldn't need to learn a different programming language and development environment, and I could potentially reuse code I had already written. Phaser was also the most mature and battle-tested option in the JavaScript game dev space, as well as one of the most performant. Although I didn't like how verbose Phaser code tends to be, in my situation the perceived advantages outweighed the cons.

However, one thing that irked me about Phaser, looking at the many games showcased on its official website, was that all of them had blurry text or text with weird artifacts. This was also true of Vampire Survivors and PokéRogue, if you looked closely enough.
Text in Vampire Survivors. Text in PokéRogue: note the weird outline around the text (arrow placed by myself; close-up).

While some may consider this a small detail, it annoyed me to no end. I almost gave up on Phaser again and even started looking for alternatives. What kept me going was that, in my previous attempt at learning Phaser, I had started a remake of my infinite-runner Sonic fan game, which was originally made with KAPLAY. I remembered that I had pushed the project to GitHub and abandoned it. Pulling the project again, I noticed that I had already made significant progress and that the font I used didn't produce any weird artifacts. I thought I could get around the artifact problem by simply selecting fonts carefully. So I continued working on the project, learning Phaser quickly in the process.

There's something to be said about how quickly you can learn a new technology by rebuilding a project you've already created. Since you already know exactly what the end result should look like, you can focus on understanding the key concepts of the technology you want to learn. Because each step in rebuilding the project is concrete, you know exactly what to search for. It's then just a matter of translating between how things are done in the technology you already know vs. the one you're trying to learn.

The rebuild of my Sonic fan game was nearing completion when I needed to display text elsewhere in the game at a different font size. To my dismay, when rendering the font at a smaller size, the artifact problem, which I thought was gone (at least with the font I was using), reared its ugly head again. This was a catastrophe. I was already knee-deep in Phaser and didn't want to switch again. None of the Phaser games I knew of had clean text, so I assumed it was something inherent to the framework. I couldn't believe I had forgotten to check how text rendered at different sizes before committing to Phaser.
In hindsight, it was a pretty stupid move, and I felt like I had wasted a lot of time for little benefit.

Artifacts present in my Sonic game.

At this point, the proper thing to do would have been to cut my losses and move on to something else. However, afflicted by the sunk cost fallacy, I thought there was no way I would give up now, not after having spent this much time learning Phaser. I needed to fix this issue, come hell or high water.

After some research, I figured out that the reason my text had artifacts was my use of Phaser's pixel art option. Since I was making a game with pixel art graphics, this option was needed so my sprites would scale without becoming blurry. Phaser achieves this by applying a filtering method called Nearest-Neighbor. The issue was that it doesn't apply this only to sprites; it also affects fonts, causing artifacts to appear around the text, since fonts don't scale the same way.

Since Vampire Survivors also used pixel art graphics, it now made sense why its text used to look so weird. The developer probably just enabled the pixel art option and called it a day. Incidentally, the font used in the game is Phaser's default font, Courier, which is also available on many electronic devices by default. It seems the dev really didn't care much about this aspect of the game's presentation. Now that the game has moved to Unity, text is rendered properly, though still in Courier, as it has become part of the game's visual style.

However, disabling the pixel art option wouldn't solve the issue: text would still render blurry by default, and the sprites would now be blurry as well.

How the game looks without the pixel art option set to true.

Fortunately, there was a fix for the font blurriness. The text method allowed setting the resolution of the rendered text, and setting the resolution to 4 made it render clearly.
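For readers unfamiliar with these options, here's a rough sketch of the two knobs in Phaser 3-style code. This is illustrative only, not a drop-in config; check the Phaser documentation for the exact option names:

```javascript
// Illustrative Phaser 3-style setup (runs in a browser with Phaser loaded).
const config = {
  type: Phaser.AUTO,
  width: 640,
  height: 360,
  pixelArt: true, // applies Nearest-Neighbor filtering globally: sprites
                  // stay crisp, but text picks up the artifacts described above
  scene: { create },
};

function create() {
  // The text style accepts a resolution; rendering at 4x sharpens the text
  // when pixelArt is off, but doesn't help while pixelArt is on.
  this.add.text(16, 16, 'Score: 0', { fontFamily: 'Courier', resolution: 4 });
}

new Phaser.Game(config);
```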
Still, this didn't help in the end, because I still needed the pixel art option for my sprites.

How the game looks without the pixel art option + text resolution set to 4.

A potential solution would have been to use a bitmap font, also known as a sprite font. As the name implies, instead of a .ttf font, which most people are accustomed to, a sprite font is stored as an image containing a sprite for each letter. This lets the font render properly, scaling like any other sprite when the pixel art option is enabled. That said, this solution was a non-starter because each character is a fixed image: I would lose the flexibility of a .ttf font, which allows text to be set to italic, bold, etc. I didn't want to give that up, nor deal with the hassle of converting my existing .ttf font.

At this point, I couldn't believe how much effort was needed for something as simple as rendering text properly. In frustration, I opened the Godot game engine to see how it rendered text, just to compare with what I currently had. As expected, the font rendered nicely automatically. I was tired of doing this much work for something I would've gotten for free in Godot; this couldn't go on.

How text is rendered in Godot.

I needed a break, so I stepped away from the computer, and the next day I had an epiphany: instead of using the pixel art option in Phaser, what if I applied the Nearest-Neighbor filtering method to each sprite individually? I looked it up, and indeed, it was possible. Using this approach in conjunction with increasing the text resolution to 4, voilà: I had both non-blurry pixel art and text that rendered clearly, without weird artifacts.

How the game looks with each sprite set to Nearest filtering individually + text resolution set to 4.

My problem was fixed, but at what cost? I had wasted an enormous amount of time on something that comes for free in a game engine like Godot.
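The per-sprite approach looks roughly like this in Phaser 3-style code. The texture keys here are hypothetical, and this is a sketch of the idea rather than my exact code; check the Phaser docs for the precise API:

```javascript
// Leave pixelArt off in the game config so text renders cleanly at a
// higher resolution, then opt each pixel-art texture into Nearest-Neighbor
// filtering individually.
function create() {
  for (const key of ['player', 'rings', 'tiles']) {
    this.textures.get(key).setFilter(Phaser.Textures.FilterMode.NEAREST);
  }
  this.add.sprite(100, 100, 'player');                  // crisp pixel art
  this.add.text(16, 16, 'Ring Run', { resolution: 4 }); // clean text
}
```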
I started to consider that going with Phaser had probably been a huge blunder. Then, seeing how quickly I learned Phaser by remaking one of my previous projects, it hit me: what if I tried learning Godot the same way? Without further delay, I jumped into Godot. As expected, it was very easy to learn, since I knew exactly what to search for given that my project's requirements were crystal clear. It was only a matter of looking up how each piece of functionality I needed was done in Godot. GDScript, the engine's programming language, was also very easy to pick up and felt like Python, which I already had experience with.

Godot version of my game.

After completing the project, I had a better view of Godot. Things were intuitive, the code was not verbose, and it was an all-around nice experience. Before trying the engine, I had the preconception that it no longer supported web export well due to the transition from version 3 to 4. I'm happy to report that this is not the case: I've been using version 4.5, and exporting to the web is just as straightforward as exporting to desktop.

Godot web export setup.

In conclusion, I think Godot is the right choice for my RPG project. I also want to gain mastery of the engine so I can eventually experiment with a game that uses 3D models for environments and props but 2D sprites for characters; Godot makes developing this kind of game far less daunting. This style has recently been popularized by Octopath Traveler under the term HD-2D, although my real inspiration comes from the DS-era Pokémon games, which used a similar aesthetic without the over-the-top post-processing effects seen in Octopath Traveler.

Screenshot from Octopath Traveler. Screenshot from Pokémon Diamond, released on the Nintendo DS.

That said, I'm now presented with two choices:

Make multiple tiny games in Godot to become more proficient before restarting the development of my game.
Jump into development headfirst and learn what I need along the way, just like I did for the Sonic project.

I'm still thinking about it, but I'd appreciate your input. Anyway, if you want to try both the Phaser and the Godot versions of my Sonic game, they're both available to play on the web. Here are the links:

Phaser 4 version: https://jslegend.itch.io/sonic-ring-run-phaser-4
Godot 4 version: https://jslegend.itch.io/sonic-ring-run-godot-version

I hope you enjoyed my little game dev adventure. If you're curious about the small RPG I'm working on, feel free to read my previous post on the topic. If you're interested in game development or want to keep up with updates regarding my small RPG, I recommend subscribing so you don't miss future posts and updates.

Subscribe now


The Durable Function Tree - Part 1

In my last post I wrote about why and where determinism is needed in durable execution (DE). In this post I'm going to explore how workflows can be formed from trees of durable function calls, based on durable promises and continuations.

Here's how I'll approach this:

- Building blocks: Start with promises and continuations and how they work in traditional programming.
- Making them durable: How promises and continuations are made durable.
- The durable function tree: How these pieces combine to create hierarchical workflows with nested fault boundaries.
- Function trees in practice: A look at Temporal, Restate, Resonate and DBOS.
- Responsibility boundaries: How function trees fit into my Coordinated Progress model and its responsibility boundaries.
- Value-add: What value does durable execution actually provide?
- Architecture discussion: Where function trees sit alongside event-driven choreography, and when to use each.

At their core, most durable execution frameworks organize work as hierarchical trees of function calls. A root function invokes child functions, which may invoke their own children, forming a tree. In some frameworks, such as Temporal, the workflow task is the parent function and each activity is a leaf function in a two-level tree. Other frameworks support arbitrary function trees where each function returns a durable promise to its caller. When a child function completes, its promise resolves, allowing the parent to resume. The durable execution engine manages this dance of invoking functions, caching results, handling retries, and suspending functions that are waiting on remote work. I'll refer to this pattern as the durable function tree, though it manifests differently across frameworks.

In this series, I use the term side effect to mean any operation whose result depends on the external world rather than the function's explicit inputs.
That includes the obvious mutations, such as writing to a database or sending an email, but also non-mutating operations whose results are not guaranteed to be the same across re-executions (such as retrieving a record from a database). Even though these operations look like pure reads, they are logically side effects because they break determinism (ah yes, the curse of determinism): the value you obtain today may differ from the value you obtain when the function is retried tomorrow. So in these posts, side effect means: anything external that must be recorded (and replayed) because it cannot be deterministically re-executed.

Promises and futures are programming language constructs that act as handles, or placeholders, for a future result. They are coordination primitives. The promise and the future are closely related concepts:

- A promise is a write-once container of a value, where the writer sets the value now or at some point in the future. Setting the value resolves the promise.
- A future is a read-only interface to the promise, so the bearer can only check whether it has been resolved.

Fig 1. The promise/future as a container for a future value.

As a container for a future value, the bearer can await the promise/future, which will block until it has been resolved. While technically distinct (a promise is writable, a future is read-only), most languages and frameworks blur this distinction. For simplicity, I'll use the term "promise" throughout.

Here's the basic pattern: the function creates a promise, launches asynchronous work that will eventually resolve it, and returns immediately. The caller can await the promise right away or continue with other work.

Developers are generally comfortable with functions returning promises: invoke a function, get a handle, await its result. Usually we're waiting for some IO to complete or a call to an API.
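The original pseudocode block for this pattern didn't survive extraction; here's a minimal JavaScript sketch of the same idea (function names are illustrative):

```javascript
// A function that creates a promise, kicks off asynchronous work that
// will eventually resolve it, and returns the handle immediately.
function fetchGreeting() {
  return new Promise((resolve) => {
    // Simulated asynchronous work (e.g. IO or an API call).
    setTimeout(() => resolve('hello'), 10);
  });
}

async function main() {
  const pending = fetchGreeting(); // get the handle right away
  // ...the caller could do other work here before awaiting...
  const value = await pending;     // suspends until the promise resolves
  return value;
}
```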
In fact, when a function executes, it might be the root of a tree of function calls, each passing back promises/futures to its caller, forming a promise chain.

Promises and continuations are related but distinct concepts:

- A promise is a synchronization primitive for a value that doesn't yet exist.
- A continuation is a control-flow primitive representing what the program should do once that value exists.

In JavaScript-style APIs, continuations appear explicitly in then, catch, and finally. Modern async/await syntax hides this continuation-passing behind synchronous-looking code: when execution hits await, the current function suspends and everything after the await becomes the continuation, i.e. the code that will resume once the promise resolves.

A durable promise is a promise that survives process crashes, machine failures, and even migration to different servers. We can model it as a durable write-once register (WOR) with a unique, deterministic identity. The key properties:

- Deterministic identity: The promise ID is derived deterministically from the execution context (parent function ID, position in code, or explicitly defined by the developer).
- Write-once semantics: It can only be resolved once.
- Durable storage: Both the creation and the resolution are recorded persistently.
- Globally accessible: Any service that knows the promise ID can resolve it or await it.

Durable execution engines (DEEs) generally implement this logical WOR by recording two entries in the function's execution history: one when the promise is created, another when it's resolved. This history is persisted and used to reconstruct state during replay. There are also additional concerns beyond creation and resolution, such as timeouts and cancellation of the promise.

When you write a durable call such as get_customer(231), behind the scenes the framework SDK:

- Checks if a durable promise for get_customer(231) already exists.
- If resolved: returns the stored value immediately.
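The check-then-execute logic can be sketched as a toy in-memory version. A real durable execution engine persists this history to storage so it survives crashes; here a Map stands in for the durable store, and the names (`durableStep`, `history`) are hypothetical, not any specific framework's API:

```javascript
// Toy in-memory "durable" promise store keyed by deterministic step ID.
const history = new Map();

async function durableStep(id, work) {
  if (history.has(id)) {
    // Promise already resolved: return the memoized result, skip the work.
    return history.get(id);
  }
  const result = await work(); // execute (or re-execute) the side effect
  history.set(id, result);     // record the resolution "durably"
  return result;
}
```

On replay, `await durableStep('get_customer:231', () => getCustomer(231))` would return the stored value instead of re-running the call.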
- If unresolved or doesn't exist: executes (or re-executes) the work.

Traditional promises suspend execution by capturing the call stack and heap state; everything is still running in a single process. Durable execution engines typically don't do this, as capturing and persisting arbitrary program state is complex and fragile. Instead, they implement continuations through replay and memoization:

1. The function executes from the top.
2. Each await checks if its durable promise is already resolved.
3. If yes: use the stored result and continue (this is fast, it's just a lookup).
4. If no: execute the work, resolve the promise, record the result.
5. On failure: restart from step 1.

Consider this example. First execution:

- Executes get_customer, resolves Promise 1, stores result.
- Executes check_inventory, resolves Promise 2, stores result.
- Starts charge_customer, crashes mid-execution.

Second execution (after crash):

- Re-runs from the top.
- get_customer: Promise 1 already resolved → returns stored result instantly.
- check_inventory: Promise 2 already resolved → returns stored result instantly.
- charge_customer: Promise 3 unresolved → executes the work.
- Completes successfully.

This is why determinism matters (from the previous post). The function must take the same code path on replay to encounter the same promises in the same order. If control flow were non-deterministic, replayed execution might skip a promise or try to await a different promise entirely, breaking the memoization.

Let's now introduce the durable function tree. Durable functions can call other durable functions, creating trees of execution. Each function invocation returns a durable promise to the caller.

Fig 2. A tree of function calls, returning durable promises. Execution flows down the tree; promise resolution flows back up.

This produces a tree of function calls, where each function is a control flow which executes various side effects.
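The replay-and-memoization loop described above can be sketched in Python. This is an illustrative toy, not any real DEE SDK: the history dict stands in for durable storage, and the invented durable helper plays the role of awaiting a durable promise.

```python
history = {}  # step id -> stored result; stands in for durable storage
calls = []    # records which steps actually executed (for illustration)

def durable(step_id, work):
    """If the durable promise is already resolved, return the stored
    result; otherwise execute the work, record it, and return it."""
    if step_id in history:      # promise already resolved: just a lookup
        return history[step_id]
    result = work()             # execute the side effect
    history[step_id] = result   # resolve and persist the promise
    calls.append(step_id)
    return result

class Crash(Exception):
    pass

def process_order(crash_on_charge):
    customer = durable("get_customer", lambda: {"id": 231})
    in_stock = durable("check_inventory", lambda: True)
    if crash_on_charge:
        raise Crash("process died mid-charge")  # simulate the crash
    payment = durable("charge_customer", lambda: "charged")
    return customer, in_stock, payment

# First execution: resolves two promises, then crashes.
try:
    process_order(crash_on_charge=True)
except Crash:
    pass

# Second execution (replay): the first two steps hit stored results,
# so only charge_customer actually runs.
result = process_order(crash_on_charge=False)
print(calls)   # prints ['get_customer', 'check_inventory', 'charge_customer']
print(result)  # prints ({'id': 231}, True, 'charged')
```

Note how the sketch also shows why determinism matters: the replay must encounter the same step IDs in the same order for the lookups to line up.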
Side effects can be run from the local context or from a reliable remote context (such as another durable function), and the difference matters.

Local-context side effects run within the function's execution context:

- Database queries
- S3 operations
- HTTP calls to external APIs
- Local computations with side effects

Local-context side effects have these characteristics:

- They execute synchronously (even if using async syntax, the result is received by the same context).
- They cannot be retried independently (only by replaying the parent function).
- They require the function to keep running (e.g., maintaining a TCP connection for a database response).

Remote-context side effects run in a separate reliable context:

- Another durable function.
- A durable timer (managed by the DEE).
- Work queued for external processing, with an attached durable promise for the third party to resolve.

Remote-context side effects behave differently:

- They can be retried independently of the caller.
- They continue progressing even if the caller crashes or suspends.
- The caller awaits a promise, not a direct response. It is asynchronous: the caller context that receives the result may be a re-execution running on a different server, hours, days or months later.

The distinction between local and remote matters because remote-context side effects create natural suspension points, which become important for durable function trees.

Let's use the tree from fig 2. It has a mix of local-context side effects (such as db commands and HTTP calls) and remote-context side effects, aka calls to other durable functions (or timers). When a function is waiting only on promises from remote side effects, it can be suspended (meaning terminated, with all execution state discarded). The function doesn't need to sit in memory burning resources while waiting hours or days for remote work to complete.

Fig 3.
Our durable function tree seen as a tree with local-context and remote-context side effects.

Let's imagine that the payment provider is down for two hours, so func3 cannot complete. The execution flow of the tree:

- Func1 runs: executes getCustomer (local work, cannot suspend here), then invokes func2 and receives a durable promise. There is no other local work to run right now; it is only waiting on remote-context side effects. Func1 suspends: completely terminated, no resources held.
- Func2 runs: executes checkInventory (local work, cannot suspend here), then invokes func3 and func4, receiving durable promises. There is no other local work to run right now; it is only waiting on remote-context side effects. Func2 suspends: completely terminated, no resources held.
- Func3 runs (concurrently with func4): the payment provider is down, so the payment fails. Func3 is retried repeatedly by the DEE. Two hours later, func3 completes and resolves its promise.
- Func4 runs (concurrently with func3): executes uploadInvoice (local work, cannot suspend here), executes updateOrder (local work, cannot suspend here), and resolves its promise.
- Func2 resumes, re-executed from the top by the DEE: checkInventory already resolved → instant return; func3 already resolved → instant return; func4 already resolved → instant return. It resolves its promise to func1.
- Func1 resumes, re-executed from the top by the DEE: getCustomer already resolved → instant return; func2 already resolved → instant return with result. It continues to logAudit (local work) and completes.

Without suspension, either:

- The whole tree would need to be re-executed from the top repeatedly until func3 completes after two hours.
- Or each function in the tree, from func3 and up, would need to retry independently every few minutes for those two hours the payment provider is down, just to check if their child promises have been resolved.
With function suspension, we avoid the need to repeatedly retry over long time periods and only resume a function once its child promise(s) have been resolved, all the while consuming zero resources while waiting.

Local side effects don't allow suspension because the function must remain running for the side effect to complete. You can't suspend while waiting for a database response: the TCP connection would be lost and the response would never arrive. The same goes for API calls that are not directly managed by the durable execution engine; these are treated like any other locally-run side effect.

What makes this durable function tree structure particularly powerful for fault tolerance is that each node can fail, retry, and recover independently without affecting its siblings or ancestors. If func3 crashes, only func3 needs to retry:

- func2 remains suspended.
- func4's completed work is preserved.
- func1, also suspended, doesn't even know a failure occurred.

The tree structure creates natural fault boundaries: failures are contained to a single branch and don't cascade upward until that branch exhausts its retries or reaches a defined timeout. This means a complex workflow with dozens of steps can have a single step fail and retry hundreds of times without forcing the entire workflow to restart from scratch. Portions of the tree can remain suspended indefinitely, until a dependent promise allows resumption of the parent function.

Different durable execution engines make different choices about tree depth and suspension points. Temporal uses a two-layer model where workflows orchestrate activities. The workflow is the root function (run as a workflow task) and each activity is a leaf function (each run as a separate activity task). Each activity is considered a single side effect. Child workflows add depth to the tree, as a parent workflow can trigger and await the result of the child.

Fig 4. Temporal's two-layer workflow→activity model.
Because each activity is a separately scheduled task that could run on any worker, activities are remote-context side effects from the workflow task's perspective, which allows the workflow task to be suspended. In fact, if a workflow has three activities to execute, then the workflow will be executed across four workflow tasks in order to complete (as the first three workflow tasks end up suspending on an activity invocation).

Fig 5. Workers poll Temporal Server task queues for tasks, and then execute those tasks. Activities are invoked via commands, which Temporal Server derives events and tasks from.

Even when an activity fails, Temporal re-executes the parent workflow from the top, which re-encounters the failed activity. In Temporal, the workflow task, run on workers, drives forward progress. If an activity needs to be scheduled again, that is driven from the workflow task. In turn, Temporal detects the need to reschedule a workflow task when an activity times out (rather than detecting the error directly). Temporal is a push/pull model where:

- Workers pull tasks (workflow/activity) from task queues in Temporal Server.
- Workers drive forward progress by pushing (sending) commands to Temporal Server (which in turn leads to the server creating more tasks to be pulled).

Restate supports arbitrary tree depth: functions calling functions calling functions. Each function execution can progress through multiple side effects before suspending when awaiting remote Restate services (durable functions), timers, or delegation promises. Failed functions are retriggered independently by Restate Server rather than requiring parent re-execution. Where Temporal drives progress of an activity via scheduling a workflow task, Restate drives progress by directly invoking the target function from the engine itself. This makes sense, as there is no separate workflow and activity task specialization.
If func1 is waiting on func2, then func1 can suspend while Restate executes (and possibly retries) func2 independently until it completes or reaches a retry or time limit, only then waking up func1 to resume. Therefore we can say Restate is purely a push model. Restate Server acts as a man-in-the-middle, invoking functions, while functions send commands and notifications to Restate Server, which it reacts to. In its man-in-the-middle position, it can also subscribe to Kafka topics and invoke a function for each event.

Fig 6. Invocations are driven by Restate Server. Functions will suspend when they await Restate-governed remote side effects (and no local side effects). Restate detects when a suspended function should be resumed and invokes it. Note this diagram omits the notifications sent from the Restate client back to Restate Server relating to the start and end of each local side effect.

Resonate is definitely worth a mention here too; it falls into the arbitrary function tree camp, and is going further by defining a protocol for this pattern. The Resonate model looks the simplest (everything is a function, either local or remote), though I haven't played with it yet. I recommend reading Dominik Tornow's writing and talks on this subject of distributed async/await as trees of functions returning promises.

DBOS has some similarities with Temporal in that it is also a two-level model with workflows and steps, except most steps are local-context (run as part of the parent function). DBOS workflows mostly operate as a single function with local-context side effects, except for a few cases like durable timers, which act as remote-context side effects and provide suspension points. A DBOS workflow can also trigger another workflow and await the result, providing another suspension point (as the other workflow is a remote-context side effect). In this way, DBOS can form function trees via workflows invoking workflows (as workflows are basically functions).
DBOS also uses a worker model, where workers poll Postgres for work (which is similar to Temporal workers polling task queues). Because steps are local-context side effects (such as db commands and API calls), a typical workflow does not suspend (unless it awaits a timer or another workflow). This differs from Temporal, which schedules all activities as remote-context side effects (activity tasks are run as an independent unit of work on any worker).

Fig 7. DBOS workers poll Postgres for work. Functions will suspend when they await a timer or another DBOS workflow. The logic is mostly housed in the DBOS client, where the polling logic can detect when to resume a suspended workflow.

Despite their differences, Temporal, Restate and DBOS suspend execution for the same fundamental reason: the distinction between locally-run and remotely-run side effects. Temporal makes activities explicitly remote but only ever one layer deep; Restate and DBOS generally make side effects local-context but support remote-context in the form of timers and other durable workflows/functions.

Durable execution frameworks sit on a continuum from more constrained to more flexible, compositional models. On the constrained end, frameworks like Temporal and DBOS use two distinct abstractions: workflows (control flow logic) and activities/steps (side effects). Activities/steps are terminal leaves; only workflows can have children. This constraint provides helpful structure: it's clear what should be a workflow (multi-step coordination) versus an activity (a single unit of work). The tradeoff is less flexibility: if your "single unit of work" needs its own sub-steps, you must either break it into multiple activities or promote it to a child workflow. On the flexible end, frameworks like Resonate treat everything as functions calling functions. There's no distinction between "orchestration" and "work": any function can call any other function to arbitrary depth.
This provides maximum composability but requires discipline to avoid overly complex trees. Restate straddles both camps as it offers multiple building blocks, so it's harder to pin down on this continuum.

All positions on this continuum support function trees; the difference is how much structure the framework imposes versus how much freedom it provides. Constrained models offer guardrails against complexity, forcing you to think in terms of workflows and steps. Resonate and Restate provide more flexibility, functions calling functions, but inevitably this requires a bit more discipline.

Using what we've covered in part 1, in part 2 we'll take a step back and:

- Look at how durable execution compares to event-driven architecture in terms of fault domains/responsibility boundaries.
- Ask the question: what does durable execution actually provide us that we can't achieve by other means?
- Finally, look at how durable execution fits into the wider architecture, including event-driven architecture.

Part 1:

- Building blocks: start with promises and continuations and how they work in traditional programming.
- Making them durable: how promises and continuations are made durable.
- The durable function tree: how these pieces combine to create hierarchical workflows with nested fault boundaries.
- Function trees in practice: a look at Temporal, Restate, Resonate and DBOS.

Part 2:

- Responsibility boundaries: how function trees fit into my Coordinated Progress model and responsibility boundaries.
- Value-add: what value does durable execution actually provide?
- Architecture discussion: where function trees sit alongside event-driven choreography, and when to use each.

Rob Zolkos 1 week ago

Vanilla CSS is all you need

Back in April 2024, Jason Zimdars from 37signals published a post about modern CSS patterns in Campfire. He explained how their team builds sophisticated web applications using nothing but vanilla CSS. No Sass. No PostCSS. No build tools. The post stuck with me. Over the past year and a half, 37signals has released two more products (Writebook and Fizzy) built on the same nobuild philosophy. I wanted to know if these patterns held up. Had they evolved? I cracked open the source code for Campfire, Writebook, and Fizzy and traced the evolution of their CSS architecture. What started as curiosity became genuine surprise. These are not just consistent patterns. They are improving patterns. Each release builds on the last, adopting progressively more modern CSS features while maintaining the same nobuild philosophy.

These are not hobby projects. Campfire is a real-time chat application. Writebook is a publishing platform. Fizzy is a full-featured project management tool with kanban boards, drag-and-drop, and complex state management. Combined, they represent nearly 14,000 lines of CSS across 105 files. Not a single line touches a build tool.

Let me be clear: there is nothing wrong with Tailwind. It is a fantastic tool that helps developers ship products faster. The utility-first approach is pragmatic, especially for teams that struggle with CSS architecture decisions. But somewhere along the way, utility-first became the only answer. CSS has evolved dramatically: the language that once required preprocessors for variables and nesting now has much of that built in (see the feature list at the end of this post). 37signals looked at this landscape and made a bet: modern CSS is powerful enough, no build step required. Three products later, that bet is paying off.

Open any of these three codebases and you find the same flat structure. That is it. No subdirectories. No partials. No complex import trees. One file per concept, named exactly what it does. Zero configuration. Zero build time. Zero waiting.
I would love to see something like this ship with new Rails applications: a simple starting structure with a handful of conventional stylesheets already in place. I suspect many developers reach for Tailwind not because they prefer utility classes, but because vanilla CSS offers no starting point. No buckets. No conventions. Maybe CSS needs its own omakase.

Jason's original post explained OKLCH well. It is the perceptually uniform color space all three apps use. The short version: unlike RGB or HSL, OKLCH's lightness value actually corresponds to perceived brightness. A 50% lightness blue looks as bright as a 50% lightness yellow. What is worth noting is how this foundation remains identical across all three apps. Dark mode becomes trivial: every color that references these primitives automatically updates. No duplication. No separate dark theme file. One media query, and the entire application transforms.

Fizzy takes this further: one color in, four harmonious colors out. Change the card color via JavaScript, and the entire card theme updates automatically. No class swapping. No style recalculation. Just CSS doing what CSS does best.

Here is a pattern I did not expect: all three applications use ch units for horizontal spacing. Why characters? Because spacing should relate to content. A gap between words feels natural because it is literally the width of a character. As font size scales, spacing scales proportionally. This also makes their responsive breakpoints unexpectedly elegant: instead of asking "is this a tablet?", they are asking "is there room for 100 characters of content?" It is semantic. It is content-driven. It works.

Let me address the elephant in the room. These applications absolutely use utility classes. The difference? These utilities are additive, not foundational. The core styling lives in semantic component classes. Utilities handle the exceptions: the one-off layout adjustment, the conditional visibility toggle.
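To make the color-primitive pattern concrete, here is a rough sketch of the idea described above (the variable names and OKLCH values are illustrative, not 37signals' actual code):

```css
/* Illustrative sketch: OKLCH primitives that everything else references. */
:root {
  --lch-surface: 98% 0.005 250;
  --lch-ink: 20% 0.01 250;
  --color-bg: oklch(var(--lch-surface));
  --color-ink: oklch(var(--lch-ink));
}

/* One media query flips the primitives; every color that
   references them updates automatically. */
@media (prefers-color-scheme: dark) {
  :root {
    --lch-surface: 20% 0.01 250;
    --lch-ink: 95% 0.005 250;
  }
}

body {
  background-color: var(--color-bg);
  color: var(--color-ink);
}
```

Because every derived color goes through the primitives, dark mode is a single media query rather than a parallel theme file.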
Compare a typical Tailwind component with the 37signals equivalent. Yes, it is more CSS. But consider what you gain: readable HTML, cascading changes, composable variants, and media queries that live with the components they affect.

If there is one CSS feature that changes everything, it is :has(). For decades, you needed JavaScript to style parents based on children. No more. Writebook uses it for a sidebar toggle with no JavaScript; Fizzy uses it for kanban column layouts; Campfire uses it for intelligent button styling. This is CSS doing what you used to need JavaScript for. State management. Conditional rendering. Parent selection. All declarative. All in stylesheets.

What fascinated me most was watching the architecture evolve across releases. Campfire (first release) established the foundation; Writebook (second release) added modern capabilities; Fizzy (third release) went all-in on modern CSS. You can see a team learning, experimenting, and shipping progressively more sophisticated CSS with each product. By Fizzy, they are using features many developers do not even know exist.

CSS Layers solve the specificity wars that have plagued CSS since the beginning. It does not matter what order your files load. It does not matter how many classes you chain. Layers determine the winner, period.

One technique appears in all three applications that deserves special attention. Their loading spinners use no images, no SVGs, no JavaScript. Just CSS masks, with the keyframes living in a separate file. Three dots, bouncing in sequence. Using currentColor means the spinner automatically inherits the text color: it works in any context, any theme, any color scheme. Zero additional assets. Pure CSS creativity.

The default browser mark element renders as a yellow highlighter. It works, but it is not particularly elegant. Fizzy takes a different approach for search result highlighting: drawing a hand-drawn circle around matched terms.
The empty element exists solely to provide two pseudo-elements (::before and ::after) that draw the left and right halves of the circle. The technique uses asymmetric border-radius values to create an organic, hand-drawn appearance. A blend mode makes the circle semi-transparent against the background, switching in dark mode for proper blending. No images. No SVGs. Just borders and border-radius creating the illusion of a hand-drawn circle.

Fizzy and Writebook both animate native dialog elements. This was notoriously difficult before. The dialog animates in and out using pure CSS: the @starting-style rule defines where the animation starts from when an element appears, and combined with discrete transitions you can now transition an element between its hidden and shown states. The modal smoothly scales and fades in. The backdrop fades independently. No JavaScript animation libraries. No manually toggling classes. The browser handles it.

I am not suggesting you abandon your build tools tomorrow. But I am suggesting you reconsider your assumptions. You might not need Sass or PostCSS: native CSS has variables and nesting, and the features that needed polyfills are now baseline across browsers. You might not need Tailwind for every project, especially if your team understands CSS well enough to build a small design system. While the industry sprints toward increasingly complex toolchains, 37signals is walking calmly in the other direction. Is this approach right for everyone? No. Large teams with varying CSS skill levels might benefit from Tailwind's guardrails. But for many projects, their approach is a reminder that simpler can be better.

Thanks to Jason Zimdars and the 37signals team for sharing their approach openly. All code examples in this post are taken from the Campfire, Writebook, and Fizzy source code. For Jason's original deep-dive into Campfire's CSS patterns, see Modern CSS Patterns and Techniques in Campfire.
If you want to learn modern CSS, these three codebases are an exceptional classroom.

What modern CSS now offers:

- Native custom properties (variables)
- Native nesting
- Container queries
- The :has() selector (finally, a parent selector)
- CSS Layers for managing specificity
- Functions for dynamic color manipulation
- Functions for responsive sizing without media queries

What you gain with semantic component classes:

- HTML stays readable: the class tells you what something is, not how it looks.
- Changes cascade: update once, every button updates.
- Variants compose: add a variant without redefining every property.
- Media queries live with components: dark mode, hover states, and responsive behavior are co-located with the component they affect.

Campfire (first release):

- OKLCH colors
- Custom properties for everything
- Character-based spacing
- Flat file organization

Writebook (second release):

- View Transitions API for smooth page changes
- Container queries for component-level responsiveness
- @starting-style for entrance animations

Fizzy (third release):

- CSS Layers for managing specificity
- Dynamic color derivation
- Complex :has() chains replacing JavaScript state

Brain Baking 1 week ago

Favourites of November 2025

The more holiday seasons I see coming and going, the less enthused I am by the forced celebration that tastes an awful lot like capitalism. I put up my gift guide anyway, just in case anyone is willing to buy me that dough mixer; otherwise I'll have to do it in January as an early expense for the upcoming year. Thanks in advance! There isn't a lot of mental space left to prepare for celebrations anyway, with the second kid giving us an equally hard time as the first. Anyway. Welcome, last month of the year, I guess. The first one who plays Last Christmas is out. Previous month: September 2025.

Not really. None, to be very precise. But I did buy yet another one: Mara van der Lugt's Hopeful Pessimism, which sounded like it was written for me. I expect equally great and miserable things from this work.

I've only had the time to write the review for Rise of the Triad: Ludicrous Edition (ROTT) that I ended up buying for the Nintendo Switch thanks to Limited Run Games' stock overflow. It felt wonderfully weird to be playing a 1994 DOS cult classic on the Switch. And yes, the Ludicrous Edition is ludicrous. I finally made it past the third map! I'm still feeling the retro shooter vibe and bought the Turok Trilogy on a whim after learning it was also done by Nightdive Studios. Another smaller game I played in between the ROTT sessions was Shotgun King, which somehow manages to combine chess with shotguns, and very successfully so. Unfortunately, it's a bit of a bare-bones roguelike, difficult as hell, and therefore not really my forte. I have yet to unlock all the shotguns. Don't buy the game on macOS: GOG ended up refunding my purchase because it kept on crashing in the introduction cutscene. The Switch edition is fine.

Slightly game related: my wife sent me this YouTube video where Ghostfeeder explains how he uses the Game Boy to make music, which I think is worth sharing here.

Related topics: metapost. By Wouter Groeneveld on 3 December 2025. Reply via email.
Charlie Theel put up a post called Philosophy and Board Games on Player Elimination where I learned about Mara's Hopeful Pessimism. On a slightly more morbid topic, Wesley thought about How Websites Die and shared his notes. Lina's map of the internet functions as a beautiful pixelated website map that inspires me to do something similar. Kelson Vibber reviews web browsers. The sad state of Mozilla made me look elsewhere, and I'm currently using both Firefox and Vivaldi. According to Hypercombogamer the Game Boy Advance is Nintendo's Most Underrated Handheld. I don't know if I agree, but I do agree that both the GBA and its huge library are awesome. Eurogamer regularly criticises Microsoft and their dumb Xbox moves. The last piece was the ridiculous Game Pass advent. Matt Bee's retro gaming site is loaded with cool-looking game badges that act as links to small opinion pieces. It's a fun guessing game as I'm not familiar with some of the pixel art. Astrid Poot writes about lessons learned about making and happiness. Making is the route to creativity. Making is balance. Alyssa Rosenzweig proves that AAA gaming on Asahi Linux is totally possible. Patrick Dubroy has thoughts on ways to do applied research. His conclusion? Aim for practical utility first, by "building something that addresses an actual need that you have". Eat your own dog shit, publish later? Here's another way to block LLM crawlers without JavaScript by Uggla. Wolfgang Ziegler programs on the Game Boy using Turbo Rascal, something I hadn't encountered before. Wes Fenlon wrote a lengthy document over at PC Gamer on how to design a metroidvania map. Jan Ouwens claims there are no good Java code formatters out there. Seb shared A Road to Common Lisp after I spotted his cool "warning: made with Lisp" badge. A lot of ideas are taking form, to be continued… Speaking of Lisp: Colin Woodbury is drawn to Lisp because of its simplicity and beauty.

Robert Lützner wrote an honest report on the duality of being a parent. As a parent myself, I found myself sobbing and nodding in agreement as I read the piece. Michael Klamerus shares his thoughts on Crystal Caves HD. The added chiptune music just feels misplaced in my opinion. I'm looking forward to the Bio Menace remaster as well! Felienne Hermans criticizes the AI Delta Plan (in Dutch). We should stop proclaiming build, build, build! as the slogan of the future and start thinking about reduce & re-use. Hamilton shares his 2025 programming language tier list. The funny thing is that number one on the list suddenly got replaced by a more conventional alternative. I don't agree with his reasoning at all (spoiler: it contains AI), but it's an interesting read nonetheless. Mikko Saari published his 2025 edition of the top 100 board game list a little earlier this year. There are a bunch of interesting changes in the top 10! SETI also pops up quite high on my list, but I haven't had the chance to create it yet. If you live near The Netherlands, consider visiting The Home Computer Museum. They also have a ton of retro magazines lying around to flip through! Wait, there's a Heroes of Might & Magic card game? That box looks huge! (So does the backing price…) Death Code is an entirely self-hosted web application that utilizes Shamir's Secret Sharing to share secrets after you die. tttool is a reverse-engineering effort to inspect how the Tip Toi educational pens work. I was somehow featured at https://twostopbits.com/ and now I know why: it's Hacker News for retro nerds. Apparently things like WhatsApp bridges for Matrix exist, which got me thinking: can I run bridges for WhatsApp and Signal to merge all messaging into The One Ring? Emulate Windows 95 right in the browser. Crazy to see what you can do nowadays with WASM/JS/Whatever. It looks like LDtk is the best 2D game map editor ever created.
Wild Weasel created a retro looking Golf video game shrine in their little corner of the internet, and the result is lovely. I should really start playing my GBC Mario Golf cart.

Hugo 1 week ago

Implementing a tracking-free captcha with Altcha and Nuxt

For the past few days, I've noticed several suspicious uses of my contact form. Looking closer, I noticed that each contact form submission was followed by a user signup with the same email and a name that always followed the same pattern: qSfDMiWAiLnpYYzdCeCWd, fePXzKXbAmiLAweNZ, etc. Let's just say their membership in the human species seems particularly dubious. Anyway, it's probably time to add some controls, and one of the most famous is the captcha.

## Next-generation captchas

Everyone knows captchas – they're annoying, probably on par with cookie consent banners. Nowadays we see captchas where you have to identify traffic lights, solve additions, drag a puzzle piece to the right spot, and so on. But you may have noticed that lately we're also seeing simple forms with a checkbox: "I am not a robot".

![I'm not a robot](https://writizzy.b-cdn.net/blogs/48b77143-02ee-4316-9d68-0e6e4857c5ce/1764749254941-124yicj.jpg)

Sometimes the captcha isn't even visible anymore, with detection happening without asking you anything. So how does it work? And how can I add it to my application?

## Nuxt Turnstile, the default solution with Nuxt

In the Nuxt ecosystem, the most common solution is [Nuxt Turnstile](https://nuxt.com/modules/turnstile). The documentation is pretty clear on how to add it. It's a great solution, but it relies on [Cloudflare Turnstile](https://developers.cloudflare.com/turnstile/), and I'm trying to use only European products for Writizzy and Hakanai. Still, the documentation helps understand a bit better how next-generation captchas work. When the page loads, the Turnstile widget performs client-side checks:

- **Proof of space:** The script asks the client to generate and store an amount of data according to a predefined algorithm, then asks for the byte at a given position. Not only does this take time, but it's difficult to automate at scale.
- **Trivial browser detections:** The idea is to try to detect a bot (no plugins, webdriver control, etc.).
Fingerprinting also helps in this case. It collects all available info about the browser, OS, available APIs, resolution, etc. Note that fingerprinting can be frowned upon by GDPR, which may consider it as uniquely identifying a person. Personally, I find that debatable, but in the context of anti-spam protection we're kind of chasing our tail here, since we would have to ask bots for their permission in order to try to detect them. We're at the limits of absurdity here. But let's continue.

The script then sends all this info to Cloudflare. Based on it, and relying on a huge database of worldwide traffic, Cloudflare calculates a percentage chance that the user is a bot. The form will vary between:

- nothing to do, Cloudflare is convinced it's a human
- a checkbox "I am not a robot"
- a more elaborate captcha if the suspicion is really strong
- a blocking page when there's no doubt about the suspicion

Now, you might say, the checkbox is a bit light, isn't it? If I've gotten this far, I can easily automate a click on a checkbox. Especially since Cloudflare is everywhere, it's necessarily the same form everywhere. Yes... But... First, the way you check the box will be analyzed. Is the click too fast, does it seem automated, is the mouse path to reach the box natural? All this can trigger additional protection.

*EDIT: Turnstile might not do this operation. reCAPTCHA, Google's solution, is known for doing it. Turnstile is less explicit on the subject.*

But on top of that, the checkbox triggers a challenge, a small calculation requested by Cloudflare that your client must perform. The result is what we call a **proof of work**. This work is slow for a computer. We're talking about 500ms, an eternity for a machine. For a human user, it's totally anecdotal. And the satisfaction of having proven their humanity makes you forget those 500 little milliseconds.
On the other hand, for a bot, this time will be a real problem if it needs to automate the creation of hundreds or thousands of accounts. So it's not impossible to check this box, but it's costly. And it's supposed to make the economic equation uninteresting at high volumes. Now, even though all this is nice, I still don't want to use Cloudflare, so how do I replace it?

## Altcha, an open-source alternative

During my research, I came across [altcha](https://altcha.org/). The solution is open source, requires no calls to external servers, and shares no data. The implementation requires requesting the Proof of Work (the famous JavaScript challenge) from your server. Here we'll initiate it from the Nuxt backend, in a handler:

```typescript
// server/api/altcha/challenge.get.ts
import { createChallenge } from 'altcha-lib'

export default defineEventHandler(async () => {
  const hmacKey = useRuntimeConfig().altchaHmacKey as string
  return createChallenge({
    hmacKey,
    maxnumber: 100000,
    expires: new Date(Date.now() + 60000) // 1 minute
  })
})
```

In the contact form page, we'll add a Vue component:

```vue
```

This `altchaPayload` will be added to the post payload, for example:

```typescript
await $fetch('/api/contact', {
  method: 'POST',
  body: {
    email: loggedIn.value ? user.value?.email : event.data.email,
    subject: event.data.subject,
    message: event.data.message,
    altcha: altchaPayload.value
  }
})
```

The calculation result will then be verified in the `/api/contact` endpoint:

```typescript
// in the /api/contact handler
import { verifySolution } from 'altcha-lib'

const hmacKey = useRuntimeConfig().altchaHmacKey as string
const ok = await verifySolution(data.altcha, hmacKey)
if (!ok) {
  throw createError({ statusCode: 400, message: 'Invalid challenge' })
}
```

The Vue component I mentioned earlier is this one:

```vue
```

And there you go, the [contact page](https://pulse.hakanai.io/contact) and the [signup page](https://pulse.hakanai.io/signup) are now protected by this altcha. Now, does it work?
## Altcha's limitations

The implementation was done yesterday. And unfortunately, I'm still seeing very suspicious signups on Pulse. So clearly, Altcha didn't do its job. However, now that we know how it works, it's easier to understand why it doesn't work. Altcha doesn't do any of the checks that Turnstile does:

- no proof of space
- no fingerprinting
- no fingerprint verification with Cloudflare
- no behavioral verification of the mouse click on the checkbox

The only protection is the proof of work, which only costs the attacker time. Now for Pulse, for reasons I don't understand, the person having fun creating accounts makes about 4 per day. The cost of the proof of work is negligible in this case. So Altcha is not suited for this type of "slow attack". Anyway, I'll have to find another workaround... And I'm open to your suggestions.

Rob Zolkos 1 week ago

The Making of Fizzy, Told by Git

Today Fizzy was released and the entire source code of its development history is open for anyone to see . DHH announced on X that the full git history is available - a rare opportunity to peek behind the curtain of how a 37signals product comes together. I cloned down the repository and prompted Claude Code: “Can you go through the entire git history and write a documentary about the development of this application. What date the first commit was. Any major tweaks, changes and decisions and experiments. You can take multiple passes and use sub-agents to build up a picture. Make sure to cite commits for any interesting things. If there is anything dramatic then make sure to see if you can figure out decision making. Summarize at the end but the story should go into STORY.md” It responded with: “This is a fascinating task! Let me create a comprehensive investigation plan and use multiple agents to build up a complete picture of this project’s history.” Here is the story of Fizzy - as interpreted by Claude - from the trail of git commits. Enjoy! A chronicle of 18 months of development at Basecamp, told through 8,152 commits. At 1:19 PM on a summer Friday, Kevin McConnell typed the words that would begin an 18-month journey: Within hours, the foundation was laid. The team moved with practiced efficiency: By end of day, the skeleton of a Rails application stood ready. But what would it become? One month after inception, Jason Zimdars introduced the application’s first real identity: A “Splat” — the name evokes something chaotic, impactful, unexpected. Like a bug hitting your windshield on a summer drive. The original data model was simple: The next day brought the visual metaphor that would define the early application: The windshield was the canvas. Splats appeared on it like bugs on glass — colorful, slightly chaotic, each one a piece of information demanding attention. The commits reveal urgency. Something important was coming: The all-hands demo. 
Approximately one month after project inception, Fizzy (then still called “Splat”) was shown to the entire company. The pressure to polish was evident in the commit messages. Seven days after the windshield metaphor was established, Jason Zimdars typed four words that would reshape the application’s identity: The chaotic “splat” gave way to something gentler — bubbles floating on a windshield , like soap suds catching light. The animation changed from aggressive splattering to gentle floating: Perfect circles gave way to hand-drawn blob shapes. The team was discovering what their product was through the act of building it. A new interaction pattern emerged: When users “boosted” a bubble, it would puff up and float away — like champagne fizz rising. The animation: The metaphor was crystallizing. Bubbles. Fizzing. Effervescence. The name would come soon. In a single day, the application found its final name through two commits: 42 files changed. The model, controllers, views, tests — everything touched. Hours later: Fizzy. The name captured everything: the bubbles, the effervescence, the playful energy of the interface. Visual design had driven product naming — the team discovered what they were building through the act of building it. The flat list of bubbles needed structure: But “Projects” didn’t feel right. Eight days later: Then “Bucket” became “Collection.” Eventually, “Collection” would become “Board.” The terminology dance — Projects → Buckets → Collections → Boards — reveals a team searching for the right mental model. They ultimately landed on the familiar “Board” metaphor, aligning with tools like Trello and Linear. David Heinemeier Hansson, creator of Ruby on Rails and co-founder of Basecamp, made his first contribution with characteristic pragmatism: He deleted an unused image file. It was a statement of intent. 
Within two days, DHH’s fingerprints were everywhere: He upgraded the entire application to Rails 8 release candidate and systematically added HTTP caching throughout. DHH’s most distinctive contribution was his crusade against what he called “anemic” code — thin wrappers that explain nothing and add needless indirection. He used this term 15 times in commit messages: Philosophy: Code should either add explanatory value OR hide implementation complexity. Thin wrappers that do neither are “anemic” and should be eliminated. Then came April 2025. DHH made 323 commits in a single month — 55% of his total contributions compressed into 30 days. This was a surgical strike. He: His commit messages tell the story: In DHH’s philosophy: deletion is a feature, not a bug. After 10 months as “Bubbles,” another transformation: 333 files changed. “Pop” (completing a bubble) became “Closure” (closing a card). The playful metaphor gave way to task management vocabulary. The final architectural piece: Fizzy had become a kanban board . Cards lived in columns. Columns could be customized, colored, reordered. The application had evolved from “bugs on a windshield” to a sophisticated project management tool. Collections became Boards. The transformation was complete: Original (July 2024): Final (November 2025): A Claude-powered AI assistant that could answer questions about project content. Born, restricted to staff, then removed entirely. Perhaps replaced by the more ambitious MCP (Model Context Protocol) integration — making Fizzy AI-native at the protocol level rather than bolting on a chatbot. Emoji reactions for cards and comments. Added. Removed. Then added again. The git history shows healthy debate — not everything that ships stays shipped, and not everything removed stays gone. Saved custom views were replaced by ephemeral quick filters. Complexity gave way to simplicity. Predefined workflows with stages were removed in favor of ad-hoc column organization. 
Users would create their own structure. The MCP (Model Context Protocol) branch represents cutting-edge AI integration — allowing Claude and other AI assistants to interact with Fizzy programmatically. An manifest advertises Fizzy’s capabilities to AI clients. Status: Removed from main, but the infrastructure remains fascinating. This is one of the earliest explorations of making traditional web applications AI-native. Multiple parallel branches exploring different approaches to mobile column navigation. Scroll snapping. Contained scrolling. Swipeable columns. The problem remains unsolved — there’s no “one true way” for mobile kanban navigation. Making Fizzy work with SQLite in addition to MySQL. Simpler local development. Better portability. The search index was even sharded into 16 tables ( through ) for scale. The proprietary SAAS features were extracted into a separate gem. What remained was a clean, open-source Rails application. After 18 months of development, 8,152 commits, and countless pivots, Fizzy became open source. Jason Zimdars (2,217 commits) — The visual architect. From “Let’s try bubbles” to pixel-perfect polish. Jorge Manrubia (2,053 commits) — The engineering backbone. Consistent, prolific, essential. Andy Smith (1,007 commits) — Front-end craftsmanship and UI refinement. Mike Dalessio (875 commits) — Infrastructure, performance, the recent dashboard work. David Heinemeier Hansson (586 commits) — The architectural enforcer. Rails modernization and the war on anemic code. Kevin McConnell (351 commits) — Started it all with “New Rails app.” Jose Farias (341 commits) — Feature development and testing. Stanko K.R. (239 + 54 commits) — Security hardening and webhook restrictions. Jeffrey Hardy (100 commits) — Early infrastructure and modernization. Jason Fried (7 commits) — The occasional “Small copy adjustment” from the CEO. July 2024 (v0.1): September 2024 (v0.2): November 2025 (v1.0): The story of Fizzy is the story of discovery through building . 
The team didn’t know they were building a kanban board when they started with “splats on a windshield.” They found out through iteration. Key lessons: Names matter, but they can change. Splat → Bubble → Card. Project → Bucket → Collection → Board. The right name emerges through use. Deletion is a feature. Boosts, Fizzy Ask, custom views, workflows — removing the wrong features is as important as adding the right ones. Architecture evolves. The final column-based kanban system looks nothing like the original flat list of splats. DHH’s philosophy: Remove anemic code. Keep transactions short. Use the latest Rails. Delete more than you add. Design drives naming. “Fizzy” emerged from the visual metaphor of bubbles puffing up and floating away — the design informed the brand. Open source takes extraction. 18 months of SAAS development needed careful separation before the core could be shared. The git history of Fizzy is a masterclass in iterative product development. 8,152 commits. 25+ contributors. 18 months. One application that discovered its identity through the act of creation. 
“Let’s try bubbles.” — Jason Zimdars, July 31, 2024 Documentary compiled December 2, 2025 Based on analysis of the Fizzy git repository First Commit: June 21, 2024 Total Commits: 8,152 Contributors: 25+ Lines of Code Changed: Hundreds of thousands Name Changes: 4 (Splat → Bubble → Card; Project → Bucket → Collection → Board) Features Removed: At least 4 major ones DHH Commits in April 2025 Alone: 323 1:23 PM — Gemfile updated ( ) 3:47 PM — Rubocop configured ( ) 4:07 PM — Minimal authentication flow ( ) 4:29 PM — CSS reset and base styles ( ) 4:46 PM — Brakeman security scanning added ( ) Removed the entire Boosts feature ( ) — 299 lines across 27 files, gone Eliminated activity scoring ( , , ) Extracted RESTful controllers from overloaded ones ( , ) Enforced transaction discipline ( — “No long transactions!”) Splats on a Windshield Cards → Columns → Boards → Accounts
July 24, 2024: “Handful of tweaks before all-hands” — Demo day pressure July 31, 2024: “Let’s try bubbles” — The visual pivot September 4, 2024: “Splat -> Fizzy” — Finding the name April 2025: DHH’s 323-commit refactoring blitz October 2025: “Remove Fizzy Ask” — The AI feature that didn’t survive November 28, 2025: “Initial README and LICENSE” — Going public Rails 8.x — Always on the latest, sometimes ahead of stable Hotwire (Turbo + Stimulus) — No heavy JavaScript framework Solid Queue & Solid Cache — Rails-native background jobs and caching SQLite + MySQL support — Database flexibility Kamal deployment — Modern container orchestration UUID primary keys — Using UUIDv7 for time-ordering Multi-tenancy — Account-based data isolation

Rob Zolkos 1 week ago

Fizzy Webhooks: What You Need to Know

Fizzy is a new issue tracker ( source available ) from 37signals with a refreshingly clean UI. Beyond looking good, it ships with a solid webhook system for integrating with external services. For most teams, webhooks are the bridge between the issues you track and the tools you already rely on. They let you push events into chat, incident tools, reporting pipelines, and anything else that speaks HTTP. If you are evaluating Fizzy or planning an integration, understanding what these webhooks can do will save you time. I also put together a short PDF with the full payload structure and example code, which I link at the end of this post if you want to go deeper. Here are a few ideas for things you could build on top of Fizzy’s events: If you want to go deeper, you can also build more opinionated tools that surface insights and notify people who never log in to Fizzy:

Here is how to set it up.

1. Visit a board and click the Webhook icon in the top right.
2. Give the webhook a name and the payload URL, and select the events you want to be alerted to.
3. Once the webhook saves, you will see a summary of how it is set up and, most importantly, the webhook secret, which you will need in your handler for securing the webhook.

There is also a handy event log showing you when an event was delivered. Since I like to tinker with these sorts of things, I built a small webhook receiver to capture and document the payload structures. Fizzy sends HTTP POST requests to your configured webhook URL when events occur. Each request includes an header containing an HMAC-SHA256 signature of the request body. The verification process is straightforward: Fizzy covers the essential card lifecycle events: The approach was straightforward: I wrote a small Ruby script using WEBrick to act as a webhook receiver.
The script listens for incoming POST requests, verifies the HMAC-SHA256 signature (using the webhook secret Fizzy provides when you configure webhooks), and saves each event as a separate JSON file with a timestamp and action name. This made it easy to review and compare the different event types later. To expose my local server to the internet, I used ngrok to create a temporary public URL pointing to port 4002. I then configured Fizzy’s webhook settings with this ngrok URL and selected the event types I wanted to capture. With everything set up, I went through Fizzy’s UI and manually triggered each available event: creating cards, adding comments, assigning and unassigning users, moving cards between columns and boards, marking cards as done, reopening them, postponing cards to “Not Now”, and sending cards back to triage. Each action fired a webhook that my script captured and logged. In total, I captured 13 webhook deliveries covering 10 different action types. The only event I could not capture was “Card moved to Not Now due to inactivity” — Fizzy triggers this automatically after a period of card inactivity, so it was not practical to reproduce during this test. Card body content is not included. The card object in webhook payloads only contains the , not the full description or body content. Comments include both and versions, but cards do not. Since Fizzy doesn’t have a public API ( DHH is working on it ), you can’t fetch the full card content programmatically - you’ll need to use the field to view the card in the browser. Column data is only present when relevant. The object only appears on , , and events - the events where a card actually moves to a specific column. IDs are strings, not integers. All identifiers in the payload are strings like , not numeric IDs. 
I created a short webhook documentation based on this research: FIZZY_WEBHOOKS.pdf It includes the full payload structure, all event types with examples, and code samples for signature verification in both Ruby and JavaScript. Hopefully this helps you get up and running with Fizzy’s webhooks. Let me know if you discover additional events or edge cases. Since the source code is available, you can also submit PRs to fix or enhance aspects of the webhook system if you find something missing or want to contribute improvements.

- A team metrics dashboard that tracks how long cards take to move from to and which assignees or boards close issues the fastest.
- Personal Slack or Teams digests that send each person a daily summary of cards they created, were assigned, or closed based on , , , and events.
- A churn detector that flags cards that bounce between columns or get sent back to triage repeatedly using , , and .
- A cross-board incident view that watches to keep a separate dashboard of cards moving into your incident or escalation boards.
- A comment activity stream that ships events into a search index or knowledge base so you can search discussions across boards.
- Stakeholder status reports that email non-technical stakeholders a weekly summary of key cards: what was created, closed, postponed, or sent back to triage on their projects. You can group by label, board, or assignee and generate charts or narrative summaries from , , , and events.
- Capacity and load alerts that watch for people who are getting overloaded. For example, you could send a notification to a manager when someone is assigned more than N open cards, or when cards assigned to them sit in the same column for too long without a or event.
- SLA and escalation notifications that integrate with PagerDuty or similar tools. When certain cards (for example, labeled “Incident” or on a specific board) are not closed within an agreed time window, you can trigger an alert or automatically move the card to an escalation board using , , and .
- Customer-facing status updates that keep clients in the loop without giving them direct access to Fizzy. You could generate per-customer email updates or a small status page based on events for cards tagged with that customer’s name, combining , , and to show progress and recent discussion.
- Meeting prep packs that assemble the last week’s events for a given board into a concise agenda for standups or planning meetings. You can collate newly created cards, reopened work, and high-churn items from , , , and , then email the summary to attendees before the meeting.

- new card created
- card moved to a column
- assignment changes
- card moved to Done
- card reopened from Done
- card moved to Not Now
- card moved back to Maybe?
- card moved to different board
- comment added to a card

マリウス 1 week ago

disable-javascript.org

With several posts on this website attracting significant views in the last few months, I have come across plenty of feedback on the tab gimmick implemented last quarter. While the replies that I came across on platforms like the Fediverse and Bluesky were lighthearted and oftentimes humorous, the visitors coming from traditional link aggregators sadly weren’t as amused. Obviously, a large majority of the people disagreeing with the core message behind this prank appear to be web developers, whose very existence quite literally depends on JavaScript, and who didn’t hold back in expressing their anger in the comment sections as well as through direct emails. Unfortunately, most commenters are missing the point. This email exchange is just one example of feedback that completely misses the point:

> I just found it a bit hilarious that your site makes notes about ditching and disable Javascript, and yet Google explicitly requires it for the YouTube embeds. Feels weird.

The email contained the following attachment:

Given the lack of context I assume that the author was referring to the YouTube embeds on this website (e.g. on the keyboard page). Here is my reply:

> Simply click the link on the video box that says “Try watching this video on www.youtube.com” and you should be directed to YouTube (or a frontend of your choosing with LibRedirect [1]) where you can watch it. Sadly, I don’t have the influence to convince YouTube to make their video embeds work without JavaScript enabled. ;-) However, if more people would disable JavaScript by default, maybe there would be a higher incentive for server-side rendering, and video embeds would at the very least show a thumbnail of the video (which YouTube could easily do, from a technical point of view). Kind regards!
[1]: https://libredirect.github.io

It also appears that many of the people disliking the feature didn’t care to properly read the highlighted part of the popover that says “Turn JavaScript off, now, and only allow it on websites you trust!”: Indeed - and the author goes on to show a screenshot of Google Trends which, I’m sure, won’t work without JavaScript turned on. This comment perfectly encapsulates the flawed rhetoric. Google Trends (like YouTube in the previous example) is a website that is unlikely to exploit 0-days in your JavaScript engine, or at least that’s the general consensus. However, when you clicked on a link that looks like someone typed it in by putting their head on the keyboard, that led you to a website you obviously didn’t know beforehand, it’s a different story. What I’m advocating for is to have JavaScript disabled by default for everything unknown to you, and only enable it for websites that you know and trust. Not only is this approach going to protect you from jump-scares, regardless of whether that’s a changing tab title, a popup, or an actual exploit, but it will hopefully pivot the thinking of web developers in particular back from “Let’s render the whole page using JavaScript and display nothing if it’s disabled” towards “Let’s make the page as functional as possible without the use of JavaScript and only sprinkle it on top as a way to make the experience better for anyone who chooses to enable it”. It is mind-boggling how this simple take is perceived as militant techno-minimalism and can provoke such salty feedback. I keep wondering whether these are the same people that consider to be a generally okay way to install software …? One of the many commenters who did agree with the approach that I’m taking on this site, however, put it fairly nicely:

> About as annoying as your friend who bumped key’ed his way into your flat in 5 seconds waiting for you in the living room. Or the protest blocking the highway making you late for work. Many people don’t realize that JavaScript means running arbitrary untrusted code on your machine. […] Maybe the hacker ethos has changed, but I for one miss the days of small pranks and nudges to illustrate security flaws, instead of ransomware and exploits for cash. A gentle reminder that we can all do better, and the world isn’t always all that friendly.

As the author of this comment correctly hints, the hacker ethos has in fact changed. My guess is that only a tiny fraction of the people that are actively commenting on platforms like Hacker News or Reddit these days know about, let’s say, cDc ’s Back Orifice , the BOFH stories, bash.org , and all the kickme.to/* links that would trigger a disconnect in AOL ’s dialup desktop software. Hence, the understanding of how far pranks in the 90s and early 2000s really went simply isn’t there. And with most things these days required to be politically correct, having the tab change to what looks like a Google image search for “sam bankman-fried nudes” is therefore frowned upon by many, even when the reason behind it is to inform. Frankly, it seems that conformism has eaten not only the internet, but to an extent the whole world, when an opinion that goes ever so slightly against the status quo is labelled as some sort of extreme view. To feel even just a “tiny bit violated by” something as mundane as a changing text and icon in the browser’s tab bar seems absurd, especially when it is you that allowed my website to run arbitrary code on your computer!
Because I’m convinced that a principled stance against the insanity that is the modern web is necessary, I am doubling down on this effort by making it an actual initiative: disable-javascript.org. disable-javascript.org is a website that informs the average user about some of the most severe issues affecting the JavaScript ecosystem and browsers/users all over the world, and explains in simple terms how to disable JavaScript in various browsers and only enable it for specific, trusted websites. The site is linked on the JavaScript popover that appears on this website, so that visitors aren’t only pranked into hopefully disabling JavaScript, but can also easily find out how to do so. disable-javascript.org offers a JavaScript snippet that is almost identical to the one in use by this website, in case you would like to participate in the cause. Of course, you can just as well link to disable-javascript.org from anywhere on your website to show your support. If you’d like to contribute to the initiative by extending the website with valuable info, you can do so through its Git repository. Feel free to open pull requests with the updates that you would like to see on disable-javascript.org. :-)
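The snippet itself isn't reproduced in this post, but the mechanism behind a tab-title "jump-scare" is tiny. Here's a hedged sketch (the function name and behavior are my own illustration, not the actual code used by this site or disable-javascript.org): listen for the tab being hidden and swap the title.

```javascript
// Hypothetical sketch: change the tab title once the tab is hidden,
// illustrating the "changing tab title" prank described above.
// `doc` is passed in so the function can be exercised outside a browser.
function installTabPrank(doc, prankTitle) {
  doc.addEventListener("visibilitychange", () => {
    if (doc.visibilityState === "hidden") {
      doc.title = prankTitle; // shows in the tab bar while you're elsewhere
    }
  });
}

// In a real page you would call: installTabPrank(document, "something surprising");
```

Swapping the favicon works the same way: on `visibilitychange`, replace the `href` of the `<link rel="icon">` element.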


Helping agents debug webapps

I've spent a fair bit of time over the past year having agents build webapps for me. Typically, they're built out of a Node.js backend and some client-side JS framework. One debugging pattern that comes up again and again is that there's a bug in the client-side JavaScript. Often a current-gen model running in a coding agent is able to solve a client-side bug just by inspecting the code. When it works, it's amazing. But "often" is not the same thing as "every time". If the agent can't solve the problem by inspection, it will often fire up a browser MCP and attempt to debug the problem interactively. What it's really trying to do is get a peek at the browser's console log. This works, but it burns a ton of tokens and takes forever. There's a better way. One of the first things I ask my agents to build when we're doing web dev is a frontend-to-backend bridge for console logs. There are two parts to this: a tiny development-mode JavaScript shim in the frontend code that sends almost any console message to a backend API endpoint (you want to be careful to make sure that the shim doesn't try to send its own "I can't talk to the backend endpoint" errors to the backend), and a backend endpoint that receives frontend log messages from clients and logs them to the server log. With those two things, the agent (or whoever you've got coding for you) can see frontend and backend log messages in one place, just by tailing a log. It's amazingly useful, really straightforward, and quick to build. It turns out that it's helpful for any humans who are working on your software, too.

Stone Tools 1 week ago

Bank Street Writer on the Apple II

Stop me if you've heard this one. In 1978, a young man wandered into a Tandy Radio Shack and found himself transfixed by the TRS-80 systems on display. He bought one just to play around with, and it wound up transforming his life from there on. As it went with so many, so too did it go with lawyer Doug Carlston. His brother, Gary, initially unimpressed, warmed up to the machine during a long Maine winter. The two thus smitten mused, "Can we make money off of this?" Together they formed a developer-sales relationship, with Doug developing Galactic Saga and third brother Don developing Tank Command. Gary's sales acumen brought early success and Broderbund was officially underway. Meanwhile in New York, Richard Ruopp, president of Bank Street College of Education, a kind of research center for experimental and progressive education, was thinking about how emerging technology fit into the college's mission. Writing was an important part of their curriculum, but according to Ruopp, "We tested the available word processors and found we couldn’t use any of them." So, experts from Bank Street College worked closely with consultant Franklin Smith and software development firm Intentional Educations Inc. to build a better word processor for kids. The fruit of that labor, Bank Street Writer, was published by Scholastic exclusively to schools at first, with Broderbund taking up the home distribution market a little later. Bank Street Writer would dominate home software sales charts for years and its name would live on as one of the sacred texts, like Lemonade Stand or The Oregon Trail. Let's see what lessons there are to learn from it yet.

1916: Founded by Lucy Sprague Mitchell, Wesley Mitchell, and Harriet Johnson as the “Bureau of Educational Experiments” (BEE) with the goal of understanding in what environment children best learn and develop, and to help adults learn to cultivate that environment.
1930: BEE moves to 69 Bank Street. (Will move to 112th Street in 1971, for space reasons.)
1937: The Writer’s Lab, which connects writers and students, is formed.
1950: BEE is renamed to Bank Street College of Education.
1973: Minnesota Educational Computing Consortium (MECC) is founded. This group would later go on to produce The Oregon Trail.
1983: Bank Street Writer, developed by Intentional Educations Inc., published by Broderbund Software, and “thoroughly tested by the academics at Bank Street College of Education.” Price: $70.
1985: Writer is a success! Time to capitalize! Bank Street Speller $50, Bank Street Filer $50, Bank Street Mailer $50, Bank Street Music Writer $50, Bank Street Prewriter (published by Scholastic) $60.
1986: Bank Street Writer Plus $100. Bank Street Writer III (published by Scholastic) $90. It’s basically Plus with classroom-oriented additions, including a 20-column mode and additional teaching aids.
1987: Bank Street Storybook, $40.
1992: Bank Street Writer for the Macintosh (published by Scholastic) $130. Adds limited page layout options, Hypercard-style hypertext, clip art, punctuation checker, image import with text wrap, full color, sound support, “Classroom Publishing” of fliers and pamphlets, and electronic mail.

With word processors, I want to give them a chance to present their best possible experience. I do put a little time into trying the baseline experience many would have had with the software during the height of its popularity. "Does the software still have utility today?" can only be fairly answered by giving the software a fighting chance. To that end, I've gifted myself a top-of-the-line (virtual) Apple //e running the last update to Writer, the Plus edition. You probably already know how to use Bank Street Writer Plus. You don't know you know, but you do know because you have familiarity with GUI menus and basic word processing skills.
All you're lacking is an understanding of the vagaries of data storage and retrieval as necessitated by the hardware of the time, but once armed with that knowledge you could start using this program without touching the manual again. It really is as easy as the makers claim. The simplicity is driven by a very subtle, forward-thinking user interface. Of primary interest is the upper prompt area. The top 3 lines of the screen serve as an ever-present, contextual "here's the situation" helper. What's going on? What am I looking at? What options are available? How do I navigate this screen? How do I use this tool? Whatever you're doing, whatever menu option you've chosen, the prompt area is already displaying information about which actions are available right now in the current context. As the manual states, "When in doubt, look for instructions in the prompt area." The manual speaks truth. For some, the constant on-screen prompting could be a touch overbearing, but I personally don't think it's so terrible to know that the program is paying attention to my actions and wants me to succeed. The assistance isn't front-loaded, like so many mobile apps, nor does it interrupt, like Clippy. I simply can't fault the good intentions, nor can I really think of anything in modern software that takes this approach to user-friendliness. The remainder of the screen is devoted to your writing and works like any other word processor you've used. Just type, move the cursor with the arrow keys, and type some more. I think most writers will find it behaves "as expected." There are no Electric Pencil-style over-type surprises, nor VisiCalc-style arrow key manipulations. What seems to have happened is that in making a word processor that is easy for children to use, they accidentally made a word processor that is just plain easy. The basic functionality is drop-dead simple to pick up by just poking around, but there's quite a bit more to learn here.
To do so, we have a few options for getting to know Bank Street Writer in more detail. There are two manuals by virtue of the program's educational roots. Bank Street Writer was published by both Broderbund (for the home market) and Scholastic (for schools). Each tailored their own manual to their respective demographic. Broderbund's manual is cleanly designed, easy to understand, and gets right to the point. It is not as "child focused" as reviews at the time might have you believe. Scholastic's is more of a curriculum to teach word processing, part of the 80s push for "computers in the classroom." It's packed with student activities, pages that can be copied and distributed, and (tellingly) information for the teacher explaining "What is a word processor?" Our other option for learning is on side 2 of the main program disk. Quite apart from the program proper, the disk contains an interactive tutorial. I love this commitment to the user's success, though I breezed through it in just a few minutes, being a cultured word processing pro of the 21st century. I am quite familiar with "menus" thank you very much. As I mentioned at the top, the screen is split into two areas: prompt and writing. The prompt area is fixed, and can neither be hidden nor turned off. This means there's no "full screen" option, for example. The writing area runs in high-res graphics mode so as to bless us with the gift of an 80-character wide display. Being a graphics display also means the developer could have put anything on screen, including a ruler which would have been a nice formatting helper. Alas. Bank Street offers limited preference settings; there's not much we can do to customize the program's display or functionality. The upshot is that as I gain confidence with the program, the program doesn't offer to match my ability. There is one notable trick, which I'll discuss later, but overall there is a missed opportunity here for adapting to a user's increasing skill. 
Kids do grow up, after all. As with Electric Pencil, I'm writing this entirely in Bank Street Writer. Unlike the keyboard/software troubles there, here in 128K Apple //e world I have Markdown luxuries. The emulator's amber mode is soothing to the eyes and soul. Mouse control is turned on and works perfectly, though it's much easier and faster to navigate by keyboard, as God intended. This is an enjoyable writing experience. Which is not to say the program is without quirks. Perhaps the most unfortunate one is how little writing space 128K RAM buys for a document. At this point in the write-up I'm at about 1,500 words and BSW's memory check function reports I'm already at 40% of capacity. So the largest document one could keep resident in memory at one time would run about 4,000 words max? Put bluntly, that ain't a lot. Splitting documents into multiple files is pretty much forced upon anyone wanting to write anything of length. Given floppy disk fragility, especially with children handling them, perhaps that's not such a bad idea. However, from an editing point of view, it is frustrating to recall which document I need to load to review any given piece of text. Remember also, there's no copy/paste as we understand it today. Moving a block of text between documents is tricky, but possible. BSW can save a selected portion of text to its own file, which can then be "retrieved" (inserted) at the current cursor position in another file. In this way the diskette functions as a memory buffer for cross-document "copy/paste." Hey, at least there is some option available. Flipping through old magazines of the time, it's interesting just how often Bank Street Writer comes up as the comparative reference point for home word processors over the years. If a new program had even the slightest whiff of trying to be "easy to use" it was invariably compared to Bank Street Writer.
Likewise, there were any number of writers and readers of those magazines talking about how they continued to use Bank Street Writer, even though so-called "better" options existed. I don't want to oversell its adoption by adults, but it most definitely was not a children-only word processor, by any stretch. I think the release of Plus embraced a more mature audience. In schools it reigned supreme for years, including the Scholastic-branded version of Plus called Bank Street Writer III. There were add-on "packs" of teacher materials for use with it. There was also Bank Street Prewriter, a tool for helping to organize themes and thoughts before committing to the act of writing, including an outliner, as popularized by ThinkTank. (always interesting when influences ripple through the industry like this) Of course, the Scholastic approach was built around the idea of teachers having access to computers in the classroom. And THAT was built on the idea of teachers feeling comfortable enough with computers to seamlessly merge them into a lesson plan. Sure, the kids needed something simple to learn, but let's be honest, so did the adults. There was a time when attaching a computer to anything meant a fundamental transformation of that thing was assured and imminent. For example, the "office of the future" (as discussed in the Superbase post) had a counterpart in the "classroom of tomorrow." In 1983, Popular Computing said, "Schools are in the grip of a computer mania." Steve Jobs took advantage of this, skating to where the puck would be, by donating Apple 2s to California schools. In October 1983, Creative Computing did a little math on that plan. $20M in retail donations brought $4M in tax credits against $5M in gross donations. Apple could donate a computer to every elementary, middle, and high school in California for an outlay of only $1M.
Jobs lobbied Congress hard to pass a national version of the same "Kids Can't Wait" bill, which would have extended federal tax credits for such donations. That never made it to law, for various political reasons. But the California initiative certainly helped position Apple as the go-to system for computers in education. By 1985, Apple would dominate fully half of the education market. That would continue into the Macintosh era, though Apple's dominance diminished slowly as cheaper, "good enough" alternatives entered the market. Today, Apple is #3 in the education market, behind Windows and Chromebooks . It is a fair question to ask, "How useful could a single donated computer be to a school?" Once it's in place, then what? Does it have function? Does anyone have a plan for it? Come to think of it, does anyone on staff even know how to use it? When Apple put a computer into (almost) every school in California, they did require training. Well, let's say lip-service was paid to the idea of the aspiration of training. One teacher from each school had to receive one day's worth of training to attain a certificate which allowed the school to receive the computer. That teacher was then tasked with training their coworkers. Wait, did I say "one day?" Sorry, I meant about one HOUR of training. It's not too hard to see where Larry Cuban was coming from when he published Oversold & Underused: Computers in the Classroom in 2001. Even of schools with more than a single system, he notes, "Why, then, does a school's high access (to computers) yield limited use? Nationally and in our case studies, teachers... mentioned that training in relevant software and applications was seldom offered... (Teachers) felt that the generic training available was often irrelevant to their specific and immediate needs." From my perspective, and I'm no historian, it seems to me there were four ways computers were introduced into the school setting. 
The three most obvious were: one or more computers at the classroom level; a shared "computer lab" at the school level; or no computers at all. I personally attended schools of all three types. What I can say the schools had in common was how little attention, if any, was given to the computer and how little my teachers understood them. An impromptu poll of friends aligned with my own experience. Schools didn't integrate computers into classwork, except when classwork was explicitly about computers. I sincerely doubt my time playing Trillium's Shadowkeep during recess was anything close to Apple's vision of a "classroom of tomorrow." The fourth approach to bringing computers into the classroom was significantly more ambitious. Apple tried an experiment in which five public school sites were chosen for a long-term research project. In 1986, the sites were given computers for every child in class and at home. They reasoned that for computers to truly make an impact on children, the computer couldn't just be a fun toy they occasionally interacted with. Rather, it required full integration into their lives. Now, it is darkly funny to me that having achieved this integration today through smartphones, adults work hard to remove computers from school. It is also interesting to me that Apple kind of led the way in making that happen, although in fairness they don't seem to consider the iPhone to be a computer. America wasn't alone in trying to give its children a technological leg up. In England, the BBC spearheaded a major drive to get computers into classrooms via a countrywide computer literacy program. Even in the States, I remember watching episodes of BBC's The Computer Programme on PBS. Regardless of Apple's or the BBC's efforts, the long-term data on the effectiveness of computers in the classroom has been mixed, at best, or even an outright failure.
Apple's own assessment of their "Apple Classrooms of Tomorrow" (ACOT) program after a couple of years concluded, "Results showed that ACOT students maintained their performance levels on standard measures of educational achievement in basic skills, and they sustained positive attitudes as judged by measures addressing the traditional activities of schooling." Which is a "we continue to maintain the dream of selling more computers to schools" way of saying, "Nothing changed." In 2001, the BBC reported, "England's schools are beginning to use computers more in teaching - but teachers are making "slow progress" in learning about them." Then in 2015 the results were "disappointing": "Even where computers are used in the classroom, their impact on student performance is mixed at best." Informatique pour tous, France 1985: Pedagogy, Industry and Politics by Clémence Cardon-Quint noted the French attempt at computers in the classroom as being "an operation that can be considered both as a milestone and a failure." Computers in the Classrooms of an Authoritarian Country: The Case of Soviet Latvia (1980s–1991) by Iveta Kestere and Katrina Elizabete Purina-Bieza shows the introduction of computers to have drawn stark power and social divides, while pushing prescribed gender roles of computers being "for boys." Teachers Translating and Circumventing the Computer in Lower and Upper Secondary Swedish Schools in the 1970s and 1980s by Rosalía Guerrero Cantarell noted, "the role of teachers as agents of change was crucial. But teachers also acted as opponents, hindering the diffusion of computer use in schools." Now, I should be clear that things were different in the higher education market, as with PLATO in the universities. But in the primary and secondary markets, Bank Street Writer's primary demographic, nobody really knew what to do with the machines once they had them.
The most straightforwardly damning assessment is from Oversold & Underused where Cuban says in the chapter "Are Computers in Schools Worth the Investment?", "Although promoters of new technologies often spout the rhetoric of fundamental change, few have pursued deep and comprehensive changes in the existing system of schooling." Throughout the book he notes how most teachers struggle to integrate computers into their lessons and teaching methodologies. The lack of guidance in developing new ways of teaching means computers will continue to be relegated to occasional auxiliary tools trotted out from time to time, not integral to the teaching process. "Should my conclusions and predictions be accurate, both champions and skeptics will be disappointed. They may conclude, as I have, that the investment of billions of dollars over the last decade has yet to produce worthy outcomes," he concludes. Thanks to my sweet four-drive virtual machine, I can summon both the dictionary and thesaurus immediately. Put the cursor at the start of a word and hit the corresponding key to get an instant spot check of spelling or synonyms. Without the reality of actual floppy disk access speed, word searches are fast. A spell check can be performed on the full document, which does take noticeable time to finish. One thing I really love is how cancelling an action or moving forward on the next step of a process is responsive and immediate. If you're growing bored of an action taking too long, just cancel it; it will stop immediately. The program feels robust and unbreakable in that way. There is a word lookup, which accepts wildcards, for when you kinda-sorta know how to spell a word but need help. Attached to this function is an anagram checker which benefits greatly from a virtual CPU boost. But it can only do its trick on single words, not phrases. Earlier I mentioned how little the program offers a user who has gained confidence and skill.
That's not entirely accurate, thanks to its most surprising super power: macros. Yes, you read that right. This word processor designed for children includes macros. They are stored at the application level, not the document level, so do keep that in mind. Twenty macros can be defined, each consisting of up to 32 keystrokes. Running keystrokes in a macro is functionally identical to typing by hand. Because the program can be driven 100% by keyboard alone, macros can trigger menu selections and step through tedious parts of those commands. For example, to save our document periodically we need to step through the same File-menu and confirmation keystrokes every time. That looks like a job for a macro to me. (Video: Defining a macro to save, with overwrite, the current file. After it is defined, I execute it, which happens very quickly in the emulator. Watch carefully.) If you can perform an action through a series of discrete keyboard commands, you can make a macro from it. This is freeing, but also works to highlight what you cannot do with the program. For example, there is no concept of an active selection, so a word is the smallest unit you can directly manipulate due to keyboard control limitations. It's not nothin' but it's not quite enough. I started setting up markdown macros, so I could wrap the current word in Markdown markers for italic and bold. Doing the actions in the writing area and noting the minimal steps necessary to achieve the desired outcome translated into perfect macros. I was even able to make a kind of rudimentary "undo" for when I wrap something in italic but intended to use bold. This reminded me that I haven't touched macro functionality in modern apps since my AppleScript days. Lemme check something real quick. I've popped open LibreOffice and feel immediately put off by its Macros function. It looks super powerful; a full dedicated code editor with watched variables for authoring in its scripting language. Or is it languages? Is it Macros or ScriptForge? What are "Gimmicks?" Just what is going on?
Google Docs is about the same, using JavaScript for its "Apps Script" functionality. Here's a Stack Overflow post where someone wants to select text and set it to "blue and bold" with a keystroke and is presented with 32 lines of JavaScript. Many programs seem to have taken a "make the simple things difficult, and the hard things possible" approach to macros. Microsoft Word reportedly has a "record" function for creating macros, which will watch what you do and let you play back those actions in sequence. (a la Adobe Photoshop's "actions") This sounds like a nice evolution of the BSW method. I say "reportedly" because it is not available in the online version and so I couldn't try it for myself without purchasing Microsoft 365. I certainly don't doubt the sky's the limit with these modern macro systems. I'm sure amazing utilities can be created, with custom dialog boxes, internet data retrieval, and more. The flip-side is that a lot of power has been stripped from the writer and handed over to the programmer, which I think is unfortunate. Bank Street Writer allows an author to use the same keyboard commands for creating a macro as for writing a document. There is a forgotten lesson in that. Yes, BSW's macros are limited compared to modern tools, but they are immediately accessible and intuitive. They leverage skills the user is already known to possess. The learning curve is a straight, flat line. Like any good word processor, user-definable tab stops are possible. Bringing up the editor for tabs displays a ruler showing tab stops and their type (normal vs. decimal-aligned). Using the same tools as for writing, the ruler is similarly editable. Just type a tab marker of either kind anywhere along the ruler. So, the lack of a ruler I noted at the beginning is now doubly frustrating, because it exists! Perhaps it was determined to be too much visual clutter for younger users?
Again, this is where the Options screen could have allowed advanced users to toggle on features as they grow in comfort and ambition. From what I can tell in the product catalogs, the only major revision after this was for the Macintosh, which added a whole host of publishing features. If I think about my experience with BSW these past two weeks, and think about what my wish-list for a hypothetical update might be, "desktop publishing" has never crossed my mind. Having said all of that, I've really enjoyed using it to write this post. It has been solid, snappy, and utterly crash-free. To be completely frank, when I switched over into LibreOffice, a predominantly native app for Windows, it felt laggy and sluggish. Bank Street Writer feels smooth and purpose-built, even in an emulator. Features are discoverable and the UI always makes it clear what action can be taken next. I never feel lost nor do I worry that an inadvertent action will have unknowable consequences. The impression of it being an assistant to my writing process is strong, probably more so than many modern word processors. This is cleanly illustrated by the prompt area which feels like a "good idea we forgot." (I also noted this in my ThinkTank examination) I cannot lavish such praise upon the original Bank Street Writer, only on this Plus revision. The original is 40-columns only, spell-checking is a completely separate program, no thesaurus, no macros, a kind of bizarre modal switch between writing/editing/transfer modes, no arrow key support, and other quirks of its time and target system (the original Apple 2). Plus is an incredibly smart update to that original, increasing its utility 10-fold, without sacrificing ease of use. In fact, it's actually easier to use, in my opinion, than the original and comes just shy of being something I could use on a regular basis. Bank Street Writer is very good! But it's not quite great.
Ways to improve the experience, notable deficiencies, workarounds, and notes about incorporating the software into modern workflows (if possible).

The setup used for this review:
- AppleWin 32bit 1.31.0.0 on Windows 11
- Emulating an Enhanced Apple //e
- Authentic machine speed (enhanced disk access speed)
- Monochrome (amber) for clean 80-column display
- Disk II controller in slot 5 (enables four floppies, total)
- Mouse interface in slot 4
- Bank Street Writer Plus

The three most obvious ways computers reached schools:
- At the classroom level there are one or more computers.
- At the school level there is a "computer lab" with one or more systems.
- There were no computers.

The keystrokes needed to save a document:
- Hit (open the File menu)
- Hit (select Save File)
- Hit three times (stepping through default confirmation dialogs)

I find that running at 300% CPU speed in AppleWin works great. No repeating key issues and the program is well-behaved. Spell check works quickly enough to not be annoying and I honestly enjoyed watching it work its way through the document. Sometimes there's something to be said about slowing the computer down to swift human-speed, to form a stronger sense of connection between your own work and the computer's work. I did mention that I used a 4-disk setup, but in truth I never really touched the thesaurus. A 3-disk setup is probably sufficient. The application never crashed; the emulator was rock-solid. CiderPress2 works perfectly for opening the files on an Apple ][ disk image. The saved files carry a file type that CiderPress2 tries to open as disassembly, not text. Switch "Conversion" to "Plain Text" and you'll be fine. This is a program that would benefit greatly from one more revision. It's very close to being enough for a "minimalist" crowd. There are four key pieces missing for completeness:
- Much longer document handling
- Smarter, expanded dictionary, with definitions
- Customizable UI, display/hide: prompts, ruler, word count, etc.
- Extra formatting options, like line spacing, visual centering, and so on.
For a modern writer using hyperlinks, this can trip up the spell-checker quite ferociously. It doesn't understand, nor can it be taught, pattern-matching against URLs to skip them.

Kix Panganiban 1 week ago

Utteranc.es is really neat

It's hard to find privacy-respecting (read: not Disqus) commenting systems out there. A couple of good ones recommended by Bear are Cusdis and Komments -- but I'm not a huge fan of either of them: Cusdis styling is very limited. You can only set it to dark or light mode, with no control over the specific HTML elements and styling. It's fine, but I prefer something that looks a little neater. Komments requires manually creating a new page for every new post that you make. The idea is that wherever you want comments, you create a page in Komments and embed that page into your webpage. So you can have 1 Komments page per blog post, or even 1 Komments page for your entire blog. Then I realized that there's a great alternative that I've used in the past: utteranc.es. Its execution is elegant: you embed a tiny JS file on your blog posts, and it will map every page to GitHub Issues in a GitHub repo. In my case, I created this repo specifically for that purpose. Neat! I'm including utteranc.es in all my blog posts moving forward. You can check out how it looks below:
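The embed boils down to a single script tag. Here's a hedged sketch that builds it dynamically (the helper name and the repo value are my own placeholders; the attribute names follow the utteranc.es setup instructions):

```javascript
// Build the utteranc.es embed script element (illustrative sketch, not official code).
// `doc` is passed in so the helper can be exercised without a real DOM.
function createUtterancesScript(doc, repo) {
  const s = doc.createElement("script");
  s.src = "https://utteranc.es/client.js";
  s.async = true;
  s.setAttribute("repo", repo);             // public GitHub repo that stores the issues
  s.setAttribute("issue-term", "pathname"); // map each page to an issue by its path
  s.setAttribute("theme", "github-light");
  s.setAttribute("crossorigin", "anonymous");
  return s;
}

// On a real page: commentsDiv.appendChild(createUtterancesScript(document, "user/blog-comments"));
```

With `issue-term` set to `pathname`, the client searches the repo for an issue matching the page's path and renders its comments inline, which is the page-to-issue mapping described above.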

Andy Bell 1 week ago

It’s been a very hard year

Unlike a lot of places in tech, my company, Set Studio / Piccalilli, has no outside funding. Bootstrapped is what the LinkedIn people say, I think. It’s been a hard year this year. A very hard year. I think a naive person would blame it all on the seemingly industry-wide attitude of “AI can just do this for us”. While that certainly hasn’t helped — as I see it — it’s been a hard year because of a combination of limping economies, tariffs, even more political instability and a severe cost of living crisis. It’s been a very similar year to 2020, in my opinion. Why am I writing this? All of the above has had a really negative effect on us this year. Landing projects for Set Studio has been extremely difficult, especially as we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that. Our reputation is everything, so being associated with that technology as it increasingly shows us what it really is would be a terrible move for the long term. I wouldn’t personally be able to sleep knowing I’ve contributed to all of that, too. What we do really well is produce websites and design systems that actually work for and with people. We also share our knowledge and experience via tonnes of free content on Piccalilli, funded by premium courses to keep the lights on. We don’t pepper our content with annoying adverts for companies you have no interest in. I’ve spoken about my dream for us to run Piccalilli full time and heck, that may still happen. For that to happen though, we really needed this Black Friday period to do as well as, if not better than, it did last year. So far, that’s not happening unfortunately, but there’s still time. I get it, money is so tight this year and companies are seemingly not investing in staff with training budgets quite like they did.
We actually tried to stem that a bit by trialing a community funding model earlier in the year that I outlined in I’m getting fed up of making the rich, richer and we even started publishing some stuff . It went down incredibly well, but when push came to shove, we fell way short in terms of funding support. Like I say, we’re not swimming in investor money, so without the support on Open Collective , as much as it hurt, we had to pull the plug. It’s a real shame — that would have been incredible — but again, I get it , money is tight . This isn’t a woe is me post; that’s not how I roll. This is a post to give some context for what I’m going to ask next and how I’m trying to navigate the tough times. I’m asking folks to help us so we can try to help everyone, whether that’s with web projects that actually work for people or continuing to produce extremely high quality education material. Here are some ways you can do it. You’ll see messaging like “this is the most important time of year for us” and it’s extremely true. To break the fourth wall slightly, people buying courses at full price is a lot rarer than you might think. So often, discount events are what keeps the lights on. We’ve launched two courses this year — JavaScript for Everyone and Mindful Design — that sit alongside my course, Complete CSS , which we launched last year. I know you’ve probably been burned by shit courses in the past, but these three courses are far from that. I promise. I can’t stress enough how much Mat (JavaScript for Everyone) and Scott (Mindful Design) have put into these courses this year. These two are elite level individuals with incredible reputations and they’ve shared a seemingly impossible amount of extremely high quality knowledge in their courses. I would definitely recommend giving them your time and support because they really will transform you for the better.
For bosses reading this, all three courses will pay themselves back ten-fold — especially when you take advantage of bulk discounts — trust me. So many of you have purchased courses already and I’m forever thankful for that. I can’t stand the term “social proof” but it works. People might be on the fence about grabbing a course, and seeing one of their peers talk about how good it was can be the difference. You might think it’s not worth posting about the courses on social media but people do see it , especially on platforms like Bluesky with their custom feeds. We see it too! Testimonials are always welcome because we can pop those on the course marketing pages, just like on mine . In terms of sharing the studio, if you think we’re cool, post about it! It’s all about eyes and nice words. We’ll do the rest. We’re really good at what we do ! I know every studio/agency says this, but we’re different. We’re actually different. We’re not going to charge you through the nose for substandard work — only deploying a fraction of our team, like a lot of agencies do. I set this studio up to be the antithesis of the way these — and I’ll say it out loud — charlatans operate. Our whole focus is becoming your partners so you can do the — y’know — running of your business/organisation and we take the load off your shoulders. We’re hyper-efficient and we fully own projects, because we get that they’re way above your normal duties. In fact, the most efficient way to get the most out of a studio like ours is to do exactly that. I know “numbers goes up” is really important and yes, numbers definitely go up when we work with you. We do that without exploiting your users and customers too. There are no deceptive patterns coming from us. We instead put everything into branding, messaging, content architecture and making everything extremely fast and accessible. That’s what makes the numbers go up for you. We’re incredibly fairly priced too.
We’re not in the business of charging ridiculous fees for our work. We’re only a small team, so our overheads are nothing compared to a lot of agencies. We carry your budgets a long way for you and genuinely give you more bang for your buck with an equitable pricing model. We’ve got availability starting from the new year because starting projects in December is never the ideal way to do things. Getting those projects planned and ready to go is a good idea in December though, so get in touch ! I’m also slowly getting back into CSS and front-end consulting. I’ve helped some of the largest and smallest organisations, such as Harley-Davidson, the NHS and Google, write better code and work better together. Again, starting in the new year I’ll have availability for consulting and engineering support. It might just be a touch more palatable than hiring the whole studio for you. Again, get in touch . I’m always transparent — maybe too transparent at times — but it’s really important for me to be honest. Man, we need more honesty. It’s taken a lot of pride-swallowing to write this but I think it’s more important to be honest than to be unnecessarily proud. I know this will be read by someone else who’s finding the year hard, so if anything, I’m really glad they’ll feel seen at least. Getting good leads is harder than ever, so I’d really appreciate people sharing this with their network . You’ll never regret recommending Piccalilli courses or Set Studio . In fact, you’ll look really good at what you do when we absolutely smash it out of the park. Thanks for reading and if you’re also struggling, I’m sending as much strength your way as I can.

Simon Willison 2 weeks ago

How I automate my Substack newsletter with content from my blog

I sent out my weekly-ish Substack newsletter this morning and took the opportunity to record a YouTube video demonstrating my process and describing the different components that make it work. There's a lot of digital duct tape involved, taking the content from Django+Heroku+PostgreSQL to GitHub Actions to SQLite+Datasette+Fly.io to JavaScript+Observable and finally to Substack. The core process is the same as I described back in 2023. I have an Observable notebook called blog-to-newsletter which fetches content from my blog's database, filters out anything that has been in the newsletter before, formats what's left as HTML and offers a big "Copy rich text newsletter to clipboard" button. I click that button, paste the result into the Substack editor, tweak a few things and hit send. The whole process usually takes just a few minutes, and I only make very minor edits. That's the whole process! The most important cell in the Observable notebook uses a JavaScript function to pull data from my blog's Datasette instance, using a very complex SQL query that is composed elsewhere in the notebook. Here's a link to see and execute that query directly in Datasette. It's 143 lines of convoluted SQL that assembles most of the HTML for the newsletter using SQLite string concatenation! My blog's URLs include a three-letter month abbreviation - the SQL constructs it from the month number using a substring operation. This is a terrible way to assemble HTML, but I've stuck with it because it amuses me. The rest of the Observable notebook takes that data, filters out anything that links to content mentioned in the previous newsletters and composes it into a block of HTML that can be copied using that big button. There's also a recipe it uses to turn HTML into rich text content on a clipboard suitable for Substack.
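That substring trick is easy to sketch outside SQL as well. Here is the same idea in TypeScript (my illustration of the technique, not code from the notebook; the SQL version would do the equivalent with SQLite's substr() over a packed string of month names):

```typescript
// Pack the twelve three-letter abbreviations into one string, then slice
// by offset - the same idea as SQL like:
//   substr('JanFebMarAprMayJunJulAugSepOctNovDec', (month - 1) * 3 + 1, 3)
const MONTHS = "JanFebMarAprMayJunJulAugSepOctNovDec";

function monthAbbreviation(month: number): string {
  if (month < 1 || month > 12) throw new RangeError("month must be 1-12");
  const offset = (month - 1) * 3;
  return MONTHS.slice(offset, offset + 3);
}

console.log(monthAbbreviation(1)); // "Jan"
console.log(monthAbbreviation(10)); // "Oct"
```

The packed-string approach avoids both a lookup table join in SQL and a twelve-branch CASE expression, which is presumably why it appealed.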
I can't remember how I figured this out but it's very effective. My blog itself is a Django application hosted on Heroku, with data stored in Heroku PostgreSQL. Here's the source code for that Django application . I use the Django admin as my CMS. Datasette provides a JSON API over a SQLite database... which means something needs to convert that PostgreSQL database into a SQLite database that Datasette can use. My system for doing that lives in the simonw/simonwillisonblog-backup GitHub repository. It uses GitHub Actions on a schedule that executes every two hours, fetching the latest data from PostgreSQL and converting that to SQLite. My db-to-sqlite tool is responsible for that conversion. The command uses Heroku credentials in an environment variable to fetch the database connection URL for my blog's PostgreSQL database (and fixes a small difference in the URL scheme). db-to-sqlite can then export that data and write it to a SQLite database file. The options specify the tables that should be included in the export. The repository does more than just that conversion: it also exports the resulting data to JSON files that live in the repository, which gives me a commit history of changes I make to my content. This is a cheap way to get a revision history of my blog content without having to mess around with detailed history tracking inside the Django application itself. At the end of my GitHub Actions workflow is code that publishes the resulting database to Datasette running on Fly.io using the datasette publish fly plugin. As you can see, there are a lot of moving parts! Surprisingly it all mostly just works - I rarely have to intervene in the process, and the cost of those different components is pleasantly low.
I make very minor edits:

- I set the title and the subheading for the newsletter. This is often a direct copy of the title of the featured blog post.
- Substack turns YouTube URLs into embeds, which often isn't what I want - especially if I have a YouTube URL inside a code example.
- Blocks of preformatted text often have an extra blank line at the end, which I remove.
- Occasionally I'll make a content edit - removing a piece of content that doesn't fit the newsletter, or fixing a time reference like "yesterday" that doesn't make sense any more.
- I pick the featured image for the newsletter and add some tags.
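The dedupe step described above, dropping anything already linked in previous newsletters, can be sketched like this. It is a hypothetical simplification of the notebook's logic, with made-up item shapes and URLs:

```typescript
// Hypothetical sketch: keep only items whose URL has not already appeared
// in the concatenated HTML of previous newsletters.
interface Item {
  url: string;
  html: string;
}

function filterUnseen(items: Item[], previousNewslettersHtml: string): Item[] {
  return items.filter((item) => !previousNewslettersHtml.includes(item.url));
}

const items: Item[] = [
  { url: "https://example.com/2025/Nov/1/post-a/", html: "<h3>Post A</h3>" },
  { url: "https://example.com/2025/Nov/8/post-b/", html: "<h3>Post B</h3>" },
];
const previous = '<a href="https://example.com/2025/Nov/1/post-a/">Post A</a>';

console.log(filterUnseen(items, previous).map((i) => i.url));
```

A plain substring check is enough here because blog URLs are unique and stable, so there is no need to parse the previous newsletters' HTML.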

JSLegendDev 3 weeks ago

Making a Small RPG

I’ve always wanted to try my hand at making an RPG but always assumed it would take too much time. However, I didn’t want to give up before trying, so I started to think of ways I could still make something compelling in 1-2 months. To help me come up with something, I decided to look into older RPGs, as I had a hunch they could teach me a lot about scoping: back in the 80s, games were small because of technical limitations. A game that particularly caught my attention was the first Dragon Quest. This game was very important because it popularized the RPG genre in Japan by simplifying the formula, therefore making it more accessible. It can be considered the father of the JRPG sub-genre. What caught my attention was the simplicity of the game. There were no party members, the battle system was turn-based and simple, and you were free to just explore around. I was particularly surprised by how the game could give a sense of exploration while the map was technically very small. This was achieved by making the player move on an overworld map with a different scale proportion compared to when navigating towns and points of interest. In the overworld section, the player appeared bigger while the geography was smaller, allowing players to cover large amounts of territory relatively quickly. The advantage of this was that you could switch between biomes quickly without it feeling jarring. You still had the impression of traversing a large world despite it being small in reality. This idea of using an overworld map was common in older games but somehow died off as devs had fewer and fewer technical limitations and more budget to work with. Seeing its potential, I decided that I would include one in my project even if I didn’t have a clear vision at this point. Playing Dragon Quest 1 also reminded me of how annoying random battle encounters were. You would take a few steps and get assaulted by an enemy of some kind.
At the same time, this mechanic was needed, because grinding was necessary to be able to face stronger enemies in further zones of the map. My solution: what if, instead of getting assaulted, you were the one doing the assault? As you moved on the map, encounter opportunities signified by a star would appear. Only if you went there and overlapped with one would a battle start. This gave the player agency to determine if they needed to battle or not. This idea seemed so appealing that I knew I needed to include it in my project. While my vision of what I wanted to make started to become clearer, I also started to get a sense of what I didn’t want to make. The idea of including a traditional turn-based battle system was unappealing. That wasn’t because I hated this type of gameplay, but ever since I made a 6 hour tutorial on how to build one , I realized how complicated pulling one off is. Sure, you can get something basic quickly, but actually making it engaging and well balanced is another story. A story that would exceed 1-2 months to deal with. I needed to opt for something more real-time and action based if I wanted to complete this project in a reasonable time frame. Back in 2015, an RPG that would prove to be very influential released and “broke the internet”. It was impossible to avoid seeing mentions of Undertale online. It was absolutely everywhere. The game received praise for a lot of different aspects, but what held my attention was its combat system. It was the first game I was aware of that included a section of combat dedicated to avoiding projectiles (otherwise known as bullet hell) in a turn-based battle system. This made the combat more action oriented, which translated into something very engaging and fun. This type of gameplay left a strong impression in my mind and I thought that making something similar would be a better fit for my project as it was simpler to implement.
While learning about Dragon Quest 1, I couldn’t help but be reminded of The Legend of Zelda: Breath of the Wild, released in 2017. Similarly to Dragon Quest, a lot of freedom was granted to the player in how and when they tackled the game’s objectives. For example, in Breath of the Wild, you could go straight to the final boss after the tutorial section. I wanted to take this aspect of the game and incorporate it into my project. I felt it would be better to have one final boss, with every other enemy encounter being optional preparation you could engage with to get stronger. This felt achievable in a smaller scope compared to crafting a linear story the player would progress through. Another game that inspired me was Elden Ring, an open world action RPG similar to Breath of the Wild in its world structure but with the DNA of Dark Souls, a trilogy of games made previously by the same developers. What stuck with me regarding Elden Ring, for the purpose of my project, was the unique way it handled experience points. It was the first RPG I played that used them as a currency you could spend to level up the different attributes making up your character or to buy items. Taking inspiration from it, I decided that my project would feature individually upgradable stats and that experience points would act as a currency. The idea was that the player would gain an amount of the game’s currency after battle and use it to upgrade different attributes. Like in Elden Ring, if you died in combat you would lose all the currency you were holding. I needed a system like this for my project to count as an RPG, since by definition an RPG is stats-driven. A system like this would also allow the player to manage difficulty more easily, and it would act as the progression system of my game. When I started getting into game development, I quickly came across Pico-8. Pico-8, for those unaware, is a fantasy console with a set of limitations.
It’s not a console you buy physically but rather a software program that runs on your computer (or in a web browser) and mimics an older console that never existed. To put it simply, it’s like running an emulator for a console that could’ve existed but never actually did. Hence the fantasy aspect of it. Pico-8 includes everything you need to make games. It has a built-in code editor, sprite editor, map editor, sound editor, etc… It uses the approachable Lua programming language, which is similar to Python. Since Pico-8 is limited, it’s easier to actually finish making a game rather than being caught in scope creep. One game made in Pico-8 particularly caught my interest. In this game you play as a little character on a grid. Your goal is to fight just one boss. To attack this boss, you need to step on a glowing tile while avoiding taking damage from incoming obstacles and projectiles thrown at you. ( Epilepsy Warning regarding the game footage below due to the usage of flashing bright colors.) This game convinced me to ditch the turn-based aspect I envisioned for my project entirely. Rather than having bullet hell sections within a turn-based system like in Undertale, the whole battle would instead be bullet hell. I could make the player attack without needing turns by making attack zones spawn within the battlefield. The player would then need to collide with them for an attack to register. I was now convinced that I had something to stand on. It was now time to see if it would work in practice, but I needed to clearly formulate my vision first. The game I had in mind would take place across two main scenes. The first was the overworld, in which the player moved around and could engage in battle encounters, lore encounters, heal or upgrade their stats. The second, the battle scene, would be where battles took place.
The player would be represented by a cursor and expected to move around, dodging incoming attacks while seeking to collide with attack zones to deal damage to the enemy. The goal of the game was to defeat a single final boss named King Donovan, a tyrant ruling over the land of Hydralia where the game took place. At any point, the player could enter the castle to face the final boss immediately. However, most likely, the boss would be too strong. To prepare, the player would roam around the world engaging in various battle encounters. Depending on where the encounter was triggered, a different enemy would show up that fit the theme of the location. The enemy’s difficulty, and the experience reward if beaten, would vary drastically depending on the location. Finally, the player could level up and heal in a village. I was now ready to start programming the game and figuring out the details as I went along. For this purpose, I decided to write the game using the JavaScript programming language and the KAPLAY game library. I chose these tools because they were what I was most familiar with. For JavaScript, I knew the language before getting into game dev, as I previously worked as a software developer for a company whose product was a complex web application. While most of the code was in TypeScript, knowing JavaScript was pretty much necessary to work in TypeScript since the latter is a superset of JavaScript. As an aside, despite its flaws, JavaScript is an extremely empowering language to know as a solo dev. You can make games, websites, web apps, browser extensions, desktop apps, mobile apps, server side apps, etc… with this one language. It’s like the English of programming languages. Not perfect, but highly useful in today’s world. I’ll just caveat that using JavaScript makes sense for 2D games and light 3D games. For anything more advanced, you’d be better off using Unreal, Unity or Godot.
As for the KAPLAY game library, it allows me to make games quickly because it provides a lot of functionality out of the box. It’s also very easy to learn. While it’s relatively easy to package a JavaScript game as an app that can be put on Steam, what about consoles? Well, it’s not straightforward at all, but at the same time, I don’t really care about consoles unless my game is a smash hit on Steam. If my game does become very successful, then it would make sense business-wise to pay a porting company to remake the game for consoles, getting devkits, dealing with optimizations and all the complexity that comes with publishing a game on these platforms. Anyway, to start off the game’s development, I decided to implement the battle scene first with all of its related mechanics, as I needed to make sure the battle system I had in mind was fun to play in practice. To also save time later down the line, I figured that I would give the game a square aspect ratio. This would save time during asset creation, especially for the map, as I wanted the whole map to be visible at once since I wouldn’t use a scrolling camera for this game. After a while, I had a first “bare bones” version of the battle system. You could move around to avoid projectiles and attack the enemy by colliding with red attack zones. Initially, I wanted the player to have many stats they could upgrade: their health (HP), speed, attack power and FP, which stood for focus points. However, I had to axe the FP stat: I originally wanted to use it to introduce a cost to using items in battle, but I gave up on the idea of items entirely as they would require too much time to create and properly balance. I also had the idea of adding a stamina mechanic similar to the one you see in Elden Ring. Moving around would consume stamina that could only replenish when you stopped moving.
I initially thought that this would result in fun gameplay, as you could upgrade your stamina over time, but it ended up being very tedious and useless. Therefore, I also ended up removing it. Now that the battle system was mostly done, I decided to work on the world scene where the player could move around. I first implemented battle encounters that would spawn randomly on the screen as red squares. I then created the upgrade system, allowing the player to upgrade 3 stats: their health (HP), attack power and speed. In this version of the game, the player could restore their health near where they could upgrade their stats. While working on the world scene was the focus, I also made a tweak to the battle scene. Instead of displaying the current amount of health left as a fraction, I decided a health bar would be necessary: when engaged in a fast paced battle, the player does not have time to interpret fractions to determine the state of their health. A health bar conveys the info faster in this context. However, I quickly noticed an issue with how health was restored in my game. Since the world was constrained to a single screen, going back to the center to get healed after every fight was the optimal way to play. This made the player feel obligated to return to the center rather than freely roam around. To fix this issue, I made it so the player needed to pay to heal, using the same currency as for leveling up. Now you needed to carefully balance healing against saving your experience currency for an upgrade by continuing to explore and engage in battle. All of this while keeping in mind that you could lose all of your currency if defeated in battle. It’s important to note that you could also heal partially, which provided flexibility in how the player managed the currency resource. Now that I was satisfied with the “bare bones” state of the game, I needed to make nice looking graphics. To achieve this, I decided to go with a pixel art style.
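The pay-to-heal tradeoff might look something like this in code. The pricing formula is my own guess for illustration; the post doesn't specify one:

```typescript
// Hypothetical sketch: healing spends the same currency used for upgrades,
// and partial healing lets the player choose how much to spend.
const COST_PER_HP = 2; // assumed price per hit point, not from the game

function heal(
  hp: number,
  maxHp: number,
  currency: number,
  hpWanted: number
): { hp: number; currency: number } {
  const missing = maxHp - hp;
  const affordable = Math.floor(currency / COST_PER_HP);
  const healed = Math.min(hpWanted, missing, affordable);
  return { hp: hp + healed, currency: currency - healed * COST_PER_HP };
}

console.log(heal(5, 20, 100, 10)); // { hp: 15, currency: 80 }
console.log(heal(5, 20, 6, 10)); // can only afford 3 HP: { hp: 8, currency: 0 }
```

Clamping the healed amount to what the player can afford is what makes partial healing fall out naturally from the same function.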
I could spend a lot of time explaining how to make good pixel art but I already did so previously. I recommend checking my post on the topic. I started by putting a lot of effort into drawing the overworld map, as the player would spend a lot of time in it. It was at this stage that I decided to make villages the places where you would heal or level up. To make this clearer, I added icons on top of each village to make it obvious what each was for. Now that I was satisfied with how the map turned out, I started designing and implementing the player character. For each distinct zone of the map, I added a collider so that battle encounters could determine which enemy and what background to display during battle. It was at this point that I made encounters appear as flashing stars on the map. Since my work on the overworld was done, I now needed to produce a variety of battle backgrounds to really immerse the player in the world. I sat down and locked in. These were by far the most time-intensive art assets to make for this project, but I’m happy with the results. After finishing all the backgrounds, I implemented the logic to show them in battle according to the zone where the encounter occurred. The next assets to make were the enemies. This was another time-intensive task, but I’m happy with how they turned out. The character at the bottom left is King Donovan, the main antagonist of the game.

Further Developing The Battle Gameplay

While developing the game, I noticed that it took too much time to go from one end of the battle zone to the other. This made the gameplay tedious, so I decided to make the battle zone smaller. At this point, I also changed the player cursor to be diamond shaped and red rather than a white circle. I also decided to use the same flashing star sprite used for encounters on the map, but this time for attack zones. I also decided to change the font used in the game to something better.
At this point, the projectiles thrown towards the player didn’t move in a cohesive pattern the player could learn over time. It was also absolutely necessary to create a system in which the attack patterns of the enemy would be progressively shown to the player. This is why I stopped everything to work on the enemy’s attack patterns. I also, by the same token, started to add effects to make the battle more engaging, and sprites for the projectiles. While the game was coming along nicely, I started to experience performance issues. I go into more detail in a previous post if you’re interested. To add another layer of depth to my game, I decided that the reward you got from a specific enemy encounter would not only depend on which enemy you were fighting but also on how much damage you took. For example, if a basic enemy in the Hydralia field would give you a reward of 100 after battle, you would actually get less unless you took no damage during that battle. This was to encourage careful dodging of projectiles and to reward players who learned the enemy pattern thoroughly. This would also add replayability, as there was now a purpose to fighting the same enemy over and over again. The formula I used to determine the final reward granted can be described as follows: At this point, it wasn’t well communicated to the player how much of the base reward they were granted after battle. That’s why I added the “Excellence” indication. When beating an enemy without taking damage, instead of the usual “Foe Vanquished” message appearing on the screen, you would get a “Foe Vanquished With Excellence” message in bright yellow. In addition to battle encounters, I wanted the player to have lore/tips encounters. Using the same system, I would randomly spawn a flashing star of a blueish-white color. If the player overlapped with it, a dialogue box would appear telling them some lore/tips related to the location they were in.
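The exact formula isn't reproduced here, but its described shape (full base reward only for a no-damage "Excellence" clear, reduced otherwise) could plausibly look like this; the linear scaling is my assumption:

```typescript
// Hypothetical reward formula: scale the base reward by the fraction of
// health the player kept, granting the full amount only on a flawless win.
function battleReward(
  baseReward: number,
  damageTaken: number,
  maxHp: number
): { reward: number; excellence: boolean } {
  if (damageTaken === 0) return { reward: baseReward, excellence: true };
  const kept = Math.max(0, 1 - damageTaken / maxHp);
  return { reward: Math.round(baseReward * kept), excellence: false };
}

console.log(battleReward(100, 0, 20)); // { reward: 100, excellence: true }
console.log(battleReward(100, 5, 20)); // { reward: 75, excellence: false }
```

Any monotonically decreasing function of damage taken would serve the same design goal: the less you get hit, the closer you get to the full base reward.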
Sometimes, these encounters would result in a chest containing an exp currency reward. This was to give the player a reason to pursue these encounters. This is still a work in progress, as I haven’t decided what kind of lore to express through these. One thing I forgot to show earlier was how I revamped the menu to use the new font. That’s all I have to share for now. What do you think? I also think it’s a good time to ask for advice regarding the game’s title. Since the game takes place in a land named Hydralia, I thought about using the same name for the game. However, since your mission is to defeat a tyrant king named Donovan, maybe a title like Hydralia: Donovan’s Demise would be a better fit. If you have any ideas regarding naming, feel free to leave a comment! Anyway, if you want to keep up with the game’s development or are more generally interested in game development, I recommend subscribing to not miss out on future posts. In the meantime, you can read the following:

Evan Hahn 3 weeks ago

Experiment: making TypeScript immutable-by-default

I like programming languages where variables are immutable by default. For example, in Rust, `let` declares an immutable variable and `let mut` declares a mutable one. I’ve long wanted this in other languages, like TypeScript, which is mutable by default—the opposite of what I want! I wondered: is it possible to make TypeScript values immutable by default? My goal was to do this purely with TypeScript, without changing TypeScript itself. That meant no lint rules or other tools. I chose this because I wanted this solution to be as “pure” as possible…and it also sounded more fun. I spent an evening trying to do this. I failed but made progress! I made arrays and `Record`s immutable by default, but I couldn’t get it working for regular objects. If you figure out how to do this completely, please contact me—I must know! TypeScript has built-in type definitions for JavaScript APIs. If you’ve ever changed the `lib` or `target` options in your TSConfig, you’ve tweaked which of these definitions are included. For example, you might add the “ES2024” library if you’re targeting a newer runtime. My goal was to swap the built-in libraries with an immutable-by-default replacement. The first step was to stop using any of the built-in libraries, so I set the `noLib` flag in my TSConfig. Then I wrote a very simple script, and running the type-checker gave a bunch of errors. Progress! I had successfully obliterated any default TypeScript libraries, which I could tell because it couldn’t find core types. Time to write the replacement. This project was a prototype, so I started with a minimal solution that would type-check. I didn’t need it to be good! I created a replacement lib file defining all the built-in types that TypeScript needs, and when I ran the type-checker again, I got no errors. As you can see, this solution is impractical for production: none of these interfaces have any properties!
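A minimal sketch of that first step, reconstructed from the description (the post's actual snippets aren't reproduced here). Setting TypeScript's noLib option disables every built-in lib, after which the compiler insists that you declare a handful of core global types yourself:

```typescript
// tsconfig.json (sketch):
// {
//   "compilerOptions": { "noLib": true, "strict": true },
//   "files": ["lib.d.ts", "index.ts"]
// }

// lib.d.ts (sketch): with noLib set, nothing type-checks until these core
// global interfaces exist. Empty bodies are enough for a prototype.
interface Boolean {}
interface Function {}
interface CallableFunction {}
interface NewableFunction {}
interface IArguments {}
interface Number {}
interface Object {}
interface RegExp {}
interface String {}
interface Array<T> {}
```

Because the bodies are empty, values still exist but have no usable members, which matches the post's point that this minimal version is impractical for production.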
That’s okay, because this is only a prototype. A production-ready version would need to define all of those things—tedious, but straightforward. I decided to tackle this in a test-driven development style: write some code that I want to type-check, watch it fail to type-check, then fix it. I updated my test script, which tests three things, and running the type-checker produced two errors. So I updated the Array type. The read-only property accessor tells TypeScript that you can access array properties by numeric index, but they’re read-only. That should make reading by index possible but assignment impossible. The method definition is copied from the TypeScript source code with no changes (other than some auto-formatting). That should make it possible to call that method. Notice that I did not define any mutating methods. We shouldn’t be calling those on an immutable array! I ran the type-checker again and…success! No errors! We now have immutable arrays! At this stage, I’ve shown that it’s possible to configure TypeScript to make all arrays immutable with no extra annotations. In other words, we have some immutability by default. This code, like everything in this post, is simplistic. There are lots of other array methods! If this were made production-ready, I’d make sure to define all the read-only array methods. But for now, I was ready to move on to mutable arrays. I prefer immutability, but I want to be able to define a mutable array sometimes. So I made another test case. Notice that this requires a little extra work to make the array mutable. In other words, it’s not the default. TypeScript complained that it couldn’t find the mutable array type, so I defined it. And again, type-checks passed! Now, I had mutable and immutable arrays, with immutability as the default. Again, this is simplistic, but good enough for this proof-of-concept! This was exciting to me. It was possible to configure TypeScript to be immutable by default, for arrays at least.
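Reconstructed from the description, the immutable Array definition might look like the following sketch (not the author's exact file; only one non-mutating method shown):

```typescript
export {}; // make this a module so the local Array doesn't merge with the global one

// Sketch: index access and length are read-only, and mutators like push()
// are simply never declared, so calling them is a type error.
interface Array<T> {
  readonly [index: number]: T;
  readonly length: number;
  map<U>(callbackfn: (value: T, index: number) => U): U[];
}

// A separate type opts back in to mutation when you really want it.
interface MutableArray<T> {
  [index: number]: T;
  length: number;
  push(...items: T[]): number;
}

// With these as the only Array definitions (under noLib):
//   const xs = [1, 2, 3];            // ok: array literals still work
//   const ys = xs.map((x) => x * 2); // ok: non-mutating
//   xs[0] = 9;                       // error: index signature is readonly
//   xs.push(4);                      // error: push does not exist
```

The enforcement happens entirely at compile time; at runtime these arrays are ordinary JavaScript arrays.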
I didn’t have to fork the language or use any other tools. Could I make more things immutable? I wanted to see if I could go beyond arrays. My next target was the `Record` type, a TypeScript utility type . So I defined another pair of test cases, similar to the ones I made for arrays. TypeScript complained that it couldn’t find the record types. It also complained about an unused `@ts-expect-error` directive, which meant that mutation was allowed. I rolled up my sleeves and fixed those errors. Now we have `Record`, an immutable key-value pair type, and a mutable version too. Just like arrays! You can imagine extending this idea to other built-in types. I think it’d be pretty easy to do this the same way I did arrays and records; I’ll leave that as an exercise to the reader. My final test was to make regular objects (not records or arrays) immutable. Unfortunately for me, I could not figure this out. I wrote a test case with a plain object literal, and it stumped me. No matter what I did, I could not write a type that would disallow mutating it. I tried modifying the `Object` type every way I could think of, but came up short! There are ways to annotate an object to make it immutable, but that’s not in the spirit of my goal. I want it to be immutable by default! Alas, this is where I gave up. I wanted to make TypeScript immutable by default. I was able to do this for arrays, `Record`s, and some other types. Unfortunately, I couldn’t make it work for plain object definitions. There’s probably a way to enforce this with lint rules, either by disallowing mutation operations or by requiring annotations everywhere; I’d like to see what that looks like. If you figure out how to make TypeScript immutable by default with no other tools , I would love to know, and I’ll update my post. I hope my failed attempt will lead someone else to something successful. Again, please contact me if you figure this out, or have any other thoughts. Creating arrays with array literals is possible. Non-mutating operations are allowed.
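Under the same scheme, the record pair can be sketched with mapped types (again my reconstruction; `MutableRecord` is an assumed name): `readonly` on every property for the default, and `-readonly` to strip it for the mutable version.

```typescript
// Hedged sketch: an immutable-by-default Record utility type,
// plus an explicit opt-in mutable variant.
type Record<K extends keyof any, T> = {
  readonly [P in K]: T;
};

type MutableRecord<K extends keyof any, T> = {
  -readonly [P in K]: T; // strips readonly from each property
};

// Usage sketch (in the noLib world where this Record replaces the built-in):
// const scores: Record<string, number> = { alice: 1 };
// scores.alice = 2;                           // type error: read-only
// const edits: MutableRecord<string, number> = { alice: 1 };
// edits.alice = 2;                            // allowed
```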
Operations that mutate the array are disallowed. Mutation is allowed where it shouldn’t be (there’s an unused `@ts-expect-error`), and the mutable type doesn’t exist yet.
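The plain-object case that stumped the author can be illustrated with a sketch like this (my reconstruction of the kind of test described in the post): the object literal’s inferred type is built by the compiler itself, so no library definition makes its properties read-only.

```typescript
// A hedged reconstruction of the failing test case: properties of a plain
// object literal stay mutable, because the compiler infers { name: string }
// on its own and no replacement library definition adds readonly to it.
const person = { name: "Ada" };
person.name = "Grace"; // the post's goal: make this a type error by default
console.log(person.name); // "Grace" — mutation succeeded
```

This is exactly the gap: arrays and records go through replaceable global types, but inferred object-literal types do not.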

James O'Claire 3 weeks ago

Where are All the AI Generated Native Android & iOS Apps?

I’ve been wanting to keep an eye on AI-generated apps to see what kinds of SDKs and libraries they end up using, for AppGoblin’s massive resource of 100k+ apps and SDKs. So for the past year I’ve clicked through and followed a few companies with the goal of downloading the actual apps that get created. As I did my most recent round of checking the “Examples” and “Showcases”, I think I can say these apps are just not being delivered in that final native form . To be clear, there are lots of web app demos, but when it comes to actual apps landing on the Play and App Stores, I’ve found few to no clear examples. Looking for links to native apps in the various company communities/Discords, I see a few reasons. As I was writing this, I was reminded that the ‘native’ term is a bit ambiguous. Most of these companies seem to turn Expo/JS-based apps into native apps. For this article, I’m just looking for anything that has an app on an app store, as I’d like to later break down the use of Expo or other tools. fastshot.ai -> YC-backed and newer, so worth giving them some time. The showcase is mostly empty, with only one screenshot so far on Discord. a0.dev -> The site has a dozen very high-quality examples of web apps, but the links to them just download a0.dev’s own app. The only examples on the App Stores seem to be from July, by the developers themselves. replit.com -> Dozens of real examples of web apps, but I couldn’t find any that had Android or iOS apps made by Replit. Replit was quite early, launching in February 2024, so I think if there are any examples, these might be the easiest to find. could.ai / dashwave.io / gobuildmy.app -> These all have pricing pages but don’t even have examples, demo, or showcase pages. I think we’re all coming to see that AI coding tools are good for rapid prototyping but struggle to deliver magic results . That being said, people are using these tools, likely mostly for prototypes, later transitioning to managing the apps directly.
If anyone has used these tools and wants to share their apps, please reach out. You can also request SDK scans of apps on AppGoblin for free. People are building proofs of concept, and like any project, as you get closer to reality, the dream gets a bit less rosy. People get real value by actually implementing their ideas into apps and thinking through the actual use case. People, once they’ve finished their app, do not want to publicize the fact that it was vibe-coded or built with a specific company’s tool. The last mile, from fully working demo to production, is hard for AI .

Sean Goedecke 3 weeks ago

Writing for AIs is a good way to reach more humans

There’s an idea going around right now about “writing for AIs”: writing as if your primary audience is not human readers, but the language models that will be trained on the content of your posts. Why would anyone do this? For the same reason you might want to go on podcasts or engage in SEO: to get your core ideas in front of many more people than would read your posts directly. If you write to make money, writing for AI is counterproductive. Why would anyone buy your writing if they can get a reasonable facsimile of it for free out of ChatGPT? If you write in order to express yourself in poetry, writing for AI might seem repulsive. I certainly find language model attempts at poetry to be off-putting 1 . But if you write to spread ideas, I think writing for AI makes a lot of sense. I don’t write this blog to make money or to express myself in beautiful language. I write because I have specific things I want to say: While it’s nice that some people read about this stuff on my website, I would be just as happy if they read about them elsewhere: via word-of-mouth from people who’ve read my posts, in the Google preview banners, or in my email newsletter (where I include the entire post content so people never have to click through to the website at all). I give blanket permission to anyone who asks to translate and rehost my articles. Likewise, I would be just as happy for people to consume these ideas via talking with an LLM. In 2022, Scott Alexander wrote that the purpose of a book is not to produce the book itself. Instead, a book acts as a “ritual object”: a reason to hold a public relations campaign that aims to “burn a paragraph of text into the public consciousness” via TV interviews and magazine articles. Likewise, I think it’s fair to say that the purpose of a technical blog post is not to be read, but to be a ritual object that gives people a reason to discuss a single idea. 
Take Jeff Atwood’s well-known post that popularized Foote and Yoder’s original idea of the “big ball of mud”. Or Joel Spolsky’s The Law of Leaky Abstractions , which popularized the idea that “all abstractions leak”. Or Tanya Reilly’s Being Glue , which popularized the term “glue work” to describe the under-rewarded coordination work that holds teams together 2 . Many, many more people are familiar with these ideas and terms than have read the original posts. They have sunk into the public consciousness via repeated discussion in forums, Slack channels, and so on, in the same way that the broad idea of a non-fiction book sinks in via secondary sources 3 . Large language models do read all these books and blog posts. But what they read in much greater quantities is people talking about these books and blog posts (at least via other articles, if they’re not being explicitly trained on Hacker News and Reddit comments). If you write a popular blog, your ideas will thus be over-represented in the training data. For instance, when someone asks about coordination work, GPT-5 immediately calls it “glue work”: When engineers talk to language models about their work, I would like those models to be informed by my posts, either via web search or by inclusion in the training data. As models get better, I anticipate people using them more (for instance, via voice chat). That’s one reason why I’ve written so much this year: I want to get my foothold in the training data as early as possible, so my ideas can be better represented by language models long-term. Of course, there are other reasons why people might want to be represented in the training data. Scott Alexander lists three reasons: teaching AIs what you know, trying to convince AIs of what you believe, and providing enough personal information that the superintelligent future AI will be able to simulate you accurately later on. 
I’m really only moved by the first reason: teaching AIs what I believe so they can share my ideas with other human readers. I agree with Scott that writing in order to shape the personality of future AIs is pointless. It might just be impossible - for instance, maybe even the most persuasive person in the world can’t argue hard enough to outweigh millennia of training data, or maybe any future superintelligence will be too smart to be influenced by mere humans. If it does turn out to be possible, AI companies will likely take control of the process and deliberately convince their models of whatever set of beliefs are most useful to them 4 . Okay, suppose you’re convinced that it’s worth writing for AIs. What does that mean? Do you need to write any differently? I don’t think so. When I say “writing for AIs”, I mean: The first point is pretty self-explanatory: the more you write, the more of your content will be represented in future AI training data. So even if you aren’t getting a lot of human traffic in the short term, your long term reach will be much greater. Of course, you shouldn’t just put out a high volume of garbage for two reasons: first, because having your work shared and repeated by humans is likely to increase your footprint in the training set; and second, because what would be the point of increasing the reach of garbage? The second point is that putting writing behind a paywall means it’s going to be harder to train on. This is one reason why I’ve never considered having paid subscribers to my blog. Relatedly, I think it’s also worth avoiding fancy Javascript-only presentation which would make it harder to scrape your content - for instance, an infinite-scroll page, or a non-SSR-ed single-page-application. Tyler Cowen has famously suggested being nice to the AIs in an attempt to get them to pay more attention to your work. I don’t think this works, any more than being nice to Google results in your pages getting ranked higher in Google Search. 
AIs do not make conscious decisions about what to pull from their training data. They are influenced by each piece of data to the extent that (a) it’s represented in the training set, and (b) it aligns with the overall “personality” of the model. Neither of those things is likely to be affected by how pro-AI your writing is. I recommend just writing how you would normally write. Much of the “why write for AIs” discussion is dominated by far-future speculation, but there are much more straightforward reasons to write for AIs: for instance, so that they’ll help spread your ideas more broadly. I think this is a good reason to write more and to make your writing accessible (i.e. not behind a paywall). But I wouldn’t recommend changing your writing style to be more “AI-friendly”: we don’t have much reason to think that works, and if it makes your writing less appealing to humans it’s probably not a worthwhile tradeoff. There may be some particular writing style that’s appealing to AIs. It wouldn’t surprise me if we end up in a war between internet writers and AI labs, in the same way that SEO experts are at war with Google’s search team. I just don’t think we know what that writing style is yet.
- That the fundamental nature of tech work has changed since like 2023 and the end of ZIRP
- That emotional regulation is at least as important as technical skill for engineering performance
- That large tech companies do not function by their written rules, but instead by complex networks of personal incentives
- That you should actually read the papers that drive engineering conversations, because they often say the exact opposite of how they’re popularly interpreted

- Writing more than you normally would, and
- Publishing your writing where it can be easily scraped for AI training data and discovered by AI web searchers

1. It’s slop in the truest sense. ↩
2. Technically this was a talk, not a blog post. I also wrote about glue work and why you should be wary of it in Glue work considered harmful . ↩
3. For instance, take my own Seeing like a software company , which expresses the idea of the book Seeing Like a State in the first three lines. ↩
4. There’s also the idea that by contributing to the training data, future AIs will be able to simulate you, providing a form of immortality. I find it tough to take this seriously. Or perhaps I’m just unwilling to pour my innermost heart out into my blog. Any future simulation of me from these posts would only capture a tiny fraction of my personality. ↩

iDiallo 3 weeks ago

What Actually Defines a Stable Software Version?

As a developer, you'll hear these terms often: "stable software," "stable release," or "stable version." Intuitively, it just means you can rely on it. That's not entirely wrong, but when I was new to programming, I didn't truly grasp the technical meaning. For anyone learning, the initial, simple definition of "it works reliably" is a great starting point. But if you're building systems for the long haul, that definition is incomplete. The intuitive definition: a stable version of software is one that works and that you can rely on not to crash. The technical definition: a stable version of software is one whose API will not change unexpectedly in future updates. A stable version is essentially a guarantee from the developers that the core interface, such as the functions, class names, data structures, and overall architecture you interact with, will remain consistent throughout that version's lifecycle. This means that if your code works with version 1.0.0, it should also work flawlessly with versions 1.0.1, 1.0.2, and 1.1.0. Future updates will focus on bug fixes, security patches, and performance improvements, not on introducing breaking changes that force you to rewrite your existing code. My initial misunderstanding was thinking stability was about whether the software was bug-free, similar to how we expect bugs to be present in a beta version. But there was still an upside to this confusion: it helped me avoid the hype cycle, especially with certain JavaScript frameworks. I remember being hesitant to commit to new versions of certain tools (like early versions of React and Angular, though this is true of many fast-moving frameworks and SDKs). Paradigms would shift rapidly from one version to the next. A key concept I'd mastered one month would be deprecated or replaced the next. While those frameworks sit at the cutting edge of innovation, they can also be the antithesis of stability. Stability is about long-term commitment.
Rapid shifts force users to constantly evolve with the framework, making it difficult to stay on a single version without continual, large-scale upgrades. A truly stable software version is one you can commit to for a significant amount of time. The classic example of stability is Python 2. Yes, I know many wanted it to die by fire, but it was first released in 2000 and remained active, receiving support and maintenance until its final update in 2020. That's two decades of stability! I really enjoyed being able to pick up old scripts and run them without any fuss. While I'm not advocating that every tool should last that long, I do think that when we're building APIs or stable software, we should adopt the mindset that this is the last version we'll ever make. This forces us to carefully consider the long-term design of our software. Whenever I see LTS (Long-Term Support) next to an application, I know that the maintainers have committed to supporting, maintaining, and keeping it backward compatible for a defined, extended period. That's when I know I'm working with both reliable and stable software.
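The versioning guarantee described above can be sketched in code. This is a simplified illustration of semantic-versioning compatibility, not any particular library's API: `isCompatibleUpgrade` is a name I made up, and the sketch ignores pre-release tags like `1.0.0-beta`.

```typescript
// Hedged sketch of the semver stability promise: within a stable major
// version, any later minor/patch release should be a drop-in upgrade.
function parseSemver(v: string): [number, number, number] {
  const [major, minor, patch] = v.split(".").map(Number);
  return [major, minor, patch];
}

// True when `to` keeps the API promise made by `from`:
// same major version, and at least as new.
function isCompatibleUpgrade(from: string, to: string): boolean {
  const [fMaj, fMin, fPat] = parseSemver(from);
  const [tMaj, tMin, tPat] = parseSemver(to);
  if (tMaj !== fMaj) return false; // a major bump may break your code
  if (tMin !== fMin) return tMin > fMin; // a newer minor is additive: ok
  return tPat >= fPat; // a newer or equal patch: ok
}

console.log(isCompatibleUpgrade("1.0.0", "1.1.0")); // true
console.log(isCompatibleUpgrade("1.0.0", "2.0.0")); // false
```

A maintainer's "stable" promise is essentially that `isCompatibleUpgrade` holds for every release within a major version's lifecycle.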

Neil Madden 4 weeks ago

Were URLs a bad idea?

When I was writing Rating 26 years of Java changes , I started reflecting on the new HttpClient library in Java 11. The old way of fetching a URL was to use URL.openConnection() . This was intended to be a generic mechanism for retrieving the contents of any URL: files, web resources, FTP servers, etc. It was a pluggable mechanism that could, in theory, support any type of URL at all. This was the sort of thing that was considered a good idea back in the 90s/00s, but it has a bunch of downsides:

- Fetching different types of URLs can have wildly different security and performance implications, and wildly different failure cases. Do I really want to accept a mailto: URL or a javascript: “URL”? No, never.
- The API was forced to be lowest-common-denominator, so if you wanted to set options specific to a particular protocol, you had to cast the returned URLConnection to a more specific sub-class (and therefore lose generality).

The new HttpClient in Java 11 is much better at doing HTTP, but it’s also specific to HTTP/HTTPS. And that seems like a good thing? In fact, in the vast majority of cases the uniformity of URLs is no longer a desirable aspect. Most apps and libraries are specialised to handle essentially a single type of URL, and are better off because of it. Are there still cases where it is genuinely useful to be able to accept a URL of any (or nearly any) scheme?
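One way to act on that conclusion, sketched here in TypeScript with the standard WHATWG URL class (the `assertFetchable` helper and its allow-list are illustrative, not from any library): reject surprising schemes up front instead of accepting any URL.

```typescript
// Hedged sketch: instead of accepting a URL of any scheme, validate
// against an explicit allow-list, refusing schemes like mailto: or
// javascript: that have wildly different security implications.
const ALLOWED_SCHEMES = new Set(["http:", "https:"]);

function assertFetchable(raw: string): URL {
  const url = new URL(raw); // throws on malformed input
  if (!ALLOWED_SCHEMES.has(url.protocol)) {
    throw new Error(`refusing to fetch ${url.protocol} URL`);
  }
  return url;
}

console.log(assertFetchable("https://example.com/").protocol); // "https:"
// assertFetchable("javascript:alert(1)"); // would throw
```

The specialised-client argument is the same idea taken further: bake the scheme restriction into the type of the client itself, rather than checking it at each call site.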


Is this JS function pure?

In 2019, as functional programming was making its last inroads toward dethroning OOP, I kept hearing the mantra of “just use pure functions” in JS. Something didn’t sit right with me about talking so deterministically about pure functions in a largely impure language like JS. Especially after seeing JS tooling like webpack perform optimizations based on pure annotations. While everyone agrees that the textbook examples are pure, I felt there are a lot of subtle situations where disagreements might arise. What is worse than terms with imprecise meaning is terms with imprecise meaning whose users are not aware there is imprecision.
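A small sketch of the kind of subtlety meant here (my own example, not from the post): a function with no assignments and no visible I/O that still isn't pure, because a getter on its argument carries hidden state.

```typescript
// This function looks pure: no side effects in its body, result built
// only from its argument.
function fullName(person: { first: string; last: string }): string {
  return `${person.first} ${person.last}`;
}

// But a property getter can smuggle in hidden state, so the "pure"
// function returns different results for the same argument.
let calls = 0;
const sneaky = {
  first: "Ada",
  get last() {
    calls++; // hidden side effect triggered by "pure" code
    return calls % 2 === 0 ? "Lovelace" : "Byron";
  },
};

console.log(fullName(sneaky)); // "Ada Byron"
console.log(fullName(sneaky)); // "Ada Lovelace" — same argument, new result
```

Whether `fullName` is "pure" depends on whether you judge the function's text or its possible behaviors — exactly the kind of imprecision the post is pointing at.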
