Posts in Lua (20 found)
JSLegendDev 1 week ago

Making a Small RPG

I’ve always wanted to try my hand at making an RPG but always assumed it would take too much time. However, I didn’t want to give up before trying, so I started to think of ways I could still make something compelling in 1-2 months. To help me come up with something, I decided to look into older RPGs, as I had a hunch they could teach me a lot about scoping: back in the 80s, games were small because of technical limitations.

A game that particularly caught my attention was the first Dragon Quest. This game was very important because it popularized the RPG genre in Japan by simplifying the formula, therefore making it more accessible. It can be considered the father of the JRPG sub-genre. What caught my attention was the simplicity of the game. There were no party members, the battle system was turn-based and simple, and you were free to just explore. I was particularly surprised by how the game could give a sense of exploration while the map was technically very small. This was achieved by making the player move on an overworld map with a different scale compared to when navigating towns and points of interest. In the overworld section, the player appeared bigger while the geography was smaller, allowing players to cover large amounts of territory relatively quickly. The advantage of this was that you could switch between biomes quickly without it feeling jarring. You still had the impression of traversing a large world despite it being small in reality.

This idea of using an overworld map was common in older games but somehow died off as devs had fewer technical limitations and more budget to work with. Seeing its potential, I decided that I would include one in my project even if I didn’t have a clear vision at this point. Playing Dragon Quest 1 also reminded me of how annoying random battle encounters were. You would take a few steps and get assaulted by an enemy of some kind.
At the same time, this mechanic was needed, because grinding was necessary to be able to face stronger enemies in further zones of the map. My solution: what if, instead of getting assaulted, you were the one doing the assaulting? As you moved on the map, encounter opportunities signified by a star would appear. Only if you went there and overlapped with one would a battle start. This gave the player agency to determine whether they needed to battle or not. This idea seemed so appealing that I knew I needed to include it in my project.

While my vision of what I wanted to make started to become clearer, I also started to get a sense of what I didn’t want to make. The idea of including a traditional turn-based battle system was unappealing. That wasn’t because I hated this type of gameplay, but ever since I made a 6-hour tutorial on how to build one, I realized how complicated pulling one off is. Sure, you can get something basic quickly, but actually making it engaging and well balanced is another story. A story that would take more than 1-2 months to deal with. I needed to opt for something more real-time and action based if I wanted to complete this project in a reasonable time frame.

Back in 2015, an RPG that would prove to be very influential was released and “broke the internet”. It was impossible to avoid seeing mentions of Undertale online. It was absolutely everywhere. The game received praise for a lot of different aspects, but what held my attention was its combat system. It was the first game I was aware of that included a section of combat dedicated to avoiding projectiles (otherwise known as bullet hell) in a turn-based battle system. This made the combat more action oriented, which translated into something very engaging and fun. This type of gameplay left a strong impression in my mind, and I thought that making something similar would be a better fit for my project as it was simpler to implement.
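The opt-in star-encounter idea described earlier can be sketched in plain JavaScript. This is a minimal sketch with invented names and sizes, not the game's actual code: a star spawns somewhere on the map, and a battle only starts when the player chooses to overlap it.

```javascript
// Axis-aligned bounding-box overlap test.
function overlaps(a, b) {
  return (
    a.x < b.x + b.w &&
    a.x + a.w > b.x &&
    a.y < b.y + b.h &&
    a.y + a.h > b.y
  );
}

// Spawn an encounter star at a random position within the map bounds.
// Sizes and the rng hook are illustrative.
function spawnEncounterStar(mapW, mapH, rng = Math.random) {
  return { x: rng() * (mapW - 16), y: rng() * (mapH - 16), w: 16, h: 16 };
}

// The player has agency: no overlap, no battle.
function shouldStartBattle(player, star) {
  return star !== null && overlaps(player, star);
}
```

The same overlap helper would work unchanged for the attack zones inside battles, since both mechanics boil down to "did the player choose to touch this thing."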
While learning about Dragon Quest 1, I couldn’t help but be reminded of The Legend of Zelda: Breath of the Wild, released in 2017. Similarly to Dragon Quest, a lot of freedom was granted to the player in how and when they tackled the game’s objectives. For example, in Breath of the Wild, you could go straight to the final boss after the tutorial section. I wanted to take this aspect of the game and incorporate it into my project. I felt it would be better to have one final boss, with every other enemy encounter being optional preparation you could engage with to get stronger. This felt like something that was achievable in a smaller scope compared to crafting a linear story the player would progress through.

Another game that inspired me was Elden Ring, an open world action RPG similar to Breath of the Wild in its world structure but with the DNA of Dark Souls, a trilogy of games made previously by the same developers. What stuck with me regarding Elden Ring, for the purpose of my project, was the unique way it handled experience points. It was the first RPG I played that used them as a currency you could spend to level up the different attributes making up your character, or to buy items. Taking inspiration from it, I decided that my project would feature individually upgradable stats and that experience points would act as a currency. The idea was that the player would gain an amount of the game’s currency after battle and use that to upgrade different attributes. Like in Elden Ring, if you died in combat you would lose all the currency you were currently holding. I needed a system like this for my project to count as an RPG, since by definition an RPG is stats driven. A system like this would also allow the player to manage difficulty more easily, and it would act as the progression system of my game.

When I started getting into game development, I quickly came across Pico-8. Pico-8, for those unaware, is a fantasy console with a set of limitations.
It’s not a console you buy physically but rather a software program that runs on your computer (or in a web browser) and mimics an older console that never existed. To put it simply, it’s like running an emulator for a console that could’ve existed but never actually did. Hence the fantasy aspect of it. Pico-8 includes everything you need to make games. It has a built-in code editor, sprite editor, map editor, sound editor, etc. It uses the approachable Lua programming language, which is similar to Python. Since Pico-8 is limited, it’s easier to actually finish making a game rather than being caught in scope creep.

One game made in Pico-8 particularly caught my interest. In this game you play as a little character on a grid. Your goal is to fight just one boss. To attack this boss, you need to step on a glowing tile while avoiding taking damage from incoming obstacles and projectiles thrown at you. (Epilepsy warning regarding the game footage below due to the usage of flashing bright colors.) This game convinced me to ditch the turn-based aspect I envisioned for my project entirely. Rather than having bullet hell sections within a turn-based system like in Undertale, the whole battle would instead be bullet hell. I could make the player attack without needing turns by making attack zones spawn within the battlefield. The player would then need to collide with them for an attack to register.

I was now convinced that I had something to stand on. It was now time to see if it would work in practice, but I needed to clearly formulate my vision first. The game I had in mind would take place across two main scenes. The first was the overworld, in which the player moved around and could engage in battle encounters, lore encounters, heal, or upgrade their stats. The second, the battle scene, would be where battles took place.
The player would be represented by a cursor and would be expected to move around dodging incoming attacks while seeking to collide with attack zones to deal damage to the enemy. The purpose of the game was to defeat a single final boss named King Donovan, a tyrant ruling over the land of Hydralia where the game took place. At any point, the player could enter the castle to face the final boss immediately. However, most likely, the boss would be too strong. To prepare, the player would roam around the world engaging in various battle encounters. Depending on where the encounter was triggered, a different enemy would show up that fit the theme of the location. The enemy’s difficulty, and the experience reward if beaten, would drastically vary depending on the location. Finally, the player could level up and heal in a village.

I was now ready to start programming the game and figuring out the details as I went along. For this purpose, I decided to write the game using the JavaScript programming language and the KAPLAY game library. I chose these tools because they were what I was most familiar with. For JavaScript, I knew the language before getting into game dev, as I previously worked as a software developer for a company whose product was a complex web application. While most of the code was in TypeScript, knowing JavaScript was pretty much necessary to work in TypeScript, since the latter is a superset of JavaScript. As an aside, despite its flaws as a language, JavaScript is an extremely empowering language to know as a solo dev. You can make games, websites, web apps, browser extensions, desktop apps, mobile apps, server side apps, etc. with this one language. It’s like the English of programming languages: not perfect, but highly useful in today’s world. I’ll just caveat that using JavaScript makes sense for 2D games and light 3D games. For anything more advanced, you’d be better off using Unreal, Unity or Godot.
As for the KAPLAY game library, it allows me to make games quickly because it provides a lot of functionality out of the box. It’s also very easy to learn. While it’s relatively easy to package a JavaScript game as an app that can be put on Steam, what about consoles? Well, it’s not straightforward at all, but at the same time, I don’t really care about consoles unless my game is a smash hit on Steam. If my game does become very successful, then it would make sense business-wise to pay a porting company to remake the game for consoles, getting devkits, dealing with optimizations and all the complexity that comes with publishing a game on these platforms.

Anyway, to start off the game’s development, I decided to implement the battle scene first with all of its related mechanics, as I needed to make sure the battle system I had in mind was fun to play in practice. To also save time later down the line, I figured that I would give the game a square aspect ratio. This would save time during asset creation, especially for the map, as I wanted the whole map to be visible at once and wouldn’t use a scrolling camera for this game. After a while, I had a first “bare bones” version of the battle system. You could move around to avoid projectiles and attack the enemy by colliding with red attack zones.

Initially, I wanted the player to have many stats they could upgrade: health (HP), speed, attack power and FP, which stood for focus points. However, I had to axe the FP stat. I originally wanted to use it as a way to introduce a cost to using items in battle, but I gave up on the idea of items entirely as they would require too much time to create and properly balance. I also had the idea of adding a stamina mechanic similar to the one you see in Elden Ring. Moving around would consume stamina that could only replenish when you stopped moving.
I initially thought that this would result in fun gameplay, as you could upgrade your stamina over time, but it ended up being very tedious and useless. Therefore, I also ended up removing it. Now that the battle system was mostly done, I decided to work on the world scene where the player could move around. I first implemented battle encounters that would spawn randomly on the screen as red squares. I then created the upgrade system, allowing the player to upgrade three stats: their health (HP), attack power and speed. In this version of the game, the player could restore their health near where they could upgrade their stats.

While working on the world scene was the focus, I also made a tweak to the battle scene. Instead of displaying the current amount of health left as a fraction, I decided a health bar would be necessary, because when engaged in a fast paced battle, the player does not have time to interpret fractions to determine the state of their health. A health bar would convey the info faster in this context.

However, I quickly noticed an issue with how health was restored in my game. Since the world was constrained to a single screen, going back to the center to get healed after every fight was the optimal way to play. This resulted in feeling obligated to go back to the center rather than freely roaming around. To fix this issue, I made it so the player needed to pay to heal, using the same currency as for leveling up. Now you needed to carefully balance between healing and saving your experience currency for an upgrade by continuing to explore and engage in battle, all while keeping in mind that you could lose all of your currency if defeated in battle. It’s important to note that you could also heal partially, which provided flexibility in how the player managed the currency resource. Now that I was satisfied with the “bare bones” state of the game, I needed to make nice looking graphics. To achieve this, I decided to go with a pixel art style.
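The pay-to-heal tradeoff described above can be sketched as follows. The cost-per-HP rate is an invented number, and the function names are illustrative; the point is that partial healing lets the player spend only what they can spare.

```javascript
// Illustrative rate: how much exp currency one HP costs to restore.
const COST_PER_HP = 2;

// Heal as many HP as the spend allows, capped by what the player can
// afford and by their max HP. Returns the updated player state.
function heal(player, currencySpent) {
  const spend = Math.min(currencySpent, player.currency);
  const hpRestored = Math.min(
    Math.floor(spend / COST_PER_HP),
    player.maxHp - player.hp
  );
  return {
    ...player,
    hp: player.hp + hpRestored,
    currency: player.currency - hpRestored * COST_PER_HP,
  };
}
```

Because the function only charges for HP actually restored, a player at nearly full health can't accidentally overspend, which keeps the heal-versus-upgrade decision meaningful.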
I could spend a lot of time explaining how to make good pixel art, but I already did so previously; I recommend checking my post on the topic. I started by putting a lot of effort into drawing the overworld map, as the player would spend a lot of time in it. It was at this stage that I decided to make villages the places where you would heal or level up. To make this clearer, I added icons on top of each village to make it obvious what each was for. Now that I was satisfied with how the map turned out, I started designing and implementing the player character. For each distinct zone of the map, I added a collider so that battle encounters could determine which enemy and what background to display during battle. It was at this point that I made encounters appear as flashing stars on the map.

Since my work on the overworld was done, I now needed to produce a variety of battle backgrounds to really immerse the player in the world. I sat down and locked in. These were by far the most time-intensive art assets to make for this project, but I’m happy with the results. After finishing all the backgrounds, I implemented the logic to show them in battle according to the zone where the encounter occurred. The next assets to make were the enemies. This was another time-intensive task, but I’m happy with how they turned out. The character at the bottom left is King Donovan, the main antagonist of the game.

Further Developing the Battle Gameplay

While developing the game, I noticed that it took too much time to go from one end of the battle zone to the other. This made the gameplay tedious, so I decided to make the battle zone smaller. At this point, I also changed the player cursor to be diamond shaped and red rather than a white circle. I also decided to use the same flashing star sprite used for encounters on the map, but this time for attack zones. I also decided to change the font used in the game to something better.
At this point, the projectiles thrown towards the player didn’t move in a cohesive pattern the player could learn over time. It was also absolutely necessary to create a system in which the attack patterns of the enemy would be progressively shown to the player. This is why I stopped everything to work on the enemy’s attack patterns. By the same token, I also started to add effects to make the battle more engaging, and sprites for the projectiles. While the game was coming along nicely, I started to experience performance issues. I go into more detail in a previous post if you’re interested.

To add another layer of depth to my game, I decided that the reward you got from a specific enemy encounter would not only depend on which enemy you were fighting but also on how much damage you took. For example, if a basic enemy in the Hydralia field would give you a reward of 100 after battle, you would actually get less unless you took no damage during that battle. This was to encourage careful dodging of projectiles and to reward players who learned the enemy pattern thoroughly. This would also add replayability, as there was now a purpose to fighting the same enemy over and over again. The formula I used to determine the final reward granted can be described as follows:

At this point, it wasn’t well communicated to the player how much of the base reward they were granted after battle. That’s why I added the “Excellence” indication. When beating an enemy without taking damage, instead of the usual “Foe Vanquished” message appearing on the screen, you would get a “Foe Vanquished With Excellence” message in bright yellow.

In addition to being able to enter into battle encounters, I wanted the player to have lore/tips encounters. Using the same system, I would randomly spawn a flashing star of a blueish-white color. If the player overlapped with it, a dialogue box would appear telling them some lore/tips related to the location they were in.
Sometimes, these encounters would result in a chest containing an exp currency reward. This was to give the player a reason to pursue these encounters. This is still a work in progress, as I haven’t decided what kind of lore to express through these. One thing I forgot to show earlier was how I revamped the menu to use the new font.

That’s all I have to share for now. What do you think? I also think it’s a good time to ask for advice regarding the game’s title. Since the game takes place in a land named Hydralia, I thought about using the same name for the game. However, since your mission is to defeat a tyrant king named Donovan, maybe a title like Hydralia: Donovan’s Demise would be a better fit. If you have any ideas regarding naming, feel free to leave a comment! Anyway, if you want to keep up with the game’s development or are more generally interested in game development, I recommend subscribing so you don’t miss out on future posts.
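As a hedged sketch of the damage-based reward described earlier: the post doesn't give the exact formula, so the scaling below (full base reward only with no damage, otherwise reduced by the fraction of max HP lost, never below half) is an invented reconstruction consistent with the description.

```javascript
// Hypothetical reconstruction of a damage-scaled battle reward.
// The 0.5 floor and the linear penalty are assumptions, not the
// game's actual numbers.
function battleReward(baseReward, damageTaken, maxHp) {
  if (damageTaken <= 0) return baseReward; // "Foe Vanquished With Excellence"
  const penalty = Math.min(damageTaken / maxHp, 1); // fraction of max HP lost
  return Math.floor(baseReward * (1 - penalty * 0.5));
}
```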

Playtank 2 weeks ago

Maximum Iteration

The quality of your game is directly related to the number of iterations you have time to make. The adage is that game development is an iterative process. We know we should be tweaking and tuning our game until it feels and runs great; to make it the best it can be, greater than the sum of its parts; and, early on, to make sure that the features we work on are worth pursuing. An iteration can be as small as an incremented variable or as big as a complete reset of your entire game project. What iterations have in common is that the only way to get more of them is to teach yourself the right mindset and to continuously remove anything that costs time.

For the past few years, this has been at the top of my mind: how to maximise iteration. At the very highest level, you need to remove obstacles, clicks, and tools. The fewer things a developer needs to know and do per iteration, the better. Those three are what this is all about. I’ve come up with five areas where you need to optimise iteration, which I’ve obsessively built into my own pipelines. These five are what the rest of this post elaborates on.

Iterating on object and state authoring means creating new objects and states and connecting them to data. A character that can roam, shoot, and take cover, and has MoveSpeed, TurnSpeed, and Morale, perhaps. This is one of those things where many developers will get used to how their first engine does things and forever see it as the norm. But most tools for object authoring are actually quite terrible (in my opinion), and are also highly unlikely to match your specific needs. They are far more likely to present you with hoops to jump through and prevent you from achieving fast iteration. It’s not unusual for getting a new object into a game to take hours and involve multiple people, particularly if the game’s pipeline has grown organically over several years of production.
Where you only had to add a single collision capsule at first, maybe you must now add a full ragdoll, two different sets of hit capsules, IK targets, and a bunch of other things before the new asset works as intended, some of which have to be created manually. Forget one step, and your game may crash or exhibit weird results. This is a big threat to iteration, maybe the biggest. So if you can, you should make your own tools for object authoring that are perfectly suited to your needs, require as few steps as possible, and waste as little time as possible. Or use a tool that’s specifically made for exactly the thing you need, if you can find it.

I tend to think of objects in systemic design as Characters, Props, and Devices. This is not in any way strict; it’s only what my favorite designs tend to need. If you are working on a grand strategy game, a puzzle game, or something else, the nature of your objects may vary. The key to object authoring is variation. A lamp is not the same thing as a crate or a human, but they should be able to interact in interesting ways. To make them interact, you need to be able to vary them easily and then hand off responsibility to the game’s systems in a predictable way. Something that can’t be stressed enough is to always set working defaults for all of your objects. Make sure that objects work out of the box so iteration can begin immediately. Few things waste more time than “oops, forgot the flag that did the thing.”

The most intuitive way to represent objects is to use objects, unsurprisingly. A Character can be expected to do certain things and a Door will do other things. Enemy and Player can now inherit from Character, and they may make use of a Gun or a Broom depending on the kind of game you’re making. With this setup, authoring objects is no harder than inheriting from the right class and then tweaking the numbers. This is how Unreal Engine is used by many teams.
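A minimal sketch of this inheritance-style authoring, using the class names from the examples above (the defaults and numbers are invented). Note the working defaults, so a freshly authored object runs out of the box:

```javascript
class Character {
  constructor({ moveSpeed = 10, turnSpeed = 90, morale = 50 } = {}) {
    // Working defaults: a new object behaves sensibly with zero setup.
    this.moveSpeed = moveSpeed;
    this.turnSpeed = turnSpeed;
    this.morale = morale;
  }
}

class Enemy extends Character {
  constructor(opts) {
    super(opts);
    this.hostile = true;
  }
}

class Player extends Character {
  constructor(opts) {
    // Authoring is just tweaking numbers over the base class.
    super({ moveSpeed: 12, ...opts });
  }
}
```

This is exactly where the convenience lives, and also where the rigidity creeps in once an object stops fitting the hierarchy.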
But this gets cumbersome if you want a character that can fly, or a non-moving object that can still use the dialogue system, or the spline following that characters have, but now for a train car. Authoring with object-oriented systems seems intuitive but doesn’t handle exceptions well. Everything now needs to be a character if it wants to access certain things, and designers will have to learn the intricacies of all the objects in the game before they can truly begin iterating.

Component-based authoring takes a different route. If you want your object to collide with things in a physics simulation, you add a Collider. If you want it to move on a flow field, you add FlowFieldMove. The sum of an object’s components dictates its behavior. This may use many different types of component setups, but the two most common are GameObject/Component (GO/C) and Entity Component System (ECS). Both Unreal and Unity use the first, but in very different ways. Both Unreal and Unity also provide ways to use the second, but in ways that are mostly incompatible with the first. Conceptually, component-based object authoring is great. In practice, it tends to be a deep rabbit hole of exceptions and flawed component combinations that have grown organically through an engine’s lifetime.

Most game engines today are data-driven at some level. You plug data in, it gets compiled into an engine-friendly format, and voila: the engine knows what to do. The data is picked up by a renderer, physics engine, or something else, and things simply happen just the way they are supposed to, because the data is clear enough to just chug along. Like feeding coal into a steam engine. With a data-driven approach, you will usually be collecting all that data and bundling it up using authoring tools. Bring in the mesh asset, animate it using animation assets, play some sound assets on cue, etc. The data itself will drive the process.
For example, in a “target-based” setup, one piece of data activates another, which activates a third, and so on, until the game level or other logic has run its course. You need ways to define how something goes from Alive to Dead, or when something should be Idle instead of Moving. This layer of authoring and iteration is very rarely straightforward, and parts of it are almost always deep down in the code for your game. This is bad. So let’s discuss how to make it not bad, and how to open up your game for more direct rules authoring through state transitions. If my use of the word “state” in this post gets confusing, you can look into the state-space prototyping post to see what I mean. This is not standard jargon used by all game developers, but it is a key part of my own framework.

A good state authoring tool allows you to list which states an object can be in, where it can collect changes from, and how it behaves in relation to other objects and their state. Just to be clear: this doesn’t have to be complex at all. It can be enough to list the actions an entity can use and then leave it to other systems to actually select actions. Take a look at the An Object-Rich World post if you are curious about other models for working with permissions and restrictions. The most important element of permissions and restrictions is predictability. There are many cases where our games become interconnected in ways that are not immediately visible. For example, when you say that a character’s ability to Move has been restricted due to a state, you may have to manually add this to multiple places. Perhaps the sound, animation, and head-bobbing systems also need to be paused separately. This is extremely bad, because it means both that you will get unpredictable results and that you will often have to revisit the same changes.

A specific state is only relevant for a particular object. A generic state can be used by any object sharing the same characteristics.
Think of the idea of spotting something, for example: a sensor picking up that an object can be seen. If a player is going to spot something, this needs to be specific, since the player’s avatar, unlike an NPC avatar, will generally have a camera attached to it. So to check if the player spots something, we can use the camera’s viewport to determine if the thing is on-screen or not. A generic version of the same thing could instead use the avatar’s forward vector, an arbitrary angle, and perhaps a linecast to determine if the object can be seen. This could be used by any avatar, player or otherwise, and would probably be accurate enough if your game doesn’t need more granularity.

An exclusive state is the only state that can run at a given time, whereas an inclusive state also allows other states to run alongside it. Parallel states are made to run at the same time as each other and may therefore not poke at the same data, or you could get unpredictable results. A state is conditional if it only activates based on preset conditions. It’s your if-then-else setup. Conditionals will often need considerable tweaking, and if you’re not careful in how you build such systems, they can turn into a tangled mess, just like nested ifs. Common ways to handle conditional states are predicate functions, tags, flags, and many of the other things brought up in the A State-Rich Simulation post. Preferably, setting or changing conditionals should be just a click or two, and it should respect the type of data separation mentioned earlier. When a game has multiple dynamic sources for conditions, it quickly gets complicated. For this reason, your tools should provide debug settings for visualising where conditions are coming from, and you can also log everything that gets triggered by certain conditions during a session.

A state is injected when it’s pushed into an object.
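The generic spotting check described above can be sketched with just a forward vector and a view angle. This is a minimal sketch: the occlusion linecast is omitted and all field names are invented; `fx`/`fy` are assumed to be a normalized forward direction.

```javascript
// Generic "can this avatar see that object?" check usable by any avatar,
// player or NPC alike. No camera required.
function canSee(observer, target, halfAngleDeg) {
  const dx = target.x - observer.x;
  const dy = target.y - observer.y;
  const dist = Math.hypot(dx, dy);
  if (dist === 0) return true; // standing on top of it
  // Cosine of the angle between forward and the direction to the target.
  const cos = (observer.fx * dx + observer.fy * dy) / dist;
  return cos >= Math.cos((halfAngleDeg * Math.PI) / 180);
}
```

A real version would follow the positive cone test with a linecast to rule out walls, which is the expensive part you'd want to skip whenever the angle check already fails.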
Such an injection can follow any number of systemic effects, from straight-up addition to slightly more granular propagation. Common points in a game simulation for state to get injected are collision events, spawning or destruction, proximity, spotting, and various forms of scripted messaging. This means that having a solid system for defining such injections is a great starting point for how transitions will work in your game. If you have the concept of a Room, for example, this Room may keep track of what’s inside of it and then propagate that knowledge to anyone visiting the room. Objects would then inject their presence into the room, while the room would inject relevant state into the objects in turn.

An explicit conditional state is something like the Idle state pushing a Move state onto an internal stack because the move vector’s magnitude is higher than zero. These are the only circumstances where Move will ever happen, making it an explicit transition. A dynamic state would be something like a gunshot killing you by injecting the Dead state. This is a dynamic transition because it can happen at any time, and beyond any restrictions on the injection itself (ammo, aiming, etc.), you won’t be defining anything in advance, and you’re not really waiting for it to happen. It happens when it happens, or it may not happen at all.

A state is timed if it remains active for a limited time. It can also loop over a given duration and either bounce back (i.e., from 0 to 1 back to 0) or reset and repeat. The current value of the timed state is often referred to simply as T and should be a normalized (0-1) floating point number. This type of state is extremely handy, and you will want to tweak how the T value output gets handled in as many varied ways as possible. You want to be able to use curves, easing functions, and every conceivable kind of interpolation.
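A minimal sketch of such a timed T value, supporting both looping and bouncing, with one easing function applied on top (the function names and the smoothstep choice are illustrative):

```javascript
// Normalized T for a timed state. "loop" resets and repeats (0 → 1, 0 → 1);
// "bounce" goes 0 → 1 → 0 over two durations.
function timedT(elapsed, duration, mode = "loop") {
  const raw = (elapsed / duration) % 2; // 0..2 covers one full bounce cycle
  if (mode === "bounce") return raw <= 1 ? raw : 2 - raw;
  return raw % 1;
}

// Easing is then layered on top of T, e.g. smoothstep.
function smoothstep(t) {
  return t * t * (3 - 2 * t);
}
```

Keeping T normalized is what makes the layering work: the same timer can drive a fade, a blend, or an AI "smell" decay just by swapping the curve applied to it.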
Timed state can be used to achieve anything from a Thief-style AI sense of “smell,” to a menu blend, to an animation system, to reward pizzazz. It’s the perfect type of state for an interstitial and is where you will be able to do much of your polish. A state is interstitial when it’s added between other states without affecting them beyond the delay this may cause. Screen fades, stop frames, and sound triggers are some examples of this.

Objects and states will define the game at its highest level. But you will also want to change the rat catcher’s catching range from 2.3 to 2.5, and maybe add an additional key to a curve to make a fade-in smoother. It’s been mentioned before, but may be worth repeating: you should separate data from objects from the very beginning of your project. Every second you can avoid having to navigate the jungle of files in your project is a second gained towards additional iteration. Remember: remove clicks and remove tools. Many games will expect either a database approach (“spreadsheet specific,” in Michael Sellers’ terms), or they will have a hard connection between an object and its data. But a good data authoring tool is either integrated with the game engine or is an established external tool, such as a spreadsheet or database, that has a single-click or dynamic export/import process into the game.

Many games still to this day keep data hard-coded into their compiled executables. This can be done for security or obfuscation reasons, out of habit, or because the engine used for a certain game is structured that way. For a small game with simple data, this is rarely an issue. You can make your changes, recompile, and then test, all within seconds. But for bigger or more complex projects, it can have a cascading effect on iteration complexity. It also forces you to rely on programmers even for changes that have nothing to do with game logic or code. If you can avoid this, do so.
It doesn’t matter if a compile takes five minutes; it’ll be stealing those five minutes over and over again. It will also decrease the number of iterations you can make. Issues with compiled data are not new. One common way to avoid some of them is to use lightweight text files that can be loaded and interpreted at runtime. This can be done in one of two ways. You can construct data this way. The below is a small example of this, where Lua was used to package information about different sectors in a space game. In this case, a sector has details about which other sectors the player can travel to, which pilots are present in the sector, and which stations and colonies can be visited. This is information that could’ve been hardcoded into the client, but this way it’s made available at runtime and much easier to iterate on. You can build logic this way. The next example is also Lua, but is a narrative sequence from the same space game. By exposing gameplay features to Lua, it becomes possible to script these sequences so that they can be loaded and parsed by the engine on demand. One benefit of this is that you can rewrite the script, make the engine reload the data, and then test within moments of making the change. If there’s such a thing as a standard today, it’s to store your data in a database. This database may live on a proprietary server owned by the developer or publisher, or it can utilise something in the cloud, like Microsoft Azure or Amazon Web Services (AWS). It can also be an offline database that you store with your game client, much like a script. A database forces you to decouple data from objects and allows live editing of data (if in the cloud). Most modern live service games do this for some of their data, if not all, as it makes it a lot easier to respond to community feedback and fix data-related issues. Planning how you structure your data before a project begins can save you many headaches.
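The sector-data example referred to above isn't reproduced here, but going by the post's description (travel links, pilots, stations, colonies), it plausibly looked something like this hedged sketch; every name and value is invented:

```lua
-- Hypothetical sector definition, loaded and interpreted at runtime.
return {
  id   = "sector_07",
  name = "Cygnus Reach",

  -- Which other sectors the player can travel to from here.
  links = { "sector_03", "sector_08", "sector_12" },

  -- Pilots present in the sector.
  pilots = {
    { name = "Vex",    faction = "pirates", ship = "corvette"  },
    { name = "Ilmari", faction = "traders", ship = "freighter" },
  },

  -- Stations and colonies that can be visited.
  stations = { "Relay Gamma" },
  colonies = { "New Harbour" },
}
```

The engine might load it with something like `local sector = dofile("sectors/sector_07.lua")` (path invented); edit the file, reload, and the change is live without a recompile.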
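Likewise, a hedged sketch of what the narrative-sequence script might look like; each function is a stand-in for a gameplay feature the engine would expose to Lua, and none of these names come from the original:

```lua
-- Hypothetical narrative sequence, parsed by the engine on demand.
-- Every function below is assumed to be provided by the host engine.
dialogue("Control", "You're cleared for docking, pilot.")
wait(1.5)
camera_focus("station_relay_gamma")
dialogue("Control", "Watch the debris on approach.")
give_item("player", "docking_permit")
set_flag("met_control", true)
```

Because the engine parses this on demand, rewriting a line and reloading the file is the entire iteration loop.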
If you want to do MoveSpeed, you could have a MoveSpeed baseline multiplier at 1.0, each object could have a MoveSpeed attribute of maybe 10-20, and gear or other props could then add their own MoveSpeed modifiers on top as additions, multipliers, cumulative multipliers, or some other thing. You’d get something like MoveSpeed = Baseline * (Attribute + Modifier(s)). If you manage to separate these from their objects, you can mix things up for any reason you want without ever touching, or even looking for, the objects again. The amount of time this saves for more iteration can’t be overstated. (Again: remove clicks, remove tools.) Maybe you want to modify Baseline based on difficulty, so that MoveSpeed is 1.5x on Easy but only 0.75x on Hard. Or go in there and double the MoveSpeed attribute for all enemies that have the Small trait. With this type of separation, all of those things can suddenly be done in seconds. This makes everything from bulk operations to conditional exceptions a lot easier to make, and therefore to iterate on. A change set is a collection of changes made to your existing data. You can look at it as a changelist or commit in version control. Bundling variables into change sets is a handy way to keep track of what you are doing and makes it easier to compare one change to another. Change sets really come into their own if you can combine them, turn them on and off, and provide more than one at a time. Over time, these sets can become like a log of your earlier tweaks, creating a kind of tweak history for your game’s design. To know how any iteration works out, you need to play it. But it’s not enough to merely play as you usually do. You need to compare changes and report when something doesn’t work out. Even as a solo developer, a solid reporting tool can be the difference between fixing problems and shipping with them. This is where your change sets from before will work their magic.
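The MoveSpeed = Baseline * (Attribute + Modifier(s)) separation described above could be sketched in Lua like so; all names and numbers are illustrative, not from the post:

```lua
-- Sketch of the Baseline * (Attribute + Modifiers) split (names invented).
local baseline   = { MoveSpeed = 1.0 }                     -- global tuning knob
local attributes = { goblin = { MoveSpeed = 12 } }         -- per-object data
local modifiers  = { boots_of_haste = { MoveSpeed = 3 } }  -- gear and props

local function move_speed(object, gear)
  local mods = 0
  for _, item in ipairs(gear) do
    mods = mods + ((modifiers[item] and modifiers[item].MoveSpeed) or 0)
  end
  return baseline.MoveSpeed * (attributes[object].MoveSpeed + mods)
end

print(move_speed("goblin", { "boots_of_haste" }))  -- 1.0 * (12 + 3)

-- Difficulty or bulk tweaks never touch the objects themselves:
baseline.MoveSpeed = 1.5          -- Easy mode: everything moves faster
attributes.goblin.MoveSpeed = 24  -- buff every goblin in one line
```

Because the objects only ever ask `move_speed` for a value, the tweaks at the bottom change behaviour game-wide without opening a single object file.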
Let’s say you made a “goblin damage debuff” change set where you decreased how much damage the goblin dealt by half, and you now go into your change set tool to activate that change set. Or you tell external playtesters to play once with and once without the change set. You can suddenly talk about balancing the same way you’d talk about feature implementations. I encountered Semantic Versioning during my first mobile game studio experience, at Stardoll Mobile Games. I’ve stuck to it ever since. The summary for Semantic Versioning is so simple, yet so powerful: “Given a version number MAJOR.MINOR.PATCH, increment the: MAJOR version when you make incompatible API changes, MINOR version when you add functionality in a backward compatible manner, and PATCH version when you make backward compatible bug fixes.” This is a convenient way to plan your releases. The Patch version can be automatically incremented whenever you build your game to identify each change, and you can regulate when the Minor and Major versions must be incremented. For example, you can plan to only release a new Major when you are releasing new content, and a Minor when features are added or changed. At Calm Island, we used to maintain one Dev and one Stable branch. The latter meant we could always show the game to any external stakeholders, even if it may have been an older build. The stable version was also the one deployed to stores after final validation. The idea of always keeping your game playable may sound self-explanatory, but good processes for this are uncommon. Many studios still use a single main branch for everything, and when a deadline looms, the only way to safeguard its health is to enact some kind of commit/submit stop where no one is allowed to push anything that risks the playability of the build. This often results in a rush of new code and content right after the stop is lifted, which almost always breaks something and may take days or weeks to resolve. A common issue with playtesting is that you need to jump through hoops before you can test the thing you’re actually working on.
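A change set like the goblin damage debuff can be modelled as a named overlay on your data that you toggle at will; this is a hedged Lua sketch with invented names:

```lua
-- Hedged sketch: change sets as toggleable overlays (all names invented).
local change_sets = {
  goblin_damage_debuff = { ["goblin.Damage"]    = function(v) return v * 0.5 end },
  double_move_speed    = { ["goblin.MoveSpeed"] = function(v) return v * 2 end },
}

local active = { goblin_damage_debuff = true }  -- flip on/off per playtest

-- Route every data read through the active sets, so toggling a set
-- requires no changes to objects or to the base data itself.
local function read(data, key)
  local value = data[key]
  for name, set in pairs(change_sets) do
    if active[name] and set[key] then value = set[key](value) end
  end
  return value
end

local data = { ["goblin.Damage"] = 8, ["goblin.MoveSpeed"] = 12 }
print(read(data, "goblin.Damage"))     -- halved by the active debuff
print(read(data, "goblin.MoveSpeed"))  -- unchanged: double_move_speed is off
```

Since sets are just data, handing playtesters "once with, once without" is a matter of flipping one flag, and sets can be combined freely.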
This can be because you need to launch the game, go through the splash screen, load the right level, noclip or teleport to the right place, etc., before you actually play. If your game is unstable (see Always Playable above), this can be further exacerbated by crashes or bugs that are not yours to fix. To avoid this, it’s important to be able to do targeted testing: using isolated environments, such as a “gym” level for movement testing, and testing exactly the thing you just tweaked or implemented without any distractions. You need to be able to mix and match both systems and change sets in your game to iterate as much as possible. Play without the enemy AI running, with no props spawning, or with that goblin damage debuff or double move speed turned on or off. You can look at this like the layers in Photoshop, where you can turn things on or off so they don’t impact your testing when you need to test something specific. Once you have a modular setup, make sure that you can switch quickly and easily between different modules as well. Make them interchangeable. If you need to test playing against only a single goblin, but that goblin can’t move, and you have only torches and stale bread, then there should be as few clicks and tools involved as possible to do so. Once the data is separated, you can take it one step further: you can remove entire segments of your game and isolate iteration and testing to retention loops or other long-term systems. Think of a standard game loop. You have some inputs into each session, such as matchmaking settings or difficulty selection. This input affects how the session plays. Once the session completes, you get outputs, such as XP or treasure, that you can then reinvest into progression. This is the template for many standard game loops. Simulated state allows you to pretend that one of these steps happened without actually having to take the time to play them.
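Simulating the session step can be as simple as a function that maps session inputs straight to plausible outputs, letting you iterate on progression without playing; a Lua sketch under invented names and numbers:

```lua
-- Sketch: skip the session and fabricate its outputs (all names invented).
math.randomseed(42)

-- Stand-in for actually playing a session: maps inputs to plausible outputs.
local function simulate_session(input)
  local performance = math.random()  -- pretend skill/luck for this run
  return {
    xp       = math.floor(100 * input.difficulty * performance),
    treasure = math.floor(3 * performance),
  }
end

-- Hammer the progression loop with a thousand "sessions" in milliseconds.
local profile = { xp = 0, treasure = 0 }
for _ = 1, 1000 do
  local out = simulate_session({ difficulty = 1.5 })
  profile.xp       = profile.xp + out.xp
  profile.treasure = profile.treasure + out.treasure
end
print(profile.xp, profile.treasure)  -- enough data to sanity-check pacing
```

The fabricated outputs will never be as honest as real play, but for tuning long-term reinvestment curves they are usually honest enough.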
You can randomise the inputs and then play, or skip the session entirely to work only on the output and investment cycle. Once you reach the modular and interchangeable iteration dream, this is quite possible. The value of this type of testing is high, since long-term systems often don’t get the testing they need simply because you must finish a real session of gameplay to get the “proper” outputs. Being able to compare different iterations to each other and choose which comparisons to make is more of a meta tool than something directly related to testing. It’s more about comparing the results you gain from testing than the testing itself. Look at the Game Balancing Guide for some inspiration on what kinds of things you could potentially compare. If you find something that’s not great or that you want to revisit, make it easy to take notes or report to a central system; you may even go so far as to generate planning tickets from an in-engine event. Have your testers press some easy-to-access key combination (on controllers, maybe holding both triggers and both stick buttons down for one second). Sometimes in a big team, the more technical tasks involved with the build and distribution process are invisible to you. You may hear about porting or signing or compliance, but you never have to deal with any of it. You happily playtest on whatever is easy and available, usually your development computer. Sometimes even inside of your development environment. The reason this happens is that your updating process is not built with iteration in mind. Builds take too long, frequently don’t work, and distributing to local devices is a hassle. Many teams “forget,” or rather downprioritise, testing on their proper target devices. One of the stranger things I’ve run into is developers who not only dislike testing on their current target platform but basically refuse to. It’s so much easier to stay in your comfortable development environment indefinitely.
Some studios may even resent some of their own target platforms, for example mobile platforms or consoles, allowing personal opinion to affect their professionalism. But there’s really no excuse: you should always test on your target devices. Something that’s easy to overlook is keeping visible and easily copy/pasteable version information on-screen in your game. This is good for a product after launch too, so that players can provide you with more detailed information if they experience bugs or crashes. One of the first things I did in gamedev was to drive cars along a race track’s edges to make sure that the collisions worked like they should. That is a kind of testing you can automate relatively easily. In test-driven development, testing and automation are already part of the thinking, and there’s really no need for game development to be different. Automate the right things, however. An automated test can’t tell you about quality. It can’t suggest design changes or warn that a player may not understand the phrasing of a dialogue line. Automate regression testing, compliance testing, integration testing, and the driving along the tracks to test collision. But don’t automate quality testing. Building for all of your platforms without having to do so manually is an essential element of game development. No amount of testing in a development environment compares to testing real builds. Automated builds are often triggered by new commits or version increments. It’s also common to have nightly builds, hourly builds, and build cadences based on testing needs and build duration. What’s important for such a pipeline is that it can clearly say what’s going wrong by posting logs and details to the relevant people; a Slack channel, for example. What you absolutely don’t want is to put developers on full-time duty to get builds out. Once you have a build, you need to get that build onto the right device for testing.
Most devkits and software platforms allow remote connection. You can usually set up jobs to trigger automatically when a build completes and publish your game to your testing platform (or even live) without requiring any work at all. Hopefully, this post provides some food for thought on iteration and what it really means. If not, tell me every way I’m wrong in an e-mail to [email protected] or in a comment. Here’s the list:

- Remove obstacles. Make the process of iteration as fast as possible by removing gatekeepers and bottlenecks. Maybe you shouldn’t go through the full approval process for a quality-of-life improvement, maybe your playtesters should get three separate sets of things to test instead of just one, and maybe a developer can prioritise their own tasks rather than sitting in hours-long meetings or being micromanaged.
- Remove clicks. I once heard the suggestion that you lose 50% of viewers with every required interaction on a website. More clicks will invite more pain points and more potential human errors, and will also lead to fewer iterations. Just imagine (or remember) not having box selection in a node tool vs. having it.
- Remove tools. The more tools you require, the more special skills, licenses, and installation time you need. Everything in your pipeline that can be either bundled into something else or removed entirely via devops automation should be considered. Not least of all because tools development is itself a deep rabbit hole.

The checklist covers five areas: authoring objects and data; transitioning objects between states; tweaking and rebalancing data; testing and comparing iterations; and updating the game for testing and distribution.

- For object-oriented authoring: clearly visualise what an object can (and can’t) do based on its inheritance; don’t hide logic deep in a dropdown hierarchy.
- For component-based authoring: make non-destructive tools with opt-in as the default rather than opt-out. Provide good error messaging for when requirements are not met.
- For data-driven authoring: provide clear debug information and visual representations for where data is coming from, when, and what it allows. Make it clear what data is expected where, so no steps are missed.
- Make it easy to list states and transitions per object.
- Provide state transition information with data reporting, so that you can keep track of all the whens and whys.
- Make states have meaning; if a state says that an object cannot move, this should be definitive.
- Differentiate between Specific and Generic states, so that you will never accidentally add state to an object that won’t work.
- Set clear guardrails between Exclusive, Inclusive, and Parallel states.
- Plan what you need each state to be able to do and where to get its data.
- Visualise which conditions apply at a given moment and why. Show when conditions are unavailable and why.
- Log transition changes and which conditions made them change.
- Show when, how, and from where a state injection occurs.
- Make it clear which explicit states are running at any given time.
- When dynamic state is triggered, make all of its relevant overrides predictable and singular: it should always be enough to turn something on or off once.
- Provide visualisations of start and end positions for timed states. Allow developers to scroll timed states manually to preview them.
- Allow states to resume after interruption, so that you can use interstitials in a non-destructive way.
- Separate your data into logical containers, such as Baseline, Attribute, and Modifier.
- Bundle collections of changes into change sets, e.g., “double move speed.”
- Identify change sets modularly, so you can test more than one thing at a time.
- “MAJOR version when you make incompatible API changes, MINOR version when you add functionality in a backward compatible manner, PATCH version when you make backward compatible bug fixes”
- Maintain clear versioning, even if just for yourself.
- Make sure that you can always play a recent version of your game.
- Provide shortcuts and settings that let you avoid time sinks.
- Make it easy to choose what to test. Make it clear what is being tested.
- Make your systems modular. Make modules easy to toggle.
- Allow testers to easily switch out and modify what they are testing: anything with the same output should be able to tie into the correct input.
- Make it possible to simulate the systems without running them.
- Show the data; show comparisons.
- Make it easy to file bug reports and provide feedback without leaving your game. Integrate screenshot tools and video recording.
- Test on target devices. Test your lowest-spec targets.
- Make version numbers visible in all game builds, including release.
- Automate functionality testing, but not quality testing.
- Build the game automatically and get new builds continuously without requiring manual intervention.
- Remove all obstacles for build distribution: make it a single click (or less) to get a functional build to play on the right device.
- Always test on target devices: no amount of emulation will ever compensate for real qualitative testing. Have as many diverse target devices available as financially and physically possible.

Stone Tools 1 month ago

Superbase on the Commodore 64

When it comes to databases, I've never been much more than a dabbler. I remember helping dad with PFS:File so he could do mail merge. I remember address books and recipe filers. I once tried committing my comic book collection to ClarisWorks. Regardless of the actual efficacy of those endeavors, working with database management systems never stopped feeling important. I was "getting work done," howsoever illusory it may have been. These days, the average consumer probably shies away from any kind of hardcore database software. Purpose-built apps which manage specific data (address books, invoicing software) do most of our heavy lifting, and basic spreadsheets (Google Sheets, Notion, Airbase) tend to fill in the remaining niche gaps. In the early 80s, though, the industry was hell-bent on transforming rapidly improving home computers into productivity powerhouses, and database software promised to unlock a chunk of that power. Superbase on the Commodore 64 was itself put to work in forensic medicine in England and to help catch burglars in Florida. Maybe it can help me keep track of who borrowed my VHS copy of Gremlins 2: The New Batch. The manual has a three-part tutorial, the first two parts of which have an audio component (ripped from cassette tapes). I will absolutely use it for an authentic learning experience. I'm looking forward to some pre-YouTube tutorial content: "What's up everyone, it's ya boy Peter comin' atchu with another Superbase tutorial. If you're enjoying these audio tapes, drop a like on our answering machine and subscribe to AHOY! Magazine." From first boot, I feel the pain. After the almost instantaneous launching of trs80gp into Electric Pencil last blog, getting Superbase launched in VICE is annoyingly slow. I appreciate a pedantic pursuit of accuracy as much as anyone, but two full minutes to load Superbase is ridiculous for my 2025 interests. Luckily VICE has a "WARP" mode which runs some 1500% faster, bringing boot time to under 10 seconds.
A C64 one could only dream of is a keyboard stroke away, to enable or dismiss on a whim. How spoiled we are! Here I am, a businessman of 1983, knitted tie looking sharp with my mullet, ready to thrust my 70s HVAC business into the neon-soaked future of 80s information technology. (The company must pivot or die!) First things first, "What is a database?" I wonder, sipping a New York Seltzer. According to the very slow audio tutorial, "It's an electronic filing cabinet!" So far, so good. "And just as in an ordinary filing cabinet, information is stored in batches called 'files', and you can think of Superbase as an office containing a number of electronic filing cabinets." OK, so if Superbase is my office, and my office currently contains seven filing cabinets with 150 files each, I'll make seven databases to hold my information? "Superbase will allow you to hold up to 15 files in each database." OK, I'm not sure I heard that correctly. Rather than having seven cabinets with 150 files each, I instead have 70 cabinets with 15 files each? Is this the "office of the future?" Come to think of it, are we even using the same definition of the word "file?" When I ask Marlene to bring me "the Doogan file," I receive a file folder filled with Doogan-related stuff: one client, one file. "Each of the files is made of bits of information known as RECORDS. For example, you may have a file containing names of companies. In that case each company name would be one RECORD." A file which contains only the names of companies? Now I'm learning that records are made of FIELDS. But we were just told that a RECORD is "a bit of information" like a company name. This filing cabinet metaphor is falling apart, and I'm only five minutes into a 60-minute tutorial. Not only did society have to learn how to create new tools for moving into the information age, we also had to learn how to teach one another how to use those tools. In Superbase's case, I find the manual mostly OK.
It offers a glossary, sample code, and a robust rundown of each menu and command. What's missing is an explanation of the mental shift required in moving from analog to digital files. Where a traditional filing cabinet is organized by relation, our C64 will discover relations (though this is not a relational database); a kind of inversion of the physical filing cabinet strategy. Without my 2025 understanding of such things, I would be completely lost right now about how Superbase and databases work. At any rate, working through the tutorial, I do find the operation of the software quite simple so far. Place the cursor where you want to add a field name or field input area, start typing, then set the start and end points of a field, which doubles as a visual way to set the length of that field. The field's reference name is only ever the word to the immediate left of the field entry area. Simple, if inflexible. Setting field types is also easy enough, even if the purpose and usage of the "key" field is never made explicitly clear. It is only ever described as being the field that records will be sorted on by default. Guidance on choosing an appropriate key field and how to format it is essentially nonexistent. Querying records is straightforward, though there is definitely a learning curve. Partials, wildcards, absence of a value, value sets and ranges, and comparatives (values <100, for example) are all possible and chainable. The syntax is relatively clear, even if conventions (the wildcard token, for one) have subtly changed. I've now built something like a phone book and entered some sample data. This usage of the database matches my mental model of the object being replaced, and I'm feeling somewhat confident. But this is also something I could have built with a type-in BASIC program from Popular Computing Weekly.
If I put myself in the mindset of someone reading a contemporary book like Business Systems on the Commodore 64 by Susan Curran and Margaret Norman, it is quite unclear how my filing cabinet data and organizational structure translates to floppy disk. With floppy drives, a printer, and more, I have spent almost $5000 (in 2025 money) on this system. For that outlay of cash, am I really asking too much for someone to help guide me into a "paperless office?" Speaking of which. George Pake of Xerox PARC (yes, that Xerox PARC) gave an interview to Businessweek in June 1975 in which he spoke of his vision for a "paperless office." The later spread of that concept into larger circles seems to owe a lot to F.W. Lancaster. In 1978, Lancaster published Toward Paperless Information Systems and spent a full chapter contemplating what a paperless research lab might look like in the year 2000. Lancaster's vision paralleled a fair amount of what we know today as the internet. To readers of the time it was all brand new conceptually, so he spent a lot of time explaining concepts like "keeping a journal on the computer" and how databases could just as easily be located 5000 miles away as 5 feet away. He couldn't quite envision high-resolution video displays, and expected graphic data to remain on microfilm/fiche. He could envision "pay as you go" data access, however. It should be noted that the phrase "paperless office" does not appear in Lancaster's book (it does in his previous book). That phrase had already started an upward trend before the Pake interview, but in my research it does seem that Lancaster really helped mainstream the concept. Lancaster identified three main functions of computer use in a paperless office. Especially in the 80s, transmit and receive were a long way from being cheap and ubiquitous enough to replace paper between two parties. That sounds obvious, but hype around the "paperless office" made it easy to overlook such flaws.
Besides, wasn't it only a matter of time before the flaws were resolved? Wasn't everyone working toward the same paperless vision? Well, that's hard to say, given the slightly mixed messaging of the time. 1983's The Work Revolution by Gail Garfield Schwartz PhD and William Neikirk says explicitly, "we are at the brink of the paperless office." 1982's The Word Processing Handbook by Russell Allen Stultz cautions us, "The notion of a 'paperless office' is just that, a notion." But May 1983's Compute Magazine keeps the dream alive with a multi-page article, "VICSTATION: A Paperless Office," as though it had already arrived and was waiting for you to catch up. Computer magazines and academic investigations were typically cold on the idea of the "paperless office" ever coming to fruition. Rather, they saw (quite correctly) that if everyone had simple, easy-to-use publishing tools at their fingertips, paper usage would increase. The mainstream, ever one to latch onto a snappy catchphrase, really did seem to push the idea to the masses as an inevitability. A CEO in 1983 really couldn't be blamed for buying into the hype. To not have bought into it would have felt tantamount to corporate negligence. I asked ChatGPT for a modern parallel and all it said was, "Time is a flat circle." Building out anything more advanced than the most rudimentary of rolodexes required a lot of patience and forbidden knowledge. As noted earlier, the manual only gets you so far. There was a decent stream of books published during the early 80s which tried to fill various knowledge gaps. Some would tackle general "using your computer for business" topics while others would target specific software and hardware combinations. Database Management for the Apple, from 1983, the release year for Superbase, has some great illustrations and explanations about databases and how they work conceptually. It digs into how to mentally adjust your thinking from manual filing to electronic filing.
It also includes fully commented source code in BASIC for an entire database program. A bargain for $12.95 ($40 in 2025), but probably ignored by C64 Superbase users? Unfortunately for us in 1983, the book we Superbase users desperately need won't be published for three more years. Superbase: The Book, by Dr. Bruce Hunt, was published in 1986 by Precision Software Ltd, the very makers of Superbase itself, for $15.95 ($47 in 2025). It straight up acknowledges the lack of help over the years in making the most of Superbase. "Part I: Setting Up a System" addresses almost every single thing I complained about in the tutorial. It contains a mea culpa for failing to help users build anything beyond the most rudimentary of address books. It then moves into "the most important discussion in the book": a conceptual framework for thinking about your existing files, and how to translate them into data that leverages Superbase's power, well explained with concrete examples. As well, it works diligently to show you that the way files and fields were set up in the tutorials that shipped with Superbase was woefully inadequate for making good use of Superbase. We learned it by watching you! As an example, what were just "firstname" and "lastname" fields in the tutorial are considered here more thoroughly. We are given a proper mental context for why a name is more complex than it first looks. As data, it is better broken into at least five fields: title, initials, first name, surname, suffix. Heck, I'd throw "middle" in the mix as well. Then Dr. Hunt explains what is actually a very powerful idea: record fields don't have to exist exclusively for human-readable output purposes. That is true, and almost counter to the shallow way fields are treated in the manual, which only ever seemed to consider field data as output to the screen or a printer. "The crucial realization is that you don't need to restrict the fields in the record to the ones that will be printed."
Many examples are given of private data that you might want to attach to a customer record, as well as ways to use fields solely to increase the flexibility of Superbase's query tools. Lastly, in what felt like the book had thoroughly invaded my mind and read my thoughts directly, an entire section is devoted to understanding key values, how they work, and ideas for generating robust, flexible keys. The remainder of the book continues on in the same fashion, providing straightforward explanations and solutions to common user issues and confusions. It's a solid B+ effort, even if the Apple database book feels more friendly and carefully designed. I'd give this book an A had Precision Software not made its customers wait three years for it. Here in 2025, the further into the tutorial I delve, the more the word "deal-breaker" comes up. I'll start with the format of the "Date" field type, and maybe you can spot the problem? We can enter the date in two ways, but either way the year is two digits and ONLY two digits. This restricts our range of possible years to 1900-1999. That's right, returning after a 30-year absence: it's the Y2K problem! Not only does this prevent us from bringing Superbase into the future, but we also cannot log even the recent (relative to 1983) historical past. I had a great-grandmother alive at that time who was born in the late 1800s, yet Superbase cannot calculate her age. Moving on, a feature I enjoy in modern databases (or at least ones more sophisticated than Superbase) is input validation. Being able to standardize certain field data against a master file, to ensure data consistency, would be really nice. It's also a bit of a drag that a record's key value can only ever be a text string, even if you only use numbers. The manual gives a specific workaround for this issue, which is to pad a number string with leading zeros. This basically equates to no auto-increment for you.
Something I very much appreciate is that the entire program can be run strictly through textual commands; no F-keys or menus necessary. In fact, I dare say the menus hide the true power of the system, functioning as a "beginner's mode" where the user is expected to graduate to command-line "expert mode" later. Personally, I say just jump straight into expert mode. We can use a convention in a command to read and write values from records. BASIC-style variables can store those values for further processing inside longer, complex commands. As a developer, I'm happy. As a non-developer, this would be an utter brick wall of complexity, for which I'd probably hire an expert to help me build a bespoke database solution. "Batch" is similar to "Calc" (itself a free-form or record-specific calculator), except it works across a set of records. We can perform a query, store the result as a "list," then "Batch" perform actions or calculations on every record in that list. Very useful, but it comes with a note. "Takes a while" is just south of an outright lie. I must remember that this represents many users' first transition to electronic file management. Anything faster than doing the work by hand had already paid for itself; that's true even today. That said, consider this. I ran "Batch" on eight (8!) records to read a specific numeric field, reduce that value by 10%, then write that new, lower value back into each record. Now, further consider that a C64 floppy can hold about 500 records, which seems like a perfectly reasonable amount of data for a business to want to process. ONE AND A QUARTER HOURS! Look, I know it was magical to type a command, hit a button, and have tedious work done while you took a long lunch. I once tasked a Macintosh with a 48-hour render in Infini-D. Here in 2025, I'm balking even at the 6-minute best case scenario in VICE.
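The back-of-the-envelope math behind that number is easy to check (a quick Python sketch of my extrapolation, using the 1 minute, 6.88 second eight-record timing measured for this post):

```python
# Extrapolate the measured "Batch" time to a full floppy of records.
seconds_for_8 = 66.88            # measured: 1 minute, 6.88 seconds
per_record = seconds_for_8 / 8   # roughly 8.4 seconds per record
records_on_floppy = 500
total_hours = per_record * records_on_floppy / 3600
print(round(total_hours, 2))     # about 1.16 hours of churning
```

Add a little disk and program overhead and "one and a quarter hours" is right on the money.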
On real hardware, we must also heed the advice from the book Business Systems on the Commodore 64 : In fairness, most of the things I'd want to do are simple lookups and record updates from time to time. Were I stuck on 1982 hardware, it would be possible to mitigate the slow processing by working processing-time into my weekly work schedule. I wouldn't necessarily be "happy" about that situation, and may even start to question my investment if that were the end of the features. Luckily, Superbase offers a killer feature which offsets the speed issue: programmability. The commands we've been using so far are in reality one-line BASIC programs, and more complex, proper programs can be authored in the "Prog" menu. We are now unbound, limited only by our knowledge of BASIC (so I'm quite limited) to extend the program, and work around the "deal-breakers" I encountered earlier. Not every standard BASIC command is available (we can't do graphics, for example), but 40 of the heavy hitters are here plus 50 Superbase- specific additions . I don't want to sound naive, but I was shocked at the depth and robustness, yes even the inclusion of its programming language. It's far more forward thinking than I expected for $99 on a 64K machine. But I also cannot credit the manual with giving too much help with these functions. It's quite bare-bones. After all is said and done, the simple form building and robust search tools have won me over, but the limitations are frustrating. Whether I could make this any kind of a daily driver depends on what I can make of the programmability. It's asking a lot of me to become proficient in BASIC here in 2025. But the journey is its own reward. I press onward. Initially I thought I would build a database of productivity software for the Commodore 64, inspired by Lemon64 . The truth is, after my training to-date I am still a fair distance from accomplishing that, though I can visualize a path to success. 
There are two main issues I need to solve within the confines of Superbase's tools and limitations. Doing so will give me more confidence that it is still useful for projects of humble sizes. Thinking of a Lemon64-alike, to constrain the software "genre" field (for example), I need a master list against which to validate my input. Superbase has some interesting commands that appear to do cross-file lookups: The code examples are not particularly instructive, at least not for what I want to do. The linking feature needs a lot more careful attention and practice to leverage. Rethinking my approach to the problem of data conformity, I have come to realize that the answer was right in front of me. All I really need is the humble checkbox. There is no such UI element on a machine which pre-dates the Macintosh and has no GUI operating system, but I can mimic one with a list of genre field names, each of a single-character field length. Type anything into a corresponding field to designate that genre. When doing a query for genre, I can search for records whose matching field is "not empty." Faking it is A-OK in my book. Without a working date solution, my options for using Superbase in 2025 are restricted. I can either only track things from the 20th century, or only track things that don't need dates. Neither is ideal. Working on UNIX-based systems professionally all day long, I think it would be nice to get this C64 on board the "epoch time" train. Date representation as a sequential integer feels like a good solution. It would allow me to do easy chronological sorting, do calendar math trivially, and standardize my data with the modern world. However, the C64's signed integers don't have the numeric precision to handle epoch time's per-second precision. A "big numbers" solution could overcome this, but that is a heavy way just to track the year 2000. If I limit myself to per-day precision (ignoring timezones, ahem), that would cover me from 1970 - 2059. Not bad!
I poked around looking for pre-existing BASIC solutions to the Y2K problem and came up empty-handed. Hopping into Pico-8 (my programming sketchpad of choice), I roughed out my idea as a proof of concept. Then, after many "How do I fill an array with data in BASIC?" simpleton questions answered by blogs, forum posts, and wikis, I converted my Lua into a couple of BASIC routines which do successfully generate an epoch date from YYYY MM DD and back again. Y2K solved! Snippet from my date <-> epoch converter routines; now it's 2059's Chris's problem.

1 REM human yyyy mm dd to epoch day format
5 REM set up our globals and arrays
10 y=2025:m=8:d=29
11 isleap=0:yd=0:ep=0
15 dim dc%(12)
16 for i=1 to 12
17 read dc%(i)
18 next
99 REM this is the program proper, just a sequence of subroutines
100 gosub 1000
200 gosub 2000
300 gosub 3000
400 print "epoch: ";ep
900 end
999 REM is the current year (y) a leap year or not? 0=yes, 1=no
1000 if y-(int(y/4)*4)>0 then leap=1:goto 1250
1050 leap=0
1100 if y-(int(y/100)*100)>0 then goto 1250
1150 leap=1
1200 if y-(int(y/400)*400)=0 then leap=0
1250 isleap=leap
1300 return
1999 REM calculate number of days that have passed in the current year
2000 yd=dc%(m)
2010 yd=yd+d
2020 if isleap=0 and m>2 then yd=yd+1
2030 return
2999 REM the epoch calculation, includes leap year adjustments
3000 ty=y-1900
3010 p1=int((ty-70)*365)
3020 p2=int((ty-69)/4)
3030 p3=int((ty-1)/100)
3040 p4=int((ty+299)/400)
3050 ep=yd+p1+p2-p3+p4-1
3060 return
4999 REM days passed tally for subroutine at 2000
5000 data 0,31,59,90,120,151
5001 data 181,212,243,273,304,334

--------------------------------------------------------------------------------

5 REM epoch date back to human readable format
10 y=0:m=0:d=0
11 isleap=0:yd=0:ep=20329
15 dim md%(12)
16 for i=1 to 12
17 read md%(i)
18 next
100 gosub 2000
200 print y, m, d
900 end
999 REM is the current year (y) a leap year or not? 0=yes, 1=no
1000 if y-(int(y/4)*4)>0 then leap=1:goto 1250
1050 leap=0
1100 if y-(int(y/100)*100)>0 then goto 1250
1150 leap=1
1200 if y-(int(y/400)*400)=0 then leap=0
1250 isleap=leap
1300 return
1999 REM add days to 1970 Jan 1 counting up until we reach our epoch (ep) target
2000 y=1970:dy=0:td=ep
2049 REM ---- get the year
2050 gosub 1000
2100 if isleap=0 then dy=366
2200 if isleap>0 then dy=365
2300 if td>dy or td=dy then td=td-dy:y=y+1:goto 2050
2399 REM ---- get the month
2400 m=1:dm=0
2500 dm=md%(m)
2700 if m=2 and isleap=0 then dm=dm+1
2800 if td>dm or td=dm then td=td-dm:m=m+1:goto 2500
2899 REM add in the remaining days, +1 because calendars start day 1, not 0
2900 d=td+1
3000 return
4999 REM days-per-month lookup array data
5000 data 31,28,31,30,31,30
5001 data 31,31,30,31,30,31

I'm hedging here as I've had a kind of up-and-down experience with the software. I have the absolute luxury of having the fastest, most tricked out, most infinite storage of any C64 that ever existed in 1983. Likewise, I possess time travel abilities, plucking articles and books from "the future" to solve my problems. I have it made. There are limitations to be sure, starting with the 40-column display. But I also find the limitations kind of liberating? I can't do anything and everything, so I have to focus and zero in on what data is truly important and how to store that data efficiently. The form layout tools are as simplistic as it gets, which also means I can't spend hours fiddling with layouts. Even if the manual let me down, the intention behind its design unlocks a vast untapped power in a Commodore 64. It's almost magical how much it can do with so little. I can easily see why it won over so many reviewers back in the day. Though the cost and complexity would have frustrated me in 1983, in the here and now, with the resources available to me, it could possibly meet my needs for a basic, occasional, nuts-and-bolts database.
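For anyone wanting to sanity-check the BASIC routines, here is the same per-day epoch arithmetic transcribed into Python (my transcription, not part of the original listings; Python's datetime makes verification trivial):

```python
from datetime import date

def ymd_to_epoch_days(y, m, d):
    # cumulative days before each month in a non-leap year,
    # mirroring the dc% DATA table in the BASIC listing
    dc = [0, 31, 59, 90, 120, 151, 181, 212, 243, 273, 304, 334]
    leap = (y % 4 == 0 and y % 100 != 0) or (y % 400 == 0)
    # day-of-year, adding the leap day only once February has passed
    yd = dc[m - 1] + d + (1 if leap and m > 2 else 0)
    ty = y - 1900
    # the same whole-year and leap-year adjustment terms as the
    # epoch calculation subroutine (p1 through p4)
    return (yd + (ty - 70) * 365 + (ty - 69) // 4
            - (ty - 1) // 100 + (ty + 299) // 400 - 1)

# cross-check against Python's own calendar
assert ymd_to_epoch_days(2025, 8, 29) == (date(2025, 8, 29) - date(1970, 1, 1)).days == 20329
```

The 20329 value matches the ep seeded into the reverse (epoch-to-date) BASIC program above.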
It would require learning a fair bit more BASIC to really do genuinely useful things, but overall it's pretty good! Ways to improve the experience, notable deficiencies, workarounds, and notes about incorporating the software into modern workflows (if possible). While Warp mode in VICE is very handy, it's only truly useful when I hit slowness due to disk access. I'm sure I'll find more activities that benefit as this blog progresses, but for text-input based productivity tools, warp mode also warps the keyboard input. Utterly unusable. Basically I just use the system at normal speed. When I commit to a long-term action like loading the database, sorting, or something, I temporarily warp until I get feedback that the process is complete. Superbase The Book tells us that realistically a floppy will accommodate about 480 records. However, 1Mb and 10Mb hard drives are apparently supported, so storage should be fine with a proper VICE setup. My setup:

- VICE v3.9 (64-bit, GTK3) on Windows 11
- x64sc ("cycle-based and pixel-accurate VIC-II emulation")
- drive 8: 1541-II; drive 9: 1581
- model settings: C64C PAL
- printer 4: "IEC device", file system, ASCII, text, device 1 to .out file
- Superbase v3.01 (multi-floppy, 1581-compatible)

Create information. Transmit information. Receive information.

"Ultimately, the workstation configuration will probably replace the usual office furnishings as the organization evolves toward the 'paperless office'" - The Office of the Future, Ronald P. Uhlig, 1979

"Transformation of the office into a paperless world began in the early 1980s. Computers have been an integral component of the paperless office concept." - OMNI Future Almanac, 1982

"This information revolution is transforming society through basic changes in our jobs and lifestyles. Indeed, the paperless office of the future and computerized home communications centers are information age miracles not to be hoped for, but expected."
- America Wants to Know: The Issues and the Answers of the Eighties, George Horace Gallup, 1983

Batch timing, eight records:
- Base C64: 1 minute, 6.88 seconds
- In WARP: 6.22 seconds

Extrapolated to a full floppy (about 500 records):
- Base C64: 1.25 hours
- In WARP: 6 minutes

My two goals:
- I want to constrain some data to a standardized set of fixed values.
- I want to solve the Y2K problem.

Superbase's linking commands:
- : select a second file in the same database only whose records you want to look up (no cross-database lookups)
- : specify the specific field in the file against which you want to do lookups
- : close the link to the second file
- : "reverse" the linked files; the linked file becomes primary and vice versa

Key input repeating like the system is demon possessed? Warp mode is probably still on. A snapshot saves the C64 state, but not the emulator state. So if you have a disk in the drive when you take a snapshot, that disk will not be inserted when you restore state. Save your snapshot with a name that reminds you which diskette should be inserted in which drive to continue smoothly from the snapshot. Superbase developers understood that data migration and interoperability are critical. We cannot have our data locked down into a proprietary format with no option to move to a different system. Print and Export accept formatting parameters which allow us to effectively duplicate CSV format. Printing with VICE generates an ASCII file. Exporting puts the data onto our virtual disk image. To get data off that disk image into our host operating system, we need to be able to browse disk contents and extract files. On Windows, DirMaster works nicely. For macOS and Linux, the DirMaster dev created dm, a command-line utility for browsing and working with C64 disk image files. Speed. I'm spoiled, I admit it. For standard searches it's snappy enough, but batch operations are tedious. Superbase isn't particularly easy to use with multiple floppies. The manual addendum for v3.01 says that a two-drive setup is supported, but I didn't really see how to do that.
The initial data disk formatting routine offered no opportunity to point to drive #9, for example. I wish VICE would show the name of each .d64 file currently inserted into the virtual floppy drives. It's a little tough not having access to modern GUI elements in the form builder, like pull-down menus. "Build Your Own" is a powerful, flexible, time-consuming process using Superbase's programming tools. Getting around limitations of the pre-built fields, forms, etc seems possible with enough BASIC knowledge, time, and desire to commit to Superbase . Once that data is in there, it's honestly easier to let it stay there than try to work out some export/import function. This may be an issue for your use case.

Kartik Agaram 1 month ago

Quickly make any LÖVE app programmable from within the app

It's a very common workflow: type out a LÖVE app, try running it, get an error, go back to the source code. How can we do all of this from within the LÖVE app itself, so there's nothing to install? This is a story about a hundred lines of code that do it. I'm probably not the first to discover the trick, but I hadn't seen it before and it feels a bit magical. Read more

NULL on error 2 months ago

Carimbo now has better stack traces and Sentry integration

As I’ve already said countless times, I’m working on my first game for Steam, which you can check out online at reprobate.site . Since it’s a paid game, I need to provide proper bug support. With that in mind, and based on both professional and personal experience, I decided to use Sentry . Integrating C++ and Lua should have been straightforward, but I ran into some issues when using the Conan package manager. Initially, the package wasn’t being included in the compiler’s include flags, which led me to open an issue both on Conan Center and on Sentry Native . After spending a whole day on this, I eventually found out that the fix was actually pretty simple. Now Carimbo has native support for Sentry, both on the web (WebAssembly) and natively (Android, iOS, Linux, Windows, and macOS). Here’s how I managed to get it working. This was certainly my biggest problem. For some reason, even when following Conan’s documentation, I couldn’t get it to include the header path for Sentry Native. In the end, my solution looked like this: The native part was pretty easy, but for the web part I had an insight while walking: I realized I could inject JavaScript using the Emscripten API. Since the game assets are stored in a compressed file, PhysicsFS provides an API to handle them transparently. It’s great for distributing the game — you only need the cartridge.zip and the executable — and it works even better on the web. The engine must provide a searcher so that Lua can find the game’s other Lua scripts. For this, I use a custom searcher: it first looks for the scripts inside the game package, and if they’re not found, it falls back to the interpreter’s default search to load them from the standard library. To improve stack traces, the secret lies in the second parameter of lua.load: you can pass a string starting with "@" followed by the file name. This alone gives you a much richer stack trace.
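Lua's package searchers have a close analogue in Python's import machinery, which makes the "look in the game package first, then fall back" pattern easy to illustrate. This is only an analogy (a hedged sketch with a made-up in-memory "cartridge", not Carimbo's PhysicsFS-backed searcher):

```python
import importlib.abc
import importlib.util
import sys

# Hypothetical in-memory "cartridge": module name -> source code
CARTRIDGE = {"cartridge_player": "def greet():\n    return 'hi'\n"}

class CartridgeFinder(importlib.abc.MetaPathFinder, importlib.abc.Loader):
    def find_spec(self, name, path, target=None):
        if name in CARTRIDGE:
            return importlib.util.spec_from_loader(name, self)
        return None  # not ours: Python's default searchers take over

    def create_module(self, spec):
        return None  # use the default module object

    def exec_module(self, module):
        # compiling under a distinctive filename is the same trick as
        # Lua's "@" chunk-name prefix: tracebacks show a real source name
        code = compile(CARTRIDGE[module.__name__], f"@{module.__name__}", "exec")
        exec(code, module.__dict__)

sys.meta_path.insert(0, CartridgeFinder())

import cartridge_player
print(cartridge_player.greet())  # -> hi
```

Anything not in the cartridge still resolves through the interpreter's normal search, mirroring the fallback behavior described above.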
In Carimbo, I have a terminate hook that catches any exception and atexit hooks to always handle cleanup. This way, I can provide Sentry support in a practically universal and abstract manner for the engine’s user. You can find more details about the engine and its implementation in the official repository: github.com/willtobyte/carimbo .

NULL on error 4 months ago

Poor Man’s Shaders

Spoiler: it’s not shaders. I’m waiting for a universal solution that the SDL developers are working on — cross-platform, multi-API shaders, SDL_shadercross . The idea is that you write shaders in a single language, and at runtime, they get compiled for the target GPU. Unfortunately, it’s a large and complex project, and it will take time before it becomes stable. In the meantime, in my Carimbo engine, I was wondering if I could implement something similar to shaders — something that would allow Lua code to write arbitrary pixels into a buffer and stream that buffer into a texture. So I created what I call a canvas, which is basically a texture the same size as the screen, rendered after certain elements. The set_pixel function receives a pointer to a uint32_t buffer that exactly matches the texture size. This pointer is actually a Lua string, which I found to be the most performant way to transfer data between Lua and C++ without relying on preallocated buffers. On the Lua side: Some effects I’ve created so far: https://youtu.be/GUWTWRQuzxw https://youtu.be/usJ9QM7V8BI https://youtu.be/DUhQmL91cNA
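The buffer idea is roughly this (a Python sketch of the concept, not Carimbo's actual API: pack 32-bit pixels into one contiguous byte blob and hand the whole thing to the renderer once per frame):

```python
import struct

W, H = 8, 8                 # a tiny stand-in "screen"
buf = bytearray(W * H * 4)  # one little-endian uint32 (ARGB) per pixel

def set_pixel(x, y, argb):
    # write the 32-bit color straight into the byte blob, the way a
    # Lua string buffer would be handed across to C++ in one piece
    struct.pack_into("<I", buf, (y * W + x) * 4, argb)

# a simple per-pixel "effect": horizontal grayscale gradient
for y in range(H):
    for x in range(W):
        shade = 255 * x // (W - 1)
        set_pixel(x, y, 0xFF000000 | (shade << 16) | (shade << 8) | shade)
```

Streaming one blob per frame keeps the per-call overhead at the language boundary to a single transfer rather than one call per pixel.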

NULL on error 4 months ago

AI will replace programmers—just not yet, because it still generates extremely inefficient code.

I was working on my engine, which includes a sort of canvas where Lua code can generate chunks of pixels and send them in batches for the C++ engine to render. This worked very well and smoothly at 60 frames per second with no frame drops at low resolutions (240p, which is the screen size of my games). However, when I happened to try 1080p, the frame rate dropped. Since I was in a rush and a bit lazy—because I can’t afford to spend too much time on personal projects—I decided to use AI to optimize it, and this was the best solution I could squeeze out. It went from 40 FPS down to 17, much worse than the initial implementation! Naturally, the code was not just complex, but also way slower. That’s when I decided to take my brain off the shelf and came up with this solution: Kabum! Smooth 60 frames per second, even at 8K resolution or higher.

maxdeviant.com 4 months ago

The ComputerCraft Iceberg

My friend Steffen recently turned me on to the ComputerCraft mod for Minecraft. For the uninitiated—a group I myself was a member of until a mere 24 hours ago—ComputerCraft is a mod that adds programmable computers and turtles to the game. "Turtles, you say? What, like these fellas?" Cute as they may be, the sea variety of turtles are not the ones I'm excited to talk about today. Let me introduce you to a new kind of turtle: These turtles—which get their name from turtle graphics—are little robots that you can control programmatically. Inside each one is a ComputerCraft computer. Players are able to write programs in Lua and execute those programs on the turtle. Programs have access to a number of different APIs, including the turtle module that provides functions for controlling the turtle. For instance, calling the turtle.forward() function will move the turtle forward. Calling turtle.dig() will have the turtle dig the block in front of it. It all started with a video Steffen sent me of a turtle-driven tree farm he had built in his world. The turtle would walk a loop around a patch of trees, checking each spot to see if a tree was grown yet. If it detected a grown tree, it would chop down the tree, replace it with a sapling, and continue on to the next spot. I decided to start up a new Minecraft world to give it a go. For my initial foray into working with turtles, I copied the tree farm program using the code that was visible in the video. I transcribed it, making a few tweaks as I went, and soon ended up with an automated tree farm of my own: During the course of building it and trying it out, I even managed to find a bug in the original program that needed fixing: With my wood situation sorted, I turned my attention to mining. Initially I wanted to write a branch mining program to assist me in quickly finding more diamonds, but this proved to be somewhat complex.
I scoped down the implementation to a simple tunnel miner that would mine a tunnel and place torches on the wall every so often: It was at this point that my software engineer brain started screaming at me. I had these two working programs, but was already noticing common functions that were duplicated between the two. I factored out a new module to house the helper functions I had written for dealing with the turtle's inventory: Keeping with the mining theme, the next program I wrote was for digging out vertical mine shafts. I could imagine wanting to have different-sized mine shafts based on the need, so for this program I explored taking user input as arguments to the program: While working on that program, I noticed that it could be generalized into a general-purpose function. While in this case we care about mining out a layer of blocks, the core algorithm of moving a turtle around a plane could have lots of different uses. I pulled this out into its own function: This refactoring then enabled me to quickly whip up a new program for having a turtle farm wheat for me: At this point it was bedtime, and I had wrapped up my first day of working with ComputerCraft. I had gotten to grips with the basics of Lua (as this was my first time using it in any real capacity), written a handful of different programs, pulled some common functionality into modules, and was feeling pretty happy with it all. As I got ready for bed, I found myself pondering how I would maintain all of this code as I continued to expand my ComputerCraft usage. Something I had observed during my first day was that I spent a lot of time testing my programs "in production", as it were. The general flow of creating a new program looked something like: I spent a lot of time watching the turtle churn through its instructions, waiting for it to reach the point in the program that needed testing and observation.
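The generalized plane-walk described there can be sketched in a few lines (a Python outline of the serpentine-traversal idea, not the author's actual Lua):

```python
def traverse_plane(width, depth, visit):
    """Visit every cell of a width x depth plane in a serpentine
    (boustrophedon) path, so the walker never backtracks a full row."""
    for z in range(depth):
        # even rows go left-to-right, odd rows come back right-to-left
        cols = range(width) if z % 2 == 0 else range(width - 1, -1, -1)
        for x in cols:
            visit(x, z)

# the same skeleton serves mining out a layer or farming a field
visited = []
traverse_plane(3, 2, lambda x, z: visited.append((x, z)))
print(visited)  # [(0, 0), (1, 0), (2, 0), (2, 1), (1, 1), (0, 1)]
```

Swapping in a different visit callback (dig, plant, harvest) is what makes the one traversal core reusable across programs.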
I even created a separate Minecraft world that I would use to test my programs in before letting the turtles run them in my actual world. The process was slow and time-consuming. The answer to this, of course, was testing. I needed a way to write tests that I could run over and over as I made changes to the programs, and test that they were all still working in a variety of different scenarios. Bringing forth this vision of automated testing required one crucial component: a way to simulate ComputerCraft in a controlled environment. I'd spent the previous day steeped in Lua, but I set it aside for a moment and broke ground on a new Rust project. My initial idea for the simulator was quite simple: create a simplified representation of a Minecraft world, a simulated turtle that exists in that world, and an embedded Lua VM to run the programs. A few hours of hacking later, and I could write tests like this: There's still more surface area that the simulator will need to cover, but I'm excited that I was able to prove out the concept quickly. That's all for now, but I'll likely be writing more about my ComputerCraft adventures in the future.

1. Write the first version of a program
2. Run it on the turtle
3. See something not work as expected
4. Refine the program
5. Rinse and repeat.
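In spirit, such simulator tests look something like this (a drastically simplified Python stand-in; the real project is Rust with an embedded Lua VM, and these class names are made up for illustration):

```python
# A toy version of the simulator idea: a grid world, a simulated
# turtle, and a test that a program behaves as expected.
class World:
    def __init__(self, blocks):
        self.blocks = set(blocks)  # coordinates that contain a block

class Turtle:
    def __init__(self, world, pos=(0, 0)):
        self.world, self.pos = world, pos

    def forward(self):
        x, y = self.pos
        nxt = (x + 1, y)
        if nxt in self.world.blocks:
            return False           # blocked, like turtle.forward()
        self.pos = nxt
        return True

    def dig(self):
        x, y = self.pos
        self.world.blocks.discard((x + 1, y))

# "test": a dig-and-move program clears a block and advances
w = World(blocks={(1, 0)})
t = Turtle(w)
assert not t.forward()             # blocked by the block at (1, 0)
t.dig()
assert t.forward() and t.pos == (1, 0)
```

The payoff is that a whole suite of scenarios like this runs in milliseconds, instead of watching a turtle churn through its instructions in-game.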

Weakty 5 months ago

Radio Silence - An Unfinished Playdate Game

About 8 months ago, I started building my second game for the Playdate. The working title was "Black Hole" (later to be renamed "Radio Silence") and the idea was simple: you are a crew member in a spaceship that needs to enter a black hole. And so you orbit around the black hole, collecting raw materials from space debris, which you then use to craft items to help you successfully enter and cross through a black hole without getting crushed (Yes, I know that it's physically impossible to go through a black hole, but this is also a fictional game). When I started this game, I was coming off the momentum of having finished my first game for the Playdate, which was a success beyond my expectations (people actually paid for it!). I was determined to keep up that momentum. This time around I wanted to build something different. Whereas my first game used the Pulp game engine on the web, I now wanted to use Lua and to build something that was conceptually simpler—less of a story, and more of an arcade game. In the end, I lost steam on this project and have decided to let it go. But first, let me walk you through how far I got and some of the things I learned. As I mentioned, it's an arcade-style game and conceptually has a pretty simple story. The core loop is to collect materials, craft upgrades, and then to descend safely into the black hole. There were a few areas that I could embellish to make it more than this simple game loop, but at the end of the day, there wasn't much more to it than that. I'll walk you through the few main core mechanics that I had to build, show you some screenshots, and share what I enjoyed and disliked about that process. I began by creating a very simple mechanic, the ability for the ship to orbit a black hole and slowly descend into it. I spent maybe a week and a half on this, and it wasn't too complicated. I had to learn a little bit of math and got some help from GPT. 
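That "little bit of math" amounts to a decaying orbit: advance an angle each frame, shrink the radius, and place the sprite on the resulting circle (a generic Python sketch of the idea, not the author's Playdate Lua; the speed and decay values are placeholders):

```python
import math

def orbit_step(cx, cy, angle, radius, speed=0.05, decay=0.2):
    """One frame of a decaying orbit: advance the angle, shrink the
    radius a little, and return the ship's new screen position."""
    angle += speed
    radius = max(0.0, radius - decay)
    x = cx + radius * math.cos(angle)
    y = cy + radius * math.sin(angle)
    return x, y, angle, radius

# run enough frames and the orbit collapses into the black hole's center
x, y, a, r = 0.0, 0.0, 0.0, 100.0
for _ in range(600):
    x, y, a, r = orbit_step(200, 120, a, r)
```

Collision with the black hole then reduces to checking whether the radius (or the sprite distance) has shrunk below some threshold.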
In the end, I had a sprite that rotated around another sprite, the first of which would slowly move towards the second. Again, it was a matter of adding collision detection between the two so that when the ship touched the black hole, it would disappear, and the game would be over. But this alone was kind of boring. I needed to introduce some kind of mechanic that would keep the player paying attention, and I also wanted to find some way of using the crank. And so I built a strange little UI that was meant to represent the navigation of the ship to stay in balance with the black hole's orbit. The idea was that as you would rotate the crank, you needed to keep a special nav point in between two closing-in meters. It would get progressively harder as the game moved on, and using the crank, you would have to gently navigate this balance. Eventually, I had something in place so that when you lost control of the nav point's balance, it would cause the ship's health to decrease to a point where if it reached zero, it would fall into the black hole. For a while, I was satisfied with this, and then for a time, I thought, "This isn't very good, let's remove it." And then a few months later, I brought it back again. So I was feeling a little fickle about this feature. Before I got to the next large mechanic, I spent a good amount of time on the atmosphere of the game - I made a title screen, intro music, game music, and a generative background with shooting stars. This part was relaxing and fun, but also a place where I could noodle indefinitely. Here was when I played around with adding 1-bit characters from an asset pack I found on itch.io. I figured this could be a fun way to introduce a way of guiding the player on what to do if necessary. From the outset, I knew that I wanted to have a crafting component to this game. What I didn't know was how much work that would take. 
There is a lot of business logic involved in building a crafting system, and it really slowed down the momentum of the game. In fact, it got to a point where I really didn’t want to work on the game because of the crafting system. Some of the work involved:

- Building the crafting menu sprite itself
- Building a means of collecting raw materials (asteroids)
- Displaying items that you’ve picked up in a grid
- Making it so that acquired items can be combined with other items, i.e. creating recipes
- Showing the recipe once it’s been crafted in the right-hand side pane of the menu
- Implementing the actual effects of an item once it has been crafted

It was around this time that I started to think to myself, “I don’t really want to make a game like this,” and I realized that I was losing interest. I did have fun creating the raw materials, the items, and their respective descriptions (which had to be created as PNGs).

At the beginning of this project, I started documenting my weekly progress on the game. This literally meant sitting in front of a camera, talking about what I had built, combining that with footage of the Playdate and of development, and adding some reflections on top. I had hopes that this would a) be interesting, b) provide a record of my progress that I could feel motivated by, and c) maybe get people interested in the game. Pretty typical stuff. While I do enjoy video editing, and I don’t actually mind being on camera, I did find that this was pretty tedious by the end. It was taking away energy that I could have spent on the game, or on other things. None of the videos got many views, just about 100 or so per video. I was surprised to see that some repeat viewers would show up to comment and provide feedback as well as encouragement. That in itself is nice, especially when you’re building something on your own, bit by bit.

More importantly, I have a complicated relationship with social media. Mostly, I find participation in it to be a net negative for my life, personally. I felt that if I wanted something to be “successful” by the standards of YouTube, I would have to compromise some of my values on how I would present the content. This would mean creating something oriented around clickbait, with thumbnails and titles that catfish rather than truly represent what I want to make. At one point I asked myself: am I trying to build an audience as fast and as large as possible, or do I just want to make a game, be able to say I finished it, and share what I learned? Documenting projects on YouTube doesn’t need to be mutually exclusive with those goals, but I think that if any part of me had an interest in reaching more and more people and having a “successful” video, it actually took away from my original intent. It’s a tough balance, to be sure.

I eventually did finish the crafting system. And while I would call it the motivation killer of the project, it was what came after that made me realize I didn’t want to do any more work on this thing. After finishing these mechanics, when I looked at the game, I saw something very robotic. There weren’t any animations; there wasn’t any fluidity, or juice as some people refer to it in games. And that was where I started to feel the most defeated. In fact, it wouldn’t be too much work to add a few animations to spruce up the game. But something about even just having to learn how the animation class worked on the Playdate felt insurmountable. I no longer wanted to work on the project. For what the game was, in the simplicity of its game loop, I just didn’t feel the idea and the execution so far merited any more time. Simple as that.

If it isn’t already clear from this retrospective, there are a few things I would change. I wouldn’t make YouTube videos at all. I just don’t think it’s worth it. While I did enjoy some parts of it, I think it took away the energy to finish the game, and it distracted and discouraged me. I would also have spent more time brainstorming what I wanted to build for my next game. While I was proud of myself for not picking the first idea that came into my head (which is what I did for my last game), and while I did indeed brainstorm several possible options, I should have spent more time thinking about what I really wanted to build.

It’s hard to let go of projects, especially ones that you’ve spent a fair bit of time on (and even more especially, ones that you have publicly discussed). It’s hard to let go, but life’s a bit too short to spend working on projects that you are toiling through and don’t find much joy in anymore. Thankfully, I learned a lot as I went along.

Speaking of life, things are about to change around here anyway: we’re having a kid! It’s nice to let go of projects to make way for this big change, and this way I have one less unfinished thing floating around in my brain during this period. Someday, when I feel a bit more like I’ve got my feet under me with parenting, I might scope out something small I can build in the few scraps of time I have after the long (but rewarding) days to come. Will it be a game? Maybe. But maybe it’ll be some writing, or drawing. Maybe something with a little less staring at a screen.

NULL on error 6 months ago

How to avoid dynamic linking of Steam’s client library using a very old trick

As you know, this blog is more focused on sharing code snippets than on teaching, so today I’m going to show you something I recently discovered. If you’ve been following me, you know I’ve been working in my free time on a 2D game engine where creators can build games using only Lua — and I’d say, even fairly complex ones. Right now, I’m working on a point-and-click game that you can play here: https://bereprobate.com/. It’s built using this same engine, and I’m publishing builds in parallel to Steam and the Web using GitHub Actions. The thing is, Steam — which is the main target platform for this game — supports achievements, and I want to include them. But to use achievements, you have to link the Steam library to your engine. The problem is, doing that creates a dependency on that library in the binaries, which I don’t want. I also don’t want to maintain a separate build just for that. Then I thought: “Why not load the Steam library dynamically? Use LoadLibraryA on Windows and dlopen on macOS. (Sorry Linux — it’s Proton-only for now.)” I tried the experiment below, and it worked. If the DLL/dylib is present, the Steam features work just fine. If not, everything runs normally. Achievement class
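The experiment boils down to a small loader that resolves the Steam entry points at runtime. Here is a minimal sketch in C; `SteamAPI_Init` and the library file names are the real Steamworks ones, but the wrapper functions and the Linux branch are my own illustrative assumptions, not the engine’s actual code:

```c
/* Optional runtime loading of the Steam client library, instead of
 * linking against it at build time. */
#include <stdbool.h>

#ifdef _WIN32
  #include <windows.h>
  typedef HMODULE lib_handle;
  static lib_handle open_lib(const char *name) { return LoadLibraryA(name); }
  static void *get_sym(lib_handle h, const char *s) { return (void *)GetProcAddress(h, s); }
  #define STEAM_LIB "steam_api64.dll"
#else
  #include <dlfcn.h>
  typedef void *lib_handle;
  static lib_handle open_lib(const char *name) { return dlopen(name, RTLD_NOW); }
  static void *get_sym(lib_handle h, const char *s) { return dlsym(h, s); }
  #ifdef __APPLE__
    #define STEAM_LIB "libsteam_api.dylib"
  #else
    #define STEAM_LIB "libsteam_api.so" /* for completeness; the post ships Proton-only on Linux */
  #endif
#endif

typedef bool (*steam_init_fn)(void);

/* Returns true only when the Steam library is present and initialises.
 * When it returns false, every Steam feature is simply skipped. */
static bool steam_try_init(void) {
    lib_handle h = open_lib(STEAM_LIB);
    if (!h) return false;  /* DLL/dylib not shipped next to the binary: run normally */
    steam_init_fn init = (steam_init_fn)get_sym(h, "SteamAPI_Init");
    if (!init) return false;
    return init();
}
```

Every other Steam call (unlocking an achievement, for instance) can be gated behind the same check, so the binary itself never carries a hard dependency on the library.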

JSLegendDev 7 months ago

Where to Find Inspiration For Making Small Games

In my opinion, the most approachable way to get into game development is to make small games. However, making the usual Pong or Flappy Bird clone is boring. In this post, I’ll share where I find inspiration for building interesting small games. PICO-8 is a fantasy console, meaning a console that doesn’t actually exist. In practice it’s a virtual machine with a bunch of artificial limitations (the number of colors it can display, resolution, limited sprites, etc.). There are two components to it. The first is for developers: you can develop your game entirely in PICO-8 using the Lua programming language. It has a built-in code editor, sprite editor, level editor, and sound editor. Since this fantasy console is so limited in what it can do, developers can easily avoid making games that are too big in scope. The second component is the console aspect, where you can play games made in PICO-8. While I think limitations breed creativity, PICO-8’s limitations are a bit too much for me as a developer. However, it’s worth playing games made in it; you’ll be inspired by how creative these small games are. I recommend installing the P8GO app, which lets you discover PICO-8 games in a TikTok-style feed, allowing you to cover a lot of games quickly. There are a lot of games released in the past that are easy to remake today. I recommend looking up and trying old arcade, NES, and Game Boy games. A good exercise is to remake them while simplifying or improving the user experience, since many of these games were obtuse; gaming was relatively young back then, and a lot of the established quality-of-life features we’re used to today weren’t common. In the startup world, you’ll often see successful businesses emerge by taking one of the many features an established company already has and focusing exclusively on building that one feature as their product. You can take the same approach in game dev.
In The Legend of Zelda: Link’s Awakening, there is a part of the game where you need to do a trading sequence: exchange an item with an NPC to get another item you can exchange with another NPC, until you obtain the item you need. What if you made a game that was exclusively about trading things with NPCs to get an item the player needs? While that might not sound particularly fun, you can always take inspiration from things outside of game dev to spice up your game concept. For example, Ryan Trahan, a popular YouTuber, made a series called the penny challenge. He starts with a penny and needs to cross America using only that as his starting budget. He then proceeds to buy, sell, and trade items, gradually increasing his wealth so he can pay for transportation and other expenses to achieve his goal. Now, let’s modify the game idea a bit. What if you made a game where the player starts in a small island town and wants to leave? To do so, they must buy an expensive ticket for a cruise that will enable them to leave. However, they’re broke and need to explore the town for items and trade them with NPCs until they make enough money to buy the ticket and leave. Add to this some challenging aspects, like certain NPCs only being interested in certain items, and you’ve got a nice little game. You can add a bit of randomness in which NPC wants what, and this will make each run different, allowing your game to be highly replayable. In the popular game franchise Pokémon, you often need to pass by a Pokémart to buy various items useful in your adventure. What if you made a game solely about managing a Pokémart? You don’t have to use the Pokémon IP; you could replace it with your own monster-catching IP. Instead of selling Pokéballs, you would sell CaptureCubes or something to that effect. The whole game would take place within the mart.
Every time a new customer enters, they can either buy from your inventory or you can buy from them, which could be your only way of getting certain items in demand by other customers. Your goal is to stay in business, since you have expenses, while keeping customers happy by having the items they need so that they keep coming to your mart. I could go on, but I think you get the point. In the first example of the previous section, I mentioned taking inspiration from a YouTuber’s video series. This is an example of taking inspiration from outside of gaming, and I’d argue that the truly novel ideas can be found there. What if you made a game about managing a library? A game about being a moderator of a social media website? A game about being a lumberjack? A game about managing a retro handheld company? What if you made a game based on a book; how would you adapt it so that the game is fun? The list goes on. In the end, there are a lot of ways to find good small game ideas. In this post, I shared the inspiration sources I use. If you want to learn game development, web development, or game development using web dev tools, I recommend subscribing so you don’t miss out on future posts and tutorials. Thanks for reading! In the meantime, you can check out my previous content.

Playtank 7 months ago

My Game Engine Journey

There, but certainly not back again. It’s sometime around the late 1980s/early 1990s that some developers start talking about a “game engine” as a thing. Maybe not even using the term “engine” yet, but in the form of C/C++ libraries that can be linked or compiled into your project to provide you with ready-made solutions for problems. Color rendering for a particular screen, perhaps, or handling the input from a third-party joystick you want to support. The two Worlds of Ultima games are built on the Ultima VI: The False Prophet engine, as a decently early example. When you put a bundle of these ready-made solutions together, it becomes an engine. In those days, the beating heart would usually be a bespoke renderer: software that transforms data into moving pictures and handles the instruction set of whichever hardware it’s expected to run on. What id Software perhaps revolutionised, if you are to believe John Romero in his autobiography Doom Guy: Life in First Person (an amazing book), was to make developer tools part of this process. To push for a more data-driven approach where the engine was simply the black box that you’d feed your levels and weapons and graphics into. This is how we usually look at engines today: as an editor that you put data into and that makes a game happen. To give some context for this, I thought I’d summarise my personal software journey. One stumbling step at a time, and not all of it strictly engines. When I grew up in the 80s/90s, I was often told that programming was simply too hard for Average Joe poor kids like myself. You had to be a maths genius and you had to have an IQ bordering on Einstein’s. At a minimum, you needed academic parents. If you had none of those, programming wasn’t for you. Sorry. This is the mindset I adopted and it affected my early days of dabbling profoundly.
Where I lived, in rural Sweden, there were no programmer role models to look up to, and there was no Internet brimming with tutorials and motivation either. Not yet. We didn’t have a local store with game-stocked shelves or even ready access to computers at school. Again, not yet. But eventually, maybe around the age of 10 or so, I ran into QBASIC on the first home PC that was left over from my dad when he upgraded. Changing some values in the game Gorillas to see what happened was my introduction to programming in its most primitive form. Ultimately, I made some very simple goto-based text adventures and even an attempt at an action game or two, but I didn’t have enough context and no learning resources to speak of, so in many ways this first attempt at dabbling was a dead end. It’s clear to me today, looking back, that I always wanted to make games, and that I would probably have discovered programming earlier if I had been introduced to it properly. Even if I felt programming was too complicated, I did pull some games apart and attempt to change things under the hood. One way you could do this was by using a hex editor (hex for hexadecimal) to manipulate local files. This is something you can still use for many fun things, but back then hexadecimal was part of how games were packaged on disk. (Maybe it still is and I’m letting my ignorance show.) The image below is from Ultima VII: The Black Gate seen through a modern (free) hex editor called HxD. As you can see, it shows how content is mapped in the game’s files. Back then, my friends and I would do things like replace “ghoul” in Ultima VIII with “zombi” (because it has to be the same number of letters), or even attempt to translate some things into Swedish for some reason. (To be fair, the Swedish translation of Dungeon Keeper 2 is in every way superior to the English simply because of how hilariously cheesy it is.)
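That parenthetical is the whole trick: as long as the replacement has exactly the same number of bytes as the original, file offsets don’t shift and the game never notices. A tiny sketch in C of that in-place patch, operating on a buffer read from the file (the strings are just the example from the post):

```c
#include <stddef.h>
#include <string.h>

/* Replace the first occurrence of `old` with `repl` inside a byte buffer,
 * but only if both strings are the same length (so no offsets move).
 * Returns the patch offset, or -1 if lengths differ or `old` is absent. */
static int patch_bytes(unsigned char *buf, size_t len,
                       const char *old, const char *repl) {
    size_t n = strlen(old);
    if (n == 0 || strlen(repl) != n) return -1;  /* lengths must match */
    for (size_t i = 0; i + n <= len; i++) {
        if (memcmp(buf + i, old, n) == 0) {
            memcpy(buf + i, repl, n);            /* patch in place */
            return (int)i;
        }
    }
    return -1;                                   /* not found */
}
```

In practice you’d read the game file into `buf`, patch it, and write it back; a hex editor like HxD just does the same thing interactively.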
To grab this screenshot I could still find the file from memory, demonstrating just how spongy and powerful a kid’s brain really is… With Duke Nukem 3D, and to a lesser extent DOOM, I discovered level editors. The Build Engine, powering the former, was a place where I spent countless hours. Some of the levels I made, I played with friends. I particularly remember a church level I built that had sneaky pig cops around corners, and how satisfying it was to see my friends get killed when they turned those corners. How this engine mapped script messages to an unsigned byte, and memorising those tiny message codes and what they meant, were things I studied quite deeply at the time. I fondly remember a big level I downloaded at some point (via 28.8 modem, I think) that was some kind of tribute level built to resemble Pompeii at the eruption of Vesuvius. It’s a powerful memory, and I’m quite deliberately not looking for that level, so I get to keep the memory of it instead. The fact that walls couldn’t overlap, because it wasn’t actually a 3D engine, was one of the first stumbling steps I took towards seeing how the sausage gets made. Several years after playing around with the Build Editor, I discovered WorldCraft. I built a church here too for some reason, despite being a filthy secular Swede, and tried to work it into a level for the brilliant Wasteland Half-Life mod. This was much harder to do, since it was fully 3D, and you ran into the limitations of the day. The engine could only render about 30,000 polygons at a time, meaning that sightline optimisations and various types of load portals were necessary. Things I learned, but struggled with anyway. Mostly because the Internet was still not that great as a resource. Had I been smarter, I would’ve started hanging around in level design forums. But level design never stuck with me the way programming eventually would.
During this time, I also learned a little about tools like 3D Studio Max, but as with programming in the past, I thought you had to be much better than I was to actually work on anything. My tip to anyone who is starting out in game development: don’t bring yourself down before you even get started. It can deal lasting damage to your confidence. During the late 90s and early 2000s, something came along that finally “allowed me” to make games, at least in my head. At first it was DarkBASIC, a BASIC variant with added 3D rendering capabilities, produced at the time by the British company The Game Creators. This discovery was amazing. Suddenly I was building little games and learning how to do things I had only dreamed of in the past. None of it was ever finished, and I always felt like I wasn’t as good as people from the online communities. It’s pretty cool, however, that Rami Ismail hung out in these forums and that I may even have competed against him in a text adventure competition once. Along the way, however, I did learn to finish projects. I made two or three text adventures using the DarkBASIC sequel, DarkBASIC Professional, and even won a text adventure competition all the way back in 2006 with a little game I called The Melody Machine. In 2005 I enrolled in a game development education in the town of Falun, Sweden, called Playground Squad. It was the first year that they held their expanded two-year vocational education for aspiring game designers, programmers, and artists. My choice was game design, since I didn’t feel comfortable with art or code. This was a great learning experience, particularly meeting like-minded individuals, some of whom are still good friends today. It’s also when I started properly learning how the sausage gets made, and got to use things like RenderWare Studio, an early variant of an editor-focused game engine, where designers, programmers, and artists could cooperate more directly to build out and test games.
It was never a hit the way Unity or the UDK would become, but I remember it as being quite intuitive and fun to play around with. We made one project in it, a horde-shooter thing. I made the 3D models for it in Maya, which isn’t something I’ve done much since. I don’t remember what SimBin called their engine, but I got to work in two different iterations of it in my first real job at a game studio, as an intern starting in 2006. One engine was made for the older games, like RACE: The WTCC Game, which became my first published credit. The other was deployed on consoles and was intended to be the next-generation version of SimBin technology. There I got to work on particle effects and other things, all scripted through Lua or XML if I recall correctly, and wrote up bugs in bug-tracking tools while performing light QA duties. To be honest, I’m not sure SimBin knew what they needed any designers for. But I was happy to get my foot in the door. My best lesson from SimBin was how focused it was on the types of experiences they wanted. They could track the heat on individual brakes, the effects of the slipstream behind a car in front of you, and much more. They also focused their polygon budget on the rear of cars, since that’s the part that you see the most. You typically only see the front of a car model in the mirror, rendered much smaller than you see the rear of the car in front of you. This is an example I still use when talking about where to put your focus: consider what the player actually sees the most. I did work with the visual scripting tool Kismet (precursor to Blueprint) and Unreal’s then-intermediary scripting language UnrealScript in my spare time in 2006, for a year or so. It had so many strange quirks to it that I just never got into it properly. First of all, Unreal at the time used a subtractive approach to level editing, unlike the additive approach that everyone else was using, which meant that level design took some getting used to.
With BSP-based rendering engines, the additive norm meant that you had an infinite void where you added brushes (like cubes, stairs, cones, etc.) and that was your level. In the UDK, the subtractive approach meant that you instead had a filled space (like being underground) where you subtracted brushes to make your level. The results could be the same, and maybe hardcore level designers can tell me why one is better than the other, but for me it just felt inconvenient. I never got into the UDK properly, because I always felt like you had to jump through hoops to get Unreal to do what you wanted it to. With Kismet strictly tied to levels (like a Level Blueprint today), making anything interesting was also quite messy, and you had to strictly adhere to Unreal’s structure. My longest stint at a single company, still to this day, was with Starbreeze. This is the pre-Payday 2 Starbreeze that made The Chronicles of Riddick: Escape from Butcher Bay and The Darkness. The reason I wanted to go there was the first game, the best movie tie-in I had ever played, a game that really blew my mind with its clever hybridisation of action, adventure, and story. Starbreeze was very good at making a particular kind of game: highly cinematic elements mixed with first-person shooter. If this makes you think of the more recent Indiana Jones and The Great Circle, that’s because Machinegames was founded by some of the same people. The Starbreeze Engine was interesting to work with, with one foot firmly in the brushy BSP shenanigans of the 90s (additive, thankfully), and the other trying to push forward into the future. Its philosophies, including how to render a fully animated character for the player in a first-person game, and how scripting was primarily “target-based” in the words of the original Ogier programmer, Jens Andersson, are things I still carry with me.
But as far as the work goes, I’m happy that we don’t have to recompile our games for 20 minutes after moving a spawnpoint in a level anymore. (Or for 10-24 hours to bake production lighting…) During my time at Starbreeze, I finally discovered programming and C++ and learned how to start debugging the Starbreeze Engine, something that made my job (gameplay scripting) a lot easier and finally introduced me to programming in a more concrete way at the ripe age of 26. At first, I tried to use the DarkBASIC-derived DarkGDK to build games in my spare time, since I understood the terminology and conventions, but soon enough I found another engine to use that felt more full-featured. It was called Nuclear Fusion, made by the American one-man company Nuclear Glory Entertainment Arts, and I spent some money supporting them during that time. Unfortunately, they now seem to have vanished off the face of the Earth, but I did recently discover some of the older versions of the software on a private laptop from those years. As far as projects go, I never finished anything in this engine, but I ended up writing the official XInput plugin for some reason. Probably the only thing I ever wrote in plain C++ to be published in any form. Having built many throwaway prototypes by this time, but never quite finished anything, I was still looking for that piece of technology that could bridge the gap between my limited understanding of programming and that coveted finished game project I wanted to make. At this point, I’m almost six years into my career as a game developer and my title is Gameplay Designer. It’s in 2011-2012 that I discover Unity. On my lunch breaks and on weekends, I played around with it, and it’s probably the fastest results I’ve ever had in any game engine. The GameObject/Component relationship was the most intuitive thing I had ever seen, and my limited programming experience was an almost perfect match for what Unity required me to know.
Unity became my first introduction to teaching as well, with some opportunities at game schools in Stockholm that came about because a school founder happened to be at the Starbreeze office during lunch break one day and saw Unity over my shoulder. “Hey, could you teach that to students?” All of two weeks into using it, my gut response was “yes,” before my rational brain could catch up. But it turned out I just needed to know more than the students, and I had a few more weekends to prepare before the course started. Teaching is something I’ve done off and on ever since—not just Unity—and something I love doing. Some of my students have gone on to have brilliant careers all across the globe, despite having had the misfortune of getting me as their teacher at some point. Since 2011, I’ve worked at four different companies using Unity professionally, and I have been both designer and programmer at different points in time, sometimes simultaneously. It’s probably the engine I’m most comfortable using, still to this day, after having been part of everything from gameplay through cross-platform deployment to hardware integration and tools development in it. You can refer to Unity as a “frontier engine,” meaning that it’s early to adopt new technologies and its structure lends itself very well to adaptation. You set it up to build a “player” for the new target platform, and you’re set. Today it’s more fragmented than it used to be, with multiple different solutions to the same problems, some of which are mutually exclusive. If you ask me whether I think it’s the best engine, my answer would be no, but I’ll be revisiting its strengths and weaknesses in a different post. The same person who pulled me in to teach Unity also introduced me to Unreal Engine 4 in the runup to its launch. I was offered an opportunity to help out on some projects, and though I accepted, I didn’t end up doing much work.
It coincided with the birth of my first child (in 2013) and therefore didn’t work out as intended. I’ve still used Unreal Engine 4 quite a bit, including working on prototypes at a startup and teaching it to students. It’s a huge leap forward compared to the UDK, maybe primarily in the form of Blueprint. Blueprint is the UE4 generation of Kismet and introduced the object form of Blueprints that you’d be used to today. Rather than locking the visual scripting to a level, Blueprints can be objects with inheritance. They are C++ behind the scenes, and the engine can handle them easily and efficiently using all the performance tricks Unreal is known for. Funnily enough, if you came from the UDK, many of the Kismet helper classes and various UnrealScript template shenanigans are still there in Blueprint and Unreal C++, wrapped into helper libraries. It’s clearly an engine with a lot of legacy, and the more of it you know before starting, the better. Autodesk Stingray is an engine that was developed from the Swedish BitSquid engine after Autodesk purchased it and repurposed it for their own grand game engine schemes. BitSquid was a company founded by some of the developers who once made the Diesel engine, which was used at the long-since defunct Grin and at Overkill, the studio that later merged into Starbreeze. When I worked with it, Autodesk had discontinued the engine, but three studios were still using it and supporting it with internal engine teams. Those three were Arrowhead, Fatshark, and Toadman. I worked at Toadman, as Design Director. As far as engines go, Stingray has some really interesting ideas. Two things struck me, specifically. The first is that everything in the engine is treated essentially as a plugin, making it incredibly modular. Animation tool? Plugin. Scripting VM? Plugin. The idea of a lightweight engine with high extensibility is solid. I’m not sure it was ever used that much in practice, but the intention is good.
Another thing I bring with me from those days isn’t strictly about Stingray, but about a fantastic data management tool called Hercules that Toadman used. It allowed you to make bulk changes to data, say doubling the damage of all weapons with a single command, and was an amazing tool for a system designer. It decoupled the data from the game client in ways that still inspire me to this day. Sadly, as of earlier this year (2025), Toadman is no longer around. The jump between Unreal Engine 4 and Unreal Engine 5 is not huge in terms of what the engine is capable of, even if Epic certainly wants you to think so (Lumen and Nanite come to mind). But there is one big difference, and that’s the editor itself. The UE5 editor is much more extensible and powerful than its older sibling, and is both visually and functionally a complete overhaul. There’s also a distribution of Unreal Engine 5 called Unreal Editor for Fortnite that uses its own custom scripting language called Verse, which is said to eventually be merged into the bigger engine. But I simply have no experience with that side of things. My amateur level designing days are long since over. Probably the biggest change between the UDK and UE5 is that the latter wants to be a more generic engine, something that can power any game you want to make. But in reality, the engine’s high-end nature means that it’s tricky to use for more lightweight projects on weaker hardware, and the legacy of Unreal Tournament still lives on in the engine’s core architecture and workflows. As with Unity, I don’t think it’s the best engine. But I’ll get into what I consider its strengths and weaknesses in a future post. I’ve spent years working with the UDK, UE4, and UE5, in large teams and small, but haven’t released any games with them thus far. Projects have been defunded, cancelled, or otherwise simply didn’t release for whatever reason.
Imagine that you release a new update for your game every week, and you’ve been doing so consistently since 2013. This is Star Stable Online, a technical marvel simply for the absolutely insane amounts of data it handles, not to mention the constant strain on pipelines when they’re always in motion. My biggest takeaway from working alongside this engine last year (2024) is its brilliant snapshot feature, which allows you to save the game’s state at any moment and replay from that moment whenever you like, even sharing snapshots with other developers. This approach saves a ton of time and provides good grounds for testing the tens of thousands of quests that the game has in store for you after its 14-year (and counting) life span. You may look at its graphics and think, “why don’t they build this in Unreal?”, but let’s just say that Unreal isn’t an engine built to handle such amounts of data. The visuals may improve, but porting it over would be a much larger undertaking than merely switching out the rendering. Can’t really talk about it. It’s Stingray, but at Arrowhead, and it powers Helldivers 2. Like the engine’s Fatshark and Toadman variants, it has some features and pipelines that are unique to Arrowhead. I hope I get to play around with even more engines than I already have. They all teach you something and expand your mind around how game development can be done. At the end of the day, it doesn’t matter which engine you use, and it’s not often that you can make that decision yourself anyway if you’re not footing the bill. As an old colleague phrased it, “there’s no engine worse than the one you’re using right now.” Fortunately, there’s also no better engine for getting the work done.

- QBASIC (around ’89 or ’90?)
- Hex editing (early 90s)
- Build Engine (early-mid 90s)
- WorldCraft/Hammer (late 90s, early 00s)
- DarkBASIC/DarkBASIC Pro (late 90s, early 00s)
- RenderWare Studio (’05 or ’06)
- SimBin Engine (’06), first professional work
- UDK (’05-’07)
- Starbreeze Engine (’07-’12)
- DarkGDK/Nuclear Fusion (’09-’12)
- Unity (’12-today)
- Toadman Stingray (’17-’20)
- UE4 (’14-’21, sporadically)
- UE5 (’21-today)
- Star Stable Engine (2024)
- Arrowhead Stingray (2025-?)

Kartik Agaram 8 months ago

A markup language and hypertext browser in 600 lines of code

Here's a document containing a line of text: I'm building in Lua, so I'm reusing Lua syntax. Here's how it looks: Such boxes are the workhorse of this markup language. There are 3 other kinds of boxes: , and . Rows and cols can nest other boxes. But let's focus on text boxes for a bit. Here's a text box containing two lines of text: Since it's hypertext, you get a few attributes. You can set , (background color) and (also color). Each of these is a Lua array of 3 or 4 elements for red, green, blue and transparency. Notice that the text is not centered vertically. The browser doesn't know details about the font like how far down the baseline is. You have to center manually using . I imagine this sort of thing will get old fast. Probably best to use and sparingly. Since this is a hypertext browser, the main attribute of course is hyperlinks. To turn a text box into a link you can click on, use the target attribute. Links get some default colors if you don't override them. The target is a file path that will open when you click on it; there's no networking here. Press alt+left to go back. What else. The one remaining attribute text boxes support is . You can use any font as long as it's Vera Sans (or you're welcome to open up my program and put more fonts in). But you can adjust the font size and select bold and italic faces. However, before we can see them in action I should discuss inline styles. A text box contains lines. Each line so far has been a string. But it can also be a string augmented with attributes. Here's a line with an inline 'tag': (So many irritating silly curly brackets! But I hope you'll stick with me here. The goal is a simple markup language that is easy to implement while still providing some basic graphical niceties.) Inline segments of text are surrounded in and prefixed with an alphanumeric name. (So they have to begin at the start of a word, after whitespace or punctuation.) The name gets connected up with attributes inside a block called . 
To stretch our legs, here's a text box with two lines, each containing inline markup for font and color. Each line's attributes are independent. So far you can't change font size or add borders inline, because it complicates matters to change line height within a line, and also seldom looks nice. Inline should arguably be used sparingly. The pattern I've been using more is to give each text block a uniform font and mix and match combinations of text boxes. There are 2 ways to combine text boxes: and . Here's a vertical array of text boxes: And here's a horizontal array: It's hard to see, so let's make the border more obvious. You can add attributes to and just like to . All children of share a width, and all children of share a height. Children can also share other attributes when specified in a attribute: Widths and heights will grow and shrink depending on what you put in them, but you can also fix a width in and boxes, and lines will wrap as needed. Notice that it'll try to wrap at word boundaries if it can. But it'll chop mid-word if a line would be too empty without the entire word. For completeness, here's a box. All it does is add padding within and . Within , needs to specify a , and within , a . The other dimension will resize as needed. Putting it all together, here's a table: No wait, that's not right: That's annoying! You have to specify columns before rows, or you're stuck manually sizing widths. But, 600 lines of code! Here it is. You'll need LÖVE, but the code should be easy to port to other graphics toolkits, the markup to JSON literals, etc. The program is a zip file containing the source code.

0 views
Andre Garzia 9 months ago

The Web Should Be A Conversation

For a very long time, I've argued that the Web should be a conversation, a two-way street instead of a chute just pushing content at us. The Web is the only mass medium we currently have where most people can have a voice. I'm not saying all these voices have the same loudness, nor that every single person on our beautiful planet and space stations can actually post to the Web, just that it is the one place where everyone has the potential to be a part of it. Contrast it with streaming services, radio, or even the traditional publishing industry, and you'll see that a person alone with an idea faces a lot more obstacles there than they would just starting a blog.

For the last couple of years, there has been a colossal push by Silicon Valley companies towards generative AI. Not only are bots going crazy gobbling up all the content they can see, regardless of whether they have the rights to do so, but content farms have been pushing drivel generated by such machines into the wider Web. I have seen a horrible decline in the quality of my search results, and the social platforms that I'm a part of — the ones with algorithmic timelines such as Instagram and YouTube — have been pushing terrible content towards me, the kind that tries to get a rise out of you. They do this to "toxically foster" engagement: trying to get you so mad that you dive deeper into either an echo chamber or a flame war.

The enshittification of the Web is real, but it is happening at a surface level. All the content you love and want is still there; it is just harder to discover because FAANG companies have got a nuclear-powered shit firehose spraying bullshit all over the place. There are many ways to fight this, and in this blog post I'll outline what I am doing and try to convince you to do the same. Yes, this post has an agenda; a biased human wrote it.

TL;DR: We need to get back into blogging. We need to put care and effort into the Blogosphere.
A human-centric Web, in my own opinion, is one that is made by people to be browsed by people. The fine folks at the IndieWeb have been hammering at this for a very long time. On social networks such as Facebook or YouTube, you don't own your platform. You're just feeding a machine that will decide whether or not to show your content to people, depending on how much their shareholders can make out of your work and passion.

Your content is yours: When you post something on the web, it should belong to you, not a corporation. Too many companies have gone out of business and lost all of their users' data. By joining the IndieWeb, your content stays yours and in your control.

You are better connected: Your articles and status messages can be distributed to any platform, not just one, allowing you to engage with everyone. Replies and likes on other services can come back to your site so they're all in one place.

You are in control: You can post anything you want, in any format you want, with no one monitoring you. In addition, you share simple readable links such as example.com/ideas. These links are permanent and will always work.

— Source: IndieWeb

I'm not advocating for you to stop using these bad social networks. You do whatever you want to do. I'm urging you to also own your own little corner of the Web by making a little blog. What will you post on it? Well, whatever you want. The same stuff you post elsewhere. A blog doesn't need to be anything more complicated than your random scribblings and things you want to share with the world. I know there are many people who treat it as a portfolio to highlight their best self and promote themselves; if that is you too, go forward and do it! If that is not you, you can still have a blog and have fun. There are thousands of ways to start a blog; let me list some that I think are a good way to go. These are just some ways to do it. There are many more. When you start your own blog, you're joining the conversation.
You don't need the blessing of a social network to post your own content online. You certainly don't need to play their algorithm game. Join the conversation as you are, not as these companies want you to be. The Web becomes better when you are your authentic self online. Post about all the things that interest you. It doesn't matter if you're mixing food recipes with development tips. You contain multitudes. Share the blog posts and content creators that you like. Talk about your shared passions on your blog. Create connections.

The way to avoid doomscrolling and horrible algorithmic timelines is to curate your own feed subscriptions. Instead of relying on social networks and search engines to surface content for you, you can subscribe to the websites you want to check often. Many websites offer feeds in RSS or Atom formats, and you can use a feed reader to keep track of them. There are many feed readers out there (heck, even I made one, more about it later). Let me show you some cool ones: Once you're in control of your own feed, you step away from algorithmic timelines. You can use feed readers to subscribe not only to blogs, but also to your favourite creators on YouTube and other platforms. If the website you want to subscribe to does not offer a feed, check out services like rss.app and others to try to convert it into a feed you can use in your feed reader of choice.

With time, you'll collect many subscriptions and your Web experience will be filled with people instead of bots. Use OPML exporting and importing in your feed reader to share interesting blogs with your friends and readers. Word of mouth and grassroots connections between people in the blogosphere are how we step out of this shit. Learn a bit of HTML to add a blogroll link to your template. Sharing is caring.

As I mentioned before, I have been thinking about this for a long time. I suspect I might have created one of the first blogging clients on MacOS 8 (yeah, the screenshot is from MacOS 9).
I have no idea how many times I have implemented a feed reader, a blogging client, or a little blogging CMS. Even this blog you're reading right now is a home-grown Lua-based blogging CMS I made in an afternoon.

BlogCat is my latest experiment. It is an add-on for Firefox that adds blogging features to the browser. It aims to reduce the friction between blogging and Web browsing by making weblogs a first-class citizen inside your user agent. You can subscribe to websites, and import and export OPML, all from inside the browser. You can have a calm experience checking the latest posts from the websites you follow. Being a part of the conversation is also easy because BlogCat supports posting to Micropub-enabled sites and also microblogging to Bluesky and Mastodon. It uses a handy sidebar so you can compose your post while browsing the web. I've been using it for a couple of weeks now and am enjoying it a lot. Maybe you will enjoy it too. Anyway, this is not a post about BlogCat, but this post is what originally inspired BlogCat.

As I drafted this post weeks ago and mused about the Web I want and the features I want in Web browsers, I realised I knew how to make them. Instead of simply shouting about it, I decided to build it myself. You too can be a part of the conversation. You too can help build the Web you want. Let's walk away from the enshittification of the Web by linking hands across the blogosphere.

Micro.Blog: A simple and powerful blogging platform by people who actually love blogs. You need a subscription for it, but it can be as cheap as one buck per month.
Jekyll using GitHub Pages: If you're a developer and already know a bit about Git, you can quickly spin up a blog using Jekyll and GitHub Pages. That allows you to start a blog for free.
Wordpress: It pains me to write this one. I don't like Wordpress, but I understand it is an easy way to start blogging for free.
Blogger: Blogger still exists! A simple way to create a blog.
Feedly: A SaaS that is liked by many.
Create an account and subscribe to your blogs from any Web device you've got.
NetNewsWire: A polished macOS app that has been the gold standard for feed readers for more than a decade. It is FOSS.
Akregator: From our friends at KDE, a FOSS desktop feed reader for Linux and Windows.
Miniflux: A minimalist feed reader. You can join their SaaS or self-host it.
Rad Reader: A minimalist desktop reader for macOS, Linux, and Windows.
BlogCat: Yep, I made this. More about this later. It is an add-on for Firefox that adds blogging features to the browser.

0 views
Andre Garzia 10 months ago

Creating a gamebook engine

I made this some time ago but never blogged about it. Unfortunately, I lost some of the source code, but that would be easy to rebuild. I decided to check what the development options were for the Playdate handheld console by Panic after receiving an email from them (I'm on the mailing list for the device). The offering is just too damn polished. Check out the Develop for Playdate page. Like everything Panic does, it is damn well done. You can use the SDK to develop in C or Lua or a combination of both. They also offer a web IDE called Pulp that is similar to a pico-8 development workflow, with tools for crafting fonts, screens, sprites, audio, and scripting.

I went ahead and downloaded the SDK. I already had a license for Nova — which is the fancy development editor they make — and they ship lots of integrations for that editor with the SDK. Everything works out of the box. This is an example source loaded in Nova (using the Playdate theme and the extension). I'm using the editor task system to run the sample in the included emulator. It just works… I might be coding a gamebook engine for the Playdate… that was not in my strategy post, but the muse calls me and it is rude not to answer her call. And this is what my gamebook editor looks like:

0 views
Andre Garzia 10 months ago

Creating a simple posting interface

I recently noticed that I haven't been posting much on my blog, which surprised me because blogging has always been among my favourite activities. The main obstacle that has prevented me from posting more often is that I didn't have an easy-to-use interface or app for doing so. When this blog was done with Racket + NodeJS, I implemented a MetaWeblog API server and thus could use apps such as MarsEdit to post to my blog. Once I rebuilt it using Lua, I didn't finish implementing that API — I got it halfway done — and thus couldn't use that app anymore. I implemented the Micropub API in Lua but have yet to find an app I like that supports that spec.

Thankfully, Micropub is such an easy spec to implement that creating a little client for it can be done in hours if not minutes. Today, in about two hours, I made a small single-file HTML editor for my blog. It allows me to create new posts with ease, including file uploads. It is actually the interface I'm using to write this post right now. It is a simple HTML form with 137 lines of vanilla JavaScript. All the JS does is simple cosmetics, such as disabling buttons while posting or uploading is happening (so I don't press them twice), and using the fetch API to send data to the server.

Of course this editor is super simple. There's barely any error checking and most of the errors will just be console messages, but it is enough for my day-to-day usage. It serves its purpose, which is to provide an easy way for me to make posts. I wonder what new features I'll implement as the week moves on.
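To see why a Micropub client is so quick to write: per the Micropub spec, creating a post is just a form-encoded POST of `h=entry` plus properties such as `name` and `content`, with a bearer token in the Authorization header. The editor above builds that body in browser JavaScript and sends it with fetch; here is a hedged sketch of the same body-building step in Lua (my own illustration, not the blog's actual code).

```lua
-- Percent-encode a string for an application/x-www-form-urlencoded body.
local function urlencode(s)
  return (s:gsub("[^%w%-%._~]", function(c)
    return string.format("%%%02X", string.byte(c))
  end))
end

-- Build the body of a Micropub "create" request. h=entry marks the post
-- as an h-entry; name (the title) is optional, content is the post body.
local function micropub_body(title, content)
  local parts = { "h=entry" }
  if title then parts[#parts + 1] = "name=" .. urlencode(title) end
  parts[#parts + 1] = "content=" .. urlencode(content)
  return table.concat(parts, "&")
end

-- micropub_body("Hello", "My first post")
--   -> "h=entry&name=Hello&content=My%20first%20post"
```

The client then POSTs this body to the site's Micropub endpoint with `Content-Type: application/x-www-form-urlencoded` and an `Authorization: Bearer <token>` header; the server replies with a Location header pointing at the new post.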

0 views
Kartik Agaram 1 year ago

Sokoban

The kids have been enjoying Baba is You, and watching them brought back pleasant memories for me of playing the classic crate-pushing game Sokoban. So I went looking and found a very nice project that has collected 300 classic publicly available Sokoban puzzles. Then of course I had to get it on my phone so I could play it anywhere. The result is the sokoban.love client. video; 1 minute

On a technical level, with sokoban.love I've finally managed to figure out how to scale modifying programs on my phone beyond the tiny scripts Lua Carousel supports. Carousel treats each 'page' of the carousel as a separate script, and shares the screen between the code for the page and the drawings the page makes. When you switch between pages, Carousel saves and restores code for you, so the script currently on screen is always the one currently drawing. sokoban.love comes bundled with multiple pages of code (including 7000 lines for all the levels; those would be a pain to copy-paste into Carousel). The pages all collaborate to create the app; switching pages changes nothing about the code that is running. The screen is also no longer shared between the app and its code-editing environment. When you run the app, the Carousel menu disappears, replaced by a single button to exit the app and edit its code. This approach works well for editing on a phone.

The trade-off I made is to jettison the live-editing experience. You can still get that with sokoban.love, but you'll need to get on a computer and connect driver.love to it, like all my Freewheeling Apps.

As a bonus, sokoban.love includes a simple solver to eliminate some gruntwork for moving the player on touchscreens, which you can see in action in this video. Tapping on the buttons along the edges moves the player a single square. Tapping on an empty square moves the player there if that is possible without moving any crates.
Tapping on a crate and then an empty square will try to get the crate there if that is possible without moving any other crates.
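The tap-to-move check described above boils down to a plain breadth-first search: treat walls and crates alike as blocked, and ask whether the tapped square is reachable from the player. The sketch below is my own minimal reconstruction of that idea, not sokoban.love's actual code, and the grid encoding ('#' wall, 'o' crate, '.' floor) is invented for the example.

```lua
-- Can the player walk from (sr, sc) to (tr, tc) without pushing anything?
-- Breadth-first search over floor squares; crates and walls both block.
local function reachable(grid, sr, sc, tr, tc)
  local seen = { [sr .. "," .. sc] = true }
  local frontier = { { sr, sc } }
  local dirs = { { 1, 0 }, { -1, 0 }, { 0, 1 }, { 0, -1 } }
  while #frontier > 0 do
    local next_frontier = {}
    for _, cell in ipairs(frontier) do
      local r, c = cell[1], cell[2]
      if r == tr and c == tc then return true end
      for _, d in ipairs(dirs) do
        local nr, nc = r + d[1], c + d[2]
        local row = grid[nr]
        local key = nr .. "," .. nc
        -- only step onto open floor we haven't visited yet
        if row and row[nc] == "." and not seen[key] then
          seen[key] = true
          next_frontier[#next_frontier + 1] = { nr, nc }
        end
      end
    end
    frontier = next_frontier
  end
  return false
end
```

Moving a tapped crate works the same way, except the search is over pushes rather than steps: each BFS state is a (crate position, player side) pair, and a push is legal only when the player can first walk, by the check above, to the square behind the crate.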

0 views
Ginger Bill 1 year ago

Why I Hate Language Benchmarks

[Originally from a Twitter Thread] Original Twitter Post

I don't know if I have "ranted" about this here before but: I absolutely HATE comparing programming languages with "benchmarks". Language benchmarks rarely ever actually test for anything useful when comparing one language against another. This goes for ANY language.

Even in the best-case scenario, you are comparing different compilers for the same language (and the same input). This means that you are just comparing how well the optimizing backends work for those compilers. Comparing different languages is not even in the same category. When comparing languages, you are not just comparing the optimizing backend of the compiler (assuming it even is compiled), but a completely different input. And most benchmarks rarely use semantically equivalent code to test against. The implementations vary widely.

And even in the case where the input is semantically equivalent AND the compiler backends for each language use the same "library" (e.g. LLVM), even then the semantics of each language may not allow for certain passes. LLVM is a good example, which assumes C and C++ semantics. If your language does not adhere to C and C++ semantics, then most of the passes in LLVM cannot be used. And some compilers may have different default "flags" too, which makes dumb comparisons rarely equal (e.g. native vs portable microarchitectures).

Clarifying the "semantically equivalent" aspect: a printing procedure in one language may be drastically different too. Runtime vs compile-time type information or none at all, flushing after each call or not, richer formatting or not, etc. Are you even comparing the same thing?

There is also the "idiomatic" aspect, which I hate too. "Idiomatic" in one language is a subjective and personal construct, and may produce very different results compared to "non-idiomatic" code. "Idiomatic" styles might produce slower code in general; the tests won't show this.
One of the most egregious websites for this is https://programming-language-benchmarks.vercel.app. I recommend anyone to compare two languages of the same ilk and actually read the differences between the code in the tests; note how they are nothing alike most of the time. Different implementations & logic.

n.b. I do personally try to make distinctions between the language, the compiler, the core library, and the ecosystem, as much as I can. I know most people do not and just lump everything together as a single package. This is due to most languages having a single implementation. But if you come from a C or C++ background, like my own, then you are/were confronted with a selection of different toolchains from the start (MSVC, Clang, GCC, Intel, tcc, 8cc, etc.). And you are usually forced to write/import your own core library too (e.g. C's is awful). For a language like Lua, there are different implementations, but they pretty much offer the same "ecosystem" and just differ in how they are run (i.e. VM vs JIT). And for many people, the choice of which to use is dictated by the use case.

In summation, metrology is hard. You actually need to know what you are comparing against; whether that thing is even measurable (quantitatively or qualitatively) in the first place; and whether the things you are comparing are actually useful or valid for what you want to know. Comparing multivariate things against each other and going "yep, that entire 'language' is faster than this one" is misguided at best, and idiotic at worst. Please treat "benchmarks" such as these as mostly pseudo-science, not science. Just because something has loads of numbers and "measurements" does not make it "scientific".

0 views