Latest Posts (20 found)

Adding a Book Editor to My Pure Blog Site

Regular readers will know that I've been on quite the CMS journey over the years. WordPress, Grav, Jekyll, Kirby, my own little Hyde thing, and now Pure Blog. I won't bore you with the full history again, but the short version is: I kept chasing just the right amount of power and simplicity, and I think Pure Blog might actually be it.

But there was one nagging thing. I have a books page that's powered by a YAML data file, which creates a running list of everything I've read with ratings, summaries, and the occasional opinion. It worked great, but editing it meant cracking open a YAML file in my editor and being very careful not to mess up the indentation. Not ideal. So I decided to build a proper admin UI for it. And in doing so, I've confirmed that Pure Blog is exactly what I wanted it to be: flexible and hackable.

I added a new Books tab to the admin content page, and a dedicated editor page. It's got all the fields I need: title, author, genre, dates, a star rating dropdown, and a Goodreads URL. I also added CodeMirror editors for the summary and opinion fields, so I have all the markdown goodness they offer in the post and page editors. The key thing is that none of this touched the Pure Blog core. Not a single line.

[Image: My new book list in Pure Blog]
[Image: A book being edited]

Pure Blog has a few mechanisms that make this kind of thing surprisingly clean. A custom functions file is auto-loaded after core, so any custom functions I define there are available everywhere, including in admin pages. I put my save function here, which takes the books data and writes it back to the data file, then clears the cache, exactly like saving a normal post does. Again, zero core changes.

Then there's the escape hatch for when I do need to override a core file. I added both the admin content page (where I added the Books tab) and the new editor to the ignore list, so future Pure Blog updates won't mess with them. It's a simple text file, one path per line. Patch what you need, ignore it, and move on.

The books page itself is where it gets a bit SSG-ish. It's powered by a PHP file that loads the YAML, sorts it by read date, and renders the whole page. It's essentially a template, not unlike a Liquid or Nunjucks layout in Jekyll or Eleventy. Same idea for the books RSS feed. Using a YAML data file for books made more sense to me than markdown files like a post or a page, as it's all metadata really. There's no real "content" for these entries.

Put those three things together and you've got something pretty nifty: a customisable admin UI, safe core patching, and template-driven data pages, all without a plugin system or any framework magic. Bloody. Brilliant.

I spent years chasing the perfect CMS, and a big part of what I was looking for was this: the ability to build exactly what I need without having to fight the platform, or fork it, or bolt on a load of plugins. With Kirby, I could do this kind of thing, but the learning curve was steep and the blueprint system took me ages to get my head around. With Jekyll/Hyde, I had the SSG flexibility, but no web-based CMS I could log in to and create content with; I needed my laptop. Pure Blog sits in a really nice middle ground: it's got a proper admin interface out of the box, but it gets out of the way when you want to extend it.

I'm chuffed with how the book editor turned out. It's a small thing, but it's exactly what I wanted, and the fact that it all lives outside of core means I can update Pure Blog without worrying about losing any of it. Now, if you'll excuse me, I have some books to log.
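To make the shape of this concrete, here's a minimal sketch of the idea. The real site does this in PHP, and I don't know Pure Blog's actual file layout or the exact schema of the books data file, so treat this as a hypothetical Python stand-in (using PyYAML, with invented field names and paths): load the YAML, sort by read date, render a list, and write changes back. The cache-clearing step the post mentions is only noted as a comment, since it's specific to Pure Blog's internals.

```python
# Hypothetical sketch of a YAML-backed books page. The real site does this in
# PHP inside Pure Blog; field names and the file path below are invented.
from datetime import date
from pathlib import Path

import yaml  # pip install pyyaml

BOOKS_FILE = Path("data/books.yaml")  # assumed location, not Pure Blog's real path


def load_books():
    """Read the YAML data file and return a list of book dicts."""
    with BOOKS_FILE.open() as f:
        return yaml.safe_load(f) or []


def save_books(books):
    """Write the books back to the data file (Pure Blog would clear its cache here)."""
    BOOKS_FILE.parent.mkdir(parents=True, exist_ok=True)
    with BOOKS_FILE.open("w") as f:
        yaml.safe_dump(books, f, sort_keys=False, allow_unicode=True)


def render_books_page(books):
    """Sort by read date (newest first) and render a bare-bones HTML list."""
    books = sorted(books, key=lambda b: b.get("date_read", date.min), reverse=True)
    items = [
        f"<li>{b['title']} by {b['author']} "
        f"({'★' * int(b.get('rating', 0))}) - read {b.get('date_read', 'n/a')}</li>"
        for b in books
    ]
    return "<ul>\n" + "\n".join(items) + "\n</ul>"


if __name__ == "__main__":
    example = [
        {"title": "Example Book", "author": "A. Writer", "rating": 4,
         "date_read": date(2026, 3, 1), "genre": "Fiction"},
    ]
    save_books(example)
    print(render_books_page(load_books()))
```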
📚 Thanks for reading this post via RSS. RSS is ace, and so are you. ❤️ You can reply to this post by email, or leave a comment.


How I Discover New Blogs

Finding a new blog to read is one of my favourite things to do online. It genuinely brings me joy. Right now I have 230 sites that I follow in my RSS reader, Miniflux. If I ever want to spend some time reading, I'll usually open Miniflux over my Mastodon client, Moshidon. There are no likes, boosts, or hashtags, just interesting people sharing interesting opinions. It's lovely.

So how do I discover these blogs? There are many ways to do it, but here are some that I've found most successful, ranked from most useful to least.

When someone I already enjoy reading links to a post from another blogger, either just to share their posts or to add their own commentary to the conversation. This (to me at least) is the most useful way to discover new blogs to read. It's the entire premise of the Indieweb, so if you own a blog, please make sure you're linking to other blogs in your posts. 🙃

There are a number of great small/indie web aggregators out there, and there seem to be new ones popping up all the time. Here's a list of some of my favourites: Bear Blog Discover, Blogosphere, and Kagi Small Web. I tend to use these as a kind of extended RSS reader. So if I'm up to date on my RSS feeds, I'll use these as a way to continue hunting for new people to follow. Truth is, I actually spend more time on these sites than I do on the fediverse. Speaking of which...

There are lots of cool people on the fediverse, and many of them have blogs. Even those who don't blog will regularly share links to posts they've enjoyed. I also nose at hashtags of the topics that interest me, rather than just the timeline of people I follow. So remember to add hashtags to your posts - they're a great way to aid discovery. 👍🏻

This last bucket is just everything else: where I naturally find my way to a blog while surfing the net. I've discovered some great blogs this way, but it's becoming harder and harder to find indie blogs like this, as discoverability on the web has been overtaken by AI summaries and SEO. 😏 It's still possible though.

There are plenty of interesting people out there, creating great posts for us all to enjoy. The indie web is thriving, and if you're not taking advantage of it, you're missing out! Why not take a look at a couple of the sites I've listed above and see what you discover? It's a tonne of fun.

Thanks for reading this post via RSS. RSS is ace, and so are you. ❤️ You can reply to this post by email, or leave a comment.


Analogue Prototyping

There is a lot to say about prototyping . Chris Hecker talked about advanced prototyping at GDC 2006, and provided a hierarchy of priorities that goes like this: Analogue prototyping comes in right away at Step 1: Don’t . By not launching straight into your game engine, you can save giant heaps of time between hypothesis and implementation. You can also figure out what kinds of references will be relevant before you reach Step 4: Gather References . There’s another side to analogue prototyping as well. In the book Challenges for Game Designers , Brenda Romero says: “A painter gets better by making lots of paintings; sculptors hone their craft by making sculptures; and game designers improve their skills by designing lots of games. […] Unfortunately, designing a complete video game (and implementing it, and then seeing all the things you did right and wrong) can take years, and we’d all like to improve at a faster rate than that.” Brenda Romero Using cards, dice, and paper leads to some of the fastest prototyping possible. It can be just ten minutes between idea and test, fitting really well into those two days of Step 2: Just Do It . Of course, it can also take weeks and require countless iterations, but that’s part of the game designer’s job after all. This post focuses on what to gain from analogue prototypes of digital games, and the practical process involved. It’s also unusually full of real work, since this is something I’ve done quite a bit for my personal projects and is therefore not under NDA. If you’re curious about something or need to tell me I’m wrong, don’t hesitate to comment or e-mail me at [email protected] . Why you should care about analogue prototyping when all you want to do is the next amazing digital game may seem like a mystery. A detour that leads to having your fingers glued together and a bunch of leftover paper clippings you can’t use for anything. In Chris Hecker’s talk, the first suggestion is that you should cheat before you put too much time into anything else. Since you will be cutting and gluing and sleeving, and some of that work takes time, this counts double with analogue prototypes. The easiest way to cheat is to use proxies. If you have a collection of boardgames, this is easy. You can also go out and buy some used games cheap or ask friends if they have some lying around that they don’t use. Perhaps that worn copy of Monopoly that almost caused a family breakup can finally get some table time again, in a different form. Aesthetics matter. If you want to take shortcuts with how a game feels to play, getting something that looks the part can be a shortcut. Go to your local Dollar Store or second hand shop and pick up some plastic toys or a game with miniatures that are similar to what you are after. They can merely be there to act as center pieces for your prototype. The easiest and most efficient reference board that exists is a standard chessboard. Square grid with a manageable size. You can also use a Go board, with the extra benefit that the Go beads also make for excellent proxy components. Beyond those two, you can really use any other board game board too. Just make sure to remember where you got it from if you want to play those games in the future. Or you can even pick up games with missing parts at yard sales, usually super cheap, and scavenge proxy parts from those. For some types of games, finding a good real-world map, perhaps even a tourist map or subway map, can be an excellent shortcut. 
Not just for wargames, but for anything with a spatial component. The guide map from a theme park or museum works, too.

Packs of 52 standard playing cards are fantastic proxies. You can use suits, ladders, make face cards have a different meaning, and much more. Countless prototypes have used these excellent decks to handle anything from combat resolution to hidden information. It's also possible to go even further and make your own game use regular playing cards and the known poker combos as a feature. Balatro comes to mind.

Many families have a Yatzy set lying around, providing you with a small handful of six-sided standard dice. You can do a lot with just this simple, straightforward randomisation element. But don't limit yourself to just six-sided dice if you don't have to. Get yourself a set of Dungeons & Dragons polyhedrals and you'll have four-, eight-, ten-, twelve- and twenty-sided dice rounding out your randomisation armory.

I just want to make an honorable mention of one fantasy wargame, HeroScape, because of its diversity. You can build all manner of strange scenery from just a core HeroScape set and use it effectively to represent almost anything. The same goes for Lego. The main issue with these kinds of proxies is that they can take up a lot of space. Particularly HeroScape, since it has a predefined scale. With Lego, you just need to figure out a scale and stick to it.

If there's a game the people you will play with are especially familiar with, you can skip over having to design one of your systems by substituting a mechanic from a game you already know. Say, if you know that you will want to have statistics in your game, you can copy the traditional lineup of six abilities from Dungeons & Dragons, as well as their scale, to get started. Even if you know that you will want a different lineup later, this means you can test elements that are more unique to your game faster.

An effective way to minimise cut-and-paste time is to print your cards very small. Preferably so all of them fit on a single piece of paper. They will be a bit trickier to shuffle this way, but that's rarely an issue in testing. This way, you need less paper and you can cut everything faster. Going from eight cards to a sheet to 32 is a pretty big difference. Just avoid miniaturizing to the point that you need a magnifying glass.

There's no need to get fancy with real cardstock. Here are some things you can use. I usually just keep any interesting sheets from deliveries I receive. Say, the sturdy sheet of paper used in a plastic sleeve to make sure a comic book doesn't bend in the mail. Perfect for gluing counters.

There are three things you need to consider for paper: size, weight, and texture. For size, since I'm in Europe, I use the standardized A-sizes. A0 is a giant piece of paper, A1 is half as big, A2 half as big again, and so on. The standard office paper format is A4, roughly equivalent to U.S. Letter. This can easily be folded into A5 pamphlets. I also keep A3 papers around (twice the size of A4), but those I use to draw on. Not for printing. I don't have a big enough home to fit a floor printer.

The next thing is paper weight, measured in grams per square meter (GSM). Most home printers can't handle paper heavier than 120-200 GSM. I always keep standard paper (80 GSM) around, and some heavier papers too. If I print counters or cards I sometimes use the sturdier stock. For reference, Magic cards are printed on 300 GSM black core paper stock.
The black core is so you can't see through the card, and is taken directly from the gambling circuit. Lastly, the paper's texture. If you want to work a little on the presentation, it can be nice to find canvas paper or other sturdier variants. I've found that glossy photo paper is almost entirely useless in my own printer, however, always smearing or distorting the print. So when I buy any higher-GSM paper I try to find paper with a coarser texture.

There are many different kinds of cardboard, and you should try to keep as many around as possible. Some can be good for gluing boards or counters onto, while others can help make your prototype sturdier. This isn't as important as paper, but it gets used frequently enough that it felt worth mentioning.

There will be a lot of rambling about cards later, and how to use them. For now, I only refer to loose cards you can use to prop up your thin paper printouts. These are not strictly necessary, but they make shuffling easier. I don't play much Magic: The Gathering anymore, but I still have lots and lots of leftover Magic cards, so those are the ones that get used as backing in most of my prototypes.

You can cheaply buy colored wooden cubes as well as glass and plastic beads in bulk. It's not always obvious what you may need, so keeping some different types around can be helpful. More specific pieces, like coins or pawns, can also be useful, but unless these components provide unique affordances, the kinds of components you have access to are rarely important. It's usually enough to be able to move them around and separate them into groups.

Storage is another thing that needs solving. If you mostly print paper and iterate on rules, a binder can be quite helpful. Especially paired with plastic sleeves, so you can group iterations of your rules together and store them easily. If you also need to transport your prototypes, the kinds of storage boxes you find in office supply stores will have you sorted.

You can push your analogue prototyping really far and build a whole workshop. A 3D printer for making scenery and miniatures, a laser cutter for custom MDF components, and a big floor-sized professional printer that takes over a whole room. If you have the space and the resources for that, you do you, but let's focus on the smallest possible toolbox for making analogue prototypes.

If you want to buy a printer, just be aware that all of them still suffer from the same problems of losing connections and failing to print, the same problems that have plagued printers since forever. I use a laser color printer with duplex (double-sided) printing support and the ability to print slightly heavier paper, up to 220 GSM. This has been more than enough for my needs. Specifically, the duplex feature helps a lot if you want to print rulebooks.

Having a good store of pencils and pens, including alcohol- and water-based markers, is more than enough. You can go deeper into the pen rabbit hole by looking at Niklas Wistedt's spectacular tutorial on how to draw dungeon maps: it'll have you covered in the pen and pencil department.

Some tools you keep around to hold piles of paper or cards together. Paper clips are extra handy, because they can also be used as improvised sliders pointing at health numbers or other variables. Rubber bands are handy for keeping decks of cards together inside a box and for transportation. Almost any paper-based activity will be a futile effort without decent scissors on hand.
Just beware that cutting things out by hand takes more time than you think. If you have a game with many cards, you may have to put on a couple of episodes of your favorite show as you cut them out. If you need more precision than scissors can provide, the next rung on the cutting ladder is to get a proper cutting mat, a steel ruler, and a set of good sharp knives. These can be craft scalpels, metal handles with interchangeable blades (Americans insist on calling these "x-acto knives"), or carpet knives.

Once you have rules and test documents printed, you'll quickly disappear under a veritable ocean of paper. Though smaller sheaves can be pinned together with a paper clip, staplers are even better. A standard small office stapler is enough. But if you want to staple booklets and not just sheaves, it can be worth it to get a long-reach stapler capable of punching through 20 sheets or more.

Attaching paper to other paper can be done in more ways than with clips or staples. Sometimes you want to use glue or adhesive tape. Keeping a standard gluestick and a can of spray glue around is perfect. Regular tape and double-sided tape are also great for many things, even if the main use for tape can just be to make larger-scale maps out of individual pieces of paper.

As mentioned previously, it can take some time to cut out all the cards you want to print. You can cut this time down to a fraction, metaphorically and physically, by getting a paper guillotine. These can usually take a few sheets at a time and will give you clean cuts along identified lines. Yelling "vive la France" when you drop the blade is optional.

Lastly, a more decadent piece of machinery that isn't strictly needed is a paper laminator. These will heat up a plastic pocket and melt the edges together to provide the paper with a plastic surface. It makes the paper much sturdier and has the added benefit of allowing you to use dry erase markers to make notes and adjustments right on the sheet itself.

There is a lot of software out there that can be used to make cards, boards, illustrations, and whatever else you may need. The following is merely a list of what I personally use.

Since you will often want to test things at different sizes, vector graphics are generally more useful for board game prototyping than pixel graphics. This is by no means a hard rule, but the resolution of pixel images tends to limit how large you can scale them, while vector graphics have no such limitations. My go-to for vector graphics is Illustrator, but there are free alternatives like Affinity available as well.

My other go-to piece of software for analogue shenanigans is InDesign, another Adobe program that can also be replaced by Affinity. I'm just personally so stuck in the Adobe ecosystem, after decades of regular use, that it's too late for me to switch. You can't teach an old dog new tricks, as the saying goes. InDesign is great for multiple reasons, not least its ability to use comma-separated value (CSV) files to populate unique pages or cards with data, a feature called Data Merge.

Speaking of spreadsheets, all system designers have a lovely relationship with their tool of choice. This can be Microsoft Excel, OpenOffice Calc, or Google Spreadsheets, but the many convenient features of spreadsheets are a huge part of our bread and butter. I don't even want to know how many sheets I create in an average year.
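As a rough illustration of that CSV-to-cards workflow, here is a small Python sketch of the same idea that Data Merge handles properly inside InDesign: one spreadsheet row per card, stamped into a template you could print small and cut out. The column names and card stats below are invented for the example, not taken from any real prototype.

```python
# Rough stand-in for the CSV-to-cards workflow described above. InDesign's Data
# Merge does this properly; this sketch just shows the idea of stamping one card
# per spreadsheet row. Column names and stats are invented for illustration.
import csv
import io

CARDS_CSV = """name,cost,attack,defense,rules_text
Goblin Scout,1,2,1,Cheap chaff for early tests
Stone Wall,2,0,4,Blocks one attack per turn
Fire Bolt,3,4,0,Discard after use
"""


def render_card(row, width=26):
    """Render one card as a fixed-width ASCII rectangle, one line per field."""
    inner = width - 4  # room left after the "| " and " |" borders
    lines = [
        row["name"],
        f"Cost {row['cost']}  ATK {row['attack']}  DEF {row['defense']}",
        row["rules_text"],
    ]
    border = "+" + "-" * (width - 2) + "+"
    body = [f"| {line[:inner]:<{inner}} |" for line in lines]
    return "\n".join([border, *body, border])


def render_sheet(csv_text):
    """One card per CSV row, ready to print small and cut out."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return "\n\n".join(render_card(row) for row in reader)


if __name__ == "__main__":
    print(render_sheet(CARDS_CSV))
```

The same idea extends to rulebook snippets or counters: keep the data in the spreadsheet, keep the layout in one template, and regenerate the whole sheet whenever a number changes.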
Very broadly speaking, when making an analogue prototype, I will make use of spreadsheets for these reasons:

Listing all the actions, components, elements, etc., that are relevant. Just getting things into a list can show you if something is realistic or not.
Cross-matrices for fleshing out a game's state-space. If I know the features I want, and the terrains that exist, a cross-matrix can explore what those mean: a feature-terrain matrix.
Notes on playtests. How many players played, what happened, who won and why, etc.
Calculators of various kinds, incorporating more spreadsheet scripting. Can be used to check probabilities, damage variation, feature dominance, etc.
Session logging. If I want to be more detailed, I can log each action from a whole session and see if there are things that can be added or removed.

The fantastic Tabletop Simulator is not just a great place to play tabletop games, it's also a great place to test your own games. Renowned board game designer Cole Wehrle has recorded some workshops for people interested in this specific adventure, and let's just say that once you have this up and running it will make it a lot easier to test your game. Especially if the members of your team don't all live in the same city. Its biggest strength is how quickly you can update new versions for anyone with a module already installed. If you share your module through Steam Workshop, it's even easier. For most analogue prototypes, this isn't doable, simply because of NDAs and rights issues.

So much stuff! Let's put it all together. The way I've talked about this, there are really six steps to the process of making an analogue prototype:

Set a Goal
Identify Facts
Systemify the Facts
Consider the roles of Players
Tie it together with Components

This is more important than you may think. An analogue prototype can easily become a design detour. Because of this, your goal needs to formulate why you are making this analogue prototype. "Test if it's fun with infinitely respawning enemies" could be a goal. "See what works best: party or individual character" could be another one. But it can also be a lot narrower, for example designed to test the gold economy in your game. Perhaps even to balance it. The point is that you need a goal, and you need to stick to it and cut everything out that doesn't serve that goal. If you need to test how travelling works on the map, you probably don't need a full-fledged combat system, for example.

Facts are the smallest units of decision in your game's design. Stuff that every decision maker on your team has agreed on and that can therefore safely inform your analogue prototype. This can be super broad, like "the player plays a hamster," or it can be more specific, like "the player character always has exactly one weapon." You need these facts to keep your prototype grounded, but you don't necessarily need to refer to them all at once. Pick the ones that are most important to your goal.

With a goal and some facts, you need to figure out what systems you will use. Try to narrow it down more than you may think. Don't make a "combat system," but rather one "attack system" and another "defense system." The reason for this is that what you are after is the resource exchanges that come from this, and the dynamics of the interactions. The attack system may take player choices as input and dish out damage as output, while the defense system may accept armor and damage input and send health loss as output. You can refer to the examples of building blocks in this post for inspiration.

This is where we come to the biggest strength of analogue prototyping: real humans provide a lot more nuance and depth than any prototype can do on its own. Analogue or digital. One player can take on the role of referee or game master, similar to how it would work in a tabletop role-playing game. In many wargames of the past, this was called an umpire. Someone who would know all the rules and act as a channel between the players and the systems. If you have built a particularly complicated analogue prototype, a good way to test it can be to act as a referee and then simply ask players what they want to do instead of teaching them the details of the rules.

Players can play each other's opponents, representing different factions, interest groups, or feature sets via their analogue mechanics. If you built an analogue prototype of StarCraft, you'd probably do it this way, with three players taking on one faction each. One player can play the enemies, while another plays the economy system, or the spawning system. The goal here is to put one player in charge of the decisions made within the related system. If someone wants to trade their stock for a new space ship, and this isn't covered by the rules, the economy system player can decide on the exchange rate and the spawning system player can say that this spawns a patrol of rival ships. Just take ample notes, so you don't forget the nuances that come out of this process.

There are many different ways to use the components you collected previously. Some of them may not be intuitive at all.

The humble die: perhaps the most useful component in your toolbox. Just look at the following list and be amazed (there is also a small probability sketch at the end of this post):

Types of dice: you can use any number of sides, and make use of the corresponding probabilities. Dividing one by the number of sides gives you the probability of any single result. So, 1/6 = 0.1666 means there's a ~17% chance to roll any given side on a six-sided die. Use the die that best represents the percentage chances you have in mind.
Singles: rolling a single die and reading the result. Pretty straightforward.
Sums: rolling two or more dice and adding the result together.
Pools: rolling a handful of dice and checking for specific individual results or adding them together.
Buckets: rolling a lot of dice and checking for specific results. The only reason buckets of dice are separated from dice pools here is because they have a different "feel" to them; they are functionally identical.
Add/Subtract: add or subtract one die from the result of another, or use mathematical modifiers to add or subtract from another result.
X- or X+: require specific results per die. In these cases X- would mean "X or lower," and X+ would mean "X or higher."
Patterns: like Yatzy, or what the first The Witcher called "Dice Poker": you want doubles, triples, full houses, etc.
Reroll: allowing rerolls of some or all of the dice you just rolled. Makes the rolling take longer but also provides increased chances of reaching the right result. Some games allow rerolling in real time and then use other time elements to restrict play. So you can frantically keep trying to get that 6, but if an hourglass runs out first you lose.
Spin: spinning the die to the specific side you want.
Trigger: if you roll a specific result, something special happens. It could be the natural 20 that causes a critical hit in Dungeons & Dragons, or it can be that a roll of 10 means you roll another ten-sided die and add it to your result.
Hide: you roll or you set your result under a cupped hand or physical cup, hiding the result until everyone reveals at the same time or the game rules require it.
Statistics: common sense may say that you can't possibly roll a fifth one after the first four, but in reality you can. Dice are truly random.

People have been using playing cards for leisure activities since at least medieval times. Just as for dice, you'll see why right here, and perhaps these things will fit your needs better than dice:

Shuffle: shuffling cards is a great way to randomise outcomes. This can be done in many different ways, as well, where you shuffle a "bomb" into half of the pile and then shuffle the other half to place on top, for example. There are many ways to mix up how to shuffle a deck of cards.
Uniqueness: each card can only be drawn once, which means that you can make each card in a deck unique, and you can affect the mathematics of probability by adding multiple copies of the same card. The board game Maria, for example, uses standard playing cards but in different numbers.
Front and back: the face and back of the cards can have different print on them, or the back can just inform you what kind of card it is so you can shuffle them together in setup. Of course, the fact that you can hide the faces from other players is also what makes bluffing in poker interesting.
Turn, sideways: what Magic calls "tapping" and other games may call exhausting or something else. Some cards can be turned sideways (in landscape mode instead of portrait mode) by default.
Turn, over: flipping a card to its other side can serve to show you new information or to hide its face from everyone around the table. It can represent a card being exhausted, or injured, or other state changes like a person transforming into a werewolf.
Over/under: cards can be placed physically over or under other cards, to show various kinds of relationships. An item equipped by a character, or a condition suffered by an army, for example.
Card grids: cards can be placed in a grid to generate a board, or to act as a sheet selection for a character. One card could be your character class, another could be a choice of quest, etc. It's a neat way to test combinations.
Hide cards: if you want to get really physical, you can hide cards on your person, under boards, and so on. This was one way you could play Killer, by hiding notes your opponents would find.
Card text: if you print your own cards, you can have any text you want on them. Reminders, rules exceptions, etc.
Deck composition: how you put decks together will affect how the game plays, and predesigning decks for different tests can be very effective. Perhaps you remove all the goblins in one playtest and have only goblins in another.
Deck building: decks can also be constructed through play, similarly to how Slay the Spire works. A style of mechanic where you can start small and then grow in complexity throughout a session.
States: cards can be in different states. On the table, in your hand, available from an open tableau, shuffled into a deck, discarded to a discard pile, and even removed from the game due to in-game effects.
Semantics: something that Magic: The Gathering's designer, Richard Garfield, was particularly good at was figuring out interesting names for the things you were doing. You don't just play a card, you're casting a spell. It's not a discard pile, it's your graveyard. These kinds of semantics can be strong nods back to the digital game you are making, or they can serve a more thematic purpose.
Statistics: with every card you draw, the deck shrinks, increasing the chances of drawing the specific card you may want. You are guaranteed to draw every card if you go through a whole deck, which is one of the biggest strengths of decks of cards.

Humans are spatial beings that think in three dimensions. Even such a simple thing as a square grid where you put miniatures will create relationships of behind, in front of, far away from, close to, etc. Not all analogue prototypes need this, but if you do need it, here are some alternatives to explore:

Node or point maps: picture a corkboard with pins and red thread, or just simple circular nodes with lines between them. You can draw this easily on a large sheet of paper and just write simple names next to each circle to provide context.
Sector maps: one step above the node or point map is the sector map, where regions share proximity. Grand strategy games have maps like this, where provinces share borders. Another example is more abstract role-playing games, where a house's interior is maybe divided into two sectors and the whole exterior area around it is another sector. It's excellent for broad-stroke maps.
Square grids: if you want a grid, the square grid is probably the most intuitive. But it also has a mathematical problem: a diagonal step covers roughly 1.4 times the distance of a cardinal step. This means you need to either not allow diagonals or allow them and account for the problems that will emerge.
Hexagon grids: these are more accurate and classic wargame fare, but they will also often force you to adapt your art to the grid in ways that are not as intuitive as with a square grid.
Freeform: finally, you can just take any satellite image or nice drawn map, perhaps an overhead screenshot from a level you've made, and use it as a map in a freeform capacity. This may force you to use a tape measure or other way to measure distances, but if the distances are not important that matters a lot less. For example, if your game shares sensibilities with Marvel's Midnight Suns.

With the fast iterations of analogue prototypes, you can usually just change a word or an image somewhere and print a new page. This means you may have many copies of the same page after a while. To prepare for this situation, make sure to have a system for versioning. It doesn't have to be too involved, especially if you're the only designer working on this prototype, but you need to do something. I usually just iterate a number in the corner of each page. The 3 becomes a 4. I may also write the date, if that seems necessary. I may also add a colored dot (usually red) to pages that have been deprecated, since just the number itself won't say much and you may end up referring to the wrong pages if you don't have an indicator like this.

To recap the hierarchy from Chris Hecker's talk mentioned at the top of this post:

Step 1: Don't: Steal it, fake it, or rehash stuff you have already made before you start a new prototype.
Step 2: Just Do It: If it takes less than two days, just do it. As the saying goes, it's easier to ask for forgiveness than for permission.
Step 3: Fail Early: When something feels like a dud even at an early stage, you can assume that it is in fact a dud. There's nothing wrong with abandoning a prototype. In fact, learning to kill things early is a skill.
Step 4: Gather References: Prototypes can only really help with small problems. Big problems, you must break apart and figure out. Collect references. White papers, mockup screenshots, music, asset store packs, and so on. Anything that helps you understand the problem space.

And as for why analogue prototyping helps with digital games:

The same psychology applies. Rewards, risk-taking, information overload. Many of our intrinsic and extrinsic motivators are triggered the same by boardgames as by digital games. The distance is not nearly as far as we may tell ourselves.
Players can represent complex systems. A player has all the complexity of a living, breathing human, making odd decisions and concocting strange plans. This lets you use players as representations of systems, from enemy behaviors to narrative.
Analogue games are "pure" systems. If you can't make sense of your mechanic in its naked form, you can probably not expect your players to make sense of it either.
Similar affordances. Generating random numbers with dice, shuffling cards, moving things around a limited space; analogue gaming is always extremely close to digital gaming, even to the point that we use similar verbs and parlance.
Holism. Probably the best part of the analogue format is that you can actually represent everything in your game in one way or another. It doesn't have to be a big complex system, as long as you provide something to act as that system's output.
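To put some numbers behind the dice notes above (single results, sums of dice, and "X+" pools), here is a small sanity-check sketch in Python. It only illustrates the arithmetic mentioned in the dice list; the specific dice and targets are arbitrary examples, not anything from the original post.

```python
# Quick sanity checks for the dice maths mentioned above: single results, sums
# of dice, and "X+" pools. The specific dice and targets are arbitrary examples.
from itertools import product


def p_single(sides):
    """Chance of any one specific face on a single die, e.g. 1/6 ~= 17% on a d6."""
    return 1 / sides


def p_sum(sides, n_dice, target):
    """Chance that the sum of n dice equals target, by enumerating all outcomes."""
    rolls = list(product(range(1, sides + 1), repeat=n_dice))
    hits = sum(1 for r in rolls if sum(r) == target)
    return hits / len(rolls)


def p_pool_at_least_one(sides, n_dice, threshold):
    """Chance that at least one die in a pool shows threshold or higher ("X+")."""
    p_miss = (threshold - 1) / sides   # one die failing to reach the threshold
    return 1 - p_miss ** n_dice        # complement of every die in the pool failing


if __name__ == "__main__":
    print(f"Any single face on a d6: {p_single(6):.1%}")                # ~16.7%
    print(f"Rolling a 7 on 2d6:      {p_sum(6, 2, 7):.1%}")             # ~16.7%
    print(f"At least one 5+ on 4d6:  {p_pool_at_least_one(6, 4, 5):.1%}")
```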


KTT x 80Retros GAME 1989 Orange

I picked up the KTT x 80Retros GAME 1989 Orange switches a while ago at Funkeys, a physical brick-and-mortar mechanical keyboard store in Yongsan-gu, Seoul, and it's my first linear switch. Given its surprisingly cheap price, I really didn't expect much from it, to be honest. KTT is a name people normally associate with budget options, like Peaches, Sea Salts, and Strawberries. They're the kind of switches that show up in beginner build guides, and they are generally good stuff, but not really the kind of thing that made me stop and think about what I was typing on. However, the GAME 1989 Orange changed that perception for me, and it did it in a way I genuinely didn't see coming.

But before we get into the switch itself, we need to talk about the vibe, because the vibe is half the story here. 80Retros is a relatively young brand out of China that debuted on ZFrontier around December 2023 with an interest check for their GAME 1989 cherry-profile PBT keycap set inspired by the original Game Boy. They describe themselves as lovers of all things vintage and retro, and unlike a lot of brands that slap "retro" on things as a marketing afterthought, they actually seem to mean it. What's remarkable is how fast they've moved since then. Within a few years, they went from a single keycap IC to pushing out nearly a dozen different switches across two separate manufacturers (KTT and HMX), along with matching keycap sets in multiple colorways.

The G.O.A.T. of switch reviews himself, ThereminGoat, covered this in detail in his HMX Volume 0-T review, and the GAME timeline is pretty interesting: the original HMX-manufactured GAME 1989 switches came first, followed by what he calls the "Film Trio" (the KD200, FJ400, and GAME 1989 Classic), all packaged in these absolutely gorgeous film canister-inspired containers that look like oversized Kodak rolls. The film canister thing started as a nod to the KD200 and FJ400 being camera-brand-inspired, but the community loved the packaging so much that 80Retros seemingly just kept using it for everything. Even for switches that have nothing to do with photography. The KTT-manufactured GAME 1989 Orange and Red are the newer entries in this expanding catalogue, released as part of an "Expanded Film Series" in early 2025 alongside a Silent White variant and an HMX XMAS switch. So we're looking at a brand that is absolutely not slowing down.

On paper, PC top and PA66 bottom is a pretty classic material combo. KTT has used variations of this pairing for years. What makes this switch interesting is the KT2 stem made out of their proprietary UPE blend. UPE (ultra-high molecular weight polyethylene) is a material that's been showing up more and more in the switch world, but it's one of those things where the specific manufacturer's blend matters enormously. Keygeek's U4, for example, sounds glassy and solid. KTT's KT2 is drier, a bit foamy, and (this is the part I didn't expect) it brings an audible character that I can only describe as "marble-y". It's not soft, but it's not hard either. It sits in this interesting middle ground. At 4mm travel with a pole bottom-out, the switch is technically a long-pole linear, but the full travel distance means it doesn't feel like one in the snappy, sharp way that most long-poles do. The pole bottom-out is there, but it's mellowed out by the travel length and the stem material. More on that later.

Stock smoothness is good, and I mean genuinely good. Probably not HMX-tier buttery, and probably not the absolute smoothest thing I've tried in recent years, but there's a quality to the travel that feels deliberate and controlled. The factory lube is present but light. A thin coating on the bottom housing railings, some on the stem legs and leaf, and the springs seem lightly done too. There is a texture to the keystroke, and some people might call it scratch, but I'm not sure that would be fair, though it's not entirely wrong either. UPE blends can be unpredictable when paired with other housing materials. Sometimes you get something silky, sometimes you get audible friction. The KT2 blend with this PC/PA66 housing produces a slight tactile grain in the travel that I genuinely enjoy. It's subtle enough that you won't notice it at normal typing speed, but if you slow-press a single key at ear level, it's there.

Spring-wise, 40g actuation bottoming out at around 50g is on the lighter side, especially for me and my usual Frankenswitches. I wouldn't call it featherweight, but if you tend to bottom out hard, you'll definitely hit the end of the stroke with minimal effort. The springs are clean, without noticeable ping in my set. The factory lube on the springs seems to do its job. One thing to note is that there's reportedly about a 3g variance between individual switches. I couldn't verify that precisely, but I did notice the occasional key that felt marginally different. Not a dealbreaker for me, but if you're the kind of person who weighs every spring in a batch, keep it in mind.

As for wobble, it is present. There's some slight vertical (north-south) wobble and maybe a touch of east-west if you go looking for it. This seems to be a known trade-off with KTT's newer molds. Their older switches like the Hyacinths seemingly had incredibly tight tolerances, but those molds are from a different era. KTT has been retooling to accommodate new materials like their KT2 and KT3 blends, and the fit isn't quite as snug as the old stuff. As for films, they probably do help to tighten up the housings, and I've read that filming the switches apparently also compresses the sound profile slightly. Personally, the wobble doesn't bother me too much.

The sound profile is where the GAME 1989 Orange gets genuinely interesting, because the sound profile is busy, and I mean that in a good way. The bottom-out is lower-pitched than you'd typically expect from a PC-topped switch. The PA66 bottom housing and the KT2 stem material seemingly pull the tone down into a territory that's thocky without being mushy. There's a definite pop to the keystroke, and the bottom-out has weight to it. The top-out (the return stroke) is a touch brighter, creating this slight tonal contrast between the downstroke and upstroke that gives the switch a lot of auditory dimension. There's a lot happening acoustically at any given keystroke, and none of it sounds muddied or confused. The "marble-y" quality I mentioned earlier really comes through in the sound. It's not a wet, lubed sound, but a relatively dry and more textured one, with a character that feels… natural, for lack of a better word. The slight scratch in the travel actually adds to the sound profile rather than detracting from it. The initial contact, the pole hitting bottom, the spring compression, and the return all remain distinct from each other and layered.

Volume-wise, it's moderate. Definitely not silent, but also not exactly loud. Slightly quieter than your average long-pole, which makes sense given the full 4mm travel and the way the KT2 material absorbs some of the impact energy. I haven't yet tested it on any of my aluminium builds, but at least on the few keyboards Funkeys had these switches on, as well as on my Kunai, I find that the sound profile works beautifully. That said, these switches are definitely less ideal for quiet or public environments, like open-plan offices and cafes.

The switches come factory lubed and they work just fine stock. I'd personally resist the urge to lube them further unless you specifically want to kill the audible scratch, which I think is part of the charm. If you do lube, know that you're trading character for smoothness, and these are already reasonably smooth to begin with. They accept films, and filming them does seem to tighten the sound slightly: less resonance in the housing, a more compressed signature. Depending on your build and plate material, that might be exactly what you want or exactly what you don't. Try a few with and without before committing.

As for the packaging, if you buy the 35-switch sets, they come in those aforementioned film canister containers. It's genuinely lovely and a nice touch that makes the whole experience feel considered. Not something I'd pay extra for, but it's a detail that matters for the overall product identity. One thing to note is that the canisters open very easily. I wouldn't walk around holding them upside down unless I wanted to play "find 35 switches hidden underneath the furniture".

The KTT x 80Retros GAME 1989 Orange surprised me. It's a switch that trades ultra-polished, frictionless perfection for something with a dry, textured, slightly scratchy keystroke that somehow comes together into a sound profile that's warm, full, and more complex than it has any right to be at this price point. It's not perfect. The wobble is there, and the housing tolerances aren't as tight as the best in the business. But it doesn't feel like every other linear on the market, at least not like the ones I had the chance to try over the past years. It has character, which, in a hobby that's increasingly crowded with technically excellent but personality-free switches, has its charm. If you want the smoothest linear available, look elsewhere. If you want something that sounds interesting, feels engaging, and comes wrapped in an homage to a long-gone era, give the 1989 Orange a shot. I'm genuinely glad I did.

Disclaimer: I'm not a switch scientist. I don't own a force curve rig, I can't tell you the exact durometer of the KT2 blend, and my ears are probably not calibrated to the standards of someone like ThereminGoat. This review is based on my personal experience typing on these switches across a few different boards and ultimately actively using them on my primary keyboard. Your mileage may vary based on your plate material, case, keycaps, and other factors. Take everything here as one person's experience and use it as a starting point for your own.


The dumber, the better

Zhenyi Tan, in a blog post titled Ensheinification, writes:

Every time I replace something with a new thing, the new thing is worse. My mother-in-law bought a new rice cooker. It has 20 settings and none of them cook good rice. The old one had one button and made perfect rice for 10+ years. I talked to her about it. She said she tried three different rice cookers. The first one made the rice too sticky. The second one had many buttons and bad design. And all the buttons turned out to cook the same way. The third was also full of buttons and also made sticky rice. She went back to ask the shop staff how the buttons worked. Nobody knew. They're just salespeople.

Reading this article, I could almost taste the frustration that I often experience myself when I am in the market for something. The rice cooker is actually a great example of an object that is supposed to do one thing, and do it well. It turns out that last Christmas, my wife got me something I had on my wish list for a while: you guessed it, a rice cooker. But not just any rice cooker: this "analogue", beautiful, and simple Hario rice cooker. No button. No plug. No screen. No LED indicator. Just a rice cooker that whistles when the rice is about to be ready.

Is it perfect? No. The rice is very good, every time, but I would not call it perfect. But if I prepare the rice the right way, the results are repeatedly and predictably great. The object itself is well-made too. A nice glass lid, a stainless steel and aluminium body, an easy-to-clean and replaceable whistle part: I think this thing could last decades if I take care of it properly.

This article by Zhenyi Tan also reminded me of Bradley Taunt's My Coffee Maker Just Makes Coffee post that I have shared a few times already:

Both digital and industrial design suffer from bloat. Far too often I witness fellow designers over-engineer customer requests. Or they add excessive bloat to new product features. It's almost a rarity these days to find designers who tackle work as single items. Everything expands. Everything needs to do one little extra "cool" thing. Nothing is ever taken away.

My new rice cooker and my dear old coffee maker are great examples of this philosophy applied to everyday objects, and the more I think about it, the more satisfying it gets. [1] As you know, I also love to take away and remove stuff to keep things light and simple.

When my soon-to-be brother-in-law first visited our new flat last year, he asked me about the kind of roller shutters we had installed, whether they were electrically operated and whether I could activate them remotely. I told him that the real estate developer had stuck to manual levers to keep the cost down as much as possible, but we could, if we wanted, easily add a little motor on the side. But I told him that I preferred this manual system anyway. If one day I can't open or close the shutters, I will know where the problem comes from: a mechanical issue with the roller. If I had a smart system, and if tapping the button on my iPhone screen didn't do anything, the problem could not only be caused by more things, but also become harder to pinpoint. Is the Wi-Fi working? Do the shutters have internet access? [2] Should I restart the app or my phone? Does my flat have power? Do I need to reset the connection? Is it a bug? Do I have to update the app? Do I need to give the app access to my location? And finally, is there a mechanical issue with the roller?

I get that these modern and more complex solutions exist: some people might prefer them over "dumb" systems, and some people may actually need 20+ functions for their rice cooker. But if the price to pay for these is less reliability and simplicity, I wouldn't count this as progress, but as regression.

[1] My coffee maker is this fantastic Braun Aromaster Classic KF 47/1, in white, and not only do I find that it looks a little Dieter-Rams-esque, but it just works. I bought it in 2020, and I plan to keep it for at least another six years. Sounds like a lot these days.
[2] This sentence alone should be a warning sign urging us to keep things as dumb as possible.

Martin Fowler Yesterday

Alan Turing play in Cambridge MA

Last night I saw Central Square Theater’s excellent production of Breaking the Code . It’s about Alan Turing, who made a monumental contribution to both my profession and the fate of free democracies. Well worth seeing if you’re in the Boston area this month.

ava's blog Yesterday

how i enjoy movies

I'm not much of a movie watcher. I somehow prefer watching multiple episodes of a TV show across a few hours to investing two hours in a movie. I get antsy in the second half of a movie, and episodic stuff can more easily be paused for a break. My wife has gotten me into more movies the past few years though, especially in recent months. Catching up on classics like all the Star Wars movies, Lord of the Rings 1-3, American Psycho, Fight Club, some popular Studio Ghibli movies, some old genre-defining horror movies, and more.

What makes movies a lot more bearable to me is talking about them while watching them, even pausing the movie while discussing. I know many people hate this and just want to watch something in peace, not tear it apart as it plays or be interrupted. Understandably, they don't want the fantasy and make-believe to be destroyed along the way. But my wife and I are on the same wavelength about this. She is my favorite person to watch movies with because of this.

It would bore me to death to sit through 2+ hours in silence, just staring, and then both of us moving on from it and just saying "Yeah, it was good." I need to have some breaks to readjust my position, get something from the kitchen, drink some water, and have minutes in-between just psychoanalyzing characters, giving our interpretations of things that are still unclear, or saying what we would do if we were the characters. Also discussing the broader context, the production, whether something was real or CGI... I love it. It keeps me engaged, and it makes the movie more memorable for me. I also learn so much more about it, and plot details I would have otherwise missed get revealed to me.

I especially love watching something with my wife when it's something she is really interested in or has seen multiple times. Last night, we watched an Indiana Jones movie (Raiders of the Lost Ark), and I got so much info from her during it. "Harrison Ford improvised this scene because he was tired of reshooting it all the time." "In this scene you can spot C3PO and R2-D2 in the background. And you can see the Ark in the background of a Clone Wars episode." "I think this shot is actually a matte painting on glass."

I'm more of a Lara Croft person, and so we also talked about the similarities and differences between the two, especially with Lara's reboot content and her grappling with the fact that her work tends to cause more harm than good, something Indiana doesn't seem to have to face that much. We also discussed some silly stuff, like how the snakes would realistically survive in that pit, and whether a bunch of snakes are flammable or not. All while watching it and occasionally pausing.

Technically, we also do this for TV shows. Severance and Pluribus especially, but even X-Files. It's just so good! I just need to engage with someone about what I'm seeing and pick their brain about an aspect of it. Acknowledging that something was produced, these were all actors, this didn't really happen, this was CGI, this is a plot inconsistency, etc. doesn't ruin the entertainment for us at all :)

Reply via email
Published 11 Apr, 2026

iDiallo Yesterday

Your friends are hiding their best ideas from you

Back in college, the final project in our JavaScript class was to build a website. We were a group of four, and we built the best website in class. It was for a restaurant called the Coral Reef. We found pictures online, created a menu, and settled on a solid theme. I was taking a digital art class in parallel, so I used my Photoshop skills to place our logo inside pictures of our fake restaurant. All of a sudden, something clicked. We were admiring our website on a CRT monitor when my classmate pulled me aside. She had an idea. A business idea. An idea so great that she couldn't share it with the rest of the team. She whispered, covering her mouth with one hand so a lip reader couldn't steal this fantastic idea: "what if we build websites for people?" This was the 2000s, of course it was a fantastic idea. The perfect time to spin up an online business after a market crash. But what she didn't know was that, while I was in class in the mornings, my afternoons were spent scouring Craigslist and building crappy websites for a hundred to two hundred dollars a piece. I wasn't going to share my measly spoils. If anything, this was the perfect time to build that kind of service. That's a great idea , I said. There is something satisfying about having an idea validated. A sort of satisfaction we get from the acknowledgment. We are smart, and our ideas are good. Whenever someone learned that I was a developer, they felt this urge to share their "someday" idea. It's an app, a website, or some technology I couldn't even make sense of. I used to try to dissect these ideas, get to the nitty-gritty details, scrutinize them. But that always ended in hostility. "Yeah, you don't get it. You probably don't have enough experience" was a common response when I didn't give a resounding yes. I don't get those questions anymore, at least not framed in the same way. I have worked for decades in the field, and I even have a few failed start-ups under my belt. I'm ready to hear your ideas. But that job has been taken, not by another eager developer with even more experience, or maybe a successful start-up on their résumé. No, not a person. AI took this job. Somewhere behind a chatbot interface, an AI is telling one of your friends that their idea is brilliant. Another AI is telling them to write out the full details in a prompt and it will build the app in a single stroke. That friend probably shared a localhost:3000 link with you, or a Lovable app, last year. That same friend was satisfied with the demo they saw then and has most likely moved on. In the days when I stood as a judge, validating an idea was rarely what sparked a business. The satisfaction was in the telling. And today, a prompt is rarely a spark either. In fact, the prompt is not enough. My friends share a link to their ChatGPT conversation as proof that their idea is brilliant. I can't deny it, the robot has already spoken. I'm not the authority on good or bad ideas. I've called ideas stupid that went on to make millions of dollars. (A ChatGPT wrapper for SMS, for instance.) A decade ago, I was in Y Combinator's Startup School. In my batch, there were two co-founders: one was the developer, and the other was the idea guy. In every meeting, the idea guy would come up with a brand new idea that had nothing to do with their start-up. The instructor tried to steer him toward being the salesman, but he wouldn't budge. "My talent is in coming up with ideas," he said. We love having great ideas. 
We're just not interested in starting a business, because that's what it actually takes. A friend will joke, "here's an idea", then proceed to tell me their idea. "If you ever build it, send me my share." They are not expecting me to build it. They are happy to have shared a great idea. As for my classmate, she never spoke of the business again. But over the years, she must have sent me at least a dozen clients. It was a great idea after all.

0 views

BlogLog April 10 2026

Subscribe via email or RSS. I added a new page to my blog in the header showing all the specifications of my homelab and self-hosted services. It will be kept up to date as my services and infrastructure change. Fixed misspellings in the Overview of My Homelab post.

0 views
Stratechery 2 days ago

2026.15: Myth and Mythos

Welcome back to This Week in Stratechery! As a reminder, each week, every Friday, we’re sending out this overview of content in the Stratechery bundle; highlighted links are free for everyone. Additionally, you have complete control over what we send to you. If you don’t want to receive This Week in Stratechery emails (there is no podcast), please uncheck the box in your delivery settings. On that note, here were a few of our favorites this week. This week’s Sharp Tech video is on why OpenAI’s enterprise pivot makes sense.
Anthropic Anthropic Anthropic. In the current AI era, it feels like a new company is crowned the winner every few months, and right now Anthropic is wearing the crown. However, a point I make on Sharp Tech is that Anthropic’s exponential growth includes the part of the curve everyone misses: the company has been on this once-barely-visible trajectory for nearly two years now. Now the company has what is undoubtedly the most powerful model in the world, so powerful, in fact, that Anthropic says it can’t release it publicly. There’s reason for cynicism, given Anthropic’s history, but the part of the “Boy Cries Wolf” myth everyone forgets is that the wolf did come in the end. — Ben Thompson
The New York Times and Another Paradigm Shift. If you’re interested in media, this week’s Stratechery Interview with New York Times CEO Meredith Kopit Levien is a fantastic listen. The Times has nailed the internet era better than any media company in the world, and they’ve succeeded by making deliberate choices — a paywall before it was cool, a clear point of view, integrated business and editorial strategies — to differentiate themselves from a sea of commoditized content in an era of aggregators and content abundance. That playbook worked wonders for the Times in the previous generation of the internet, and I enjoyed hearing Levien’s thoughts on updating it for an era dominated by AI and video. — Andrew Sharp
The New Yorker Explains Sam Altman. This week’s Sharp Text hit a few different beats, including thoughts on the Strait of Hormuz and a fun bit of E-ZPass history, but I opened with a take on the sprawling Sam Altman profile from the New Yorker. The 16,000-word profile is certainly an exhaustive recital of questions that have been asked about Altman for more than a decade, but better topics went unexplored. It’s frustrating — and representative of too much tech coverage — that so much effort went into what’s effectively a well-written Wikipedia entry, anchored by a predetermined conclusion, and ignoring more dramatic questions than whether Sam Altman is a good person. — AS
OpenAI Buys TBPN, Tech and the Token Tsunami — OpenAI’s purchase of TBPN makes no sense, which may be par for the course for OpenAI. Then, AI is breaking stuff, starting with tech services.
Anthropic’s New TPU Deal, Anthropic’s Computing Crunch, The Anthropic-Google Alliance — Anthropic needs compute, and Google has the most: it’s a natural partnership, particularly for Google.
Anthropic’s New Model, The Mythos Wolf, Glasswing and Alignment — Anthropic says its new model is too dangerous to release; there are reasons to be skeptical, but to the extent Anthropic is right, that raises even deeper concerns.
An Interview with New York Times CEO Meredith Kopit Levien About Betting on Humans With Expertise — An interview with New York Times Company CEO Meredith Kopit Levien about human expertise as a moat against Aggregators and AI. 
Hormuz, Rushmore and a Sam Altman Story That Missed the Story — On the New Yorker’s profile of Sam Altman, the future in the Middle East, and the power of E-ZPass history.
OpenAI Buys TBPN
Mythos, Altman, New York Times
VLIW: The “Impossible” Computer
Gas Turbine Blades and their Heat-Defying Single-Crystal Superalloys
A Ceasefire and Reports of PRC Pressure; Another Politburo Investigation; Mythos, DeepSeek, and a Token Crunch
An Exclusive Hornets-Suns Report and Mail on LeBron, Wemby, the Pistons, ABS in the NBA, Bulls Fandom for Kids
Malone to Carolina and Karnisovas Out in Chicago, Cooper and Kon Battling to the Finish, A Jokic-Wemby Classic in Denver
Mythos and Project Glasswing, The Year of Anthropic Continues Apace, Q&A on the NYT, Altman, De-globalization

0 views

Premium: The Hater's Guide to OpenAI

Soundtrack: The Dillinger Escape Plan — Setting Fire To Sleeping Giants In what The New Yorker’s Andrew Marantz and Ronan Farrow called a “tense call” after his brief ouster from OpenAI in 2023, Sam Altman seemed unable to reckon with a “pattern of deception” across his time at the company: No, he cannot. Sam Altman is a deeply-untrustworthy individual, and, like OpenAI, lives on the fringes of truth, using a compliant media to launder statements that are, for legal reasons, difficult to call “lies” but certainly resemble them. For example, back in November 2025, Altman told venture capitalist Brad Gerstner that OpenAI was doing “well more” than $13 billion in annual revenue when the company would do — and this is assuming you believe CNBC’s source — $13.1 billion for the entire year. I guarantee you that, if pressed, Altman would say that OpenAI was doing “well more than” $13 billion of annualized revenue at the time, which was likely true based on OpenAI’s stylized math, which works out as so (per The Information): This means that, per CNBC’s reporting, OpenAI barely scratched $10 billion in revenue in 2025, and that every single story about OpenAI’s revenue other than my own reporting (which came directly from Azure) massively overinflates its sales. The Information’s piece about OpenAI hitting $4.3 billion in revenue in the first half of 2025 should really say “$3.44 billion,” but even then, my own reporting suggests that OpenAI likely made a mere $2.27 billion in the first half of last year, meaning that even that $10 billion number is questionable. It’s also genuinely insane to me that more people aren’t concerned about OpenAI, not as a creator of software, but as a business entity continually misleading its partners, the media, and the general public. To put it far more bluntly, the media has failed to hold OpenAI accountable, enabling and rationalizing a company built on deception, rationalizing and normalizing ridiculous and impossible ideas just because Sam Altman said them. Let me give you a very obvious example. About a month ago, per CNBC, “...OpenAI reset spending expectations, telling investors its compute target was around $600 billion by 2030.” This is, on its face, a completely fucking insane thing to say, even if OpenAI was a profitable company. Microsoft, a company with hundreds of billions of dollars of annual revenue, has about $42 billion in quarterly operating expenses. OpenAI cannot afford to pay these agreements. At all. Hell, I don’t think any company can! And instead of saying that, or acknowledging the problem, CNBC simply repeats the statement of “$600 billion in compute spend,” laundering Altman and OpenAI’s reputation as it did (with many of the same writers and TV hosts) with Sam Bankman-Fried. CNBC claimed mere months before the collapse of FTX that the exchange had grown revenue by 1,000% “during the crypto craze,” with its chief executive having “...survived the market wreckage and still expanded his empire.” You might say “how could we possibly know?” and the answer is “read CNBC’s own reporting that said that Bankman-Fried intentionally kept FTX in the Bahamas,” which said that Bankman-Fried had intentionally reduced his stake in Canadian finance firm Voyager (which eventually collapsed on similar terms to FTX) to avoid regulatory disclosures around (Bankman-Fried’s investment vehicle) Alameda’s finances. 
This piece was written by a reporter who has helped launder the reputation of Stargate Abilene, claiming it was “online” despite only a fraction of its capacity actually existing. The same goes for OpenAI’s $300 billion deal with Oracle that OpenAI cannot afford and Oracle does not have the capacity to serve. These deals do not make any logical sense, the money does not exist, and the utter ridiculousness of reporting them as objective truths rather than ludicrous overpromises allowed Oracle’s stock to pump and OpenAI to continue pretending it could actually ever have hundreds of billions of dollars to spend. OpenAI now claims it makes $2 billion a month, but even then I have serious questions about how much of that is real money considering the proliferation of discounted subscriptions (such as the ones that pop up when you cancel, offering you three months of discounted access to ChatGPT Plus) and free compute deals, such as the $2500 given to Ramp customers, millions of tokens in exchange for sharing your data, the $100,000 token grants given to AI policy researchers, and the OpenAI For Startups program that appears to offer thousands (or even tens of thousands) of dollars of tokens to startups. While I don’t have proof, I would bet that OpenAI likely includes these free tokens in its revenues and then counts them as part of its billions of dollars of sales and marketing spend. I also think that revenue growth is a little too convenient, accelerating only to match Anthropic, which recently “hit” $30 billion in annualized revenue under suspicious circumstances. I can only imagine OpenAI will soon announce that it’s actually hit $35 billion in annualized revenue, or perhaps $40 billion in annualized revenue, and if that happens, you know that OpenAI is just making shit up. Regardless, even if OpenAI is actually making $2 billion a month in revenue, it’s likely losing anywhere from $4 billion to $10 billion to make that revenue. Per my own reporting from last year, OpenAI spent $8.67 billion on inference to make $4.329 billion in revenue, and that’s not including training costs that I was unable to dig up — and those numbers were before OpenAI spent tens of millions of dollars in inference costs propping up its doomed Sora video generation product, or launched its Codex coding environment. In simpler terms, OpenAI’s costs have likely accelerated dramatically with its supposed revenue growth. And all of this is happening before OpenAI has to spend the majority of its capital. Oracle has, per my sources in Abilene, only managed to successfully build and generate revenue from two buildings out of the eight that are meant to be done by the end of the year, which means that OpenAI is only paying a small fraction of the final costs of one Stargate data center. Its $138 billion deal with Amazon Web Services is only in its early stages, and as I explained a few months ago in the Hater’s Guide To Microsoft, Redmond’s Remaining Performance Obligations that it expects to make revenue from in the next 12 months have remained flat for multiple quarters, meaning that OpenAI’s supposed purchase of “an incremental $250 billion in Azure compute” is yet to commence. In practice, this means that OpenAI’s expenses are likely to massively increase in the coming months. 
And while the “$122 billion” funding round it raised — with $35 billion of it contingent on either AGI or going public (Amazon), and $60 billion of it paid in tranches by SoftBank and NVIDIA — may seem like a lot, keep in mind that OpenAI had received $22.5 billion from SoftBank on December 31 2025, a little under four months ago. This suggests that either OpenAI is running out of capital, or has significant up-front commitments it needs to fulfil, requiring massive amounts of cash to be sent to Amazon, Microsoft, CoreWeave (which it pays on net 360 terms) and Oracle. And if I’m honest, I think the entire goal of the funding round was to plug OpenAI’s leaky finances long enough to take it public, against the advice of CFO Sarah Friar. One under-discussed part of Farrow and Marantz’s piece was a quote about OpenAI’s overall finances, emphasis mine: As I wrote up earlier in the week, OpenAI CFO Sarah Friar does not believe, per The Information, that OpenAI is ready to go public, and is concerned about both revenue growth slowing and OpenAI’s ability to pay its bills: To make matters worse, Friar also no longer reports to Altman — and god is it strange that the CFO doesn’t report to the CEO! — and it’s actually unclear who it is she reports to at all, as the person she currently reports to, Fiji Simo, has taken an indeterminately long medical leave of absence. Friar has also, per The Information, been left out of conversations around financial planning for data center capacity. These are the big, flashing warning signs of a company with serious financial and accounting issues, run by Sam Altman, a CEO with a vastly-documented pattern of lies and deceit. Altman is sidelining his CFO, rushing the company to go public so that his investors can cash out and the larger con of OpenAI can be dumped onto public investors. And beneath the surface, the raw economics of OpenAI do not make sense. You’ll notice I haven’t talked much about OpenAI’s products yet, and that’s because I do not believe they can exist without venture capital funding them and the customers that buy them. These products only have market share as long as other parties continue to build capacity or throw money into the furnace. To explain: While OpenAI is not systemically necessary, the continued enabling and normalization of its egregious and impossible promises has created an existential threat to multiple parties named above. Its continued existence requires more money than anybody has ever raised for a company — private or public — and in the event it’s allowed to go public, I believe that both retail investors and large equity investors like SoftBank will be left holding the bag. OpenAI has a fundamental lack of focus as a business, despite how many articles have claimed over the last year that it’s working on a “SuperApp” and has some sort of renewed plan to take on whoever it is that OpenAI perceives as the competition in any given calendar month. Everything OpenAI does is a reaction to somebody else. Its Atlas browser was a response to Perplexity’s Comet browser, its first (of multiple!) Code Reds in 2025 was a reaction to Google’s Gemini 3, and its rapid deployment of its Codex model and platform was to compete with Anthropic’s Claude Code. I’ve read about this company and the surrounding industry for hours a day for several years, and I can’t think of a single product that OpenAI has launched first. 
Even its video-generating social network app Sora was beaten to market by five days by Meta’s putrid and irrelevant “Vibes.” Actually, that’s not true. OpenAI did have one original idea in 2025 — the launch of GPT-5, a much-anticipated new model launch that included a “model router” to make it “more efficient,” except it turned out that it boofed on benchmarks and that the model router actually made it (as I reported last year) more expensive, which led to the router being retired in December 2025. I tend to be pretty light-hearted in what I write, but please take me seriously when I say I have genuine concerns about the dangers posed by OpenAI. I believe that OpenAI is an incredibly risky entity, not due to the power of its models or its underlying assets, but due to Sam Altman’s ability to con people and find others who will con in his stead. Those responsible for rooting out con artists — regulators, investors, and the media — have not simply failed, but actively assisted Altman in this con. Here’re the crucial elements of the con: Sam Altman is a dull, mediocre man who loves money and power. He appears to be superficially charming, but his actual skill is ingratiating himself with others and having them owe him favors, or feel somehow indebted to him otherwise. He remembers people’s names and where he met them, and is very good at emailing people, writing checks, or finding reasons for somebody else to write a check. He is not technical — he can barely code and misunderstands basic machine learning (to quote Futurism) — but is very good at making the noises that people want to hear, be they big scary statements that confirm their biases or massive promises of unlimited revenue that don’t really make any rational sense. While OpenAI might have started on noble terms, it has since morphed into a massive con led by the Valley’s most-notable con artist. I realize that those who like AI might find this offensive, but what else do you call somebody who makes promises they can’t keep ($300 billion to Oracle, $200 billion of revenue by 2030), spreads nonsensical financials (promises to spend $600 billion in compute), makes announcements of deals that don’t exist (see: NVIDIA’s $100 billion funding and the entire Stargate project), and speaks in hyperbolic terms to pump the value of his stock (such as basically every time he talks about Superintelligence)? Altman has taken advantage of a tech and business media that wants to see him win, a market divorced from true fundamentals, desperate venture capitalists at the end of their rope, hyperscalers that have run out of hypergrowth ideas, and multiple large companies like Oracle and SoftBank that are run by people who can’t do maths. OpenAI is a pseudo-company that can only exist with infinite resources, its software sold on lies, its infrastructure built and paid for by other parties, and its entire existence fueled by compounding layers of leverage and risk. OpenAI has never made sense, and was only rationalized through a network of co-conspirators. OpenAI has never had a path to profitability, and never had a product that was worthy of the actual cost of selling it. The ascension of this company has only been possible as part of an exploitation of ignorance and desperation, and its collapse will be dangerous for the entire tech industry. Today I’ll explain in great detail the sheer scale of Sam Altman’s con, how it was exacted, the danger it poses to its associated parties, and how it might eventually collapse. 
This is the Hater’s Guide To OpenAI, or Sam Altman, Freed.
OpenAI’s ChatGPT Subscriptions are, like every LLM product, deeply unprofitable, which means that OpenAI needs constant funding to keep providing them. I have found users of OpenAI Codex who have been able to burn between $1,000 and $2,000 in the space of a week on a $200-a-month subscription, and OpenAI just reset rate limits for the second time in a month. This isn’t a real business.
OpenAI’s API customers (the ones paying for access to its models) are, for the most part, venture-backed startups providing services like Cursor and Perplexity that are powered by these models. These startups are all incredibly unprofitable, requiring them to raise hundreds of millions of dollars every few months (as is the case with Harvey, Lovable, and many other big-name AI firms), which means that a large chunk — some estimate around 27% of its revenue — is dependent on customers that stop existing the moment that venture capital slows down.
OpenAI’s infrastructure partners like CoreWeave and Oracle are taking on anywhere from a few billion to over a hundred billion dollars’ worth of debt to build data centers for OpenAI, putting both companies in material jeopardy in the event of OpenAI’s failure to pay or overall collapse. 67% of CoreWeave’s 2025 revenue came from Microsoft renting capacity to rent to OpenAI, and $22 billion (32%) of CoreWeave’s $66.8 billion in revenue backlog comes from OpenAI directly, a backlog which requires it to build more capacity to fill. Oracle took on $38 billion in debt in 2025, and is in the process of raising another $50 billion as it lays off thousands of people, with said debt’s only purpose being to build data center capacity for OpenAI.
OpenAI’s lead investor SoftBank is putting itself in dire straits to fund the company, with over $60 billion invested so far, existentially tying SoftBank’s overall financial health to both OpenAI’s stock price and SoftBank’s ability to continue paying (or refinancing) its loans. SoftBank took on a year-long $15 billion bridge loan in 2025, had to sell its entire stake in NVIDIA, and expanded its ARM-stock-backed margin loan to over $11 billion to give OpenAI $30 billion in 2025, and then took on another $40 billion bridge loan a few weeks ago to fund the $30 billion it promised for OpenAI’s latest funding round.
Creating a halo of uncertainty around the actual efficacies of LLMs, to the point that a cult of personality grew around a technology that obfuscated its actual outcomes and efficacies to the point that it could be sold based on what it might do rather than what it actually does.
Creating a halo of “genius” around Altman himself, aided by constant and vague threats of human destruction with the suggestion that only Altman could solve them.
Normalizing the idea that it’s both necessary and important to let a company burn billions of dollars.
Normalizing the idea that it’s okay that a company has perpetual losses, and perpetuating the idea that these losses are necessary for innovation to continue at large.

0 views

Moving my mobile numbers to VoIP

For the last year or so I’ve been running three eSIMs on my iPhone: personal, work, and a data-only travel SIM that swaps in whenever I’m abroad. iOS only lets two eSIMs be active at any one time, which meant a small but constant dance of enabling and disabling profiles depending on what I was doing that day. I’ve now ported both my personal and work mobile numbers to VoIP, and the eSIM juggling is gone. The nudge came from Michael Bazzell’s Extreme Privacy: What It Takes to Disappear , which recommends moving your “real” numbers off a carrier and onto a VoIP provider as part of a broader privacy strategy. For Bazzell the point is untangling your identity from the mobile network. For me it’s almost entirely convenience. Whichever phone I pick up in the morning rings for both numbers, and the data SIM can sit wherever it’s most useful without me having to decide which mobile identity to sacrifice for the day. I’m using Andrews & Arnold (AAISP) as the VoIP provider. I’ve used them for broadband on and off for years and they remain one of the few ISPs I’d actively recommend: technically competent, refreshingly honest, and perfectly happy for you to do slightly unusual things with your service. Porting two mobile numbers to them was painless. For the client I’m using Groundwire from Acrobits. I’ve been through plenty of SIP clients over the years and most of them are either ugly, flaky on push, or weirdly hostile to the idea of multiple accounts. Groundwire is the first one that’s felt like a proper phone replacement. Push notifications actually work, call quality is good, and it handles multiple accounts without any drama. AAISP exposes SMS through a plain-text HTTP API, and Groundwire expects messages to be delivered via its own web service hooks in XML. The two formats don’t match, so out of the box sending and receiving text messages just didn’t work: calls were fine, but SMS was effectively dead. I ended up writing a small PHP proxy that sits between them. Outbound messages go from Groundwire into the proxy, get reshaped, and hit the AAISP API. Inbound messages arrive via an AAISP webhook, get stored in SQLite, and are picked up the next time Groundwire polls. It also pokes Acrobits’ push service when something arrives, so iOS actually surfaces the notification rather than silently waiting on the next poll cycle. It’s called aaisp-sms-proxy and it’s on GitHub if anyone else is in the same boat. AAISP credentials stay server-side, each number gets its own token so they’re properly isolated, and there’s a tiny bit of rate limiting and log sanitisation in there because it’s on the public internet. I use it every day now and mostly forget it’s there. The other reason this matters is that I’m planning to move my daily driver to GrapheneOS . If your numbers live on a physical or embedded SIM, switching devices is a faff: SIM swaps, eSIM transfers, carrier-app dances, the lot. With VoIP the numbers live in an account, so I install Groundwire on whichever phone I’m carrying and it just rings. Pixel one day, iPhone the next, both at the same time if I want. The one remaining puzzle is Signal. Signal still treats the phone as the primary device and the desktop clients as tethered secondaries, which is fine for a single-phone setup but doesn’t quite fit mine. I want something closer to proper multi-device: two phones, both independently functional, one potentially offline for weeks at a time without losing messages when it comes back online. 
That isn’t how Signal is designed to work today, so figuring out a sensible workaround is next on the list. If you’re reading Bazzell and coming at this from a privacy angle, AAISP isn’t the answer. They’re a UK telco and they verify you like any other provider, so the number is still firmly tied to your legal identity. Moving off a SIM buys you some separation from the mobile network itself, but not the kind of disappearance the book describes. For that you’d want a provider willing to sell you a number without identity checks, and AAISP explicitly doesn’t. My goal was never to vanish, just to stop playing eSIM Tetris every time I landed in another country. The juggling is gone.
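For anyone tempted to build something similar, here is a very rough Go sketch of the proxy shape described above. To be clear about assumptions: the endpoint paths, field names, and payload formats below are hypothetical, the real AAISP API and Groundwire hooks use their own formats (plain-text HTTP and XML respectively), and an in-memory queue stands in for the SQLite store that aaisp-sms-proxy actually uses.

```go
// Illustrative sketch only: endpoint paths, field names, and payload shapes
// are hypothetical, not the real AAISP or Groundwire formats, and an
// in-memory queue stands in for the SQLite store used by the real proxy.
package main

import (
	"encoding/json"
	"log"
	"net/http"
	"sync"
)

type sms struct {
	From string `json:"from"`
	Body string `json:"body"`
}

var (
	mu    sync.Mutex
	queue []sms // inbound messages waiting for the next client poll
)

func main() {
	// Carrier-style webhook: an inbound SMS arrives as a form POST and is queued.
	http.HandleFunc("/inbound", func(w http.ResponseWriter, r *http.Request) {
		mu.Lock()
		queue = append(queue, sms{From: r.FormValue("from"), Body: r.FormValue("message")})
		mu.Unlock()
		// A real proxy would also poke the push service here so iOS
		// surfaces a notification instead of waiting for the next poll.
		w.WriteHeader(http.StatusOK)
	})

	// Client-style poll: hand over everything queued since the last poll.
	http.HandleFunc("/poll", func(w http.ResponseWriter, r *http.Request) {
		mu.Lock()
		pending := queue
		queue = nil
		mu.Unlock()
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(pending) // the real client expects XML; JSON keeps the sketch short
	})

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

The real project also forwards outbound messages to AAISP, pokes Acrobits' push service on arrival, and adds per-number tokens, rate limiting and log sanitisation, none of which is shown here.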

0 views
Kev Quirk 2 days ago

I've Completed 100 Days To Offload (Again)

I just published my motorbike servicing rant and went over to my Pure Blog Dashboard to take a look at some stats, when I noticed this: 101 posts in the last year, which means I've completed 100 Days to Offload for a second time! 🎉 The whole point of 100 Days to Offload is to challenge you to publish 100 posts on your personal blog in a year. Mission accomplished! If you're interested in taking part in the challenge too, make sure you get yourself added to the hall of fame once you've completed it. Thanks for reading this post via RSS. RSS is ace, and so are you. ❤️ You can reply to this post by email, or leave a comment.

0 views
Kev Quirk 2 days ago

Motorbike Servicing Rant

So my BMW S1000XR is now a year old and it's going in for its first "full service". It had its "break in" service after a few weeks of ownership, but that's just an oil change. New bikes come with a very thin oil inside the engine that's used to help with the break-in process. After 500 or so miles, this needs to be swapped out for proper oil. I contacted the dealership for a price and some potential dates, and this is the breakdown they came back with:
Labour - £150
Oil disposal - £20
Oil - £80.60
Sump plug washer - £0.96
Oil filter - £17.29
Brake fluid - £11.92
Tax @ 20% - £56.15
Total: £336.92 (~$455)
So nearly £350 for what's effectively an hour's work and around £50 in parts. I'm mechanically minded and could easily do this at home, but like most modern vehicles, my BMW doesn't come with a service book that is stamped. These days the service history is all stored centrally with BMW, which means that the service has to be carried out by them. There is a misconception that home servicing will void the warranty of a new bike. It won't, as long as the person doing the service uses OEM parts and has done it to the manufacturer's specification - which I always do. But I bought this bike from BMW, so if I hand it back after 3 years with a generic eBay service book that's been stamped by me, even though it's been done to a high standard, it will affect the trade-in value. Ipso facto, they have me by the balls. I get it, margins are small and this is how dealerships make money, but I wish they would make it accessible for mechanically minded people, like me, to service at home. Thanks for reading this post via RSS. RSS is ace, and so are you. ❤️ You can reply to this post by email, or leave a comment.

0 views
David Bushell 2 days ago

No-stack web development

This year I’ve been asked more than ever before what web development “stack” I use. I always respond: none. We shouldn’t have a go-to stack! Let me explain why. My understanding is that a “stack” is a choice of software used to build a website. That includes language and tooling, libraries and frameworks, and heaven forbid: subscription services. Text editors aren’t always considered part of the stack but integration is a major factor. Web dev stacks often manifest as used to install hundreds of megs of JavaScript, Blazing Fast™ Rust binaries, and never-ending supply chain attacks. A stack is also technical debt, non-transferable knowledge, accelerated obsolescence, and vendor lock-in. That means fragility and overall unnecessary complication. Popular stacks inevitably turn into cargo cults that build in spite of the web, not for it. Let’s break that down. If you have a go-to stack, you’ve prescribed a solution before you’ve diagnosed a problem. You’ve automatically opted in to technical baggage that you must carry the entire project. Project doesn’t fit the stack? Tough; shoehorn it to fit. Stacks are opinionated by design. To facilitate their opinions, they abstract away from web fundamentals. It takes all of five minutes for a tech-savvy person to learn JSON. It takes far, far longer to learn Webpack JSON. The latter becomes useless knowledge once you’ve moved on to better things. Brain space is expensive. Other standards like CSS are never truly mastered but learning an abstraction like Tailwind will severely limit your understanding. Stacks are a collection of move-fast-and-break churnware; fleeting software that updates with incompatible changes, or deprecates entirely in favour of yet another Rust refactor. A basic HTML document written 20 years ago remains compatible today. A codebase built upon a stack 20 months ago might refuse to play. The cost of re-stacking is usually unbearable. Stack-as-a-service is the endgame where websites become hopelessly trapped. Now you’re paying for a service that can’t fix errors. You’ve sacrificed long-term stability and freedom for “developer experience”. I’m not saying you should code artisanal organic free-range websites. I’m saying be aware of the true costs associated with a stack. Don’t prescribe a solution before you’ve diagnosed a problem. Choose the right tool for each job only once the impact is known. Satisfy specific goals of the website, not temporary development goals. Don’t ask a developer what their stack is without asking what problem they’re solving. Be wary of those who promote or mandate a default stack. Be doubtful of those selling a stack. When you develop for a stack, you risk trading the stability of the open web platform, that is to say: decades of broad backwards compatibility, for GitHub’s flavour of the month. The web platform does not require build toolchains. Always default to, and regress to, the fundamentals of CSS, HTML, and JavaScript. Those core standards are the web stack. Yes, you’ll probably benefit from more tools. Choose them wisely. Good tools are intuitive by being based on standards; they can be introduced and replaced with minimal pain. My only absolute advice: do not continue with legacy frameworks like React. If that triggers an emotional reaction: you need a stack intervention! It may be difficult to accept but Facebook never was your stack; it’s time to move on. Use the tool, don’t become the tool. Edit: forgot to say: for personal projects, the gloves are off. Go nuts! Be the churn. 
Learn new tools and even code your own stack. If you’re the sole maintainer, the freedom to make your own mistakes can be a learning exercise in itself. Thanks for reading! Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.

0 views
Daniel Mangum 2 days ago

PSA Crypto: The P is for Portability

Arm’s Platform Security Architecture (PSA) was released in 2017, but it was two years until the first beta release of the PSA Cryptography API in 2019, and another year until the 1.0 specification in 2020. Aimed at securing connected devices and originally targeting only Arm-based systems, PSA has evolved with the donation of the PSA Certified program to GlobalPlatform in 2025, allowing non-Arm devices, such as popular RISC-V microcontrollers (MCUs), to achieve certification.

0 views

watgo - a WebAssembly Toolkit for Go

I'm happy to announce the general availability of watgo - the WebAssembly Toolkit for Go. This project is similar to wabt (C++) or wasm-tools (Rust), but in pure, zero-dependency Go. watgo comes with a CLI and a Go API to parse WAT (WebAssembly Text), validate it, and encode it into WASM binaries; it also supports decoding WASM from its binary format. At the center of it all is wasmir - a semantic representation of a WebAssembly module that users can examine (and manipulate). These are the core functionalities provided by watgo:
Parse: a parser from WAT to wasmir
Validate: uses the official WebAssembly validation semantics to check that the module is well formed and safe
Encode: emits wasmir into WASM binary representation
Decode: reads WASM binary representation into wasmir
watgo comes with a CLI, which you can install with a single command. The CLI aims to be compatible with wasm-tools [1], and I've already switched my wasm-wat-samples projects to use it; e.g. a command that parses a WAT file, validates it, and encodes it into binary format. wasmir semantically represents a WASM module with an API that's easy to work with, so you can parse a simple WAT program and do some analysis on the result. One important note: the WAT format supports several syntactic niceties that are flattened / canonicalized when lowered to wasmir. For example, all folded instructions are lowered to unfolded ones (linear form), function & type names are resolved to numeric indices, etc. This matches the validation and execution semantics of WASM and its binary representation. These syntactic details are present in watgo in the textformat package (which parses WAT into an AST) and are removed when this is lowered to wasmir. The textformat package is kept internal at this time, but in the future I may consider exposing it publicly - if there's interest. Even though it's still early days for watgo, I'm reasonably confident in its correctness due to a strategy of very heavy testing right from the start. WebAssembly comes with a large official test suite, which is perfect for end-to-end testing of new implementations. The core test suite includes almost 200K lines of WAT files that carry several modules with expected execution semantics and a variety of error scenarios exercised. These live in specially designed .wast files and leverage a custom spec interpreter. watgo hijacks this approach by using the official test suite for its own testing. A custom harness parses .wast files and uses watgo to convert the WAT in them to binary WASM, which is then executed by Node.js [2]; this harness is a significant effort in itself, but it's very much worth it - the result is excellent testing coverage. watgo passes the entire WASM spec core test suite. Similarly, we leverage wabt's interp test suite, which also includes end-to-end tests, using a simpler Node-based harness to test them against watgo. Finally, I maintain a collection of realistic program samples written in WAT in the wasm-wat-samples repository; these are also used by watgo to test itself.
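Without guessing at watgo's actual Go API (which isn't shown here), here is a minimal, watgo-independent Go sketch of the one fixed piece of the binary format: every .wasm file an encoder emits must begin with the 8-byte preamble, the \0asm magic followed by version 1 in little-endian.

```go
// Minimal, watgo-independent sketch: every WASM binary begins with the
// 8-byte preamble "\0asm" (magic) followed by the u32 version 1, little-endian.
package main

import (
	"bytes"
	"fmt"
	"os"
)

var preamble = []byte{0x00, 0x61, 0x73, 0x6D, 0x01, 0x00, 0x00, 0x00}

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: wasmcheck <file.wasm>")
		os.Exit(1)
	}
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if len(data) < len(preamble) || !bytes.Equal(data[:len(preamble)], preamble) {
		fmt.Println("not a WASM v1 binary")
		return
	}
	// Everything after the preamble is a sequence of sections (types,
	// functions, code, ...), which is what a decoder lowers into an IR
	// such as wasmir.
	fmt.Println("valid WASM preamble; sections follow")
}
```

Run it against any .wasm file; everything after those eight bytes is the section sequence that a decoder turns into an in-memory representation.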

0 views

How I use org-roam

While Org-mode is fantastic in its core functionality, there is a lovely little extension that creates a way to build a wiki for all personal knowledge, ideas, writing, work, and so much more: org-roam. A “clone” of Roam Research, if you are familiar with logseq or obsidian, this will have you feeling right at home (albeit, actually at home inside emacs). It has taken some time to figure out how I wanted to use org-roam, but I think I have cracked the code. I will discuss how I’ve been capturing, filing away, and taking action on everything that pops into my head. As a small overview, Org-Roam gives you the ability to create notes (big whoop). The power comes in the backlink to any previous note that may be in your system, similar to how Wikipedia links between articles. As I write in any org-roam document (node), I see suggestions of past notes I have taken, giving the option to immediately create a link back to them. This is fine on its own, but you start to see inter-linking between ideas, which becomes massively helpful for research and for creating new connections between pieces of information that one would generally be blind to in other methods of note taking. Org-roam uses an SQLite database (which some critique), as well as an ID system in which everything (files, org headers) has a unique ID. This ID is what forms the link between our notes. Let’s discuss how I’m using this. As with my org-mode flow, the goal is to not only capture, but to reduce friction of the capture to almost nothing. I have capture templates for the following files in my general org-mode file:
inbox.org: Actionable items with a TODO - these are then filed away to projects or kept in this file until acted upon.
calendar.org: Scheduled or deadlined items
bookmarks.org: web bookmarks
contacts.org: every contact I have and reach out to
notes.org: but this is being replaced, as we will see
What I was lacking was a way to integrate with org-roam and create backlinks across the notes I was taking on everything. Enter the new capture system. I use a capture command (mapped to a keybind) to hit a daily org-roam file (~/org/roam/daily/2026-04-10.org for example) which is my capture file for everything for the day. I write everything in this file. I mean everything:
text messages
emails (if not already sent via mu4e)
notes to self
LLM prompts
websites I visit
journal entries
this very post, that will then become a blog post in my writing project
code snippets
things I want to remember
I then take 5 minutes at the end of every day and file away these items into org-roam nodes if they are “seeds” (in the digital garden sense), actionable items, things I want to look into at some point, or just leave them in the daily file to be archived for posterity. Whenever I want to write something on the computer, emacs is the place I do so, in which I have autocomplete, spelling check, and macros right at my fingertips. I hit a keybind that universally reaches out to emacs and opens the org-roam-dailies-capture-today buffer if I am not on workspace 1 (emacs) and capture the thought/writing/email/text/content, and move on with my day. What this also allows is the use of my capture system via termux on my phone. I simply leave my ~/org/roam/daily/date.org file open every morning in termux running in emacsclient on my workstation, and go about my day. This means all notes live in one place, I don’t generally have to go into “note to self” in signal or xmpp and move things around, and org-roam works out of the box for backlinking and clean up. Is it ideal? No, but it is still better than the various mobile orgmode apps I have tried. I treat the phone just as a capture node; all organizing and refiling happens on my bigger screen at end of day. The major benefit of this methodology is that we have content which is greppable forevermore. If I write, it is written in emacs. Anything more than a sentence or two is in my daily file. I don’t care what it is, I can grep it for all time, version control it, and it is ready to expand upon in the future. 
By the end of the day, I may have dozens of captures in my daily file. I sit down, open the file up, and review. If the item is actionable or has a date/deadline associated with it, then it is filed to inbox.org/calendar.org. If it is an idea that is a seed of something larger, it is filed into its own org-roam node that can then grow on its own. If something needs to be filed under an existing roam-node, that occurs here as well, and backlinks organically take shape as I write. Finally, if the item is none of these things, it just lives in the daily file as an archive that can be revisited later with ripgrep as stated above. I have a keybind bound project-wide for this, which I use frequently for finding anything. Refiling is simply accomplished with a single command, which will give you files and org headings under which to refile everything. As we grow our notes database, we will start to see that we have autosuggestions offered via cape and corfu, allowing a direct link to previous notes’ IDs, which are portable across the filesystem, so you can move files around to logically work in a hierarchy if you so choose. The standard advice is to keep a flat file system in which all notes are in one directory, but I like organization too much and have created nested directories for this. These links and IDs are handled via a function that can be set to fire automatically on file changes. Oh the fabled “neuronal link graph” that was popularised by Obsidian - how could we forget about that? The graph view opens a D3 rendered graph that looks nice, but I have not really found use for it other than pretty screenshots to show how “deep(ly autistic)” I am. I find this to be the easiest way to maintain a note taking system that actually grows with the author, while staying sane and keeping everything organized. The notes that we create allow us to understand deeply, and to make connections that are otherwise missed. As in my discussion with Prot, writing everything down has greatly impacted my thinking and allowed growth in areas that are deeply meaningful. Org-roam (and holistically org itself) is once again, just text files. So, you can very easily take any .org file and back it up and hold onto it for all time, as you will never have any proprietary lock in. The database is just an SQLite database, which is the most portable and easily malleable database in existence. The two interlink to give you peace of mind were you ever to leave emacs (haha, you won’t). If you don’t want the “heaviness” of org-roam’s database structure, you could use Prot’s denote package that is a more simplified (yet still highly powerful) method. I just like the autosuggestions and speed of roam, but your mileage may vary. So there you have it, the way that I am using org-roam to create a mind map/second brain and keep notes on everything I come across on a daily basis. How are you using org-roam, or do you have a note taking system you swear by? Post below or send me an email! As always, God bless, and until next time. If you enjoyed this post, consider Supporting my work, Checking out my book, Working with me, or sending me an Email to tell me what you think.

0 views
Evan Hahn 2 days ago

In defense of GitHub's poor uptime

In short: GitHub’s downtime is bad, but uptime numbers can be misleading. It’s not as bad as it looks; more like a D than an F. 99.99% uptime, or “four nines”, is a common industry standard. Four nines of uptime is equivalent to 1.008 minutes of downtime per week. GitHub is not meeting that, and it’s frustrating. Even though they’re owned by Microsoft, one of the richest companies on earth, they aren’t clearing this bar. Here are some things people are saying:
“GitHub appears to be struggling with measly three nines availability”
“World’s First Enterprise Solution With Zero Nines Uptime”
“Sure, they may have made the uptime worse, but remember what we got in exchange – when it’s up, the UI is slower and buggier.”
According to “The Missing GitHub Status Page”, which reports historical uptime better than GitHub’s official source, they’ve had 89.43% uptime over the last 90 days. That’s zero nines of uptime. That implies more than 2.5 hours of downtime every day! I dislike GitHub and Microsoft, so I shouldn’t be coming to their defense, but I think this characterization is unfair. I’m no mathematician, but let’s do a little math. Let’s say your enterprise has two services: Service A and Service B. Over the last 10 days:
Service A had one day of downtime. That means it has 90% uptime.
Service B had two days of downtime on different days. That means it has 80% uptime.
3 of the last 10 days had outages. That’s 70% uptime total. (That’s how the Missing GitHub Status Page calculates it; there’s a small worked sketch of this arithmetic at the end of this post.) GitHub’s status page lists ten services: core Git operations, webhooks, Issues, and more. Sometimes they’re down simultaneously, but usually not. If all ten of those services have 99% uptime and outages don’t overlap, it’d look like GitHub had 90% uptime because some part of GitHub is out 10% of the time. That’s much worse! The numbers look better if outages happen at the same time. For example, if Service A and Service B go down on Saturday and Sunday, you’d have 80% uptime overall instead of 70%. Compared to the previous scenario, Service A is down twice as long, but the uptime number looks better. A downstream effect of this calculation is that your uptime numbers look worse if your services are well-isolated. I think it’s good that Service A doesn’t take down Service B! I think it’s good that a GitHub Packages outage doesn’t take down GitHub Issues! But if all you see is one aggregate uptime number, you might miss that. Things look rosier when you look at features individually. Over the last 90 days, core Git operations have had 98.98% uptime, or about 22 hours where things were broken. That’s still bad, but not as bad as some people are saying. D tier, not F tier. Also, an incident doesn’t mean everything is broken. For example, GitHub recently had an issue where things were slow for users on the west coast of the United States. Not good, but not “everything is broken for all users”. Again, the number doesn’t tell the whole story. I still think GitHub’s uptime is unacceptably low, especially because they’re owned by Microsoft, but I don’t think we’re being honest when we say that GitHub has “zero nines” of availability. To me, it’s more like: they have a bunch of unstable services which cumulatively have horrible uptime, but individually have not-very-good uptime. There are better reasons to dislike these companies.
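Here is the worked sketch promised above: a few lines of Go (illustrative numbers only, taken from the Service A/Service B example) showing how two decently isolated services end up with a worse-looking aggregate than either has on its own.

```go
package main

import "fmt"

func main() {
	// Numbers from the Service A / Service B example above:
	// a 10-day window, Service A down 1 day, Service B down 2 different days.
	days := 10.0
	downA := 1.0 // days Service A was down
	downB := 2.0 // days Service B was down, not overlapping with A

	uptimeA := (days - downA) / days // 0.90
	uptimeB := (days - downB) / days // 0.80

	// "Any service down counts as downtime" aggregation, as the
	// Missing GitHub Status Page does it: 3 distinct bad days out of 10.
	aggregate := (days - (downA + downB)) / days // 0.70

	fmt.Printf("Service A: %.0f%%  Service B: %.0f%%  Aggregate: %.0f%%\n",
		uptimeA*100, uptimeB*100, aggregate*100)

	// If both services went down on the same two days instead (Service A now
	// down twice as long), only 2 of 10 days are bad and the aggregate reads 80%.
	overlapped := (days - 2.0) / days
	fmt.Printf("Same outages, overlapping: %.0f%%\n", overlapped*100)
}
```

The second figure is the whole point: more total downtime on Service A, yet a better-looking aggregate, which is why one blended number penalizes well-isolated services.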

0 views

Has Mythos just broken the deal that kept the internet safe?

For nearly 20 years the deal has been simple: you click a link, arbitrary code runs on your device, and a stack of sandboxes keeps that code from doing anything nasty. Browser sandboxes for untrusted JavaScript, VM sandboxes for multi-tenant cloud, ad iframes so banner creatives can't take over your phone or laptop - the modern internet is built on the assumption that those sandboxes hold. Anthropic just shipped a research preview that generates working exploits for one of them 72.4% of the time, up from under 1% a few months ago. That deal might be breaking. From what I've read Mythos is a very large model. Rumours have pointed to it being similar in size to the short-lived (and very underwhelming) GPT-4.5. As such I'm with a lot of commentators in thinking that a primary reason this hasn't been rolled out further is compute. Anthropic is probably the most compute-starved major AI lab right now and I strongly suspect they do not have the compute to roll this out more broadly even if they wanted to. From leaked pricing, it's expensive as well - at $125/MTok output (5x more than Opus, which is itself the most expensive model out there). One thing that has really been overlooked with all the focus on frontier scale models is how quickly improvements in the huge models are being achieved on far smaller models. I've spent a lot of time with the Gemma 4 open-weights model, and it is incredibly impressive for a model that is ~50x smaller than the frontier models. So I have no doubt that whatever capabilities Mythos has will relatively quickly be available in smaller, and thus easier to serve, models. And even if Mythos' huge size somehow is intrinsic to the abilities it has (I very much doubt this, given current progress in scaling smaller models), it's only a matter of time before newer chips [1] are able to serve it en masse. It's important to look to where the puck is going. As I've written before, LLMs in my opinion pose an extremely serious cybersecurity risk. Fundamentally we are seeing a radical change in how easy it is to find (and thus exploit) serious flaws and bugs in software for nefarious purposes. To back up a step, it's important to understand how modern cybersecurity is currently achieved. One of the most important concepts is that of a sandbox. Nearly every electronic device you touch day to day has one (or many) layers of these to protect the system. In short, a sandbox is a so-called 'virtualised' environment where software can execute on the system, but with limited permissions, segregated from other software, with a very strong boundary that stops the software 'breaking out' of the sandbox. If you're reading this on a modern smartphone, you have at least 3 layers of sandboxing between this page and your phone's operating system. First, your browser has (at least) two levels of sandboxing. One is for the JavaScript execution environment (which runs the interactive code on websites). This is then sandboxed by the browser sandbox, which limits what the site as a whole can do. Finally, iOS or Android then has an app sandbox which limits what the browser as a whole can do. This defence in depth is absolutely fundamental to modern information security, especially allowing users to browse "untrusted" websites with any level of security. For a malicious website to gain control over your device, it needs to chain together multiple vulnerabilities, all at the same time. In reality this is extremely hard to do (and these kinds of chains fetch millions of dollars on the grey market). 
Guess what? According to Anthropic, Mythos Preview successfully generates a working exploit for Firefox's JS shell in 72.4% of trials. Opus 4.6 managed this in under 1% of trials in a previous evaluation. Worth flagging a couple of caveats. The JS shell here is Firefox's standalone SpiderMonkey - so this is escaping the innermost sandbox layer, not the full browser chain (the renderer process and OS app sandbox still sit on top). And it's Anthropic's own benchmark, not an independent one. But even hedging both of those, the trajectory is what matters - we're going from "effectively zero" to "72.4% of the time" in one model generation, on a real-world target rather than a toy CTF. This is pretty terrifying if you understand the implications. If an LLM can find exploits in sandboxes - which are some of the most well secured pieces of software on the planet - then suddenly every website you aimlessly browse through could contain malicious code which can 'escape' the sandbox and theoretically take control of your device - and all the data on your phone could be sent to someone nasty. These attacks are so dangerous because the internet is built around sandboxes being safe. For example, each banner ad your browser loads is loaded in a separate sandboxed environment. This means they can run a huge amount of (mostly) untested code, with everyone relying on the browser sandbox to protect them. If that sandbox falls, then suddenly a malicious ad campaign can take over millions of devices in hours. Equally, sandboxes (and virtualisation) are fundamental to allowing cloud computing to operate at scale. Most servers these days are not running code against the actual server they are on. Instead, AWS et al take the physical hardware and "slice" it up into so-called "virtual" servers, selling each slice to different customers. This allows many more applications to run on a single server - and enables some pretty nice profit margins for the companies involved. This operates on roughly the same model as your phone, with various layers to protect customers from accessing each other's data and (more importantly) from accessing the control plane of AWS. So, we have a very, very big problem if these sandboxes fail, and all fingers point towards this being the case this year. I should tone down the disaster porn slightly - there have been many sandbox escapes before that haven't caused chaos, but I have a strong feeling that this is going to be difficult. And to be clear, when just AWS us-east-1 goes down (which it has done many, many times) it is front page news globally and tends to cause significant disruption to day to day life. This is just one of AWS's data centre zones - if a malicious actor was able to take control of the AWS control plane it's likely they'd be able to take down all regions simultaneously, and it would likely be infinitely harder to restore with a bad actor in charge, as opposed to the internal faults that caused previous outages - which were themselves extremely difficult to recover from in a timely way. Given all this it's understandable that Anthropic are being cautious about releasing this in the wild. The issue though, is that the cat is out of the bag. Even if Anthropic pulled a Miles Dyson and lowered their model code into a pit of molten lava, someone else is going to scale an RL model and release it. The incentives are far, far too high and the prisoner's dilemma strikes again. 
The current status quo seems to be that these next-generation models will be released to a select group of cybersecurity professionals and related organisations, so they can fix things as much as possible to give them a head start. Perhaps this is the best that can be done, but this seems to me to be a repeat of the famous "obscurity is not security" approach which has become a meme in itself in the information security world. It also seems far-fetched to me that the organisations who do have access are going to find even most of the critical problems in a limited time window. And that brings me to my final point. While Anthropic are providing $100m of credit and $4m of 'direct cash donations' to open source projects, it's not all open source projects. There are a lot of open source projects that everyone relies on without realising. While the obvious ones like the Linux kernel are getting this "access" ahead of time, there are literally millions of pieces of open source software (never mind commercial software) that are essential to the operation of a substantial minority of systems. I'm not quite sure where the plan leaves these ones. Perhaps this is just another round in the cat and mouse cycle that reaches a mostly stable equilibrium, and at worst we have some short term disruption. But if I step back and look at how fast the industry has moved over the past few years - I'm not so sure. And one thing I think is for certain: it looks like we do now have the fabled superhuman ability in at least one domain. I don't think it's the last.
[1] Albeit at the cost of adding yet more pressure onto the compute crunch the AI industry is experiencing.

0 views